prompt | response |
---|---|
<p>I've got a <code>Deployment</code> which has <code>n</code> nodes, and I have a service that exposes port <code>4369</code>. I want to connect to one of those nodes via IEX. I am using <code>Minikube</code> for my local development <code>Kubernetes</code> cluster, which binds to some IP, and I can access its dashboard.</p>
<p>I tried calling <code>minikube service thatServiceName</code>, but after a few moments of waiting it exits and <code>does not output</code> the link that it is supposed to give me.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: erlangpl-demo-mnesia
  labels:
    app: erlangpl-demo-mnesia
spec:
  clusterIP: None
  ports:
  - port: 10000
    targetPort: 10000
    name: disterl-mesh-0
  - port: 4369
    targetPort: 4369
    name: epmd
  selector:
    app: erlangpl-demo-mnesia
  type: ClusterIP
</code></pre>
<p>Could anyone let me know what I am missing or what I am doing wrong?</p>
| <p><code>type: ClusterIP</code> with <code>clusterIP: None</code> looks fishy to me. I do not think that minikube provides support for that service type.</p>
<p>I would try using <code>type: NodePort</code>, which should expose the service on the minikube IP. </p>
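<p>For illustration, here is a minimal sketch of the same service switched to <code>NodePort</code> (the <code>nodePort</code> value is just a placeholder in the default 30000-32767 range, and only the <code>epmd</code> port is shown):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: erlangpl-demo-mnesia
  labels:
    app: erlangpl-demo-mnesia
spec:
  type: NodePort
  selector:
    app: erlangpl-demo-mnesia
  ports:
  - port: 4369
    targetPort: 4369
    name: epmd
    nodePort: 30369   # placeholder; omit to let Kubernetes pick one
</code></pre>
<p>With a <code>NodePort</code> service, <code>minikube service erlangpl-demo-mnesia --url</code> should be able to print a reachable URL.</p>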
|
<p>I've got a <code>Deployment</code> which has <code>n</code> nodes, and I have a service that exposes port <code>4369</code>. I want to connect to one of those nodes via IEX. I am using <code>Minikube</code> for my local development <code>Kubernetes</code> cluster, which binds to some IP, and I can access its dashboard.</p>
<p>I tried calling <code>minikube service thatServiceName</code>, but after a few moments of waiting it exits and <code>does not output</code> the link that it is supposed to give me.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: erlangpl-demo-mnesia
  labels:
    app: erlangpl-demo-mnesia
spec:
  clusterIP: None
  ports:
  - port: 10000
    targetPort: 10000
    name: disterl-mesh-0
  - port: 4369
    targetPort: 4369
    name: epmd
  selector:
    app: erlangpl-demo-mnesia
  type: ClusterIP
</code></pre>
<p>Could anyone let me know what I am missing or what I am doing wrong?</p>
| <p>You can connect to the pod directly:</p>
<pre><code>kubectl exec -it your-pod-name
</code></pre>
<p>it defaults to bash, which I didn't have, so I had to do:</p>
<pre><code>kubectl exec -it your-pod-name -- /bin/sh
</code></pre>
<p>I hope that helps.</p>
|
<p>Is it possible to turn off/remove/disable the basic auth in GKE that was added by default?</p>
<p>It's possible to authenticate towards the GKE master using a number of ways, <a href="http://kubernetes.io/docs/admin/authentication/" rel="nofollow noreferrer">as listed in the documentation</a>.</p>
<p>When you create a cluster using GKE it creates a username/password for basic authentication to the master. </p>
<p>I want to turn this off to tighten up security (the other authentication methods are significantly better and are used transparently by the tooling AFAIK).</p>
<p>Is it possible? I have searched the <a href="https://github.com/kubernetes/kubernetes/issues" rel="nofollow noreferrer">kubernetes github issues</a> list but not found anyone with the exact same problem (yet).</p>
<p>(The default password is 16 characters and should be OK, but it is not possible to change it without tearing down the entire cluster. I just want to disable basic auth.)</p>
<p>Thanks.</p>
| <p>Yes, you can disable Basic Authentication on cluster creation:</p>
<p><a href="https://i.stack.imgur.com/j88bg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/j88bg.png" alt="enter image description here"></a></p>
|
<p>I am trying to make an HTTPS call in a Docker container running a Go binary. This gives me the following error:</p>
<blockquote>
<p>x509: failed to load system roots and no roots provided</p>
</blockquote>
<p>Having looked this up, it seems the problem is that the BusyBox docker image does not have a root CA certificate. From other answers on StackOverflow it seems that the best approach is to mount the CA root into the <code>/etc/ssl/certs</code> container directory.</p>
<p>To test locally, it makes sense to mount the host machine's root CA certificate. When running in production (I use Google Container Engine), I'm not sure how to specify a root CA certificate. Do I need to create one myself? Or is there an existing cert in GKE that I can reuse?</p>
| <p>There are multiple options:</p>
<p><strong>Share certificates from host</strong></p>
<p>As you pointed out you can share <code>/etc/ssl/certs</code> from the host.</p>
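<p>As a rough sketch of what that host mount could look like in a pod spec (the pod name and image are placeholders, not taken from the question):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: go-app              # placeholder name
spec:
  containers:
  - name: go-app
    image: busybox          # stands in for your Go binary image
    volumeMounts:
    - name: ca-certs
      mountPath: /etc/ssl/certs
      readOnly: true
  volumes:
  - name: ca-certs
    hostPath:
      path: /etc/ssl/certs  # the node's own CA bundle
</code></pre>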
<p><strong>Use busybox with certificates</strong></p>
<p>You can use an image like <code>odise/busybox-curl</code> which already has the certificates installed.</p>
<p><strong>Use docker-compose and shared volumes for this</strong></p>
<p>This is a better approach as it does not require a dependency on the host.</p>
<pre><code>version: '2'
services:
  busybox:
    image: busybox
    command: sleep 1000
    volumes:
      - certificates:/etc/ssl/certs:ro
  certificate_installer:
    image: alpine
    command: sh -c 'apk update && apk add ca-certificates'
    volumes:
      - certificates:/etc/ssl/certs
volumes:
  certificates:
</code></pre>
<p><strong>Build it using multi-stage Dockerfile</strong></p>
<pre><code>FROM alpine as certs
RUN apk update && apk add ca-certificates
FROM busybox
COPY --from=certs /etc/ssl/certs /etc/ssl/certs
</code></pre>
<p>And then build it like a normal Dockerfile:</p>
<pre><code>vagrant@vagrant:~/certs$ docker build -t busyboxcerts .
Sending build context to Docker daemon 49.66kB
Step 1/4 : FROM alpine as certs
---> 4a415e366388
Step 2/4 : RUN apk update && apk add ca-certificates
---> Running in 0059f93b5fc5
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.5/community/x86_64/APKINDEX.tar.gz
v3.5.2-131-g833fa41a4d [http://dl-cdn.alpinelinux.org/alpine/v3.5/main]
v3.5.2-125-g9cb91a548a [http://dl-cdn.alpinelinux.org/alpine/v3.5/community]
OK: 7966 distinct packages available
(1/1) Installing ca-certificates (20161130-r1)
Executing busybox-1.25.1-r0.trigger
Executing ca-certificates-20161130-r1.trigger
OK: 5 MiB in 12 packages
---> 1a84422237e4
Removing intermediate container 0059f93b5fc5
Step 3/4 : FROM busybox
---> efe10ee6727f
Step 4/4 : COPY --from=certs /etc/ssl/certs /etc/ssl/certs
---> af9936f55fc4
Removing intermediate container 1af54c34a5b5
Successfully built af9936f55fc4
Successfully tagged busyboxcerts:latest
vagrant@vagrant:~/certs$ docker run busyboxcerts:latest ls /etc/ssl/certs
02265526.0
024dc131.0
03179a64.0
</code></pre>
<p>For more details on multistage build refer to <a href="https://docs.docker.com/engine/userguide/eng-image/multistage-build/#before-multi-stage-builds" rel="noreferrer">https://docs.docker.com/engine/userguide/eng-image/multistage-build/#before-multi-stage-builds</a></p>
<p>All methods have their own pros and cons. I would personally prefer the last or the second-to-last method.</p>
|
<p>I am running 1.6.2, and am hitting the <code>/apis/batch/v2alpha1/namespaces/<namespace>/cronjobs</code> endpoint, with a valid namespace and a request body of </p>
<pre><code>{
  "body": {
    "apiVersion": "batch/v2alpha1",
    "kind": "CronJob",
    "metadata": {
      "name": "hello"
    },
    "spec": {
      "schedule": "*/1 * * * *",
      "jobTemplate": {
        "spec": {
          "template": {
            "spec": {
              "containers": [
                {
                  "name": "hello",
                  "image": "busybox",
                  "args": [
                    "/bin/sh",
                    "-c",
                    "date; echo Hello from the Kubernetes cluster"
                  ]
                }
              ],
              "restartPolicy": "OnFailure"
            }
          }
        }
      }
    }
  }
}
</code></pre>
<p>I receive a response of </p>
<pre><code>{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "the server could not find the requested resource",
  "reason": "NotFound",
  "details": {},
  "code": 404
}
</code></pre>
<p>According to the documentation, this endpoint should exist. I figure I probably have some setting set incorrectly, but I'm not sure which one and how to correct it. Any help is appreciated.</p>
| <p>The v2alpha1 features are not enabled by default. Make sure you are starting your kube-apiserver with this switch to enable the CronJob resource: <code>--runtime-config=batch/v2alpha1=true</code>.</p>
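<p>As an illustration only: if your kube-apiserver runs as a static pod (as kubeadm sets it up), the flag would be added to its manifest roughly like this (an excerpt; the file path and the other flags depend on your setup):</p>
<pre><code># excerpt of /etc/kubernetes/manifests/kube-apiserver.yaml (path may differ)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    # ... your existing flags ...
    - --runtime-config=batch/v2alpha1=true
</code></pre>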
|
<p>I have a Kubernetes Pod created by a Stateful Set (not sure if that matters). There are two containers in this pod. When one of the two containers fails and I use the <code>get pods</code> command, 1/2 containers are Ready and the Status is "Error." The second container never attempts a restart, and I am unable to destroy the pod except by using the <code>--grace-period=0 --force</code> flags. A typical delete leaves the pod hanging in a "terminating" state either forever or for a very, very long time. What could be causing this behavior, and how do I go about debugging it?</p>
| <p>I encountered a similar problem on a node in my k8s 1.6 cluster, especially when the node has been running for a couple of weeks. It can happen to any node. When this happens, I restart kubelet on the node and the errors go away.</p>
<p>It's not the best thing to do, but it always solves the problem. It's also not detrimental to the cluster if you restart kubelet because the running pods continue to stay up.</p>
<p><code>kubectl get po -o wide</code> will likely reveal to you that the errant pods are running on one node. SSH to that node and restart kubelet.</p>
|
<p>Assume currently I have 2 <code>PVC</code>s with <code>ReadWriteOnce</code>, claimed by <code>Pod #1</code> and <code>Pod #2</code> respectively. Both are running on <code>Node #1</code>.</p>
<p>Next, <code>Pod #2</code> is updated with the newer Docker image. However, at the same time <code>Pod #3</code> is run and allocated to <code>Node #1</code>. Since <code>Node #1</code> is now full, <code>Pod #2</code> could only be allocated by Kubernetes to <code>Node #2</code>.</p>
<p>Since AWS EBS and Google PersistentDisk can only be mounted on a single Node, would <code>Pod #2</code> become unable to connect to the previously-claimed <code>PVC</code>? </p>
<p>If yes, how to avoid having this issue?</p>
| <p>Yes, that is the downside of pv/pvc with the current storage offerings of AWS and GCE.</p>
<p>To avoid this, you would have to use a different storage infrastructure which does not have this limitation. Possibilities are Ceph, Gluster, ScaleIO (and others). These solutions abstract the storage away from the disks and provide a storage layer which is no longer node-dependent.</p>
|
<p>I am able to set up a standalone gRPC server (in Java) on pods, and it is communicating with my local gRPC (Node.js) client over http2.</p>
<p>Now, as a next step, I am trying to set up a gRPC server + Spring Boot on our pods:</p>
<ol>
<li>Do I have to start Spring Boot and the gRPC server as two separate processes on different ports? The limitation on the pod is that if I start the main process (the Spring Boot process in this case?), I cannot start a child process (the gRPC server in this case?) on a different port.</li>
<li>How can I ensure that the Spring Boot + gRPC server communicates over http2?</li>
</ol>
<p>I see documentation with spring boot + eureka server + grpc but need to understand above details to proceed.</p>
| <ol>
<li><p>You can start Spring Boot and gRPC in one process, listening on different ports at the same time.</p></li>
<li><p>gRPC server communicates over http2 if you start the gRPC server by <code>io.grpc.netty.NettyServerBuilder</code></p></li>
</ol>
<p>You can see the project in <a href="https://github.com/LogNet/grpc-spring-boot-starter" rel="nofollow noreferrer">https://github.com/LogNet/grpc-spring-boot-starter</a></p>
|
<p>Is it possible to deploy a self-healing and scaling Redis-like key-value store that I can run in Kubernetes (or Swarm or any other automated cloud env)?</p>
<p>The challenges I found with Redis:</p>
<ol>
<li>You need to create a cluster manually with <code>redis-trib</code></li>
<li>New nodes need to be added to the cluster explicitly</li>
<li>Nodes need to be removed explicitly</li>
<li>Nodes do not replicate data in their shards peer to peer, but rather use a master-slave model</li>
</ol>
<p>The above means that the following scenarios all will fail. I have a simple 3-master and 3-slave cluster. "Master A" fails, leading Kubernetes to start a new "Master A" in <1 second:</p>
<ul>
<li>new "Master A" has no knowledge of the cluster and will not join</li>
<li>"Slave A", which had copies of the data, now syncs from the new "Master A", and loses all data, defeating the purpose of slave replica</li>
<li>In a completely new startup, "Master A" might start and initialize (assuming I can even automate the cluster init) before "Master B" is ready, and thus "Master B" never really joins the cluster</li>
</ul>
<p>Questions:</p>
<ol>
<li>Is there a way to automate Redis cluster init and sync in a non-predictable, non-persistent storage without <em>any</em> human interaction? </li>
<li>If not, is there an alternative that is a clean, in-memory key-value store (persistence is less important to me), that is self-healing and works peer-to-peer?</li>
</ol>
<p>Consul/etcd/zookeeper all work 100% peer-to-peer and self-heal (which is great), but their performance (supposedly) is far below Redis in-memory KV. They aren't built, e.g. for looking up a session with each Web API request. This is partially due to non-sharding (100% copies), partially due to disk writes.</p>
<p>Kafka's model (although a message queue, not a KV store) works well too (but depends on zk underneath): partitions and replicas, but essentially self-healing. I talk to one broker, it tells me where my topic ("shard") lives, I get it. </p>
<p>Is there any way to get that full autonomy using Redis so I can deploy in kube/swarm/cloud, or an alternative that provides similar performance with the autonomous model?</p>
| <p>You can use the Kubernetes Redis example, which works with Redis Sentinel. In case the master fails, Sentinel promotes a slave to be the new master. The Replication Controller boots a new slave Pod. Your application connects to Sentinel, and from that service you will get the IP of the new master. </p>
<p>Redis Sentinel <a href="https://redis.io/topics/sentinel" rel="nofollow noreferrer">https://redis.io/topics/sentinel</a></p>
<p>Kubernetes Example <a href="https://github.com/kubernetes/examples/tree/master/staging/storage/redis" rel="nofollow noreferrer">https://github.com/kubernetes/examples/tree/master/staging/storage/redis</a></p>
|
<p>I am trying to setup a multi broker kafka on a kubernetes cluster hosted in Azure. I have a single broker setup working. For the multi broker setup, currently I have an ensemble of zookeeper nodes(3) that manage the kafka service. I am deploying the kafka cluster as a replication controller with replication factor of 3. That is 3 brokers. How can I register the three brokers with Zookeeper such that they register different IP addresses with the Zookeeper? </p>
<p>I bring up my replication controller after the service is deployed and use the Cluster IP in my replication-controller yaml file to specify two advertised.listeners, one for SSL and another for PLAINTEXT. However, in this scenario all brokers register with the same IP and write to replicas fail. I don't want to deploy each broker as a separate replication controller/pod and service as scaling becomes an issue. I would really appreciate any thoughts/ideas on this. </p>
<p>Edit 1:</p>
<p>I am additionally trying to expose the cluster to another VPC in cloud. I have to expose SSL and PLAINTEXT ports for clients which I am doing using advertised.listeners. If I use a statefulset with replication factor of 3 and let kubernetes expose the canonical host names of the pods as host names, these cannot be resolved from an external client. The only way I got this working is to use/expose an external service corresponding to each broker. However, this does not scale. </p>
| <p>Kubernetes has the concept of <code>Statefulsets</code> to solve these issues. Each instance of a statefulset has its own DNS name, so you can refer to each instance by a DNS name. </p>
<p>This concept is described <a href="http://blog.kubernetes.io/2016/12/statefulset-run-scale-stateful-applications-in-kubernetes.html" rel="nofollow noreferrer">here</a> in more detail. You can also take a look at this <a href="https://raw.githubusercontent.com/kubernetes/kubernetes.github.io/master/docs/tutorials/stateful-application/zookeeper.yaml" rel="nofollow noreferrer">complete example</a>: </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: zk-headless
  labels:
    app: zk-headless
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: zk-config
data:
  ensemble: "zk-0;zk-1;zk-2"
  jvm.heap: "2G"
  tick: "2000"
  init: "10"
  sync: "5"
  client.cnxns: "60"
  snap.retain: "3"
  purge.interval: "1"
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-budget
spec:
  selector:
    matchLabels:
      app: zk
  minAvailable: 2
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: zk
spec:
  serviceName: zk-headless
  replicas: 3
  template:
    metadata:
      labels:
        app: zk
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - zk-headless
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: k8szk
        imagePullPolicy: Always
        image: gcr.io/google_samples/k8szk:v1
        resources:
          requests:
            memory: "4Gi"
            cpu: "1"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        env:
        - name: ZK_ENSEMBLE
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: ensemble
        - name: ZK_HEAP_SIZE
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: jvm.heap
        - name: ZK_TICK_TIME
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: tick
        - name: ZK_INIT_LIMIT
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: init
        - name: ZK_SYNC_LIMIT
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: tick
        - name: ZK_MAX_CLIENT_CNXNS
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: client.cnxns
        - name: ZK_SNAP_RETAIN_COUNT
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: snap.retain
        - name: ZK_PURGE_INTERVAL
          valueFrom:
            configMapKeyRef:
              name: zk-config
              key: purge.interval
        - name: ZK_CLIENT_PORT
          value: "2181"
        - name: ZK_SERVER_PORT
          value: "2888"
        - name: ZK_ELECTION_PORT
          value: "3888"
        command:
        - sh
        - -c
        - zkGenConfig.sh && zkServer.sh start-foreground
        readinessProbe:
          exec:
            command:
            - "zkOk.sh"
          initialDelaySeconds: 15
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - "zkOk.sh"
          initialDelaySeconds: 15
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 20Gi
</code></pre>
|
<p>I developed two web apps (call them A and B) and deployed them to k8s. Web app A refers to the images generated by web app B. In the previous environment, B had a fixed IP and port, which made it very easy to reach the images hosted on B.</p>
<p>In the K8S environment, I use the service type - NodePort. In that case, the port is randomly generated upon deployment each time. </p>
<p>The question is whether there is a way to fetch B service's node port programmatically from the A Pod. Or is it possible to fix the node port for one service? </p>
| <p>There are three answers to your question:</p>
<p>The first is that one should not be destroying the <code>Service</code> just because the <code>Pod</code>s behind it are redeployed. The <code>Service</code> is designed to be the long-lived contract that abstracts over the placement (or count) of the servicing <code>Pod</code>s</p>
<p>The second is that if you do wish to have web-app A use a known-to-you <code>NodePort</code>, you can put a value in the <a href="https://kubernetes.io/docs/api-reference/v1.7/#serviceport-v1-core" rel="nofollow noreferrer">nodePort:</a> field of the <code>ports:</code> mapping inside the <code>Service</code>, and assuming it is not already assigned, kubernetes will defer to your judgement </p>
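<p>A hedged sketch of that second option, with placeholder names and ports (the chosen <code>nodePort</code> must be free and inside the node-port range, 30000-32767 by default):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: web-app-b        # placeholder for B's Service name
spec:
  type: NodePort
  selector:
    app: web-app-b       # placeholder label
  ports:
  - port: 80
    targetPort: 8080     # placeholder container port
    nodePort: 30080      # the fixed, known-to-you node port
</code></pre>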
<p>The third answer is that every <code>Pod</code> that runs within the cluster automatically gains an API token for communication with the kubernetes API (it lives in <code>/run/secrets/kubernetes.io</code> or similar), so you can ask the API from within the Pods to give you the current value of the Service's nodePort</p>
|
<p>I just want to find out if I understood the documentation right:</p>
<p>Suppose I have an nginx server configured with a Deployment, version 1.7.9 with 4 replicas.</p>
<pre><code>apiVersion: apps/v1beta1 # for versions before 1.6.0 use extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 4
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
</code></pre>
<p>Now I update the image to version 1.9.1:</p>
<p><code>kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1</code></p>
<p>With <code>kubectl get pods</code> I see the following:</p>
<pre><code>> kubectl get pods
NAME                     READY     STATUS        RESTARTS   AGE
nginx-2100875782-c4fwg   1/1       Running       0          3s
nginx-2100875782-vp23q   1/1       Running       0          3s
nginx-390780338-bl97b    1/1       Terminating   0          17s
nginx-390780338-kq4fl    1/1       Running       0          17s
nginx-390780338-rx7sz    1/1       Running       0          17s
nginx-390780338-wx0sf    1/1       Running       0          17s
</code></pre>
<p>2 new instances (c4fwg, vp23q) of 1.9.1 have been started, coexisting for a while with 3 instances of the 1.7.9 version.</p>
<p>What happens to the request made to the service at this moment? Do all request go to the old pods until all the new ones are available? Or are the requests load balanced between the new and the old pods?</p>
<p>In the last case, is there a way to modify this behaviour and ensure that all traffic goes to the old versions until all new pods are started?</p>
| <p>The answer to "what happens to the request" is that they will be round-robin-ed across all Pods that match the selector within the Service, so yes, they will all receive traffic. I believe kubernetes considers this to be a feature, not a bug.</p>
<p>The answer about the traffic going to the old Pods can be answered in two ways: perhaps Deployments are not suitable for your style of rolling out new Pods, since that is the way they operate. The other answer is that you can update the Pod selector inside the Service to more accurately describe "this Service is for Pods 1.7.9", which will pin that Service to the "old" pods, and then after even just one of the 1.9.1 Pods has been started and is Ready, you can update the selector to say "this Service is for Pods 1.9.1"</p>
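<p>A sketch of that selector-pinning idea (the <code>version</code> label is an assumption on my part; your Deployment's pod template would need to carry a matching label for this to work):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
    version: "1.7.9"   # switch to "1.9.1" once the new pods are Ready
  ports:
  - port: 80
    targetPort: 80
</code></pre>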
<p>If you find all this to be too much manual labor, there are a whole bunch of intermediary traffic managers that have more fine-grained control than just using pod selectors, or you can consider a formal rollout product such as Spinnaker that will automate what I just described (presuming, of course, you can get Spinnaker to work; I wish you luck with it)</p>
|
<p>I have created two different namespaces for different environments: one is devops-qa and the other is devops-dev. I created two Ingresses in the different namespaces. While creating the ingress of the qa env in the devops-qa namespace, the rules written inside the qa ingress work fine, meaning I am able to access the webpage of the qa env. The moment I create the ingress of the dev env in the devops-dev namespace, I am able to access the webpage of the dev env but can no longer access the webpage of qa. And when I delete the dev ingress, I am again able to access the qa env website.</p>
<p>Below is the ingree of both dev and qa env.</p>
<p><strong>Dev Ingress</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
  name: cafe-ingress-dev
  namespace: devops-dev
spec:
  tls:
  - hosts:
    - cafe-dev.example.com
    secretName: default-token-drk6n
  rules:
  - host: cafe-dev.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: miqpdev-svc
          servicePort: 80
</code></pre>
<p><strong>QA Ingress</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
  name: cafe-ingress-qa
  namespace: devops-qa
spec:
  tls:
  - hosts:
    - cafe-qa.example.com
    secretName: default-token-jdnqf
  rules:
  - host: cafe-qa.example.com
    http:
      paths:
      - path: /greentea
        backend:
          serviceName: greentea-svc
          servicePort: 80
      - path: /blackcoffee
        backend:
          serviceName: blackcoffee-svc
          servicePort: 80
</code></pre>
<p>The token mentioned in each ingress file belongs to its own namespace, and the nginx ingress controller is running in the QA namespace.
How can I run both ingresses and be able to reach all the websites deployed in both the dev and qa envs?</p>
| <p>I actually solved my problem. I had done everything correctly; the only thing I had not done was map the hostname to the same IP in Route53. Instead of accessing the website by hostname, I was accessing it by IP. After accessing the website by its hostname, I was able to reach it :) </p>
|
<p>While trying to run the next Docker command:</p>
<pre><code>docker run \
--net=host \
--pid=host \
--privileged=true \
-d \
gcr.io/google_containers/hyperkube-amd64:v1.7.0 \
/nsenter \
--target=1 \
--mount \
--wd=. \
-- ./hyperkube kubelet \
--hostname-override="127.0.0.1" \
--address="0.0.0.0" \
--api-servers=http://localhost:8080 \
--config=etc/kubernetes/manifests \
--v=2
</code></pre>
<p>I am getting an error</p>
<blockquote>
<p>nsenter: failed to execute ./hyperkube: No such file or directory</p>
</blockquote>
<p>I have been trying a lot of combinations, but nothing seems to work. Has someone else tried to do this before?</p>
<p>My docker version is</p>
<pre><code>Client:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   78d1802
 Built:        Wed Jan 11 00:23:16 2017
 OS/Arch:      darwin/amd64

Server:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.4
 Git commit:   78d1802
 Built:        Wed Jan 11 00:23:16 2017
 OS/Arch:      linux/amd64
</code></pre>
<p>Thanks</p>
| <p>kubernetes <a href="https://github.com/kubernetes/minikube/pull/1542#issuecomment-315201536" rel="nofollow noreferrer">doesn't support Docker 17.06</a>. Try Docker 1.12.6 instead.</p>
|
<p>Currently I'm using kubeadm 1.7.2 to install kubernetes.</p>
<p>It will download images like <code>gcr.io/google_containers/etcd-amd64:3.0.17</code>.
But in gcr.io, I only see <code>google-containers</code>.</p>
<p>So is <code>gcr.io/google_containers</code> the same as <code>gcr.io/google-containers</code>?</p>
<p>If they are the same, why does kubeadm download <code>google_containers/xxx</code> when I tag the images as <code>google-containers</code> instead?</p>
| <blockquote>
<p>So gcr.io.google_containers is the same gcr.io.google-containers</p>
</blockquote>
<p>Since 2015, yes: see <a href="https://github.com/kubernetes/kubernetes/issues/6229" rel="nofollow noreferrer">kubernetes issue 6229</a></p>
<blockquote>
<p>It turns out that that is because I am using an older version of docker which does not allow "<code>-</code>" in namespaces.<br>
I get <code>Invalid namespace name (google-containers), only [a-z0-9_] are allowed, size between 4 and 30</code> error on running <code>$docker pull gcr.io/google-containers/heapster</code>.</p>
<p>@thockin suggested that we can change the path to use <code>google_containers</code> instead of <code>google-containers</code> since that works for all versions of docker.</p>
</blockquote>
|
<p>I am trying to assign a static IP address to a pod on deployment.</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: aws-test-mysql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: aws-test-mysql
    spec:
      containers:
      - name: aws-test-mysql
        image: 461677341235123.dkr.ecr.us-east-1.amazonaws.com/aws-test-mysql
        securityContext:
          privileged: true
        ports:
        - containerPort: 3306
          hostIP: 172.20.32.50
          hostPort: 3306
        resources:
          requests:
            cpu: 100m
      imagePullSecrets:
      - name: ecrkey
</code></pre>
<p>As you can see, when I describe my pod it is created with another IP.</p>
<pre><code>test-mbp1:aws test$ kubectl describe pods | grep IP
IP: 100.96.1.3
</code></pre>
<p>I'm trying to deploy a pod with a static IP on "kind: Deployment" and not as a service.</p>
<p>Is this possible?</p>
| <p>A static IP cannot be assigned to a Pod because of the dynamic nature of kubernetes' IP layer.</p>
<p>Since you don't want to attach a Service (which is the best way imho), a close alternative is to convert the Deployment to a StatefulSet. This will give the Pod a static hostname which more-or-less fulfils your requirement.</p>
<p>The first replica of the StatefulSet will be called <code>aws-test-mysql-0.<kubernetes.cluster.tld></code>.</p>
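<p>A rough sketch of that conversion, with a headless Service to back the StatefulSet (the Service name and label are illustrative, not taken from your manifest):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: aws-test-mysql
spec:
  clusterIP: None          # headless: gives each replica a stable DNS name
  selector:
    app: aws-test-mysql
  ports:
  - port: 3306
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: aws-test-mysql
spec:
  serviceName: aws-test-mysql
  replicas: 1
  template:
    metadata:
      labels:
        app: aws-test-mysql
    spec:
      containers:
      - name: aws-test-mysql
        image: 461677341235123.dkr.ecr.us-east-1.amazonaws.com/aws-test-mysql
        ports:
        - containerPort: 3306
      imagePullSecrets:
      - name: ecrkey
</code></pre>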
|
<p>I understand how to set targetPort as an integer value when defining a service in k8s.</p>
<p>However, I'm a little confused about how to set targetPort with a string value.</p>
<p>Is there any example about this?</p>
<p>Thanks,</p>
| <p>This Service is for Prometheus. In the following manifest, you first have to define <code>web</code> in the Deployment before you can refer to it as a string in <code>targetPort</code>.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  labels:
    prometheus: k8s
  name: prometheus-k8s
  namespace: monitoring
spec:
  ports:
  - name: web
    nodePort: 30900
    port: 9090
    protocol: TCP
    targetPort: web
  selector:
    prometheus: k8s
  type: NodePort
</code></pre>
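<p>The string has to match a named container port on the backing pods, along these lines (a trimmed, illustrative excerpt of the pod template, not the full Prometheus manifest):</p>
<pre><code># excerpt of the pod template behind the Service
containers:
- name: prometheus
  image: prom/prometheus      # illustrative image
  ports:
  - name: web                 # this name is what targetPort: web resolves to
    containerPort: 9090
</code></pre>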
|
<p>In kubernetes, is there a way to mount <a href="https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes" rel="noreferrer">hostPath volume</a> after pod got started?</p>
| <p>There is no way to add a volume to a running Docker container, so Kubernetes has no way to add a volume to a running Pod. Modifying the Pod to include a new <code>hostPath</code> volume will recreate the container(s) in the Pod and mount the volumes as specified.</p>
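<p>So the practical route is to add the volume to the pod spec and accept the restart, for example (a hedged sketch with placeholder names and paths):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: with-hostpath
spec:
  containers:
  - name: app
    image: nginx                  # placeholder image
    volumeMounts:
    - name: host-data
      mountPath: /data
  volumes:
  - name: host-data
    hostPath:
      path: /var/local/data       # placeholder host directory
</code></pre>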
|
<p>Jenkins ver. 2.60.1 (running in container on kubernetes)</p>
<p>Kubernetes Plugin ver. 0.11 (<a href="https://github.com/jenkinsci/kubernetes-plugin" rel="nofollow noreferrer">https://github.com/jenkinsci/kubernetes-plugin</a>)</p>
<p>Pipeline test:</p>
<pre><code>podTemplate(
    label: 'mypod',
    volumes: [
        persistentVolumeClaim(claimName: 'nfs-maven', mountPath: '/mnt/', readOnly: false)],
    envVars: [
        containerEnvVar(key: 'FOO', value: 'BAR'),
    ],
    containers: [
        containerTemplate(name: 'golang',
            image: 'golang',
            ttyEnabled: true,
            command: 'cat',
        )]
)
{
    node('mypod') {
        stage('test env') {
            container('golang') {
                stage('build') {
                    sh 'echo $FOO'
                    sh 'sleep 3600'
                }
            }
        }
    }
}
</code></pre>
<p>The vars are not passed into the containers.
The echo prints nothing, with either <code>echo $FOO</code> or <code>echo \$FOO</code>.
I have tried at both the pod level and the container level.</p>
<p>When I describe the created pod, I only get the following environment vars:</p>
<pre><code>Environment:
JENKINS_LOCATION_URL: http://ldn1-kube1:31000/
JENKINS_SECRET: 107cb696a8792f998fd41b6ccacf833ea74941fc9a95c39c4b2a1cde4c008b35
JENKINS_JNLP_URL: http://10.233.60.248:8080/computer/kubernetes-57beb710bfb44cea8f964d63049b2942-355760c790d6b/slave-agent.jnlp
JENKINS_TUNNEL: 10.233.60.248:50000
JENKINS_NAME: kubernetes-57beb710bfb44cea8f964d63049b2942-355760c790d6b
JENKINS_URL: http://10.233.60.248:8080
HOME: /home/jenkins
</code></pre>
| <p>Upgrading the kubernetes-plugin to 0.12 (29/07/2017) and restarting jenkins has fixed the issue!</p>
|
<p>I have my own nginx configuration <code>/home/ubuntu/workspace/web.conf</code>, generated by a script. I would prefer to have it under <code>/etc/nginx/conf.d</code> beside <code>default.conf</code>.</p>
<p>Below is the <code>nginx.yaml</code></p>
<pre>
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  volumes:
  - name: webconf
    hostPath:
      path: /home/ubuntu/workspace/web.conf
  containers:
  - image: nginx
    name: nginx
    ports:
    - containerPort: 18001
      protocol: TCP
    volumeMounts:
    - mountPath: /etc/nginx/conf.d/web.conf
      name: webconf
</pre>
<p>However, it is mapped as a folder only:</p>
<pre>
$ kubectl create -f nginx.yaml
pod "nginx" created
$ kubectl exec -it nginx -- bash
root@nginx:/app# ls -al /etc/nginx/conf.d/
total 12
drwxr-xr-x 1 root root 4096 Aug 3 12:27 .
drwxr-xr-x 1 root root 4096 Aug 3 11:46 ..
-rw-r--r-- 2 root root 1093 Jul 11 13:06 default.conf
drwxr-xr-x 2 root root 0 Aug 3 11:46 web.conf
</pre>
<p>It works for a Docker container with <code>-v hostfile:containerfile</code>.</p>
<p>How can I do this in Kubernetes?</p>
<p>BTW: I use minikube <code>0.21.0</code> on <code>Ubuntu 16.04 LTS</code> with <code>kvm</code></p>
| <p>Try using the <code>subPath</code> key on your <code>volumeMounts</code> like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: singlefile
spec:
  containers:
  - image: ubuntu
    name: singlefiletest
    command:
    - /bin/bash
    - -c
    - ls -la /singlefile/ && cat /singlefile/hosts
    volumeMounts:
    - mountPath: /singlefile/hosts
      name: etc
      subPath: hosts
  volumes:
  - name: etc
    hostPath:
      path: /etc
</code></pre>
<p>Example:</p>
<pre><code>$ kubectl apply -f singlefile.yaml
pod "singlefile" created
$ kubectl logs singlefile
total 24
drwxr-xr-x. 2 root root 4096 Aug 3 12:50 .
drwxr-xr-x. 1 root root 4096 Aug 3 12:50 ..
-rw-r--r--. 1 root root 1213 Apr 26 21:25 hosts
# /etc/hosts: Local Host Database
#
# This file describes a number of aliases-to-address mappings for the for
# local hosts that share this file.
...
</code></pre>
|
<p>How can I prevent kube-dns from forwarding requests to Google's name servers (8.8.8.8:53 and 8.8.4.4:53)?
I just want to launch pods for internal use only, which means containers in the pods are not supposed to connect to the outside at all.
When a Zookeeper client connects to a Zookeeper server using a hostname (e.g. zkCli.sh -server zk-1.zk-headless), it takes 10 seconds for the client to change its state from [Connecting] to [Connected].
The reason I suspect kube-dns is that, with the pods' IP addresses, the client gets connected instantly.
When I take a look at the log of kube-dns, I find the following two lines:</p>
<pre><code>07:25:35:170773 1 logs.go:41] skydns: failure to forward request "read udp 10.244.0.13:43455->8.8.8.8:53: i/o timeout"
07:25:39:172847 1 logs.go:41] skydns: failure to forward request "read udp 10.244.0.13:42388->8.8.8.8:53: i/o timeout"
</code></pre>
<p>It was around 07:25:30 when the client starts to connect to the server.</p>
<p>I'm running Kubernetes on a private cluster where internal servers are communicating to internet via http_proxy/https_proxy, which means I cannot connect to 8.8.8.8 for name resolution, AFAIK.</p>
<p>I found the followings from <a href="https://github.com/skynetservices/skydns" rel="nofollow noreferrer">https://github.com/skynetservices/skydns</a>:</p>
<ul>
<li>The default value of an environmental variable named <strong>SKYDNS_NAMESERVERS</strong> is <strong>"8.8.8.8:53,8.8.4.4:53"</strong></li>
<li>I could achieve my purpose by setting <strong>no_rec</strong> to <strong>true</strong></li>
</ul>
<p>I've been initializing Kubernetes using kubeadm, and I couldn't find a way to modify the environment variable and set the property value of skydns.</p>
<p>How can I prevent kube-dns from forwarding requests to the outside of an internal Kubernetes cluster which is deployed by kubeadm?</p>
| <p>I don't think there is an option to completely prevent the <code>kube-dns</code> addon from forwarding requests. There certainly isn't an option directly in <code>kubeadm</code> for that.</p>
<p>Your best bet is to edit the <code>kube-dns</code> <code>Deployment</code> (e.g. <code>kubectl edit -n kube-system deploy kube-dns</code>) yourself after kubeadm has started the cluster and change things to work for you.</p>
<p>You may want to try changing the upstream nameserver to something other than 8.8.8.8 that <em>is</em> accessible by the cluster. You should be able to do that by adding <code>--nameservers=x.x.x.x</code> to the <code>args</code> for the <code>kubedns</code> container.</p>
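<p>For illustration only (keep the image and the other flags exactly as your Deployment already has them; the nameserver address here is a placeholder):</p>
<pre><code># excerpt of the kube-dns Deployment, kubedns container
containers:
- name: kubedns
  args:
  - --domain=cluster.local.
  - --dns-port=10053
  - --nameservers=10.0.0.10:53   # an internal resolver reachable from the cluster
</code></pre>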
|
<p>I exposed a service with a static IP and an Ingress through an nginx controller, as in one of the examples of the <a href="https://github.com/kubernetes/ingress/tree/master/examples/static-ip/nginx" rel="noreferrer">kubernetes/ingress</a> repository. I have a second LoadBalancer service that is not managed by any Ingress resource, and it is no longer properly exposed after adding the new resources for the first service (I do not understand why this is the case).</p>
<p>I tried to add a second Ingress and LoadBalancer service to assign the second static IP, but I can't get it to work. </p>
<p>How would I go about exposing the second service, preferably with an Ingress? Do I need to add a second Ingress resource or do I have to reconfigure the one I already have?</p>
| <p>Using a <code>Service</code> with <code>type: LoadBalancer</code> and using an <code>Ingress</code> are usually mutually exclusive ways to expose your application.</p>
<p>When you create a <code>Service</code> with <code>type: LoadBalancer</code>, Kubernetes creates a LoadBalancer in your cloud account that has an IP, opens the ports on that LoadBalancer that match your <code>Service</code>, and then directs all traffic to that IP to the 1 <code>Service</code>. So if you have 2 <code>Service</code> objects, each with 'type: LoadBalancer' for 2 different <code>Deployment</code>s, then you have 2 IPs as well (one for each <code>Service</code>).</p>
<p>The <code>Ingress</code> model is based on directing traffic through a single Ingress Controller which is running something like nginx. As the <code>Ingress</code> resources are added, the Ingress Controller reconfigures nginx to include the new <code>Ingress</code> details. In this case, there will be a <code>Service</code> for the Ingress Controller (e.g. nginx) that is <code>type: LoadBalancer</code>, but all of the services that the <code>Ingress</code> resources point to should be <code>type: ClusterIP</code>. Traffic for all the <code>Ingress</code> objects will flow through the same public IP of the LoadBalancer for the Ingress Controller <code>Service</code> to the Ingress Controller (e.g. nginx) <code>Pod</code>s. The configuration details from the <code>Ingress</code> object (e.g. virtual host or port or route) will then determine which <code>Service</code> will get the traffic.</p>
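<p>So one way to expose the second application through the same Ingress controller is to give it a <code>ClusterIP</code> Service and its own <code>Ingress</code> resource. A hedged sketch with placeholder names and host:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: second-service            # placeholder
spec:
  type: ClusterIP
  selector:
    app: second-app               # placeholder label
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: second-ingress
spec:
  rules:
  - host: second.example.com      # placeholder host
    http:
      paths:
      - path: /
        backend:
          serviceName: second-service
          servicePort: 80
</code></pre>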
|
<p>In preparation for HIPAA compliance, we are transitioning our Kubernetes cluster to use secure endpoints across the fleet (between all pods). Since the cluster is composed of about 8-10 services currently using HTTP connections, it would be super useful to have this taken care of by Kubernetes.</p>
<p>The specific attack vector we'd like to address with this is packet sniffing between nodes (physical servers).</p>
<p>This question breaks down into two parts:</p>
<ul>
<li>Does Kubernetes encrypt the traffic between pods & nodes by default?</li>
<li>If not, is there a way to configure it such?</li>
</ul>
<p>Many thanks!</p>
| <p>Actually the correct answer is "it depends". I would split the cluster into 2 separate networks.</p>
<ol>
<li><p><strong>Control Plane Network</strong></p>
<p>This network is that of the physical network or the underlay network in other words.</p>
<p>k8s control-plane elements - kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, kubelet - talk to each other in various ways. Except for a few endpoints (eg. metrics), it is possible to configure encryption on all endpoints.</p>
<p>If you're also pentesting, then <a href="https://kubernetes.io/docs/admin/kubelet-authentication-authorization/#kubelet-authorization" rel="nofollow noreferrer">kubelet authn/authz</a> should be switched on too. Otherwise, the encryption doesn't prevent unauthorized access to the kubelet. This endpoint (at port 10250) can be hijacked with ease.</p>
</li>
<li><p><strong>Cluster Network</strong></p>
<p>The cluster network is the one used by the Pods, which is also referred to as the overlay network. Encryption is left to the 3rd-party overlay plugin to implement; failing that, the app has to implement it.</p>
<p>The Weave overlay <a href="https://github.com/weaveworks-experiments/weave-kube/issues/38" rel="nofollow noreferrer">supports encryption</a>. The service mesh linkerd that @lukas-eichler suggested can also achieve this, but on a different networking layer.</p>
</li>
</ol>
|
<p>I am on Ubuntu. Today I upgraded the Kubernetes services
with apt-get upgrade; apt-get update. Since then Kubernetes has stopped working: none of the services are running and the api-server is not starting. I get the below error when I run any command.</p>
<pre><code>root:~# kubectl get pods
The connection to the server 172.31.139.86:6443 was refused - did you specify the right host or port?
root:~# kubelet --version
Kubernetes v1.7.3
root:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-03-28T16:36:33Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server 172.31.139.86:6443 was refused - did you specify the right host or port?
</code></pre>
<p>When running kubelet, I get the errors below; it reports that no api server is running, along with various other errors.</p>
<pre><code>root:~# kubelet
I0803 15:27:47.289047 20182 feature_gate.go:144] feature gates: map[]
W0803 15:27:47.289162 20182 server.go:496] No API client: no api servers specified
I0803 15:27:47.289208 20182 client.go:72] Connecting to docker on unix:///var/run/docker.sock
I0803 15:27:47.289221 20182 client.go:92] Start docker client with request timeout=2m0s
I0803 15:27:47.310348 20182 manager.go:143] cAdvisor running in container: "/user.slice/user-0.slice/session-1.scope"
W0803 15:27:47.342781 20182 manager.go:151] unable to connect to Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
I0803 15:27:47.370809 20182 fs.go:117] Filesystem partitions: map[/dev/mapper/noiro--server1--vg-root:{mountpoint:/var/lib/docker/aufs major:253 minor:0 fsType:ext4 blockSize:0} /dev/sda1:{mountpoint:/boot major:8 minor:1 fsType:ext2 blockSize:0}]
I0803 15:27:47.375524 20182 manager.go:198] Machine: {NumCores:32 CpuFrequency:3600000 MemoryCapacity:270376665088 MachineID:95972d262b4e4cc64d46557758b0c9ea SystemUUID:36E3B4D4-D196-FC41-AC15-EABB4D086392 BootID:223527cc-aaef-482f-8871-726f48e853e7 Filesystems:[{Device:/dev/mapper/noiro--server1--vg-root DeviceMajor:253 DeviceMinor:0 Capacity:712231378944 Type:vfs Inodes:44179456 HasInodes:true} {Device:/dev/sda1 DeviceMajor:8 DeviceMinor:1 Capacity:494512128 Type:vfs Inodes:124928 HasInodes:true}] DiskMap:map[253:1:{Name:dm-1 Major:253 Minor:1 Size:274760466432 Scheduler:none} 8:0:{Name:sda Major:8 Minor:0 Size:998999326720 Scheduler:cfq} 253:0:{Name:dm-0 Major:253 Minor:0 Size:723722960896 Scheduler:none}] NetworkDevices:[{Name:enp1s0f0 MacAddress:28:6f:7f:31:8d:06 Speed:1000 Mtu:1500} {Name:enp1s0f1 MacAddress:28:6f:7f:31:8d:07 Speed:-1 Mtu:1500} {Name:enp6s0f0 MacAddress:90:e2:ba:d4:af:94 Speed:-1 Mtu:1500} {Name:enp6s0f1 MacAddress:90:e2:ba:d4:af:95 Speed:10000 Mtu:1500} {Name:virbr0 MacAddress:00:00:00:00:00:00 Speed:0 Mtu:1500} {Name:virbr0-nic MacAddress:52:54:00:1e:95:dd Speed:0 Mtu:1500} {Name:virbr1 MacAddress:00:00:00:00:00:00 Speed:0 Mtu:1500} {Name:virbr1-nic MacAddress:52:54:00:80:4d:52 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:135105630208 Cores:[{Id:0 Threads:[0 16] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:1 Threads:[1 17] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:2 Threads:[2 18] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:3 Threads:[3 19] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:4 Threads:[4 20] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:5 Threads:[5 21] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:6 Threads:[6 22] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:7 Threads:[7 23] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:20971520 Type:Unified Level:3}]} {Id:1 Memory:135271034880 Cores:[{Id:0 Threads:[8 24] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:1 Threads:[9 25] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:2 Threads:[10 26] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:3 Threads:[11 27] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:4 Threads:[12 28] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:5 Threads:[13 29] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:6 Threads:[14 30] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]} {Id:7 Threads:[15 31] Caches:[{Size:32768 Type:Data Level:1} {Size:32768 Type:Instruction Level:1} {Size:262144 Type:Unified Level:2}]}] Caches:[{Size:20971520 
Type:Unified Level:3}]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
I0803 15:27:47.376993 20182 manager.go:204] Version: {KernelVersion:4.10.0-28-generic ContainerOsVersion:Ubuntu 16.04.3 LTS DockerVersion:17.06.0-ce DockerAPIVersion:1.30 CadvisorVersion: CadvisorRevision:}
W0803 15:27:47.377969 20182 server.go:356] No api server defined - no events will be sent to API server.
I0803 15:27:47.377997 20182 server.go:536] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
W0803 15:27:47.380752 20182 container_manager_linux.go:218] Running with swap on is not supported, please disable swap! This will be a fatal error by default starting in K8s v1.6! In the meantime, you can opt-in to making this a fatal error by enabling --experimental-fail-swap-on.
I0803 15:27:47.380845 20182 container_manager_linux.go:246] container manager verified user specified cgroup-root exists: /
I0803 15:27:47.380871 20182 container_manager_linux.go:251] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} ExperimentalQOSReserved:map[]}
W0803 15:27:47.386826 20182 kubelet_network.go:70] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
I0803 15:27:47.386880 20182 kubelet.go:508] Hairpin mode set to "hairpin-veth"
I0803 15:27:47.414780 20182 docker_service.go:208] Docker cri networking managed by kubernetes.io/no-op
I0803 15:27:47.443874 20182 docker_service.go:225] Setting cgroupDriver to cgroupfs
I0803 15:27:47.483153 20182 remote_runtime.go:42] Connecting to runtime service unix:///var/run/dockershim.sock
I0803 15:27:47.485332 20182 kuberuntime_manager.go:166] Container runtime docker initialized, version: 17.06.0-ce, apiVersion: 1.30.0
I0803 15:27:47.487348 20182 server.go:943] Started kubelet v1.7.3
E0803 15:27:47.487409 20182 kubelet.go:1229] Image garbage collection failed once. Stats initialization may not have completed yet: unable to find data for container /
I0803 15:27:47.487496 20182 server.go:132] Starting to listen on 0.0.0.0:10250
W0803 15:27:47.487532 20182 kubelet.go:1313] No api server defined - no node status update will be sent.
I0803 15:27:47.487837 20182 kubelet_node_status.go:247] Setting node annotation to enable volume controller attach/detach
I0803 15:27:47.489587 20182 server.go:310] Adding debug handlers to kubelet server.
E0803 15:27:47.491669 20182 kubelet.go:1729] Failed to check if disk space is available for the runtime: failed to get fs info for "runtime": unable to find data for container /
E0803 15:27:47.491703 20182 kubelet.go:1737] Failed to check if disk space is available on the root partition: failed to get fs info for "root": unable to find data for container /
I0803 15:27:47.492891 20182 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I0803 15:27:47.492966 20182 status_manager.go:136] Kubernetes client is nil, not starting status manager.
I0803 15:27:47.492971 20182 volume_manager.go:245] Starting Kubelet Volume Manager
E0803 15:27:47.492981 20182 container_manager_linux.go:543] [ContainerManager]: Fail to get rootfs information unable to find data for container /
I0803 15:27:47.492981 20182 kubelet.go:1809] Starting kubelet main sync loop.
I0803 15:27:47.493066 20182 kubelet.go:1820] skipping pod synchronization - [container runtime is down PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s]
I0803 15:27:47.549343 20182 factory.go:351] Registering Docker factory
W0803 15:27:47.549379 20182 manager.go:247] Registration of the rkt container factory failed: unable to communicate with Rkt api service: rkt: cannot tcp Dial rkt api service: dial tcp 127.0.0.1:15441: getsockopt: connection refused
I0803 15:27:47.549389 20182 factory.go:54] Registering systemd factory
I0803 15:27:47.549675 20182 factory.go:86] Registering Raw factory
I0803 15:27:47.549959 20182 manager.go:1121] Started watching for new ooms in manager
I0803 15:27:47.552051 20182 oomparser.go:185] oomparser using systemd
I0803 15:27:47.552852 20182 manager.go:288] Starting recovery of all containers
I0803 15:27:47.641407 20182 manager.go:293] Recovery completed
I0803 15:27:47.800854 20182 kubelet_node_status.go:247] Setting node annotation to enable volume controller attach/detach
E0803 15:27:47.821579 20182 helpers.go:771] Could not find capacity information for resource storage.kubernetes.io/scratch
W0803 15:27:47.821613 20182 helpers.go:782] eviction manager: no observation found for eviction signal allocatableNodeFs.available
E0803 15:27:52.501201 20182 kubelet_volumes.go:128] Orphaned pod "12c13b83-762a-11e7-af75-286f7f318d06" found, but volume paths are still present on disk. : There were a total of 10 errors similar to this. Turn up verbosity to see them.
E0803 15:27:53.499008 20182 kubelet_volumes.go:128] Orphaned pod "12c13b83-762a-11e7-af75-286f7f318d06" found, but volume paths are still present on disk. : There were a total of 10 errors similar to this. Turn up verbosity to see them.
E0803 15:27:55.500440 20182 kubelet_volumes.go:128] Orphaned pod "12c13b83-762a-11e7-af75-286f7f318d06" found, but volume paths are still present on disk. : There were a total of 10 errors similar to this. Turn up verbosity to see them.
E0803 15:27:57.498812 20182 kubelet_volumes.go:128] Orphaned pod "12c13b83-762a-11e7-af75-286f7f318d06" found, but volume paths are still present on disk. : There were a total of 10 errors similar to this. Turn up verbosity to see them.
I0803 15:27:57.889786 20182 kubelet_node_status.go:247] Setting node annotation to enable volume controller attach/detach
E0803 15:27:59.501288 20182 kubelet_volumes.go:128] Orphaned pod "12c13b83-762a-11e7-af75-286f7f318d06" found, but volume paths are still present on disk. : There were a total of 10 errors similar to this. Turn up verbosity to see them.
E0803 15:28:01.499147 20182 kubelet_volumes.go:128] Orphaned pod "12c13b83-762a-11e7-af75-286f7f318d06" found, but volume paths are still present on disk. : There were a total of 10 errors similar to this. Turn up verbosity to see them.
E0803 15:28:03.500537 20182 kubelet_volumes.go:128] Orphaned pod "12c13b83-762a-11e7-af75-286f7f318d06" found, but volume paths are still present on disk. : There were a total of 10 errors similar to this. Turn up verbosity to see them.
E0803 15:28:05.500421 20182 kubelet_volumes.go:128] Orphaned pod "12c13b83-762a-11e7-af75-286f7f318d06" found, but volume paths are still present on disk. : There were a total of 10 errors similar to this. Turn up verbosity to see them.
E0803 15:28:07.500412 20182 kubelet_volumes.go:128] Orphaned pod "12c13b83-762a-11e7-af75-286f7f318d06" found, but volume paths are still present on disk. : There were a total of 10 errors similar to this. Turn up verbosity to see them.
I0803 15:28:07.976617 20182 kubelet_node_status.go:247] Setting node annotation to enable volume controller attach/detach
E0803 15:28:09.501419 20182 kubelet_volumes.go:128] Orphaned pod "12c13b83-762a-11e7-af75-286f7f318d06" found, but volume paths are still present on disk. : There were a total of 10 errors similar to this. Turn up verbosity to see them.
E0803 15:28:11.498850 20182 kubelet_volumes.go:128] Orphaned pod "12c13b83-762a-11e7-af75-286f7f318d06" found, but volume paths are still present on disk. : There were a total of 10 errors similar to this. Turn up verbosity to see them.
E0803 15:28:13.500340 20182 kubelet_volumes.go:128] Orphaned pod "12c13b83-762a-11e7-af75-286f7f318d06" found, but volume paths are still present on disk. : There were a total of 10 errors similar to this. Turn up verbosity to see them.
E0803 15:28:15.498716 20182 kubelet_volumes.go:128] Orphaned pod "12c13b83-762a-11e7-af75-286f7f318d06" found, but volume paths are still present on disk. : There were a total of 10 errors similar to this. Turn up verbosity to see them.
E0803 15:28:17.500026 20182 kubelet_volumes.go:128] Orphaned pod "12c13b83-762a-11e7-af75-286f7f318d06" found, but volume paths are still present on disk. : There were a total of 10 errors similar to this. Turn up verbosity to see them.
I0803 15:28:18.058552 20182 kubelet_node_status.go:247] Setting node annotation to enable volume controller attach/detach
E0803 15:28:19.501567 20182 kubelet_volumes.go:128] Orphaned pod "12c13b83-762a-11e7-af75-286f7f318d06" found, but volume paths are still present on disk. : There were a total of 10 errors similar to this. Turn up verbosity to see them
</code></pre>
| <p>I see you have upgraded to <code>DockerVersion:17.06.0-ce</code>. I don't think this version is tested with kubernetes based on this page <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md#external-dependency-version-information" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md#external-dependency-version-information</a></p>
<p>Not sure this is causing the issue.</p>
<p>These are the processes that need to be running on the master node. See if you can start etcd and the API server first.</p>
<pre><code> etcd
kubelet
kube-controller-manager
kube-scheduler
kube-apiserver
kube-proxy
</code></pre>
|
<p>I am facing a problem where I cannot send emails from a K8s pod using <code>smtp.gmail.com</code> and port <code>587</code>. I tried to use <code>dnsPolicy: ClusterFirstWithHostNet</code> but nothing changed. With <code>dnsPolicy: Default</code> everything seems OK, but I can't use this approach since pods should be able to resolve other pods in the cluster. Btw, a <code>ConfigMap</code> with Google's DNS didn't help either:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    [“8.8.8.8”, “8.8.4.4”]
</code></pre>
<p>Are there any ideas?<br/>
Thanks in advance. <br/>
PS, my Kubernetes version is v1.7.2</p>
| <p>Maybe it is just a syntax error in your ConfigMap caused by the quote characters (" vs “).
If you run
<code>kubectl -n kube-system logs kube-dns-xxxx -c dnsmasq</code>
you will see a syntax error instead of
<code>upstreamNameservers to [8.8.8.8, 8.8.4.4]</code></p>
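<p>For comparison, the same ConfigMap with plain ASCII quotes (only the quote characters differ from yours):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  upstreamNameservers: |
    ["8.8.8.8", "8.8.4.4"]
</code></pre>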
|
<p>I have recently followed the tutorial on how to use Kubernetes with Windows pods ( <a href="https://learn.microsoft.com/en-us/azure/container-service/kubernetes/container-service-kubernetes-windows-walkthrough" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/container-service/kubernetes/container-service-kubernetes-windows-walkthrough</a> ). I decided to extend the example to two services, one front calling the one in the back. Simplified:</p>
<p><a href="https://gist.github.com/sebug/f478f1cfd0a793e8d556c6001bbbe142" rel="nofollow noreferrer">https://gist.github.com/sebug/f478f1cfd0a793e8d556c6001bbbe142</a></p>
<p>But now when I connect to one of the front nodes:</p>
<pre><code>kubectl exec -it samplefront-2836659004-4m824 -- powershell
</code></pre>
<p>I can't ping the other service:</p>
<pre><code>PS C:\> ping sample-back
Ping request could not find host sample-back. Please check the name and try again.
</code></pre>
<p>I heard that it may be because of the two network interfaces and the wrong DNS server being chosen, but I have not found a way to specify anything in the deployment.</p>
<pre><code>Windows IP Configuration
Ethernet adapter vEthernet (Container NIC 7baf5cc0):
Connection-specific DNS Suffix . :
Link-local IPv6 Address . . . . . : fe80::f182:e2e7:7bce:ed60%33
IPv4 Address. . . . . . . . . . . : 10.244.0.211
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 10.244.0.1
Ethernet adapter vEthernet (Container NIC ae765bad):
Connection-specific DNS Suffix . : 10jheu23yh0ujpey5vzw0q45qg.ax.internal.cloudapp.net
Link-local IPv6 Address . . . . . : fe80::c4dc:b785:9cd:2a7b%37
IPv4 Address. . . . . . . . . . . : 172.31.245.122
Subnet Mask . . . . . . . . . . . : 255.255.240.0
Default Gateway . . . . . . . . . : 172.31.240.1
</code></pre>
| <blockquote>
<p>Can't resolve another service's hostname inside my Kubernetes.</p>
</blockquote>
<p>This is by-design behavior, because the cluster IP is a virtual IP that does not actually exist on any interface.</p>
<p>In Kubernetes, all the services in a cluster are handled by <strong>kube-proxy</strong>. kube-proxy runs on every node in the cluster, and what it does is write <strong>iptables</strong> rules for each service (on Linux nodes, and the same applies to Windows). These iptables rules manage the traffic <strong>towards</strong> the service IPs. They don’t have any rules for ICMP, because it’s not needed.</p>
<p>But we can ping a pod IP or a pod's DNS name.</p>
<p>For example, we can use this command to list pods IP addresses:</p>
<pre><code>root@k8s-master-9F42C511-0:~# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
azure-vote-back-3048739398-8zx8b 1/1 Running 0 18m 10.244.1.2 k8s-agent-9f42c511-0
azure-vote-front-837696400-tglpn 1/1 Running 0 18m 10.244.1.3 k8s-agent-9f42c511-0
</code></pre>
<p>Then we use one pod to ping those IP addresses:</p>
<pre><code>root@k8s-master-9F42C511-0:~# kubectl exec -it azure-vote-front-837696400-tglpn -- /bin/bash
root@azure-vote-front-837696400-tglpn:/app# ping 10.244.1.3
PING 10.244.1.3 (10.244.1.3): 56 data bytes
64 bytes from 10.244.1.3: icmp_seq=0 ttl=64 time=0.063 ms
64 bytes from 10.244.1.3: icmp_seq=1 ttl=64 time=0.052 ms
^C--- 10.244.1.3 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.052/0.057/0.063/0.000 ms
root@azure-vote-front-837696400-tglpn:/app# ping 10.244.1.4
PING 10.244.1.4 (10.244.1.4): 56 data bytes
64 bytes from 10.244.1.4: icmp_seq=0 ttl=64 time=0.102 ms
64 bytes from 10.244.1.4: icmp_seq=1 ttl=64 time=0.098 ms
^C--- 10.244.1.4 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.098/0.100/0.102/0.000 ms
</code></pre>
<p>Also, we can ping a pod's A record. In Kubernetes, a pod's A record has the form <code>pod-ip-address.my-namespace.pod.cluster.local</code>.</p>
<p>For example, a pod with IP <code>1.2.3.4</code> in the namespace <code>default</code> with a DNS name of <code>cluster.local</code> would have an entry: <code>1-2-3-4.default.pod.cluster.local</code></p>
<p>In my lab, my pod's A record like this:</p>
<pre><code>root@k8s-master-9F42C511-0:~# kubectl exec -it azure-vote-front-837696400-tglpn -- /bin/bash
root@azure-vote-front-837696400-tglpn:/app# ping 10-244-1-2.default.pod.cluster.local
PING 10-244-1-2.default.pod.cluster.local (10.244.1.2): 56 data bytes
64 bytes from 10.244.1.2: icmp_seq=0 ttl=64 time=0.103 ms
64 bytes from 10.244.1.2: icmp_seq=1 ttl=64 time=0.087 ms
64 bytes from 10.244.1.2: icmp_seq=2 ttl=64 time=0.096 ms
^C--- 10-244-1-2.default.pod.cluster.local ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.087/0.095/0.103/0.000 ms
</code></pre>
<p>So, we <strong>can't</strong> ping the cluster IP address, but we can test the service through its URL (e.g. with curl). We can ping a pod's IP address and its A record.</p>
<hr>
<p><strong>Update</strong>:<br>
Sorry for my mistake: the k8s A record rules work for Linux agents, but do not work for Windows agents.</p>
<p><a href="https://i.stack.imgur.com/1spI8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1spI8.png" alt="enter image description here"></a>
More information about windows server containers, please refer to this <a href="https://kubernetes.io/docs/getting-started-guides/windows/" rel="nofollow noreferrer">article</a>.</p>
|
<p>While trying to run the next Docker command:</p>
<pre><code>docker run \
--net=host \
--pid=host \
--privileged=true \
-d \
gcr.io/google_containers/hyperkube-amd64:v1.7.0 \
/nsenter \
--target=1 \
--mount \
--wd=. \
-- ./hyperkube kubelet \
--hostname-override="127.0.0.1" \
--address="0.0.0.0" \
--api-servers=http://localhost:8080 \
--config=etc/kubernetes/manifests \
--v=2
</code></pre>
<p>I am getting an error</p>
<blockquote>
<p>nsenter: failed to execute ./hyperkube: No such file or directory</p>
</blockquote>
<p>I have been trying a lot of combination, but nothing seems to work, have someone else tried to do this before?</p>
<p>My docker version is</p>
<pre><code>Client:
Version: 1.12.6
API version: 1.24
Go version: go1.6.4
Git commit: 78d1802
Built: Wed Jan 11 00:23:16 2017
OS/Arch: darwin/amd64
Server:
Version: 1.12.6
API version: 1.24
Go version: go1.6.4
Git commit: 78d1802
Built: Wed Jan 11 00:23:16 2017
OS/Arch: linux/amd64
</code></pre>
<p>Thanks</p>
| <p>I found the solution!</p>
<p>Hyperkube is not designed to run on macOS, so I had to use a different kind of local Kubernetes setup for development, and that works perfectly:</p>
<p>I am currently using <a href="https://github.com/Mirantis/kubeadm-dind-cluster" rel="nofollow noreferrer">kubeadm-dind-cluster</a></p>
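<p>For reference, getting it running follows roughly this pattern. This is only a sketch based on the project's README: the scripts follow the <code>fixed/dind-cluster-vX.Y.sh</code> naming convention, so check the repository for the exact script matching your Kubernetes version and for the current instructions:</p>

<pre><code># clone the project and bring a local multi-node cluster up inside Docker
git clone https://github.com/Mirantis/kubeadm-dind-cluster.git
cd kubeadm-dind-cluster
./fixed/dind-cluster-v1.7.sh up     # "down" tears the cluster back down

# then use kubectl as usual (the README explains how kubectl access is set up)
kubectl get nodes
</code></pre>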
|
<p>I have a jenkins on my <a href="http://localhost:8080/" rel="nofollow noreferrer">http://localhost:8080/</a> and I created a project which will run a kubectl command to connect to a kubernetes cluster using (minikube)</p>
<p>I'm trying to run a windows command
C:\Program Files (x86)\Jenkins\workspace\test2>kubectl apply -f .\my-deployment.yaml </p>
<p>Here's the minikube cluster info
Kubernetes master is running at <a href="https://192.168.99.100:8443" rel="nofollow noreferrer">https://192.168.99.100:8443</a></p>
<p>on Jenkins my build environment is like
<a href="https://i.stack.imgur.com/m1Rhv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m1Rhv.png" alt="here's my kubernetes kubectl config in Jenkins"></a></p>
<p>Is there a default credential when connecting to minikube? I used default-admin with no password or admin/admin </p>
<p>I'm getting this error during Jenkins build</p>
<pre><code>C:\Program Files (x86)\Jenkins\workspace\test2>kubectl apply -f .\my-deployment.yaml
error: You must be logged in to the server (the server has asked for the client to provide credentials)
</code></pre>
<p>Thanks for your answers.</p>
| <p>Minikube makes use of SSL client certificates to authenticate against the API server, so you need to use those certificates to identify yourself rather than a username/password. You can find the relevant paths in your <code>~/.kube/config</code> file.</p>
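<p>For example, you can inspect what kubectl itself uses to authenticate (a sketch; the certificate paths will point somewhere under your <code>~/.minikube</code> directory and may differ between minikube versions):</p>

<pre><code># show the client certificate / key entries minikube wrote for its context
kubectl config view
# or look at the raw file
cat ~/.kube/config
</code></pre>

<p>In the Jenkins kubectl configuration, supply those certificate files (or simply the whole kubeconfig file) instead of a username and password.</p>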
|
<p>I switched the project config for <code>gcloud</code> using <code>gcloud config set project abcxyz</code>, however <code>kubectl get pods</code> is returning the pods in the previous gcloud / kubernetes project.</p>
<p>How do I update the project config to match gcloud's config?</p>
| <p>After you've changed project, run:</p>
<pre><code>gcloud container clusters get-credentials <cluster_name>
</code></pre>
<p>gcloud will then set kubectl to be looking at your new project.</p>
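<p>If the cluster lives in a different project or zone than your current defaults, you can pass them explicitly and then verify which context kubectl now uses (cluster name, zone and project are placeholders):</p>

<pre><code>gcloud container clusters get-credentials <cluster_name> --zone <zone> --project <project_id>

# confirm kubectl now points at the new cluster
kubectl config current-context
</code></pre>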
|
<p>It seems like get.k8s.io is the recommended way to deploy a Kubernetes cluster, but Digital Ocean isn't supported by this script.</p>
<p>Is there an alternate way to easily set up a cluster on Digital Ocean that I've missed?</p>
<p>Thanks</p>
| <p>You can use <code>kubicorn</code> to create a fairly dope kubernetes cluster in Digital Ocean pretty easily. Here are the steps needed to do so:</p>
<pre><code>// Install kubicorn
go get github.com/kris-nova/kubicorn
// Configure your auth
export DIGITALOCEAN_ACCESS_TOKEN=*****************************************
// Create your kubernetes profile from the default profile
kubicorn create mycluster --profile do
// Tweak your cluster as you like
kubicorn edit mycluster
// Apply your profile
kubicorn apply mycluster -v 4
// Use kubectl to access your cluster
kubectl get no
</code></pre>
<p>Note that <code>kubicorn</code> is vendored to be a library as well as a command line tool, so you should probably be able to also include this logic in a program if you'd like.</p>
<p>Source: <a href="https://www.nivenly.com/kubernetes-on-digital-ocean-with-encrypted-vpn-service-mesh/" rel="nofollow noreferrer">https://www.nivenly.com/kubernetes-on-digital-ocean-with-encrypted-vpn-service-mesh/</a></p>
|
<p>I have a rails project that using postgres database. I want to build a database server using Kubernetes and rails server will connect to this database.</p>
<p>For example here is my defined <code>postgres.yml</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: postgres
spec:
selector:
app: postgres
ports:
- name: "5432"
port: 5432
targetPort: 5432
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: postgres
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
spec:
containers:
- env:
- name: POSTGRES_DB
value: hades_dev
- name: POSTGRES_PASSWORD
value: "1234"
name: postgres
image: postgres:latest
ports:
- containerPort: 5432
resources: {}
stdin: true
tty: true
volumeMounts:
- mountPath: /var/lib/postgresql/data/
name: database-hades-volume
restartPolicy: Always
volumes:
- name: database-hades-volume
persistentVolumeClaim:
claimName: database-hades-volume
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: database-hades-volume
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
</code></pre>
<p>I run this by following commands: <code>kubectl run -f postgres.yml</code>.</p>
<p>But when I try to run rails server. I always meet following exception:</p>
<pre><code>PG::Error
invalid encoding name: utf8
</code></pre>
<p>I try to forwarding port, and rails server successfully connects to database server:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
postgres-3681891707-8ch4l 1/1 Running 0 1m
</code></pre>
<p>Then run following command:</p>
<pre><code>kubectl port-forward postgres-3681891707-8ch4l 5432:5432
</code></pre>
<p>I think this solution not good. How can I define in my <code>postgres.yml</code> so I don't need to port-forwarding manually as above.</p>
<p>Thanks</p>
| <p>You can try exposing your service as a NodePort service and then accessing it on that port from outside the cluster.</p>
<p>Check here <a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport</a></p>
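<p>Applied to the Service from the question, a NodePort variant could look roughly like this (the <code>nodePort</code> value is an arbitrary example; it must fall in the cluster's NodePort range, 30000-32767 by default, or you can omit it and let Kubernetes pick one):</p>

<pre><code>apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  type: NodePort
  selector:
    app: postgres
  ports:
  - name: "5432"
    port: 5432
    targetPort: 5432
    nodePort: 31432
</code></pre>

<p>Rails can then connect to <code><any-node-ip>:31432</code> instead of relying on a manual port-forward.</p>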
|
<p>Running <code>kubectl get pods</code> with sudo:</p>
<pre><code>sudo kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>
<p>Running as a normal user:</p>
<pre><code>kubectl get pods
No resources found.
</code></pre>
| <p>By default, kubectl looks in ~/.kube/config (or the file pointed to by $KUBECONFIG) to determine what server to connect to. Your home directory and environment are different when running commands as root, so no configuration is found there. When no connection info is found, kubectl defaults to localhost:8080.</p>
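<p>If you really need to run kubectl as root, point it at your user's kubeconfig explicitly (the path is a placeholder for your actual home directory):</p>

<pre><code>sudo kubectl --kubeconfig /home/<your-user>/.kube/config get pods
# or
sudo KUBECONFIG=/home/<your-user>/.kube/config kubectl get pods
</code></pre>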
|
<p>I've created a resource with:
<code>kubectl create -f example.yaml</code></p>
<p>How do I edit this resource with kubectl? Supposedly <code>kubectl edit</code>, but I'm not sure of the resource name, and <code>kubectl edit example</code> returns an error of:</p>
<pre><code>the server doesn't have a resource type "example"
</code></pre>
| <p>You can do a <code>kubectl edit -f example.yaml</code> to edit the resource directly. Nevertheless, I would recommend editing the file locally and running <code>kubectl apply -f example.yaml</code>, so the local file and the state in Kubernetes stay in sync.</p>
<p>Also: your command fails because you have to specify a resource type. Example types are <code>pod</code>, <code>service</code>, <code>deployment</code> and so on. You can see all possible types with a plain <code>kubectl get</code>. The type should match the <code>kind</code> value in the YAML file and the name should match <code>metadata.name</code>.</p>
<p>For example the following file could be edited with <code>kubectl edit deployment nginx-deployment</code></p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 2 # tells deployment to run 2 pods matching the template
template: # create pods using pod definition in this template
metadata:
# unlike pod-nginx.yaml, the name is not included in the meta data as a unique name is
# generated from the deployment name
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
</code></pre>
|
<p>I'm trying to expose my api so I can send request to it. However when I used the command <code>minikube service api --url</code> I get nothing. All my pods are running fine according to <code>kubectl get pods</code> so I'm abit stuck about what this could be.</p>
<pre><code>api-1007925651-0rt1n 1/1 Running 0 26m
auth-1671920045-0f85w 1/1 Running 0 26m
blankit-app 1/1 Running 5 5d
logging-2525807854-2gfwz 1/1 Running 0 26m
mongo-1361605738-0fdq4 1/1 Running 0 26m
jwl:.build jakewlace$ kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
api 10.0.0.194 <none> 3001/TCP 23m
auth 10.0.0.36 <none> 3100/TCP 23m
kubernetes 10.0.0.1 <none> 443/TCP 5d
logging 10.0.0.118 <none> 3200/TCP 23m
mongo 10.0.0.132 <none> 27017/TCP 23m
jwl:.build jakewlace$
jwl:.build jakewlace$ minikube service api --url
jwl:.build jakewlace$
</code></pre>
<p>Any help would be massively appreciated, thank you.</p>
<p>I realised that the question here could be perceived as being minimal, but that is because I'm not sure what more information I could show from the tutorials I've been following it should just work. If you need more information please do let me know I will let you know. </p>
<p><strong>EDIT:</strong></p>
<p>api-service.yml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
io.kompose.service: api
name: api
spec:
ports:
- name: "3001"
port: 3001
targetPort: 3001
selector:
io.kompose.service: api
status:
loadBalancer: {}
</code></pre>
<p>api-deployment.yml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
labels:
io.kompose.service: api
name: api
spec:
replicas: 1
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: api
spec:
containers:
- image: blankit/web:0.0.1
name: api
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3001
resources: {}
restartPolicy: Always
status: {}
</code></pre>
| <p>Your configuration is fine, but only missing one thing.</p>
<p>There are many types of <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types" rel="noreferrer">Services</a> in Kubernetes, but in this case you should know about two of them:</p>
<blockquote>
<p><em><strong>ClusterIP Services:</strong></em><br />
Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default.</p>
<p><em><strong>NodePort:</strong></em><br />
Exposes the service on each Node’s IP at a static port (the NodePort). A ClusterIP service, to which the NodePort service will route, is automatically created. You’ll be able to contact the NodePort service, from outside the cluster, by requesting <code><NodeIP>:<NodePort></code>.</p>
</blockquote>
<p><strong>Note:</strong><br />
If you have a multi-node cluster and you've exposed a <em>NodePort Service</em>, you can access it from any other node on the same port, not necessarily the same node the pod is deployed onto.</p>
<p>So, getting back to your service, you should specify the service type in your spec:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
...
spec:
type: NodePort
selector:
...
ports:
- protocol: TCP
port: 3001
</code></pre>
<p>Now if you <code>minikube service api --url</code>, it should return a URL like <code>http://<NodeIP>:<NodePort></code>.</p>
<p><strong>Note:</strong> The default Kubernetes configuration will choose a random port from 30000-32767, but you can override that if needed.</p>
<hr />
<p>Useful references:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer">Kubernetes / Publishing services - service types</a></li>
<li><a href="https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/" rel="noreferrer">Kubernetes / Connect a Front End to a Back End Using a Service</a></li>
</ul>
|
<p>I need to make a decision for container orchestration , and needed help in finding out limitation in real world scenarios that can occur using docker swarm over kubernetes, if anyone ever faced any such limitation please suggest. </p>
<p><em>The containers cluster may reach a value of approx 50-100 containers.</em> </p>
| <p>Docker Swarm is young and a lot of features have been introduced relatively quickly. This, however, leads to more issues and open "serious" bugs. For a production system that should be up 100% of the time, that might be a problem. I personally experienced <a href="https://github.com/moby/moby/issues/31698" rel="nofollow noreferrer">a bug</a> that made it impossible to start new containers because they were assigned an IP that was already taken. This forced me to shut down my swarm (it's a dev system, so I didn't mind too much).</p>
<p>I suggest having a look at <a href="https://github.com/moby/moby/issues?q=is%3Aopen+label%3Aarea%2Fswarm+sort%3Acomments-desc" rel="nofollow noreferrer">the most commented swarm bugs/issues</a> in github.</p>
|
<p>I have recently followed the tutorial on how to use Kubernetes with Windows pods ( <a href="https://learn.microsoft.com/en-us/azure/container-service/kubernetes/container-service-kubernetes-windows-walkthrough" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/container-service/kubernetes/container-service-kubernetes-windows-walkthrough</a> ). I decided to extend the example to two services, one front calling the one in the back. Simplified:</p>
<p><a href="https://gist.github.com/sebug/f478f1cfd0a793e8d556c6001bbbe142" rel="nofollow noreferrer">https://gist.github.com/sebug/f478f1cfd0a793e8d556c6001bbbe142</a></p>
<p>But now when I connect to one of the front nodes:</p>
<pre><code>kubectl exec -it samplefront-2836659004-4m824 -- powershell
</code></pre>
<p>I can't ping the other service:</p>
<pre><code>PS C:\> ping sample-back
Ping request could not find host sample-back. Please check the name and try again.
</code></pre>
<p>I heard that it may be because of the two network interfaces and the wrong DNS server being chosen, but I have not found a way to specify anything in the deployment.</p>
<pre><code>Windows IP Configuration
Ethernet adapter vEthernet (Container NIC 7baf5cc0):
Connection-specific DNS Suffix . :
Link-local IPv6 Address . . . . . : fe80::f182:e2e7:7bce:ed60%33
IPv4 Address. . . . . . . . . . . : 10.244.0.211
Subnet Mask . . . . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . . . . : 10.244.0.1
Ethernet adapter vEthernet (Container NIC ae765bad):
Connection-specific DNS Suffix . : 10jheu23yh0ujpey5vzw0q45qg.ax.internal.cloudapp.net
Link-local IPv6 Address . . . . . : fe80::c4dc:b785:9cd:2a7b%37
IPv4 Address. . . . . . . . . . . : 172.31.245.122
Subnet Mask . . . . . . . . . . . : 255.255.240.0
Default Gateway . . . . . . . . . : 172.31.240.1
</code></pre>
| <p>So after trying out different scenarios I figured I'd delete the setup and try again, specifying a specific version of microsoft/iis - and it worked:</p>
<p><a href="https://gist.github.com/sebug/0f7776668fff4e0e6b3f3d313846afa6" rel="nofollow noreferrer">https://gist.github.com/sebug/0f7776668fff4e0e6b3f3d313846afa6</a></p>
<pre><code>kripke:Documents/Projets/ScaledSample% kubectl exec -it samplefront-1226573881-21bbh -- ping sample-back
Pinging sample-back [10.0.216.120] with 32 bytes of data:
Reply from 10.0.216.120: bytes=32 time<1ms TTL=128
Reply from 10.0.216.120: bytes=32 time<1ms TTL=128
Reply from 10.0.216.120: bytes=32 time<1ms TTL=128
Reply from 10.0.216.120: bytes=32 time<1ms TTL=128
Ping statistics for 10.0.216.120:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
</code></pre>
<p>My hypothesis is that either I landed on a better-behaved instance after recreating the resource group and cluster, or that specifying the exact version of the container image did the trick.</p>
|
<p>I'm using Helm to upgrade my k8s environment and I'm wondering,
if I can change the release name of a deployed release using the upgrade command (or any other way)?</p>
<p><strong>example:</strong></p>
<pre><code>helm upgrade --set image.tag='12.55.6-2' zk-chart /path/to/helm/files
</code></pre>
<p>So that <strong>zk-chart</strong> will change to <strong>new-zk-chart</strong>, without downtime so I don't want to delete the release and create it again.</p>
| <p>You cannot do this. </p>
<p>There is an open <a href="https://github.com/kubernetes/helm/issues/1809" rel="noreferrer">request</a> for this feature.</p>
|
<p>I am having difficulties accessing a NodePort service on my Kubernetes cluster.</p>
<p><strong>Goal</strong></p>
<p>set up ALB Ingress controller so that i can use websockets and http/2</p>
<p>setup NodePort service as required by that controller</p>
<p><strong>Steps taken</strong></p>
<p>Previously a Kops (Version 1.6.2) cluster was created on AWS eu-west-1. The kops addons for nginx ingress was added as well as Kube-lego. ELB ingress working fine.</p>
<p>Setup the ALB Ingress Controller with custom AWS keys using IAM profile specified by that project.</p>
<p>Changed service type from LoadBalancer to NodePort using kubectl replace --force</p>
<pre><code>> kubectl describe svc my-nodeport-service
Name: my-node-port-service
Namespace: default
Labels: <none>
Selector: service=my-selector
Type: NodePort
IP: 100.71.211.249
Port: <unset> 80/TCP
NodePort: <unset> 30176/TCP
Endpoints: 100.96.2.11:3000
Session Affinity: None
Events: <none>
> kubectl describe pods my-nodeport-pod
Name: my-nodeport-pod
Node: <ip>.eu-west-1.compute.internal/<ip>
Labels: service=my-selector
Status: Running
IP: 100.96.2.11
Containers:
update-center:
Port: 3000/TCP
Ready: True
Restart Count: 0
(ssh into node)
$ sudo netstat -nap | grep 30176
tcp6 0 0 :::30176 :::* LISTEN 2093/kube-proxy
</code></pre>
<p><strong>Results</strong></p>
<p>Curl from ALB hangs</p>
<p>Curl from <code><public ip address of all nodes>:<node port for service></code> hangs</p>
<p><strong>Expected</strong></p>
<p>Curl from both ALB and directly to the node:node-port should return 200 "Ok" (the service's http response to the root)</p>
<p>Update:
Issues created on github referencing above with some further details in some cases:</p>
<ul>
<li><a href="https://github.com/kubernetes/kubernetes/issues/50261" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/50261</a></li>
<li><a href="https://github.com/coreos/alb-ingress-controller/issues/169" rel="noreferrer">https://github.com/coreos/alb-ingress-controller/issues/169</a></li>
<li><a href="https://github.com/kubernetes/kops/issues/3146" rel="noreferrer">https://github.com/kubernetes/kops/issues/3146</a></li>
</ul>
| <p>By default Kops does not configure the EC2 instances to allow NodePort traffic from outside.</p>
<p>In order for traffic outside of the cluster to reach the NodePort you must edit the configuration for your EC2 instances that are your Kubernetes nodes in the EC2 Console on AWS. </p>
<p>Once in the EC2 console click "Security groups." Kops should have annotated the original Security groups that it made for your cluster as <code>nodes.<your cluster name></code> and <code>master.<your cluster name></code></p>
<p>We need to modify these Security Groups to forward traffic from the default port range for NodePorts to the instances.</p>
<p>Click on the security group, click on rules and add the following rule. </p>
<p><strong>Port range to open on the nodes and master</strong>: 30000-32767</p>
<p><a href="https://i.stack.imgur.com/LNUpr.png" rel="noreferrer"><img src="https://i.stack.imgur.com/LNUpr.png" alt="security group rule"></a></p>
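<p>If you prefer the AWS CLI over the console, an equivalent rule can be added roughly like this (the security group ID is a placeholder for the <code>nodes.<your cluster name></code> group):</p>

<pre><code>aws ec2 authorize-security-group-ingress \
    --group-id <nodes-security-group-id> \
    --protocol tcp \
    --port 30000-32767 \
    --cidr 0.0.0.0/0
</code></pre>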
<p>This will allow anyone on the internet to access a NodePort on your cluster, so make sure you want these exposed.</p>
<p>Alternatively, instead of allowing traffic from any origin, you can allow it only from the security group that the alb-ingress-controller creates for the ALB. However, since that group can be re-created, you will likely have to update the rule whenever the Kubernetes service changes. I suggest specifying the NodePort explicitly so it is a predetermined, known NodePort rather than a randomly assigned one.</p>
|
<p>I'm creating a ram based emptyDir volume for a pod. </p>
<pre><code> volumes:
- name: ram-disk
emptyDir:
medium: "Memory"
</code></pre>
<p>Because this is a ram disk, I need to set the maximum size which it will take from the ram. Is there a parameter to set maximum size and minimum size?</p>
| <p>For kubernetes-1.7.x, it's <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#emptydirvolumesource-v1-core" rel="nofollow noreferrer">possible to set the <code>sizeLimit</code> for an <code>EmptyDir</code></a>.</p>
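<p>Applied to the volume from the question, that would look roughly like this (the value is an example; pick whatever cap makes sense for your pod):</p>

<pre><code>  volumes:
  - name: ram-disk
    emptyDir:
      medium: "Memory"
      sizeLimit: "128Mi"
</code></pre>

<p>As far as I know there is no minimum-size setting; the memory-backed volume simply grows on demand up to the limit, and its usage counts against the pod's memory consumption.</p>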
|
<p>After trying kubernetes on a few KVMs with kubeadm, I'd like to setup a proper auto-scalable cluster on AWS with <a href="https://github.com/kubernetes/kops/" rel="nofollow noreferrer">kops</a> and serve a few websites with it.</p>
<p>The mind-blowing magic of <code>kops create cluster ...</code> gives me a bunch of ec2 instances, makes the k8s API available at <code>test-cluster.example.com</code> and even configures my local <code>~/.kube/config</code> so that I can <code>kubectl apply -f any-stuff.yaml</code> right away. This is just great!</p>
<p>I'm at the point when I can send my deployments to the cluster and configure the ingress rules – all this stuff is visible in the dashboard. However, at the moment it's not very clear how I can associate the nodes in my cluster with the domain names I've got.</p>
<p>In my small KVM k8s I simply install <a href="https://github.com/containous/traefik/" rel="nofollow noreferrer">traefik</a> and expose it on ports <code>:80</code> and <code>:443</code>. Then I go to my DNS settings and add a few A records, which point to the public IP(s) of my cluster node(s). In AWS, there is a dynamic set of VMs, some of which may go down when the cluster is not under heavy load. So It feels like I need to use an external load balancer given that my <a href="https://github.com/kubernetes/charts/tree/master/stable/traefik" rel="nofollow noreferrer">traefik helm chart</a> service exposes two random ports instead of fixed :80 and :443, but I'm not sure.</p>
<p>What are the options? What is their cost? What should go to DNS records in case if the domains are not controlled by AWS?</p>
| <p>Configuring your service as a LoadBalancer service is not sufficient for your cluster to set up the actual load balancer; you need an ingress controller running, such as the nginx controller mentioned below.</p>
<p>You should add the kops nginx ingress addon: <a href="https://github.com/kubernetes/kops/tree/master/addons/ingress-nginx" rel="nofollow noreferrer">https://github.com/kubernetes/kops/tree/master/addons/ingress-nginx</a></p>
<p>In this case the nginx ingress controller on AWS will find the ingress and create an AWS ELB for it. I am not sure of the cost, but it's worth it.</p>
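<p>Once the controller is up, a minimal Ingress for one of your sites could look roughly like this (host, service name and port are placeholders for your own):</p>

<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-website
spec:
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-website
          servicePort: 80
</code></pre>

<p>You then point a single DNS record (a CNAME to the ELB's DNS name) at the load balancer that the controller provisions, rather than at individual node IPs.</p>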
<p>You can also consider NodePorts, which you can reach via the nodes' public IPs and the node port (be sure to add a rule to your security group).</p>
<p>You can also consider the new AWS ELB v2, or ALB, which supports HTTP/2 and websockets. You can use the alb-ingress-controller <a href="https://github.com/coreos/alb-ingress-controller" rel="nofollow noreferrer">https://github.com/coreos/alb-ingress-controller</a> for this.</p>
<p>Finally if you want SSL (which you should) consider the kube-lego project which will automate getting SSL certs for you. <a href="https://github.com/jetstack/kube-lego" rel="nofollow noreferrer">https://github.com/jetstack/kube-lego</a></p>
|
<p>I'd like to know what each 'config' key represents within the usual yaml config (not even sure what it's called) as well as what options / directives are available.</p>
<p>For example what does the following mean:</p>
<pre><code> ports:
- containerPort: 80
name: example
</code></pre>
<p>I could google for 'kubernetes ports', but is there a more accurate way to interpret the various configs keys / values?</p>
| <p>What about reading the <a href="https://kubernetes.io/docs/resources-reference/v1.5/#deployment-v1beta1" rel="nofollow noreferrer">Kubernetes Resources Reference Documentation</a>?</p>
<p>All the directives are defined there for the various resources used by Kubernetes, e.g. <a href="https://kubernetes.io/docs/resources-reference/v1.5/#containerport-v1" rel="nofollow noreferrer">ContainerPort v1</a>.</p>
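<p>You can also query the same schema information straight from the command line with <code>kubectl explain</code>, which prints the documentation for any field, for example:</p>

<pre><code>kubectl explain pods.spec.containers.ports
kubectl explain pods.spec.containers.ports.containerPort
kubectl explain deployment.spec.template
</code></pre>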
|
<p>When I want to run the demo, I get the error:</p>
<blockquote>
<p>error: error validating "cronJob_example.yaml": error validating data:
couldn't find type: v2alpha1.CronJob; if you choose to ignore these
errors, turn validation off with --validate=false</p>
</blockquote>
<p>Then I found:</p>
<blockquote>
<p>Prerequisites You need a working Kubernetes cluster at version >= 1.4
(for ScheduledJob), >= 1.5 (for CronJob), with batch/v2alpha1 API
turned on by passing --runtime-config=batch/v2alpha1=true while
bringing up the API server (see Turn on or off an API version for your
cluster for more).</p>
</blockquote>
<p>The above conditions need to do <code>--runtime-config=batch/v2alpha1=true</code>, but I don't know where and how to execute it</p>
| <p>It is documented here: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/</a>. We need to enable this feature in the API server.</p>
<p>On the master node you need to add the flag to the <code>command</code> section of <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code>, then restart the cluster components.</p>
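<p>The relevant part of that manifest would look roughly like this (an excerpt only; keep all the existing flags and just add the <code>--runtime-config</code> line):</p>

<pre><code># /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --runtime-config=batch/v2alpha1=true
    # ... the rest of the existing flags stay unchanged
</code></pre>

<p>Because this is a static pod manifest, the kubelet should pick up the change and restart the API server pod on its own; if it does not, restart the kubelet.</p>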
<p>After the restart, check the API versions; you should see the feature enabled:</p>
<pre><code>kubectl api-versions |grep batch
batch/v1
batch/v2alpha1
</code></pre>
|
<p>I am running k8s on aws, and I updated the deployment of nginx - which normally, it works fine-, but after this time, the nginx deployment won't show up in "kubectl get deployments".</p>
<p>I want to kill all the pods related to nginx, but they keep reproduce themselves. I deleted all deployments "kubectl delete --all deployments", other pods just got terminated, but not nginx. </p>
<p>I have no idea where I can stop the pods recreating.
any idea where to start ?</p>
| <p>Check whether a Deployment, ReplicationController or ReplicaSet owns the nginx pods and remove it; deleting only the pods is not enough, because the owning controller recreates them immediately.</p>
<pre><code> kubectl get deploy,rc,rs
</code></pre>
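<p>Once you have found the controller that owns the nginx pods, delete it by name (the names below are placeholders) and the pods will stop being recreated:</p>

<pre><code>kubectl delete deployment <nginx-deployment-name>
# or, if it is owned by a ReplicaSet / ReplicationController instead
kubectl delete rs <nginx-replicaset-name>
kubectl delete rc <nginx-rc-name>
</code></pre>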
|
<p>We're running multiple Kubernetes clusters, which run Cassandra.
Our usual procedure when doing a rolling restart of the Cassandra pods is to log into each and submit a <code>nodetool drain</code> and then trigger a recreation of that pod. But often when the pods restart we get errors like </p>
<pre><code>ERROR [HintsDispatcher:2] 2017-08-07 11:09:32,489 HintsDispatchExecutor.java:243 - Failed to dispatch hints file 5fdd139d-4465-4825-85ef-f380bddcb67d-1502100535128-1.hints: file is corrupted ({})
</code></pre>
<p>Those corrupt files prevent Cassandra from starting. Is there a way to tell Cassandra to flush all buffers and stop writing, before stopping it, to ensure there are no corrupt files left behind?</p>
| <p>You can try to disable hinted handoff, or try to truncate hints after the drain:</p>
<p><strong>nodetool truncatehints</strong></p>
<p>If you care about consistency, run repair after the process.</p>
<p><strong>Warning:</strong> If you are working with ANY consistency setting or RF=1, this may lead to some data loss.</p>
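<p>Put together, a rough sketch of the pre-restart steps inside each Cassandra pod (whether you disable handoff, truncate hints, or both depends on the trade-offs above):</p>

<pre><code># stop this node from storing any new hints
nodetool disablehandoff

# flush memtables and stop accepting writes, as you already do
nodetool drain

# remove the hints already stored locally so nothing is left to replay at startup
nodetool truncatehints
</code></pre>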
|
<p>I'm not sure why I'm getting an error <code>No nodes are available that match all of the following predicates:: Insufficient cpu (1)</code>. </p>
<p>I don't recall setting any CPU limits. Unless this is some default? </p>
<p>The output of <code>kubectl describe pod wordpress</code>: </p>
<pre><code>Name: wordpress-114465096-bn4rv
Namespace: default
Node: /
Labels: app=wordpress
pod-template-hash=114465096
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"default","name":"wordpress-114465096","uid":"fff460df-7c4c-11e7-b3fd-42010a840026...
kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container wordpress; cpu request for container cloudsql-proxy; cpu request for container nginx
Status: Pending
IP:
Controllers: ReplicaSet/wordpress-114465096
Containers:
wordpress:
Image: wordpress:latest
Port:
Requests:
cpu: 100m
Environment:
WORDPRESS_HOST: localhost
WORDPRESS_DB_USERNAME: <set to the key 'username' in secret 'cloudsql-db-credentials'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-ql6k8 (ro)
/var/www/html from wordpress-persistent-storage (rw)
cloudsql-proxy:
Image: gcr.io/cloudsql-docker/gce-proxy:1.09
Port:
Command:
/cloud_sql_proxy
--dir=/cloudsql
-instances=inspiring-tower-99712:europe-west1:wordpressdb=tcp:3306
-credential_file=/secrets/cloudsql/credentials.json
Requests:
cpu: 100m
Environment: <none>
Mounts:
/cloudsql from cloudsql (rw)
/etc/ssl/certs from ssl-certs (rw)
/secrets/cloudsql from cloudsql-instance-credentials (ro)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-ql6k8 (ro)
nginx:
Image: nginx:latest
Port: 80/TCP
Requests:
cpu: 100m
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-ql6k8 (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
wordpress-persistent-storage:
Type: GCEPersistentDisk (a Persistent Disk resource in Google Compute Engine)
PDName: wordpress-disk
FSType: ext4
Partition: 0
ReadOnly: false
cloudsql-instance-credentials:
Type: Secret (a volume populated by a Secret)
SecretName: cloudsql-instance-credentials
Optional: false
ssl-certs:
Type: HostPath (bare host directory volume)
Path: /etc/ssl/certs
cloudsql:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
default-token-ql6k8:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-ql6k8
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady=:Exists:NoExecute for 300s
node.alpha.kubernetes.io/unreachable=:Exists:NoExecute for 300s
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1h 22s 265 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: Insufficient cpu (1).
</code></pre>
<p>Output of <code>kubectl describe node gke-wordpress-default-pool-91c14317-jdlj</code> (the single node in the cluster):</p>
<pre><code>Name: gke-wordpress-default-pool-91c14317-jdlj
Role:
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/fluentd-ds-ready=true
beta.kubernetes.io/instance-type=n1-standard-1
beta.kubernetes.io/os=linux
cloud.google.com/gke-nodepool=default-pool
failure-domain.beta.kubernetes.io/region=europe-west1
failure-domain.beta.kubernetes.io/zone=europe-west1-b
kubernetes.io/hostname=gke-wordpress-default-pool-91c14317-jdlj
Annotations: node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
Taints: <none>
CreationTimestamp: Fri, 04 Aug 2017 17:44:08 +0100
Phase:
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Fri, 04 Aug 2017 17:44:35 +0100 Fri, 04 Aug 2017 17:44:35 +0100 RouteCreated RouteController created a route
OutOfDisk False Tue, 08 Aug 2017 21:04:47 +0100 Fri, 04 Aug 2017 17:44:08 +0100 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Tue, 08 Aug 2017 21:04:47 +0100 Fri, 04 Aug 2017 17:44:08 +0100 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 08 Aug 2017 21:04:47 +0100 Fri, 04 Aug 2017 17:44:08 +0100 KubeletHasNoDiskPressure kubelet has no disk pressure
Ready True Tue, 08 Aug 2017 21:04:47 +0100 Fri, 04 Aug 2017 17:44:39 +0100 KubeletReady kubelet is posting ready status. AppArmor enabled
KernelDeadlock False Tue, 08 Aug 2017 21:03:56 +0100 Fri, 04 Aug 2017 17:43:19 +0100 KernelHasNoDeadlock kernel has no deadlock
Addresses: 10.132.0.3,35.195.163.26,gke-wordpress-default-pool-91c14317-jdlj
Capacity:
cpu: 1
memory: 3794520Ki
pods: 110
Allocatable:
cpu: 1
memory: 3794520Ki
pods: 110
System Info:
Machine ID: 2643dae58dd36381dc5e8ebe124272bc
System UUID: 2643DAE5-8DD3-6381-DC5E-8EBE124272BC
Boot ID: 37002900-44ab-45b1-bbca-04d2b5866683
Kernel Version: 4.4.52+
OS Image: Container-Optimized OS from Google
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://1.11.2
Kubelet Version: v1.6.7
Kube-Proxy Version: v1.6.7
PodCIDR: 10.24.0.0/24
ExternalID: 8419821342083849481
Non-terminated Pods: (7 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system fluentd-gcp-v2.0-t3rzf 100m (10%) 0 (0%) 200Mi (5%) 300Mi (8%)
kube-system heapster-v1.3.0-3440173064-d66jq 138m (13%) 138m (13%) 301456Ki (7%) 301456Ki (7%)
kube-system kube-dns-1829567597-n6kz6 260m (26%) 0 (0%) 110Mi (2%) 170Mi (4%)
kube-system kube-dns-autoscaler-2501648610-88ch6 20m (2%) 0 (0%) 10Mi (0%) 0 (0%)
kube-system kube-proxy-gke-wordpress-default-pool-91c14317-jdlj 100m (10%) 0 (0%) 0 (0%) 0 (0%)
kube-system kubernetes-dashboard-490794276-93cn2 100m (10%) 100m (10%) 50Mi (1%) 50Mi (1%)
kube-system l7-default-backend-3574702981-509zt 10m (1%) 10m (1%) 20Mi (0%) 20Mi (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
728m (72%) 248m (24%) 700816Ki (18%) 854416Ki (22%)
Events: <none>
</code></pre>
<p>The config file (production.yaml):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: wordpress
labels:
app: wordpress
spec:
replicas: 1
selector:
matchLabels:
app: wordpress
template:
metadata:
labels:
app: wordpress
spec:
terminationGracePeriodSeconds: 30
containers:
- image: wordpress:latest
name: wordpress
imagePullPolicy: "Always"
env:
- name: WORDPRESS_HOST
value: localhost
- name: WORDPRESS_DB_USERNAME
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: username
volumeMounts:
- name: wordpress-persistent-storage
mountPath: /var/www/html
- image: nginx:latest
name: nginx
ports:
- containerPort: 80
name: nginx
- image: gcr.io/cloudsql-docker/gce-proxy:1.09
name: cloudsql-proxy
command: ["/cloud_sql_proxy", "--dir=/cloudsql",
"-instances=inspiring-tower-99712:europe-west1:wordpressdb=tcp:3306",
"-credential_file=/secrets/cloudsql/credentials.json"]
volumeMounts:
- name: cloudsql-instance-credentials
mountPath: /secrets/cloudsql
readOnly: true
- name: ssl-certs
mountPath: /etc/ssl/certs
- name: cloudsql
mountPath: /cloudsql
volumes:
- name: wordpress-persistent-storage
gcePersistentDisk:
pdName: wordpress-disk
fsType: ext4
- name: cloudsql-instance-credentials
secret:
secretName: cloudsql-instance-credentials
- name: ssl-certs
hostPath:
path: /etc/ssl/certs
- name: cloudsql
emptyDir:
</code></pre>
| <p>There are additional containers running besides the ones specified in the config. These are provisioned by default with Kubernetes and run in the <code>kube-system</code> namespace which are not shown in the default namespace.</p>
<p>You can view all the pods by <code>kubectl get pods --all-namespaces</code>.</p>
<p>These additional containers are taking up 72% of the CPU quota of the single node... </p>
<p>Hence the 3 containers at 10% cpu quota would exceed 100% of the CPU quota <strong>(because 72% + (3 * 10) > 100%)</strong>...</p>
<p>As to why 72% of the containers is being allocated to the other containers - the question is asked here: <a href="https://stackoverflow.com/questions/45586069/why-does-a-single-node-cluster-only-have-a-small-percentage-of-the-cpu-quota-ava?noredirect=1&lq=1">Why does a single node cluster only have a small percentage of the cpu quota available?</a></p>
<p>Additional resources that may be usefull: <a href="https://stackoverflow.com/q/33391748/1663462">How to reduce CPU limits of kubernetes system resources?</a></p>
<hr>
<p>However, I was able to get the containers to run with sufficient CPU by adding additional nodes to the cluster. In addition, the high cpu instances seem to be more efficiently allocated on Google Cloud.</p>
|
<p>I met a couple of problems when installing the Kubernetes with Kubeadm. I am working behind the corporate network. I declared the proxy settings in the session environment.</p>
<pre><code>$ export http_proxy=http://proxy-ip:port/
$ export https_proxy=http://proxy-ip:port/
$ export no_proxy=master-ip,node-ip,127.0.0.1
</code></pre>
<p>After installing all the necessary components and dependencies, I began to initialize the cluster. In order to use the current environment variables, I used <code>sudo -E bash</code>. </p>
<pre><code>$ sudo -E bash -c "kubeadm init --apiserver-advertise-address=192.168.1.102 --pod-network-cidr=10.244.0.0/16"
</code></pre>
<p>Then the output message hung at the message below forever. </p>
<pre><code>[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated CA certificate and key.
[certificates] Generated API server certificate and key.
[certificates] API Server serving cert is signed for DNS names [loadbalancer kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.102]
[certificates] Generated API server kubelet client certificate and key.
[certificates] Generated service account token signing key and public key.
[certificates] Generated front-proxy CA certificate and key.
[certificates] Generated front-proxy client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[apiclient] Created API client, waiting for the control plane to become ready
</code></pre>
<p>Then I found that none of the kube components was up while <code>kubelet</code> kept requesting <code>kube-apiserver</code>. <code>sudo docker ps -a</code> returned nothing. </p>
<p>What is the possible root cause of it? </p>
<p>Thanks in advance. </p>
| <p>I would strongly suspect it is trying to pull down the docker images for <code>gcr.io/google_containers/hyperkube:v1.7.3</code> or whatever, which requires teaching the docker daemon about the proxies, <a href="https://docs.docker.com/engine/admin/systemd/#httphttps-proxy" rel="nofollow noreferrer">in this way using systemd</a></p>
<p>That would certainly explain why <code>docker ps -a</code> shows nothing, but I would expect the dockerd logs <code>journalctl -u docker.service</code> (or its equivalent in your system) to complain about its inability to pull from <code>gcr.io</code></p>
<p>Based on what I read from the kubeadm reference guide, they are expecting you to patch the systemd config on the target machine to expose those environment variables, and not just set them within the shell that launched kubeadm (although that certainly could be a feature request)</p>
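<p>A minimal sketch of that systemd drop-in, following the linked Docker documentation (the proxy address and no-proxy list are taken from the values in the question):</p>

<pre><code># /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy-ip:port/"
Environment="HTTPS_PROXY=http://proxy-ip:port/"
Environment="NO_PROXY=localhost,127.0.0.1,master-ip,node-ip"
</code></pre>

<pre><code># pick up the change and restart the daemon
sudo systemctl daemon-reload
sudo systemctl restart docker
</code></pre>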
|
<p>For some reason Kubernetes 1.6.2 does not trigger autoscaling on Google Container Engine.</p>
<p>I have a <code>someservice</code> definition with the following resources and rolling update:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: someservice
labels:
layer: backend
spec:
minReadySeconds: 160
replicas: 1
strategy:
rollingUpdate:
maxSurge: 100%
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
labels:
name: someservice
layer: backend
spec:
containers:
- name: someservice
image: eu.gcr.io/XXXXXX/someservice:v1
imagePullPolicy: Always
resources:
limits:
cpu: 2
memory: 20Gi
requests:
cpu: 400m
memory: 18Gi
<.....>
</code></pre>
<p>After changing image version, the new instance cannot start:</p>
<pre><code>$ kubectl -n dev get pods -l name=someservice
NAME READY STATUS RESTARTS AGE
someservice-2595684989-h8c5d 0/1 Pending 0 42m
someservice-804061866-f2trc 1/1 Running 0 1h
$ kubectl -n dev describe pod someservice-2595684989-h8c5d
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
43m 43m 4 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: Insufficient cpu (4), Insufficient memory (3).
43m 42m 6 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: Insufficient cpu (3), Insufficient memory (3).
41m 41m 2 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: Insufficient cpu (2), Insufficient memory (3).
40m 36s 136 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: Insufficient cpu (1), Insufficient memory (3).
43m 2s 243 cluster-autoscaler Normal NotTriggerScaleUp pod didn't trigger scale-up (it wouldn't fit if a new node is added)
</code></pre>
<p>My node pool is set to autoscale with <code>min: 2</code>, <code>max: 5</code>. And machines (<code>n1-highmem-8</code>) in node pool are large enough (52GB) to accommodate this service. But somehow nothing happens:</p>
<pre><code>$ kubectl get nodes
NAME STATUS AGE VERSION
gke-dev-default-pool-efca0068-4qq1 Ready 2d v1.6.2
gke-dev-default-pool-efca0068-597s Ready 2d v1.6.2
gke-dev-default-pool-efca0068-6srl Ready 2d v1.6.2
gke-dev-default-pool-efca0068-hb1z Ready 2d v1.6.2
$ kubectl describe nodes | grep -A 4 'Allocated resources'
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
7060m (88%) 15510m (193%) 39238591744 (71%) 48582818048 (88%)
--
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
6330m (79%) 22200m (277%) 48930Mi (93%) 66344Mi (126%)
--
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
7360m (92%) 13200m (165%) 49046Mi (93%) 44518Mi (85%)
--
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
7988m (99%) 11538m (144%) 32967256Ki (61%) 21690968Ki (40%)
$ gcloud container node-pools describe default-pool --cluster=dev
autoscaling:
enabled: true
maxNodeCount: 5
minNodeCount: 2
config:
diskSizeGb: 100
imageType: COS
machineType: n1-highmem-8
oauthScopes:
- https://www.googleapis.com/auth/compute
- https://www.googleapis.com/auth/datastore
- https://www.googleapis.com/auth/devstorage.read_only
- https://www.googleapis.com/auth/devstorage.read_write
- https://www.googleapis.com/auth/service.management.readonly
- https://www.googleapis.com/auth/servicecontrol
- https://www.googleapis.com/auth/sqlservice
- https://www.googleapis.com/auth/logging.write
- https://www.googleapis.com/auth/monitoring
serviceAccount: default
initialNodeCount: 2
instanceGroupUrls:
- https://www.googleapis.com/compute/v1/projects/XXXXXX/zones/europe-west1-b/instanceGroupManagers/gke-dev-default-pool-efca0068-grp
management:
autoRepair: true
name: default-pool
selfLink: https://container.googleapis.com/v1/projects/XXXXXX/zones/europe-west1-b/clusters/dev/nodePools/default-pool
status: RUNNING
version: 1.6.2
$ kubectl -n dev get pods -l name=someservice
NAME READY STATUS RESTARTS AGE
someservice-2595684989-h8c5d 0/1 Pending 0 42m
someservice-804061866-f2trc 1/1 Running 0 1h
$ kubectl -n dev describe pod someservice-2595684989-h8c5d
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
43m 43m 4 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: Insufficient cpu (4), Insufficient memory (3).
43m 42m 6 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: Insufficient cpu (3), Insufficient memory (3).
41m 41m 2 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: Insufficient cpu (2), Insufficient memory (3).
40m 36s 136 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: Insufficient cpu (1), Insufficient memory (3).
43m 2s 243 cluster-autoscaler Normal NotTriggerScaleUp pod didn't trigger scale-up (it wouldn't fit if a new node is added)
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:33:11Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:22:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| <p>Make sure the GCE instance group autoscaler is either disabled or has proper minimum/maximum instance count settings.</p>
<p>According to <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#is-cluster-autoscaler-compatible-with-cpu-usage-based-node-autoscalers" rel="nofollow noreferrer">Kubernetes Cluster Autoscaler FAQ</a>:</p>
<blockquote>
<p>CPU-based (or any metric-based) cluster/node group autoscalers, like
GCE Instance Group Autoscaler, are NOT compatible with [Kubernetes
Cluster Austoscaler]. They are also not particularly suited to use
with Kubernetes in general.</p>
</blockquote>
<p>...so it should probably be disabled.</p>
<p>Try:</p>
<pre><code>gcloud compute instance-groups managed describe gke-dev-default-pool-efca0068-grp \
--zone europe-west1-b
</code></pre>
<p>Then check out <code>autoscaler</code> property. It will be absent if instance group autoscaler is disabled.</p>
<p>To disable it, do:</p>
<pre><code>gcloud compute instance-groups managed stop-autoscaling gke-dev-default-pool-efca0068-grp \
--zone europe-west1-b
</code></pre>
|
<p><a href="https://stackoverflow.com/questions/45573825/pod-will-not-start-due-to-no-nodes-are-available-that-match-all-of-the-followin/45585916#45585916">pod will not start due to "No nodes are available that match all of the following predicates:: Insufficient cpu"</a></p>
<p>In the above question, I had an issue starting a deployment with 3 containers. </p>
<p>Upon further investigation, it appears there is only 27% of the CPU quota available - which seems very low. The rest of the CPU seems to be assigned to some default bundled containers.</p>
<p>How is this normally mitigated? Is a larger node required? Do limits need to be set manually? Are all those additional containers necessary?</p>
| <p>1 cpu for a single node cluster is probably too small.</p>
<p>From the containers in the original answer, both the dashboard and fluentd can be removed:</p>
<ul>
<li>the dashboard is just a web UI, which can go away if you use <code>kubectl</code> (which you should, IMO);</li>
<li>fluentd should be reading the log files on disk to ship them somewhere (GCP's log aggregation, I think).</li>
</ul>
<p>The unnecessary containers should be tied to a <code>Deployment</code> or <code>ReplicaSet</code>, which can be listed with <code>kubectl get deployment</code> and <code>kubectl get rs</code>, respectively. You can then <code>kubectl delete</code> them.</p>
<p>Increasing the resources on the node should not change the requirements for the basic pods, meaning they should all be free scheduling.</p>
|
<p>This is my config:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: wordpress
labels:
app: wordpress
spec:
replicas: 1
selector:
matchLabels:
app: wordpress
template:
metadata:
labels:
app: wordpress
spec:
terminationGracePeriodSeconds: 30
containers:
- image: wordpress:latest
name: wordpress
imagePullPolicy: "Always"
env:
- name: WORDPRESS_HOST
value: localhost
- name: WORDPRESS_DB_USERNAME
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: username
volumeMounts:
- name: wordpress-persistent-storage
mountPath: /var/www/html
- image: nginx:latest
name: nginx
ports:
- containerPort: 80
name: nginx
- image: gcr.io/cloudsql-docker/gce-proxy:1.09
name: cloudsql-proxy
command: ["/cloud_sql_proxy", "--dir=/cloudsql",
"-instances=abcxyz:europe-west1:wordpressdb=tcp:3306",
"-credential_file=/secrets/cloudsql/credentials.json"]
volumeMounts:
- name: cloudsql-instance-credentials
mountPath: /secrets/cloudsql
readOnly: true
- name: ssl-certs
mountPath: /etc/ssl/certs
- name: cloudsql
mountPath: /cloudsql
volumes:
- name: wordpress-persistent-storage
gcePersistentDisk:
pdName: wordpress-disk
fsType: ext4
- name: cloudsql-instance-credentials
secret:
secretName: cloudsql-instance-credentials
- name: ssl-certs
hostPath:
path: /etc/ssl/certs
- name: cloudsql
emptyDir:
</code></pre>
<p>I'd like to expose Nginx's port 80 only (to act as a load balancer). However it fails to start, the logs from the container:</p>
<pre><code>2017/08/09 14:39:50 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
2017/08/09 14:39:50 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
2017/08/09 14:39:50 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
2017/08/09 14:39:50 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
2017/08/09 14:39:50 [emerg] 1#1: bind() to 0.0.0.0:80 failed (98: Address already in use)
nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address already in use)
2017/08/09 14:39:50 [emerg] 1#1: still could not bind()
nginx: [emerg] still could not bind()
</code></pre>
<p>I'm guessing this is due to the Wordpress container is already listening on port 80.</p>
<p>I would have assumed they would be independent and not have any port conflicts. How can I resolve this issue?</p>
| <blockquote>
<p>I would have assumed they would be independent and not have any port conflicts. How can I resolve this issue?</p>
</blockquote>
<p><em>Across</em> Pods that's true, but <em>within</em> a Pod, all containers share the same networking namespace -- that's part of what makes them a Pod. To accomplish what you said about "to act as a load balancer", deploy the nginx Pod separately, and point its upstream at a Service you will create for the wordpress Pod. Or, of course, you can also relocate the port the wordpress container is listening on, but the following bears consideration before doing that.</p>
<p>Unless your "load balancer" is genuinely going to take <strong>load</strong> into account, and not just be a round-robin LB, the very act of creating a Service will naturally disperse traffic across all the Pods that match the selector in the Service.</p>
|
<p>I'm trying to expose a GRPC Java service thought an Ingress to outside world from my GKE cluster.</p>
<p>The problem is that GKE's default implementation creates a health check that expect 200 response code on curling "/". It is expected and documented <a href="https://github.com/kubernetes/ingress/blob/master/docs/faq/gce.md#can-i-configure-gce-health-checks-through-the-ingress" rel="nofollow noreferrer">here</a>.</p>
<p>Unfortunately this seems not to work with <a href="https://github.com/grpc/grpc-java" rel="nofollow noreferrer">grpc-java</a> implementation since it's not handling "/" GET requests.</p>
<p>GRPC itself defines a <a href="https://github.com/grpc/grpc/blob/master/doc/health-checking.md" rel="nofollow noreferrer">health checking protocol</a>. But it's not supported either.</p>
<p>I wonder if there is a similar secret annotation like "kubernetes.io/ingress.global-static-ip-name" but for disabling health checks at least(ideally overriding them).</p>
| <p>It seems that at the moment GCP HTTP Load Balancers <a href="https://groups.google.com/forum/#!topic/grpc-io/bfURoNLojHo" rel="nofollow noreferrer">don't support HTTP/2</a>, so I ended up simply exposing my service through a LoadBalancer instead of NodePort + Ingress.</p>
<p>Note: static IP you provide in <code>loadBalancerIP</code> should be <strong>REGIONAL</strong>. For a multi-region static IP my service's external IP was always in pending state.</p>
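<p>For reference, a rough sketch of that kind of Service — the name, selector, ports and IP below are placeholders, not values from my actual setup:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-grpc-service        # placeholder
spec:
  type: LoadBalancer
  loadBalancerIP: 35.187.0.10  # a REGIONAL static IP reserved in the cluster's region (placeholder)
  selector:
    app: my-grpc-app           # placeholder label
  ports:
  - port: 443
    targetPort: 8443           # placeholder gRPC port
</code></pre>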
|
<p>By default, image-gc-high-threshold and image-gc-low-threshold values are 90 and 80% respectively.</p>
<p>We want to change them to 80 and 70. How can we change the Kubernetes image garbage collection threshold values?</p>
| <p>Changing the garbage collection thresholds can be done using switches on the <code>kubelet</code>.</p>
<p>From <a href="https://kubernetes.io/docs/admin/kubelet/" rel="nofollow noreferrer">the docs</a></p>
<pre><code>--image-gc-high-threshold int32 The percent of disk usage after which image garbage collection is always run. (default 85)
--image-gc-low-threshold int32 The percent of disk usage before which image garbage collection is never run. Lowest disk usage to garbage collect to. (default 80)
</code></pre>
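<p>How you pass those flags depends on how the kubelet is started. As a sketch, on a systemd-based kubeadm install one option is a drop-in file — the path, file name and the <code>KUBELET_EXTRA_ARGS</code> variable are assumptions to check against your own unit files:</p>
<pre><code># /etc/systemd/system/kubelet.service.d/20-image-gc.conf   (assumed path)
[Service]
Environment="KUBELET_EXTRA_ARGS=--image-gc-high-threshold=80 --image-gc-low-threshold=70"
</code></pre>
<p>Then reload and restart the kubelet, e.g. <code>sudo systemctl daemon-reload && sudo systemctl restart kubelet</code>.</p>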
|
<p>Once you deploy a pod, Kubernetes monitors the pod health and, if something goes wrong, a new pod is created. How does this work internally? After deployment, where does Kubernetes store the deployment YAML?</p>
| <p>Kubernetes uses the etcd database as its data store. The flow is like this: the kubectl command connects to the API server and sends it the YAML file. The API server parses it and stores the information in etcd. The controller manager and scheduler watch that data (through the API server) and start the pod. The controller maintains the number of replicas defined in the YAML file.</p>
|
<p>The question is same as the title. I am deploying Kubernetes Cluster on CentOS 7 using Kubeadm and planning to replicate the Master with HA solution. </p>
<p>Although there are loads of tools listed in the <a href="https://kubernetes.io/docs/setup/pick-right-solution/#bare-metal" rel="nofollow noreferrer">Kubernetes wiki page</a>, "Building a High-Available" cluster" is a separate topic and there is no automation tools recommended by K8s yet. </p>
<p>The question is whether there is a tool to automated the HA setup steps. What is the most efficient tool to do that ? Will Kubeadm support HA in future and when will it? </p>
| <p>A ton of them, and a search before posting would have easily surfaced that very answer</p>
<p><a href="https://github.com/ramitsurana/awesome-kubernetes#readme" rel="nofollow noreferrer">https://github.com/ramitsurana/awesome-kubernetes#readme</a></p>
|
<p>I have configured OIDC with k8s installed using kubeadm.
After the configuration, when I run the command <code>kubectl [email protected] get nodes</code> I get </p>
<blockquote>
<p>error: You must be logged in to the server (the server has asked for the client to provide credentials (get nodes))</p>
</blockquote>
<p>Can someone please help me with this?</p>
| <p>I use <code>kubectl [email protected] get nodes</code> and it works. Earlier I was using the parameter <code>--user</code> instead of <code>--username</code>.</p>
|
<p>I have a container with some SYS_ADMIN capabilities, and I'm having trouble killing the pod, and am wondering what is the best way to deal with these reluctant containers?</p>
<pre><code>- image: headless-chrome
securityContext:
capabilities:
add:
- SYS_ADMIN
</code></pre>
| <p>Sometimes a pod will zombify, even without privileged status. In order to kill a zombified pod, first try the following:</p>
<pre><code>kubectl delete pod <<PODNAME>> --grace-period=0 --force
</code></pre>
<p>...where <code><<PODNAME>></code> is the name of the offending pod. I have to do that from time to time, even with v1.7.X.</p>
<p>If that doesn't work, then try to first <code>kubectl drain</code> the node, find the Docker container (<code>docker ps</code>), delete it (<code>docker rm</code>), restart the Docker service and the Kubelet service, then <code>kubectl uncordon</code> the node. I've only had to do that once, and not since v1.6.X.</p>
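<p>Roughly, that fallback procedure looks like the following — node, pod and container names are placeholders, and the service names assume a systemd-based install:</p>
<pre><code>kubectl drain <NODENAME> --ignore-daemonsets
# on the node itself:
docker ps | grep <PODNAME>          # find the stuck container
docker rm -f <CONTAINERID>
sudo systemctl restart docker kubelet
# back on your workstation:
kubectl uncordon <NODENAME>
</code></pre>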
|
<p>Before kubeadm I use these steps to take flannel ip & mtu value to docker.</p>
<p>Step 1: stop Docker and Flannel<br>
Step 2: start Flannel and check its status;<br>
step 3: update Docker startup script like this</p>
<pre><code>source /run/flannel/subnet.env
--bip=${FLANNEL_SUBNET} --mtu=${FLANNEL_MTU}
</code></pre>
<p>Step 4: start Docker and check its status.</p>
<p>How are these steps done with <code>kubeadm</code>? I see the Docker daemon process start first, then Flannel starts as a container; I am trying to understand the integration process.</p>
<p>Thanks
SR</p>
| <p>Here are the steps I took to set up flannel in Kubernetes v1.7.3.</p>
<p>Install flannel</p>
<pre><code>kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-rbac.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
</code></pre>
<p>You will see the flannel pod created, but it falls into a "CrashLoopBackOff" state and restarts forever. </p>
<p>After flannel is installed by Kubeadm, the subnet info will be recorded in file <code>/run/flannel/subnet.env</code>.</p>
<pre><code>cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
</code></pre>
<p>Setup these environment variables for docker</p>
<pre><code>mkdir -p /usr/lib/systemd/system/docker.service.d
sudo cat << EOF > /usr/lib/systemd/system/docker.service.d/flannel.conf
[Service]
EnvironmentFile=-/run/flannel/docker
EOF
sudo cat << EOF > /run/flannel/docker
DOCKER_OPT_BIP="--bip=10.244.0.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.244.0.1/24 --ip-masq=false --mtu=1450"
EOF
</code></pre>
<p><strong><em>Note: do set ip-masq as false for docker, otherwise kube-dns would not work well.</em></strong> </p>
<p>Reload the service configuration, then the changes will take effect.</p>
<pre><code>sudo systemctl daemon-reload
</code></pre>
<p>Voila, everything works after that.</p>
|
<p>I was trying to deploy Istio in my environment and run across the following error. All the solutions online are regarding clusterrolebinding, I have tried to do that but failed nevertheless. Any inputs to my problem?</p>
<p><strong>kubectl api-versions | grep rbac</strong></p>
<pre><code>rbac.authorization.k8s.io/v1alpha1
rbac.authorization.k8s.io/v1beta1
</code></pre>
<p><strong>sudo kubectl apply -f install/kubernetes/istio-rbac-beta.yaml</strong></p>
<pre><code>rolebinding "istio-pilot-admin-role-binding" configured
rolebinding "istio-ca-role-binding" configured
rolebinding "istio-ingress-admin-role-binding" configured
rolebinding "istio-sidecar-role-binding" configured
Error from server (Forbidden):
error when creating"install/kubernetes/istio-rbac-beta.yaml":
clusterroles.rbac.authorization.k8s.io "istio-pilot" is forbidden:
attempt to grant extra privileges: [{[*] [istio.io] [istioconfigs] []
[]} {[*] [istio.io] [istioconfigs.istio.io] [] []} {[*] [extensions]
[thirdpartyresources] [] []} {[*] [extensions]
[thirdpartyresources.extensions] [] []} {[*] [extensions] [ingresses]
[] []} {[*] [] [configmaps] [] []} {[*] [] [endpoints] [] []} {[*] []
[pods] [] []} {[*] [] [services] [] []}] user=&{kubeconfig
[system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[]
Error from server (Forbidden): error when creating
"install/kubernetes/istio-rbac-beta.yaml":
clusterroles.rbac.authorization.k8s.io "istio-ca" is forbidden:
attempt to grant extra privileges: [{[create] [] [secrets] [] []}
{[get] [] [secrets] [] []} {[watch] [] [secrets] [] []} {[list] []
[secrets] [] []} {[watch] [] [serviceaccounts] [] []} {[list] []
[serviceaccounts] [] []}] user=&{kubeconfig [system:authenticated]
map[]} ownerrules=[] ruleResolutionErrors=[]
Error from server (Forbidden): error when creating
"install/kubernetes/istio-rbac-beta.yaml":
clusterroles.rbac.authorization.k8s.io "istio-sidecar" is forbidden:
attempt to grant extra privileges: [{[get] [istio.io] [istioconfigs] []
[]} {[watch] [istio.io] [istioconfigs] [] []} {[list] [istio.io]
[istioconfigs] [] []} {[get] [extensions] [thirdpartyresources] [] []}
{[watch] [extensions] [thirdpartyresources] [] []} {[list] [extensions]
[thirdpartyresources] [] []} {[update] [extensions]
[thirdpartyresources] [] []} {[get] [extensions] [ingresses] [] []}
{[watch] [extensions] [ingresses] [] []} {[list] [extensions]
[ingresses] [] []} {[update] [extensions] [ingresses] [] []} {[get] []
[configmaps] [] []} {[watch] [] [configmaps] [] []} {[list] []
[configmaps] [] []} {[get] [] [pods] [] []} {[watch] [] [pods] [] []}
{[list] [] [pods] [] []} {[get] [] [endpoints] [] []} {[watch] []
[endpoints] [] []} {[list] [] [endpoints] [] []} {[get] [] [services]
[] []} {[watch] [] [services] [] []} {[list] [] [services] [] []}]
user=&{kubeconfig [system:authenticated] map[]} ownerrules=[]
ruleResolutionErrors=[]
</code></pre>
| <p>The error Kubernetes gives you basically means that it thinks whatever you're trying to do is a privilege escalation (which is correct) and tries to prevent that.</p>
<blockquote>
<p>The RBAC API prevents users from escalating privileges by editing roles or role bindings. Because this is enforced at the API level, it applies even when the RBAC authorizer is not in use.
A user can only create/update a role if they already have all the permissions contained in the role, at the same scope as the role (cluster-wide for a ClusterRole, within the same namespace or cluster-wide for a Role). For example, if “user-1” does not have the ability to list secrets cluster-wide, they cannot create a ClusterRole containing that permission.
(taken from <a href="https://kubernetes.io/docs/admin/authorization/rbac/#privilege-escalation-prevention-and-bootstrapping" rel="nofollow noreferrer">here</a>)</p>
</blockquote>
<p>The reason for that is because the ClusterRole that is applied (using a ClusterRoleBinding) to the user you're using to access the cluster does not actually have all the permissions you're trying to give your application. To resolve that, you need to create a ClusterRoleBinding that gives your User the necessary permissions. In your case it would make sense to bind you to the cluster-admin role which gives you unlimited permissions.</p>
<p>To do that, you can run something like this (the binding name is arbitrary):</p>
<pre><code>kubectl create clusterrolebinding your-user-cluster-admin-binding --clusterrole=cluster-admin --user=your-user
</code></pre>
|
<p>I have a cronjob that is completing and outputting several log files.</p>
<p>I want to persist these files and be able access them after the pod has succeeded.</p>
<p>I've found I can access the stdout with <code>oc logs -f <pod></code>, but I really need to access the log files.</p>
<p>I'm aware Openshift 2 apparently had an environment variable location OPENSHIFT_LOG_DIR that log files were written to, but Openshift 3.5 doesn't appear to have this.</p>
<p>What's my best way of logging and accessing the logs from the CronJob after the pod has succeeded and finished?</p>
| <p>After a <code>Job</code> runs to completion, the <code>Pod</code> terminates, but it is not automatically deleted. Since it has completed, you need to use <code>-a</code> to see it. Once you have the <code>Pod</code> name, <code>kubectl logs</code> works as you would expect.</p>
<pre><code>$ kubectl get pods -a
NAME READY STATUS RESTARTS AGE
curator-1499817660-6rzmf 0/1 Completed 0 28d
$ kubectl logs curator-1499817660-6rzmf
2017-07-12 00:01:10,409 INFO ...
</code></pre>
|
<p>I am following the sample azure file mentioned at <a href="https://github.com/Azure-Samples/azure-voting-app-redis/blob/master/kubernetes-manifests/azure-vote-all-in-one-redis.yaml" rel="nofollow noreferrer">Azure Voting App</a><br>
I have a deployment file like:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: azure-vote-front
spec:
replicas: 1
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
minReadySeconds: 5
template:
metadata:
labels:
app: azure-vote-front
spec:
containers:
- name: azure-vote-front
image: microsoft/azure-vote-front:redis-v1
ports:
- containerPort: 80
resources:
requests:
cpu: 250m
limits:
cpu: 500m
env:
- name: REDIS
value: "azure-vote-back"
</code></pre>
<p>I want to add an <code>imagePullSecret</code> to this file using a CLI and tried the following patch command, which isn't working as expected. Can someone let me know the right way to add an <code>imagePullSecret</code> to the deployment without modifying it manually?</p>
<pre><code>kubectl patch deployment azure-vote-front -p '{"spec":{"template":{"spec":{"imagePullSecret":[{"name":"mykey"}]}}}}'
</code></pre>
<p><strong>Error:</strong></p>
<pre><code>Error from server: json: cannot unmarshal string into Go value of type map[string]interface {}
</code></pre>
<p>Can you tell me what is wrong with the patch command?</p>
| <p>I'm not sure if it was merely an "example typo", or if that was quite literally your <code>patch</code> command, but the field is <code>imagePullSecrets</code>, plural (as the array contents might hint at), not <code>imagePullSecret</code> singular</p>
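<p>With only that field name corrected, the patch from the question would look like this (everything else unchanged):</p>
<pre><code>kubectl patch deployment azure-vote-front -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"mykey"}]}}}}'
</code></pre>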
|
<p>I have a Kubernetes deployment which uses a ConfigMap with some configuration which is frequently updated. Currently I have to update this configuration manually, by running a script on my local machine that updates the ConfigMap via <code>kubectl</code>.</p>
<p>Is there a way to do this in a more automated fashion using the Kubernetes API (from inside or outside Kubernetes)?</p>
| <p>There are a couple of Kubernetes clients in many languages if you take a look <a href="https://kubernetes.io/docs/reference/using-api/client-libraries" rel="nofollow noreferrer">here</a>. Python and Go are officially supported. You can automate the steps by calling the client.</p>
<p>If you know Python, you may refer to the <a href="https://github.com/kubernetes-incubator/client-python/blob/master/kubernetes/docs/CoreV1Api.md#patch_namespaced_config_map" rel="nofollow noreferrer">sample</a> below.</p>
<pre><code>from __future__ import print_function
import time
import kubernetes.client
from kubernetes.client.rest import ApiException
from pprint import pprint

# Configure API key authorization: BearerToken
kubernetes.client.configuration.api_key['authorization'] = 'YOUR_API_KEY'
# Uncomment below to setup prefix (e.g. Bearer) for API key, if needed
# kubernetes.client.configuration.api_key_prefix['authorization'] = 'Bearer'

# create an instance of the API class
api_instance = kubernetes.client.CoreV1Api()

name = 'name_example'  # str | name of the ConfigMap
namespace = 'namespace_example'  # str | object name and auth scope, such as for teams and projects
body = {'data': {'some-key': 'some-value'}}  # object | the patch to apply (example)
pretty = 'true'  # str | If 'true', then the output is pretty printed. (optional)

try:
    api_response = api_instance.patch_namespaced_config_map(name, namespace, body, pretty=pretty)
    pprint(api_response)
except ApiException as e:
    print("Exception when calling CoreV1Api->patch_namespaced_config_map: %s\n" % e)
</code></pre>
<p>With respect to using the API internally and externally, you can take a look at the <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#accessing-the-api-from-a-pod" rel="nofollow noreferrer">wiki</a>. In particular, this <a href="https://stackoverflow.com/questions/30690186/how-do-i-access-the-kubernetes-api-from-within-a-pod-container">thread</a> explains how to access the API from a pod.</p>
<pre><code>KUBE_TOKEN=$(</var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sSk -H "Authorization: Bearer $KUBE_TOKEN" https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_PORT_443_TCP_PORT/api/v1/namespaces/iot2cloud/configmaps
</code></pre>
|
<p>I have a project with a docker-compose setup ready. Now I want to move to Kubernetes. I use the <a href="https://github.com/kubernetes/kompose" rel="nofollow noreferrer">Kompose</a> tool for converting from docker-compose to Kubernetes.</p>
<p>For example, here is my sample <code>docker-compose.yml</code> file</p>
<pre><code>version: '3'
volumes:
database_hades_volume:
external: true
services:
db:
image: postgres:latest
container_name: hades-db
ports:
- "5432:5432"
environment:
POSTGRES_DB: hades_dev
POSTGRES_PASSWORD: 1234
volumes:
- database_hades_volume:/var/lib/postgresql/data/
tty: true
stdin_open: true
redis:
container_name: hades-redis
image: redis:latest
ports:
- "6379:6379"
app:
container_name: hades-app
build:
context: .
dockerfile: Dockerfile
ports:
- "4001:4001"
volumes:
- ".:/webapp"
env_file:
- ./.env.docker_compose-dev
depends_on:
- db
- redis
</code></pre>
<p>I have run successfully by using command: <code>docker-compose up</code>. Now, I use <code>kompose</code> for converting to kubernetes by using command:</p>
<pre><code>kompose convert
</code></pre>
<p>Then I run by using:</p>
<pre><code>kompose up
</code></pre>
<p>Here is command line result information:</p>
<pre><code>INFO We are going to create Kubernetes Deployments, Services and PersistentVolumeClaims for your Dockerized application. If you need different kind of resources, use the 'kompose convert' and 'kubectl create -f' commands instead.
INFO Deploying application in "default" namespace
INFO Successfully created Service: app
INFO Successfully created Service: db
INFO Successfully created Service: redis
INFO Successfully created Deployment: app
INFO Successfully created PersistentVolumeClaim: app-claim0 of size 100Mi. If your cluster has dynamic storage provisioning, you don't have to do anything. Otherwise you have to create PersistentVolume to make PVC work
INFO Successfully created Deployment: db
INFO Successfully created PersistentVolumeClaim: database-hades-volume of size 100Mi. If your cluster has dynamic storage provisioning, you don't have to do anything. Otherwise you have to create PersistentVolume to make PVC work
INFO Successfully created Deployment: redis
Your application has been deployed to Kubernetes. You can run 'kubectl get deployment,svc,pods,pvc' for details.
</code></pre>
<p>But when I try to test by going to <code>localhost:4001</code> or <code>10.0.0.180:4001</code>, I see that it's waiting forever.</p>
<p>I don't know if I have setup something wrong or miss some steps. Please help me.</p>
<p>Thanks</p>
| <p>Your docker-compose file contains a <code>build</code> key, which means you have the source code/Dockerfile present for the <code>app</code> service. As of now,</p>
<pre><code>NAME READY STATUS RESTARTS AGE
po/app-2119952459-b4jtb 0/1 ErrImagePull 0 24s
</code></pre>
<p>The status is <code>ErrImagePull</code> because the cluster is not able to find any image, so along with the <code>build</code> key, provide an <code>image</code> key as well, for example:</p>
<pre><code>app:
container_name: hades-app
build:
context: .
dockerfile: Dockerfile
image: <username>/<imagename>:<tag>
</code></pre>
<p>This works because kompose now has local build and push support, so kompose will build your image, push it to Docker Hub, and then your cluster can pull the image from there while deploying.</p>
<p>The command can look like:</p>
<pre><code>kompose up --build=local
</code></pre>
<p>I hope this makes sense</p>
|
| <p>So here is my problem: I want to set up 3 timeout parameters in a Kubernetes ingress resource. The method described in the Kubernetes docs is to use either ingress resource annotations or a ConfigMap. In the example below I am trying with annotations, but for an unknown reason the change is not taking effect.</p>
<pre><code>nginx.org/proxy-connect-timeout: 10s
nginx.org/proxy-read-timeout: 10s
nginx.org/proxy-send-timeout: 10s
</code></pre>
<p>I am setting those parameters in my ingress resource definition:</p>
<pre><code>kind: Ingress
metadata:
name: my-foobar-ingress
namespace: foobar
annotations:
nginx.org/proxy-send-timeout: "10s"
nginx.org/proxy-connect-timeout: "10s"
nginx.org/proxy-read-timeout: "10s"
spec:
rules:
- host: foobar.foo.bar
http:
paths:
- backend:
serviceName: foobar-svc
servicePort: 8080
path: /
</code></pre>
<p>The other way to set up nginx configurations is through a ConfigMap, but I do not want to do it globally so I need to use annotations to do so.</p>
| <p>Are you sure these are the correct annotations? Have a look at <a href="https://github.com/kubernetes/ingress/blob/master/core/pkg/ingress/annotations/proxy/main.go" rel="noreferrer">ingress/annotations/proxy/main.go</a>, where the following constants are defined:</p>
<pre><code>connect = "ingress.kubernetes.io/proxy-connect-timeout"
send = "ingress.kubernetes.io/proxy-send-timeout"
read = "ingress.kubernetes.io/proxy-read-timeout"
</code></pre>
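<p>If you are indeed running the controller from kubernetes/ingress, the annotations on the Ingress from the question would then become something like the following — note that, if I read that source correctly, the values are plain seconds rather than <code>10s</code>:</p>
<pre><code>metadata:
  annotations:
    ingress.kubernetes.io/proxy-connect-timeout: "10"
    ingress.kubernetes.io/proxy-send-timeout: "10"
    ingress.kubernetes.io/proxy-read-timeout: "10"
</code></pre>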
|
<p>So in order to update the images running on a pod, I have to modify the deployment config (yaml file), and run something like <code>kubectl apply -f deploy.yaml</code>.</p>
<p>This means that if I'm not editing the yaml file manually, I'll have to use some template / search-and-replace functionality, which isn't really ideal.</p>
<p>Are there any better approaches?</p>
<p>It seems there is a <code>kubectl rolling-update</code> command, but I'm not sure if this works for 'deployments'. </p>
<p>For example running the following: <code>kubectl rolling-update wordpress --image=eu.gcr.io/abcxyz/wordpress:deploy-1502443760</code></p>
<p>Produces an error of:</p>
<pre><code>error: couldn't find a replication controller with source id == default/wordpress
</code></pre>
| <p>I am using this for changing images in Deployments:</p>
<pre><code>kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1
</code></pre>
<p>If you view the yaml files as source of truth then use a tag like <code>stable</code> in the yaml and only issue <code>kubectl set image</code> commands when the tag is moved (use the sha256 image id to actually trigger a rollout; the image names are matched like a string so updating from <code>:stable</code> to <code>:stable</code> is a noop even if the tag now points to a different image).</p>
<p>See <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment" rel="nofollow noreferrer">updating a deployment</a> for more details.</p>
<p>The above requires the deployment replica count to be set to more than 1, which is explained here: <a href="https://stackoverflow.com/a/45649024/1663462">https://stackoverflow.com/a/45649024/1663462</a>.</p>
|
<p>I have a Spring Boot application, deployed on a docker container on Kubernetes. The application works well for some time (hours) but at a certain moment it starts restarting like crazy showing a CrashLoopBackOff error state.</p>
<p>This is the info I get from the dead pod:</p>
<pre><code>Port: 8080/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Fri, 11 Aug 2017 10:15:03 +0200
Finished: Fri, 11 Aug 2017 10:16:22 +0200
Ready: False
Restart Count: 7
...
Volume Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-bhk8f (ro)
Environment Variables:
JAVA_OPTS: -Xms512m -Xmx1792m
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
...
QoS Class: BestEffort
Tolerations: <none>
No events.
</code></pre>
<p><strong>Is there any way to get more detailed information about the cause of the crashes?</strong></p>
<p><strong>Is 137 error code an out of memory error?</strong> I have kept increasing the memory of the Java process from -Xmx768m up to 1792m, but errors keep showing up. <strong>Could it be something else?</strong></p>
<p><strong>One weird fact:</strong> I need to find out how come the application runs well, after some hours the pod is killed and then every restart is killed after only some seconds executing.</p>
| <p><code>kubectl logs podName -c containerName</code> will provide you with the container logs, which should give you additional information about the cause of the error. Since the container keeps restarting, adding <code>--previous</code> shows the logs of the crashed instance.</p>
|
<p>How can I get the image ID (the docker sha256 hash) of a image / container within a Kubernetes deployment? </p>
| <p>Something like this will do the trick (you must have <code>jq</code> installed):</p>
<pre><code>$ kubectl get pod --namespace=xx yyyy -o json | jq '.status.containerStatuses[] | { "image": .image, "imageID": .imageID }'
{
"image": "nginx:latest",
"imageID": "docker://sha256:b8efb18f159bd948486f18bd8940b56fd2298b438229f5bd2bcf4cedcf037448"
}
{
"image": "eu.gcr.io/zzzzzzz/php-fpm-5:latest",
"imageID": "docker://sha256:6ba3fe274b6110d7310f164eaaaaaaaaaa707a69df7324a1a0817fe3b475566a"
}
</code></pre>
|
<p><strong>Background</strong></p>
<p>I have installed Prometheus on my Kubernetes cluster (hosted on Google Container Engineer) using the <a href="https://github.com/kubernetes/charts/tree/master/stable/prometheus" rel="noreferrer">Helm chart for Prometheus</a>.</p>
<p><strong>The Problem</strong></p>
<p>I cannot figure out how to add scrape targets to the Prometheus server. The prometheus.io site describes how I can mount a prometheus.yml file (which contains a list of scrape targets) to a Prometheus Docker container -- I have done this locally and it works. However, I don't know how to specify scrape targets for a Prometheus setup installed via Kubernetes-Helm. Do I need to add a volume to the Prometheus server pod that contains the scrape targets, and therefore update the YAML files generated by Helm??</p>
<p>I am also not clear on how to expose metrics in a Kubernetes Pod -- do I need to forward a particular port?</p>
| <p>You need to add annotations to the service you want to monitor.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
prometheus.io/scrape: 'true'
</code></pre>
<p>From the prometheus.yml in the chart:</p>
<ul>
<li><code>prometheus.io/scrape</code>: Only scrape services that have a value of <code>true</code> </li>
<li><code>prometheus.io/scheme</code>: http or https</li>
<li><code>prometheus.io/path</code>: override if the metrics path is not <code>/metrics</code></li>
<li><code>prometheus.io/port</code>: If the metrics are exposed on a different port</li>
</ul>
<p>And yes, you need to expose the metrics port on the service so Prometheus can access it.</p>
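<p>Putting that together, an annotated Service might look like the sketch below — the name, port and label are examples, not values from your chart:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app                    # example name
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9102'    # example metrics port
    prometheus.io/path: '/metrics'
spec:
  selector:
    app: my-app
  ports:
  - name: metrics
    port: 9102
    targetPort: 9102
</code></pre>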
|
<p>Even after granting cluster roles to user, I get <code>Error from server (Forbidden): User "system:anonymous" cannot list nodes at the cluster scope. (get nodes)</code></p>
<p>I have the following set for the user in the ~/.kube/config file:</p>
<pre><code>- context:
    cluster: kubernetes
    user: [email protected]
  name: user@kubernetes
</code></pre>
<p>and the below added to admin.yaml to create cluster-role and cluster-rolebindings:</p>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
name: admin-role
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
name: admin-binding
subjects:
- kind: User
name: [email protected]
roleRef:
kind: ClusterRole
name: admin-role
</code></pre>
<p>When I try the command I still get error.</p>
<pre><code>kubectl [email protected] get nodes
Error from server (Forbidden): User "system:anonymous" cannot list nodes at the cluster scope. (get nodes)
</code></pre>
<p>Can someone please suggest on how to proceed.</p>
| <p>Your problem is not with your ClusterRoleBindings but rather with user authentication. Kubernetes tells you that it identified you as <code>system:anonymous</code> (which is similar to *NIX's nobody) and not [email protected] (to which you applied your binding).</p>
<p>In your specific case the reason for that is that the <code>username</code> flag uses HTTP Basic authentication and needs the <code>password</code> flag to actually do anything. But even if you did supply the password, you'd still need to actually tell the API server to accept that specific user.</p>
<p>Have a look at <a href="https://kubernetes.io/docs/admin/authentication/" rel="nofollow noreferrer">this</a> part of the Kubernetes documentation which deals with different methods of authentication. For the <code>username</code> and <code>password</code> authentication to work, you'd want to look at the Static Password File section, but I would actually recommend you go with X509 Client Certs since they are more secure and are operationally much simpler (no secrets on the Server, no state to replicate between API servers). </p>
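<p>As a very rough sketch of the static password file variant — the CSV format and flag come from that documentation section, while the file path below is an assumption, and the API server has to be restarted with the flag:</p>
<pre><code># each line of the file: password,user,uid   (assumed path below)
echo 'somepassword,[email protected],1001' >> /etc/kubernetes/basic-auth.csv
# kube-apiserver flag:
#   --basic-auth-file=/etc/kubernetes/basic-auth.csv
# client side:
kubectl config set-credentials [email protected] --username=[email protected] --password=somepassword
</code></pre>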
|
| <p>I tried to set up nginx ingress (NodePort) on Google Container Engine with the proxy protocol so that the real IP can be forwarded to the backend service, but ended up with a broken header.</p>
<pre><code>2017/02/05 13:48:52 [error] 18#18: *2 broken header: "�����~��]H�k��m[|����I��iv.�{y��Z �嵦v�Ȭq���2Iu4P�z;� o$�s����"���+�/�,�0̨̩����/" while reading PROXY protocol, client: 10.50.0.1, server: 0.0.0.0:443
</code></pre>
<p>Without the proxy protocol, things work well. According to <a href="https://blog.mythic-beasts.com/2016/05/09/proxy-protocol-nginx-broken-header/" rel="noreferrer">https://blog.mythic-beasts.com/2016/05/09/proxy-protocol-nginx-broken-header/</a> this is because protocol v2 (binary) is used, but nginx can only speak v1. Any suggestions?</p>
| <p>GKE: With kubernetes v1.6+ source ip is preserved by default and can be found in headers under <code>x-real-ip</code> without setting any extra nginx config. </p>
<p>AWS: Source ip can be preserved by adding this to the annotations </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-ingress
namespace: nginx-ingress
annotations:
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
labels:
app: nginx-ingress
</code></pre>
<p>Checkout this link
<a href="https://github.com/kubernetes/ingress/tree/master/examples/aws/nginx" rel="nofollow noreferrer">https://github.com/kubernetes/ingress/tree/master/examples/aws/nginx</a></p>
|
| <p>I would like to roll back the deployment to a certain revision (rollout history) using the client-go library of k8s. But so far I haven't found a solution. I could only fetch the resource revision, but not the 'deployment revision' that I get using kubectl:</p>
<pre><code>kubectl rollout history deployment/nginx_dep
</code></pre>
<p>Here is the code using client-go api :</p>
<pre><code>config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
clientset, err := kubernetes.NewForConfig(config)
dp, err := clientset.ExtensionsV1beta1Client.Deployments("default").Get("nginx-deployment", metav1.GetOptions{})
</code></pre>
<p><strong><em>Using the client-go api:
How do I get the existing revision for the given deployment? I want to roll back the deployment to use this revision. Can anyone tell me how I should do that?</em></strong></p>
<p>Here is the list of dependecies in my project:</p>
<pre><code>[[constraint]]
name = "k8s.io/client-go"
version = "3.0.0"
[[override]]
name = "k8s.io/apimachinery"
branch = "release-1.6"
</code></pre>
<p>Thank you in advance</p>
| <p>Assuming you already had a look at the <a href="https://github.com/kubernetes/client-go/tree/master/examples/create-update-delete-deployment" rel="nofollow noreferrer">update example</a>?</p>
<p>In any case, the <code>dp</code> variable here contains all you need:</p>
<pre><code>dp, err := clientset.ExtensionsV1beta1Client.Deployments("default").Get("nginx-deployment", metav1.GetOptions{})
</code></pre>
<p>So <code>dp</code> is of type <code>v1beta1.Deployment</code> which contains a variable of type <code>metav1.ObjectMeta</code> which has the <code>ResourceVersion</code>.</p>
|
<p>I modified the deployment config (production.yaml), changing the container image value.</p>
<p>I then ran this: <code>kubectl replace -f production.yaml</code>.</p>
<hr>
<p>While this occurred, my service did not appear to be responding, in addition:</p>
<p><code>kubectl get pods</code>:</p>
<p><code>wordpress-2105335096-dkrvg 3/3 Running 0 47s</code></p>
<p>a while later... :</p>
<p><code>wordpress-2992233824-l4287 3/3 Running 0 14s</code></p>
<p>a while later... :</p>
<p><code>wordpress-2992233824-l4287 0/3 ContainerCreating 0 7s</code></p>
<p>It seems it has terminated the previous pod before the new pod is <code>Running</code>... Why? </p>
<hr>
<p><strong>produciton.yaml:</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: wordpress
labels:
app: wordpress
spec:
replicas: 1
selector:
matchLabels:
app: wordpress
template:
metadata:
labels:
app: wordpress
spec:
terminationGracePeriodSeconds: 30
containers:
- image: eu.gcr.io/abcxyz/wordpress:deploy-1502463532
name: wordpress
imagePullPolicy: "Always"
env:
- name: WORDPRESS_HOST
value: localhost
- name: WORDPRESS_DB_USERNAME
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: username
volumeMounts:
- name: wordpress-persistent-storage
mountPath: /var/www/html
- image: eu.gcr.io/abcxyz/nginx:deploy-1502463532
name: nginx
imagePullPolicy: "Always"
ports:
- containerPort: 80
name: nginx
volumeMounts:
- name: wordpress-persistent-storage
mountPath: /var/www/html
- image: gcr.io/cloudsql-docker/gce-proxy:1.09
name: cloudsql-proxy
command: ["/cloud_sql_proxy", "--dir=/cloudsql",
"-instances=abcxyz:europe-west1:wordpressdb2=tcp:3306",
"-credential_file=/secrets/cloudsql/credentials.json"]
volumeMounts:
- name: cloudsql-instance-credentials
mountPath: /secrets/cloudsql
readOnly: true
- name: ssl-certs
mountPath: /etc/ssl/certs
- name: cloudsql
mountPath: /cloudsql
volumes:
- name: wordpress-persistent-storage
gcePersistentDisk:
pdName: wordpress-disk
fsType: ext4
- name: cloudsql-instance-credentials
secret:
secretName: cloudsql-instance-credentials
- name: ssl-certs
hostPath:
path: /etc/ssl/certs
- name: cloudsql
emptyDir:
</code></pre>
| <p>I believe this behaviour is correct according to the Kubernetes documentation. Assuming you specify <code>n</code> replicas for a deployment, the following steps will be taken by Kubernetes when updating a deployment:</p>
<ol>
<li>Terminate old pods, while ensuring that at least <code>n - 1</code> total pods are up</li>
<li>Create new pods until a maximum of <code>n + 1</code> total pods are up</li>
<li>As soon as new pods are up, go back to step 1 until <code>n</code> new pods are up</li>
</ol>
<p>In your case <code>n = 1</code>, which means that in the first step, all old pods will be terminated.</p>
<p>See <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#updating-a-deployment" rel="nofollow noreferrer">Updating a Deployment</a> for more information:</p>
<blockquote>
<p>Deployment can ensure that only a certain number of Pods may be down while they are being updated. By default, it ensures that at least 1 less than the desired number of Pods are up (1 max unavailable).
Deployment can also ensure that only a certain number of Pods may be created above the desired number of Pods. By default, it ensures that at most 1 more than the desired number of Pods are up (1 max surge).
In a future version of Kubernetes, the defaults will change from 1-1 to 25%-25%.</p>
</blockquote>
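<p>If the goal is to keep the single replica serving during the update, a sketch of a stricter strategy for this Deployment would be the snippet below — note that "up" here means the new Pod's containers report ready, so a readinessProbe is worth adding as well:</p>
<pre><code>spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never take the old Pod down before a new one is ready
      maxSurge: 1         # allow one extra Pod during the rollout
</code></pre>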
|
<p>I try to setup a mysql database in Kubernetes. I configured a ConfigMap to store the Database name and a Secret that contains the root password, the user and the password for the user.</p>
<p>When I try to connect to the DB afterwards (Inside the container with mysql cli and from outside with IntelliJ Database tool) I get an "ERROR 1045 (28000): Access denied for user 'testadm'@'localhost' (using password: YES)" error.</p>
<p>My kubernetes.yaml file:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: db
data:
mysql-database: database
---
apiVersion: v1
kind: Secret
metadata:
name: db-credentials
type: Opaque
data:
mysql-root-password: VGVzdDEyMzQK # Test1234
mysql-user: dGVzdGFkbQo= # testadm
mysql-password: VGVzdDEyMzQK # Test1234
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: mysql
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
spec:
containers:
- name: mysql
image: mysql:5.7
ports:
- containerPort: 3306
env:
- name: MYSQL_DATABASE
valueFrom:
configMapKeyRef:
name: db
key: mysql-database
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: db-credentials
key: mysql-root-password
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: db-credentials
key: mysql-user
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: db-credentials
key: mysql-password
</code></pre>
<p><strong>If I set the passwords directly like below the connection succeeds</strong> inside of the container and from the outside!</p>
<pre><code>env:
- name: MYSQL_ROOT_PASSWORD
value: Test1234
</code></pre>
<p>If I inspect the env variables inside the container I can't spot a difference between the two approaches.</p>
<p>Is there any additional formatting required to use the passwords stored in the secret? I also tried to place the values in the data-dictionary in quotes like this:</p>
<pre><code>data:
mysql-root-password: "VGVzdDEyMzQK"
</code></pre>
<p>Version information</p>
<pre><code>Docker 17.06.0-ce
Minikube 0.21.0
Kubectl Server 1.7.0
Kubectl Client 1.7.3
</code></pre>
| <p>Are you sure the data in your secret yaml is base64-encoded correctly? Using <a href="https://www.base64encode.org/" rel="nofollow noreferrer">https://www.base64encode.org/</a>, your <code>data</code> block is supposed to look like:</p>
<pre><code>data:
mysql-root-password: VGVzdDEyMzQ= # Test1234
mysql-user: dGVzdGFkbQ== # testadm
mysql-password: VGVzdDEyMzQ= # Test1234
</code></pre>
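<p>The trailing <code>K</code> in the original values is the base64 of a newline, which typically sneaks in via <code>echo</code>; generating the values with <code>-n</code> avoids it:</p>
<pre><code>echo -n 'Test1234' | base64   # VGVzdDEyMzQ=
echo -n 'testadm' | base64    # dGVzdGFkbQ==
</code></pre>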
|
| <p>I use GitLab for my CI; they host it and I have my own runners.
I have a k8s cluster running in GKE.</p>
<p>I want to use <code>kubectl apply</code> to deploy new versions of my containers.</p>
<p>This all works from my local machine because it uses my google account.</p>
<p>I tried setting this all up as suggested by k8s and gitlab:</p>
<ol>
<li>copy over the <code>ca.crt</code></li>
<li>copy over the token</li>
</ol>
<pre><code>- echo "$KUBE_CA_PEM" > kube_ca.pem
- kubectl config set-cluster default-cluster --server=$KUBE_URL --certificate-authority="$(pwd)/kube_ca.pem"
- kubectl config set-credentials default-admin --token=$KUBE_TOKEN
- kubectl config set-context default-system --cluster=default-cluster --user=default-admin
- kubectl config use-context default-system
</code></pre>
<p>When I do this it fails with <code>x509: certificate signed by unknown authority</code>.</p>
<p>I tried going to the Google Cloud console > cluster > show credentials and, instead of the token, specifying the username and password that it shows me there; this fails with the same error.</p>
<p>Finally I tried using <code>--insecure-skip-tls-verify=true</code>, but then it complains <code>error: You must be logged in to the server (the server has asked for the client to provide credentials)</code>.</p>
<p>Any help would be appreciated.</p>
| <p>The cause of this problem was an incorrect server url. The server needs to be the one defined on the cluster information page in the Google Cloud console. You will find an Endpoint IP address there.</p>
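<p>In other words, the <code>KUBE_URL</code> used in the <code>set-cluster</code> step above should point at that endpoint, e.g. (the IP is a placeholder):</p>
<pre><code>kubectl config set-cluster default-cluster --server=https://<CLUSTER_ENDPOINT_IP> --certificate-authority="$(pwd)/kube_ca.pem"
</code></pre>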
|
<p>I see more and more software organizations using gRPC in their service-oriented architectures, but people are also still using REST. In what use cases does it make sense to use gRPC, and when does it make sense to use REST for inter-service communication?</p>
<p>Interestingly, I've come across open source projects that use both REST and gRPC. For instance, Kubernetes and Docker Swarm all employ gRPC to some extent for cluster coordination, but also expose REST APIs for interfacing with master/leader nodes. Why not use gRPC up and down?</p>
<p><strong>Edit</strong>: for a summary of my own thoughts on this topic, see
<a href="https://medium.com/@natemurthy/rest-rpc-and-brokered-messaging-b775aeb0db3" rel="nofollow noreferrer">https://medium.com/@natemurthy/rest-rpc-and-brokered-messaging-b775aeb0db3</a></p>
| <p>When done correctly, REST improves long-term evolvability and scalability at the cost of performance and added complexity. REST is ideal for services that must be developed and maintained independently, like the Web itself. Client and server can be loosely coupled and change without breaking each other.</p>
<p>RPC services can be simpler and perform better, at the cost of flexibility and independence. RPC services are ideal for circumstances where client and server are tightly coupled and follow the same development cycle.</p>
<p>However, most so-called REST services don't really follow REST at all, because REST became just a buzzword for any kind of HTTP API. In fact, most so-called REST APIs are so tightly coupled, they offer no advantage over an RPC design. </p>
<p>Given that, my somewhat cynical answers to your question are:</p>
<ol>
<li><p>Some people are adopting gRPC for the same reason they adopted REST a few years ago: design-by-buzzword.</p></li>
<li><p>Many people are realizing the way they implement REST amounts to RPC anyway, so why not go with an standardized RPC framework and implement it correctly, instead of insisting on poor REST implementations?</p></li>
<li><p>REST is a solution for problems that appear in projects that span several organizations and have long-term goals. Maybe people are realizing they don't really need REST and looking for better options.</p></li>
</ol>
|
<p>I have the following deployment which puts up MySQL instance:</p>
<pre><code>kind: Deployment
apiVersion: apps/v1beta1
metadata:
name: mysql
spec:
replicas: 1
template:
metadata:
labels:
app: mysql
spec:
containers:
- name: mysql
image: mysql:8
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-root-password
key: password
</code></pre>
<p>The password is just <code>root</code> :</p>
<pre><code>kind: Secret
apiVersion: v1
metadata:
name: mysql-root-password
type: Opaque
data:
password: cm9vdA==
</code></pre>
<p><strong>The problem is I try to connect to the instance after port forwarding the MySQL port, following the instructions from <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="nofollow noreferrer">here</a>, but get an error:</strong></p>
<pre><code>$ kubectl port-forward mysql-824284009-rpbpk 3306
Forwarding from 127.0.0.1:3306 -> 3306
Forwarding from [::1]:3306 -> 3306
# from another terminal
$ mysql -u root -p
Enter password:
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
</code></pre>
<p>Connecting to the server from the pod itself works:</p>
<pre><code>$ kubectl exec -it mysql-824284009-rpbpk -- /bin/bash
root@mysql-824284009-rpbpk:/# mysql -u root -p
Enter password:
Welcome to the MySQL monitor. Commands end with ; or \g.
...
mysql>
</code></pre>
<p>I have basically the same setup as <a href="https://stackoverflow.com/questions/39910225/how-to-connect-mysql-running-on-kubernetes">here</a>, except I'm running the cluster in minikube instead of GCP.
My local MySQL is not running, so I assume there is no chance of clashing.</p>
| <p>The port forwarding is likely there, but you need to tell mysql client to connect using host/port and not unix socket (default)</p>
<pre><code>mysql --host=localhost --protocol tcp --port=3306 -u root -p
</code></pre>
<p>If you don't, mysql by default uses the local Linux socket to connect to the server: /var/run/mysqld/mysqld.sock. It even tells you so ;)</p>
<p><strong>Update:</strong> As Gabriel checked, adding <code>--protocol tcp</code> finally made it work, so I am adding it to my answer.</p>
|
<p>A livenessProbe (extracted from <a href="https://raw.githubusercontent.com/kubernetes/kubernetes.github.io/master/docs/tasks/configure-pod-container/exec-liveness.yaml" rel="noreferrer">an example</a>) below is working well.</p>
<pre><code>livenessProbe:
exec:
command:
- cat
- /tmp/healthy
</code></pre>
<p>But my livenessProbe is not working (the pod is continually restarted).
The YAML is below:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
labels:
test: liveness-test
name: liveness
spec:
containers:
- name: liveness
args:
- /bin/bash
- -c
- /home/my_home/run_myprogram.sh; sleep 20
image: liveness:v0.6
securityContext:
privileged: true
livenessProbe:
exec:
command:
- /home/my_home/check.sh
initialDelaySeconds: 10
periodSeconds: 5
</code></pre>
<p>/home/my_home/check.sh (to restart the pod when the number of running processes is 1 or 0) is below, which is pre-tested.</p>
<pre><code>#!/bin/sh
if [ $(ps -ef | grep -v grep | grep my-program | wc -l) -lt 2 ]; then
exit 1
else
exit 0
fi
</code></pre>
| <p>This problem is related to <a href="https://golang.org/pkg/os/exec/#Command" rel="noreferrer">Golang Command API</a>.
I changed the livenessProbe as below</p>
<pre><code>livenessProbe:
exec:
command:
- /bin/sh
- -c
- /home/test/check.sh
</code></pre>
|
<p>I have celery running in a docker container on GCP with Kubernetes. Its workers have recently started to get <code>kill -9</code>'d – this looks like it has something to do with OOMKiller. There are no OOM events in <code>kubectl get events</code>, which is something to be expected if these events only appear when a pod has trespassed <code>resources.limits.memory</code> value.</p>
<p>So, my theory is that celery process getting killed is a work of linux' own OOMKiller. This doesn't make sense though: if so much memory is consumed that OOMKiller enters the stage, how is it possible that this pod was scheduled in the first place? (assuming that Kubernetes does not allow scheduling of new pods if the sum of <code>resources.limits.memory</code> exceeds the amount of memory available to the system).</p>
<p>However, I am not aware of any other plausible reason for these SIGKILLs than OOMKiller.</p>
<p>An example of celery error (there is one for every worker):</p>
<pre><code>[2017-08-12 07:00:12,124: ERROR/MainProcess] Process 'ForkPoolWorker-7' pid:16 exited with 'signal 9 (SIGKILL)'
[2017-08-12 07:00:12,208: ERROR/MainProcess] Task handler raised error: WorkerLostError('Worker exited prematurely: signal 9 (SIGKILL).',)
</code></pre>
| <p>Containers can be OOMKilled for two reasons.</p>
<ol>
<li>If they exceed the memory limits of set for them. Limits are specified on a per container basis and if the container uses more memory than the limit it will be OOMKilled. From the process's point of view this is the same as if the system ran out of memory.</li>
<li><p>If the system runs out of memory. There are two kinds of resource specifications in Kubernetes: <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container" rel="nofollow noreferrer">requests and limits</a>. Limits specify the maximum amount of memory the container can use before being OOMKilled. Requests are used to schedule Pods and default to the limits if not specified. Requests must be less than or equal to container limits. That means that containers could be overcommitted on nodes and OOMKilled if multiple containers are using more memory than their respective requests at the same time.</p>
<p>For instance, if both process A and process B have request of 1GB and limit of 2GB, they can both be scheduled on a node that has 2GB of memory because requests are what is used for scheduling. Having requests less than the limit generally means that the container can burst up to 2GB but will usually use less than 1GB. Now, if both burst above 1GB at the same time the system can run out of memory and one container will get OOMKilled while still being below the limit set on the container. </p></li>
</ol>
<p>You can debug whether the container is being OOMKilled by examining the <code>containerStatuses</code> field on the Pod.</p>
<pre><code>$ kubectl get pod X -o json | jq '.status.containerStatuses'
</code></pre>
<p>If the pod was OOMKilled it will usually say something to that effect in the <code>lastState</code> field. In your case it looks like it may have been an OOM error based on issues filed against celery (like <a href="https://github.com/celery/celery/issues/1406" rel="nofollow noreferrer">this one</a>).</p>
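<p>For reference, the 1GB/2GB situation described above corresponds to a container spec along these lines (the numbers are just the example values):</p>
<pre><code>resources:
  requests:
    memory: "1Gi"   # used for scheduling
  limits:
    memory: "2Gi"   # container is OOMKilled above this
</code></pre>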
|
<p>The Google Container Engine's (GKE) documentation says: "There’s no vendor lock-in, you’re free to take your workloads out of Container Engine and run them anywhere Kubernetes is supported, including on your own on-premise servers."</p>
<p>Still, GKE is a managed environment for Kubernetes and I was wondering how difficult it would be to take your workloads out of GKE after, say, 2-3 years and run them on your own premises. What do you need to pay attention to maintain maximum portability?</p>
| <p>Your workloads will migrate out of GKE without much modification; the only things you would need to change are those tied to the cloud provider (ingress, persistent volumes, etc). Note that this is the same regardless of whether you are running in GKE or on GCE, AWS, Digital Ocean -- as you move environments you will need to make small tweaks to your workload definition if and where it is tied directly to the environment. </p>
<p>The larger change is migrating out of the hosted environment into a cluster that you manage yourself. You will have both the control and responsibility to configure many aspects of your system (authentication and authorization, admission controllers, flags to the various system components, base operating system, etc) that come together to make a smoothly operating cluster. You will also be in charge of managing upgrades for your cluster, which means you will need to keep up with Kubernetes releases to ensure that you are picking up patches for critical bugs and security vulnerabilities. </p>
|
<p>I have installed K8S using minikube on ubuntu 16.04 machine with VirtualBox driver.</p>
<hr>
<p>I am confused by the various documents related to this topic. Some say it is not possible with minikube, but the minikube documentation says it is suitable for test purposes. So I believe that maybe there is a way to achieve OIDC authentication with minikube. Is there any link for this which I can follow?</p>
<p>I want to enable OIDC in my production environment. But as I am not familiar with K8S, I thought minikube would be ideal for testing the feature first. That is the reason I want to know if minikube will support OIDC. If yes, I can make the changes here and then replicate the same in my production environment.</p>
<p>I have referred to the official documentation, but it does not give a detailed explanation of how to obtain the OIDC parameters and which files are to be modified.</p>
| <p>Now that I have spent time on this, I am answering this question so that it can help someone.
The answer is YES. Minikube provides a k8s setup which supports OIDC-based authentication. I have been able to configure it. So here are some details on how I configured the kube-apiserver parameters.</p>
<p><code>minikube start \
--extra-config=apiserver.Authorization.Mode=RBAC \
--extra-config=apiserver.Authentication.OIDC.IssuerURL=https://accounts.google.com \
--extra-config=apiserver.Authentication.OIDC.UsernameClaim=email \
--extra-config=apiserver.Authentication.OIDC.ClientID="client_id"</code></p>
<p>Also use the k8s-oidc-helper tool to get the refresh-token, id_token and other essential parameters. Then append the contents to the ~/.kube/config file and add the path for the api-server certificate and key. The user is now registered and can use their Gmail id to log in to k8s. </p>
|
<p>I got Kubernetes Minikube on my laptop (4cores, 8 GB RAM). I just performed the basic installation steps (got <em>miniKube</em> and <em>kubectl</em>, enabled the BIOS virtualization) and I am able to start the cluster:</p>
<pre><code>C:\Users\me>minikube start
Starting local Kubernetes cluster...
Starting VM...
SSH-ing files into VM...
Setting up certs...
Starting cluster components...
Connecting to cluster...
Setting up kubeconfig...
Kubectl is now configured to use the cluster.
</code></pre>
<p>However, when I try to interact with the cluster, I allways get the same error, sample:</p>
<pre><code>C:\Users\me>kubectl get pods --context=minikube
Unable to connect to the server: dial tcp 192.168.99.100:8443: connectex: A connection attempt failed because the connected party
did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
</code></pre>
<p>I execute <code>minikube ip</code> and I ping the result IP and I get a response. Also I tried to give more memory (3Gb vs the standard 2Gb) and nothing changed.</p>
<p>Am I doing something wrong here?</p>
<p>Thanks!</p>
| <p>I had the same issue as above. I found out that <code>kubectl</code> couldn't connect to the cluster and would throw up the error when I'm on a VPN connection. When I turned off my VPN client, it started working fine. </p>
|
<p>I am quite familiar with Docker, but I have zero experience on Kubernetes.</p>
<p>I have a <a href="https://docs.docker.com/engine/reference/commandline/stack/" rel="nofollow noreferrer">Docker stack</a> (multi-container) software that I can deploy in a Docker swarm cluster. I was wondering if Kubernetes has something similar? I don't need replicas, auto scaling and so on... I just need a group of containers working together with its dependencies and networks defined in single text file.</p>
<p>I have searched and found a tool called <a href="http://kompose.io/" rel="nofollow noreferrer">kompose</a> that translates the Docker stack file to Kubernetes syntax... However, it looks like the output is a list of <code>*.yaml</code> files, instead of a single file. </p>
<p>So, I came to the conclusion that Kubernetes does not have this exact functionality. Am I missing something?</p>
| <p>You can copy the content of the generated files into one file and separate them with <code>---</code>.</p>
<p>For instance, if you've got 3 Kubernetes files: <code>service.yml</code>, <code>deployment.yml</code> and <code>configmap.yml</code>, your file should look something like:</p>
<pre><code># content of service.yml
....
---
# content of deployment.yml
....
---
# content of configmap.yml
....
</code></pre>
<p>You would use the same <code>kubectl</code> commands to CRUD using this spec file.</p>
|
<p>I want to set up initial data in MySQL running in a container.
In docker-compose.yml, the following can create the initial data when running the container:</p>
<pre><code>volumes:
- db:/var/lib/mysql
- "./docker/mysql/conf.d:/etc/mysql/conf.d"
- "./docker/mysql/init.d:/docker-entrypoint-initdb.d"
</code></pre>
<p>However, how can I create initial data on Kubernetes when running?</p>
| <p>According to the MySQL Docker image <a href="https://hub.docker.com/_/mysql/" rel="noreferrer">README</a>, the part that is relevant to data initialization on container start-up is to ensure all your initialization files are mount to the container's <code>/docker-entrypoint-initdb.d</code> folder.</p>
<p>You can define your initial data in a <code>ConfigMap</code>, and mount the corresponding volume in your pod like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mysql
spec:
containers:
- name: mysql
image: mysql
ports:
- containerPort: 3306
volumeMounts:
- name: mysql-initdb
mountPath: /docker-entrypoint-initdb.d
volumes:
- name: mysql-initdb
configMap:
name: mysql-initdb-config
---
apiVersion: v1
kind: ConfigMap
metadata:
name: mysql-initdb-config
data:
initdb.sql: |
CREATE TABLE friends (id INT, name VARCHAR(256), age INT, gender VARCHAR(3));
INSERT INTO friends VALUES (1, 'John Smith', 32, 'm');
INSERT INTO friends VALUES (2, 'Lilian Worksmith', 29, 'f');
INSERT INTO friends VALUES (3, 'Michael Rupert', 27, 'm');
</code></pre>
|
<p>I am new to Kubernetes & Docker. I created a simple nodejs application and deployed on BlueMix Kubernetes. But I am unable to accesses the application on internet. The ip & port mentioned in the kubernetes is not accessible. Can somebody help me.</p>
<p>I tried <a href="http://10.76.193.146:31972" rel="nofollow noreferrer">http://10.76.193.146:31972</a>, but it did not go through. I am not sure if this is a public IP, since it is in the 10.x range.</p>
<p>I also tried the public ip ( <a href="http://184.173.1.79:31972" rel="nofollow noreferrer">http://184.173.1.79:31972</a> ) mentioned in the blue mix kubernetes cluster - screenshot below. But that too failed.</p>
<p>This are steps I followed.</p>
<ol>
<li>Created a nodejs app locally. It ran as desired on the local</li>
</ol>
<blockquote>
<pre><code>// Load the http module to create an http server.
var http = require('http');
// Configure our HTTP server to respond with Hello World to all requests.
var server = http.createServer(function (request, response) {
response.writeHead(200, {"Content-Type": "text/plain"});
response.end("Hello World\n");
});
// Listen on port 8000, IP defaults to 127.0.0.1
server.listen(8000);
// Put a friendly message on the terminal
console.log("Server running at http://127.0.0.1:8000/");
</code></pre>
</blockquote>
<p>---------- package.json</p>
<blockquote>
<pre><code>{
"name": "helloworld-nodejs",
"version": "0.0.1",
"description": "First Docker",
"main": "app.js",
"scripts": {
"start": "PORT=8000 node ./app.js"
},
"author": "",
"license": "ISC"
}
</code></pre>
</blockquote>
<ol start="2">
<li><p>Created a docker container locally and ran the docker. It worked properly</p></li>
<li><p>Uploaded the docker container on Bluemix registry as</p>
<blockquote>
<p>registry.ng.bluemix.net/testkubernetes/helloworld-nodejs:0.0.1</p>
</blockquote></li>
<li><p>Created the Nodes and Services in Kubernetes, using the following YAML file</p></li>
</ol>
<p>----------Node YAML file</p>
<blockquote>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: helloworld-nodejs
labels:
name: helloworld-nodejs
spec:
containers:
- name: helloworld-nodejs
image: registry.ng.bluemix.net/testkubernetes/helloworld-nodejs:0.0.1
ports:
- containerPort: 8000
</code></pre>
</blockquote>
<p>---------- Services YAML</p>
<blockquote>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: helloworld-nodejs
labels:
name: helloworld-nodejs
spec:
type: NodePort
selector:
name: helloworld-nodejs
ports:
- port: 8080
</code></pre>
</blockquote>
<ol start="5">
<li>The application gets deployed properly and is also running, which I can confirm from the logs</li>
</ol>
<p><a href="https://i.stack.imgur.com/uuRik.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uuRik.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/coKmv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/coKmv.png" alt="enter image description here"></a></p>
<p>Result of kubectl get services & kubectl get nodes command</p>
<p><a href="https://i.stack.imgur.com/K9DV0.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/K9DV0.png" alt="Result of kubectl get services"></a></p>
<p><a href="https://i.stack.imgur.com/0HMQA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0HMQA.png" alt="Result of kubectl get nodes"></a></p>
<p><a href="https://i.stack.imgur.com/9atM8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9atM8.png" alt="enter image description here"></a></p>
| <p>Since your service's <code>port</code> is different from your pod's <code>containerPort</code>, you will have to specify <code>targetPort</code> in your service.</p>
<pre><code>spec:
type: NodePort
selector:
name: helloworld-nodejs
ports:
- port: 8080
targetPort: 8000
</code></pre>
<p>According to the Kubernetes <a href="https://kubernetes.io/docs/api-reference/v1.5/#serviceport-v1" rel="nofollow noreferrer">documentation</a> on <code>targetPort</code>, it is the:</p>
<blockquote>
<p>Number or name of the port to access on the pods targeted by the
service. .... If this is not specified, the value of the 'port' field
is used (an identity map).</p>
</blockquote>
|
<p>I have created a Kubernetes cluster of 2 nodes and one master on CentOS 7. When I use <code>kubectl get nodes</code>, it lists the nodes and everything seems to be working.</p>
<p>My question is: Will the cluster automatically balance resources between the nodes now? I mean, let's say I installed an nginx web server with WordPress on the master node (which has 512MB of memory); if memory usage is almost full on the master, will it automatically start using the other nodes according to its needs, or should I do something manually?</p>
<p>And does it also balance the CPU load between the nodes?</p>
| <p>Kubernetes won't rebalance resources between nodes. It schedules each workload onto a node that has enough available resources. In your case, if memory is full on the master node and you want to run another pod, Kubernetes will schedule it on the second node.</p>
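<p>It also helps to declare resource requests on your containers so the scheduler knows how much room each pod needs; a minimal sketch (the numbers are only an example, not a recommendation):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: wordpress
spec:
  containers:
  - name: wordpress
    image: wordpress
    resources:
      requests:
        memory: "256Mi"   # the scheduler only places this pod on a node with 256Mi free
        cpu: "250m"
</code></pre>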
|
<p>I'm using uwsgi in a container hosted in a Kubernetes cluster. UWSGI supports a traditional <a href="http://uwsgi-docs.readthedocs.io/en/latest/ThingsToKnow.html" rel="noreferrer">master / slave architecture</a> to provide a better availability for the application but my question is, should I even use this feature?</p>
<p>In other words, when I need more processes to handle and compute requests, should I increase the number of pods in the cluster or should I still use master / slave mode of UWSGI to respond to the requests?</p>
| <p>Be conscious of having enough threads/processes/pods to maintain availability if your application blocks while serving each HTTP request (e.g. Django). There is going to be some pod startup time if you're using a horizontal pod autoscaler, and I found with a high traffic application I had much better availability with uwsgi and the application within each pod (same container), and a separate nginx pod doing reverse proxying and request pooling when all the uwsgi workers were busy. </p>
<p>YMMV but at the end of the day, availability is more important than sticking to the single process per pod rule of thumb. Just understand the downsides, such as less isolation between the processes within the same container. Logs are available on a per container basis, so there won't be isolation between anything in the same container using the built in kubectl logs functionality. </p>
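<p>For reference, a minimal uwsgi configuration along those lines might look like the following; the module name <code>app:application</code>, the port and the worker counts are assumptions, not taken from your setup:</p>
<pre><code>[uwsgi]
# hypothetical WSGI entry point
module = app:application
# the master process supervises the workers
master = true
# several workers per pod so one blocked request doesn't stall the whole pod
processes = 4
threads = 2
# the separate nginx pod proxies requests to this port
http = :8000
</code></pre>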
|
<p>I've got a persistent disk (GCP), that I'm hoping to be able to allow read write access to multiple pods.</p>
<p>Is this possible? Here are my two configs:</p>
<p><strong>pVolume.yaml</strong></p>
<pre><code>apiVersion: "v1"
kind: "PersistentVolume"
metadata:
name: "pv0001"
spec:
storageClassName: manual
capacity:
storage: "10Gi"
accessModes:
- "ReadWriteMany"
gcePersistentDisk:
fsType: "ext4"
pdName: "wordpress-disk"
</code></pre>
<p><strong>pVolumeClaim.yaml</strong></p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: task-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteMany
resources:
requests:
storage: 3Gi
</code></pre>
<p>With the above config, I see the following error on my pods:</p>
<pre><code>FailedMount Failed to attach volume "pv0001" on node "xyz" with: googleapi: Error 400: The disk resource 'abc' is already being used by 'xyz'
</code></pre>
<p>This occurs with the replica count set to 2. </p>
| <p>For a GCP persistent disk in ReadWrite mode on different nodes this is not possible :(</p>
<p>It is possible however:</p>
<ul>
<li>Have both replicas scheduled on the <strong>same node</strong>. In that case both of them can mount the same persistent disk ReadWrite</li>
<li>Use it in <strong>ReadOnly mode</strong>, on any number of nodes</li>
<li>Use a <strong>different kind of PV</strong>, like <a href="https://github.com/gluster/gluster-kubernetes" rel="nofollow noreferrer">gluster</a> or nfs that supports this kind of use</li>
</ul>
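<p>As a rough sketch of the read-only option above, the deployment can mount the disk directly with <code>readOnly: true</code> (reusing the disk name from your config):</p>
<pre><code>      volumes:
      - name: wordpress-persistent-storage
        gcePersistentDisk:
          pdName: wordpress-disk
          fsType: ext4
          readOnly: true   # many nodes may attach a GCE PD at once, but only read-only
</code></pre>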
|
<p>I'm not sure why the persistent volume is not being claimed, or what steps I could take to further diagnose this? </p>
<p>Should the claim size match the volume size? Should the volume size match the GCP volume size?</p>
<p>This is so difficult to test and figure out... </p>
<p>My goal here is just to be able to create a Wordpress instance with even a single replica as long as it would support rolling deployments....</p>
<p>Output of <code>kubectl get pods</code>:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
wordpress-1546832918-mz4rt 0/3 Pending 0 47m
wordpress-1546832918-p0s1s 0/3 Pending 0 47m
</code></pre>
<p>Output of <code>kubectl describe pods</code>:</p>
<pre><code>...truncated...
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
47m 3s 168 default-scheduler Warning FailedScheduling [SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "task-pv-claim", which is unexpected., SchedulerPredicates failed due to PersistentVolumeClaim is not bound: "task-pv-claim", which is unexpected.]
</code></pre>
<p>Output of <code>kubectl get pvc</code>:</p>
<pre><code>NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
task-pv-claim Pending manual 4h
</code></pre>
<p>Output of <code>kubectl get pv</code>:</p>
<pre><code>NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
pv0001 10Gi RWX Retain Available manual 4h
</code></pre>
<p><strong>production.yaml</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: wordpress
labels:
app: wordpress
spec:
replicas: 2
selector:
matchLabels:
app: wordpress
template:
metadata:
labels:
app: wordpress
spec:
terminationGracePeriodSeconds: 30
containers:
- image: eu.gcr.io/abcxyz/wordpress:deploy-1502807720
name: wordpress
imagePullPolicy: "Always"
env:
- name: WORDPRESS_HOST
value: localhost
- name: WORDPRESS_DB_USERNAME
valueFrom:
secretKeyRef:
name: cloudsql-db-credentials
key: username
volumeMounts:
- name: wordpress-persistent-storage
mountPath: /var/www/html
- image: eu.gcr.io/abcxyz/nginx:deploy-1502807720
name: nginx
imagePullPolicy: "Always"
ports:
- containerPort: 80
name: nginx
volumeMounts:
- name: wordpress-persistent-storage
mountPath: /var/www/html
readOnly: true
- image: gcr.io/cloudsql-docker/gce-proxy:1.09
name: cloudsql-proxy
command: ["/cloud_sql_proxy", "--dir=/cloudsql",
"-instances=abcxyz:europe-west1:wordpressdb2=tcp:3306",
"-credential_file=/secrets/cloudsql/credentials.json"]
volumeMounts:
- name: cloudsql-instance-credentials
mountPath: /secrets/cloudsql
readOnly: true
- name: ssl-certs
mountPath: /etc/ssl/certs
- name: cloudsql
mountPath: /cloudsql
volumes:
- name: wordpress-persistent-storage
persistentVolumeClaim:
claimName: "task-pv-claim"
- name: cloudsql-instance-credentials
secret:
secretName: cloudsql-instance-credentials
- name: ssl-certs
hostPath:
path: /etc/ssl/certs
- name: cloudsql
emptyDir:
</code></pre>
<p><strong>pVolume.yaml</strong></p>
<pre><code>apiVersion: "v1"
kind: "PersistentVolume"
metadata:
name: "pv0001"
spec:
storageClassName: manual
capacity:
storage: "10Gi"
accessModes:
- "ReadWriteMany"
gcePersistentDisk:
fsType: "ext4"
pdName: "wordpress-disk"
</code></pre>
<p><strong>pVolumeClaim.yaml</strong></p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: task-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
</code></pre>
| <p>The <code>spec.accessModes</code> of your persistent volume claim has to match that of the persistent volume. Try changing both of them to the same value.</p>
<p>If that doesn't work, you can add the <code>spec.selector</code> definition to your persistent volume claim definition, by updating it to match your persistent volume <code>metadata.labels</code> like this:</p>
<pre><code>apiVersion: "v1"
kind: "PersistentVolume"
metadata:
name: "pv0001"
labels:
name: "pv0001" # can be anything as long as it matches the selector in the pvc
spec:
...
----
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: task-pv-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
selector:
matchLabels:
name: "pv0001"
</code></pre>
<p>The <code>spec.selector</code> serves as a filter to ensure that only PV with the specified labels are matched.</p>
|
<p>I've found that it is hard to use minikube to run local tests that depend on certain containers to be running. For instance using a remote headless-chrome to run some tests:</p>
<pre><code> containers:
- image: chrome-test
imagePullPolicy: Never
ports:
- containerPort: 80
- image: headless-chrome
imagePullPolicy: Never
ports:
- containerPort: 8910
securityContext:
capabilities:
add:
- SYS_ADMIN
</code></pre>
<p>I've found it easier to use docker-compose for instances like these. That way I don't have to deal with deleting pods, and finding log outputs. I understand this is a pretty open ended question, but I figure any information in this area could be helpful.
What is the best way to test multiple containers with kubernetes?</p>
| <p>You can use docker-compose, but minikube also makes it very easy to test things.</p>
<p>So for my testing I do the following steps:</p>
<ul>
<li>Create new Kubernetes <code>Namespace</code></li>
<li>Deploy artifacts (here you deploy the <code>deployment</code>)</li>
<li>Expose that <code>deployment</code> with a <code>service</code> of type <code>NodePort</code></li>
<li>Now all you need is to find out what random port was chosen as nodePort and curl on that port with <code>minikube ip:nodePort</code></li>
<li>If you want logs of the pod you can get that as well</li>
<li>Once you see all the things you wanna see, just delete the <code>Namespace</code>.</li>
</ul>
<p>I have done similar things while doing end-to-end testing of a tool called <a href="https://github.com/kedgeproject/kedge" rel="nofollow noreferrer">kedge</a>, which generates Kubernetes artifacts: we deploy all generated artifacts on a running cluster in a similar way to what I have described above, ref: <a href="https://github.com/kedgeproject/kedge/pull/77" rel="nofollow noreferrer">here</a>.</p>
|
<p>I have created a Kubernetes cluster of 2 nodes and one master on CentOS 7. When I use <code>kubectl get nodes</code>, it lists the nodes and everything seems to be working.</p>
<p>My question is: Will the cluster automatically balance resources between the nodes now? I mean, let's say I installed an nginx web server with WordPress on the master node (which has 512MB of memory); if memory usage is almost full on the master, will it automatically start using the other nodes according to its needs, or should I do something manually?</p>
<p>And does it also balance the CPU load between the nodes?</p>
| <p>The key point that sfgroups mentioned is that a pod cannot be split between nodes, so there is no "balancing of resources" when resources are exhausted on a node. Your process running in the pod will be OOM killed by the kernel if it hits its memory cgroup limit, and it will be throttled if it hits its CPU cgroup limit. </p>
<p>You need pods that fit onto your nodes with room to spare for the daemons that run on every node (kubelet and kube-proxy). </p>
<p>The only automatic things that kubernetes does (related to your question) are: 1) scheduling pods on nodes where they fit, 2) autoscaling pods (when running a horizontal pod autoscaler), and 3) autoscaling nodes (when running a cluster autoscaler). You might also be interested in the behavior of the replicaset and deployment, which control the number of pods that are present in the cluster for a specific application. </p>
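<p>For completeness, the pod autoscaling mentioned above can be switched on with a single command once your deployment declares CPU requests (the deployment name and thresholds here are placeholders):</p>
<pre><code>kubectl autoscale deployment wordpress --min=1 --max=5 --cpu-percent=80
</code></pre>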
|
<p>I am currently trying to deploy the following on Minikube.
I updated the configuration files to use a hostpath as a persistent storage on minikube node.</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: "pv-volume"
spec:
capacity:
storage: "20Gi"
accessModes:
- "ReadWriteOnce"
hostPath:
path: /data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: "orientdb-pv-claim"
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "20Gi"
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: orientdbservice
spec:
#replicas: 1
template:
metadata:
name: orientdbservice
labels:
run: orientdbservice
test: orientdbservice
spec:
containers:
- name: orientdbservice
image: orientdb:latest
env:
- name: ORIENTDB_ROOT_PASSWORD
value: "rootpwd"
ports:
- containerPort: 2480
name: orientdb
volumeMounts:
- name: orientdb-config
mountPath: /data/orientdb/config
- name: orientdb-databases
mountPath: /data/orientdb/databases
- name: orientdb-backup
mountPath: /data/orientdb/backup
volumes:
- name: orientdb-config
persistentVolumeClaim:
claimName: orientdb-pv-claim
- name: orientdb-databases
persistentVolumeClaim:
claimName: orientdb-pv-claim
- name: orientdb-backup
persistentVolumeClaim:
claimName: orientdb-pv-claim
---
apiVersion: v1
kind: Service
metadata:
name: orientdbservice
labels:
run: orientdbservice
spec:
type: NodePort
selector:
run: orientdbservice
ports:
- protocol: TCP
port: 2480
name: http
</code></pre>
<p>which results in the following:</p>
<pre><code>#kubectl get pv
NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-volume 20Gi RWO Retain Available 4h
pvc-cd14d593-78fc-11e7-a46d-1277ec3dd2b5 20Gi RWO Delete Bound default/orientdb-pv-claim standard 4h
#kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
orientdb-pv-claim Bound pvc-cd14d593-78fc-11e7-a46d-1277ec3dd2b5 20Gi RWO standard 4h
#kubectl get svc
NAME READY STATUS RESTARTS AGE
orientdbservice-458328598-zsmw5 0/1 ContainerCreating 0 3h
#kubectl describe pod orientdbservice-458328598-zsmw5
.
.
.
Events:
FirstSeen LastSeen Count From SubObjectPath TypeReason Message
--------- -------- ----- ---- ------------- -------- ------ -------
3h 41s 26 kubelet, minikube Warning FailedMount Unable to mount volumes for pod "orientdbservice-458328598-zsmw5_default(392b1298-78ff-11e7-a46d-1277ec3dd2b5)": timeout expired waiting for volumes to attach/mount for pod "default"/"orientdbservice-458328598-zsmw5". list of unattached/unmounted volumes=[orientdb-databases]
</code></pre>
<p>It seems that the volumes are not able to mount for the pod. Is there something wrong with the way I am creating a persistent volume on my node?
Appreciate all the help.</p>
| <p>Few questions before I tell you what worked for me:</p>
<ul>
<li>Does the directory <code>/data</code> on the minikube machine have the right set of permissions?</li>
<li>In minikube you don't need to worry about setting up volumes; in other words, don't worry about <code>PersistentVolume</code> anymore. Just enable the volume provisioner addon using the following command. Once you do that, every <code>PersistentVolumeClaim</code> that tries to claim storage will get whatever it needs.</li>
</ul>
<p><code>minikube addons enable default-storageclass</code></p>
<p>So here is what worked for me:</p>
<ul>
<li>I removed the <code>PersistentVolume</code></li>
<li>I have changed the <code>mountPath</code> also to match what is given in the upstream docs <a href="https://hub.docker.com/_/orientdb/" rel="nofollow noreferrer">https://hub.docker.com/_/orientdb/</a></li>
<li>I have added separate <code>PersistentVolumeClaim</code> for <code>databases</code> and <code>backup</code></li>
<li>I have changed the <code>config</code> from a <code>PersistentVolumeClaim</code> to a <code>configMap</code>, so you don't need to worry about <em>how do I get the config onto the running cluster?</em>; you do it using a configMap, because the config comes from a set of config files.</li>
</ul>
<p>Here is the config that worked for me:</p>
<pre><code>---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
app: orientdbservice
name: orientdb-databases
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
status: {}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
app: orientdbservice
name: orientdb-backup
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
status: {}
---
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
app: orientdbservice
name: orientdbservice
spec:
ports:
- name: orientdbservice-2480
port: 2480
targetPort: 0
- name: orientdbservice-2424
port: 2424
targetPort: 0
selector:
app: orientdbservice
type: NodePort
status:
loadBalancer: {}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: orientdbservice
name: orientdbservice
spec:
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: orientdbservice
name: orientdbservice
spec:
containers:
- env:
- name: ORIENTDB_ROOT_PASSWORD
value: rootpwd
image: orientdb
name: orientdbservice
resources: {}
volumeMounts:
- mountPath: /orientdb/databases
name: orientdb-databases
- mountPath: /orientdb/backup
name: orientdb-backup
- mountPath: /orientdb/config
name: orientdb-config
volumes:
- configMap:
name: orientdb-config
name: orientdb-config
- name: orientdb-databases
persistentVolumeClaim:
claimName: orientdb-databases
- name: orientdb-backup
persistentVolumeClaim:
claimName: orientdb-backup
</code></pre>
<p>And for the <code>configMap</code> I had to go to this directory for a sample config <a href="https://github.com/orientechnologies/orientdb-docker/tree/b226ca56bc9efb89ebbd79d68a05234af76fd0ae/examples/3-nodes-compose/var/odb3/config" rel="nofollow noreferrer">examples/3-nodes-compose/var/odb3/config</a> in the GitHub repository <a href="https://github.com/orientechnologies/orientdb-docker" rel="nofollow noreferrer">orientechnologies/orientdb-docker</a></p>
<p>Go to that directory (or the directory where you have saved your config) and run the following command:</p>
<pre><code>kubectl create configmap orientdb-config --from-file=.
</code></pre>
<p>If you want to see what will be created and deployed, run the following:</p>
<pre><code>kubectl create configmap orientdb-config --from-file=. --dry-run -o yaml
</code></pre>
<p>Here is the configMap I have used to do my deployment:</p>
<pre><code>apiVersion: v1
data:
automatic-backup.json: |-
{
"enabled": true,
"mode": "FULL_BACKUP",
"exportOptions": "",
"delay": "4h",
"firstTime": "23:00:00",
"targetDirectory": "backup",
"targetFileName": "${DBNAME}-${DATE:yyyyMMddHHmmss}.zip",
"compressionLevel": 9,
"bufferSize": 1048576
}
backups.json: |-
{
"backups": []
}
default-distributed-db-config.json: |
{
"autoDeploy": true,
"readQuorum": 1,
"writeQuorum": "majority",
"executionMode": "undefined",
"readYourWrites": true,
"newNodeStrategy": "static",
"servers": {
"*": "master"
},
"clusters": {
"internal": {
},
"*": {
"servers": [
"<NEW_NODE>"
]
}
}
}
events.json: |-
{
"events": []
}
hazelcast.xml: "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<!-- ~ Copyright (c)
2008-2012, Hazel Bilisim Ltd. All Rights Reserved. ~ \n\t~ Licensed under the
Apache License, Version 2.0 (the \"License\"); ~ you may \n\tnot use this file
except in compliance with the License. ~ You may obtain \n\ta copy of the License
at ~ ~ http://www.apache.org/licenses/LICENSE-2.0 ~ \n\t~ Unless required by applicable
law or agreed to in writing, software ~ distributed \n\tunder the License is distributed
on an \"AS IS\" BASIS, ~ WITHOUT WARRANTIES \n\tOR CONDITIONS OF ANY KIND, either
express or implied. ~ See the License for \n\tthe specific language governing
permissions and ~ limitations under the License. -->\n\n<hazelcast\n\txsi:schemaLocation=\"http://www.hazelcast.com/schema/config
hazelcast-config-3.3.xsd\"\n\txmlns=\"http://www.hazelcast.com/schema/config\"
xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\">\n\t<group>\n\t\t<name>orientdb</name>\n\t\t<password>orientdb</password>\n\t</group>\n\t<network>\n\t\t<port
auto-increment=\"true\">2434</port>\n\t\t<join>\n\t\t\t<multicast enabled=\"true\">\n\t\t\t\t<multicast-group>235.1.1.1</multicast-group>\n\t\t\t\t<multicast-port>2434</multicast-port>\n\t\t\t</multicast>\n\t\t</join>\n\t</network>\n\t<executor-service>\n\t\t<pool-size>16</pool-size>\n\t</executor-service>\n</hazelcast>\n"
orientdb-client-log.properties: |
#
# /*
# * Copyright 2014 Orient Technologies LTD (info(at)orientechnologies.com)
# *
# * Licensed under the Apache License, Version 2.0 (the "License");
# * you may not use this file except in compliance with the License.
# * You may obtain a copy of the License at
# *
# * http://www.apache.org/licenses/LICENSE-2.0
# *
# * Unless required by applicable law or agreed to in writing, software
# * distributed under the License is distributed on an "AS IS" BASIS,
# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# * See the License for the specific language governing permissions and
# * limitations under the License.
# *
# * For more information: http://www.orientechnologies.com
# */
#
# Specify the handlers to create in the root logger
# (all loggers are children of the root logger)
# The following creates two handlers
handlers = java.util.logging.ConsoleHandler
# Set the default logging level for the root logger
.level = ALL
com.orientechnologies.orient.server.distributed.level = FINE
com.orientechnologies.orient.core.level = WARNING
# Set the default logging level for new ConsoleHandler instances
java.util.logging.ConsoleHandler.level = WARNING
# Set the default formatter for new ConsoleHandler instances
java.util.logging.ConsoleHandler.formatter = com.orientechnologies.common.log.OLogFormatter
orientdb-server-config.xml: |
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<orient-server>
<handlers>
<handler class="com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin">
<parameters>
<parameter value="${distributed}" name="enabled"/>
<parameter value="${ORIENTDB_HOME}/config/default-distributed-db-config.json" name="configuration.db.default"/>
<parameter value="${ORIENTDB_HOME}/config/hazelcast.xml" name="configuration.hazelcast"/>
<parameter value="odb3" name="nodeName"/>
</parameters>
</handler>
<handler class="com.orientechnologies.orient.server.handler.OJMXPlugin">
<parameters>
<parameter value="false" name="enabled"/>
<parameter value="true" name="profilerManaged"/>
</parameters>
</handler>
<handler class="com.orientechnologies.orient.server.handler.OAutomaticBackup">
<parameters>
<parameter value="false" name="enabled"/>
<parameter value="${ORIENTDB_HOME}/config/automatic-backup.json" name="config"/>
</parameters>
</handler>
<handler class="com.orientechnologies.orient.server.handler.OServerSideScriptInterpreter">
<parameters>
<parameter value="true" name="enabled"/>
<parameter value="SQL" name="allowedLanguages"/>
</parameters>
</handler>
<handler class="com.orientechnologies.orient.server.plugin.livequery.OLiveQueryPlugin">
<parameters>
<parameter value="false" name="enabled"/>
</parameters>
</handler>
</handlers>
<network>
<sockets>
<socket implementation="com.orientechnologies.orient.server.network.OServerSSLSocketFactory" name="ssl">
<parameters>
<parameter value="false" name="network.ssl.clientAuth"/>
<parameter value="config/cert/orientdb.ks" name="network.ssl.keyStore"/>
<parameter value="password" name="network.ssl.keyStorePassword"/>
<parameter value="config/cert/orientdb.ks" name="network.ssl.trustStore"/>
<parameter value="password" name="network.ssl.trustStorePassword"/>
</parameters>
</socket>
<socket implementation="com.orientechnologies.orient.server.network.OServerSSLSocketFactory" name="https">
<parameters>
<parameter value="false" name="network.ssl.clientAuth"/>
<parameter value="config/cert/orientdb.ks" name="network.ssl.keyStore"/>
<parameter value="password" name="network.ssl.keyStorePassword"/>
<parameter value="config/cert/orientdb.ks" name="network.ssl.trustStore"/>
<parameter value="password" name="network.ssl.trustStorePassword"/>
</parameters>
</socket>
</sockets>
<protocols>
<protocol implementation="com.orientechnologies.orient.server.network.protocol.binary.ONetworkProtocolBinary" name="binary"/>
<protocol implementation="com.orientechnologies.orient.server.network.protocol.http.ONetworkProtocolHttpDb" name="http"/>
</protocols>
<listeners>
<listener protocol="binary" socket="default" port-range="2424-2430" ip-address="0.0.0.0"/>
<listener protocol="http" socket="default" port-range="2480-2490" ip-address="0.0.0.0">
<commands>
<command implementation="com.orientechnologies.orient.server.network.protocol.http.command.get.OServerCommandGetStaticContent" pattern="GET|www GET|studio/ GET| GET|*.htm GET|*.html GET|*.xml GET|*.jpeg GET|*.jpg GET|*.png GET|*.gif GET|*.js GET|*.css GET|*.swf GET|*.ico GET|*.txt GET|*.otf GET|*.pjs GET|*.svg GET|*.json GET|*.woff GET|*.woff2 GET|*.ttf GET|*.svgz" stateful="false">
<parameters>
<entry value="Cache-Control: no-cache, no-store, max-age=0, must-revalidate\r\nPragma: no-cache" name="http.cache:*.htm *.html"/>
<entry value="Cache-Control: max-age=120" name="http.cache:default"/>
</parameters>
</command>
</commands>
<parameters>
<parameter value="utf-8" name="network.http.charset"/>
<parameter value="true" name="network.http.jsonResponseError"/>
<parameter value="Access-Control-Allow-Origin:*;Access-Control-Allow-Credentials: true" name="network.http.additionalResponseHeaders"/>
</parameters>
</listener>
</listeners>
</network>
<storages/>
<users>
<user resources="*" password="{PBKDF2WithHmacSHA256}8B5E4C8ABD6A68E8329BD58D1C785A467FD43809823C8192:BE5D490BB80D021387659F7EF528D14130B344D6D6A2D590:65536" name="root"/>
<user resources="connect,server.listDatabases,server.dblist" password="{PBKDF2WithHmacSHA256}268A3AFC0D2D9F25AB7ECAC621B5EA48387CF2B9996E1881:CE84E3D0715755AA24545C23CDACCE5EBA35621E68E34BF2:65536" name="guest"/>
</users>
<properties>
<entry value="1" name="db.pool.min"/>
<entry value="50" name="db.pool.max"/>
<entry value="true" name="profiler.enabled"/>
</properties>
<isAfterFirstTime>true</isAfterFirstTime>
</orient-server>
orientdb-server-log.properties: |
#
# /*
# * Copyright 2014 Orient Technologies LTD (info(at)orientechnologies.com)
# *
# * Licensed under the Apache License, Version 2.0 (the "License");
# * you may not use this file except in compliance with the License.
# * You may obtain a copy of the License at
# *
# * http://www.apache.org/licenses/LICENSE-2.0
# *
# * Unless required by applicable law or agreed to in writing, software
# * distributed under the License is distributed on an "AS IS" BASIS,
# * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# * See the License for the specific language governing permissions and
# * limitations under the License.
# *
# * For more information: http://www.orientechnologies.com
# */
#
# Specify the handlers to create in the root logger
# (all loggers are children of the root logger)
# The following creates two handlers
handlers = java.util.logging.ConsoleHandler, java.util.logging.FileHandler
# Set the default logging level for the root logger
.level = INFO
com.orientechnologies.level = INFO
com.orientechnologies.orient.server.distributed.level = INFO
# Set the default logging level for new ConsoleHandler instances
java.util.logging.ConsoleHandler.level = INFO
# Set the default formatter for new ConsoleHandler instances
java.util.logging.ConsoleHandler.formatter = com.orientechnologies.common.log.OAnsiLogFormatter
# Set the default logging level for new FileHandler instances
java.util.logging.FileHandler.level = INFO
# Naming style for the output file
java.util.logging.FileHandler.pattern=../log/orient-server.log
# Set the default formatter for new FileHandler instances
java.util.logging.FileHandler.formatter = com.orientechnologies.common.log.OLogFormatter
# Limiting size of output file in bytes:
java.util.logging.FileHandler.limit=10000000
# Number of output files to cycle through, by appending an
# integer to the base file name:
java.util.logging.FileHandler.count=10
kind: ConfigMap
metadata:
creationTimestamp: null
name: orientdb-config
</code></pre>
<p>Here is what it looks like for me:</p>
<pre><code>$ kubectl get all
NAME READY STATUS RESTARTS AGE
po/orientdbservice-4064909316-pzxhl 1/1 Running 0 1h
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/orientdbservice 10.0.0.185 <nodes> 2480:31058/TCP,2424:30671/TCP 1h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/orientdbservice 1 1 1 1 1h
NAME DESIRED CURRENT READY AGE
rs/orientdbservice-4064909316 1 1 1 1h
$ kubectl get cm
NAME DATA AGE
orientdb-config 8 1h
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
orientdb-backup Bound pvc-9c1507ea-8253-11e7-9e2b-52540058bb88 10Gi RWO standard 1h
orientdb-databases Bound pvc-9c00ca83-8253-11e7-9e2b-52540058bb88 10Gi RWO standard 1h
</code></pre>
<p>The config generated above might look little different, it was auto generated from the tool called <a href="https://github.com/kedgeproject/kedge" rel="nofollow noreferrer">kedge</a> see the instructions of how I did it in this gist: <a href="https://gist.github.com/surajssd/6bbe43a1b2ceee01962e0a1480d8cb04" rel="nofollow noreferrer">https://gist.github.com/surajssd/6bbe43a1b2ceee01962e0a1480d8cb04</a></p>
|
<p>How do I set <code>Time in minutes to retain slave when idle</code> and <code>Max number of instances</code> in a pipeline when configuring <code>podTemplate</code>?</p>
<p>I see these two config options under System -> Cloud -> Kubernetes, but I use a pipeline and I couldn't figure out how to set them there.</p>
<p>Currently my pipeline looks like the one below.</p>
<pre><code>podTemplate(label: 'docker-go',
containers: [
containerTemplate(
name: 'jnlp',
image: 'docker.mydomain.com/library/jnlp-slave:2.62',
command: '',
args: '${computer.jnlpmac} ${computer.name}',
),
containerTemplate(name: 'docker', image: 'docker.mydomain.com/library/docker:1.12.6', ttyEnabled: true, command: 'cat'),
containerTemplate(name: 'golang', image: 'docker.mydomain.com/library/golang:1.8.3', ttyEnabled: true, command: '')
],
volumes: [hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock')]
) {
def image_tag = "docker.mydomain.com/deploy-demo/demo-go:v0.1"
def workdir = "/go/src/demo-go"
node('docker-go') {
stage('setup') {
}
stage('clone') {
}
stage('compile') {
}
stage('build and push image') {
}
}
}
</code></pre>
| <p>Ok, I figured it out.</p>
<p>Add these two.
<code>
idleMinutes: 10
instanceCap: 10
</code></p>
<pre><code>podTemplate(label: 'docker-go',
containers: [
containerTemplate(
name: 'jnlp',
image: 'docker.mydomain.com/library/jnlp-slave:2.62',
command: '',
args: '${computer.jnlpmac} ${computer.name}',
),
containerTemplate(name: 'docker', image: 'docker.mydomain.com/library/docker:1.12.6', ttyEnabled: true, command: 'cat'),
containerTemplate(name: 'golang', image: 'docker.mydomain.com/library/golang:1.8.3', ttyEnabled: true, command: '')
],
volumes: [hostPathVolume(hostPath: '/var/run/docker.sock', mountPath: '/var/run/docker.sock')],
    idleMinutes: 10,
    instanceCap: 10
) {
def image_tag = "docker.mydomain.com/deploy-demo/demo-go:v0.1"
def workdir = "/go/src/demo-go"
node('docker-go') {
stage('setup') {
}
stage('clone') {
}
stage('compile') {
}
stage('build and push image') {
}
}
}
</code></pre>
|
<p>I am trying to deploy multiple identical docker containers on Google Container Engine. I am using kubectl for that following the instructions here: <a href="https://cloud.google.com/container-engine/docs/tutorials/hello-node" rel="nofollow noreferrer">https://cloud.google.com/container-engine/docs/tutorials/hello-node</a></p>
<p>The instructions describe how to run a redundant service managed by the load balancer, so when I contact the balancer, it sends my request to one of my redundant pods. And in that mode, it works fine.</p>
<p>But I need to do this differently. I need to be able to contact individual pods directly from the client. So I am trying to use --type=NodePort with my "kubectl expose deployment" command:</p>
<pre><code>mac-124307:hellonode ivm$ kubectl expose deployment hello-world --type=NodePort --port 9000 --target-port 9000
service "hello-world" exposed
mac-124307:hellonode ivm$ kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-world 10.15.253.149 <nodes> 9000:30513/TCP 21m
kubernetes 10.15.240.1 <none> 443/TCP 46m
</code></pre>
<p>The command does not complain, and I can use "gcloud compute instances list" to see external IP addressed of individual pods:</p>
<pre><code>mac-124307:hellonode ivm$ gcloud compute instances list
NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS
gke-hello-cluster-default-pool-402030b2-j60q us-central1-a n1-standard-1 10.128.0.3 104.197.72.212 RUNNING
gke-hello-cluster-default-pool-402030b2-q86r us-central1-a n1-standard-1 10.128.0.4 35.192.4.43 RUNNING
gke-hello-cluster-default-pool-402030b2-tf7t us-central1-a n1-standard-1 10.128.0.2 146.148.72.137 RUNNING
</code></pre>
<p>but when I try to connect to port 9000 at any of these IP addresses, my connection times-out.</p>
<pre><code>mac-124307:hellonode ivm$ curl http://104.197.72.212:9000/
... <time-out>
</code></pre>
<p>What am I doing wrong ?</p>
| <p>Note that the node port that was allocated is <code>30513</code>. You are using <code>9000</code>, that's the port for the ClusterIP, <code>10.15.253.149</code> that was assigned.</p>
<p>You also need to have port <code>30513</code> open on the firewall, as suggested by Eric.</p>
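<p>Putting those two together, something along these lines should work; port 30513 comes from your <code>kubectl get service</code> output and the firewall rule name is arbitrary:</p>
<pre><code>gcloud compute firewall-rules create allow-hello-nodeport --allow tcp:30513
curl http://104.197.72.212:30513/
</code></pre>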
<p>If you only need one-off access to this pod, you can also use <code>kubectl port-forward name-of-a-hello-pod 9000</code>, which will forward <code>127.0.0.1:9000</code> on your workstation directly to the pod. Of course this only works as long as <code>kubectl port-forward</code> is running.</p>
|
<p>Is it possible to ssh into a pod?
Eg: <code>ssh pod_ip</code></p>
<p>I know we can do this with the <code>kubectl</code> command. But I need to do <code>ssh</code> from my local linux machine which doesn't have <code>kubectl</code>.</p>
| <p>Firstly, you have to ensure that openssh-server is installed and running in the pod. If not, you can use <code>kubectl exec -it <pod-name> -n <namespace> -- bash</code> to access the pod.
If your pod is running Ubuntu, do <code>apt-get install -y openssh-server</code>. </p>
<p>Secondly, pods run in a virtual IP subnet assigned by the network service. They are reachable from any master or worker node in the cluster, so you can ssh to the pod IP from the host OS of any of those nodes. </p>
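<p>If you really need plain <code>ssh</code> from a machine that has no <code>kubectl</code>, one option is to expose the pod's sshd through a <code>NodePort</code> service. A sketch, assuming your pod carries the label <code>app: my-pod</code> and runs sshd on port 22:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: pod-ssh
spec:
  type: NodePort
  selector:
    app: my-pod      # assumption: adjust to your pod's actual labels
  ports:
  - port: 22
    targetPort: 22
</code></pre>
<p>After that you can connect from outside with <code>ssh user@<node-ip> -p <allocated-node-port></code>.</p>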
|
<p>We are building our docker-images, and adding a few metadata-labels during docker build, e.g:</p>
<pre><code>docker build --label no.company.version=1.2.3 .
</code></pre>
<p>When we at some point later in time use this image in a Kubernetes pod, we would like to retrieve those labels from the Kubernetes API. This would be useful for tracking metadata other than what is encoded in the image tag itself.</p>
<p>The labels are available through docker on the node, but I cannot find any way to retrieve them in Kubernetes.
Is this possible?</p>
| <p>I'm not aware of a way to get docker labels from Kubernetes. The downward api allows you to get k8s labels into a pod however: <a href="https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/</a></p>
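<p>A minimal sketch of that downward API volume, following the linked doc; note the label would have to be set on the pod itself (for example copied from the image metadata at deploy time), since docker image labels are not surfaced by Kubernetes:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: label-demo
  labels:
    version: "1.2.3"          # assumption: you duplicate the image label onto the pod
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/labels && sleep 3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: "labels"
        fieldRef:
          fieldPath: metadata.labels   # all pod labels end up in /etc/podinfo/labels
</code></pre>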
|
<p>Currently, I have working K8s API pods in a K8s service that connects to a K8s Redis service, with K8s pods of its own. The problem is, I am using NodePort, meaning BOTH are exposed to the public. I only want the API accessible to the public. The issue is that if I make the Redis service not public, the API can't see it. Is there a way to connect two Services without exposing one to the public?</p>
<p>This is my API service yaml: </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: app-api-svc
spec:
selector:
app: app-api
tier: api
ports:
- protocol: TCP
port: 5000
nodePort: 30400
type: NodePort
</code></pre>
<p>And this is my Redis service yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: app-api-redis-svc
spec:
selector:
app: app-api-redis
tier: celery_broker
ports:
- protocol: TCP
port: 6379
nodePort: 30537
type: NodePort
</code></pre>
| <p>First, configure the Redis service as a <code>ClusterIP</code> service. It will be private, visible only to other services. This can be done by removing the line with the <code>type</code> option.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: app-api-redis-svc
spec:
selector:
app: app-api-redis
tier: celery_broker
ports:
- protocol: TCP
port: 6379
targetPort: [the port exposed by the Redis pod]
</code></pre>
<p>Finally, when you configure the API to reach Redis, the address should be <code>app-api-redis-svc:6379</code></p>
<p>And that's all. I have a lot of services communicating with each other in this way. If this doesn't work for you, let me know in the comments.</p>
|
<p>I've been using <a href="https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/README.md" rel="nofollow noreferrer">https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/README.md</a> as a guide.</p>
<p>I've successfully got the above working. However, I'm not able to locate where the actual "NFS volume" lives; I'm guessing it's just an ordinary directory.</p>
<p>I have two goals here:</p>
<ul>
<li>To prepopulate the volume / directory initially</li>
<li>To make a backup of this volume / directory</li>
</ul>
<p><strong>nfs-pvc.yaml</strong></p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nfs
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
resources:
requests:
storage: 1Mi
</code></pre>
| <p>This deployment is a GCE-specific NFS example: the NFS server runs as a pod in the cluster, using <code>image: gcr.io/google_containers/volume-nfs:0.8</code>.</p>
<p>Look at this deployment file: <a href="https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/nfs-server-rc.yaml" rel="nofollow noreferrer">https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/nfs-server-rc.yaml</a></p>
<p>This pod acts as the NFS server. If you run <code>kubectl get pods</code> you will see that the nfs-server pod is running.</p>
<p><code>volume-nfs</code> is a Google image and I'm not familiar with all of its features, but I can see the <code>kubectl cp</code> option to copy files to and from the NFS server pod.</p>
<p><a href="https://kubernetes.io/docs/user-guide/kubectl/v1.7/#cp" rel="nofollow noreferrer">https://kubernetes.io/docs/user-guide/kubectl/v1.7/#cp</a></p>
<p>eg:</p>
<pre><code>kubectl cp /tmp/foo <some-namespace>/nfs-server:/exports
</code></pre>
<p>The above command will only work if the <code>tar</code> executable is present on the container.</p>
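<p>The same mechanism should work in the other direction for the backup goal, e.g.:</p>
<pre><code>kubectl cp <some-namespace>/nfs-server:/exports /tmp/nfs-backup
</code></pre>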
|
<p>I was able to cluster 2 nodes together in Kubernetes. The master node seems to be running fine but running any command on the worker node results in the error: "The connection to the server localhost:8080 was refused - did you specify the right host or port?"</p>
<p>From master (node1), </p>
<pre><code>$ kubectl get nodes
NAME STATUS AGE VERSION
node1 Ready 23h v1.7.3
node2 Ready 23h v1.7.3
</code></pre>
<p>From worker (node 2),</p>
<pre><code>$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
$ telnet localhost 8080
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
$ ping localhost
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.032 ms
</code></pre>
<p>I am not sure how to fix this issue. Any help is appreciated. </p>
<p>On executing "journalctl -xeu kubelet" I see:
"CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container", but this seems to be related to installing a pod network ... which I am not able to do because of the above error.</p>
<p>Thanks!</p>
| <p><code>kubectl</code> interfaces with <code>kube-apiserver</code> for cluster management. The command works on the master node because that's where <code>kube-apiserver</code> runs. On the worker nodes, only <code>kubelet</code> and <code>kube-proxy</code> are running.</p>
<p>In fact, <code>kubectl</code> is supposed to be run on a client (eg. laptop, desktop) and not on the kubernetes nodes.</p>
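<p>If you do want <code>kubectl</code> to work from the worker (or a laptop), one common approach, assuming a kubeadm-style install where the admin kubeconfig sits at <code>/etc/kubernetes/admin.conf</code> on the master, is to copy that file to the client:</p>
<pre><code># run on the worker / client machine; "master" is a placeholder hostname
mkdir -p ~/.kube
scp root@master:/etc/kubernetes/admin.conf ~/.kube/config
kubectl get nodes
</code></pre>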
|
<p>I've followed a Kubernetes tutorial similar to:
<a href="https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/declare-network-policy/</a> which created some basic networkpolicies as follows:</p>
<pre><code>root@server:~# kubectl get netpol -n policy-demo
NAME POD-SELECTOR AGE
access-nginx run=nginx 49m
default-deny <none> 50m
</code></pre>
<p>I saw that I can delete the entire namespace (pods included) using a command like "kubectl delete ns policy-demo", but I can't see what command I need to use if I just want to delete a single policy (or edit it even).</p>
<p>How would I use kubectl to delete just the "access-nginx" policy above?</p>
| <p>This should work. A similar command works at my end.</p>
<pre><code>kubectl -n policy-demo delete networkpolicy access-nginx
</code></pre>
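<p>Since you also asked about editing, the same pattern works with <code>edit</code>:</p>
<pre><code>kubectl -n policy-demo edit networkpolicy access-nginx
</code></pre>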
|
<p>So I'm getting this error when trying to install Kubernetes with AWS as the cloud provider. I'm installing with kubespray, but I narrowed it down to the command below, which I tried executing manually inside the hyperkube container. I'm guessing the actual error comes from not having the proper IAM role. I'm working on obtaining one, but that will take some time. I also see that it says the zone is not specified in the configuration file; I'm not really sure where to specify it. Can someone point me in the right direction on that? Also, just for testing purposes, I can manually obtain AWS access keys and session tokens. Is there a way to get hyperkube to use those?</p>
<pre><code>
root@15713968201f:/# /hyperkube apiserver --advertise-address=10.205.232.161 --etcd-servers=https://10.205.232.161:2379,https://10.205.235.70:2379 --etcd-quorum-read=true --etcd-cafile=/etc/ssl/etcd/ssl/ca.pem --etcd-certfile=/etc/ssl/etcd/ssl/node-ip-10-205-232-161.ec2.internal.pem --etcd-keyfile=/etc/ssl/etcd/ssl/node-ip-10-205-232-161.ec2.internal-key.pem --insecure-bind-address=127.0.0.1 --apiserver-count=2 --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota --service-cluster-ip-range=10.233.0.0/18 --service-node-port-range=30000-32767 --client-ca-file=/etc/kubernetes/ssl/ca.pem --basic-auth-file=/etc/kubernetes/users/known_users.csv --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem --token-auth-file=/etc/kubernetes/tokens/known_tokens.csv --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem --secure-port=6443 --insecure-port=8080 --storage-backend=etcd3 --v=2 --allow-privileged=true --cloud-provider=aws --anonymous-auth=False
I0817 22:08:00.258693 134 aws.go:762] Building AWS cloudprovider
I0817 22:08:00.258810 134 aws.go:725] Zone not specified in configuration file; querying AWS metadata service
Error: error setting the external host value: "aws" cloud provider could not be initialized: could not init cloud provider "aws": error finding instance i-045f83bfff733a224: error listing AWS instances: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Error: error setting the external host value: "aws" cloud provider could not be initialized: could not init cloud provider "aws": error finding instance i-045f83bfff733a224: error listing AWS instances: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
</code></pre>
| <p><a href="https://github.com/kubernetes/kubernetes/issues/11543" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/11543</a></p>
<p>I'd be willing to bet that your node iam role doesn't have enough access. I'm not acquainted with kubespray but I think the above issue should help you troubleshoot your problem.</p>
<p>This is the bit that I'm focusing on: </p>
<pre><code>error finding instance i-045f83bfff733a224: error listing AWS instances: NoCredentialProviders: no valid providers in chain.
</code></pre>
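<p>As a quick test for the second part of your question: the AWS SDK credential chain that prints <code>NoCredentialProviders</code> also reads the standard environment variables, so exporting your temporary credentials before starting the apiserver should let it list instances. This is only an interim hack, not a substitute for a proper node IAM role (e.g. one that allows <code>ec2:Describe*</code>):</p>
<pre><code># placeholders: use the temporary credentials you obtained manually
export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...
/hyperkube apiserver --cloud-provider=aws ...   # rest of the flags as in your original command
</code></pre>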
|
<p>Numerous forum posts and documentation pages describe extracting login info for the Kubernetes install from ~/.kube/config.</p>
<p>The problem I found: mine doesn't have a proper user account, it specifies a name and a token.</p>
<p>How do I get the account name so I can use the kubernetes-cockpit UI? Surprisingly there appears to be nothing on that topic - what to do if the config doesn't contain an account.</p>
| <p>It depends on how you use Cockpit.
According to <a href="http://cockpit-project.org/guide/latest/feature-kubernetes.html" rel="nofollow noreferrer">cockpit official page</a>:</p>
<p>Used in a standard cockpit session:</p>
<blockquote>
<p>If a user is able to use kubectl successfully when at their shell terminal, then that same user will able to use Kubernetes dashboard when logged into Cockpit</p>
</blockquote>
<p>I suppose this is your scenario, so if you didn't change the default settings, Cockpit will look for .kube/config itself, i.e. you should be able to log in without specifying your account.</p>
|
<p>I can't find this in the k8s documentation; I'm just wondering what default environment variables k8s creates in every container. Not user-created defaults, but (and this is just an example) maybe something like <code>{service_name}_PORT</code> or something like that. I just want to know what information is available in a container by default.</p>
| <p>From the <a href="https://kubernetes.io/docs/concepts/containers/container-environment/" rel="nofollow noreferrer">K8S Documentation</a>:</p>
<p><strong>Container information - ENV's</strong></p>
<p>The hostname of a Container is the name of the Pod in which the Container is running. It is available through the hostname command or the gethostname function call in libc.</p>
<p>The Pod name and namespace are available as environment variables.</p>
<p>These are the additional ENV's in a MiniKube cluster I have running:</p>
<pre><code>HOSTNAME=something-api-234234234-skm70
SHLVL=1
HOME=/root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_HOST=x.x.x.x
PWD=/
</code></pre>
<p>User defined environment variables from the Pod definition are also available to the Container, as are any environment variables specified statically in the Docker image.</p>
<p><strong>Cluster Information - ENV's</strong></p>
<p>A list of all services that were running when a Container was created is available to that Container as environment variables. Those environment variables match the syntax of Docker links.</p>
<p>For a service named <code>foo</code> that maps to a container port named <code>bar</code>, the following variables are defined:</p>
<pre><code>FOO_SERVICE_HOST=<the host the service is running on>
FOO_SERVICE_PORT=<the port the service is running on>
</code></pre>
|