<p>I am trying to deploy an application to Google Cloud Platform.
I have my back-end and my front-end running in separate Docker containers; each project has its own docker-compose file and Dockerfile, and I am using a container for my Postgres database. I pushed my images to Docker Hub, and I created Kubernetes Services and Deployments (using Kompose: I converted my docker-compose files to deployment.yaml files first).</p>
<p>Now I want to deploy my application to Google Cloud Platform.</p>
<ul>
<li><p>I created a cluster </p></li>
<li><p>I created two separate deployments for my front-end and my back-end</p>
<pre><code>kubectl run backendapp --image=docker.io/[dockerhub username]/backendapp --port=9000
kubectl run frontendapp --image=docker.io/[dockerhub username]/frontendapp --port=3000
</code></pre></li>
</ul>
<p>When I access my front-end through its external IP address, it seems to work (an interface is displayed), but it is impossible to reach my back-end through its external IP.</p>
<p>So my questions are:</p>
<ul>
<li>what are the correct steps to follow after running my Kubernetes services and
dockerizing my application in order to deploy it on Google Cloud Platform?</li>
<li>how to link my front-end and my back-end services once deployed?</li>
<li>should I create a deployment for my database also in Google Cloud Platform?</li>
</ul>
<p>This is the description of my backend service </p>
<pre><code>Name:                     backendapp
Namespace:                default
Labels:                   run=backendapp
Annotations:              <none>
Selector:                 run=backendapp
Type:                     LoadBalancer
IP:                       10.111.73.214
Port:                     <unset>  9000/TCP
TargetPort:               9000/TCP
NodePort:                 <unset>  32449/TCP
Endpoints:
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
</code></pre>
<p>and this is the description for my frontend service</p>
<pre><code>Name: frontendapp
Namespace: default
Labels: run=frontendapp
Annotations: <none>
Selector: run=frontendapp
Type: LoadBalancer
IP: 10.110.133.172
Port: <unset> 3000/TCP
TargetPort: 3000/TCP
NodePort: <unset> 30016/TCP
Endpoints:
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>I am able to access my frontend service via</p>
<blockquote>
<p>clusterIP:NodePort,
but I am unable to do so with the backend.</p>
</blockquote>
| <blockquote>
<p>what are the correct steps to follow after running my Kubernetes
services and dockerizing my application in order to deploy it on
Google Cloud Platform?</p>
</blockquote>
<p>After you have created a cluster in GCP, you need to create Deployments and Services for your front-end and back-end applications. You can read the GKE documentation to understand how to create a <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app" rel="nofollow noreferrer">Deployment</a> and <a href="https://cloud.google.com/kubernetes-engine/docs/quickstart" rel="nofollow noreferrer">Services</a>.</p>
<blockquote>
<p>how to link my front-end and my back-end services once deployed?</p>
</blockquote>
<p>Pods in Kubernetes communicate using Services; they also use Services to expose themselves to the outside world. In GCP, you can use a LoadBalancer to expose HTTP/HTTPS and TCP traffic. For more information, you can look through the Codelab <a href="https://codelabs.developers.google.com/codelabs/gcp-infra-gke/index.html?index=..%2F..%2Fcloud#0" rel="nofollow noreferrer">Kubernetes, GKE, and Load Balancing</a> example.</p>
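<p>For illustration only (the names, labels and ports follow your question; everything else is an assumption, not a prescribed setup), a minimal Service that lets the front-end reach the back-end by its DNS name could look like the sketch below; inside the cluster the front-end would then call <code>http://backendapp:9000</code> instead of an external IP:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: backendapp
spec:
  type: ClusterIP        # internal only; use LoadBalancer if it must also be reachable from outside
  selector:
    run: backendapp      # matches the label created by `kubectl run backendapp ...`
  ports:
  - port: 9000
    targetPort: 9000
</code></pre>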
<blockquote>
<p>should I create a deployment for my database also in Google Cloud
Platform?</p>
</blockquote>
<p>If you want to run your database in Kubernetes, you need to add a Deployment and a Service for it. If you don't, you need to configure access from your applications to the database that lives outside Kubernetes.</p>
|
<p>Using Google Kubernetes Engine:</p>
<pre><code>kubectl cluster-info
kubernetes-dashboard is running at https://x.x.x.x/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
</code></pre>
<p>If I go to the link:</p>
<p>I get to a forbidden page, and if I accept I get the following:</p>
<pre><code>{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "services \"kubernetes-dashboard\" is forbidden: User \"system:anonymous\" cannot get services/proxy in the namespace \"kube-system\": Unknown user \"system:anonymous\"",
"reason": "Forbidden",
"details": {
"name": "kubernetes-dashboard",
"kind": "services"
},
"code": 403
}
</code></pre>
<p>Is it not possible to access the dashboard?</p>
| <p>This message means you need to log in to the UI Dashboard with valid authentication; it can be a token or a cluster (kubeconfig) config. If you want to log in using a token, follow the steps below:</p>
<ol>
<li><p>Get the list of all secret tokens:</p>
<p><code>kubectl -n kube-system get secret</code></p></li>
<li><p>Get the token:</p>
<p><code>kubectl -n kube-system describe secret [NameOfToken]</code></p></li>
<li><p>Run the proxy:</p>
<p><code>kubectl proxy</code></p></li>
<li><p>Open the dashboard link:</p>
<p><a href="http://localhost:8001/ui" rel="nofollow noreferrer">http://localhost:8001/ui</a></p></li>
<li><p>Copy the token, join it into a single line, and paste it into the dashboard login form.</p></li>
</ol>
|
<p><strong>Problem</strong></p>
<p>I have a Kafka setup with three brokers in Kubernetes, set up according to the guide at <a href="https://github.com/Yolean/kubernetes-kafka" rel="nofollow noreferrer">https://github.com/Yolean/kubernetes-kafka</a>. The following error message appears when producing messages from a Java client.</p>
<pre><code>2018-06-06 11:15:44.103 ERROR 1 --- [ad | producer-1] o.s.k.support.LoggingProducerListener : Exception thrown when sending a message with key='null' and payload='[...redacted...]':
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for topicname-0: 30001 ms has passed since last append
</code></pre>
<p><strong>Detailed setup</strong></p>
<p>The listeners are set up to allow SSL producers/consumers from the outside world:</p>
<pre><code>advertised.host.name = null
advertised.listeners = OUTSIDE://kafka-0.mydomain.com:32400,PLAINTEXT://:9092
advertised.port = null
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL,OUTSIDE:SSL
listeners = OUTSIDE://:9094,PLAINTEXT://:9092
inter.broker.listener.name = PLAINTEXT
host.name =
port.name = 9092
</code></pre>
<p>The OUTSIDE listeners are listening on kafka-0.mydomain.com, kafka-1.mydomain.com, etc. The plaintext listeners are listening on any IP, since they are cluster-local to Kubernetes. </p>
<p>The producer settings:</p>
<pre><code>kafka:
bootstrap-servers: kafka.mydomain.com:9092
properties:
security.protocol: SSL
producer:
batch-size: 16384
buffer-memory: 1048576 # 1MB
retries: 1
ssl:
key-password: redacted
keystore-location: file:/var/private/ssl/kafka.client.keystore.jks
keystore-password: redacted
truststore-location: file:/var/private/ssl/kafka.client.truststore.jks
truststore-password: redacted
</code></pre>
<p>In addition I set <code>linger.ms</code> to 100 in code, which forces messages to be transmitted within 100ms. Linger time is set intentionally low, because the use case requires minimal delays.</p>
<p><strong>Analysis</strong></p>
<ul>
<li>The errors started appearing when the broker was moved to SSL.</li>
<li>On the server side everything is running as expected, there are no errors in the log and I can connect to the broker manually with a Kafka client tool. </li>
<li>The errors appear intermittently: sometimes it sends 30+ messages per second, sometimes it sends nothing at all. It may work like a charm for hours and then just spam timeouts for a little while.</li>
<li>Clocks for the client and server are in sync (UTC).</li>
<li>CPU is consistently around 20% for both the producing and server side.</li>
</ul>
<p>What could it be?</p>
| <p>This problem normally occurs when the producer is faster than the brokers. The reason this happens with your setup seems to be that SSL needs extra CPU, which may slow down the brokers. In any case, check the following:</p>
<ul>
<li>Check whether you are producing messages at a constant rate; from what you describe, it seems you are having spikes.</li>
<li>Another possibility is that other Kafka clients in the cluster (producers or consumers), not necessarily using the same topic, are overloading the brokers (check the brokers' CPU/network).</li>
</ul>
<p>To minimize whatever is causing these timeouts, you should increase <code>buffer-memory</code> to more than 32MB; note that 32MB is the default and you are setting it lower. The lower it is, the more easily the buffer gets full; if that happens, sending blocks for at most <code>max.block.ms</code>, and a request times out after <code>request.timeout.ms</code>.</p>
<p>Another parameter you should increase is <code>batch-size</code>; note that this parameter is in bytes, not in number of messages. <code>linger.ms</code> should also be increased, but if the producer's messages are created at user-request time, do not increase it very much; a good choice could be 1-4 ms.</p>
<p>Messages will be sent when the <code>batch.size</code> is reached, or when more than <code>linger.ms</code> has passed without accumulating <code>batch.size</code> worth of data. Big batches increase throughput in normal cases, but if the linger is too low it doesn't help, because you will send before you have enough data to fill the <code>batch.size</code>.</p>
<p>Also recheck in the producer logs that the properties are loaded correctly.</p>
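<p>Purely as an illustration (the values here are assumptions to tune for your workload, and the exact property keys depend on your Spring Boot / spring-kafka version), the producer section could be adjusted roughly like this:</p>
<pre><code>kafka:
  producer:
    batch-size: 65536          # bytes, not message count
    buffer-memory: 33554432    # 32MB, the Kafka default (the original config set only 1MB)
    retries: 1
    properties:
      linger.ms: 2             # keep it small (1-4 ms) if messages are produced at request time
</code></pre>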
|
<p>This is my worker node:</p>
<pre><code>root@ivu:~# kubeadm join 10.16.70.174:6443 --token hl36mu.0uptj0rp3x1lfw6n --discovery-token-ca-cert-hash sha256:daac28160d160f938b82b8c720cfc91dd9e6988d743306f3aecb42e4fb114f19 --ignore-preflight-errors=swap
[preflight] Running pre-flight checks.
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[discovery] Trying to connect to API Server "10.16.70.174:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.16.70.174:6443"
[discovery] Requesting info from "https://10.16.70.174:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.16.70.174:6443"
[discovery] Successfully established connection with API Server "10.16.70.174:6443"
This node has joined the cluster:
* Certificate signing request was sent to master and a response
was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
</code></pre>
<p>While checking on the master node using the command <code>kubectl get nodes</code>, I am only able to see the master:</p>
<pre><code>ivum01@ivum01-HP-Pro-3330-SFF:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
ivum01-hp-pro-3330-sff Ready master 36m v1.10.0
</code></pre>
<hr>
<p>To answer the questions asked:</p>
<ol>
<li><code>docker</code>, <code>kubelet</code>, <code>kubeadm</code> and <code>kubectl</code> are installed fine</li>
<li><code>kubectl get node</code> cannot see the newly added node; of course, <code>kubectl get pods --all-namespaces</code> also shows nothing for this node;</li>
<li><code>docker</code> on the new node shows no activity from the <code>kubeadm</code> command (no k8s images pulled, no running containers for it)</li>
<li>most importantly, <code>kubelet</code> is not running on the worker node </li>
<li><p>run <code>kubelet</code> output: </p>
<blockquote>
<p>Failed to get system container stats for "/user.slice/user-1000.slice/session-1.scope": failed to get cgroup stats for "/user.slice/user-1000.slice/session-1.scope": failed to get container info for "/user.slice/user-1000.slice/session-1.scope": unknown container "/user.slice/user-1000.slice/session-1.scope"</p>
</blockquote>
<p>the same as described in <a href="https://github.com/kubernetes/kubernetes/issues/57080" rel="nofollow noreferrer">this issue</a></p></li>
<li>tearing down and resetting the cluster (<code>kubeadm reset</code>) and redoing everything caused no problem in my case.</li>
</ol>
| <p>I had this problem, and it was solved by ensuring that the cgroup driver on the <em>worker</em> nodes was also set properly.</p>
<p>check with:</p>
<pre><code>docker info | grep -i cgroup
cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
</code></pre>
<p>set it with:</p>
<pre><code>sed -i "s/cgroup-driver=systemd/cgroup-driver=cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
</code></pre>
<p>then restart kubelet service and rejoin the cluster:</p>
<pre><code>systemctl daemon-reload
systemctl restart kubelet
kubeadm reset
kubeadm join ...
</code></pre>
<p>Info from docs: <a href="https://kubernetes.io/docs/tasks/tools/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-master-node" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/tools/install-kubeadm/#configure-cgroup-driver-used-by-kubelet-on-master-node</a></p>
|
<p>I set up a high-availability Kubernetes cluster by following the official <a href="https://kubernetes.io/docs/setup/independent/high-availability/" rel="nofollow noreferrer">Creating HA clusters with kubeadm</a> guide. It is an experimental cluster for exploring the feasibility of an on-premises high-availability deployment, and as such I created the cluster on six CentOS 7 virtual machines hosted on VMware Workstation - three master nodes and three worker nodes.</p>
<p>It was running fine after initial setup, but after I shut down everything last night and restarted all the VMs this morning, kube-apiserver is no longer starting on any of the master nodes. It is failing on all nodes with a message stating that it is "unable to create storage backend (context deadline exceeded)":</p>
<pre><code>F0614 20:18:43.297064 1 storage_decorator.go:57] Unable to create storage backend: config (&{ /registry [https://192.168.56.10.localdomain:2379 https://192.168.56.11.localdomain:2379 https://192.168.56.12.localdomain:2379] /etc/pki/tls/private/client-key.pem /etc/pki/tls/certs/client.pem /etc/pki/ca-trust/source/anchors/ca.pem true false 1000 0xc42047e100 <nil> 5m0s 1m0s}), err (context deadline exceeded)
</code></pre>
<p>That suggests a problem with etcd, but the etcd cluster reports healthy, and I can successfully use it to set and query values using the same certs provided to kube-apiserver.</p>
<p>My versions are:</p>
<pre><code>CentOS 7.5.1804
Kubernetes - 1.10.4
Docker - 18.03.1-ce
etcd - 3.1.17
keepalived - 1.3.5
</code></pre>
<p>And though these all worked fine together last night, in an effort to rule out version conflicts, I tried adding <code>--storage-backend=etcd3</code> to the kube-apiserver.yaml manifest file and downgrading Docker to 17.03.2-ce. Neither helped.</p>
<p>I also disabled firewalld to ensure it wasn't blocking any etcd traffic. Again, that did not help (nor did I see any evidence of dropped connections)</p>
<p>I don't know how to dig any deeper to discover why the kube-apiserver can't create its storage backend. So far my experiment with high-availability is a failure.</p>
| <p>The details at the end of the error message (<code>context deadline exceeded</code>) suggest a timeout (Go's <a href="https://golang.org/pkg/context/" rel="nofollow noreferrer">context package</a> is used for handling timeouts). But I wasn't seeing any slowness when I accessed the etcd cluster directly via etcdctl, so I set up a tcpdump capture to see if it would tell me anything more about what was happening between kube-apiserver and etcd. I filtered on port 2379, which is etcd's client request port:</p>
<pre><code>tcpdump -i any port 2379
</code></pre>
<p>I did not see any activity at first, so I forced activity by querying etcd directly via etcdctl. That worked, and it showed the expected traffic to port 2379.</p>
<p>At this point I was still stuck, because it appeared that kube-apiserver simply wasn't calling etcd. But then a few mysterious entries appeared in tcpdump's output:</p>
<pre><code>18:04:30.912541 IP master0.34480 > unallocated.barefruit.co.uk.2379: Flags [S], seq 1974036339, win 29200, options [mss 1460,sackOK,TS val 4294906938 ecr 0,nop,wscale 7], length 0
18:04:32.902298 IP master0.34476 > unallocated.barefruit.co.uk.2379: Flags [S], seq 3960458101, win 29200, options [mss 1460,sackOK,TS val 4294908928 ecr 0,nop,wscale 7], length 0
18:04:32.910289 IP master0.34478 > unallocated.barefruit.co.uk.2379: Flags [S], seq 2100196833, win 29200, options [mss 1460,sackOK,TS val 4294908936 ecr 0,nop,wscale 7], length 0
</code></pre>
<p>What is unallocated.barefruit.co.uk and why is a process on my master node trying to make an etcd client request to it? </p>
<p>A quick google search reveals that unallocated.barefruit.co.uk is a DNS "enhancing" service that redirects bad DNS queries. </p>
<p>My nodes aren't registered in DNS because this is just an experimental cluster. I have entries for them in /etc/hosts, but that's it. Apparently something in kube-apiserver is attempting to resolve my etcd node names (e.g. master0.localdomain) and is querying DNS before /etc/hosts (I always thought /etc/hosts took priority). And rather than rejecting the invalid names, my ISP (Verizon FIOS) is using this "enhanced" DNS service that redirects to unallocated.barefruit.co.uk which, surprisingly enough, isn't running an etcd cluster for me.</p>
<p>I edited the network configuration on my master nodes to prove out my hypothesis, adding explicit DNS settings pointing to google's servers 8.8.8.8 and 8.8.4.4 that are not "enhanced". I then rebooted, and the cluster came right up.</p>
<p>So what really changed between last night and today? My experimental cluster is running on my laptop, and yesterday I was working in the office (no FIOS), while today I was working at home (connected to FIOS). Ugh. Thanks Verizon!</p>
<p>I'm still not sure why kube-apiserver seems to be prioritizing DNS over /etc/hosts. But I guess the lesson is to either make sure your node names have valid DNS entries or specify everything by IP address. Anyone have any thoughts as to which is best practice?</p>
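<p>For reference, going the IP route would mean pointing the <code>--etcd-servers</code> flag at plain IPs in the static pod manifest on each master (a sketch using the IPs from my setup; adjust the path and the rest of the flags to your own manifests):</p>
<pre><code># /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --etcd-servers=https://192.168.56.10:2379,https://192.168.56.11:2379,https://192.168.56.12:2379
    # ... other flags unchanged
</code></pre>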
|
<p>I have been running a pod for more than a week and there have been no restarts since it started. But I am still unable to view the logs from when it started; it only shows logs from the last two days. Is there any log rotation policy for the container, and how can I control the rotation, e.g. based on size or date?</p>
<p>I tried the command below, but it shows only the last two days of logs.</p>
<pre><code>kubectl logs POD_NAME --since=0
</code></pre>
<p>Is there any other way?</p>
| <blockquote>
<p>Is there any log rotation policy for the container and how to control the rotation like based on size or date</p>
</blockquote>
<p>The log rotation is controlled by the docker <code>--log-driver</code> and <code>--log-opts</code> (or their <a href="https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file" rel="noreferrer"><code>daemon.json</code></a> equivalent), which for any sane system has file size and file count limits to prevent a run-away service from blowing out the disk on the docker host. <em>that answer also assumes you are using docker, but that's a fairly safe assumption</em></p>
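<p>For illustration, a minimal <code>daemon.json</code> on the docker host could cap the per-container log files like this (the sizes here are just example values):</p>
<pre><code>{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "5"
  }
}
</code></pre>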
<blockquote>
<p>Is there any other way?</p>
</blockquote>
<p>I <strong>strongly</strong> advise something like <a href="https://github.com/kubernetes/kubernetes/tree/v1.10.4/cluster/addons/fluentd-elasticsearch" rel="noreferrer">fluentd-elasticsearch</a>, or <a href="https://github.com/Graylog2/graylog-docker#readme" rel="noreferrer">graylog2</a>, or Sumologic, or Splunk, or whatever in order to egress those logs from the hosts. No serious cluster would rely on infinite log disks nor on using <code>kubectl logs</code> in a <code>for</code> loop to search the output of Pods. To say nothing of egressing the logs from the kubernetes containers themselves, which is almost essential for keeping tabs on the health of the cluster.</p>
|
<p>I have an integration Kubernetes cluster in AWS, and I want to conduct end-to-end tests on that cluster.</p>
<p>I currently use Deployments and Services.</p>
<p>The proposed approach is to use an Ingress and configure it to use cookie injection for controlling access to a web page that implements the following logic:</p>
<ol>
<li><p>When <em>special</em> header is recognized in request -> allow the agent to enter the webpage (used for automated tests)</p></li>
<li><p>When <em>special</em> header is not recognized in request -> display popup for basic http authentication (used for normal users).</p></li>
</ol>
<p>I also want to use a single entry point for both cases (the same url).</p>
<p>I've been browsing official documentation, but didn't find the specified use case, nor did I find any examples that might be close to what I want to achieve.</p>
<p>I'm interested if anyone has used similar approach or something that might be similar in use.</p>
<p>Many thanks!</p>
| <p>It sounds like either <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#configuration-snippet" rel="nofollow noreferrer">configuration-snippet</a> or a full-blown <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/custom-template/" rel="nofollow noreferrer">custom template</a> may do what you want, along with the nginx <a href="http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#if" rel="nofollow noreferrer"><code>if</code></a> and <a href="http://nginx.org/en/docs/http/ngx_http_headers_module.html#add_header" rel="nofollow noreferrer"><code>add_header</code></a> using something like:</p>
<pre><code>if ($http_user_agent ~ "My Sekrit Agent/666") {
add_header Authentication "Basic dXNlcm5hbWU6cGFzc3dvcmQ=";
}
</code></pre>
|
<p>I set up my cluster with one master and two nodes. I can create pods on the nodes. If my master node fails (reboots) and I use <code>kubeadm reset</code> and then <code>kubeadm init</code>, I lose all my pods, deployments and services.</p>
<p>Am I losing my pods because of the reset? What should I do?</p>
<p>Some similar questions:</p>
<p><a href="https://stackpointcloud.com/community/question/how-do-i-restart-my-kubernetes-cluster" rel="nofollow noreferrer">https://stackpointcloud.com/community/question/how-do-i-restart-my-kubernetes-cluster</a></p>
<p><a href="https://stackoverflow.com/questions/48362855/is-there-a-best-practice-to-reboot-a-cluster">Is there a best practice to reboot a cluster</a></p>
| <p><code>kubeadm reset</code> on the master deletes all configuration (files and a database too). There is no way back.</p>
<p>You should not run <code>kubeadm init</code> when you reboot the master. <code>kubeadm init</code> is a one-off action to bootstrap the cluster. When the master is rebooted, your OS's init system (systemd, upstart, ...) should start <code>kubelet</code>, which in turn starts the master components (as containers). An exception is if your cluster is <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-alpha/#cmd-phase-self-hosting" rel="nofollow noreferrer">self-hosting</a>.</p>
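<p>A quick sanity check after a reboot (assuming a systemd-based setup, which kubeadm installs use by default) would be something like:</p>
<pre><code>systemctl status kubelet          # should be active; it brings the control-plane containers back up
kubectl get nodes                 # the master should return to Ready on its own
kubectl get pods -n kube-system
</code></pre>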
|
<p>I am trying to provision PV with RBD using <a href="https://github.com/kubernetes/kubernetes/tree/release-1.7/examples/persistent-volume-provisioning/rbd" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/tree/release-1.7/examples/persistent-volume-provisioning/rbd</a></p>
<p>But I have run into an issue where my PVC is stuck in the Pending state without any meaningful log:</p>
<pre><code>root@ubuntu:~# kubectl describe pvc
Name: claim1
Namespace: default
StorageClass: fast
Status: Pending
Volume:
Labels: <none>
Annotations: volume.beta.kubernetes.io/storage-provisioner=kubernetes.io/rbd
Capacity:
Access Modes:
Events: <none>
</code></pre>
| <p>It seems you don't have a volume defined in your PVC:</p>
<p><a href="https://i.stack.imgur.com/54mos.png" rel="noreferrer"><img src="https://i.stack.imgur.com/54mos.png" alt="missing_volume"></a></p>
<p>I have also seen this problem when the <code>volumeName</code> is not correct or does not exist. Indeed, there are no logs or events that show the problem.</p>
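<p>As an illustration of where <code>volumeName</code> goes (the claim name and storage class follow the question; the PV name below is hypothetical and only needed when you want to bind to a specific, pre-created PV instead of relying on dynamic provisioning):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim1
spec:
  storageClassName: fast
  accessModes: [ "ReadWriteOnce" ]
  resources:
    requests:
      storage: 1Gi
  # volumeName: my-existing-rbd-pv   # hypothetical; must exactly match an existing PV
</code></pre>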
<p>If all is working fine, the status should be :</p>
<pre><code>Status: Bound
</code></pre>
|
<p>I have developed a Node.js REST API, for which I have built a Docker image, and I am now deploying it as a pod on my Kubernetes cluster.</p>
<p>Dockerfile</p>
<pre><code>FROM mhart/alpine-node:8
WORKDIR /home/appHome/
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
</code></pre>
<p>However, I am passing <code>npm start</code> as my container startup command.</p>
<p>But how do I test my Node.js application with <code>npm test</code> and verify the test results?</p>
<p>When I was doing this on my local system, I used to run the application in the background and then run</p>
<pre><code>npm test
</code></pre>
<p>in another window.
How can I achieve the same while deploying it on the Kubernetes cluster, and remove the test part once it is successful?</p>
<p>Below is the scripts section of my package.json:</p>
<pre><code>"scripts": {
"test": "mocha ./test/scheduledTaskTest.js",
"start": "nodemon app.js --exec babel-node --presets es2015,stage-2",
"cmd": "set NODE_ENV=devConfig&& npm start"
},
</code></pre>
<p>Below are the deployment and job yaml files.</p>
<p>Deployment.yaml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: scheduled-task-test
spec:
replicas: 1
selector:
matchLabels:
app: scheduled-task-test
template:
metadata:
labels:
app: scheduled-task-test
spec: # pod spec
containers:
- name: st-c1
image: 104466/st
imagePullPolicy: Always
env:
- name: NODE_ENV
value: devConfig
- name: st-c2
image: 104466/st
imagePullPolicy: Always
command: [ 'sh','-c','npm test && sleep 3600']
</code></pre>
<p>TestJob.yaml</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: scheduled-task-test-job
labels:
purpose: test
spec:
template:
spec:
containers:
- name: st-c1
image: 104466/st
imagePullPolicy: Always
env:
- name: NODE_ENV
value: devConfig
- name: st-c2
image: 104466/st
imagePullPolicy: Always
command: [ 'sh','-c','npm test && sleep 3600']
restartPolicy: Never
</code></pre>
<p>Pod status </p>
<pre><code>scheduled-task-test-7d86c87d44-q9zdv 2/2 Running 1 8m 100.96.9.87 ip-172-20-34-139.us-west-2.compute.internal
scheduled-task-test-job-gt9hx 1/2 Error 0 7m 100.96.9.88 ip-172-20-34-139.us-west-2.compute.internal
</code></pre>
<p>Successfully executed test case pod log (deployment results)</p>
<pre><code>kubectl logs scheduled-task-test-7d86c87d44-q9zdv -c st-c2
> [email protected] test /home/appHome/ScheduledTask
> mocha ./test/scheduledTaskTest.js
Testing listTask api
✓ List all the Tasks (435ms)
Testing addTask api
✓ Add a new task (438ms)
Testing updateTask api
✓ Update task
Testing delete task api
✓ Delete task (434ms)
4 passing (1s)
</code></pre>
<p>Failed test logs (containers executed as a Job):</p>
<pre><code>kubectl logs scheduled-task-test-job-gt9hx -c st-c2
> [email protected] test /home/appHome/ScheduledTask
> mocha ./test/scheduledTaskTest.js
Testing listTask api
1) List all the Tasks
Testing addTask api
2) Add a new task
Testing updateTask api
3) Update task
Testing delete task api
4) Delete task
0 passing (61ms)
4 failing
1) Testing listTask api
List all the Tasks:
Uncaught TypeError: Cannot read property 'body' of undefined
at Request._callback (test/scheduledTaskTest.js:24:29)
at self.callback (node_modules/request/request.js:186:22)
at Request.onRequestError (node_modules/request/request.js:878:8)
at Socket.socketErrorListener (_http_client.js:387:9)
at emitErrorNT (internal/streams/destroy.js:64:8)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)
2) Testing addTask api
Add a new task :
Uncaught TypeError: Cannot read property 'body' of undefined
at Request._callback (test/scheduledTaskTest.js:43:20)
at self.callback (node_modules/request/request.js:186:22)
at Request.onRequestError (node_modules/request/request.js:878:8)
at Socket.socketErrorListener (_http_client.js:387:9)
at emitErrorNT (internal/streams/destroy.js:64:8)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)
3) Testing updateTask api
Update task :
Uncaught TypeError: Cannot read property 'body' of undefined
at Request._callback (test/scheduledTaskTest.js:63:20)
at self.callback (node_modules/request/request.js:186:22)
at Request.onRequestError (node_modules/request/request.js:878:8)
at Socket.socketErrorListener (_http_client.js:387:9)
at emitErrorNT (internal/streams/destroy.js:64:8)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)
4) Testing delete task api
Delete task :
Uncaught TypeError: Cannot read property 'body' of undefined
at Request._callback (test/scheduledTaskTest.js:83:20)
at self.callback (node_modules/request/request.js:186:22)
at Request.onRequestError (node_modules/request/request.js:878:8)
at Socket.socketErrorListener (_http_client.js:387:9)
at emitErrorNT (internal/streams/destroy.js:64:8)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)
npm ERR! Test failed. See above for more details.
</code></pre>
<p>Describe output of the pod (containers deployed as a Job):</p>
<pre><code>kubectl describe pod scheduled-task-test-job-gt9hx
Name: scheduled-task-test-job-gt9hx
Namespace: default
Node: ip-172-20-34-139.us-west-2.compute.internal/172.20.34.139
Start Time: Tue, 19 Jun 2018 16:28:06 +0000
Labels: controller-uid=bf5569e5-73dd-11e8-8ede-02924b27b126
job-name=scheduled-task-test-job
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"Job","namespace":"default","name":"scheduled-task-test-job","uid":"bf5569e5-73dd-11e8-8ede-02924b27b126","...
kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container st-c1; cpu request for container st-c2
Status: Running
IP: 100.96.9.88
Created By: Job/scheduled-task-test-job
Controlled By: Job/scheduled-task-test-job
Containers:
st-c1:
Container ID: docker://7fc17e717921020cec074774ccb87956979712e7de55480be6e6bd586b28ce6d
Image: 104466/st
Image ID: docker-pullable://104466/st@sha256:91414cc48e86975041c233329b1814d7c6d2b76bc839a24fa68e99f92750390f
Port: <none>
State: Running
Started: Tue, 19 Jun 2018 16:28:08 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 100m
Environment:
NODE_ENV: devConfig
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-2tvsm (ro)
st-c2:
Container ID: docker://95837799d34e1e14f7718acc1dc745042b94a33f115aab11a03a25ab4a569c18
Image: 104466/st
Image ID: docker-pullable://104466/st@sha256:91414cc48e86975041c233329b1814d7c6d2b76bc839a24fa68e99f92750390f
Port: <none>
Command:
sh
-c
npm test && sleep 3600
State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 19 Jun 2018 16:28:10 +0000
Finished: Tue, 19 Jun 2018 16:28:12 +0000
Ready: False
Restart Count: 0
Requests:
cpu: 100m
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-2tvsm (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-2tvsm:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-2tvsm
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
</code></pre>
| <p>If you want to use Kubernetes only, then using Jobs would be the easiest way. A job creates one or more pods and ensures that a specified number of them successfully terminate. As pods successfully complete, the job tracks the successful completions. When a specified number of successful completions is reached, the job itself is complete.</p>
<p>Rough plan:</p>
<p><strong>1. Create a Job for your application.</strong><br>
You can use the Deployment of your application as a template. You need to change <code>kind: Job</code> and add <code>spec.containers.command: ["npm", "test"]</code>; the latter will replace the <code>CMD [ "npm", "start" ]</code> defined in your Dockerfile. Here is an example:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: npm-test-job
labels:
purpose: test
spec:
template:
spec:
containers:
- name: npm-test-job
image: <your-image>
command: ["npm", "test"]
restartPolicy: Never
</code></pre>
<p><strong>2. Run the Job</strong><br>
Run the Job and wait until it is complete:</p>
<pre><code>kubectl create -f npm-test-job
</code></pre>
<p><strong>3. Check the status of the Job</strong><br>
Check status of your Job, for example:</p>
<pre><code>kubectl describe jobs npm-test-job | grep "Pods Statuses"
</code></pre>
<p><strong>4. Run the Deployment</strong><br>
If test finished successfully, you can start you deployment:</p>
<pre><code>kubectl create -f npm-deployment
</code></pre>
<p>Of course, you need to automate this process. Therefore, you need to develop a script implementing this logic.</p>
<hr>
<p>But my advice here: it is better to use a CI tool, such as Jenkins. It integrates easily with Kubernetes and provides a vast amount of functionality for such cases.</p>
|
<p>Hi,</p>
<p>We're trying to get our website working on Kubernetes (running in a container using nginx). We use an Ingress to route to the site; here is our configuration:</p>
<p>nginx-conf:</p>
<pre><code>server {
listen 80;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html =404;
}
}
</code></pre>
<p>Kubernetes deployment:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: mywebsite
spec:
replicas: 2
template:
metadata:
labels:
app: mywebsite
spec:
containers:
- name: mywebsite
image: containerurl/xxx/mywebsite:latest
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: mywebsite
spec:
ports:
- port: 82
targetPort: 80
selector:
app: mywebsite
</code></pre>
<p>Ingress config:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: myIngress
annotations:
kubernetes.io/tls-acme: "true"
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
tls:
- hosts:
- xxx.com
secretName: tls-secret
rules:
- host: xxx.com
http:
paths:
- path: /mywebsite
backend:
serviceName: mywebsite
servicePort: 82
</code></pre>
<p>When I go to xxx.com/mywebsite, the index.html loads, but the CSS and JS are not loaded (404):</p>
<p>index.html:</p>
<pre><code><!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>My Website</title>
<meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no">
<link href="/styles.30457fcf45ea2da7cf6a.css" rel="stylesheet"></head>
<body>
<div id="react-root"></div>
<script type="text/javascript" src="/app.171d6a95cf24d8abab14.js"></script></body>
</html>
</code></pre>
<p>...because it tries to get the resources from here, for example for the CSS:</p>
<pre><code>xxx.com/styles.30457fcf45ea2da7cf6a.css
</code></pre>
<p>...instead of:</p>
<p>xxx.com/mywebsite/styles.30457fcf45ea2da7cf6a.css</p>
<p>I've tried different things like:</p>
<pre><code>nginx.ingress.kubernetes.io/add-base-url
nginx.ingress.kubernetes.io/app-root
</code></pre>
<p>...etc., but nothing seems to work.</p>
<p>Any ideas? Thanks for your help.</p>
<p>Regards,
Peter</p>
| <p>This is not a routing problem on nginx's part, but the browser trying to access an absolute URI at the root of your domain. Use relative URIs (remove the leading slash):</p>
<pre><code><link href="styles.30457fcf45ea2da7cf6a.css" rel="stylesheet"></head>
<script type="text/javascript" src="app.171d6a95cf24d8abab14.js"></script></body>
</code></pre>
|
<p>Does anyone know how to import the data inside my dump.sql file into a Kubernetes pod, either:</p>
<p>Directly, the same way as you would with Docker containers:</p>
<pre><code>docker exec -i container_name mysql -uroot --password=secret database < Dump.sql
</code></pre>
<p>Or by using the data stored in an existing Docker container volume and passing it to the pod?</p>
| <p>In case other people are searching for this: </p>
<pre><code>kubectl -n namespace exec -i my_sql_pod_name -- mysql -u user -ppassword < my_local_dump.sql
</code></pre>
|
<p>The following is the k8s definition used:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nfs-pv-provisioning-demo
labels:
demo: nfs-pv-provisioning
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 200Gi
---
apiVersion: v1
kind: ReplicationController
metadata:
name: nfs-server
spec:
securityContext:
runAsUser: 1000
fsGroup: 2000
replicas: 1
selector:
role: nfs-server
template:
metadata:
labels:
role: nfs-server
spec:
containers:
- name: nfs-server
image: k8s.gcr.io/volume-nfs:0.8
ports:
- name: nfs
containerPort: 2049
- name: mountd
containerPort: 20048
- name: rpcbind
containerPort: 111
securityContext:
privileged: true
volumeMounts:
- mountPath: /exports
name: mypvc
volumes:
- name: mypvc
persistentVolumeClaim:
claimName: nfs-pv-provisioning-demo
---
kind: Service
apiVersion: v1
metadata:
name: nfs-server
spec:
ports:
- name: nfs
port: 2049
- name: mountd
port: 20048
- name: rpcbind
port: 111
selector:
role: nfs-server
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteMany
nfs:
# FIXME: use the right IP
server: nfs-server
path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nfs
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
resources:
requests:
storage: 1Gi
---
# This mounts the nfs volume claim into /mnt and continuously
# overwrites /mnt/index.html with the time and hostname of the pod.
apiVersion: v1
kind: ReplicationController
metadata:
name: nfs-busybox
spec:
securityContext:
runAsUser: 1000
fsGroup: 2000
replicas: 2
selector:
name: nfs-busybox
template:
metadata:
labels:
name: nfs-busybox
spec:
containers:
- image: busybox
command:
- sh
- -c
- 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done'
imagePullPolicy: IfNotPresent
name: busybox
volumeMounts:
# name must match the volume name below
- name: nfs
mountPath: "/mnt"
volumes:
- name: nfs
persistentVolumeClaim:
claimName: nfs
</code></pre>
<p>Now the /mnt directory in nfs-busybox should have 2000 as its gid (as per the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="noreferrer">docs</a>). But it still has root as both user and group. Since the application is running as 1000/2000, it is not able to create any logs or data in the /mnt directory. </p>
<p>chmod might solve the issue, but it looks like a workaround. Is there any permanent solution for this?</p>
<p>Observations: if I replace nfs with some other PVC it works fine, as described in the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="noreferrer">docs</a>.</p>
| <p>Have you tried the <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="noreferrer">initContainers</a> method? It fixes the permissions on the exported directory:</p>
<pre><code>initContainers:
- name: volume-mount-hack
image: busybox
command: ["sh", "-c", "chmod -R 777 /exports"]
volumeMounts:
- name: nfs
mountPath: /exports
</code></pre>
<p>If you use a standalone NFS server on a Linux box, I suggest using the <strong>no_root_squash</strong> option:</p>
<p><code>/exports *(rw,no_root_squash,no_subtree_check)</code></p>
<p>To manage the directory permissions on the nfs-server, you need to change the security context and raise it to privileged mode:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nfs-server
labels:
role: nfs-server
spec:
containers:
- name: nfs-server
image: nfs-server
ports:
- name: nfs
containerPort: 2049
securityContext:
privileged: true
</code></pre>
|
<p>In my setup, there is a set of containers that were initially built to run with <code>docker-compose</code>. After moving to Kubernetes I'm facing the following challenges:</p>
<ol>
<li><code>docker-compose</code> managed to provide some type of internal DNS that allowed a container to be addressed by its name. How do I create such a network in Kubernetes? What I found so far:</li>
</ol>
<ul>
<li>they could all be part of one pod and therefore communicate via localhost</li>
<li>they could all have a common label with matching key:value pairs and a service, but how does one handle Ports?</li>
</ul>
<ol start="2">
<li>I need to expose an internal port on a certain NodePort, as it has to be publicly available. What does such a service config look like? What I found so far:</li>
</ol>
<ul>
<li><p>something like this:</p>
<pre><code> apiVersion: v1
kind: Service
metadata:
labels:
app: frontend
name: frontend-nodeport
spec:
type: NodePort
ports:
- name: "3000-30001"
port: 3000
nodePort: 30001
selector:
app: frontend
status:
loadBalancer: {}
</code></pre>
</li>
</ul>
| <blockquote>
<p>Docker-compose managed to provide some type of internal DNS that allowed a container to be addressed by its name. How do I create such a network in Kubernetes?</p>
</blockquote>
<p>As you researched, you can indeed take two approaches:</p>
<ul>
<li><p><strong>IF</strong> your containers are to be <strong>scaled together</strong>, then place them inside the same pod and communicate through localhost over separate ports. This is less likely your case, since this approach is more suitable when the containerized app resembles processes on one physical box rather than separate services/servers.</p></li>
<li><p><strong>IF</strong> your containers are to be <strong>scaled separately</strong>, which is more probably your case, then use Services. With Services, in place of localhost (in the previous point) you will use either just the service name as it is (if the pods are in the same namespace) or the FQDN (servicename.namespace.svc.cluster.local) if the services are accessed across namespaces. As opposed to the previous point, where you had to have different ports for your containers (since you address localhost), in this case you can have the same port across multiple services, since service:port must be unique. Also, with a Service you can remap ports from containers if you wish to do so.</p></li>
</ul>
<p>Since you asked this as an introductory question, two words of caution:</p>
<ul>
<li>Service resolution works from the standpoint of a pod/container. To test it, you actually need to exec into an actual container (or proxy from a host), and this is a common point of confusion. Just to be on the safe side, test service:port accessibility from within an actual container, not from the master (see the quick check at the end of this answer).</li>
<li>Finally, just to mimic the docker-compose setup for the inter-container network, you don't need to expose a NodePort or anything similar. The Service layer in Kubernetes will take care of the DNS handling. NodePort has a different purpose.</li>
</ul>
<blockquote>
<p>I need to expose an internal port on a certain NodePort. What does such a service config look like?</p>
</blockquote>
<p>You are on a good track; here is a <a href="https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0" rel="nofollow noreferrer">nice overview</a> to get you started, and a reference relevant to your question is given below:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-nodeport-service
selector:
app: my-app
spec:
type: NodePort
ports:
- name: http
port: 80
targetPort: 80
nodePort: 30036
protocol: TCP
</code></pre>
<blockquote>
<p>Edit: Could you please provide an example of what a service.yaml would look like if the containers are scaled separately?</p>
</blockquote>
<ul>
<li><p>The first one is, say, an API server; we'll call it <code>svc-my-api</code>. It will use pod(s) labeled <code>app: my-api</code>, communicate with the pod's port 80, and be accessible to other pods (in the same namespace) as host:<code>svc-my-api</code> and port:<code>8080</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: svc-my-api
labels:
app: my-api
spec:
selector:
app: my-api
ports:
- protocol: TCP
port: 8080
targetPort: 80
</code></pre></li>
<li><p>The second one is, say, a MySQL server; we'll call it <code>svc-my-database</code>. Supposing that containers from the API pods (covered by the previous service) want to access the database, they will use host:<code>svc-my-database</code> and port:<code>3306</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: svc-my-database
labels:
app: my-database
spec:
selector:
app: my-database
ports:
- name: http
protocol: TCP
port: 3306
targetPort: 3306
</code></pre></li>
</ul>
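<p>And the quick check mentioned in the cautions above: exec into any pod in the same namespace and resolve/hit the service by name (assuming the image ships <code>nslookup</code>/<code>wget</code>; the pod name is just a placeholder):</p>
<pre><code>kubectl exec -it my-api-pod -- sh
/ # nslookup svc-my-database
/ # wget -qO- http://svc-my-api:8080
</code></pre>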
|
<p>Is jsonPath supported in the Kubernetes HTTP API?</p>
<p>For example, how does the following translate to the HTTP API?</p>
<pre><code>kubectl get pods -o=jsonpath='{.items[0]}'
</code></pre>
| <p>It's not supported by the API; you would need to evaluate that jsonpath against the API response yourself. </p>
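<p>For example, assuming <code>kubectl proxy</code> on its default port and <code>jq</code> as one possible client-side JSON evaluator:</p>
<pre><code>kubectl proxy &                   # exposes the API on 127.0.0.1:8001
curl -s http://127.0.0.1:8001/api/v1/namespaces/default/pods | jq '.items[0]'
</code></pre>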
|
<p>I have a microservice deployed in a Tomcat container/pod. There are four different files generated in the container - access.log, tomcat.log, catalina.out and application.log (log4j output). What is the best approach to send these logs to Elasticsearch (or a similar platform)?</p>
<p>I read through the information on this <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/" rel="nofollow noreferrer">page</a> (Logging Architecture - Kubernetes). Is “Sidecar container with a logging agent” the best option for my use case?</p>
<p>Is it possible to fetch pod labels (e.g. version) and add them to each line? If that is doable, should I use a logging agent like fluentd? (I just want to know the direction I should take.)</p>
| <p>Yes, the best option for your use case is to have one <code>tail -f</code> sidecar per log file and then install either a <code>fluentd</code> or a <code>fluent-bit</code> daemonset that will handle shipping and enriching the log events. </p>
<p>The <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch" rel="nofollow noreferrer">fluentd elasticsearch</a> cluster addon is available at that link. It will install a fluentd daemonset and a minimal ES cluster. The ES cluster is not production ready so please see the README for details on what must be changed.</p>
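<p>A minimal sketch of the sidecar idea (the image name, log paths and the shared <code>emptyDir</code> are assumptions; adjust them to where your Tomcat/log4j setup actually writes, and add one sidecar per file):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: tomcat-with-log-sidecars
spec:
  containers:
  - name: app
    image: your-tomcat-app-image          # hypothetical image name
    volumeMounts:
    - name: logs
      mountPath: /usr/local/tomcat/logs   # assumed log directory
  - name: tail-access-log                 # one such sidecar per log file
    image: busybox
    args: ['/bin/sh', '-c', 'tail -n+1 -F /logs/access.log']
    volumeMounts:
    - name: logs
      mountPath: /logs
  - name: tail-application-log
    image: busybox
    args: ['/bin/sh', '-c', 'tail -n+1 -F /logs/application.log']
    volumeMounts:
    - name: logs
      mountPath: /logs
  volumes:
  - name: logs
    emptyDir: {}
</code></pre>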
|
<p>I would like my application hosted in an OpenShift cluster to target an external REST API without hardcoding its IP/port in the client application, and also to be able to change the IP/port without redelivering the application.</p>
<p>I managed to do it through a ConfigMap, but I saw that it may also be possible to do it through a Service, as described <a href="https://docs.openshift.com/enterprise/3.0/dev_guide/integrating_external_services.html" rel="nofollow noreferrer">in the OpenShift docs</a>.</p>
<p>However I did not manage to understand how it is working. I did the following:</p>
<h2>Creating a service</h2>
<pre><code>sylvain@HP:~$ oc export svc example-external-service
apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
name: example-external-service
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: 80
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
</code></pre>
<h2>Creating an endpoint</h2>
<pre><code>sylvain@HP:~$ oc export endpoints example-external-service
apiVersion: v1
kind: Endpoints
metadata:
creationTimestamp: null
name: example-external-service
subsets:
- addresses:
- ip: 216.58.198.195
ports:
- name: "80"
port: 80
protocol: TCP
</code></pre>
<h2>Doing a curl to my service from the POD where my app is running</h2>
<pre><code>sylvain@HP:~$ oc get pods
NAME READY STATUS RESTARTS AGE
nodejs-example-1-qnq46 1/1 Running 0 36m
sylvain@HP:~$ oc rsh nodejs-example-1-qnq46
sh-4.2$ env | grep "EXAMPLE_EXTERNAL"
EXAMPLE_EXTERNAL_SERVICE_SERVICE_PORT_HTTP=80
EXAMPLE_EXTERNAL_SERVICE_SERVICE_PORT=80
EXAMPLE_EXTERNAL_SERVICE_PORT_80_TCP_PORT=80
EXAMPLE_EXTERNAL_SERVICE_SERVICE_HOST=172.30.51.168
EXAMPLE_EXTERNAL_SERVICE_PORT_80_TCP_ADDR=172.30.51.168
EXAMPLE_EXTERNAL_SERVICE_PORT_80_TCP_PROTO=tcp
EXAMPLE_EXTERNAL_SERVICE_PORT=tcp://172.30.51.168:80
EXAMPLE_EXTERNAL_SERVICE_PORT_80_TCP=tcp://172.30.51.168:80
sh-4.2$ curl 172.30.51.168
curl: (7) Failed connect to 172.30.51.168:80; No route to host
sh-4.2$ curl 216.58.198.195
<HTML><HEAD><meta http-equiv="content-type" content="text/html; charset=utf-8">
<TITLE>301 Moved</TITLE></HEAD><BODY>
<H1>301 Moved</H1>
The document has moved
<A HREF="http://www.google.com/">here</A>.
</BODY></HTML>
sh-4.2$
</code></pre>
<p>The curl using the address defined in the endpoint works; however, using the one in the environment variable:</p>
<pre><code>EXAMPLE_EXTERNAL_SERVICE_SERVICE_HOST=172.30.51.168
</code></pre>
<p>It is failing, so the routing is not correctly done. </p>
<p>What am I doing wrong? What did I miss?</p>
<p>Cheers</p>
| <p>In your endpoint configuration, change the name of port 80 (<code>"80"</code>) to <code>http</code>. It needs to be the same as in the service configuration.</p>
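<p>In other words, with the same values as in the question and only the port name changed:</p>
<pre><code>apiVersion: v1
kind: Endpoints
metadata:
  name: example-external-service
subsets:
- addresses:
  - ip: 216.58.198.195
  ports:
  - name: http        # must match the port name in the Service
    port: 80
    protocol: TCP
</code></pre>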
|
<p>I understood that </p>
<ul>
<li><code>StatefulSet</code> - manages/maintains stable hostname, network ID and persistent storage. </li>
<li><code>HeadlessService</code> - for a stable network ID you need to define a headless service for stateful applications</li>
</ul>
<blockquote>
<p>FROM K8s Docs -> Sometimes you don’t need or want load-balancing and a single service
IP. In this case, you can create “headless” services by specifying
"None" for the cluster IP (.spec.clusterIP).</p>
</blockquote>
<p><strong>My thoughts on "Stateful vs Stateless" Apps/components</strong></p>
<ol>
<li><p><code>UI</code> falls under stateless applications/components because it doesn't maintain any data; it just gets data from the DB and displays it.</p></li>
<li><p><code>DB</code> and <code>Cache</code> (Redis) are stateful applications/components, because they have to maintain data.</p></li>
</ol>
<p><strong>My Questions.</strong></p>
<ol>
<li><p><code>Persistent storage in Apps</code> - Why should I consider deploying Postgres (for example) as a <code>StatefulSet</code>? I can define <code>PV</code>s and a <code>PVC</code> in a <code>Deployment</code> to store the data in a PV. Even if the pod restarts, it will get its PV back, so there is no loss of data.</p></li>
<li><p><code>Network</code> - Redis (for example) should be deployed as a <code>StatefulSet</code> so that we get a unique "network ID"/name every time, even after a pod restart. For example, if <code>Redis-0</code> and <code>Redis-1</code> are in a <code>StatefulSet</code>, I can define <code>Redis-0</code> as master, so the master <code>name</code> never changes. Now why should I consider a <code>Headless Service</code> for <code>StatefulSet</code> apps? I can directly access/connect to the pods themselves, right? What is the use of a <code>Headless Service</code>?</p></li>
<li><p>I heard about <code>Operators</code>, said to be the best way to manage <code>StatefulSet</code> apps. I found some examples below. Why is it important to deploy those (or others) as a <code>StatefulSet</code>? For example, for <code>Prometheus</code> or <code>ElasticSearch</code>, I can define <code>PVs</code> and a <code>PVC</code> to store data without loss.</p>
<ul>
<li><a href="https://github.com/krallistic/kafka-operator" rel="noreferrer">https://github.com/krallistic/kafka-operator</a></li>
<li><a href="https://github.com/coreos/prometheus-operator" rel="noreferrer">https://github.com/coreos/prometheus-operator</a></li>
<li><a href="https://github.com/upmc-enterprises/elasticsearch-operator" rel="noreferrer">https://github.com/upmc-enterprises/elasticsearch-operator</a></li>
</ul></li>
</ol>
<p>Why/when should I care about a <code>StatefulSet</code> and a <code>Headless Service</code>?</p>
| <p>Before trying to answer some of your questions I must add a disclaimer: there are different ways to skin a cat, and since we are discussing StatefulSets here, note that not all approaches are best suited for all stateful applications. If you need a single database pod with a single PV you could take one approach; if your API pods need some shared and some separate PVs, then another; and so on.</p>
<blockquote>
<p>Persistent storage in Apps - Why should I consider deploying Postgres (for example) as a StatefulSet? I can define PVs and a PVC in a Deployment to store the data in a PV.</p>
</blockquote>
<p>This holds true if all your pods use the same persistent volume claim across all replicas (and the provisioner allows that). If you increase the number of replicas in a Deployment, all your pods will use the very same PVC. On the other hand, a StatefulSet, as defined in the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#statefulset-v1-apps" rel="noreferrer">api documentation</a>, has <code>volumeClaimTemplates</code>, allowing each replica to have its own generated PVC and thereby a separately provisioned PV for each pod in the replica set.</p>
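<p>A minimal sketch of what that looks like (the names, image and sizes are illustrative, not a production-ready Postgres setup):</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres          # the (typically headless) Service governing this StatefulSet
  replicas: 2
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:10
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:          # each replica gets its own PVC: data-postgres-0, data-postgres-1, ...
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
</code></pre>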
<blockquote>
<p>Now why should I consider Headless Service for StatefulSet apps?</p>
</blockquote>
<p>Because of ease of discovery. Again, with a Headless Service you don't need to know how many replicas you have; by checking the service DNS you will get ALL replicas (caveat: those that are up and running at that moment). You can do it manually, but in that case you rely on a different mechanism for counting/keeping tabs on replicas (for example, replicas self-registering with a master). Here is a nice example of <a href="https://akomljen.com/stateful-applications-on-kubernetes/" rel="noreferrer">pod discovery with nslookup</a> that can shed some light on why headless can be a nice idea.</p>
<blockquote>
<p>Why is it important to deploy those (or others) as a StatefulSet?</p>
</blockquote>
<p>To my understanding, the very Operators you listed are themselves deployed using a Deployment. They handle StatefulSets though, so let's consider ElasticSearch as an example. If it were not deployed as a StatefulSet, you would end up with two pods targeting the same PV (if the provisioner allows it), and that would heavily mess things up. With a StatefulSet, each pod gets its very own persistent volume claim (from the template) and consequently a persistent volume separate from the other ElasticSearch pods in the same StatefulSet. This is just the tip of the iceberg, since ElasticSearch is more complex to set up and handle, and Operators help with that.</p>
<blockquote>
<p>Why/when should I care about a StatefulSet and a Headless Service?</p>
</blockquote>
<ul>
<li><p>Use a StatefulSet in any case where replicated pods need to have PVs separate from each other (created from the PVC template and automatically provisioned).</p></li>
<li><p>Use a Headless Service in any case where you want to automatically discover all pods under the service, as opposed to a regular Service where you get a ClusterIP instead. As an illustration from the above-mentioned <a href="https://akomljen.com/stateful-applications-on-kubernetes/" rel="noreferrer">example</a>, here is the difference between DNS entries for a Service (with ClusterIP) and a Headless Service (without ClusterIP); a minimal headless Service manifest is also shown after this list:</p>
<ul>
<li><p>Standard service - you will get the clusterIP value:</p>
<pre><code>kubectl exec zookeeper-0 -- nslookup zookeeper
Server: 10.0.0.10
Address: 10.0.0.10#53
Name: zookeeper.default.svc.cluster.local
Address: 10.0.0.213
</code></pre></li>
<li><p>Headless service - you will get the IP of each Pod:</p>
<pre><code>kubectl exec zookeeper-0 -- nslookup zookeeper
Server: 10.0.0.10
Address: 10.0.0.10#53
Name: zookeeper.default.svc.cluster.local
Address: 172.17.0.6
Name: zookeeper.default.svc.cluster.local
Address: 172.17.0.7
Name: zookeeper.default.svc.cluster.local
Address: 172.17.0.8
</code></pre></li>
</ul></li>
</ul>
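<p>For completeness, the only thing that makes a Service headless is <code>clusterIP: None</code>; a minimal sketch (names and ports are illustrative):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  clusterIP: None        # "headless": DNS returns the pod IPs instead of a single cluster IP
  selector:
    app: zookeeper
  ports:
  - port: 2181
    targetPort: 2181
</code></pre>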
|
<p>Was there a recent change to Nginx Ingress? Out of the blue I'm now getting "Connection refused" errors. I thought it was my own configuration which worked on a previous cluster.</p>
<p>Instead I decided to follow this tutorial <a href="https://cloud.google.com/community/tutorials/nginx-ingress-gke" rel="noreferrer">GKE NGINX INGRESS</a> and I'm getting the same result.</p>
<pre><code>$ kubectl get deployments --all-namespaces
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
default hello-app 1 1 1 1 13m
default nginx-ingress-controller 1 1 1 1 12m
default nginx-ingress-default-backend 1 1 1 0 12m
</code></pre>
<p>I see the default-backend isn't running but I don't know enough about Kubernetes to know if that's what's preventing everything from working properly.</p>
<pre><code>$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-app ClusterIP 10.31.255.90 <none> 8080/TCP 14m
kubernetes ClusterIP 10.31.240.1 <none> 443/TCP 19m
nginx-ingress-controller LoadBalancer 10.31.251.198 35.227.50.24 80:31285/TCP,443:30966/TCP 14m
nginx-ingress-default-backend ClusterIP 10.31.242.167 <none> 80/TCP 14m
</code></pre>
<p>Finally:</p>
<pre><code>$ kubectl get ing
NAME HOSTS ADDRESS PORTS AGE
ingress-resource * 35.237.184.85 80 10m
</code></pre>
<p>According to the tutorial I should just be able to go to <a href="http://35.237.184.85/hello" rel="noreferrer">here</a> to receive a 200 and <a href="http://35.237.184.85/test" rel="noreferrer">here</a> to get a 404.</p>
<p>I've left the links live so you all can see them.</p>
<pre><code>$ curl -I http://35.237.184.85/hello
curl: (7) Failed to connect to 35.237.184.85 port 80: Connection refused
</code></pre>
<p>I swear everything worked before and the only thing I can think of is that something from the Tiller install of nginx-ingress changed.</p>
<p>Please, any help is appreciated! Thank you in advance!</p>
| <p>That's because you are sending the request to the IP address created by the Ingress. Your entry point is the IP created by the LoadBalancer-type Service.</p>
<p>Try <code>curl -I http://35.227.50.24/hello</code>. That's where you will get 200.</p>
|
<p>Our service is running in kubernetes cluster.
I'm trying to make our service to be secured by SSL. </p>
<p>For that purpose I added to application.properties:</p>
<pre><code>security.require-ssl=true
server.ssl.key-store-type=JKS
server.ssl.key-store=serviceCertificates.jks
server.ssl.key-store-password=${KEYSTORE_PASSWORD}
server.ssl.key-alias=certificate
</code></pre>
<p>The keystore password I want to take from kubernetes secret, that is defined in the cluster.<br>
When the service starts running I get an error <code>Password verification failed</code>:</p>
<blockquote>
<p>"org.apache.catalina.LifecycleException: Failed to start component [Connector[HTTP/1.1-8080]]\n\tat org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:167)\n\tat org.apache.catalina.core.StandardService.addConnector(StandardService.java:225)\n\tat org.springframework.boot.web.embedded.tomcat.TomcatWebServer.addPreviouslyRemovedConnectors(TomcatWebServer.java:256)\n\tat org.springframework.boot.web.embedded.tomcat.TomcatWebServer.start(TomcatWebServer.java:198)\n\tat org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.startWebServer(ServletWebServerApplicationContext.java:300)\n\tat org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.finishRefresh(ServletWebServerApplicationContext.java:162)\n\tat org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:553)\n\tat org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:140)\n\tat org.springframework.boot.SpringApplication.refresh(SpringApplication.java:759)\n\tat org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:395)\n\tat org.springframework.boot.SpringApplication.run(SpringApplication.java:327)\n\tat org.springframework.boot.SpringApplication.run(SpringApplication.java:1255)\n\tat org.springframework.boot.SpringApplication.run(SpringApplication.java:1243)\n\tat com.ibm.securityservices.cryptoutils.Application.main(Application.java:9)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n\tat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n\tat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n\tat java.lang.reflect.Method.invoke(Method.java:498)\n\tat org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48)\n\tat org.springframework.boot.loader.Launcher.launch(Launcher.java:87)\n\tat org.springframework.boot.loader.Launcher.launch(Launcher.java:50)\n\tat org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51)\nCaused by: org.apache.catalina.LifecycleException: Protocol handler start failed\n\tat org.apache.catalina.connector.Connector.startInternal(Connector.java:1020)\n\tat org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)\n\t... 21 common frames omitted\nCaused by: java.lang.IllegalArgumentException: Keystore was tampered with, or password was incorrect\n\tat org.apache.tomcat.util.net.AbstractJsseEndpoint.createSSLContext(AbstractJsseEndpoint.java:116)\n\tat org.apache.tomcat.util.net.AbstractJsseEndpoint.initialiseSsl(AbstractJsseEndpoint.java:87)\n\tat org.apache.tomcat.util.net.NioEndpoint.bind(NioEndpoint.java:225)\n\tat org.apache.tomcat.util.net.AbstractEndpoint.start(AbstractEndpoint.java:1150)\n\tat org.apache.coyote.AbstractProtocol.start(AbstractProtocol.java:591)\n\tat org.apache.catalina.connector.Connector.startInternal(Connector.java:1018)\n\t... 
22 common frames omitted\nCaused by: java.io.IOException: Keystore was tampered with, or password was incorrect\n\tat sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:780)\n\tat sun.security.provider.JavaKeyStore$JKS.engineLoad(JavaKeyStore.java:56)\n\tat sun.security.provider.KeyStoreDelegator.engineLoad(KeyStoreDelegator.java:224)\n\tat sun.security.provider.JavaKeyStore$DualFormatJKS.engineLoad(JavaKeyStore.java:70)\n\tat java.security.KeyStore.load(KeyStore.java:1445)\n\tat org.apache.tomcat.util.net.SSLUtilBase.getStore(SSLUtilBase.java:139)\n\tat org.apache.tomcat.util.net.SSLHostConfigCertificate.getCertificateKeystore(SSLHostConfigCertificate.java:204)\n\tat org.apache.tomcat.util.net.jsse.JSSEUtil.getKeyManagers(JSSEUtil.java:184)\n\tat org.apache.tomcat.util.net.AbstractJsseEndpoint.createSSLContext(AbstractJsseEndpoint.java:114)\n\t... 27 common frames omitted\nCaused by: java.security.UnrecoverableKeyException: Password verification failed\n\tat sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:778)\n\t... 35 common frames omitted\n"}</p>
</blockquote>
<p>My investigation:<br>
1. If I print in the code </p>
<pre><code> System.out.println("KEYSTORE_PASSWORD: "+ System.getenv("KEYSTORE_PASSWORD"));
</code></pre>
<p>I see its correct value.<br>
2. If I set a hard-coded constant password value in the application properties, it works; the service is up and running. </p>
<p>So I guess the problem is in passing the secret value to the application property.<br>
Your help and advice will be appreciated </p>
| <p>I think there is a typo or hidden character in your secret descriptor. You can exec into the pod, verify the environment variable, and also try decoding the password using command-line tools. </p>
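<p>As a quick sketch of how to check that (the secret name <code>keystore-secret</code> and key <code>password</code> are assumptions, adjust them to your manifest) — a trailing newline from <code>echo</code> without <code>-n</code> is a very common culprit:</p>
<pre><code># decode the stored value and reveal any hidden characters (e.g. a trailing \n)
kubectl get secret keystore-secret -o jsonpath='{.data.password}' | base64 --decode | od -c

# re-create the secret from a file written without a trailing newline
echo -n 'myKeystorePassword' > ./password.txt
kubectl create secret generic keystore-secret --from-file=password=./password.txt
</code></pre>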
|
<p>I'm using Cloud composer to orchestrate my airflow instance but not sure how to install packages for the airflow worker bash.</p>
<p>Previously I was running airflow on a google compute engine instance using docker, it was easy to specify requirements via the docker make file.</p>
<p>As someone who is new to kubernetes and Cloud composer, I was wondering if there is something similar I could do for kubernetes/Cloud composer as I previously did for docker?</p>
<p>I am looking to install <code>lzop</code> for unix and would also need to update the gsutil boto config file with s3 credentials.</p>
<p>How do you pull a new docker image into kubernetes and recreate it for pods? </p>
<p>Sorry if my lingo is incorrect, this is new to me</p>
| <p>At the moment, if I read the documentation correctly, you cannot modify the images used by Composer. Unless you deploy your custom solution on a Kubernetes cluster, I think you cannot extend it beyond Python Libraries and Airflow Plugins. </p>
<p>You can ssh into each worker Compute Engine instance and install it manually on each machine.</p>
<p>You may try to run <code>apt</code> via a <code>BashOperator</code>, but I doubt you will succeed; unfortunately, Composer is still a Beta product, and many features are still in the making.</p>
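<p>If you do want to try the manual route, a rough sketch would be something like the following (the instance name and zone are hypothetical, and the change will not survive node recreation):</p>
<pre><code># list the GKE nodes backing the Composer environment
gcloud compute instances list --filter="name~'gke-'"

# SSH into a worker node and install the package
gcloud compute ssh gke-composer-default-pool-12345 --zone us-central1-a \
  --command "sudo apt-get install -y lzop"
</code></pre>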
|
<p>I can create a config map from a property file and use the same config files inside the pod. However, I don't want to use a configmap created in the past and supplied with the Helm chart. Rather, in the Helm chart's values.yaml I want to provide a file name from which the config map will be created dynamically.</p>
<p>Any suggestions/examples are welcome .</p>
<p>Thanks in advance -
Tutai </p>
| <p>See if the approach described in <a href="https://github.com/kubernetes/charts/issues/1310" rel="nofollow noreferrer">kubernetes/charts issue 1310</a> works for you.</p>
<blockquote>
<p>I suggest that we allow for overriding the name of the <code>ConfigMap</code> that gets mounted to the persistent volume.<br>
That way, the parent chart could create, and even make templates for, these <code>ConfigMaps</code>.</p>
<p>For example <code>values.yaml</code> could have the following fields added:</p>
</blockquote>
<pre><code>## alertmanager ConfigMap entries
##
alertmanagerFiles:
# ConfigMap override where full-name is {{.Release.Name}}-{{.Values.alertmanagerFiles.configMapOverrideName}}
configMapOverrideName: ""
...
## Prometheus server ConfigMap entries
##
serverFiles:
# ConfigMap override where full-name is {{.Release.Name}}-{{.Values.serverFiles.configMapOverrideName}}
configMapOverrideName: ""
...
</code></pre>
<p>You can see the implementation of that issue in <a href="https://github.com/kubernetes/charts/commit/2ea776441e305cab858913e2c2806f5122df06dd" rel="nofollow noreferrer">commit 2ea7764</a>, as an example of override.</p>
<hr>
<p>This differs from a file approach, where you create a new config map and replace the old one:</p>
<pre><code>kubectl create configmap asetting --from-file=afile \
-o yaml --dry-run | kubectl replace -f -
</code></pre>
<p>See "<a href="https://medium.com/google-cloud/kubernetes-configmaps-and-secrets-68d061f7ab5b#33e4" rel="nofollow noreferrer">Updating Secrets and ConfigMaps</a>" as an example.</p>
|
<p>How can one define log retention for kubernetes pods?
For now it seems like the log file size is not limited, and it uses the host machine's complete resources.</p>
| <p>According to <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/" rel="noreferrer">Logging Architecture</a> from kubernetes.io there are some options</p>
<p><strong>First option</strong></p>
<blockquote>
<p>Kubernetes currently is not responsible for rotating logs, but rather
a deployment tool should set up a solution to address that. For
example, in Kubernetes clusters, deployed by the kube-up.sh script,
there is a <a href="https://linux.die.net/man/8/logrotate" rel="noreferrer">logrotate</a> tool configured to run each hour. You can also
set up a container runtime to rotate application’s logs automatically,
e.g. by using Docker’s log-opt. In the kube-up.sh script, the latter
approach is used for COS image on GCP, and the former approach is used
in any other environment. In both cases, by default rotation is
configured to take place when log file exceeds 10MB.</p>
</blockquote>
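<p>As an illustration of the Docker <code>log-opt</code> approach mentioned above, rotation can be configured per node in <code>/etc/docker/daemon.json</code> (the sizes here are just example values):</p>
<pre><code>{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
</code></pre>
<p>Docker has to be restarted for this to take effect, and it only applies to containers created afterwards.</p>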
<p>Also</p>
<p><strong>Second option</strong></p>
<blockquote>
<p>Sidecar containers can also be used to rotate log files that cannot be rotated by the application itself. <a href="https://github.com/samsung-cnct/logrotate" rel="noreferrer">An example</a> of this approach is a small container running logrotate periodically. However, it’s recommended to use stdout and stderr directly and leave rotation and retention policies to the kubelet.</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/lIp0E.png" rel="noreferrer"><img src="https://i.stack.imgur.com/lIp0E.png" alt="enter image description here"></a></p>
|
<p>Suppose I have a resource foo which is a statefulset with 3 replicas. Each makes a persistent volume claim.</p>
<p>One of the foo pods (foo-1) dies, and a new one starts in its place. Will foo-1 be bound to the same persistent volume that the previous foo-1 had before it died? Will the number of persistent volume claims stay the same or grow?</p>
<p>This edge case doesn't seem to be in the documentation on StatefulSets.</p>
| <p>Yes, it will. A PVC is going to create a disk on GCP and add it as a secondary disk to the node on which the pod is running.</p>
<p>Upon deletion of an individual pod, K8s is going to re-create the pod on the same node it was running on. If that is not possible (say the node no longer exists), the pod will be created on another node, and the secondary disk will be moved to that node.</p>
|
<p>I've got a node pool defined with min instances set to 1 and max instances set to 5, and autoscaling enabled.</p>
<p>However it does not seem to be scaling down.</p>
<ul>
<li>I have cordoned a node.</li>
<li>It has been over 12 hours</li>
<li>There are no pending pods</li>
<li>Removing a node would not reduce the amount of replicas of my own deployment</li>
</ul>
<p>The node in question has the following pods running on it:</p>
<ul>
<li>fluentd</li>
<li>kube-dns</li>
<li>kube-proxy-gke</li>
<li>metrics-server</li>
<li>redis</li>
</ul>
<p>All the pods above are in the <code>kube-system</code> namespace besides the <code>redis</code> pod which is defined within a daemonset.</p>
<p>Is there any additional configuration required? A pod disruption budget perhaps?</p>
<p>Output of <code>kubectl describe -n kube-system configmap cluster-autoscaler-status</code>:</p>
<pre><code>Name: cluster-autoscaler-status
Namespace: kube-system
Labels: <none>
Annotations: cluster-autoscaler.kubernetes.io/last-updated=2018-06-15 10:40:16.289611397 +0000 UTC
Data
====
status:
----
Cluster-autoscaler status at 2018-06-15 10:40:16.289611397 +0000 UTC:
Cluster-wide:
Health: Healthy (ready=4 unready=0 notStarted=0 longNotStarted=0 registered=4 longUnregistered=0)
LastProbeTime: 2018-06-15 10:40:14.942263061 +0000 UTC
LastTransitionTime: 2018-06-15 09:17:56.845900388 +0000 UTC
ScaleUp: NoActivity (ready=4 registered=4)
LastProbeTime: 2018-06-15 10:40:14.942263061 +0000 UTC
LastTransitionTime: 2018-06-15 09:18:55.777577792 +0000 UTC
ScaleDown: NoCandidates (candidates=0)
LastProbeTime: 2018-06-15 10:40:14.942263061 +0000 UTC
LastTransitionTime: 2018-06-15 09:39:03.33504599 +0000 UTC
NodeGroups:
Name: https://content.googleapis.com/compute/v1/projects/gcpwp-ayurved-subs-staging/zones/europe-west1-b/instanceGroups/gke-wordpress-preempt-nodes-9c33afcb-grp
Health: Healthy (ready=3 unready=0 notStarted=0 longNotStarted=0 registered=3 longUnregistered=0 cloudProviderTarget=3 (minSize=2, maxSize=3))
LastProbeTime: 2018-06-15 10:40:14.942263061 +0000 UTC
LastTransitionTime: 2018-06-15 09:17:56.845900388 +0000 UTC
ScaleUp: NoActivity (ready=3 cloudProviderTarget=3)
LastProbeTime: 2018-06-15 10:40:14.942263061 +0000 UTC
LastTransitionTime: 2018-06-15 09:18:55.777577792 +0000 UTC
ScaleDown: NoCandidates (candidates=0)
LastProbeTime: 2018-06-15 10:40:14.942263061 +0000 UTC
LastTransitionTime: 2018-06-15 09:39:03.33504599 +0000 UTC
Events: <none>
</code></pre>
| <p>There are a few constraints that could prevent the node from scaling down.</p>
<p>You should verify the pods you listed one by one against the <a href="https://github.com/kubernetes/autoscaler/blob/cluster-autoscaler-1.2.2/cluster-autoscaler/FAQ.md#what-types-of-pods-can-prevent-ca-from-removing-a-node" rel="nofollow noreferrer">What types of pods can prevent CA from removing a node?</a> documentation.
This should help you discover if there is a pod that prevents it.</p>
<p>If it is indeed the <code>redis</code> pod then you could try using the safe to evict annotation:</p>
<pre><code>"cluster-autoscaler.kubernetes.io/safe-to-evict": "true"
</code></pre>
<p>If it is one of the system pods I would try the same thing on other nodes to see if scaling down works on them.
According to the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler#minimum_and_maximum_node_pool_size" rel="nofollow noreferrer">GKE documentation</a>, you should be able to scale down your cluster to 1 node per cluster or completely for a specific node pool.</p>
|
<pre><code>result, err := crdclient.Create(example)
if err == nil {
fmt.Printf("CREATED: %#v\n", result)
} else if apierrors.IsAlreadyExists(err) {
fmt.Printf("ALREADY EXISTS: %#v\n", result)
} else {
panic(err)
}
// List all Example objects
items, err := crdclient.List(meta_v1.ListOptions{})
if err != nil {
panic(err)
}
fmt.Printf("List:\n%s\n", items)
result, err = crdclient.Get("example123")
if err != nil {
panic(err)
}
fmt.Printf("Get:\n%v\n", result)
result.Status.Message = "Hello There"
fmt.Println("\n Result is: %v \n", result)
up, uperr := crdclient.Update(result)
if uperr != nil {
panic(uperr)
}
</code></pre>
<p>In the above example, using the Kubernetes API for a CRD, I get the following error in the Update call: "panic: name must be provided"</p>
<p>What am I missing? The code is based out the sample given @ <a href="https://github.com/yaronha/kube-crd" rel="nofollow noreferrer">https://github.com/yaronha/kube-crd</a></p>
| <p>I looked at the code; you need to update the Update method in the client.go file with the following code:</p>
<pre><code>func (f *crdclient) Update(obj *crd.Example) (*crd.Example, error) {
    var result crd.Example
    // Name(obj.Name) is the missing piece: without the resource name the
    // PUT request cannot be built, which is why the client panics with
    // "name must be provided".
    err := f.cl.Put().
        Namespace(f.ns).Resource(f.plural).
        Name(obj.Name).
        Body(obj).Do().Into(&result)
    return &result, err
}
</code></pre>
<p>After that your code should be working as expected. </p>
|
<p>I have a regional cluster set up in <strong>google kubernetes engine (GKE)</strong>. The node group is a single <strong>vm in each region (3 total)</strong>. I have a deployment with <strong>3 replicas minimum</strong> controlled by a HPA.
The <strong>nodegroup is configured to be autoscaling</strong> (cluster autoscaling aka CA).
The problem scenario:</p>
<p>Update deployment image. Kubernetes automatically creates new pods and the CA identifies that a new node is needed. I now have 4.
The old pods get removed when all new pods have started, which means I have the exact same CPU request as the minute before. But after the 10 min maximum downscale time I still have 4 nodes.</p>
<p>The CPU requests for the nodes is now:</p>
<pre><code>CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
358m (38%) 138m (14%) 516896Ki (19%) 609056Ki (22%)
--
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
800m (85%) 0 (0%) 200Mi (7%) 300Mi (11%)
--
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
510m (54%) 100m (10%) 410Mi (15%) 770Mi (29%)
--
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
823m (87%) 158m (16%) 484Mi (18%) 894Mi (33%)
</code></pre>
<p>The 38% node is running:</p>
<pre><code>Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system event-exporter-v0.1.9-5c8fb98cdb-8v48h 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system fluentd-gcp-v2.0.17-q29t2 100m (10%) 0 (0%) 200Mi (7%) 300Mi (11%)
kube-system heapster-v1.5.2-585f569d7f-886xx 138m (14%) 138m (14%) 301856Ki (11%) 301856Ki (11%)
kube-system kube-dns-autoscaler-69c5cbdcdd-rk7sd 20m (2%) 0 (0%) 10Mi (0%) 0 (0%)
kube-system kube-proxy-gke-production-cluster-default-pool-0fd62aac-7kls 100m (10%) 0 (0%) 0 (0%) 0 (0%)
</code></pre>
<p>I suspect it won't downscale because of heapster or kube-dns-autoscaler.
But the 85% node contains:</p>
<pre><code>Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system fluentd-gcp-v2.0.17-s25bk 100m (10%) 0 (0%) 200Mi (7%) 300Mi (11%)
kube-system kube-proxy-gke-production-cluster-default-pool-7ffeacff-mh6p 100m (10%) 0 (0%) 0 (0%) 0 (0%)
my-deploy my-deploy-54fc6b67cf-7nklb 300m (31%) 0 (0%) 0 (0%) 0 (0%)
my-deploy my-deploy-54fc6b67cf-zl7mr 300m (31%) 0 (0%) 0 (0%) 0 (0%)
</code></pre>
<p>The fluentd and kube-proxy pods are present on every node, so I assume they are not needed without the node. Which means that my deployment could be relocated to the other nodes since it only has a request of 300m (31% since only 94% of node CPU is allocatable).</p>
<p>So I figured I'd check the logs. But if I run <code>kubectl get pods --all-namespaces</code> there is no pod visible on GKE for the CA. And if I use the command <code>kubectl get configmap cluster-autoscaler-status -n kube-system -o yaml</code> it only tells me whether it is about to scale, not why or why not.
Another option is to look at <code>/var/log/cluster-autoscaler.log</code> on the master node. I SSHed into all 4 nodes and only found a <code>gcp-cluster-autoscaler.log.pos</code> file that says: <code>/var/log/cluster-autoscaler.log 0000000000000000 0000000000000000</code>, meaning the file should be right there but is empty.
The last option, according to the <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md" rel="noreferrer">FAQ</a>, is to check the events for the pods, but as far as I can tell they are empty.</p>
<p>Anyone know why it wont downscale or atleast where to find the logs?</p>
| <p>Answering myself for visibility.</p>
<p>The problem is that the CA never considers moving anything unless all the requirements mentioned in the <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md" rel="noreferrer">FAQ</a> are met at the same time.
So let's say I have 100 nodes with 51% CPU requests. It still won't consider downscaling.</p>
<p>One solution is to increase the utilization threshold the CA checks against, currently 50%. But unfortunately that is not supported by GKE; see the answer from Google support @GalloCedrone:</p>
<blockquote>
<p>Moreover I know that this value might sound too low and someone could be interested to keep as well a 85% or 90% to avoid your scenario.
Currently there is a feature request open to give the user the possibility to modify the flag "--scale-down-utilization-threshold", but it is not implemented yet.</p>
</blockquote>
<p>The workaround I found is to decrease the CPU request (100m instead of 300m) of the pods and have the Horizontal Pod Autoscaler (HPA) create more on demand. This is fine for me but if your application is not suitable for many small instances you are out of luck. Perhaps a cron job that cordons a node if the total utilization is low?</p>
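<p>For reference, a minimal sketch of the HPA used in that workaround (names and thresholds are assumptions):</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-deploy
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deploy
  minReplicas: 3
  maxReplicas: 20
  targetCPUUtilizationPercentage: 70
</code></pre>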
|
<p>I am experiencing issues when I try to deploy my Django application to Kubernetes cluster. More specifically, when I try to deploy PostgreSQL.</p>
<p>Here is what my .YML deployment file looks like:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: postgres-service
spec:
selector:
app: postgres-container
tier: backend
ports:
- protocol: TCP
port: 5432
targetPort: 5432
type: ClusterIP
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: postgres-pv
labels:
type: local
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 2Gi
hostPath:
    path: /tmp/data/persistent-volume-1 # within node n
persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-pv-claim
labels:
type: local
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
name: postgres
spec:
replicas: 1
selector:
matchLabels:
app: postgres-container
tier: backend
template:
metadata:
labels:
app: postgres-container
tier: backend
spec:
containers:
- name: postgres-container
image: postgres:9.6.6
env:
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: postgres-credentials
key: user
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-credentials
key: password
- name: POSTGRES_DB
value: agent_technologies_db
ports:
- containerPort: 5432
volumeMounts:
- name: postgres-volume-mount
mountPath: /var/lib/postgresql/data/db-files
volumes:
- name: postgres-volume-mount
persistentVolumeClaim:
claimName: postgres-pv-claim
- name: postgres-credentials
secret:
secretName: postgres-credentials
</code></pre>
<p>Here is what I get when I run kubectl get pods command :</p>
<pre><code>NAME READY STATUS RESTARTS AGE
agent-technologies-deployment-7c7c6676ff-8p49r 1/1 Running 0 2m
agent-technologies-deployment-7c7c6676ff-dht5h 1/1 Running 0 2m
agent-technologies-deployment-7c7c6676ff-gn8lp 1/1 Running 0 2m
agent-technologies-deployment-7c7c6676ff-n9qql 1/1 Running 0 2m
postgres-8676b745bf-8f7jv 0/1 CrashLoopBackOff 4 3m
</code></pre>
<p>And here is what I get when I try to inspect what is going on with PostgreSQL deployment by using <code>kubectl logs $pod_name</code>:</p>
<pre><code>initdb: directory "/var/lib/postgresql/data" exists but is not empty
If you want to create a new database system, either remove or empty
the directory "/var/lib/postgresql/data" or run initdb
with an argument other than "/var/lib/postgresql/data".
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
</code></pre>
<p>Note: I am using Google Cloud as a provider.</p>
| <p>You can't have your DB in <code>/var/lib/postgresql/data/whatever</code>.</p>
<p>Change that path to <code>/var/lib/postgresql/whatever</code> and it will work.</p>
<blockquote>
<p><strong>17.2.1. Use of Secondary File Systems</strong></p>
<p>Many installations create their database clusters on file systems (volumes) other than the machine's "root" volume. If you choose to do this, it is not advisable to try to use the secondary volume's topmost directory (mount point) as the data directory. Best practice is to create a directory within the mount-point directory that is owned by the PostgreSQL user, and then create the data directory within that. This avoids permissions problems, particularly for operations such as pg_upgrade, and it also ensures clean failures if the secondary volume is taken offline.</p>
</blockquote>
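<p>Following that advice, the relevant part of the Deployment would change to something like this (a sketch only; the exact mount point is up to you, as long as the data directory itself is not the mount point):</p>
<pre><code>          volumeMounts:
            - name: postgres-volume-mount
              # mount one level above the data directory
              mountPath: /var/lib/postgresql
</code></pre>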
<p>And, by the way, I had to create a secret, as it is not in the post:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: postgres-credentials
type: Opaque
data:
user: cG9zdGdyZXM= #postgres
password: cGFzc3dvcmQ= #password
</code></pre>
<p>Note that the username needs to be "postgres". I don't know if you are covering this...</p>
|
<p>I am trying to build an API, which can send back my pods' resource usage.
Looking at the <a href="https://i.stack.imgur.com/9wPbP.png" rel="nofollow noreferrer">resources being used by the pods</a>, I am not able to figure out the go-client API to send the request to. Any help would be much appreciated.</p>
| <ol>
<li><p>I'm pretty sure the kubernetes-dashboard uses XHR to obtain that data, so you can make the same requests your browser does, provided your <code>serviceAccount</code> has the correct credentials to interrogate the kubernetes-dashboard API</p></li>
<li><p>either way, that timeseries data surfaced by kubernetes-dashboard actually comes from heapster, not from the kubernetes API itself, so the kubernetes go-client wouldn't be involved but rather would be a query to the heapster <code>Service</code> (which IIRC does not require authentication, although it <em>would</em> require constructing the correct heapster query syntax, which kubernetes-dashboard is doing on your behalf)</p></li>
</ol>
|
<p>I would like to create a cron job which selects the name kubernetes secrets from a given type (e.g kubernetes.io/tls) older than 29 days.</p>
<p>I tried these 2 ways:</p>
<p>1 - Using grep:</p>
<pre><code>kubectl get secrets --all-namespaces| grep kubernetes.io/tls
cicd devkins.infra.mydomain.ninja.tls kubernetes.io/tls 2 14d
cust-ao-xmas cust-ao-xmas.infra.mydomain.ninja.tls kubernetes.io/tls 2 1h
dev dev.mydomain.com.tls kubernetes.io/tls 2 4h
logging graylog.infra.mydomain.ninja.tls kubernetes.io/tls 2 10d
</code></pre>
<p>2 Using jsonpath</p>
<pre><code>kubectl get secrets --all-namespaces -o jsonpath='{range .items[?(@.type=="kubernetes.io/tls")]}[{.metadata.namespace},{.metadata.name},{.metadata.creationTimestamp}]{"\n"}{end}'
[cicd,devkins.infra.mydomain.ninja.tls,2017-12-13T22:57:02Z]
[cust-ao-xmas,cust-ao-xmas.infra.mydomain.ninja.tls,2017-12-28T03:13:32Z]
[dev,dev.mydomain.com.tls,2017-12-28T00:59:10Z]
[logging,graylog.infra.mydomain.ninja.tls,2017-12-17T22:23:38Z]
</code></pre>
<p>Both of them return the data, but it is extremely hard to parse.</p>
<p>Is there a better way to achieve this?</p>
| <p>This is what I use... There is probably a better way of doing it, but it works for my case (secrets older than 24 hrs).</p>
<pre><code>kubectl get secrets -o json | jq -r "[.items[] | {name: .metadata.name, startTime: .metadata.creationTimestamp | fromdate } | select(.startTime < (now | . - 86400))]" | jq -r ".[].name"
</code></pre>
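<p>Adapted to the original question (type <code>kubernetes.io/tls</code>, older than 29 days = 2505600 seconds), a sketch could be:</p>
<pre><code>kubectl get secrets --all-namespaces -o json | jq -r '
  .items[]
  | select(.type == "kubernetes.io/tls")
  | select((.metadata.creationTimestamp | fromdate) < (now - 2505600))
  | "\(.metadata.namespace)/\(.metadata.name)"'
</code></pre>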
|
<p>I'm a beginner on Kubernetes. When I described my node, I saw this at the very bottom:</p>
<pre><code>kubectl describe node ip-x-x-x-x.ap-southeast-2.compute.internal
...
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
225m (11%) 200m (10%) 125Mi (1%) 300Mi (3%)
Events: <none>
</code></pre>
<p>How do I un-limit the memory + cpu?</p>
| <p>Every node has limits according to its resources: number of processors or cores, amount of memory. Kubernetes uses this information for distributing Pods across Nodes. </p>
<p>Referring to the official documentation:</p>
<p><strong>- Meaning of CPU</strong><br>
One CPU, in Kubernetes, is equivalent to:<br>
- 1 AWS vCPU<br>
- 1 GCP Core<br>
- 1 Azure vCore<br>
- 1 Hyperthread on a bare-metal Intel processor with Hyperthreading<br>
1 CPU is 1000m (1 thousand milicores)</p>
<p><strong>- Meaning of memory</strong>. It is an amount of memory on the server.</p>
<p>In the output of the <code>kubectl describe node <node-name></code> command, you see statistics of resource usage. Actually, the resources of your server can be derived from the example in the question: it is 2 CPUs/cores and around 10000 MB of memory.</p>
<p>Please note that some resources are already allocated by system Pods like kube-dns, kube-proxy or kubernetes-dashboard.</p>
|
<p><strong>What we want to achieve:</strong></p>
<p>We would like to use Airflow to manage our machine learning and data pipeline while using Kubernetes to manage the resources and schedule the jobs. What we would like to achieve is for Airflow to orchestrate the workflow (e.g. task dependencies, re-running jobs upon failures) and Kubernetes to orchestrate the infrastructure (e.g. cluster autoscaling and assignment of individual jobs to nodes). In other words, Airflow will tell the Kubernetes cluster what to do and Kubernetes decides how to distribute the work. At the same time we would also want Airflow to be able to monitor the status of individual tasks. For example, if we have 10 tasks spread across a cluster of 5 nodes, Airflow should be able to communicate with the cluster and report something like: 3 “small tasks” are done, 1 “small task” has failed and will be scheduled to re-run, and the remaining 6 “big tasks” are still running.</p>
<p><strong>Questions:</strong></p>
<p>Our understanding is that Airflow has no Kubernetes-Operator, see open issues at <a href="https://issues.apache.org/jira/browse/AIRFLOW-1314" rel="noreferrer">https://issues.apache.org/jira/browse/AIRFLOW-1314</a>. That being said we don’t want Airflow to manage resources like managing service accounts, env variables, creating clusters, etc. but simply send tasks to an existing Kubernetes cluster and let Airflow know when a job is done. An alternative would be to use Apache Mesos but it looks less flexible and less straightforward compared to Kubernetes.</p>
<p>I guess we could use Airflow’s bash_operator to run <code>kubectl</code> but this seems not like the most elegant solution.</p>
<p>Any thoughts? How do you deal with that?</p>
| <p>Airflow has both a <a href="https://github.com/apache/incubator-airflow/blob/master/airflow/contrib/executors/kubernetes_executor.py" rel="noreferrer">Kubernetes Executor</a> as well as a <a href="https://github.com/apache/incubator-airflow/blob/master/airflow/contrib/operators/kubernetes_pod_operator.py" rel="noreferrer">Kubernetes Operator</a>. </p>
<p>You can use the Kubernetes Operator to send tasks (in the form of Docker images) from Airflow to Kubernetes via whichever AirflowExecutor you prefer. </p>
<p>Based on your description though, I believe you are looking for the KubernetesExecutor to schedule all your tasks against your Kubernetes cluster. As you can see from the source code it has a much tighter integration with Kubernetes. </p>
<p>This will also allow you to not have to worry about creating the docker images ahead of time as is required with the Kubernetes Operator.</p>
|
<p>I have an HPC cluster application where I am looking to replace MPI and our internal cluster management software with a combination of Kubernetes and some middleware, most likely ZMQ or RabbitMQ.</p>
<p>I'm trying to design how best to do peer discovery on this system using Kubernetes' service discovery. </p>
<p>I know Kubernetes can provide a DNS name for a given service, and that's great, but is there a way to also dynamically discover ports? </p>
<p>For example, assuming I replaced the MPI middleware with ZeroMQ, I would need a way for ranks (processes on the cluster) to find each other. I know I could simply have the ranks issue service creation messages to the Kubernetes discovery mechanism and get a hostname like myapp_mypid_rank_42 fairly easily, but how would I handle the port?</p>
<p>If possible, it would be great if I could just do:</p>
<pre><code>zmqSocket.connect("tcp://myapp_mypid_rank_42");
</code></pre>
<p>but I don't think that would work since I have no port number information from DNS.</p>
<p>How can I have Kubernetes service discovery also provide a port in as simple a manner as possible to allow ranks in the cluster to discover each other?</p>
<p>Note: The registering process knows its port and can register it with the K8s service discovery daemon. The problem is a quick and easy way to get that port number back for the processes that want it. The question I'm asking is whether or not there is a mechanism <em>as simple</em> as a DNS host name, or will I need to explicitly query both hostname <em>and</em> port number from the k8s daemon rather than simply building a hostname based on some agreed upon rule (like building a string from myapp_mypid_myrank)?</p>
| <p>Turns out the best way to do this is with a DNS SRV record:</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services</a>
<a href="https://en.wikipedia.org/wiki/SRV_record" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/SRV_record</a></p>
<p>A DNS SRV record provides both a hostname/IP <em>and</em> a port for a given request.</p>
<p>Luckily, Kubernetes service discovery supports SRV records and provides them on the cluster's DNS.</p>
|
<p>I am working on developing tools to interact with Kubernetes. I have OpenShift setup with the allow all authentication provider. I can log into the web console as I would expect.</p>
<p>I have also been able to setup a service account and assign a cluster role binding to the service account user. Despite this, when I access the REST API using a token of that service account, I get forbidden. </p>
<p>Here is what happens when I try to setup role bindings via OpenShift commands:</p>
<pre><code>[root@host1 ~]# oadm policy add-cluster-role-to-user view em7 --namespace=default
[root@host1 ~]# oadm policy add-cluster-role-to-user cluster-admin em7 --namespace=default
[root@host1 ~]# oadm policy add-cluster-role-to-user cluster-reader em7 --namespace=default
[root@host1 ~]# oc get secrets | grep em7
em7-dockercfg-hnl6m kubernetes.io/dockercfg 1 18h
em7-token-g9ujh kubernetes.io/service-account-token 4 18h
em7-token-rgsbz kubernetes.io/service-account-token 4 18h
TOKEN=`oc describe secret em7-token-g9ujh | grep token: | awk '{ print $2 }'`
[root@host1 ~]# curl -kD - -H "Authorization: Bearer $TOKEN" https://localhost:8443/api/v1/pods
HTTP/1.1 403 Forbidden
Cache-Control: no-store
Content-Type: application/json
Date: Tue, 19 Jun 2018 15:36:40 GMT
Content-Length: 260
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "User \"system:serviceaccount:default:em7\" cannot list all pods in the cluster",
"reason": "Forbidden",
"details": {
"kind": "pods"
},
"code": 403
}
</code></pre>
<p>I can also try using the yaml file from (<a href="https://stackoverflow.com/questions/49667239/openshift-admin-token">Openshift Admin Token</a>):</p>
<pre><code># creates the service account "ns-reader"
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ns-reader
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
# "namespace" omitted since ClusterRoles are not namespaced
name: global-reader
rules:
- apiGroups: [""]
# add other rescources you wish to read
resources: ["pods", "secrets"]
verbs: ["get", "watch", "list"]
---
# This cluster role binding allows service account "ns-reader" to read pods in all available namespace
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: read-ns
subjects:
- kind: ServiceAccount
name: ns-reader
namespace: default
roleRef:
kind: ClusterRole
name: global-reader
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>When I run this, I get the following error:</p>
<pre><code>[root@host1 ~]# kubectl create -f stack_overflow_49667238.yaml
error validating "stack_overflow_49667238.yaml": error validating data: API version "rbac.authorization.k8s.io/v1" isn't supported, only supports API versions ["federation/v1beta1" "v1" "authentication.k8s.io/v1beta1" "componentconfig/v1alpha1" "policy/v1alpha1" "rbac.authorization.k8s.io/v1alpha1" "apps/v1alpha1" "authorization.k8s.io/v1beta1" "autoscaling/v1" "extensions/v1beta1" "batch/v1" "batch/v2alpha1"]; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<p>I have tried several different API versions from the list but they all failed in a similar way.</p>
| <p><code>oadm policy add-cluster-role-to-user view em7</code> grants the role to the user named <code>em7</code>, not to the service account.</p>
<p>you need to grant permissions to the service account, e.g. <code>oadm policy add-cluster-role-to-user view system:serviceaccount:default:em7</code></p>
|
<p>I am trying to deploy the zalenium helm chart in my newly deployed aks Kuberbetes (1.9.6) cluster in Azure. But I don't get it to work. The pod is giving the log below:</p>
<blockquote>
<p>[bram@xforce zalenium]$ kubectl logs -f zalenium-zalenium-hub-6bbd86ff78-m25t2 Kubernetes service account found. Copying files for Dashboard... cp: cannot create regular file '/home/seluser/videos/index.html': Permission denied cp: cannot create directory '/home/seluser/videos/css': Permission denied cp: cannot create directory '/home/seluser/videos/js': Permission denied Starting Nginx reverse proxy... Starting Selenium Hub... ..........08:49:14.052 [main] INFO o.o.grid.selenium.GridLauncherV3 - Selenium build info: version: '3.12.0', revision: 'unknown' 08:49:14.120 [main] INFO o.o.grid.selenium.GridLauncherV3 - Launching Selenium Grid hub on port 4445 ...08:49:15.125 [main] INFO d.z.e.z.c.k.KubernetesContainerClient - Initialising Kubernetes support ..08:49:15.650 [main] WARN d.z.e.z.c.k.KubernetesContainerClient - Error initialising Kubernetes support. io.fabric8.kubernetes.client.KubernetesClientException: Operation: [get] for kind: [Pod] with name: [zalenium-zalenium-hub-6bbd86ff78-m25t2] in namespace: [default] failed. at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:62) at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:71) at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:206) at io.fabric8.kubernetes.client.dsl.base.BaseOperation.get(BaseOperation.java:162) at de.zalando.ep.zalenium.container.kubernetes.KubernetesContainerClient.(KubernetesContainerClient.java:87) at de.zalando.ep.zalenium.container.ContainerFactory.createKubernetesContainerClient(ContainerFactory.java:35) at de.zalando.ep.zalenium.container.ContainerFactory.getContainerClient(ContainerFactory.java:22) at de.zalando.ep.zalenium.proxy.DockeredSeleniumStarter.(DockeredSeleniumStarter.java:59) at de.zalando.ep.zalenium.registry.ZaleniumRegistry.(ZaleniumRegistry.java:74) at de.zalando.ep.zalenium.registry.ZaleniumRegistry.(ZaleniumRegistry.java:62) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at java.lang.Class.newInstance(Class.java:442) at org.openqa.grid.web.Hub.(Hub.java:93) at org.openqa.grid.selenium.GridLauncherV3$2.launch(GridLauncherV3.java:291) at org.openqa.grid.selenium.GridLauncherV3.launch(GridLauncherV3.java:122) at org.openqa.grid.selenium.GridLauncherV3.main(GridLauncherV3.java:82) Caused by: javax.net.ssl.SSLPeerUnverifiedException: Hostname kubernetes.default.svc not verified: certificate: sha256/OyzkRILuc6LAX4YnMAIGrRKLmVnDgLRvCasxGXDhSoc= DN: CN=client, O=system:masters subjectAltNames: [10.0.0.1] at okhttp3.internal.connection.RealConnection.connectTls(RealConnection.java:308) at okhttp3.internal.connection.RealConnection.establishProtocol(RealConnection.java:268) at okhttp3.internal.connection.RealConnection.connect(RealConnection.java:160) at okhttp3.internal.connection.StreamAllocation.findConnection(StreamAllocation.java:256) at okhttp3.internal.connection.StreamAllocation.findHealthyConnection(StreamAllocation.java:134) at okhttp3.internal.connection.StreamAllocation.newStream(StreamAllocation.java:113) at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:42) at 
okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:93) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:125) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) at io.fabric8.kubernetes.client.utils.ImpersonatorInterceptor.intercept(ImpersonatorInterceptor.java:56) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) at io.fabric8.kubernetes.client.utils.HttpClientUtils$2.intercept(HttpClientUtils.java:107) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:147) at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:121) at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:200) at okhttp3.RealCall.execute(RealCall.java:77) at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:379) at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:344) at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:313) at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleGet(OperationSupport.java:296) at io.fabric8.kubernetes.client.dsl.base.BaseOperation.handleGet(BaseOperation.java:770) at io.fabric8.kubernetes.client.dsl.base.BaseOperation.getMandatory(BaseOperation.java:195) ... 16 common frames omitted 08:49:15.651 [main] INFO d.z.e.z.c.k.KubernetesContainerClient - About to clean up any left over selenium pods created by Zalenium Usage: [options] Options: --debug, -debug : enables LogLevel.FINE. Default: false --version, -version Displays the version and exits. Default: false -browserTimeout in seconds : number of seconds a browser session is allowed to hang while a WebDriver command is running (example: driver.get(url)). If the timeout is reached while a WebDriver command is still processing, the session will quit. Minimum value is 60. An unspecified, zero, or negative value means wait indefinitely. -matcher, -capabilityMatcher class name : a class implementing the CapabilityMatcher interface. Specifies the logic the hub will follow to define whether a request can be assigned to a node. For example, if you want to have the matching process use regular expressions instead of exact match when specifying browser version. ALL nodes of a grid ecosystem would then use the same capabilityMatcher, as defined here. -cleanUpCycle in ms : specifies how often the hub will poll running proxies for timed-out (i.e. hung) threads. Must also specify "timeout" option -custom : comma separated key=value pairs for custom grid extensions. NOT RECOMMENDED -- may be deprecated in a future revision. Example: -custom myParamA=Value1,myParamB=Value2 -host IP or hostname : usually determined automatically. 
Most commonly useful in exotic network configurations (e.g. network with VPN) Default: 0.0.0.0 -hubConfig filename: a JSON file (following grid2 format), which defines the hub properties -jettyThreads, -jettyMaxThreads : max number of threads for Jetty. An unspecified, zero, or negative value means the Jetty default value (200) will be used. -log filename : the filename to use for logging. If omitted, will log to STDOUT -maxSession max number of tests that can run at the same time on the node, irrespective of the browser used -newSessionWaitTimeout in ms : The time after which a new test waiting for a node to become available will time out. When that happens, the test will throw an exception before attempting to start a browser. An unspecified, zero, or negative value means wait indefinitely. Default: 600000 -port : the port number the server will use. Default: 4445 -prioritizer class name : a class implementing the Prioritizer interface. Specify a custom Prioritizer if you want to sort the order in which new session requests are processed when there is a queue. Default to null ( no priority = FIFO ) -registry class name : a class implementing the GridRegistry interface. Specifies the registry the hub will use. Default: de.zalando.ep.zalenium.registry.ZaleniumRegistry -role options are [hub], [node], or [standalone]. Default: hub -servlet, -servlets : list of extra servlets the grid (hub or node) will make available. Specify multiple on the command line: -servlet tld.company.ServletA -servlet tld.company.ServletB. The servlet must exist in the path: /grid/admin/ServletA /grid/admin/ServletB -timeout, -sessionTimeout in seconds : Specifies the timeout before the server automatically kills a session that hasn't had any activity in the last X seconds. The test slot will then be released for another test to use. This is typically used to take care of client crashes. For grid hub/node roles, cleanUpCycle must also be set. -throwOnCapabilityNotPresent true or false : If true, the hub will reject all test requests if no compatible proxy is currently registered. If set to false, the request will queue until a node supporting the capability is registered with the grid. -withoutServlet, -withoutServlets : list of default (hub or node) servlets to disable. Advanced use cases only. Not all default servlets can be disabled. Specify multiple on the command line: -withoutServlet tld.company.ServletA -withoutServlet tld.company.ServletB org.openqa.grid.common.exception.GridConfigurationException: Error creating class with de.zalando.ep.zalenium.registry.ZaleniumRegistry : null at org.openqa.grid.web.Hub.(Hub.java:97) at org.openqa.grid.selenium.GridLauncherV3$2.launch(GridLauncherV3.java:291) at org.openqa.grid.selenium.GridLauncherV3.launch(GridLauncherV3.java:122) at org.openqa.grid.selenium.GridLauncherV3.main(GridLauncherV3.java:82) Caused by: java.lang.ExceptionInInitializerError at de.zalando.ep.zalenium.registry.ZaleniumRegistry.(ZaleniumRegistry.java:74) at de.zalando.ep.zalenium.registry.ZaleniumRegistry.(ZaleniumRegistry.java:62) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at java.lang.Class.newInstance(Class.java:442) at org.openqa.grid.web.Hub.(Hub.java:93) ... 
3 more Caused by: java.lang.NullPointerException at java.util.TreeMap.putAll(TreeMap.java:313) at io.fabric8.kubernetes.client.dsl.base.BaseOperation.withLabels(BaseOperation.java:411) at io.fabric8.kubernetes.client.dsl.base.BaseOperation.withLabels(BaseOperation.java:48) at de.zalando.ep.zalenium.container.kubernetes.KubernetesContainerClient.deleteSeleniumPods(KubernetesContainerClient.java:393) at de.zalando.ep.zalenium.container.kubernetes.KubernetesContainerClient.initialiseContainerEnvironment(KubernetesContainerClient.java:339) at de.zalando.ep.zalenium.container.ContainerFactory.createKubernetesContainerClient(ContainerFactory.java:38) at de.zalando.ep.zalenium.container.ContainerFactory.getContainerClient(ContainerFactory.java:22) at de.zalando.ep.zalenium.proxy.DockeredSeleniumStarter.(DockeredSeleniumStarter.java:59) ... 11 more ...........................................................................................................................................................................................GridLauncher failed to start after 1 minute, failing... % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 182 100 182 0 0 36103 0 --:--:-- --:--:-- --:--:-- 45500</p>
</blockquote>
<p>A <code>kubectl describe pod</code> gives:</p>
<pre><code>Warning  Unhealthy  4m (x12 over 6m)  kubelet, aks-agentpool-93668098-0  Readiness probe failed: HTTP probe failed with statuscode: 502
</code></pre>
<p>Zalenium Image Version(s):
dosel/zalenium:3</p>
<p>If using Kubernetes, specify your environment, and if relevant your manifests:
I use the templates as is from <a href="https://github.com/zalando/zalenium/tree/master/docs/k8s/helm" rel="nofollow noreferrer">https://github.com/zalando/zalenium/tree/master/docs/k8s/helm</a></p>
<p>I guess it has something to do with RBAC because of this part:
"Error initialising Kubernetes support. io.fabric8.kubernetes.client.KubernetesClientException: Operation: [get] for kind: [Pod] with name: [zalenium-zalenium-hub-6bbd86ff78-m25t2] in namespace: [default] failed. at "</p>
<p>I created a clusterrole and clusterrolebinding for the service account zalenium-zalenium that is automatically created by the Helm chart.</p>
<pre><code>kubectl create clusterrole zalenium --verb=get,list,watch,update,delete,create,patch --resource=pods,deployments,secrets
kubectl create clusterrolebinding zalenium --clusterrole=zalnium --serviceaccount=zalenium-zalenium --namespace=default
</code></pre>
| <p>The issue had to do with Azure's AKS and Kubernetes. It has been fixed.
See GitHub issue <a href="https://github.com/Azure/AKS/issues/399" rel="nofollow noreferrer">399</a>.</p>
|
<p>I can't configure a livenessProbe with these attributes for a k8s Deployment. I tried apiVersion: apps/v1beta1, apps/v1, apps/v1beta2 and apps/v1beta3.</p>
<p>I want to add the attributes :</p>
<ul>
<li>initialDelaySeconds</li>
<li>periodSeconds</li>
<li>timeoutSeconds</li>
</ul>
<p>If I define any of these attributes I get an error </p>
<blockquote>
<p>unknown field "periodSeconds" in io.k8s.api.core.v1.HTTPGetAction</p>
</blockquote>
| <p>Yes it was the indent level thanks a lot, and it's correct on the documentation so I think this question isn't useful in general sorry</p>
|
<p>We are using Google Kubernetes Engine on 1.9.6-gke.1, and have a cluster with several nodepools for which we enable auto-scaling because the nodes in them contain attached GPUs (p100s).</p>
<p>Sometimes we run jobs overnight via a Kubernetes Pod on a node that was brought up due to a triggered auto-scaling event, and many hours later return to find that the pod has disappeared because the pod has terminated, in some unknown state, and since no other pod is scheduled to the node for 10 minutes, the node it ran on has been drained and removed. </p>
<p>That is, once the node is gone the pod disappears from the perspective of the Kubernetes logs and control plane ie.running things like <code>kubectl get pods</code> and <code>kubectl describe pod</code>. We would like to be able to know the status of these pods at the time of termination, eg. 'Completed, Error, OOM'. Is there a way to have this pod lifecycle information logged in Google Cloud Platform, perhaps via Stackdriver or other? If it's already available where would we find it?</p>
<p>Note this is for pods for which the node the pod ran on is no longer in the cluster.</p>
<p>Thanks in advance!</p>
| <p>There are two logs within Stackdriver Logging where you can check GKE logs. The first one is called "GKE Cluster Operations", and the second is called "Container Logs". </p>
<p>The "GKE Cluster Operations" logs will show you all the operations that take place within the cluster such as pod creation, container creation, etc...</p>
<p>The "Container Logs" will log the operations of a Container. I created a simple job using the yaml file given <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#running-an-example-job" rel="nofollow noreferrer">here</a>. After running the job, I went into the "Container Logs" and it showed the output of the container successfully.</p>
<p>In this case, you should be able to see the logs of the pod status from the "GKE Cluster Operations" logs within GCP.</p>
|
<p>I have deployed Kubernetes using the link <a href="https://kubernetes.io/docs/tasks/tools/install-kubeadm/" rel="nofollow noreferrer">Kubernetes official page</a></p>
<p>I see that Kubernetes is deployed because in the end I got this:</p>
<pre><code>Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 172.16.32.101:6443 --token ma1d4q.qemewtyhkjhe1u9f --discovery-token-ca-cert-hash sha256:408b1fdf7a5ea5f282741db91ebc5aa2823802056ea9da843b8ff52b1daff240
</code></pre>
<p>When I do kubectl get pods it throws this error: </p>
<pre><code># kubectl get pods
The connection to the server 127.0.0.1:6553 was refused - did you specify the right host or port?
</code></pre>
<p>When I look at the cluster-info it says the following: </p>
<pre><code>kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:6553
</code></pre>
<p>But when I look at the config it shows the following: </p>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNE1EWXlNREEyTURJd04xb1hEVEk0TURZeE56QTJNREl3TjFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT0ZXCkxWQkJoWmZCQms4bXJrV0w2MmFGd2U0cUYvZkRHekJidnE5TFpGN3M4UW1rdDJVUlo5YmtSdWxzSlBrUXV1U2IKKy93YWRoM054S0JTQUkrTDIrUXdyaDVLSy9lU0pvbjl5TXJlWnhmRFdPTno2Y3c4K2txdnh5akVsRUdvSEhPYQpjZHpuZnJHSXVZS3lwcm1GOEIybys0VW9ldytWVUsxRG5Ra3ZwSUZmZ1VjVWF4UjVMYTVzY2ZLNFpweTU2UE4wCjh1ZjdHSkhJZFhNdXlVZVpFT3Z3ay9uUTM3S1NlWHVhcUlsWlFqcHQvN0RrUmFZeGdTWlBqSHd5c0tQOHMzU20KZHJoeEtyS0RPYU1Wczd5a2xSYjhzQjZOWDB6UitrTzhRNGJOUytOYVBwbXFhb3hac1lGTmhCeGJUM3BvUXhkQwpldmQyTmVlYndSWGJPV3hSVzNjQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFDTFBlT0s5MUdsdFJJTjdmMWZsNmlINTg0c1UKUWhBdW1xRTJkUHFNT0ZWWkxjbDVJZlhIQ1dGeE13OEQwOG1NNTZGUTNEVDd4bi9lNk5aK1l2M1JrK2hBdmdaQgpaQk9XT3k4UFJDOVQ2S1NrYjdGTDRSWDBEamdSeE5WTFIvUHd1TUczK3V2ZFhjeXhTYVJJRUtrLzYxZjJsTGZmCjNhcTdrWkV3a05pWXMwOVh0YVZGQ21UaTd0M0xrc1NsbDZXM0NTdVlEYlRQSzJBWjUzUVhhMmxYUlZVZkhCMFEKMHVOQWE3UUtscE9GdTF2UDBDRU1GMzc4MklDa1kzMDBHZlFEWFhiODA5MXhmcytxUjFQbEhJSHZKOGRqV29jNApvdTJ1b2dHc2tGTDhGdXVFTTRYRjhSV0grZXpJRkRjV1JsZnJmVHErZ2s2aUs4dGpSTUNVc2lFNEI5QT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
**server: https://172.16.32.101:6443**
</code></pre>
<p>Even telnet shows that there is a process listening on 6443 but not on 6553.
How can I change the port, and how can I fix the issue?</p>
<p>Any help would be of great use </p>
<p>Thanks in advance.</p>
| <p>It looks like your latest kubectl config interferes with the previous clusters' configurations.</p>
<p>It is possible to have settings for <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="nofollow noreferrer">several different clusters</a> in one <code>.kube/config</code> or in separate files.<br>
But in some cases, you may want to manage only the cluster you've just created. </p>
<p><strong>Note:</strong> After tearing down the exited cluster using <code>kubeadm reset</code> followed by initializing fresh cluster using <code>kubeadm init</code>, new certificates will be generated. To operate the new cluster, you have to update kubectl configuration or replace it with the new one.</p>
<p>To clean up old <code>kubectl</code> configurations and apply the last one, run the following commands:</p>
<pre><code>rm -rf $HOME/.kube
unset KUBECONFIG
# Check if you have KUBECONFIG configured in profile dot files and comment or remove it.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</code></pre>
<p>It gives you up-to-date configuration for the last cluster you've created using kubeadm tool. </p>
<p><strong>Note:</strong> You should copy kubectl configuration for all users accounts which you are going to use to manage the cluster.</p>
<p><a href="http://jeffmendoza.github.io/kubernetes/v1.0/docs/user-guide/kubeconfig-file.html" rel="nofollow noreferrer">Here</a> are some examples of how to manage config file using the command line.</p>
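<p>For instance, a few common commands for inspecting and switching between cluster configurations (the context name below is just the default one kubeadm generates and may differ on your machine):</p>
<pre><code># List the contexts known to the current kubeconfig
kubectl config get-contexts

# Switch to a specific context
kubectl config use-context kubernetes-admin@kubernetes

# Use an alternative kubeconfig file for a single command
kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes
</code></pre>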
|
<p>I came across Skaffold, trying to streamline local Docker image development. We basically want to be able to quickly iterate on an image and deploy it using an existing stable helm chart version.</p>
<p>In the documentation it says:</p>
<p><a href="https://github.com/GoogleContainerTools/skaffold#installation" rel="noreferrer">https://github.com/GoogleContainerTools/skaffold#installation</a></p>
<blockquote>
<ol start="5">
<li>Docker image registry.
Your docker client should be configured to push to an external docker
image repository. If you're using a minikube or Docker for Desktop
cluster, you can skip this requirement.</li>
</ol>
</blockquote>
<p>However, when I'm trying to run the example <code>examples/helm-deployment</code> it actually tries to connect to a registry:</p>
<pre><code>$ skaffold dev
...
The push refers to repository [docker.io/library/skaffold-helm]
b3fb485368bf: Preparing
6f67560e4591: Preparing
d626a8ad97a1: Preparing
WARN[0012] run: build: build step: running push: denied: requested access to the resource is denied
</code></pre>
<p>Do I always need a registry, even using Minikube and Skaffold?</p>
| <p>With skaffold, you don't need any registry if you want to deploy to minikube. By default, it <a href="https://github.com/GoogleContainerTools/skaffold/blob/a0c641b4d626b604e34e765d851a05207b253639/examples/annotated-skaffold.yaml#L60" rel="noreferrer">seamlessly puts</a> the image to minikube without any external registry involved.</p>
<p>Consider the following <code>deployment.yaml</code> file:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: petstore-frontend
labels:
app: petstore
tier: frontend
spec:
replicas: 1
selector:
matchLabels:
app: petstore
tier: frontend
template:
metadata:
labels:
        app: petstore
tier: frontend
spec:
containers:
- name: frontend
image: petstore/frontend
imagePullPolicy: Always
ports:
- containerPort: 80
</code></pre>
<p>Note that <code>image</code> field does not contain any tag, because skaffold will inject one for you.</p>
<p>The minimal <code>skaffold.yaml</code> file would look like this:</p>
<pre><code>apiVersion: skaffold/v1alpha2
kind: Config
profiles:
- name: minikube
build:
artifacts:
- imageName: petstore/frontend
deploy:
manifests:
- deployment.yaml
</code></pre>
<p>Here, <code>imageName</code> should match the <code>image</code> field value from <code>deployment.yaml</code> file.</p>
<p>Now you can just run <code>skaffold run -p minikube</code> and it will build your image, put it to minikube and apply the deployment.</p>
<p><strong>However</strong>, when deploying via Helm charts, there are the small but important things you also need to specify.</p>
<p>Consider the following <code>deployment.yaml</code> template (I've omited some fields for brevity):</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: {{ template "petstore-frontend.fullname" . }}
labels:
# <labels>
spec:
replicas: 1
selector:
matchLabels:
# <labels>
template:
metadata:
labels:
# <labels>
spec:
containers:
- name: {{ template "petstore-frontend.name" . }}
image: {{ .Values.image.repository }}{{ if .Values.image.tag }}:{{ .Values.image.tag }}{{ end }}
imagePullPolicy: Always
ports:
- containerPort: 80
</code></pre>
<p>It is important to be able to specify the image without any tag, so that skaffold can inject one for you.</p>
<p>Then a <code>skaffold.yaml</code> file would look like this:</p>
<pre><code>apiVersion: skaffold/v1alpha2
kind: Config
profiles:
- name: minikube
build:
artifacts:
- imageName: petstore/frontend
deploy:
helm:
releases:
- name: petstore-frontend
chartPath: charts/petstore-frontend
values:
"image.repository": petstore/frontend
setValues:
"image.tag": ""
</code></pre>
<p>Here, <code>image.repository</code> should be exactly the same as <code>imageName</code> in the <code>artifacts</code> section, and <code>image.tag</code> is set to an empty string so that, as mentioned above, skaffold can inject it.</p>
<p>In conclusion, it is worth mentioning that skaffold is still in alpha, so the API may change and maybe these "tricks" won't be needed anymore. Still, skaffold already provides a lot of convenience for rapid application development and deployment.</p>
<p>I hope this helps.</p>
|
<p><strong>The ndots problem</strong></p>
<p>I have a Kubernetes cluster (running on openstack) and a separate cluster of VMs running on the same openstack tenancy. I needed to be able to resolve services running in the VM cluster from the kubernetes cluster.</p>
<ol>
<li>I set up ipvs "loadbalancer"</li>
<li>On this loadbalancer I connected eth1 to vm cluster, eth0 to kubernetes cluster</li>
<li>set up static routes (on host VMs) to supply route to this LB, masquerade rules on LB, allowed ip addresses on LB via opensatck etc</li>
<li>I then set my DNS server in the VM cluster as a <code>StubDomain</code> of <code>kube-dns</code> as per <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/" rel="nofollow noreferrer">Kubernetes docs</a></li>
</ol>
<p>nslookup of a service in the VM cluster now worked from kubernetes cluster but resolution was taking on avg 8 seconds. I tried alternating and combining <code>StubDomain</code> with <code>UpstreamNameserver</code> but the results were the same.</p>
<p>After much googling I discovered the <a href="https://rsmitty.github.io/KubeDNS-Tweaks/" rel="nofollow noreferrer">ndots problem</a>. Instead of modifying it I removed <code>ndots:5</code> from the pod doing the lookup configuration and nslookup is now much quicker operating at acceptable speed. </p>
<p>My question is what is the side-effect/trade-off to removing the ndots configuration altogether? My application seems to be working well now and no obvious side-effects yet.</p>
<p>Thanks</p>
| <p>There are no significant side effects. Without the <code>ndots:5</code> option, a name that already contains a dot is first resolved as a fully qualified domain name, so you typically end up making a single DNS query instead of one per search domain. The whole ndots setting is like a secret agent trying to mess up your requests.</p>
|
<p>How are service accounts supposed to be used? Should we create one service account per container? Is there any concept equivalent to "user group" in order to group service accounts?</p>
| <p>Service Accounts are used by Pods to get access to Kubernetes API or Secrets. When a Pod is created, it gets a Service Account. The <code>default</code> Service Account for the same namespace is assigned to a Pod automatically. It can be specified in a description for Pod. Here you can find some examples: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="nofollow noreferrer">Configure Service Accounts for Pods</a></p>
<blockquote>
<p>How are service accounts supposed to be used?</p>
</blockquote>
<p>If you want to configure permissions different from the default, you need to use Service Accounts. </p>
<blockquote>
<p>Should we create one service account per container?</p>
</blockquote>
<p>In Kubernetes, a Pod is a set of one or more containers.<br>
You can create one Service Account and assign it to the first set of Pods. After that, create another Service Account with different permissions and assign it to the second set of Pods.</p>
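<p>For example, a minimal Pod spec that runs under a custom Service Account could look like this (the names are illustrative, not taken from your setup):</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: team-a-sa
---
apiVersion: v1
kind: Pod
metadata:
  name: team-a-pod
spec:
  serviceAccountName: team-a-sa
  containers:
  - name: app
    image: nginx
</code></pre>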
<blockquote>
<p>Is there any concept equivalent to "user group" in order to group service accounts?</p>
</blockquote>
<p>There is no such equivalent. But in terms of "user", "group", "role", we can say that Service Account is like a "role" for a Pod. </p>
<p>For more information, you can look through these links:</p>
<ul>
<li><a href="http://jeffmendoza.github.io/kubernetes/v1.0/docs/user-guide/service-accounts.html" rel="nofollow noreferrer">Service Accounts</a></li>
<li><a href="https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/" rel="nofollow noreferrer">Managing Service Accounts</a></li>
<li><a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">Authenticating</a>, provided by @KonstantinVustin in comments</li>
<li><a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/#a-quick-note-on-service-accounts" rel="nofollow noreferrer">Authorization Overview</a></li>
</ul>
|
<p>We have multiple development teams who work and deploy their applications on kuberenetes. We use helm to deploy our application on kubernetes.</p>
<p>Currently the challenge we are facing with one of our shared clusters. We would like to deploy tiller separate for each team. So they have access to their resources. default Cluster-admin role will not help us and we don't want that.</p>
<p>Let's say we have multiple namespaces for one team. I would want to deploy tiller which has permission to work with resources exist or need to be created in these namespaces.</p>
<p>Team > multiple namespaces
tiller using the service account that has the role ( having full access to namespaces - not all ) associated with it.</p>
| <blockquote>
<p>I would want to deploy tiller which has permission to work with resources exist or need to be created in these namespaces</p>
</blockquote>
<p>According to <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding" rel="nofollow noreferrer">the fine manual</a>, you'll need a <code>ClusterRole</code> per team, defining the kinds of operations on the kinds of resources, but then use a <strong><code>RoleBinding</code></strong> to scope those rules to a specific namespace. The two ends of the binding target will be the team's tiller's <code>ServiceAccount</code> and the team's <code>ClusterRole</code>, and then one <code>RoleBinding</code> instance per <code>Namespace</code> (even though they will be textually identical except for the <code>namespace:</code> portion)</p>
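<p>As a rough sketch (the names, API groups and verbs are illustrative and should be narrowed down to what your teams actually need), the objects for one of the team's namespaces could look like this:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: team-alpha-tiller
rules:
- apiGroups: ["", "apps", "extensions", "batch"]
  resources: ["*"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-alpha-tiller
  namespace: ns-alpha
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: ns-alpha
roleRef:
  kind: ClusterRole
  name: team-alpha-tiller
  apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>You would repeat the <code>RoleBinding</code> for <code>ns-beta</code> and every other namespace owned by the team.</p>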
<p>I actually would expect you could make an internal helm chart that would automate the specifics of that relationship, and then <code>helm install --name team-alpha --set team-namespaces=ns-alpha,ns-beta my-awesome-chart</code> and then grant <em>your</em> tiller <code>cluster-admin</code> or whatever more restrictive <code>ClusterRole</code> you wish.</p>
|
<p>Sometimes, I'll have a terminal tab with <code>kubectl exec bash</code> running in it to check on a container or tail a log file. When I shut my laptop, turn on my VPN, or just lose Wi-Fi for a second, that terminal can sometimes freeze and leave me unable to exit it without closing the terminal tab or killing the process manually.</p>
<p>I know that SSH sessions have an enter-tilda-period key combo that lets you quit in just this situation, but this doesn't seem to work for <code>kubectl exec</code>. Is there any similar way to accomplish this in this situation?</p>
| <p>It is not clear what the root cause of the abnormal hang-ups of the <a href="https://kubernetes.io/docs/reference/kubectl/overview/" rel="nofollow noreferrer">kubectl</a> exec command is.</p>
<p>Getting back to your question, you can force the connection to be dropped and return control of the terminal
to you by adding <code>--request-timeout=<value></code> to the kubectl command line:</p>
<p><code>kubectl exec --request-timeout=5s <pod-name> bash</code></p>
<p>It is also possible to force termination of a hung <code>kubectl exec</code> by sending the -9 (SIGKILL) signal using the kill command:</p>
<p><code>kill -9 $(pidof kubectl)</code></p>
|
<p>If I run <code>systemctl restart kubelet</code> will it impact the other running nodes? Will it stop the cluster? Can you foresee any impact?
Any help would be appreciated!</p>
| <p>Before answering, a small disclaimer: I assume the restart is not accompanied by multiple potentially breaking configuration changes to kubelet, and that kubelet is indeed restarted rather than crashing due to misconfiguration. The answer is aimed at the scenario of a simple kubelet restart (with at most a small, non-breaking configuration change).</p>
<blockquote>
<p>will it impact the other running nodes?</p>
</blockquote>
<p>A restart as such should not be an issue (provided it really is a restart as defined in the disclaimer above). It does matter, though, whether you restart kubelet on a master or on a worker node. During a restart on the master, as long as all system pods keep running uninterrupted, all should be well; but if any system pod needs a restart while kubelet is down, you are in trouble until kubelet gets operational again... For a worker node (if you didn't change the default), Kubernetes waits 5 minutes for the node to get back to the Ready state (kubelet becomes operational again after the restart). Again, supposing the pods stay live and well during that time: if any of them fails its liveness probe it will be restarted on another node, but this will not be communicated back to the node in question until kubelet is back online (and Docker will continue to run the old copy until then)...</p>
<blockquote>
<p>Will it stop the cluster?</p>
</blockquote>
<p>Again, if on a worker node: no. If on the master: also no, as long as the system pods (API server, controller manager, scheduler, DNS, proxy, ...) continue to run uninterrupted on the master during the restart.</p>
<blockquote>
<p>Can you foresee any impact?</p>
</blockquote>
<p>If kubelet crashes after the restart, you are operating on the master, and any system pod crashes while kubelet is down, you are in for trouble (depending on what crashed).</p>
<p>Make sure that you didn't introduce any config changes that will break kubelet on the node you are restarting, especially on the master node.</p>
<p>To conclude: a simple restart should not be an issue. Make sure your config is correct and the kubelet restart is clean, and monitor the system pods during the restart if you are restarting on the master.</p>
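<p>For example, while restarting you can watch the node status and the system pods from another machine that has working kubectl access:</p>
<pre><code># Watch node readiness during the kubelet restart
kubectl get nodes -w

# Watch the system pods for restarts or failures
kubectl get pods -n kube-system -w
</code></pre>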
|
<p><strong>What I have</strong></p>
<p>I have used Kube secrets for private Docker registry authentication in the <code>default</code> namespace. That works as expected. For example:</p>
<pre><code>$ kubectl get secret regsecret
NAME TYPE DATA AGE
regsecret kubernetes.io/dockerconfigjson 1 30m
</code></pre>
<p>Which is referenced in my <code>deployment.yml</code> as shown in the snippet below:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx
spec:
replicas: 1
template:
...
spec:
containers:
- name: bootstrap-nginx
image: quay.io/example/nginx:latest
...
imagePullSecrets:
- name: regsecret
</code></pre>
<p><strong>Here's my question</strong></p>
<p>I need to create the <code>regsecret</code> above in a <code>namepsace</code>, for example, <code>myns</code> as shown below:</p>
<pre><code>$ kubectl get secret regsecret --namespace=myns
NAME TYPE DATA AGE
regsecret kubernetes.io/dockerconfigjson 1 30m
</code></pre>
<p>With this, how do I reference <code>regsecret</code> from <code>myns</code> namespace into my deployment spec? If I use <code>imagePullSecrets</code> as shown above, it fails saying that Kubernetes could not pull the image (the secret <code>regsecret</code> could not be found). Is there a way to reference "fully qualified" secret name in <code>imagePullSecrets</code>?</p>
| <p>By design, there is no way to accomplish this. You will need to create the <code>regsecret</code> in the same namespace where your Deployment is.</p>
<blockquote>
<p><code>ImagePullSecrets</code> is an optional list of references to secrets <strong>in the same
namespace</strong> to use for pulling any of the images used by this PodSpec. If
specified, these secrets will be passed to individual puller
implementations for them to use. For example, in the case of docker, only
DockerConfig type secrets are honored.</p>
</blockquote>
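<p>For example, you can create a copy of the secret in the target namespace (the values are placeholders for your own registry credentials):</p>
<pre><code>kubectl create secret docker-registry regsecret \
  --namespace=myns \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>
</code></pre>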
<p>See also:
<a href="https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod</a></p>
|
<p>I am developing a chart and I had an error in it—incorrectly placed <code>imagePullSecrets</code>. When I tried to install it via</p>
<pre><code>helm install ./mychart
</code></pre>
<p>the misplaced element was simply ignored, and I wondered what was wrong.</p>
<p>When I did</p>
<pre><code>helm template ./mychart | kubectl apply --dry-run -f -
</code></pre>
<p>it instead printed:</p>
<pre><code>error: error validating "STDIN": error validating data: ValidationError(Deployment.spec.template.spec.containers[0]): unknown field "imagePullSecrets" in io.k8s.api.core.v1.Container
</code></pre>
<p>which clearly shows what is wrong. I am not sure whether it matches what the tiller actually does with the expanded templates.</p>
<p>But if I just do a</p>
<pre><code>helm install --dry-run --debug ./mychart
</code></pre>
<p>it just shows the expanded template and looks OK.</p>
<p>So how do I correctly verify all my templates match corresponding schemata with helm?</p>
| <p>You can lint the chart by running <code>helm lint ./mychart</code>, which should print the following if an issue is found:</p>
<pre><code>$ helm lint ./mychart
==> Linting ./mychart
[ERROR] Chart.yaml: version is required
[INFO] Chart.yaml: icon is recommended
Error: 1 chart(s) linted, 1 chart(s) failed
</code></pre>
<p>See <a href="https://docs.helm.sh/helm/#helm-lint" rel="noreferrer" title="helm lint">helm lint</a>.</p>
|
<p>Is it possible to reference a PVC (in namespace-A) from namespace-B. Why I need that? I would like to allow the customer (private cloud) to point their volume through PVC, allowing them full control of the storage. At the Helm install time will ask for PVC information and will be used in the pod spec. </p>
| <p>It is not possible, simply because <code>PersistentVolumeClaim</code> is a namespaced object. You can find a detailed answer here: <a href="https://stackoverflow.com/a/35366775/9065705">https://stackoverflow.com/a/35366775/9065705</a></p>
|
<p>I have set up filebeat as a DaemonSet in Kubernetes to forward logs to ES + Kibana from docker containers
(by referencing <a href="https://www.elastic.co/guide/en/beats/filebeat/master/running-on-kubernetes.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/beats/filebeat/master/running-on-kubernetes.html</a>),
and the logs are forwarded successfully.</p>
<p>The problem is when there are multi-line logs, they are forwarded to ES as separate log lines. </p>
<p>Here is the part of my filebeat-kuberneted.yaml where I have configured the multiline filter:</p>
<pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-prospectors
namespace: kube-system
labels:
k8s-app: filebeat
kubernetes.io/cluster-service: "true"
data:
kubernetes.yml: |-
- type: docker
containers.ids:
- "${data.kubernetes.container.id}"
processors:
- add_kubernetes_metadata:
in_cluster: true
multiline:
pattern: '^[[:space:]]'
negate: false
match: after
---
</code></pre>
<p>Here is my full filebeat-kuberneted.yaml file</p>
<pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
namespace: kube-system
labels:
k8s-app: filebeat
kubernetes.io/cluster-service: "true"
data:
filebeat.yml: |-
filebeat.config:
prospectors:
# Mounted `filebeat-prospectors` configmap:
path: ${path.config}/prospectors.d/*.yml
# Reload prospectors configs as they change:
reload.enabled: true
modules:
path: ${path.config}/modules.d/*.yml
# Reload module configs as they change:
reload.enabled: false
filebeat.autodiscover:
providers:
- type: kubernetes
templates:
- condition:
or:
- equals:
kubernetes.container.name: customer-infor-service
config:
- type: docker
containers.ids:
- "${data.kubernetes.container.id}"
multiline.pattern: '^[[:space:]]'
multiline.negate: false
multiline.match: after
processors:
- add_cloud_metadata:
cloud.id: ${ELASTIC_CLOUD_ID}
cloud.auth: ${ELASTIC_CLOUD_AUTH}
output.elasticsearch:
hosts: [ "192.168.1.245:9200"]
username: ""
password: ""
---
apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-prospectors
namespace: kube-system
labels:
k8s-app: filebeat
kubernetes.io/cluster-service: "true"
data:
kubernetes.yml: |-
- type: docker
containers.ids:
- "${data.kubernetes.container.id}"
processors:
- add_kubernetes_metadata:
in_cluster: true
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: filebeat
namespace: kube-system
labels:
k8s-app: filebeat
kubernetes.io/cluster-service: "true"
spec:
template:
metadata:
labels:
k8s-app: filebeat
kubernetes.io/cluster-service: "true"
spec:
serviceAccountName: filebeat
terminationGracePeriodSeconds: 30
containers:
- name: filebeat
image: docker.elastic.co/beats/filebeat:6.2.4
args: [
"-c", "/etc/filebeat.yml",
"-e",
]
env:
- name: ELASTICSEARCH_HOST
value: "192.168.1.245"
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: ""
- name: ELASTICSEARCH_PASSWORD
value: ""
- name: ELASTIC_CLOUD_ID
value:
- name: ELASTIC_CLOUD_AUTH
value:
securityContext:
runAsUser: 0
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
volumeMounts:
- name: config
mountPath: /etc/filebeat.yml
readOnly: true
subPath: filebeat.yml
- name: prospectors
mountPath: /usr/share/filebeat/prospectors.d
readOnly: true
- name: data
mountPath: /usr/share/filebeat/data
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
volumes:
- name: config
configMap:
defaultMode: 0600
name: filebeat-config
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: prospectors
configMap:
defaultMode: 0600
name: filebeat-prospectors
- name: data
emptyDir: {}
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: filebeat
subjects:
- kind: ServiceAccount
name: filebeat
namespace: kube-system
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: filebeat
labels:
k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- pods
verbs:
- get
- watch
- list
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
namespace: kube-system
labels:
k8s-app: filebeat
---
</code></pre>
<p>However I am able to successfully apply filebeat multi-line filter on docker without kubernetes as well as on non-docker deployments. So I guess the problem is with my filebeat-kuberneted.yaml configuration file. </p>
<p>Am I missing something in my filebeat-kuberneted.yaml configuration?</p>
| <p>I was finally able to solve the issue by using the multiline filter under
<code>filebeat.autodiscover:</code></p>
<pre><code> filebeat.autodiscover:
providers:
- type: kubernetes
templates:
- condition:
or:
- equals:
kubernetes.container.name: car-search
- equals:
kubernetes.container.name: cart-service
config:
- type: docker
containers.ids:
- "${data.kubernetes.container.id}"
multiline.pattern: '^[[:space:]]'
multiline.negate: false
multiline.match: after
processors:
- add_cloud_metadata:
</code></pre>
<p>ref <a href="https://github.com/moby/moby/issues/22920" rel="nofollow noreferrer">https://github.com/moby/moby/issues/22920</a></p>
<p>(corrected indentation in yaml above)</p>
|
<p>I have a kubernetes setup that looks like this:</p>
<p>nginx ingress -> load balancer -> nginx app</p>
<p>after getting an SSL certificate for <strong>www</strong>.foo.com, I've installed it in my nginx ingress as a secret, and it works as expected - traffic to <code>www.foo.com</code> gets redirected to the <code>https</code> version instead, and browsers display a secure connection indicator. Great. </p>
<p>What <em>hasn't</em> been easy, however, is getting the ingress to redirect <strong>non-www</strong> traffic to the www version of the site. <s>I've tried using <code>kubernetes.io/from-to-www-redirect: "true"</code>, but it doesn't seem to do anything - navigating to <code>foo.com</code> doesn't redirect me to the <strong>www</strong> version of the url, but either takes me to an insecure version of my site, or navigates me to <code>default backend - 404</code> depending on whether i include <code>foo.com</code> as a host with it's own path in my ingress.</p>
<p>I have been able to set up a patchy redirect by adding the following to my actual application's nginx config -</p>
<pre><code>server {
listen 80;
server_name foo.com;
return 301 http://www.foo.com$request_uri;
}
</code></pre>
<p></s></p>
<p><strong>UPDATE:</strong> <code>from-to-www-redirect</code> DOES work; you just have to reference it with <code>nginx.ingress.kubernetes.io</code> rather than <code>kubernetes.io</code> as I was. But, this only works for <code>foo.com</code> - typing in <code>https://foo.com</code> explicitly causes browsers to display a security warning and no redirect to the proper URL of <code>https://www.foo.com</code> occurs. </p>
<p>Here's my current config for the nginx ingress itself:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: foo-https-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
spec:
rules:
- host: www.foo.com
http:
paths:
- backend:
serviceName: foo-prod-front
servicePort: 80
path: /
tls:
- hosts:
- www.foo.com
secretName: tls-secret
</code></pre>
| <p>You need to add the certificate for the domain you want to be redirected:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: foo-https-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
spec:
rules:
- host: foo.com
http:
paths:
- backend:
serviceName: foo-prod-front
servicePort: 80
path: /
- host: www.foo.com
http:
paths:
- backend:
serviceName: foo-prod-front
servicePort: 80
path: /
tls:
- hosts:
- foo.com
- www.foo.com
secretName: tls-secret
</code></pre>
<p>I am not completely sure, whether <code>from-to-www-redirect</code> works with this setup, but you can replace it with the following lines, which do work:</p>
<pre><code> nginx.ingress.kubernetes.io/configuration-snippet: |
if ($host = 'foo.com' ) {
rewrite ^ https://www.foo.com$request_uri permanent;
}
</code></pre>
|
<p>I am trying to learn Kubernetes, and I have successfully set up a cluster (1 node) on bare metal, deployed a service, and exposed it via ingress.</p>
<p>I tried to implement traefik to get Let's Encrypt certificates, but I couldn't get it working, and while debugging I noticed that my DNS service wasn't working properly (39 restarts).</p>
<p>I figured I would try to start over, because I had been playing around a lot, and also to try flannel instead of calico.</p>
<p>I now has a cluster which was created by the following commands:</p>
<pre><code>kubeadm init --pod-network-cidr=192.168.0.0/16
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
</code></pre>
<p>Testing the DNS with the following commands gives an error:</p>
<pre><code>kubectl create -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/docs/tasks/administer-cluster/busybox.yaml
kubectl exec -ti busybox -- nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'kubernetes.default'
command terminated with exit code 1
</code></pre>
<p>By investigating the status of the pods, I see the DNS pod is restarting:</p>
<pre><code>kubectl get po --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default busybox 1/1 Running 0 11m
kube-system etcd-kubernetes-master 1/1 Running 0 15m
kube-system kube-apiserver-kubernetes-master 1/1 Running 0 15m
kube-system kube-controller-manager-kubernetes-master 1/1 Running 0 15m
kube-system kube-dns-86f4d74b45-8tgff 3/3 Running 2 25m
kube-system kube-flannel-ds-6cd9h 1/1 Running 0 15m
kube-system kube-flannel-ds-h78ld 1/1 Running 0 13m
kube-system kube-proxy-95kkd 1/1 Running 0 13m
kube-system kube-proxy-lq7hx 1/1 Running 0 25m
kube-system kube-scheduler-kubernetes-master 1/1 Running 0 15m
</code></pre>
<p>The DNS pod logs say the following:</p>
<pre><code>kubectl logs kube-dns-86f4d74b45-8tgff dnsmasq -n kube-system
I0621 08:41:51.414587 1 main.go:76] opts: {{/usr/sbin/dnsmasq [-k --cache-size=1000 --no-negcache --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053] true} /etc/k8s/dns/dnsmasq-nanny 10000000000}
I0621 08:41:51.414709 1 nanny.go:94] Starting dnsmasq [-k --cache-size=1000 --no-negcache --log-facility=- --server=/cluster.local/127.0.0.1#10053 --server=/in-addr.arpa/127.0.0.1#10053 --server=/ip6.arpa/127.0.0.1#10053]
I0621 08:41:52.255053 1 nanny.go:119]
W0621 08:41:52.255074 1 nanny.go:120] Got EOF from stdout
I0621 08:41:52.256152 1 nanny.go:116] dnsmasq[10]: started, version 2.78 cachesize 1000
I0621 08:41:52.256216 1 nanny.go:116] dnsmasq[10]: compile time options: IPv6 GNU-getopt no-DBus no-i18n no-IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth no-DNSSEC loop-detect inotify
I0621 08:41:52.256245 1 nanny.go:116] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain ip6.arpa
I0621 08:41:52.256260 1 nanny.go:116] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
I0621 08:41:52.256275 1 nanny.go:116] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain cluster.local
I0621 08:41:52.256320 1 nanny.go:116] dnsmasq[10]: reading /etc/resolv.conf
I0621 08:41:52.256335 1 nanny.go:116] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain ip6.arpa
I0621 08:41:52.256350 1 nanny.go:116] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
I0621 08:41:52.256365 1 nanny.go:116] dnsmasq[10]: using nameserver 127.0.0.1#10053 for domain cluster.local
I0621 08:41:52.256379 1 nanny.go:116] dnsmasq[10]: using nameserver 127.0.0.53#53
I0621 08:41:52.256432 1 nanny.go:116] dnsmasq[10]: read /etc/hosts - 7 addresses
I0621 08:50:43.727968 1 nanny.go:116] dnsmasq[10]: Maximum number of concurrent DNS queries reached (max: 150)
I0621 08:50:53.750313 1 nanny.go:116] dnsmasq[10]: Maximum number of concurrent DNS queries reached (max: 150)
I0621 08:51:03.879573 1 nanny.go:116] dnsmasq[10]: Maximum number of concurrent DNS queries reached (max: 150)
I0621 08:51:13.887735 1 nanny.go:116] dnsmasq[10]: Maximum number of concurrent DNS queries reached (max: 150)
I0621 08:51:23.957996 1 nanny.go:116] dnsmasq[10]: Maximum number of concurrent DNS queries reached (max: 150)
I0621 08:51:34.016679 1 nanny.go:116] dnsmasq[10]: Maximum number of concurrent DNS queries reached (max: 150)
I0621 08:51:43.032107 1 nanny.go:116] dnsmasq[10]: Maximum number of concurrent DNS queries reached (max: 150)
I0621 08:51:53.076274 1 nanny.go:116] dnsmasq[10]: Maximum number of concurrent DNS queries reached (max: 150)
I0621 08:52:03.359643 1 nanny.go:116] dnsmasq[10]: Maximum number of concurrent DNS queries reached (max: 150)
I0621 08:52:13.434993 1 nanny.go:116] dnsmasq[10]: Maximum number of concurrent DNS queries reached (max: 150)
I0621 08:52:23.497330 1 nanny.go:116] dnsmasq[10]: Maximum number of concurrent DNS queries reached (max: 150)
I0621 08:52:33.591295 1 nanny.go:116] dnsmasq[10]: Maximum number of concurrent DNS queries reached (max: 150)
I0621 08:52:43.639024 1 nanny.go:116] dnsmasq[10]: Maximum number of concurrent DNS queries reached (max: 150)
I0621 08:52:53.681231 1 nanny.go:116] dnsmasq[10]: Maximum number of concurrent DNS queries reached (max: 150)
I0621 08:53:03.717874 1 nanny.go:116] dnsmasq[10]: Maximum number of concurrent DNS queries reached (max: 150)
I0621 08:53:13.794725 1 nanny.go:116] dnsmasq[10]: Maximum number of concurrent DNS queries reached (max: 150)
I0621 08:53:23.877015 1 nanny.go:116] dnsmasq[10]: Maximum number of concurrent DNS queries reached (max: 150)
I0621 08:53:33.974114 1 nanny.go:116] dnsmasq[10]: Maximum number of concurrent DNS queries reached (max: 150)
kubectl logs kube-dns-86f4d74b45-8tgff sidecar -n kube-system
I0621 08:41:57.464915 1 main.go:51] Version v1.14.8
I0621 08:41:57.464987 1 server.go:45] Starting server (options {DnsMasqPort:53 DnsMasqAddr:127.0.0.1 DnsMasqPollIntervalMs:5000 Probes:[{Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33} {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33}] PrometheusAddr:0.0.0.0 PrometheusPort:10054 PrometheusPath:/metrics PrometheusNamespace:kubedns})
I0621 08:41:57.465029 1 dnsprobe.go:75] Starting dnsProbe {Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33}
I0621 08:41:57.468630 1 dnsprobe.go:75] Starting dnsProbe {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33}
W0621 08:50:46.832282 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:37437->127.0.0.1:53: i/o timeout
W0621 08:50:55.772310 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:57714->127.0.0.1:53: i/o timeout
W0621 08:51:02.779876 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:38592->127.0.0.1:53: i/o timeout
W0621 08:51:09.795385 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:39941->127.0.0.1:53: i/o timeout
W0621 08:51:16.798735 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:41457->127.0.0.1:53: i/o timeout
W0621 08:51:23.802617 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:45709->127.0.0.1:53: i/o timeout
W0621 08:51:30.822081 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:37072->127.0.0.1:53: i/o timeout
W0621 08:51:37.826914 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:49924->127.0.0.1:53: i/o timeout
W0621 08:51:51.093275 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:51194->127.0.0.1:53: i/o timeout
W0621 08:51:58.203965 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:35781->127.0.0.1:53: i/o timeout
W0621 08:52:06.423002 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:42763->127.0.0.1:53: i/o timeout
W0621 08:52:16.455821 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:36143->127.0.0.1:53: i/o timeout
W0621 08:52:23.496199 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:44195->127.0.0.1:53: i/o timeout
W0621 08:52:30.500081 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:48733->127.0.0.1:53: i/o timeout
W0621 08:52:37.519339 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:39179->127.0.0.1:53: i/o timeout
W0621 08:52:51.695822 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:51821->127.0.0.1:53: i/o timeout
W0621 08:52:58.739133 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:46358->127.0.0.1:53: i/o timeout
W0621 08:53:06.823714 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:49103->127.0.0.1:53: i/o timeout
W0621 08:53:16.866975 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:52782->127.0.0.1:53: i/o timeout
W0621 08:53:23.869540 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:52495->127.0.0.1:53: i/o timeout
W0621 08:53:30.882626 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:56134->127.0.0.1:53: i/o timeout
W0621 08:53:37.886811 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:55489->127.0.0.1:53: i/o timeout
W0621 08:53:46.023614 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:58523->127.0.0.1:53: i/o timeout
W0621 08:53:53.034985 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:56026->127.0.0.1:53: i/o timeout
W0621 08:54:00.041734 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:42093->127.0.0.1:53: i/o timeout
W0621 08:54:07.050864 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:58731->127.0.0.1:53: i/o timeout
W0621 08:54:14.053858 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:34062->127.0.0.1:53: i/o timeout
W0621 08:54:21.076986 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:44293->127.0.0.1:53: i/o timeout
W0621 08:54:28.080808 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:42738->127.0.0.1:53: i/o timeout
W0621 08:54:41.423864 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:58715->127.0.0.1:53: i/o timeout
</code></pre>
<p>It seems the DNS gets spammed somehow, but I am at a loss as to how I should proceed.</p>
| <p>Digging into Javier's answer, I found the solution to the problem here:
<a href="https://github.com/kubernetes/kubeadm/issues/787" rel="nofollow noreferrer">https://github.com/kubernetes/kubeadm/issues/787</a></p>
<p>These were the commands needed to work around the problem:</p>
<pre><code>sudo rm /etc/resolv.conf
sudo ln -s /run/systemd/resolve/resolv.conf /etc/resolv.conf
</code></pre>
|
<p>What's the best way to store a persistent file in Kubernetes? I have a cert (.pfx) and I want to be passing to the application its path. From the looks of it it can't be stored in secrets. Was thinking about a volume but the question is how do I upload the file to it? And which type of volume to choose? Or is there any other efficient way?</p>
| <p>It's unclear from your question why you came to the conclusion that it can't be stored as a Secret. This is one of the main <a href="https://kubernetes.io/docs/concepts/configuration/secret/#use-cases" rel="noreferrer">use cases</a> for Secrets.</p>
<p>Step 1. Create a Secret from your file:</p>
<pre><code>kubectl create secret generic mysecret --from-file=myfile=/tmp/my.pfx
</code></pre>
<p>Step 2. Mount the Secret volume into a Pod:</p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
name: secret-test-pod
spec:
volumes:
- name: secret-volume
secret:
secretName: mysecret
containers:
- name: ...
image: ...
volumeMounts:
- name: secret-volume
mountPath: "/etc/secret-volume"
</code></pre>
<p>Your container should see a file at <code>/etc/secret-volume/myfile</code></p>
|
<p>My kubernetes version is 1.10.4.</p>
<p>I am trying to create a ConfigMap for java keystore files:</p>
<pre><code>kubectl create configmap key-config --from-file=server-keystore=/home/ubuntu/ssl/server.keystore.jks --from-file=server-truststore=/home/ubuntu/ssl/server.truststore.jks --from-file=client--truststore=/home/ubuntu/ssl/client.truststore.jks --append-hash=false
</code></pre>
<p>It says <code>configmap "key-config" created</code>.</p>
<p>But when I describe the configmap I am getting null value:</p>
<pre><code>$ kubectl describe configmaps key-config
Name: key-config
Namespace: prod-es
Labels: <none>
Annotations: <none>
Data
====
Events: <none>
</code></pre>
<p>I know my Kubernetes version supports binary data in ConfigMaps and Secrets, but I am not sure what is wrong with my approach.</p>
<p>Any input on this is highly appreciated.</p>
| <p><code>kubectl describe</code> does not show binary data in ConfigMaps at the moment (kubectl version v1.10.4); also the <code>DATA</code> column of the <code>kubectl get configmap</code> output does not include the binary elements:</p>
<pre><code>$ kubectl get cm
NAME DATA AGE
key-config 0 1m
</code></pre>
<p>But the data is there, it's just a poor UI experience at the moment. You can verify that with:</p>
<pre><code>kubectl get cm key-config -o json
</code></pre>
<p>Or you can use this friendly command to check that the ConfigMap can be mounted and the projected contents matches your original files:</p>
<p><code>kubectl run cm-test --image=busybox --rm --attach --restart=Never --overrides='{"spec":{"volumes":[{"name":"cm", "configMap":{"name":"key-config"}}], "containers":[{"name":"cm-test", "image":"busybox", "command":["sh","-c","md5sum /cm/*"], "volumeMounts":[{"name":"cm", "mountPath":"/cm"}]}]}}'</code></p>
|
<p>I understood Ingress can be used when we want to expose multiple service/routes with a single Load Balancer / public IP.</p>
<p>Now I want to expose my Nginx server to public. I have two choices</p>
<ol>
<li>Set <code>service type as LoadBalancer</code> voila I got public IP</li>
<li>Use <a href="https://github.com/kubernetes/ingress-nginx" rel="noreferrer">Nginx Ingress Controller</a></li>
</ol>
<p>Now I can get my job done with Option 1 when or why would I choose Option 2 whats the advantage of having nginx with Ingress without Ingress ?</p>
| <p>There is a difference between <em>ingress rule (ingress)</em> and <em>ingress controller</em>. So, technically, nginx ingress controller and LoadBalancer type service are not comparable. You can compare ingress resource and LoadBalancer type service, which is below.</p>
<p><strong>Generally speaking:</strong></p>
<p><code>LoadBalancer</code> type service is a L4(TCP) load balancer. You would use it to expose single app or service to outside world. It would balance the load based on destination IP address and port.</p>
<p><code>Ingress</code> type resource would create a L7(HTTP/S) load balancer. You would use this to expose several services at the same time, as L7 LB is application aware, so it can determine where to send traffic depending on the application state.</p>
<p><strong><em>ingress</em> and <em>ingress controller</em> relation</strong>:</p>
<p><code>Ingress</code>, or <code>ingress rules</code> are the rules that <code>ingress controller</code> follows to distribute the load. Ingress controller get the packet, checks ingress rules and determines to which service to deliver the packet.</p>
<p><strong>Nginx Ingress Controller</strong></p>
<p>The Nginx ingress controller actually uses a LoadBalancer type service as the entrypoint to the cluster. It then checks the ingress rules and distributes the load. This can be very confusing: you create an ingress resource, and it creates the HTTP/S load balancer. It also gives you an external IP address (on GKE, for example), but when you try hitting that IP address directly, the connection is refused.</p>
<p><strong>Conclusions:</strong></p>
<p>You would use Loadbalancer type service if you would have a single app, say <code>myapp.com</code> that you want to be mapped to an IP address. </p>
<p>You would use ingress resource if you would have several apps, say <code>myapp1.com</code>, <code>myapp1.com/mypath</code>, <code>myapp2.com</code>, .., <code>myappn.com</code> to be mapped to one IP address. </p>
<p>As the ingress is L7, it is able to distinguish between <code>myapp1.com</code> and <code>myapp1.com/mypath</code> and route the traffic to the right service.</p>
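<p>A minimal sketch of such an ingress (the host and service names are made up) could look like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: apps-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: myapp1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp1-svc
          servicePort: 80
      - path: /mypath
        backend:
          serviceName: myapp1-path-svc
          servicePort: 80
  - host: myapp2.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp2-svc
          servicePort: 80
</code></pre>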
|
<p>In most situations, my web applications do not assume /tmp will exist between requests. However certain web application projects write to /tmp, and then read the results in the following request, which <a href="https://www.drupal.org/project/webform/issues/2980276" rel="nofollow noreferrer">can cause issues if a web application is not served from a single container or server</a>. If I am moving to Kubernetes, is it generally considered a good practice to share the /tmp directory between containers, or is it better to move toward assuming that /tmp will change from one call to another.</p>
| <p>It isn't a good practice to share your /tmp folder between containers, but you can create a persistent volume with read-write-many permission:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv0003
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Recycle
storageClassName: slow
mountOptions:
- hard
- nfsvers=4.1
nfs:
path: /tmp
server: 172.17.0.2
</code></pre>
<p>and use it. </p>
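<p>A claim that matches the volume above, together with a pod mounting it, could look roughly like this (a sketch; the storage class and size must match your PersistentVolume):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-tmp
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: slow
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: webapp
    image: nginx
    volumeMounts:
    - name: shared-tmp
      mountPath: /shared-tmp
  volumes:
  - name: shared-tmp
    persistentVolumeClaim:
      claimName: shared-tmp
</code></pre>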
<p>More information on how to use volumes you could find in <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">documentation.</a></p>
|
<p>If one has an image with a number of different executables, is it possible to have multiple cron entries with different commands that run at different times in the same kubernetes deployment.</p>
<p>e.g. For some single container image named "jolly-roger", I want to schedule the following crontab, without having to manage 4 separate kubernetes applications in different repositories.</p>
<pre><code>*/30 * * * * /pirate-bin/rehoist-the-flag
0 7,19 * * * /pirate-bin/feed-the-sharks
45 20 * * * /pirate-bin/count-the-gold
0 12 * * 1,5 /pirate-bin/make-landlubbers-walk-the-plank
</code></pre>
| <p>You can:</p>
<ul>
<li>create a single cronjob resource with exactly one crontab time (like <code>*/30 * * * *</code>) and several containers to run</li>
<li>create several cronjob resources using the same container images but different <code>command</code> and <code>args</code> for each job</li>
</ul>
<p>You can not:</p>
<ul>
<li>create one cron job resource with several crontab times</li>
<li>consequently not using multiple containers with multiple crontab times</li>
</ul>
<p>So in short, you can place all your binaries in a single container, but you cannot solve your problem by defining one resource. The way to go is to use the same image in a distinct cronjob resource per crontab line of your example.</p>
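<p>For instance, one such resource could look like this (a sketch reusing the hypothetical image and one crontab line from the question):</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: rehoist-the-flag
spec:
  schedule: "*/30 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: jolly-roger
            image: jolly-roger:latest
            command: ["/pirate-bin/rehoist-the-flag"]
          restartPolicy: OnFailure
</code></pre>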
|
<p><a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="noreferrer">https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/</a> The docs are not very clear as to what exactly the values represent.</p>
<blockquote>
<p>the system will try to avoid placing a pod that does not tolerate the
taint on the node, but it is not required</p>
</blockquote>
<p>What does 'try' imply? If I said a function will try sort a a list of numbers - it's not very clear...</p>
| <p>Although there is a slight difference, I like more Google explanation about what <strong>Node Taints</strong> are, rather then Kubernetes:</p>
<p>A node taint lets you mark a node so that the scheduler avoids or prevents using it for certain Pods. A complementary feature, tolerations, lets you designate Pods that can be used on "tainted" nodes.</p>
<p>Node taints are key-value pairs associated with an effect. Here are the available effects:</p>
<p><code>NoSchedule</code>: Pods that do not tolerate this taint are not scheduled on the node.</p>
<p><code>PreferNoSchedule</code>: Kubernetes avoids scheduling Pods that do not tolerate this taint onto the node. This one basically means: avoid scheduling there, if possible.</p>
<p><code>NoExecute</code>: Pod is evicted from the node if it is already running on the node, and is not scheduled onto the node if it is not yet running on the node.</p>
<p>Note that the difference between <code>NoSchedule</code> and <code>NoExecute</code> is that with the first one it won't schedule a pod, but if it is already running, it won't kill it. With the last one, it will kill the pod and re-schedule on another node.</p>
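<p>For example, tainting and un-tainting a node from the command line looks like this (the key, value and node name are arbitrary):</p>
<pre><code># Pods without a matching toleration will not be scheduled on node1
kubectl taint nodes node1 dedicated=gpu:NoSchedule

# Remove the taint again (note the trailing minus)
kubectl taint nodes node1 dedicated=gpu:NoSchedule-
</code></pre>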
|
<p>followed <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="noreferrer">steps to create single master cluster</a>, i was able to successfully init the master, but when apply i got <strong>forbidden</strong> error, any one experienced the same? thanks!</p>
<p>i did the following</p>
<pre><code>1. disable selinux in /etc/selinux/config, and reboot
2. comment out KUBELET_NETWORK_ARGS in
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
3. export no_proxy=$no_proxy,<master-ip>
4. export KUBECONFIG=/etc/kubernetes/kubelet.conf in .bash_profile
</code></pre>
<p>after init success, when try to apply </p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
</code></pre>
<p>i got below error messages</p>
<pre><code>Error from server (Forbidden): error when retrieving current configuration of:
&{0xc42048ab40 0xc421a83730
flannel https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml 0xc42109cc20 false}
from server for: "https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml":
clusterroles.rbac.authorization.k8s.io "flannel" is forbidden:
User "system:node:<master-server-name>"
cannot get clusterroles.rbac.authorization.k8s.io at the cluster scope
</code></pre>
| <p>Since your cluster is not yet fully functional, it would be easier to tear it down and recreate it from scratch:</p>
<h3>Tear down everything:</h3>
<pre><code>$> sudo su
#> kubeadm reset
#> rm -rf $HOME/.kube /etc/kubernetes
</code></pre>
<h3>Prepare your host (just in case you haven’t done it already):</h3>
<pre><code>#> swapoff -a
## Don't forget to comment swap partition line in /etc/fstab
## I assume that you have these packages already installed: docker, kubeadm, kubectl
## tune sysctl to pass bridged IPv4 traffic to iptables’ chains.
## This is a requirement for some CNI plugins to work, for more information please see
## https://kubernetes.io/docs/concepts/cluster-administration/network-plugins/#network-plugin-requirements
#> cat <<EOF >>/etc/ufw/sysctl.conf
net/bridge/bridge-nf-call-ip6tables = 1
net/bridge/bridge-nf-call-iptables = 1
net/bridge/bridge-nf-call-arptables = 1
EOF
#> sudo sysctl --system
</code></pre>
<h3>Initialize the cluster:</h3>
<pre><code>## Do not try to set less than /16 subnet for `--pod-network-cidr`
#> kubeadm init --pod-network-cidr=10.244.0.0/16
## Prepare the kubectl config
#> mkdir -p $HOME/.kube
#> cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
#> chown $(id -u):$(id -g) $HOME/.kube/config
</code></pre>
<h3>Install Flannel</h3>
<pre><code>#> kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml
</code></pre>
<h3>Allow pod scheduled on the master node.</h3>
<p>(Just in case you do not have any worker nodes.)</p>
<pre><code>#> kubectl taint nodes --all node-role.kubernetes.io/master-
</code></pre>
<h3>At this point you should have ready-to-use Kubernetes cluster:</h3>
<pre><code>#> kubectl get nodes
#> kubectl get pods --all-namespaces
</code></pre>
|
<p>We are looking into migrating all our company services to Kubernetes. Since it seems to be pretty straight forward to setup I was looking into Azure Kubernetes Service.</p>
<p>Out of curiosity and with certain privacy issues in mind, I was wondering if it is possible to add self-hosted nodes to the Azure Kubernetes cluster and if so, how to do it.</p>
| <p>No. All nodes in the cluster must run in Azure and are managed by the AKS service, though they reside in your subscription.</p>
|
<p>I have created the docker registry secret using below command</p>
<pre><code>kubectl create secret docker-registry regcred --docker-server=https://dockerreleaselocal.jfrog.io --docker-username=user --docker-password='password' --docker-email=email
</code></pre>
<p>Below is the Kubernetes yaml code for pulling the image</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: artifactory-terraform-job
spec:
template:
spec:
containers:
- image: dockerreleaselocal.jfrog.io/terraform:v0.11.7.001
name: terraform-container
imagePullSecrets:
- name: regcred
</code></pre>
<h2>Error</h2>
<pre><code>Failed to pull image "dockerreleaselocal.jfrog.io/terraform:v0.11.7.001": rpc error: code = Unknown desc = Error response from daemon: missing signature key
</code></pre>
<h2>Kubernetes version</h2>
<pre><code>Server Version: version.Info{Major:"1", Minor:"9+", GitVersion:"v1.9.7-2+231cc32d0a1119", GitCommit:"231cc32d0a111940d45936b3206548c610869df2", GitTreeState:"clean", BuildDate:"2018-04-25T08:09:04Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<h2>Output docker pull locally</h2>
<pre><code>docker pull dockerreleaselocal.jfrog.io/terraform:v0.11.7.001
Error response from daemon: missing signature key
</code></pre>
<p>Do i need to do additional configuration to fix the issue?</p>
| <p>The above issue was related to the user's ACL. The user needs the Annotate permission for the Docker registry; with that permission, the required properties are generated automatically from the manifest file.</p>
|
<p>We are noticing restarts of our Kubernetes pods (Google Container Engine) whenever JVM's garbage collection duration is a little long.</p>
<p>Specifically speaking, anytime, it seems to cross ~20 seconds, it causes a restart of the pod.</p>
<p>1) The JVM is not out of heap memory. It's still less than 20% of allocated heap. It's just that once in a long while, a particular GC cycle takes long (could be due to IO on that pod's disk at that time)</p>
<p>2) I tried to adjust the liveness check parameters to periodSeconds=12, failureThreshold=5, so that the liveness checking process waits for at least 12 * 5 = 60 seconds before deciding that a pod has become unresponsive and replace it with a new one, but still it's restarting the pod as soon as the GC-pause crosses 20-22 seconds.</p>
<p>Could anyone comment on why this might be happening and what else can I adjust to not restart the pod on this GC-pause? It's a pity, because there is lot of heap capacity still available, and memory is not really the reason it should be replaced.</p>
| <p>Found it. </p>
<p>I had to adjust timeoutSeconds from default of 1 second to 5 seconds in addition to setting periodSeconds to 12, to make it wait for ~60 seconds before flagging a pod as unresponsive.</p>
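<p>A sketch of the probe settings described above (the HTTP path and port are placeholders) would be:</p>
<pre><code>livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  periodSeconds: 12
  timeoutSeconds: 5
  failureThreshold: 5
</code></pre>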
|
<p>I have been reading about <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-readiness-probes" rel="nofollow noreferrer">liveness and readiness probes in kubernetes</a> and I would like to use them to check and see if a cluster has come alive.</p>
<p>The question is how to configure a readiness probe for an entire statefulset, and not an individual pod/container.</p>
<p>A simple HTTP check can be used to determine readiness, but the issue I'm running into is that the readinessCheck seems to apply to the container/pod and not to the set itself.</p>
<p>For the software I'm using, the HTTP endpoint doesn't come up until the cluster forms; meaning that each individual pod would fail the readinessCheck until all three are up and find one another.</p>
<p>The behavior I'm seeing in Kubernetes right now is that the first of 3 replicas is created, and Kubernetes does not even attempt to create replicas 2 and 3 until the first passes the readinessCheck, which never happens, because all three have to be up for it to have a chance to pass it.</p>
| <p>You need to change <code>.spec.podManagementPolicy</code> for a <code>StatefulSet</code> from <code>OrderedReady</code> to <code>Parallel</code> policy. </p>
<p>This way K8S will start all your pods in parallel and won't wait for probes.</p>
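<p>A minimal sketch of where that field goes (the names are illustrative):</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mycluster
spec:
  podManagementPolicy: Parallel
  serviceName: mycluster
  replicas: 3
  selector:
    matchLabels:
      app: mycluster
  template:
    metadata:
      labels:
        app: mycluster
    spec:
      containers:
      - name: node
        image: myimage
</code></pre>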
<p>From <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#statefulsetspec-v1-apps" rel="nofollow noreferrer">documentation</a></p>
<blockquote>
<p>podManagementPolicy controls how pods are created during initial scale
up, when replacing pods on nodes, or when scaling down. The default
policy is OrderedReady, where pods are created in increasing order
(pod-0, then pod-1, etc) and the controller will wait until each pod
is ready before continuing. When scaling down, the pods are removed in
the opposite order. The alternative policy is Parallel which will
create pods in parallel to match the desired scale without waiting,
and on scale down will delete all pods at once.</p>
</blockquote>
|
<p>I am deploying a spring boot application and is considering various options for externalize configuration. I came to know below options.</p>
<pre><code>1) ConfigMap
2) properties/yaml file in Git and a process in container to watch for changes and restart the application with new properties if there are any changes.
</code></pre>
<p>This is not only relevant to the application, but can also apply to other components such as proxies (Apache reverse proxy, nginx) and load balancers.</p>
<p>Thanks</p>
| <p>Both are good options to handle configuration.</p>
<p>Git approach has the advantage of versioned configuration but you'll need an extra container to be coded and maintained.</p>
<p>For me configmap is the best option. As it is a Kubernetes native object you don't need a git repo nor a git container, it is just stored and preserved in etcd, much more easy to maintain. </p>
<p>Moreover, there are some open source projects that can help you with using configmaps: I use <a href="https://github.com/pieterlange/kube-backup" rel="noreferrer">kube-backup</a> to have a versioned copy of every config, and <a href="https://github.com/fabric8io/configmapcontroller" rel="noreferrer">configmapcontroller</a> to auto-rollout deployments when a configmap changes.</p>
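<p>As an illustration, here is a minimal sketch of a ConfigMap holding an <code>application.properties</code> file and a Deployment mounting it (names, image and properties are placeholders); you would then point Spring Boot at that path, e.g. via <code>spring.config.location</code>:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config                # placeholder name
data:
  application.properties: |
    server.port=8080
    spring.profiles.active=prod
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                    # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest      # placeholder image
        volumeMounts:
        - name: config
          mountPath: /config      # makes /config/application.properties available in the container
      volumes:
      - name: config
        configMap:
          name: app-config
</code></pre>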
|
<p>I'm looking for a way to tell (from within a script) when a Kubernetes Job has completed. I want to then get the logs out of the containers and perform cleanup. </p>
<p>What would be a good way to do this? Would the best way be to run <code>kubectl describe job <job_name></code> and grep for <code>1 Succeeded</code> or something of the sort?</p>
| <p>Since version 1.11, you can do:</p>
<pre><code>kubectl wait --for=condition=complete job/myjob
</code></pre>
<p>and you can also set a timeout:</p>
<pre><code>kubectl wait --for=condition=complete --timeout=30s job/myjob
</code></pre>
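<p>Putting it together in a small script that waits, collects the logs and cleans up afterwards (the job name and timeout are placeholders):</p>
<pre><code># wait until the job reports the Complete condition (or the timeout expires)
kubectl wait --for=condition=complete --timeout=600s job/myjob
# grab the logs of the job's pod(s) before deleting anything
kubectl logs job/myjob > myjob.log
# clean up the job object
kubectl delete job/myjob
</code></pre>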
|
<p>The Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">documentation</a> mentions that a <code>CronJob</code> supports the use case of:</p>
<blockquote>
<p>Once at a specified point in time</p>
</blockquote>
<p>But, I don't see any examples of how this would be possible. Specifically, I'm looking to kick off a job to run once in <em>N</em> hours.</p>
| <p>According to <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/" rel="nofollow noreferrer">documentation</a>, CronJob uses the common <a href="https://en.wikipedia.org/wiki/Cron" rel="nofollow noreferrer">Cron</a> format of schedule:</p>
<p>Here are some examples:</p>
<pre><code> schedule: "1 2-14 * * 0-1,5-6" (first minute of every hour from 2am to 2pm UTC on Sun,Mon,Fri,Sat)
schedule: "*/1 * * * *" (every minute)
</code></pre>
<p>CronJobs also have some limitations:</p>
<blockquote>
<p>A cron job creates a job object about once per execution time of its
schedule. We say “about” because there are certain circumstances where
two jobs might be created, or no job might be created. We attempt to
make these rare, but do not completely prevent them. Therefore, <strong>jobs
should be idempotent</strong>.</p>
<p>If <code>startingDeadlineSeconds</code> is set to a large value or left unset (the
default) and if <code>concurrencyPolicy</code> is set to <code>Allow</code>, the jobs will
always run at least once.</p>
<p>Jobs may fail to run if the CronJob controller is not running or
broken for a span of time from before the start time of the CronJob to
start time plus <code>startingDeadlineSeconds</code>, or if the span covers
multiple start times and <code>concurrencyPolicy</code> does not allow concurrency.
For example, suppose a cron job is set to start at exactly <code>08:30:00</code>
and its <code>startingDeadlineSeconds</code> is set to <code>10</code>, if the CronJob
controller happens to be down from <code>08:29:00</code> to <code>08:42:00</code>, the job will
not start. Set a longer <code>startingDeadlineSeconds</code> if starting later is
better than not starting at all.</p>
<p>The Cronjob is only responsible for creating Jobs that match its
schedule, and the Job in turn is responsible for the management of the
Pods it represents.</p>
</blockquote>
<p>The other important thing is that Kubernetes uses <strong>UTC</strong> exclusively. Make sure you take that into account when you’re creating your schedule.</p>
<p>To run a job just once, you can use <code>kubectl create -f job.yaml</code> started by <strong><a href="https://www.computerhope.com/unix/uat.htm" rel="nofollow noreferrer">at</a></strong> command on the admin machine or on the master node.</p>
<pre><code>echo "kubectl create -f job.yaml" | at midnight
</code></pre>
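<p>For completeness, a minimal sketch of what such a <code>job.yaml</code> could contain (name, image and command are placeholders):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: one-off-job               # placeholder name
spec:
  backoffLimit: 2                 # retry a failed pod at most twice
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: task
        image: busybox            # placeholder image
        command: ["sh", "-c", "echo running once && sleep 10"]
</code></pre>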
|
<p>I'd like to know whether I'm able to propagate environment variables from a file.</p>
<p>Guess I have a file like this:</p>
<pre><code>ENV_VARIABLE_USERNAME=username
ENV_VARIABLE_PASSWORD=password
</code></pre>
<p>I'd like when I pod kicks off, this content file is propagated as environment variables.</p>
<p>Is it possible?</p>
| <p>Generally speaking, for environment variables, you would do it through a configMap. But as this is a username and password (sensitive information), you can do it through a secret.</p>
<p>For example, given the Pod (redis, for this example) and the Secret below:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: credentials
type: Opaque
data:
  username: dXNlcm5hbWU=   # "username" encoded in base64
  password: cGFzc3dvcmQ=   # "password" encoded in base64
</code></pre>
<p>Note: you need to have all data in a secret encoded. This one is encoded with base64:</p>
<pre><code>echo -n username | base64
</code></pre>
<p>And the pod:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: redis-pod
spec:
containers:
- name: redis
image: redis
env:
- name: SECRET_USERNAME
valueFrom:
secretKeyRef:
name: credentials
key: username
- name: SECRET_PASSWORD
valueFrom:
secretKeyRef:
name: credentials
key: password
restartPolicy: Never
</code></pre>
<p>Now if you would run <code>env | grep SECRET</code> in the pod, I would get the variable correctly initialized:</p>
<pre><code>nerus:~/workspace (master) $ kubectl exec redis-pod env | grep SECRET
SECRET_USERNAME=username
SECRET_PASSWORD=password
</code></pre>
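<p>Since the values already live in a <code>KEY=VALUE</code> file, another option is to create the Secret directly from that file and expose all of its keys at once with <code>envFrom</code> (the file and secret names below are placeholders):</p>
<pre><code>kubectl create secret generic credentials --from-env-file=./env-file.txt
</code></pre>
<p>and in the container spec:</p>
<pre><code>    envFrom:
    - secretRef:
        name: credentials
</code></pre>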
|
<p>I have a Kubernetes v1.9.3 (no OpenShift) cluster I'd like to manage with ManageIQ (gaprindashvili-3 running as a Docker container).</p>
<p>I prepared the k8s cluster to interact with ManageIQ following <a href="http://manageiq.org/docs/guides/providers/kubernetes" rel="nofollow noreferrer">these instructions</a>. Notice that I performed the steps listed in the last section only (<strong>Prepare cluster for use with ManageIQ</strong>), as the previous ones were for setting up a k8s cluster and I already had a running one.</p>
<p>I successfully added the k8s container provider to ManageIQ, but the dashboard reports nothing: 0 nodes, 0 pods, 0 services, etc..., while I do have nodes, services and running pods on the cluster. I looked at the content of <code>/var/log/evm.log</code> of ManageIQ and found this error:</p>
<pre><code>[----] E, [2018-06-21T10:06:40.397410 #13333:6bc9e80] ERROR – : [KubeException]: events is forbidden: User “system:serviceaccount:management-infra:management-admin” cannot list events at the cluster scope: clusterrole.rbac.authorization.k8s.io “cluster-reader” not found Method:[block in method_missing]
</code></pre>
<p>So the ClusterRole <code>cluster-reader</code> was not defined in the cluster. I double checked with <code>kubectl get clusterrole cluster-reader</code> and it confirmed that <code>cluster-reader</code> was missing. </p>
<p>As a solution, I tried to create <code>cluster-reader</code> manually. I could not find any reference of it in the k8s doc, while it is mentioned in the OpenShift docs. So I looked at how <code>cluster-reader</code> was defined in OpenShift v3.9. Its definition changes across different OpenShift versions, I picked 3.9 as it is based on k8s v1.9 which is the one I'm using. So here's what I found in the OpenShift 3.9 doc:</p>
<pre><code>Name: cluster-reader
Labels: <none>
Annotations: authorization.openshift.io/system-only=true
rbac.authorization.kubernetes.io/autoupdate=true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
[*] [] [get]
apiservices.apiregistration.k8s.io [] [] [get list watch]
apiservices.apiregistration.k8s.io/status [] [] [get list watch]
appliedclusterresourcequotas [] [] [get list watch]
</code></pre>
<p>I wrote the following yaml definition to create an equivalent ClusterRole in my cluster:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: cluster-reader
rules:
- apiGroups: ["apiregistration"]
resources: ["apiservices.apiregistration.k8s.io", "apiservices.apiregistration.k8s.io/status"]
verbs: ["get", "list", "watch"]
- nonResourceURLs: ["*"]
verbs: ["get"]
</code></pre>
<p>I didn't include <code>appliedclusterresourcequotas</code> among the monitored resources because it's my understanding that is an OpenShift-only resource (but I may be mistaken).</p>
<p>I deleted the old k8s container provider on ManageIQ and created a new one after having created <code>cluster-reader</code>, but nothing changed, the dashboard still displays nothing (0 nodes, 0 pods, etc...). I looked at the content of <code>/var/log/evm.log</code> in ManageIQ and this time these errors were reported:</p>
<pre><code>[----] E, [2018-06-22T11:15:39.041903 #2942:7e5e1e0] ERROR -- : MIQ(ManageIQ::Providers::Kubernetes::ContainerManager::EventCatcher::Runner#start_event_monitor) EMS [kubernetes-01] as [] Event Monitor Thread aborted because [events is forbidden: User "system:serviceaccount:management-infra:management-admin" cannot list events at the cluster scope]
[----] E, [2018-06-22T11:15:39.042455 #2942:7e5e1e0] ERROR -- : [KubeException]: events is forbidden: User "system:serviceaccount:management-infra:management-admin" cannot list events at the cluster scope Method:[block in method_missing]
</code></pre>
<p>So what am I doing wrong? How can I fix this problem?
If it can be of any use, <a href="https://github.com/matte21/cloud-infrastructure-project-work/blob/master/k8s-cluster-miq-setup/k8-miq-objects.yaml" rel="nofollow noreferrer">here</a> you can find the whole .yaml file I'm using to set up the k8s cluster to interact with ManageIQ (all the required namespaces, service accounts, cluster role bindings are present as well).</p>
| <p>For the <code>ClusterRole</code> to take effect it must be bound to the group <code>management-infra</code> or user <code>management-admin</code>.</p>
<p>Example of creating group binding:</p>
<pre><code>kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: read-cluster-state
subjects:
- kind: Group
name: management-infra
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: cluster-reader
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>After applying this file changes will take place immediately. No need to restart cluster.</p>
<p>See more information <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding" rel="nofollow noreferrer">here</a>.</p>
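<p>Since the error message refers to the service account <code>management-admin</code> in the <code>management-infra</code> namespace, you may instead want to bind the role to that ServiceAccount directly. A sketch, assuming those names match your setup:</p>
<pre><code>kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cluster-reader-management-admin
subjects:
- kind: ServiceAccount
  name: management-admin
  namespace: management-infra
roleRef:
  kind: ClusterRole
  name: cluster-reader
  apiGroup: rbac.authorization.k8s.io
</code></pre>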
|
<p>Problem:</p>
<pre><code>/ # ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
ping: sendto: Network unreachable
</code></pre>
<p>Example container <code>ifconfig</code>: </p>
<pre><code>eth0 Link encap:Ethernet HWaddr F2:3D:87:30:39:B8
inet addr:10.2.8.64 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::f03d:87ff:fe30:39b8%32750/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1450 Metric:1
RX packets:22 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:4088 (3.9 KiB) TX bytes:648 (648.0 B)
eth1 Link encap:Ethernet HWaddr 6E:1C:69:85:21:96
inet addr:172.16.28.63 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::6c1c:69ff:fe85:2196%32750/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:18 errors:0 dropped:0 overruns:0 frame:0
TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1418 (1.3 KiB) TX bytes:648 (648.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1%32750/128 Scope:Host
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
</code></pre>
<p>Routing inside container:</p>
<pre><code>/ # ip route show
10.2.0.0/16 via 10.2.8.1 dev eth0
10.2.8.0/24 dev eth0 src 10.2.8.73
172.16.28.0/24 via 172.16.28.1 dev eth1 src 172.16.28.72
172.16.28.1 dev eth1 src 172.16.28.72
</code></pre>
<p>Host <code>iptables</code>: <a href="http://pastebin.com/raw/UcLQQa4J" rel="nofollow">http://pastebin.com/raw/UcLQQa4J</a></p>
<p>Host <code>ifconfig</code>: <a href="http://pastebin.com/raw/uxsM1bx6" rel="nofollow">http://pastebin.com/raw/uxsM1bx6</a></p>
<p>logs by <code>flannel</code>:</p>
<pre><code>main.go:275] Installing signal handlers
main.go:188] Using 104.238.xxx.xxx as external interface
main.go:189] Using 104.238.xxx.xxx as external endpoint
etcd.go:129] Found lease (10.2.8.0/24) for current IP (104.238.xxx.xxx), reusing
etcd.go:84] Subnet lease acquired: 10.2.8.0/24
ipmasq.go:50] Adding iptables rule: FLANNEL -d 10.2.0.0/16 -j ACCEPT
ipmasq.go:50] Adding iptables rule: FLANNEL ! -d 224.0.0.0/4 -j MASQUERADE
ipmasq.go:50] Adding iptables rule: POSTROUTING -s 10.2.0.0/16 -j FLANNEL
ipmasq.go:50] Adding iptables rule: POSTROUTING ! -s 10.2.0.0/16 -d 10.2.0.0/16 -j MASQUERADE
vxlan.go:153] Watching for L3 misses
vxlan.go:159] Watching for new subnet leases
vxlan.go:273] Handling initial subnet events
device.go:159] calling GetL2List() dev.link.Index: 3
vxlan.go:280] fdb already populated with: 104.238.xxx.xxx 82:83:be:17:3e:d6
vxlan.go:280] fdb already populated with: 104.238.xxx.xxx 82:dd:90:b2:42:87
vxlan.go:280] fdb already populated with: 104.238.xxx.xxx de:e8:be:28:cf:7a
systemd[1]: Started Network fabric for containers.
</code></pre>
| <p>It is possible if you set a config map with <code>upstreamNameservers</code>.</p>
<p>Example:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: kube-dns
namespace: kube-system
data:
upstreamNameservers: |
["8.8.8.8", "8.8.8.4"]
</code></pre>
<p>And in your Deployment definition add:</p>
<pre><code>dnsPolicy: "ClusterFirst"
</code></pre>
<p>More info here: </p>
<p><a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers</a></p>
|
<p><strong>Summary</strong></p>
<p>I have a flask application deployed to Kubernetes with python 2.7.12, Flask 0.12.2 and the requests library. I'm getting an SSLError while using requests.session to send a POST request inside the container. When using requests sessions to connect to an https URL, requests throws an SSLError.</p>
<p><strong>Some background</strong></p>
<ul>
<li>I have not added any certificates</li>
<li>The project works when I run the docker image locally, but after deployment to kubernetes, from inside the container the POST request is not being sent to the URL;
setting verify=False does not work either</li>
</ul>
<p><strong>System Info</strong> - What I am using:
Python 2.7.12, Flask==0.12.2, Kubernetes, python-requests-2.18.4</p>
<p><strong>Expected Result</strong></p>
<p>Get HTTP Response code 200 after sending a POST request</p>
<p><strong>Error Logs</strong></p>
<pre><code>r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/site-packages/requests/adapters.py", line 511, in send
raise SSLError(e, request=request)
SSLError: HTTPSConnectionPool(host='dev.domain.nl', port=443): Max retries exceeded with url: /ingestion?LrnDevEui=0059AC0000152A03&LrnFPort=1&LrnInfos=TWA_100006356.873.AS-1-135680630&AS_ID=testserver&Time=2018-06-22T11%3A41%3A08.163%2B02%3A00&Token=1765b08354dfdec (Caused by SSLError(SSLEOFError(8, u'EOF occurred in violation of protocol (_ssl.c:661)'),))
</code></pre>
<p>/usr/local/lib/python2.7/site-packages/urllib3/connectionpool.py:858: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: <a href="https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings" rel="nofollow noreferrer">https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings</a>
InsecureRequestWarning)</p>
<p><strong>Reproduction Steps</strong></p>
<pre><code>import requests
from flask import Flask, request, jsonify
from requests import Request, Session
sess = requests.Session()
adapter = requests.adapters.HTTPAdapter(max_retries = 200)
sess.mount('http://', adapter)
sess.mount('https://', adapter)
sess.cert ='/usr/local/lib/python2.7/site-packages/certifi/cacert.pem'
def test_post():
url = 'https://dev.domain.nl/ingestion/?'
header = {'Content-Type': 'application/json', 'Accept': 'application/json'}
response = sess.post(url, headers= header, params= somepara, data= json.dumps(data),verify=True)
print response.status_code
return response.status_code
def main():
threading.Timer(10.0, main).start()
test_post()
if __name__ == '__main__':
main()
app.run(host="0.0.0.0", debug=True, port=5001, threaded=True)
</code></pre>
<p>Docker File</p>
<pre><code>FROM python:2.7-alpine
COPY ./web /web
WORKDIR /web
RUN pip install -r requirements.txt
ENV FLASK_APP app.py
EXPOSE 5001
EXPOSE 443
CMD ["python", "app.py"]
</code></pre>
| <p>The problem may be in the Alpine Docker image, which lacks CA certificates. On your laptop the code works as it uses the CA certs from your local workstation. I would think that running the Docker image locally would fail too - so the problem is not k8s.</p>
<p>Try to add the following line to the Dockerfile:</p>
<pre><code>RUN apk update && apk add ca-certificates && rm -rf /var/cache/apk/*
</code></pre>
<p>It will install CA certs inside the container. </p>
|
<p>We have multiple Namespaces in our cluster. Admin will have access to all the Namespaces by means of ClusterRole. But, user will be given access to the corresponding namespaces.</p>
<p>Say, user A is given access to namespace B,C & D.</p>
<p>So user A deploys dashboard in Namespace B with service account and RoleBinding. User will be able to see all the applications in Namespaces B. But, how can we give access to this Dashboard so that one dashboard will be listing the 3 Namespaces to see the corresponding applications?</p>
| <p>In the current version of <a href="https://kubernetes.io" rel="nofollow noreferrer">Kubernetes</a>, it is possible to let different users manage different namespaces.
You need to understand how <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">RBAC</a> works and how to use it to manage multiple <a href="https://github.com/kubernetes/dashboard" rel="nofollow noreferrer">Dashboards</a>.</p>
<p>The concept in short: you create roles that grant the required permissions (cluster-wide or per namespace) and then bind them to users with role bindings.
A role binding can grant read access to resources in one particular namespace, or across all namespaces.</p>
<p>For example, here is how to bind user "jane" to the "default" namespace and user "dave" to the "development" namespace.
You can deploy a Dashboard in each of those namespaces to give the individual users access to them.</p>
<pre><code># This role binding allows "jane" to read pods in the "default" namespace.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: read-pods
namespace: default
subjects:
- kind: User
name: jane # Name is case sensitive
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role #this must be Role or ClusterRole
name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to
apiGroup: rbac.authorization.k8s.io
# This role binding allows "dave" to read secrets in the "development" namespace.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: read-secrets
namespace: development # This only grants permissions within the "development" namespace.
subjects:
- kind: User
name: dave # Name is case sensitive
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: secret-reader
apiGroup: rbac.authorization.k8s.io
</code></pre>
|
<p>In Kubernetes, is it possible to use a Configmap for the value of an annotation? The reason I want to do this is to reuse an IP whitelist across multiple ingresses. Alternatively, is there another way to approach this problem?</p>
| <p>Unfortunately, Kubernetes has no built-in feature for this. But as <a href="https://stackoverflow.com/users/4501268/iomv">iomv</a> wrote, you could try using <a href="https://docs.helm.sh/using_helm/#quickstart-guide" rel="nofollow noreferrer">helm</a>.</p>
<p>Helm allows you to use variables in your charts, for example:</p>
<pre><code>metadata:
{{- if .Values.controller.service.annotations }}
annotations:
{{ toYaml .Values.controller.service.annotations | indent 4 }}
{{- end }}
labels:
{{- if .Values.controller.service.labels }}
{{ toYaml .Values.controller.service.labels | indent 4 }}
{{- end }}
app: {{ template "nginx-ingress.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
component: "{{ .Values.controller.name }}"
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "nginx-ingress.controller.fullname" . }}
</code></pre>
<p>This part of code is from the <a href="https://github.com/kubernetes/charts/blob/master/stable/nginx-ingress/templates/controller-service.yaml" rel="nofollow noreferrer">nginx-ingress</a> chart.
As you see, you can fetch this chart and update the values as you need.</p>
|
<p>I have a problem about the images returned via the apps running behind the nginx ingress controller. They always returns 200 instead of an expected 304 once visited.</p>
<p>Digging around, i've found out about the ability to add specific blocks within the servers; so added the following under metadata > annotations :</p>
<pre><code> ingress.kubernetes.io/server-snippets: |
location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
expires 30d;
add_header Pragma public;
add_header Cache-Control "public";
}
</code></pre>
<p>The problems persists, so still getting a permanent 200; "ssh'ing" into the nginx pod and doing a cat on the nginx.conf created, I don't see this block appearing anywhere</p>
<p>So am kind of lost about how to improve it. I'm using kube-lego, but did not found out if it could have an impact here</p>
| <p>In case somebody else stumbles upon this (probably me in the future), the annotation is <code>nginx.ingress.kubernetes.io/server-snippet</code>.</p>
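<p>For instance, the caching block from the question would then look like this (the location regex and expiry are simply carried over from the question; the exact annotation prefix can differ depending on how the controller is configured):</p>
<pre><code>metadata:
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
        expires 30d;
        add_header Pragma public;
        add_header Cache-Control "public";
      }
</code></pre>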
|
<p>I need to access a service from my cluster on GKE outside of it. This service restricts access IP allowing just one IP. So, I have to set a NAT or something like that, but I don't really sure that is the right solution to set an external gateway/NAT on my GKE cluster. Can you help me, please?</p>
| <p>You can use <a href="https://github.com/doitintl/kubeIP" rel="nofollow noreferrer">kubeip</a> in order to assign IP addresses <a href="https://blog.doit-intl.com/kubeip-automatically-assign-external-static-ips-to-your-gke-nodes-for-easier-whitelisting-without-2068eb9c14cd" rel="nofollow noreferrer">blog post</a></p>
|
<ol>
<li>E.g. I have 2 pods named <strong>nginx-111-xxx</strong> and
<strong>nginx-111-yyy</strong>, but there is <strong>something wrong with nginx-111-xxx</strong>. I need to create a new pod to replace it, because
the requirements demand that I <strong>delete nginx-111-xxx</strong>; then a NEW pod
<strong>nginx-111-zzz</strong> appears. Can I <strong>rename nginx-111-zzz to nginx-111-xxx</strong>? I tried to use the kubectl edit command to achieve this,</li>
<li>but Kubernetes said <strong><em>copy of your changes has been stored to
"/tmp/kubectl-edit-oh43e.yaml" error: At least one of apiVersion,
kind and name was changed</em></strong>. So is there a way to change the name of a running pod?<a href="https://i.stack.imgur.com/8J2p6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8J2p6.png" alt="enter image description here"></a></li>
</ol>
| <p>It's not possible to change the name of a running pod, as the API will not accept this.</p>
<p>The random pod names come from the fact that you are using Kubernetes Deployments, which are the default and most commonly used way to run pods on Kubernetes. But there are other ways to schedule pods besides that.</p>
<p>However, it looks like you need more predictable pod names for some reason. Here are some ways to go:</p>
<ol>
<li>use StatefulSets (<a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/</a>). The names of pods created by a StatefulSet follow the pattern <strong>nginx-0</strong>, <strong>nginx-1</strong> ..</li>
<li>use pods directly. With that, you won't have the rolling update features from ReplicaSets, DaemonSets etc., but you get absolute control over naming</li>
</ol>
|
<p>How to I generate a PDF version of the Kubernetes docs at <a href="http://kubernetes.io/v1.0/" rel="noreferrer">http://kubernetes.io/v1.0/</a>? Or, how do I get a printout of its docs at all?</p>
| <p>Here is a project that tries to do that: <a href="https://github.com/dohsimpson/kubernetes-doc-pdf" rel="noreferrer">kubernetes pdf doc</a></p>
|
<p>We are attempting to make several private Kubernetes clusters. We can find limited documentation on specific settings for the private cluster, therefore we are running into issues related to the subnetwork IP ranges. </p>
<p>Say we have 3 clusters: We set the Master Address Range to 172.16.0.0/28, 172.16.0.16/28 and 172.16.0.32/28 respectively.</p>
<p>We leave Network and Subnet set to "default". We are able to create 2 clusters that way, however, upon spin-up of the 3rd cluster, we receive the error of "Google Compute Engine: Exceeded maximum supported number of secondary ranges per subnetwork: 5." We suspect that we are setting up the subnetwork IP ranges incorrectly, but we are not sure what we are doing wrong, or why there is more than 1 secondary range per subnetwork, to begin with. </p>
<p>Here is a screenshot of the configuration for one of the clusters:
<a href="https://i.stack.imgur.com/Tlp5Z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Tlp5Z.png" alt="kubernetes configuration screenshot"></a></p>
<p>We are setting these clusters up through the UI.</p>
| <p>This cluster has VPC-native (alias IP) enabled, which uses 2 secondary ranges per cluster.</p>
<p>See <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips#secondary_ranges" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips#secondary_ranges</a></p>
<p>According to </p>
<p><code>Google Compute Engine: Exceeded maximum supported number of secondary ranges per subnetwork: 5.</code></p>
<p>the max is 5. That's why the 3rd one failed to create.</p>
|
<p>This is my ingress configuration:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: public-ingress
namespace: platform
annotations:
kubernetes.io/ingress.global-static-ip-name: "app-staging-ip"
spec:
backend:
serviceName: app-central
servicePort: 8080
</code></pre>
<p>However, what I always end up seeing on the panel is that it uses an Ephemeral address.</p>
<p>This is the output of <code>gcloud compute addresses list</code></p>
<pre><code>NAME REGION ADDRESS STATUS
app-staging-ip asia-south1 35.200.149.62 RESERVED
</code></pre>
<p>I have no idea why the ingress is still picking up an ephemeral id and not the one I have reserved.</p>
| <p>Because your IP is regional. Needs to be global. Create it with:</p>
<pre><code>gcloud compute addresses create app-staging-ip --global
</code></pre>
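<p>You can then verify that the address exists in the global scope (the address name is taken from the question):</p>
<pre><code>gcloud compute addresses describe app-staging-ip --global
</code></pre>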
|
<p>I installed minikube on my mac and created deployment and a service for my nodejs app. I tested that everything is working by getting the URL of my service using the following command:</p>
<pre><code>minikube service my-nodejs-app --url
</code></pre>
<p>and then I run this URL in the browser and got results. The problem is when i tried to access the same URL from another machine inside the same network it didn't worked.</p>
<p>my <code>service.yml</code> file is:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-nodejs-app
spec:
type: NodePort
ports:
- port: 80
targetPort: 1337
protocol: TCP
name: app-server
selector:
app: my-nodejs-app
</code></pre>
<p>I tried to use port forwarding to forward my pod port to my localhost and it works only on the same machine who host the cluster and when I try to access from another machine on the same network (via the IP address of the machine where the cluster deployed) I still get page not found.</p>
| <p>You can use <code>"port forward a service"</code>. Assuming:</p>
<ul>
<li>Your local machine IP: <code>166.6.6.6</code> (which hold minikube) </li>
<li>Your minikube IP: <code>192.168.99.100</code> (check the real IP with command <code>$minikube ip</code>) </li>
<li>The nodePort of your service 'my-nodejs-app': <code>31000</code> (check the real
nodePort with command: <code>$kubectl get service</code>)</li>
</ul>
<p>In order to access your service from remote, you can forward a port (like 31000, recommend the same port with nodePort) to your service through the following command in your local machine:</p>
<blockquote>
<p>ssh -i ~/.minikube/machines/minikube/id_rsa docker@$(minikube ip) -L \*:31000:0.0.0.0:31000</p>
</blockquote>
<p>Then you can access your service through URL: <code>http://166.6.6.6:31000</code>, which will be forwarded to your service URL <code>http://192.168.99.100:31000</code></p>
<p>Thx: <a href="https://github.com/kubernetes/minikube/issues/877" rel="noreferrer">https://github.com/kubernetes/minikube/issues/877</a></p>
|
<p>Suppose I have a service containing two pods. One of the pods is an HTTP server, and the other pod needs to hit a REST endpoint on this pod. Is there a hostname that the second pod can use to address the first pod?</p>
| <p>I'm assuming when you say "service" you aren't referring to the Kubernetes lexicon of a <code>Service</code> object, otherwise your two Pods in the <code>Service</code> would be identical, so let's start by teasing out what a "Service" means in Kubernetes land. </p>
<p>You will have to create an additional <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer">Kubernetes object called a <code>Service</code></a> to get your hostname for your HTTP server's <code>Pod</code>. When you create a <code>Service</code> you will define a <code>.spec.selector</code> that points to a set of labels on the HTTP service's Pod. For the sake of example, let's say the label is <code>app: nginx</code>. The name of that <code>Service</code> object will become the internal DNS record that can be queried by the second <code>Pod</code>.</p>
<p>A simplified example:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: http-service
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.7.9
ports:
- containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
name: my-http-service
spec:
selector:
app: nginx
ports:
- protocol: TCP
port: 80
targetPort: 80
</code></pre>
<p>Now your second <code>Pod</code> can make requests to the HTTP service by the <code>Service</code> name, <code>my-http-service</code>. </p>
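<p>For example, from inside the second <code>Pod</code> something like this should work, assuming both Pods live in the <code>default</code> namespace and the image has <code>curl</code> available:</p>
<pre><code>curl http://my-http-service
# or the fully qualified form
curl http://my-http-service.default.svc.cluster.local
</code></pre>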
<p>It's also worth mentioning that Kubernetes best practice dictates that these <code>Pod</code>s be managed by controllers such as <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer">Deployment</a>s or <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="noreferrer">ReplicaSet</a>s for all sorts of reasons, including high availability of your applications.</p>
|
<p>Let's say I have a database with schema of v1, and an application which is tightly coupled to that schema of v1. i.e. SQLException is thrown if the records in the database don't match the entity classes.</p>
<p>How should I deploy a change which alters the database schema, and deploys the application which having a race condition. i.e. user queries the app for a field which no longer exists.</p>
| <p>This problem actually isn't specific to kubernetes, it happens in any system with more than one server -- kubernetes just makes it more front-and-center because of how automatic the rollover is. The words "tightly coupled" in your question are a dead giveaway of the <em>real</em> problem here.</p>
<p>That said, the "answer" actually will depend on which of the following mental models are better for your team:</p>
<ul>
<li>do not make two consecutive schemas contradictory</li>
<li>use a "maintenance" page that keeps traffic off of the pods until they are fully rolled out</li>
<li>just accept the <code>SQLException</code>s and add better retry logic to the consumers</li>
</ul>
<p>We use the first one, because the kubernetes rollout is baked into our engineering culture and we <em>know</em> that pod-old and pod-new will be running simultaneously and thus schema changes need to be incremental and backward compatible for at minimum one generation of pods.</p>
<p>However, sometimes we just accept that the engineering effort to do that is more cost than the 500s that a specific breaking change will incur, so we cheat and scale the replicas low, then roll it out and warn our monitoring team that there will be exceptions but they'll blow over. We can do that partially because the client has retry logic built into it.</p>
|
<p>I am trying to mount a <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath volume</a> into a Kubernetes Pod. An example of a <code>hostPath</code> volume specification is shown below, which is taken from the docs. I am deploying to hosts that are running RHEL 7 with SELinux enabled.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test-pd
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: test-container
volumeMounts:
- mountPath: /test-pd
name: test-volume
volumes:
- name: test-volume
hostPath:
# directory location on host
path: /data
# this field is optional
type: Directory
</code></pre>
<p>When my Pod tries to read from a file that has been mounted from the underlying host, I get a "Permission Denied" error. When I run <code>setenforce 0</code> to turn off SELinux, the error goes away and I can access the file. I get the same error when I bind mount a directory into a Docker container.</p>
<p>The issue is described <a href="https://www.projectatomic.io/blog/2015/06/using-volumes-with-docker-can-cause-problems-with-selinux/" rel="nofollow noreferrer">here</a> and, when using Docker, can be fixed by using the <code>z</code> or <code>Z</code> bind mount flag, described in the Docker docs <a href="https://docs.docker.com/storage/bind-mounts/#configure-the-selinux-label" rel="nofollow noreferrer">here</a>.</p>
<p>Whilst I can fix the issue by running</p>
<pre><code>chcon -Rt svirt_sandbox_file_t /path/to/my/host/dir/to/mount
</code></pre>
<p>I see this as a nasty hack, as I need to do this on every host in my Kubernetes cluster and also because my deployment of Kubernetes as described in the YAML spec is not a complete description of what it is that needs to be done to get my YAML to run correctly. Turning off SELinux is not an option.</p>
<p>I can see that Kubernetes mentions SELinux security contexts in the docs <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">here</a>, but I haven't been able to successfully mount a hostPath volume into a pod without getting the permission denied error.</p>
<p>What does the YAML need to look like to successfully enable a container to mount a HostPath volume from an underlying host that is running SELinux?</p>
<p><strong>Update:</strong></p>
<p>The file I am accessing is a CA certificate that has these labels:</p>
<pre><code>system_u:object_r:cert_t:s0
</code></pre>
<p>When I use the following options:</p>
<pre><code>securityContext:
seLinuxOptions:
level: "s0:c123,c456"
</code></pre>
<p>and then check the access control audit errors via <code>ausearch -m avc -ts recent</code>, I can see that there is a permission denied error where the container has a level label of <code>s0:c123,c456</code>, so I can see that the level label works. I have set the label to be <code>s0</code>.</p>
<p>However, if I try to change the <code>type</code> label to be <code>cert_t</code>, the container doesn't even start, there's an error :</p>
<pre><code>container_linux.go:247: starting container process caused "process_linux.go:364: container init caused \"write /proc/self/task/1/attr/exec: invalid argument\""
</code></pre>
<p>I don't seem to be able to change the type label of the container.</p>
| <p>You can assign SELinux labels using <code>seLinuxOptions</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test-pd
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: test-container
volumeMounts:
- mountPath: /test-pd
name: test-volume
securityContext:
      seLinuxOptions: # it may not have the desired effect
level: "s0:c123,c456"
securityContext:
seLinuxOptions:
level: "s0:c123,c456"
volumes:
- name: test-volume
hostPath:
# directory location on host
path: /data
# this field is optional
type: Directory
</code></pre>
<p>According to <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">documentation</a>:</p>
<p>Thanks to <a href="https://stackoverflow.com/users/1020315/phil">Phil</a> for pointing that out. It appears to be working only in <code>Pod.spec.securityContext</code> according to the <a href="https://github.com/projectatomic/adb-atomic-developer-bundle/issues/117#issuecomment-215313573" rel="nofollow noreferrer">issue comment</a></p>
<ul>
<li><strong>seLinuxOptions</strong>: Volumes that support SELinux labeling are relabeled to be accessible by the label specified under seLinuxOptions. Usually you only need to set the level section. This sets the Multi-Category Security (MCS) label <strong>given to all Containers in the Pod as well as the Volumes</strong>.</li>
</ul>
|
<p>I'm having trouble connecting to the kubernetes python client even though I'm following the examples <a href="https://github.com/kubernetes-client/python/blob/master/examples/example1.py" rel="noreferrer">here</a> in the api. </p>
<p>Basically this line can't connect to the kubernetes client: </p>
<pre><code>config.load_kube_config()
</code></pre>
<p><strong>What I'm doing:</strong> </p>
<p>I have a Dockerfile file like this that I'm building my image with. This is just a simple python/flask app. </p>
<pre><code>FROM python:2
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r requirements.txt
COPY . /usr/src/app
EXPOSE 5000
CMD [ "python", "./app.py" ]
</code></pre>
<p>This is my requirements.txt:</p>
<pre><code>Flask==1.0.2
gunicorn==19.8.1
kubernetes==6.0.0
requests # Apache-2.0
</code></pre>
<p>After building the Dockerfile it outputs: </p>
<pre><code> Successfully built a2590bae9fd9
Successfully tagged testapp:latest
</code></pre>
<p>but when I do <code>docker run a2590bae9fd9</code> I receive an error:</p>
<pre><code>Traceback (most recent call last):
File "./app.py", line 10, in <module>
config.load_kube_config()
File "/usr/local/lib/python2.7/site- packages/kubernetes/config/kube_config.py", line 470, in load_kube_config
config_persister=config_persister)
File "/usr/local/lib/python2.7/site- packages/kubernetes/config/kube_config.py", line 427, in _get_kube_config_loader_for_yaml_file
with open(filename) as f:
IOError: [Errno 2] No such file or directory: '/root/.kube/config'
</code></pre>
<p>I thought it might've been my python directory but I checked and its running in /usr/local/bin/python. </p>
<p>I'm really stumped - any suggestions/tips? thank you. </p>
| <p>You don't want <code>config.load_kube_config()</code>, you want <a href="https://github.com/kubernetes-client/python/blob/6.0.0/examples/in_cluster_config.py#L21" rel="noreferrer"><code>config.load_incluster_config()</code></a></p>
<p>If you need to distinguish between your setup and when it's running in a <code>Pod</code>, one mechanism is <code>if os.getenv('KUBERNETES_SERVICE_HOST'): config.load_incluster_config()</code> since that for sure will be in the environment while in a <code>Pod</code>, and is unlikely to be in your local environment.</p>
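<p>A minimal sketch of that pattern (assuming the Pod runs with a service account that is allowed to list pods):</p>
<pre><code>import os
from kubernetes import client, config

# Inside a Pod the KUBERNETES_SERVICE_HOST variable is injected automatically,
# so use the in-cluster service account credentials there.
if os.getenv('KUBERNETES_SERVICE_HOST'):
    config.load_incluster_config()
else:
    # Fall back to the local kubeconfig when developing outside the cluster.
    config.load_kube_config()

v1 = client.CoreV1Api()
for pod in v1.list_pod_for_all_namespaces().items:
    print(pod.metadata.name)
</code></pre>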
|
<p>I have an application set up like this:</p>
<p><a href="https://i.stack.imgur.com/WDQeq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WDQeq.png" alt="enter image description here"></a></p>
<p>There is a server, with a reverseproxy/load balancer that acts as the HTTPS termination (this is the one that has a server certificate), and several applications behind it(*)</p>
<p>However, some applications require authentication of the client with a certificate. Authentication cannot happen in the reverse proxy. Will the application be able to see the user certificate, or will it be jettisoned by the HTTPS->HTTP transfer?</p>
<p>(*) OK, so this is a Kubernetes ingress, and containers/pods.</p>
| <p>It will be lost. I think you need to extract it in the reverse proxy (i.e. Nginx) and pass it in as a HTTP header if you really must. See for example <a href="https://serverfault.com/questions/788895/nginx-reverse-proxy-pass-through-client-certificate">https://serverfault.com/questions/788895/nginx-reverse-proxy-pass-through-client-certificate</a>. Not very secure as the cert is passed in the clear!</p>
<p>I don't know if we have that level of control over the ingress, personally I'm using a normal Nginx server for incoming traffic instead.</p>
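<p>If you go down that route with plain Nginx, the idea looks roughly like this (a sketch only; the header name is arbitrary and <code>$ssl_client_escaped_cert</code> requires a reasonably recent Nginx):</p>
<pre><code>server {
    listen 443 ssl;
    # (server certificate directives omitted)
    ssl_client_certificate /etc/nginx/ca.crt;   # CA used to verify client certificates
    ssl_verify_client optional;
    location / {
        # pass the (URL-encoded) client certificate to the backend as a header
        proxy_set_header X-Client-Cert $ssl_client_escaped_cert;
        proxy_pass http://backend;
    }
}
</code></pre>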
|
<p><br>
I was installing Google cloud platform on my centos 7 machine. I am reading the commands from a book and it says that I should create a Project_ID, so I put proj1, then it says that as a verification step I could type: <br><br></p>
<pre><code>gcloud alpha projects list
</code></pre>
<p>To check the projects I have, but then I got this:</p>
<pre><code>$ gcloud alpha projects list
ERROR: (gcloud.alpha.projects.list) INVALID_ARGUMENT: Project id 'proj1' not found or invalid.
- '@type': type.googleapis.com/google.rpc.Help
  links:
  - description: Google developers console
    url: https://console.developers.google.com
</code></pre>
<p>I thought I could skip this, so I moved on to the installation of kubernetes, but then I executed this script:</p>
<pre><code>kubernetes/cluster/kube-up.sh
</code></pre>
<p>Now it also asks for the beta projects list. I already tried to execute the commands that it suggests:</p>
<pre><code>gcloud alpha projects list
gcloud beta projects list
</code></pre>
<p>It shows like this</p>
<p><a href="https://i.stack.imgur.com/HjT1O.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HjT1O.png" alt="enter image description here"></a></p>
<p>And it gives me the same error message:</p>
<p><a href="https://i.stack.imgur.com/fjVgC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fjVgC.png" alt="enter image description here"></a></p>
<p>Maybe there is a step in beetween, but the book didnt say anything. Can someone please assiste me? Im totally new to Kubernetes</p>
<p>Regards</p>
| <p>Errata...
Yes... I didn't double-check, and there was a missing command prior to setting up the project. The command is:</p>
<pre><code>gcloud config list project
</code></pre>
<p>and then</p>
<pre><code>gcloud config set project PROJECT_ID
</code></pre>
|
<p>I have a Docker image with the CMD to run a Java application.</p>
<p>This application is deployed as a container in Kubernetes. Since I am deploying it as a Docker image, I was expecting it to run as a Docker process. So I logged into the pod and tried "<strong>docker ps</strong>".</p>
<p>But I was surprised that it is running as a Java process and not as a docker process. I am able to see the process with "<strong>ps -ef</strong>".</p>
<p>I am confused, how does it work internally?</p>
| <p>As others stated, <strong>Kubernetes uses docker internally to deploy the containers</strong>. To explain in detail consider the cluster which has 4 nodes, 1 master and 3 slaves.</p>
<pre><code>$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
******.mylabserver.com Ready master 13d v1.10.5
******.mylabserver.com Ready <none> 13d v1.10.5
******.mylabserver.com Ready <none> 13d v1.10.5
******.mylabserver.com Ready <none> 13d v1.10.5
</code></pre>
<p>I am deploying a pod with the <strong>alpine docker image</strong>.</p>
<pre><code>$ cat pod-nginx.yml
apiVersion: v1
kind: Pod
metadata:
name: alpine
namespace: default
spec:
containers:
- name: alpine
image: alpine
command:
- sleep
- "3600"
imagePullPolicy: IfNotPresent
restartPolicy: Always
</code></pre>
<p>You can get the status of the pod as below:</p>
<pre><code>$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
alpine 1/1 Running 0 21s 10.244.3.4 ******.mylabserver.com
</code></pre>
<p><strong>Kube-scheduler</strong> will schedule the pod on one of the available nodes.</p>
<p>Now the pod is deployed to a server, where you can login to that particular server and find the information that you are looking for. </p>
<pre><code>root@******:/home/user# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS
PORTS NAMES
6486de4410ad alpine@sha256:e1871801d30885a610511c867de0d6baca7ed4e6a2573d506bbec7fd3b03873f "sleep 3600" 58 seconds ago Up 57 seconds
k8s_alpine_alpine_default_2e2b3016-79c8-11e8-aaab-
</code></pre>
<p>Run the <code>docker exec</code> command in that server to see the process running inside.</p>
<pre><code>root@******:/home/user# docker exec -it 6486de4410ad /bin/sh
/ # ps -eaf
PID USER TIME COMMAND
1 root 0:00 sleep 3600
7 root 0:00 /bin/sh
11 root 0:00 ps -eaf
/ #
</code></pre>
<p><a href="https://kubernetes.io/docs/home/-" rel="noreferrer">https://kubernetes.io/docs/home/-</a> this can give you more info about pods and how deployments happen with pods/containers.</p>
<p>Hope this helps.</p>
|
| <p>I try to deploy a container, but unfortunately I get an error when I execute <strong>kubectl apply -f *.yaml</strong></p>
<p>the error is : </p>
<blockquote>
<p>error validating data: ValidationError(Pod.spec.containers[1]):
unknown field "persistentVolumeClaim" in io.k8s.api.core.v1.Container;</p>
</blockquote>
<p>I don't understand why I get the error, because I wrote <strong>claimName:</strong> under <strong>persistentVolumeClaim:</strong> in my pd.yaml config :(</p>
<p><strong>Pod.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: karafpod
spec:
containers:
- name: karaf
image: xxx/karaf:ids-1.1.0
volumeMounts:
- name: karaf-conf-storage
mountPath: /apps/karaf/etc
- name: karaf-conf-storage
persistentVolumeClaim:
claimName: karaf-conf-claim
</code></pre>
<p><strong>PersistentVolumeClaimKaraf.yml</strong> </p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: karaf-conf-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Mi
</code></pre>
<p>Deployment.yaml</p>
<pre><code> apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: karaf
namespace: poc
spec:
replicas: 1
template:
metadata:
labels:
app: karaf
spec:
containers:
- name: karaf
image: "xxx/karaf:ids-1.1.0"
imagePullPolicy: Always
ports:
- containerPort: 6443
- containerPort: 6100
- containerPort: 6101
resources:
volumeMounts:
- mountPath: /apps/karaf/etc
name: karaf-conf
volumes:
- name: karaf-conf
persistentVolumeClaim:
claimName: karaf-conf
</code></pre>
| <p>The reason you're seeing that error is due to you specifying a <code>persistentVolumeClaim</code> under your pod spec's container specifications. As you can see from the auto generated docs here: <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#container-v1-core" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#container-v1-core</a></p>
<p><code>persistentVolumeClaims</code> aren't supported at this level/API object, which is what's giving the error you're seeing.</p>
<p>You should modify the <code>pod.yml</code> to specify this as a volume instead.</p>
<p>e.g.:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: karafpod
spec:
containers:
- name: karaf
image: xxx/karaf:ids-1.1.0
volumeMounts:
- name: karaf-conf-storage
mountPath: /apps/karaf/etc
volumes:
- name: karaf-conf-storage
persistentVolumeClaim:
claimName: karaf-conf-claim
</code></pre>
|
<p>I would like to pass in some of the values in kubernetes yaml files during deployment, for example by reading them from a config/properties file. </p>
<p>What is the best way to do that?</p>
<p>In the below example, I do not want to hardcode the port value, instead read the port number from config file.</p>
<p>Ex:</p>
<pre><code>logstash.yaml
</code></pre>
<hr>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: test
namespace: test
spec:
replicas: 1
selector:
app: test
template:
metadata:
labels:
app: test
spec:
containers:
- name: test
image: logstash
ports:
- containerPort: 33044 (looking to read this port from config file)
env:
- name: INPUT_PORT
value: "5044"
config.yaml
logstash_port: 33044
</code></pre>
| <p>This sounds like a perfect use case for Helm (www.helm.sh).</p>
<p>Helm Charts helps you define, install, and upgrade Kubernetes applications. You can use a pre-defined chart (like Nginx, etc) or create your own chart.</p>
<p>Charts are structured like:</p>
<pre><code>mychart/
Chart.yaml
values.yaml
charts/
templates/
...
</code></pre>
<p>In the templates folder, you can include your ReplicationController files (and any others). In the <code>values.yaml</code> file you can specify any variables you wish to share amongst the templates (like port numbers, file paths, etc).</p>
<p>The values file can be as simple or complex as you require. An example of a values file:</p>
<pre><code>myTestService:
containerPort: 33044
image: "logstash"
</code></pre>
<p>You can then reference these values in your template file using:</p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: test
namespace: test
spec:
replicas: 1
selector:
app: test
template:
metadata:
labels:
app: test
spec:
containers:
- name: test
image: logstash
ports:
- containerPort: {{ .Values.myTestService.containerPort }}
env:
- name: INPUT_PORT
value: "5044"
</code></pre>
<p>Once finished you can compile into Helm chart using <code>helm package mychart</code>. To deploy to your Kubernetes cluster you can use <code>helm install mychart-VERSION.tgz</code>. That will then deploy your chart to the cluster. The version number is set within the <code>Chart.yaml</code> file.</p>
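<p>You can also override individual values at install time without editing <code>values.yaml</code>, for example (the chart file name and port are placeholders):</p>
<pre><code>helm install mychart-0.1.0.tgz --set myTestService.containerPort=33055
</code></pre>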
|
<p>I am trying to understand K8s GPU practices better, and I am implementing a small K8s GPU cluster which is supposed to work as described below.</p>
<p>This is going to be a somewhat long explanation, but I hope having many questions in one place will help to understand GPU practices in Kubernetes better.</p>
<h2>Application Requirement</h2>
<ul>
<li>I want to create a K8s autoscale cluster. </li>
<li>Pods are running the models say a tensorflow based deep learning program. </li>
<li>Pods wait for a message to appear in a pub/sub queue and proceed
with execution once they receive a message. </li>
<li>Now a message is queued in a PUB/SUB queue. </li>
<li>As message is available, pods reads it and execute deep learning program.</li>
</ul>
<h2>Cluster requirement</h2>
<p>If no message is present in queue and none of the GPU based pods are executing program( i mean not using gpu), then gpu node pool should scale down to 0.</p>
<h2>Design 1</h2>
<p>Create a GPU node pool. Each node contains N GPUs, where N >= 1.
Assign a model trainer pod to each GPU, i.e. a 1:1 mapping of pods and GPUs.
I tried assigning 2 pods to a 2-GPU machine, where each pod is supposed to run an mnist program.</p>
<p>What I noticed is </p>
<p>1 pod got allocated and executed the program, but later it went into a crash loop. Maybe I am making a mistake, as my docker image is supposed to run the program only once, and I was just doing a feasibility test of running 2 pods simultaneously on the 2 GPUs of the same node. Below is the error:</p>
<pre> Message Reason First Seen Last Seen Count
Back-off restarting failed container BackOff Jun 21, 2018, 3:18:15 PM Jun 21, 2018, 4:16:42 PM 143
pulling image "nkumar15/mnist" Pulling Jun 21, 2018, 3:11:33 PM Jun 21, 2018, 3:24:52 PM 5
Successfully pulled image "nkumar15/mnist" Pulled Jun 21, 2018, 3:12:46 PM Jun 21, 2018, 3:24:52 PM 5
Created container Created Jun 21, 2018, 3:12:46 PM Jun 21, 2018, 3:24:52 PM 5
Started container Started Jun 21, 2018, 3:12:46 PM Jun 21, 2018, 3:24:52 PM 5
</pre>
<p>The other pod didn't get assigned at all to GPU. Below is the message from pod events</p>
<p><strong>0/3 nodes are available: 3 Insufficient nvidia.com/gpu.</strong></p>
<h2>Design 2</h2>
<p>Have multiple GPU machines in gpu node pool with each node having only 1 GPU.<br>
K8s will assign each pod to an available GPU in a node, and hopefully there won't be any issue. I have yet to try this.</p>
<h2>Questions</h2>
<ol>
<li>Is there any suggested practice to design above type of system in kubernetes as of version 1.10?</li>
<li>Is the Design 1 approach not feasible as of the 1.10 release? For example, if I have a 2-GPU node with 24 GB of GPU memory, is it possible for K8s to assign
1 pod to each GPU, with each pod executing its own workload under a 12 GB memory limit?</li>
<li>How do I scale down gpu node pool to 0 size through autoscaler?</li>
<li>In Design 2, say what if I run out of GPU memory? as curently in GCP 1 GPU node doesn't have more than 16 GB memory.</li>
</ol>
<p>Again apologies for such a long question, but I hope it will help other also.</p>
<h2>Updates</h2>
<p><strong>For question 2</strong>
I created a new cluster to reproduce the same issue which I had faced multiple times before. I am not sure what changed this time, but the 2nd pod was successfully allocated a GPU. I think with this result I can confirm that a 1-GPU-to-1-pod mapping is allowed on a multi-GPU single node.
However, restricting memory per GPU process is not feasible as of 1.10.</p>
| <p>Both designs are supported in 1.10. I view design 2 as a special case of 1. You don't necessarily need to have 1 GPU per node. In case your pod needs more GPUs and memory, you have to have multiple GPUs per node, as you mentioned in question (4). I'd go with 1 unless there's a reason not to.</p>
<p>I think the best practice would be create a new cluster with no GPUs (a cluster has a default node pool), and then create a GPU node pool and attach it to the cluster. Your non-GPU workload can run in the default pool, and the GPU workload can run in the GPU pool. To support scaling-down to 0 GPUs, you need to set <code>--num-nodes</code> and <code>--min-nodes</code> to be 0 when creating the GPU node pool.</p>
<p>Docs:</p>
<p><em>Create a cluster with no GPUs: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster#creating_a_cluster" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/creating-a-cluster#creating_a_cluster</a></em></p>
<p><em>Create a GPU node pool for an existing cluster: <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/gpus#gpu_pool" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/gpus#gpu_pool</a></em></p>
|
<p>I'm trying to create a number of pods from a yaml loop in helm. if I run with <code>--debug --dry-run</code> the output matches my expectations, but when I actually deploy to to a cluster, only the last iteration of the loop is present.</p>
<p>some yaml for you: </p>
<pre><code>{{ if .Values.componentTests }}
{{- range .Values.componentTests }}
apiVersion: v1
kind: Pod
metadata:
name: {{ . }}
labels:
app: {{ . }}
chart: {{ $.Chart.Name }}-{{ $.Chart.Version | replace "+" "_" }}
release: {{ $.Release.Name }}
heritage: {{ $.Release.Service }}
spec:
{{ toYaml $.Values.global.podSpec | indent 2 }}
restartPolicy: Never
containers:
- name: {{ . }}
ports:
- containerPort: 3000
image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/{{ . }}:latest
imagePullPolicy: Always
command: ["sleep"]
args: ["100d"]
resources:
requests:
memory: 2000Mi
cpu: 500m
{{- end }}
{{ end }}
</code></pre>
<p>when I run <code>helm upgrade --install --set componentTests="{a,b,c}" --debug --dry-run</code></p>
<p>I get the following output: </p>
<pre><code># Source: <path-to-file>.yaml
apiVersion: v1
kind: Pod
metadata:
name: a
labels:
app: a
chart: integrationtests-0.0.1
release: funny-ferret
heritage: Tiller
spec:
restartPolicy: Never
containers:
- name: content-tests
ports:
- containerPort: 3000
image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/a:latest
imagePullPolicy: Always
command: ["sleep"]
args: ["100d"]
resources:
requests:
memory: 2000Mi
cpu: 500m
apiVersion: v1
kind: Pod
metadata:
name: b
labels:
app: b
chart: integrationtests-0.0.1
release: funny-ferret
heritage: Tiller
spec:
restartPolicy: Never
containers:
- name: b
ports:
- containerPort: 3000
image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/b:latest
imagePullPolicy: Always
command: ["sleep"]
args: ["100d"]
resources:
requests:
memory: 2000Mi
cpu: 500m
apiVersion: v1
kind: Pod
metadata:
name: c
labels:
app: users-tests
chart: integrationtests-0.0.1
release: funny-ferret
heritage: Tiller
spec:
restartPolicy: Never
containers:
- name: c
ports:
- containerPort: 3000
image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/c:latest
imagePullPolicy: Always
command: ["sleep"]
args: ["100d"]
resources:
requests:
memory: 2000Mi
cpu: 500m
---
</code></pre>
<p>(some parts have been edited/removed due to sensitivity/irrelevance)</p>
<p>which looks to me like it does what I want, namely create a pod for <code>a</code>, another for <code>b</code>, and a third for <code>c</code>.</p>
<p>However, when actually installing this into a cluster, I always end up with only the pod corresponding to the last element in the list (in this case, <code>c</code>). It's almost as if they overwrite each other, but given that they have different names I don't think they should. Even running with <code>--debug</code> but not <code>--dry-run</code>, the output tells me I should have 3 pods, but using <code>kubectl get pods</code> I can see only one.</p>
<p>How can I iteratively create pods from a list using Helm?</p>
| <p>found it! </p>
<p>so apparently, helm uses <code>---</code> as a separator between specifications of pods/services/whatHaveYou.</p>
<p>Specifying the same fields multiple times in a single chart is valid; the last specified value wins for any given field. To avoid overwriting values and instead have multiple pods created, simply add the separator at the end of the loop:</p>
<pre><code>{{ if .Values.componentTests }}
{{- range .Values.componentTests }}
apiVersion: v1
kind: Pod
metadata:
name: {{ . }}
labels:
app: {{ . }}
chart: {{ $.Chart.Name }}-{{ $.Chart.Version | replace "+" "_" }}
release: {{ $.Release.Name }}
heritage: {{ $.Release.Service }}
spec:
{{ toYaml $.Values.global.podSpec | indent 2 }}
restartPolicy: Never
containers:
- name: {{ . }}
ports:
- containerPort: 3000
image: 123456789012.dkr.ecr.eu-west-1.amazonaws.com/{{ . }}:latest
imagePullPolicy: Always
command: ["sleep"]
args: ["100d"]
resources:
requests:
memory: 2000Mi
cpu: 500m
---
{{- end }}
{{ end }}
</code></pre>
|
<p>I created a Kubernetes cluster for a single-master multi-node cluster using kubeadm following the official kubernetes guide:</p>
<p><a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">Kubernetes cluster</a></p>
<p>I currently connect my laptop to the cluster via this command:</p>
<pre><code>kubectl get nodes --username kubernetes-admin --kubeconfig ~/.kube/config
</code></pre>
<p>However, I now want to add a separate user (or same actual user but different name) for our Jenkins to run commands. I just want a separate username for access/logging purposes. </p>
<p>How can I easily add another "jenkins" username (possibly with its own cert) in the config file? Kubeadm automatically uses <code>--authorization-mode=Node</code> (or at least mine did)</p>
<p>Background info: Only people who may make any changes on our cluster currently have/need access, so I don't need to only give users access to certain namespaces etc. Also, keep in mind we will have a cluster per environment: dev, UAT, production, etc.</p>
| <p>It's suitable to use Kubernetes <code>serviceAccount</code> and instruct your Jenkins deployment to use that account (with a bound Role):</p>
<pre><code>---
apiVersion: v1
kind: ServiceAccount
metadata:
name: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: jenkins
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: jenkins
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: jenkins
name: jenkins
spec:
replicas: 1
selector:
matchLabels:
app: jenkins
template:
metadata:
labels:
app: jenkins
spec:
serviceAccountName: jenkins
</code></pre>
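<p>If your Jenkins runs outside the cluster and you still want a dedicated entry in your kubeconfig, a hedged sketch (cluster name and namespace are placeholders; this relies on the token Secret that Kubernetes auto-creates for the ServiceAccount) would be:</p>
<pre><code># grab the auto-generated token of the jenkins ServiceAccount
SECRET=$(kubectl get serviceaccount jenkins -o jsonpath='{.secrets[0].name}')
TOKEN=$(kubectl get secret "$SECRET" -o jsonpath='{.data.token}' | base64 --decode)

# add a "jenkins" user and context to the kubeconfig
kubectl config set-credentials jenkins --token="$TOKEN"
kubectl config set-context jenkins-context --cluster=my-cluster --user=jenkins --namespace=default
</code></pre>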
|
<p>I can delete all jobs inside a cluster by running</p>
<pre><code>kubectl delete jobs --all
</code></pre>
<p>However, jobs are deleted one after another which is pretty slow (for ~200 jobs I had the time to write this question and it was not even done).</p>
<p>Is there a faster approach ?</p>
| <p>It's a little easier to set up an alias for this bash command:</p>
<pre><code>kubectl delete jobs `kubectl get jobs -o custom-columns=:.metadata.name`
</code></pre>
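<p>If sequential deletion is still too slow, a hedged alternative (assuming GNU <code>xargs</code> is available) is to run several delete calls in parallel:</p>
<pre><code>kubectl get jobs -o name | xargs -P 10 -n 1 kubectl delete
</code></pre>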
|
<p>I have a nested Maven project structure; several of the Maven artifacts also contain Kubernetes configurations. When I do a complete deployment of the projects, I want to combine all the Kubernetes configurations and apply them centrally.</p>
<p>It is something like Spring Application Context 'import' statements. Of course I can take these small configurations and copy/paste them into one big file, but then I have to maintain changes in 2 different places, which I don't want to...</p>
<p>I can't find an 'import' mechanism similar to the Spring one in the Kubernetes documentation. I could write something with Maven and Groovy to combine all these small configuration files into one big one, but that is a direction I really don't want to go.</p>
<p>Code Snippet from a configuration would be something like this..</p>
<pre>
<code>
kind: Deployment
apiVersion: apps/v1beta2
metadata:
name: apimanagementstore-deployment
namespace: api-management
spec:
replicas: 1
selector:
matchLabels:
k8s-app: api-management-store
template:
metadata:
labels:
k8s-app: api-management-store
spec:
containers:
- name: apimanagementstore
image: localhost:5000/api_management_store
ports:
- containerPort: 8383
protocol: TCP
---
#------------ Service ----------------#
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: api-management-store
name: api-management-store
namespace: api-management
spec:
ports:
- port: 8383
targetPort: 8383
selector:
k8s-app: api-management-store
</code>
</pre>
<p>I have several snippets like this (they all live under a separate Maven project) and I want to combine them into one big one.</p>
| <p>There is no option to use something like include in Kubernetes right now. Maybe in the future, they will add such an option.</p>
<p>You are right that you can create one big file and use '---' delimiters for each new configuration.</p>
<p>However, you also have another option: pass the <code>-f</code> flag once per file, for example:</p>
<pre><code>kubectl apply -f ./file1.yaml -f ./file2.yaml -f ./file3.yaml
</code></pre>
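<p>If all the snippets live under a common directory tree, you can also point <code>kubectl apply</code> at the directory itself (the path below is just an example) and let it pick up every manifest recursively:</p>
<pre><code>kubectl apply -R -f ./k8s-manifests/
</code></pre>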
<p>For more details please refer to <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply" rel="nofollow noreferrer">Kubernetes documentation</a></p>
|
<p>In the <a href="https://github.com/kubernetes-client/java" rel="nofollow noreferrer">Kubernetes-client Java API</a>, I can get a count of available and total deployed pod instances for a given app this way:</p>
<pre><code>ApiClient defaultClient = Configuration.getDefaultApiClient();
AppsV1beta1Api apiInstance = new AppsV1beta1Api();
...
try {
AppsV1beta1DeploymentList result = apiInstance.listDeploymentForAllNamespaces(_continue, fieldSelector, includeUninitialized, labelSelector, limit, pretty, resourceVersion, timeoutSeconds, watch);
    for (AppsV1beta1Deployment extensionsDeployment : result.getItems()) {
Map<String, String> labels = extensionsDeployment.getMetadata().getLabels();
String appName = labels.getOrDefault("app", "");
        AppsV1beta1DeploymentStatus status = extensionsDeployment.getStatus();
int availablePods = status.getAvailableReplicas();
int deployedPods = status.getReplicas();
if ( availablePods != deployedPods) {
// Generate an alert
}
}
} catch (ApiException e) {
System.err.println("Exception when calling AppsV1beta1Api#listDeploymentForAllNamespaces");
e.printStackTrace();
}
</code></pre>
<p>In the above example, I'm comparing <code>availablePods</code> with <code>deployedPods</code>, and if they don't match, I generate an alert.</p>
<p><strong>How can I replicate this logic using Prometheus using Alerting Rules and/or Alertmanager config, where it checks the number of available pod instances for a given app or job, and if it doesn't match a specified number of instances, it will trigger an alert?</strong></p>
<p>The specified threshold can be total <code>deployedPods</code> or it can come from another config file or template.</p>
| <p>I don’t know how to do this for all namespaces, but for one namespace it will look like this:</p>
<pre><code>curl -k -s 'https://prometheus-k8s/api/v1/query?query=(sum(kube_deployment_spec_replicas%7Bnamespace%3D%22default%22%7D)%20without%20(deployment%2C%20instance%2C%20pod))%20-%20(sum(kube_deployment_status_replicas_available%7Bnamespace%3D%22default%22%7D)%20without%20(deployment%2C%20instance%2C%20pod))'
</code></pre>
<p>This is a curl request scoped to the default namespace.</p>
<p>Alert config will look like:</p>
<pre><code>groups:
- name: example
rules:
  # Alert when available pods do not equal deployed pods for more than 15 minutes.
- alert: availablePods!=deployedPods
expr: (sum(kube_deployment_spec_replicas{namespace="$Name_of_namespace"}) without (deployment, instance, pod)) - (sum(kube_deployment_status_replicas_available{namespace="$Name_of_namespace"}) without (deployment, instance, pod)) != 0
for: 15m
labels:
severity: page
annotations:
summary: "availablePods are not equal deployedPods"
description: "In namespace $Name_of_namespace more than 15 minutes availablePods are not equal deployedPods. "
</code></pre>
<p>Don’t forget to change the <code>$Name_of_namespace</code> variable to the namespace you want to check.</p>
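<p>For all namespaces at once, a hedged variant of the same idea (still based on the kube-state-metrics series used above) drops the namespace filter and aggregates per namespace and deployment:</p>
<pre><code>sum(kube_deployment_spec_replicas) by (namespace, deployment)
  - sum(kube_deployment_status_replicas_available) by (namespace, deployment) != 0
</code></pre>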
|
<p>Is there any policy or way that enables us to auto-delete a persistent volume claim when a Kubernetes Job (using that claim) gets completed?</p>
<p>FYI:
I am creating the persistent volume claim with the Delete reclaim policy, i.e. on claim deletion the associated persistent volume gets deleted too, which is my intended behavior.</p>
<p>Please only suggest other solutions (like cron jobs for cleaning, or the Kubernetes watch feature) if there is no automated way of doing this; that will be my last option.</p>
| <p>There is no such functionality in Kubernetes to delete a Persistent Volume Claim after a Job has been completed.</p>
<p>One of the ways is to use a CronJob with a simple container running a shell script (it needs <code>kubectl</code> or API access) which checks for and deletes unused Persistent Volume Claims.</p>
<p>It is not recommended to use custom scripts in the Pod's pre-stop hook, as @techtrainer suggested in the comments, because if an error occurs before this script runs, the Job will fail and the Persistent Volume Claim won’t be deleted.</p>
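<p>For illustration, a minimal sketch of such a CronJob follows. It assumes the Jobs are labelled <code>pvc-cleanup=enabled</code>, that each claim is named after its Job with a <code>-claim</code> suffix, that a <code>pvc-cleaner</code> ServiceAccount with the required RBAC exists, and that an image shipping <code>kubectl</code> (here <code>bitnami/kubectl</code>, purely as an example) is acceptable:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: pvc-cleanup
spec:
  schedule: "*/30 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: pvc-cleaner      # needs RBAC to list jobs and delete PVCs
          restartPolicy: OnFailure
          containers:
          - name: cleanup
            image: bitnami/kubectl:latest      # any image with kubectl will do
            command:
            - /bin/sh
            - -c
            - |
              # delete the claim of every labelled job that has already succeeded
              for job in $(kubectl get jobs -l pvc-cleanup=enabled \
                  -o jsonpath='{range .items[?(@.status.succeeded==1)]}{.metadata.name}{"\n"}{end}'); do
                kubectl delete pvc "$job-claim" --ignore-not-found
              done
</code></pre>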
|
<p>I need to get access to the current namespace. I've looked up <code>KUBERNETES_NAMESPACE</code> and <code>OPENSHIFT_NAMESPACE</code>, but they are unset.</p>
<pre><code>$ oc rsh wsec-15-t6xj4
$ env | grep KUBERNETES
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://172.30.0.1:443
KUBERNETES_PORT_53_TCP_ADDR=172.30.0.1
KUBERNETES_PORT_53_UDP_ADDR=172.30.0.1
KUBERNETES_PORT_53_TCP_PORT=53
KUBERNETES_PORT_53_TCP_PROTO=tcp
KUBERNETES_PORT_53_UDP_PORT=53
KUBERNETES_SERVICE_PORT_DNS=53
KUBERNETES_PORT_53_UDP_PROTO=udp
KUBERNETES_PORT_443_TCP_ADDR=172.30.0.1
KUBERNETES_PORT_53_TCP=tcp://172.30.0.1:53
KUBERNETES_PORT_53_UDP=udp://172.30.0.1:53
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_DNS_TCP=53
KUBERNETES_PORT_443_TCP=tcp://172.30.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_SERVICE_HOST=172.30.0.1
</code></pre>
<p>Also the content of <code>/var/run/secrets/kubernetes.io/namespace</code> is empty.</p>
<p>Any ideas?</p>
| <p>OpenShift uses Projects instead of plain Namespaces.</p>
<p><a href="https://docs.openshift.com/container-platform/3.9/architecture/core_concepts/projects_and_users.html#namespaces" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/3.9/architecture/core_concepts/projects_and_users.html#namespaces</a></p>
<p>A Project extends the Kubernetes namespace with features such as resource limits, RBAC, and so on.</p>
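<p>If the goal is simply to read the current project/namespace from inside the pod, one common approach (a sketch, not OpenShift-specific) is to expose it through the Downward API:</p>
<pre><code>env:
- name: POD_NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
</code></pre>
<p>The namespace should also be readable from <code>/var/run/secrets/kubernetes.io/serviceaccount/namespace</code> (note the <code>serviceaccount</code> path segment).</p>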
|
<p>I have to communicate between two PODs in minikube which are exposed in two different ports but are in a single node.</p>
<p>For example:</p>
<ul>
<li>POD A uses 8080 port and which is the landing page. </li>
<li>From POD A we access POD B via hyperlink which uses 8761 port. </li>
</ul>
<p>Now, in kubernetes it assigns a port dynamically eg: POD A: 30069 and POD B: 30070</p>
<p><strong>Problem here is</strong>: the Kubernetes-assigned port for POD B (30070) is not used automatically when accessing POD B from POD A (30069). Instead, the link tries to open POD B on port 8761.</p>
<p>Apologies if my description is confusing. Please feel free to recheck if you could not relate to my question.</p>
<p>Thanks for your help</p>
| <blockquote>
<p>I have to communicate between two PODs in minikube which are exposed in two different ports but are in a single node.</p>
</blockquote>
<p>Based on the fact that you want inter-pod communication and that the pods reside on the same node, you could take several (rather questionable and fragile) approaches such as hostname and nodePort exposures. To be more in line with the Kubernetes approach and recommendations, I'd advise using a Service instead of exposing ports directly at the Pod level.</p>
<p>You can read more about Services in <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">the official documentation</a>, and an example of Service usage would be like so:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: my-pod-b-service
spec:
selector:
app: MyPodBApp
ports:
- protocol: TCP
port: 80
targetPort: 8761
</code></pre>
<p>This specification will create a new Service object named <code>my-pod-b-service</code> which targets TCP port 8761 on any Pod with the <code>app=MyPodBApp</code> label. With that any request coming from pod A for host: <code>my-pod-b-service</code> and port: <code>80</code> would be served by some pod B on port <code>8761</code> (note that port and targetPort can be the same, this is just an example).</p>
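<p>With that Service in place, pod A can reach pod B simply by the Service name (assuming both live in the <code>default</code> namespace), for example:</p>
<pre><code># from inside pod A
curl http://my-pod-b-service                          # same-namespace short name
curl http://my-pod-b-service.default.svc.cluster.local
</code></pre>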
<p>As a side note, for pod A you would have something like:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: my-pod-a-service
spec:
selector:
app: MyPodAApp
ports:
- protocol: TCP
port: 80
targetPort: 8080
</code></pre>
<p>Since you target Services, you can map the same incoming port (80) on both Services, and Kubernetes takes care of routing each request to the appropriate pods, as long as the pod selectors are set correctly.</p>
|
<p>I need cluster-id using kubectl command.</p>
<pre><code>root@vagrant-xenial64:~# kubectl get cm cluster-info -n kube-system -o jsonpath='{.data.cluster-config\.json}'
{
"cluster_id": "xxx",
"cluster_name": "prod-yyy-mmm",
"cluster_type": "rrr",
"cluster_pay_tier": "vvv",
"datacenter": "cse",
"account_id": "456777",
"created": "2018-06-32323dffdf:35:48+0000"
}
</code></pre>
<p>I need the <code>cluster-id</code> from this particular JSON.</p>
<pre><code>root@vagrant-xenial64:~# kubectl get cm cluster-info -n kube-system -o jsonpath='{.data.cluster-config\.json.cluster_id}'
root@vagrant-xenial64:~#
</code></pre>
<p>The above command returns an empty string.
I tried many other combinations too.</p>
| <p>Your <code>ConfigMap</code> resource <code>data</code> field contains a string which is interpreted as-is when you run <code>jsonpath</code> to select it via <code>'{.data.cluster-config\.json}'</code>. What I mean is that the shell you use will print it as JSON on <code>stdout</code>, although it's stored as a plain string in Kubernetes. If you run <code>kubectl get cm cluster-info -n kube-system -o json</code> and look at the <code>data</code> field, it might look something like this:</p>
<pre><code>"data": {
"cluster-config.json": "{\n \"cluster_id\": \"xxx\",\n \"cluster_name\": \"prod-yyy-mmm\",\n \"cluster_type\": \"rrr\",\n \"cluster_pay_tier\": \"vvv\",\n \"datacenter\": \"cse\",\n \"account_id\": \"456777\",\n \"created\": \"2018-06-32323dffdf:35:48+0000\"\n}\n"
}
</code></pre>
<p>You won't be able to access the "fields" within that string with <code>jsonpath</code> since it's not actually part of the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#configmap-v1-core" rel="noreferrer"><code>ConfigMap</code> API resource</a> fields.</p>
<p>You could try to use a second tool to fetch it though, using <a href="https://stedolan.github.io/jq/" rel="noreferrer"><code>jq</code></a>, a command-line JSON processor. This tool would interpret the output of <code>jsonpath</code> as JSON on the fly and parse it accordingly. </p>
<p>Example:</p>
<pre><code>kubectl get cm cluster-info -n kube-system -o jsonpath='{.data.cluster-config\.json}' | jq '.cluster_id'
"xxx"
</code></pre>
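<p>If you want the value without the surrounding quotes, <code>jq</code>'s <code>-r</code> (raw output) flag does that:</p>
<pre><code>kubectl get cm cluster-info -n kube-system -o jsonpath='{.data.cluster-config\.json}' | jq -r '.cluster_id'
xxx
</code></pre>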
<p>If installing e.g. <code>jq</code> defeats the purpose, I would recommend using a combination of already available tools (assuming you're on Linux) like <code>grep</code>, <code>awk</code> and <code>sed</code>:</p>
<pre><code>kubectl get cm cluster-info -n kube-system -o jsonpath='{.data.cluster-config\.json}' | grep cluster_id | awk '{ print $2 }' | sed -e 's/"//' -e 's/",//'
xxx
</code></pre>
|
<p>I have a newbie question about Docker and Kubernetes.</p>
<p>For example I have three PHP services in different containers. Nginx and php-fpm are in different containers too for each PHP service (1 process = 1 container).</p>
<p>Each PHP service should know the DNS name of the other PHP services in the cluster - how can I solve this? How should I configure the cluster and containers/pods to make it possible?</p>
<p>Thank you in advance for your help.</p>
<p>Sincerely, </p>
<p>gtw000</p>
| <p>Kubernetes Services create a DNS record that resolves to an IP. When you communicate with this IP, the request reaches one of the pods the Service targets.</p>
<p>For example, if you have the services svc1, svc2 and svc3, then svc1 becomes a resolvable hostname that directs traffic to one of svc1's pods. The same goes for svc2 and svc3, as sketched below.</p>
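<p>As a sketch (names and ports are only placeholders for your setup), each PHP service gets its own Service object, e.g. for svc1:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: svc1
spec:
  selector:
    app: svc1
  ports:
  - port: 80
    targetPort: 8080
</code></pre>
<p>The other PHP services can then simply call <code>http://svc1</code> (or <code>svc1.default.svc.cluster.local</code> if they run in the default namespace).</p>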
<p>Please read further on <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">K8s Services Docs</a></p>
|
<p>Any idea why I keep getting this annoying and unhelpful error code/description?</p>
<pre><code>Failed to pull image myapidemodocker.azurecr.io/apidemo:v4.0: rpc error: code = Unknown desc = unknown blob
</code></pre>
<p>I suspected an incorrect secret and followed this documentation from Microsoft with no success: <a href="https://learn.microsoft.com/en-us/azure/container-registry/container-registry-auth-aks" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/container-registry/container-registry-auth-aks</a>.</p>
<p>Context:</p>
<ul>
<li>I am using Visual Studio with Docker for Windows to create Windows
Container image.</li>
<li>Image is pushed to Azure Container Register (ACR) and Deployed as
Azure Container Instance. Unfortunately, I can't use ACI as
production application because it is not connected to a private vNET.
Can't use public IP for security reason but that's what is done just
for poc!</li>
<li>Next step, Created Kubernetes cluster in Azure and trying to deploy
the same image (Windows container) into Kubernetes POD but it is not
working.</li>
<li>Let me share my yml definition and event logs</li>
</ul>
<p>Here is my yml definition:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: apidemo
spec:
template:
metadata:
labels:
app: apidemo
spec:
containers:
- name: apidemo
image: myapidemodocker.azurecr.io/apidemo:v4.0
imagePullSecrets:
- name: myapidemosecret
nodeSelector:
beta.kubernetes.io/os: windows
Event logs:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m default-scheduler Successfully assigned apidemo-57b5fc58fb-zxk86 to aks-agentp
ool-18170390-1
Normal SuccessfulMountVolume 4m kubelet, aks-agentpool-18170390-1 MountVolume.SetUp succeeded for volume "default-token-gsjhl"
Normal SandboxChanged 2m kubelet, aks-agentpool-18170390-1 Pod sandbox changed, it will be killed and re-created.
Normal Pulling 2m (x2 over 4m) kubelet, aks-agentpool-18170390-1 pulling image "apidemodocker.azurecr.io/apidemo:v4.0"
Warning Failed 20s (x2 over 2m) kubelet, aks-agentpool-18170390-1 Failed to pull image "apidemodocker.azurecr.io/apidemo:v4
.0": [rpc error: code = Unknown desc = unknown blob, rpc error: code = Unknown desc = unknown blob]
Warning Failed 20s (x2 over 2m) kubelet, aks-agentpool-18170390-1 Error: ErrImagePull
Normal BackOff 10s kubelet, aks-agentpool-18170390-1 Back-off pulling image "apidemodocker.azurecr.io/apidemo:
v4.0"
Warning Failed 10s kubelet, aks-agentpool-18170390-1 Error: ImagePullBackOff
</code></pre>
<p>(5) I don't understand why Kubernetes is still using <code>/var/run/secrets/kubernetes.io/serviceaccount</code> from <code>default-token-gsjhl</code> as a secret while I specified my own!</p>
<p>Thanks for taking time to provide feedback.</p>
| <p>I was able to resolve the issue. It had nothing to do with the error message! The actual problem was that I was trying to use a Windows Container image, and Kubernetes in Azure only supports Linux Container images.</p>
<p>These are the actions I had to do:</p>
<ul>
<li>Configured Ubuntu (<a href="https://tutorials.ubuntu.com/tutorial/tutorial-windows-ubuntu-hyperv-containers#0" rel="nofollow noreferrer">Linux Container on Windows 10</a>) </li>
<li>Configured Docker to use Linux (Switch to Linux Container).</li>
<li>Converted ASP.NET MVC project to ASP.NET Core using Visual Studio 2017. This was a big change to support multiple platforms including Linux.</li>
<li>Updated the dockerfile and docker-compose project.</li>
<li>Created new docker image (Linux Container).</li>
<li>Pushed the image to Azure Container Registry.</li>
<li>Created a new deployment in Kubernetes with same credential. It worked!</li>
<li>Created a new <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service to expose the app in Kubernetes</a>. This step created an endpoint that client can use.</li>
<li>My Kubernetes cluster is vNET joined and all IPs are private. So, I exposed the Kubernetes endpoint (service) via Azure API Gateway. Just for the sake of the demo, I allowed anonymous access to the API (an API key and JWT token are a must for a production app).</li>
<li>Here is the application flow: Client App -> Azure API Gateway -> Kubernetes Endpoint(private IP) -> Kubernetes PODs -> My Linux Container</li>
</ul>
<p>There are lots of complexities and technology specifications are changing rapidly. So, it took me lots of reading to get it right! I am sure you can do it. Try my API from Azure Kubernetes Service here-</p>
<ul>
<li><a href="https://gdtapigateway.azure-api.net/containerdemo/aks/api/address/GetTop10Cities?StateProvince=Texas&CountryRegion=United%20States" rel="nofollow noreferrer">https://gdtapigateway.azure-api.net/containerdemo/aks/api/address/GetTop10Cities?StateProvince=Texas&CountryRegion=United%20States</a></li>
<li><a href="https://gdtapigateway.azure-api.net/containerdemo/aks/api/address/GetAddressById?addressID=581" rel="nofollow noreferrer">https://gdtapigateway.azure-api.net/containerdemo/aks/api/address/GetAddressById?addressID=581</a></li>
</ul>
<p>Here are some of the configurations that I used, for your information:</p>
<p><strong>Dockerfile:</strong></p>
<pre><code>FROM microsoft/aspnetcore:2.0
ARG source
WORKDIR /app
ENV ASPNETCORE_URLS=http://+:80
EXPOSE 80
COPY ${source:-obj/Docker/publish} .
ENTRYPOINT ["dotnet", "gdt.api.demo.dotnetcore.dll"]
</code></pre>
<p><strong>Docker-compose:</strong></p>
<pre><code>version: '3'

services:
  gdt-api-demo:
    image: gdt.api.demo.dotnetcore
    build:
      context: .\gdt.api.demo.dotnetcore
      dockerfile: Dockerfile
</code></pre>
<p><strong>Kubernetes Deployment Definition:</strong></p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: gdtapidemo
spec:
  template:
    metadata:
      labels:
        app: gdtapidemo
    spec:
      containers:
      - name: gdtapidemo
        image: gdtapidemodocker.azurecr.io/gdtapidemo-ubuntu:v1.0
      imagePullSecrets:
      - name: gdtapidemosecret
</code></pre>
<p><strong>Kubernetes Service Definition:</strong></p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: gdtapidemo-service
spec:
  selector:
    app: gdtapidemo-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9200
</code></pre>
<p><a href="https://i.stack.imgur.com/sPyOb.png" rel="nofollow noreferrer">Service as Deployed in Kubernetes</a></p>
|
<p>I have a directory within container A that I would like to share with container B. </p>
<p>For example I have a directory <code>/dataabc</code> on container A.</p>
<p>I've tried using a shared <code>hostPath</code> volume; however, as this is empty when mounted, it makes the existing files inaccessible (<code>/dataabc</code> would be mounted on top of the existing <code>/dataabc/</code> from container A).</p>
<p>I could copy the files over on container startup - but this requires modification to the container. Is there a more simple way that does not require modification to the container?</p>
| <p>Big thanks to @graham, I could reuse the existing container with just this minor modification to the pod config:</p>
<pre><code> initContainers:
- args:
- cp -r /var/www / && ls -altr /www/
command:
- /bin/sh
- -c
image: example
imagePullPolicy: Always
name: example-init
volumeMounts:
- mountPath: /www
name: webroot
</code></pre>
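<p>The rest of the pod spec is not shown in the snippet above; a hedged sketch of how the main container and the shared volume might be wired up (container name, image and mount path are placeholders) is:</p>
<pre><code>  containers:
  - name: example
    image: example
    volumeMounts:
    - mountPath: /var/www          # the files copied by the init container appear here
      name: webroot
  volumes:
  - name: webroot
    emptyDir: {}
</code></pre>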
|
<p>All system services in Kubernetes are deployed to a namespace, usually called <code>kube-system</code>. Where does that come from? What if I would like to change that to another namespace?</p>
| <blockquote>
<p>All system services in Kubernetes are deployed to a namespace, usually called kube-system. Where does that come from?</p>
</blockquote>
<p>As noted in <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/" rel="nofollow noreferrer">the nice documentation</a> there are three namespaces that Kubernetes initially starts with:</p>
<ul>
<li><strong>default</strong> - The default namespace for objects with no other namespace.</li>
<li><strong>kube-system</strong> - The namespace for objects created by the Kubernetes system.</li>
<li><strong>kube-public</strong> - The namespace is created automatically and readable by all users (including those not authenticated). This namespace is mostly reserved for cluster usage, in case that some resources should be visible and readable publicly throughout the whole cluster. The public aspect of this namespace is only a convention, not a requirement.</li>
</ul>
<p>You can change the <code>default</code> namespace to any namespace of your liking using <code>kubectl config</code> context handling, for example as shown below.</p>
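<p>A minimal one-liner for that (the namespace name is just a placeholder) rewrites the current context:</p>
<pre><code>kubectl config set-context $(kubectl config current-context) --namespace=my-namespace
</code></pre>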
<blockquote>
<p>What if I would like to change that to another namespace?</p>
</blockquote>
<p>That would be a convoluted and rather risky undertaking... For a kubeadm-created cluster you can find the appropriate manifests in /etc/kubernetes/manifests, but it is not sufficient just to change the namespace there: there is an array of config maps, certificates and other things to consider namespace-wise. And even if you manage to do so, there is a reason behind the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="nofollow noreferrer">deprecation of the api-server flag <code>master-service-namespace</code></a>: you can break implicit references (in GKE, for example) and similar issues can arise. It all boils down to this: it is not really advisable to change the kube-system namespace.</p>
<h1>Edit:</h1>
<p>Below is an excerpt from the <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/core/types.go" rel="nofollow noreferrer">kubernetes source</a> where you can see how those namespaces are initially defined.</p>
<pre><code>// NamespaceDefault means the object is in the default namespace which is applied when not specified by clients
NamespaceDefault string = "default"
// NamespaceAll is the default argument to specify on a context when you want to list or filter resources across all namespaces
NamespaceAll string = ""
// NamespaceNone is the argument for a context when there is no namespace.
NamespaceNone string = ""
// NamespaceSystem is the system namespace where we place system components.
NamespaceSystem string = "kube-system"
// NamespacePublic is the namespace where we place public info (ConfigMaps)
NamespacePublic string = "kube-public"
</code></pre>
<p>You can find <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/componentconfig/types.go" rel="nofollow noreferrer">more references to kube-system</a> throughout the codebase; here is another example:</p>
<pre><code>// "kube-system" is the default scheduler lock object namespace
SchedulerDefaultLockObjectNamespace string = "kube-system"
</code></pre>
<p>And so on...</p>
|