prompt | response
---|---|
<p>I am using <a href="https://github.com/kubernetes-sigs/kubeadm-dind-cluster" rel="nofollow noreferrer">kubeadm-dind-cluster</a>, a Kubernetes multi-node cluster for developers of Kubernetes and of projects that extend Kubernetes, based on kubeadm and DIND (Docker in Docker).</p>
<p>I have a fresh Centos 7 install on which I have just run <code>./dind-cluster-v1.13.sh up</code>. I did not set any other values and am using all the default values for networking.</p>
<p>All appears well:</p>
<pre><code>[root@node01 dind-cluster]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
kube-master Ready master 23h v1.13.0
kube-node-1 Ready <none> 23h v1.13.0
kube-node-2 Ready <none> 23h v1.13.0
[root@node01 dind-cluster]# kubectl config view
apiVersion: v1
clusters:
- cluster:
insecure-skip-tls-verify: true
server: http://127.0.0.1:32769
name: dind
contexts:
- context:
cluster: dind
user: ""
name: dind
current-context: dind
kind: Config
preferences: {}
users: []
[root@node01 dind-cluster]# kubectl cluster-info
Kubernetes master is running at http://127.0.0.1:32769
KubeDNS is running at http://127.0.0.1:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@node01 dind-cluster]#
</code></pre>
<p>and it appears healthy:</p>
<pre><code>[root@node01 dind-cluster]# curl -w '\n' http://127.0.0.1:32769/healthz
ok
</code></pre>
<p>I know the dashboard service is there:</p>
<pre><code>[root@node01 dind-cluster]# kubectl get services kubernetes-dashboard -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.102.82.8 <none> 80:31990/TCP 23h
</code></pre>
<p>however any attempt to access it is refused:</p>
<pre><code>[root@node01 dind-cluster]# curl http://127.0.0.1:8080/api/v1/namespaces/kube-system/services/kubernetes-dashboard
curl: (7) Failed connect to 127.0.0.1:8080; Connection refused
[root@node01 dind-cluster]# curl http://127.0.0.1:8080/ui
curl: (7) Failed connect to 127.0.0.1:8080; Connection refused
</code></pre>
<p>I also see the following in the firewall log:</p>
<pre><code>2019-02-05 19:45:19 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -C DOCKER -p tcp -d 127.0.0.1 --dport 32769 -j DNAT --to-destination 10.192.0.2:8080 ! -i br-669b654fc9cd' failed: iptables: No chain/target/match by that name.
2019-02-05 19:45:19 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C DOCKER ! -i br-669b654fc9cd -o br-669b654fc9cd -p tcp -d 10.192.0.2 --dport 8080 -j ACCEPT' failed: iptables: Bad rule (does a matching rule exist in that chain?).
2019-02-05 19:45:19 WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -C POSTROUTING -p tcp -s 10.192.0.2 -d 10.192.0.2 --dport 8080 -j MASQUERADE' failed: iptables: No chain/target/match by that name.
</code></pre>
<p>Any suggestions on how I actually access the dashboard externally from my development machine? I don't want to use the proxy to do this.</p>
| <p>You should be able to access <code>kubernetes-dashboard</code> using the following addresses:</p>
<p>ClusterIP (works for other pods in the cluster):</p>
<pre><code>http://10.102.82.8:80/
</code></pre>
<p>NodePort (works for any host that can reach the cluster nodes via their IPs):</p>
<pre><code>http://clusterNodeIP:31990/
</code></pre>
<p>Usually the Kubernetes dashboard uses the <code>https</code> protocol, so you may need to use a different port when talking to the <code>kubernetes-dashboard</code> Service in that case.</p>
<p>You can also access the dashboard using <code>kube-apiserver</code> as a proxy:</p>
<p>Directly to dashboard Pod:</p>
<pre><code>https://<master-ip>:<apiserver-port>/api/v1/namespaces/kube-system/pods/https:kubernetes-dashboard-pod-name:/proxy/#!/login
</code></pre>
<p>To dashboard ClusterIP service:</p>
<pre><code>https://<master-ip>:<apiserver-port>/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
</code></pre>
<p>I can guess that <code><master-ip>:<apiserver-port></code> would mean <code>127.0.0.1:32769</code> in your case.</p>
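<p>As a quick check from the machine in your question (a sketch, assuming the DIND node containers are reachable from the host and using the NodePort 31990 shown in your service listing):</p>
<pre><code># list the node (DIND container) IPs
kubectl get nodes -o wide
# then hit the dashboard NodePort on one of those IPs
curl http://<node-internal-ip>:31990/
</code></pre>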
|
<p>I have spring boot service which is running on distributed VMs, but I want to move that service to Kubernetes. Previously we had spring cloud gateway configured for request limiter across those 4 VMs, but now with Kubernetes, my application will be auto-scalable.</p>
<p>In that case, how can I limit the requests given that, Kubernetes could increase or decrease the pods based on traffic? How can I maintain the state of the incoming traffic but still keep my service stateless?</p>
| <p>Essentially, you can do rate limiting by fronting your application with a proxy (nginx, haproxy, etc.). More specifically, you can use a <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Kubernetes Ingress</a>, for example with the <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">nginx ingress controller</a>, and then use something like <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#limit-rate" rel="nofollow noreferrer">limit rate</a> in the ConfigMap or <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#rate-limiting" rel="nofollow noreferrer">rate limiting</a> through annotations on the Ingress.</p>
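<p>A rough sketch of the annotation approach (the host and service names here are made up, and the exact limits are assumptions you would tune for your app):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: spring-app
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/limit-rps: "10"          # requests per second per client IP
    nginx.ingress.kubernetes.io/limit-connections: "20"  # concurrent connections per client IP
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: spring-app
          servicePort: 80
</code></pre>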
|
<p>When accessing the API server directly (i.e. not with kubectl, but with direct HTTP requests), what formats for the resource specifications does the API server support?</p>
<p>In all examples I have seen so far, the resource specifications are in JSON (for example <a href="https://github.com/kubernetes/kubernetes/issues/17404" rel="nofollow noreferrer">here</a>). But I couldn't find any general information about this.</p>
<p>Does the API server also accept resource specifications in other formats, such as YAML or protobuf?</p>
<p>Similarly, when the API server returns resources in response to a GET request, are the resources always returned in JSON or are there any other formats supported?</p>
| <p><a href="https://www.oreilly.com/library/view/managing-kubernetes/9781492033905/ch04.html" rel="nofollow noreferrer">Managing Kubernetes, Chapter 4</a> (section "Alternate encodings") says that the API server supports three data formats for resource specifications:</p>
<ul>
<li>JSON</li>
<li>YAML</li>
<li>Protocol Buffers (protobuf)</li>
</ul>
<p>I tested creating resources in these formats using <code>curl</code> and it works, as shown in the following.</p>
<h3>Preparation</h3>
<p>For easily talking to the API server, start a proxy to the API server with kubectl:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl proxy
</code></pre>
<p>Now the API server is accessible on <a href="http://127.0.0.1:8001" rel="nofollow noreferrer">http://127.0.0.1:8001</a>.</p>
<p>The Kubernetes API specification is accessible on <a href="http://127.0.0.1:8001/openapi/v2" rel="nofollow noreferrer">http://127.0.0.1:8001/openapi/v2</a>.</p>
<h3>Request body formats</h3>
<p>You have to specify the format of the HTTP POST request body (i.e. the resource specification) in the <code>Content-Type</code> header.</p>
<p>The following data formats are supported:</p>
<ul>
<li><code>application/json</code></li>
<li><code>application/yaml</code></li>
<li><code>application/vnd.kubernetes.protobuf</code></li>
</ul>
<p>Below are concrete examples of requests.</p>
<h3>Create a resource with JSON</h3>
<p>Define a resource specification in JSON and save it in a file.</p>
<p>For example, <code>pod.json</code>:</p>
<pre><code>{
"apiVersion":"v1",
"kind":"Pod",
"metadata":{
"name":"test-pod"
},
"spec":{
"containers":[
{
"image":"nginx",
"name":"nginx-container"
}
]
}
}
</code></pre>
<p>Call API server to create the resource:</p>
<pre class="lang-sh prettyprint-override"><code>curl -H "Content-Type: application/json" -d "$(cat pod.json)" -X POST http://127.0.0.1:8001/api/v1/namespaces/default/pods
</code></pre>
<h3>Create a resource with YAML</h3>
<p>Define a resource specification in YAML and save it in a file.</p>
<p>For example, <code>pod.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - image: nginx
    name: nginx-container
</code></pre>
<p>Call API server to create the resource:</p>
<pre class="lang-sh prettyprint-override"><code>curl -H "Content-Type: application/yaml" -d "$(cat pod.yaml)" -X POST http://127.0.0.1:8001/api/v1/namespaces/default/pods
</code></pre>
<h3>Create a resource with protobuf</h3>
<p>I didn't test this, because the Kubernetes protobuf wire format uses a custom wrapper around the protobuf serialisation of the resource (see <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/api-machinery/protobuf.md" rel="nofollow noreferrer">here</a> and <a href="https://kubernetes.io/docs/reference/using-api/api-concepts/#protobuf-encoding" rel="nofollow noreferrer">here</a>). But, in principle, it should work.</p>
<h3>Response body formats</h3>
<p>When creating a resource as shown above, the API server returns the complete specification of the same resource in the HTTP response (that is, the specification that you submitted, initialised with all the default values, a <code>status</code> field, etc.).</p>
<p>You can choose the format for this response data with the <code>Accept</code> header in the request.</p>
<p>The accepted formats for the <code>Accept</code> header are the same as for the <code>Content-Type</code> header:</p>
<ul>
<li><code>application/json</code> (default)</li>
<li><code>application/yaml</code></li>
<li><code>application/vnd.kubernetes.protobuf</code></li>
</ul>
<p>For example:</p>
<pre class="lang-sh prettyprint-override"><code>curl -H "Content-Type: application/json" -H "Accept: application/yaml" -d "$(cat pod.json)" -X POST http://127.0.0.1:8001/api/v1/namespaces/default/pods
</code></pre>
<p>All combinations of formats in the <code>Content-Type</code> and <code>Accept</code> headers are possible.</p>
|
<p>I'm getting this error. I also created rbac.yaml, but it requires admin permission. Is it possible to apply rbac.yaml without an admin role?</p>
<pre><code> apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: hazelcast-rbac
subjects:
name: default-cluster
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: view
subjects:
- kind: ServiceAccount
name: default
namespace: default
</code></pre>
| <p>By default, only cluster admin can create <code>ClusterRoleBinding</code>. If you are project admin, please create <code>RoleBinding</code> instead.</p>
<p>To check if you can create <code>rolebinding</code>:</p>
<pre><code>oc adm policy who-can create rolebinding
</code></pre>
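<p>On plain Kubernetes (without the OpenShift <code>oc</code> client), a roughly equivalent check would be the following (treat the namespace as an assumption):</p>
<pre><code>kubectl auth can-i create rolebindings --namespace default
kubectl auth can-i create clusterrolebindings
</code></pre>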
|
<p>I have a k8s cluster on Azure created with acs-engine. It has 4 Windows agent nodes.</p>
<p>Recently 2 of the nodes went into a not-ready state and remained there for over a day. In an attempt to correct the situation I did a "kubectl delete node" command on both of the not-ready nodes, thinking that they would simply be restarted in the same way that a pod that is part of a deployment is restarted.</p>
<p>No such luck. The nodes no longer appear in the "kubectl get nodes" list. The virtual machines that are backing the nodes are still there and still running. I tried restarting the VMs thinking that this might cause them to self register, but no luck.</p>
<p>How do I get the nodes back as part of the k8s cluster? Otherwise, how do I recover from this situation? Worse case I can simply throw away the entire cluster and recreate it, but I really would like to simply fix what I have.</p>
| <p>You can delete the virtual machines and rerun your acs-engine template; that <em>should</em> bring the nodes back (although I didn't test your exact scenario). Or you could simply create a new cluster, which doesn't take a lot of time, since you just need to run your template.</p>
<p>There is no way of recovering from the deletion of an object in k8s. Pretty sure they are purged from etcd as soon as you delete them.</p>
|
<p>I'm applying a <strong>Kubernetes CronJob</strong>.
So far it works.
Now I want to <strong>add the environment variables</strong>. (env: -name... see below)
While trying to apply it I get the error </p>
<blockquote>
<p>unknown field "configMapRef" in io.k8s.api.core.v1.EnvVarSource</p>
</blockquote>
<p>I don't want to set all the single variables here. I prefer to link the ConfigMap so I don't duplicate the variables. <strong>How is it possible to reference the configmap.yaml variables in a CronJob file, and how do I code it?</strong></p>
<p>Frank</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: ad-sync
  creationTimestamp: 2019-02-15T09:10:20Z
  namespace: default
  selfLink: /apis/batch/v1beta1/namespaces/default/cronjobs/ad-sync
spec:
  concurrencyPolicy: Allow
  failedJobsHistoryLimit: 1
  successfulJobsHistoryLimit: 3
  suspend: false
  schedule: "0 */1 * * *"
  jobTemplate:
    metadata:
      labels:
        job: ad-sync
    spec:
      template:
        spec:
          containers:
          - name: ad-sync
            image: foo.azurecr.io/foobar/ad-sync
            command: ["dotnet", "AdSyncService.dll"]
            args: []
            env:
            - name: AdSyncService
              valueFrom:
                configMapRef:
                  name: ad-sync-service-configmap
          restartPolicy: OnFailure</code></pre>
</div>
</div>
</p>
| <p>There is no such field as <strong>configMapRef</strong> under the <strong>env</strong> field; instead there is a field called <strong>configMapKeyRef</strong>.</p>
<p>In order to get more detail about <strong>kubernetes objects</strong>, it's convenient to use <strong>kubectl explain --help</strong>.</p>
<p>For example, if you would like to check all of the keys and their types, you can use the following commands:</p>
<pre><code>kubectl explain cronJob --recursive
kubectl explain cronjob.spec.jobTemplate.spec.template.spec.containers.env.valueFrom.configMapKeyRef
</code></pre>
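<p>If the intent is to pull in every key of the ConfigMap at once rather than a single key, a minimal sketch (reusing the ConfigMap name from the question; note this is <strong>envFrom</strong>, a different field than <strong>env</strong>) would be:</p>
<pre><code>containers:
- name: ad-sync
  image: foo.azurecr.io/foobar/ad-sync
  command: ["dotnet", "AdSyncService.dll"]
  envFrom:
  - configMapRef:
      name: ad-sync-service-configmap
</code></pre>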
|
<p>I'm trying to apply SSL to my kubernetes clusters (production & staging environment), but for now only on staging. I successfully installed the cert-manager, and since I have 5 subdomains, I want to use wildcards, so I want to configure it with dns01. The problem is, we use GoDaddy for DNS management, but it's currently not supported (I think) by cert-manager. There is an issue (<a href="https://github.com/jetstack/cert-manager/issues/1083" rel="noreferrer">https://github.com/jetstack/cert-manager/issues/1083</a>) and also a PR to support this, but I was wondering if there is a workaround to use GoDaddy with cert-manager, since there is not a lot of activity on this subject? I want to use ACME so I can use Let's Encrypt for certificates.</p>
<p>I'm fairly new to kubernetes, so if I missed something let me know. </p>
<p>Is it possible to use let's encrypt with other type of issuers than ACME? Is there any other way where I can use GoDaddy DNS & let's encrypt with kubernetes?</p>
<p>For now I don't have any Ingresses but only 2 services that are external faced. One frontend and one API gateway as LoadBalancer services.</p>
<p>Thanks in advance!</p>
| <p>Yes, you can definitely use cert-manager with k8s, and Let's Encrypt is also a nice way to manage the certificates.</p>
<p>ACME has different API URLs to register a domain, and through it you can also get a wildcard (*) SSL certificate for a domain.</p>
<p>In simple terms: install cert-manager and use the nginx ingress controller, and you will be done with it. You have to add the TLS certificate by defining it on the Ingress object.</p>
<p>You can refer to this tutorial for the setup of cert-manager and the nginx ingress controller:</p>
<p><a href="https://docs.cert-manager.io/en/venafi/tutorials/quick-start/index.html" rel="nofollow noreferrer">https://docs.cert-manager.io/en/venafi/tutorials/quick-start/index.html</a> </p>
|
<p>I am practicing <a href="https://www.katacoda.com/courses/kubernetes/storage-introduction" rel="nofollow noreferrer">k8s</a> on the storage topic. I don't understand why the <code>PersistentVolume</code> in step 2 has a different storage size from the <code>PersistentVolumeClaim</code> the tutorial configures in step 3.</p>
<p>For example, in <code>nfs-0001.yaml</code> and <code>nfs-0002.yaml</code> the <code>storage</code> values are <code>2Gi</code> and <code>5Gi</code>:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-0001
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: 172.17.0.7
    path: /exports/data-0001

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-0002
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: 172.17.0.7
    path: /exports/data-0002
</code></pre>
<p>Example in step3 : <code>pvc-mysql.yaml, pvc-http.yaml</code></p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim-mysql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim-http
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
</code></pre>
<p>And when I check the <code>pv and pvc</code></p>
<pre><code>master $ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
claim-http Bound nfs-0001 2Gi RWO,RWX 17m
claim-mysql Bound nfs-0002 5Gi RWO,RWX 17m
master $ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-0001 2Gi RWO,RWX Recycle Bound default/claim-http 19m
nfs-0002 5Gi RWO,RWX Recycle Bound default/claim-mysql 19m
</code></pre>
<p>Neither <code>1Gi</code> nor <code>3Gi</code> shows up in the terminal.</p>
<p><strong>Question:</strong></p>
<ol>
<li>Where are the <code>1Gi</code> and <code>3Gi</code>?<br></li>
<li>If they are not used, is it safe to put an arbitrary number for <code>storage</code> in the <code>PersistentVolumeClaim</code> yaml?</li>
</ol>
| <p>You need to understand the difference between PVs and PVCs. A PVC is a declaration of storage which at some point becomes available for an application to use, and it is not the actual size of the volume allocated.</p>
<p>PVs are the actual volumes allocated on the disk and ready to use. In order to use these PVs, the user needs to create PersistentVolumeClaims, which are nothing but requests for PVs. A claim must specify the access mode and storage capacity; once a claim is created, a PV is automatically bound to this claim.</p>
<p>In your case, you have PVs of 2Gi and 5Gi, and you have created two PVCs requesting 3Gi and 1Gi respectively, with <code>accessMode: ReadWriteOnce</code>, which means only one PV can be bound to one PVC.</p>
<p>The capacity of each available PV is larger than what was requested, and hence a larger PV was bound to each PVC.</p>
<blockquote>
<p><code>PVC.spec.resources.requests.storage</code> is the user's request for storage, "I want a 10 GiB volume". <code>PV.spec.capacity</code> is the actual size of the PV. A PVC can bind to a bigger PV when there is no smaller PV available, so the user can actually get more than they asked for.</p>
</blockquote>
<p>Similarly, dynamic provisioning typically works in bigger chunks. So if a user asks for 0.5GiB in a PVC, they will get a 1GiB PV, because that's the smallest one that AWS can provision.</p>
<p>There is nothing wrong with that. Also, you should not put a random number in the PVC size; it should be calculated according to your application's needs and scaling.</p>
|
<p>I've created clusters using kops command. For each cluster I've to create a hosted zone and add namespaces to DNS provider. To create a hosted zone, I've created a sub-domain in the hosted zone in aws(example.com) by using the following command :</p>
<pre><code>ID=$(uuidgen) && aws route53 create-hosted-zone --name subdomain1.example.com --caller-reference $ID | jq .DelegationSet.NameServers
</code></pre>
<p>The nameservers I get by executing above command are included in a newly created file subdomain1.json with the following content.</p>
<pre><code>{
"Comment": "Create a subdomain NS record in the parent domain",
"Changes": [
{
"Action": "CREATE",
"ResourceRecordSet": {
"Name": "subdomain1.example.com",
"Type": "NS",
"TTL": 300,
"ResourceRecords": [
{
"Value": "ns-1.awsdns-1.co.uk"
},
{
"Value": "ns-2.awsdns-2.org"
},
{
"Value": "ns-3.awsdns-3.com"
},
{
"Value": "ns-4.awsdns-4.net"
}
]
}
}
]
}
</code></pre>
<p>To get the parent-zone-id, I've used the following command:</p>
<pre><code>aws route53 list-hosted-zones | jq '.HostedZones[] | select(.Name=="example.com.") | .Id'
</code></pre>
<p>To apply the subdomain NS records to the parent hosted zone-</p>
<pre><code>aws route53 change-resource-record-sets --hosted-zone-id <parent-zone-id> --change-batch file://subdomain1.json
</code></pre>
<p>then I created a cluster using kops command-</p>
<pre><code>kops create cluster --name=subdomain1.example.com --master-count=1 --master-zones ap-southeast-1a --node-count=1 --zones=ap-southeast-1a --authorization=rbac --state=s3://example.com --kubernetes-version=1.11.0 --yes
</code></pre>
<p>I'm able to create a cluster, validate it and get its nodes. By using the same procedure, I created one more cluster (subdomain2.example.com).</p>
<p>I've set aliases for the two clusters using these commands-</p>
<pre><code>kubectl config set-context subdomain1 --cluster=subdomain1.example.com --user=subdomain1.example.com
kubectl config set-context subdomain2 --cluster=subdomain2.example.com --user=subdomain2.example.com
</code></pre>
<p>To set up federation between these two clusters, I've used these commands-
kubectl config use-context subdomain1</p>
<pre><code>kubectl create clusterrolebinding admin-to-cluster-admin-binding --clusterrole=cluster-admin --user=admin
kubefed init interstellar --host-cluster-context=subdomain1 --dns-provider=aws-route53 --dns-zone-name=example.com
</code></pre>
<p>-<a href="https://i.stack.imgur.com/EUsBN.png" rel="nofollow noreferrer">The output of kubefed init command should be</a></p>
<p>But for me it's showing as "waiting for the federation control plane to come up...", but it does not come up. What might be the error?</p>
<p>I've followed the following tutorial to create 2 clusters.</p>
<p><a href="https://gist.github.com/arun-gupta/02f534c7720c8e9c9a875681b430441a" rel="nofollow noreferrer">https://gist.github.com/arun-gupta/02f534c7720c8e9c9a875681b430441a</a></p>
| <p>There was a problem with the default image used for the federation API server and controller manager binaries. By default, the image below is used by the kubefed init command:
"gcr.io/k8s-jkns-e2e-gce-federation/fcp-amd64:v0.0.0-master_$Format:%h$".</p>
<p>But this image is old and is no longer available; the federation control plane tries to pull the image but fails. This is the error I was getting.</p>
<p>To rectify it, build an fcp image of your own, push it to some repository, and use this image in the kubefed init command. Below are the instructions to be executed (run all of these commands from the path "$GOPATH/src/k8s.io/kubernetes/federation"):</p>
<p><strong>To create the fcp image and push it to a repository:</strong></p>
<pre><code>docker load -i _output/release-images/amd64/fcp-amd64.tar
docker tag gcr.io/google_containers/fcp-amd64:v1.9.0-alpha.2.60_430416309f9e58-dirty REGISTRY/REPO/IMAGENAME[:TAG]
docker push REGISTRY/REPO/IMAGENAME[:TAG]
</code></pre>
<p><strong>Now create a federation control plane with the following command:</strong></p>
<pre><code>_output/dockerized/bin/linux/amd64/kubefed init myfed --host-cluster-context=HOST_CLUSTER_CONTEXT --image=REGISTRY/REPO/IMAGENAME[:TAG] --dns-provider="PROVIDER" --dns-zone-name="YOUR_ZONE" --dns-provider-config=/path/to/provider.conf
</code></pre>
|
<p>I have a GitLab runner installed by Helm on GKE, and I have registered this runner.</p>
<p>When I trigger my pipelines, the runner fails with this error:</p>
<pre><code>Running with gitlab-runner 11.7.0 (8bb608ff)
on gitlab-runner-gitlab-runner-5bb7b68b87-wsbzf -xsPNg33
Using Kubernetes namespace: gitlab
Using Kubernetes executor with image docker ...
Waiting for pod gitlab/runner--xspng33-project-3-concurrent-0rsbpp to be running, status is Pending
Waiting for pod gitlab/runner--xspng33-project-3-concurrent-0rsbpp to be running, status is Pending
Waiting for pod gitlab/runner--xspng33-project-3-concurrent-0rsbpp to be running, status is Pending
Waiting for pod gitlab/runner--xspng33-project-3-concurrent-0rsbpp to be running, status is Pending
Waiting for pod gitlab/runner--xspng33-project-3-concurrent-0rsbpp to be running, status is Pending
Waiting for pod gitlab/runner--xspng33-project-3-concurrent-0rsbpp to be running, status is Pending
Running on runner--xspng33-project-3-concurrent-0rsbpp via gitlab-runner-gitlab-runner-5bb7b68b87-wsbzf...
Cloning into '/general/year-end-party/yep-web'...
Cloning repository...
fatal: unable to access 'https://gitlab-ci-token:[email protected]/general/year-end-party/yep-web.git/': SSL certificate problem: unable to get issuer certificate
/bin/bash: line 72: cd: /general/year-end-party/yep-web: No such file or directory
ERROR: Job failed: command terminated with exit code 1
</code></pre>
<p>I saw many solutions saying I could set ssl_verify to false.
But my runner is installed by Helm, and I didn't touch the runner's config.toml.
I don't know how I could solve this. Please help me.</p>
<p>I also added a cert for the runner:
<a href="https://i.stack.imgur.com/51wLt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/51wLt.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/wOug8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wOug8.png" alt="enter image description here"></a></p>
| <p>You need to create a Kubernetes Secret with the content of your certificate in the namespace of your gitlab-runner. The secret will be used to populate the /etc/gitlab-runner/certs directory in the gitlab-runner. </p>
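<p>A minimal sketch of creating such a secret (assuming the GitLab host is <code>gitlab.com</code> as in the error message, that the runner lives in the <code>gitlab</code> namespace shown in the job log, and that the file inside the secret is named <code>&lt;hostname&gt;.crt</code>):</p>
<pre><code>kubectl create secret generic gitlab-runner-certs \
  --namespace gitlab \
  --from-file=gitlab.com.crt=/path/to/gitlab.com.crt
</code></pre>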
<p>After that, you need to <a href="https://docs.gitlab.com/ee/install/kubernetes/gitlab_runner_chart.html#configuring-gitlab-runner-using-the-helm-chart" rel="nofollow noreferrer">reference</a> the secret name in the values.yaml of your Helm chart:</p>
<pre><code>## Set the certsSecretName in order to pass custom certficates for GitLab Runner to use
## Provide resource name for a Kubernetes Secret Object in the same namespace,
## this is used to populate the /etc/gitlab-runner/certs directory
## ref: https://docs.gitlab.com/runner/configuration/tls-self-signed.html#supported-options-for-self-signed-certificates
##
certsSecretName: <name_of_your_secret>
</code></pre>
<p>More info in the <a href="https://docs.gitlab.com/ee/install/kubernetes/gitlab_runner_chart.html#configuring-gitlab-runner-using-the-helm-chart" rel="nofollow noreferrer">gitlab</a> documentation.</p>
|
<p>Hi yet once again my beloved community. </p>
<p>My v0.33.1 minikube hangs on the "Starting VM..." step. I am using Windows 10 and a HyperV vm underneath. I am running my cluster with the following command:</p>
<pre><code>minikube start --kubernetes-version="v1.10.11" --memory 4096 --vm-driver hyperv --hyperv-virtual-switch "HyperV Switch"
</code></pre>
<p>and my Docker is:</p>
<pre><code>Version 2.0.0.3 (31259)
Channel: stable
Build: 8858db3
</code></pre>
<p>The VM underneath goes up but its CPU eventually falls down to 0% usage and it just stalls. Kubectl hangs as well.</p>
<p>I have already tried: </p>
<ol>
<li>Clearing out the Minikube cache under users/.../.minikube</li>
<li>Minikube Delete</li>
<li>Reinstall Minikube and Kubernetes CLI</li>
<li>Reinstall Docker</li>
<li>Meddling with the VM on the HyperV Host</li>
</ol>
| <p>Following the suggestion from Diego Mendes in the comments, I investigated the issue of the minikube machine getting an IPv6 address, which caused it to hang on startup.</p>
<p>I disabled <strong>IPv6</strong> on the <strong>Virtual Network Switch</strong> (this can be done from the <strong>Network and Sharing Center</strong> -> <strong>Adapter Settings</strong> -> Right Click relevant Switch and just find the relevant checkbox) but the VM would regardless fetch an <strong>IPv6</strong> address. </p>
<p>Since <strong>v18.02</strong>, <strong>Docker for Windows</strong> comes with an embedded Kubernetes cluster; this interferes with the minikube config, causing it to choke on having 2 clusters. The solution that fit my requirements was switching from minikube to the native Docker k8s cluster (the only major drawback is that you cannot specify the k8s version, but overall it makes the scripts simpler).</p>
<p>You will have to run:</p>
<ul>
<li><strong>minikube delete</strong></li>
</ul>
<p>Then change the kubernetes cluster context to point to the docker instance:</p>
<ul>
<li><strong>kubectl config use-context docker-for-desktop</strong></li>
</ul>
<p>And now you should be able to do all the operations you would normally do with <strong>kubectl</strong>.</p>
|
<ul>
<li><p>spark version: v2.4.0</p></li>
<li><p>eks info: v1.10.11-eks</p></li>
</ul>
<p>After submitting, I got the following error message:</p>
<blockquote>
<p>2019-02-21 15:08:44 WARN WatchConnectionManager:185 - Exec Failure: HTTP 403, Status: 403 - pods is forbidden: User "system:anonymous" cannot watch pods in the namespace "spark"
java.net.ProtocolException: Expected HTTP 101 response but was '403 Forbidden'</p>
<p>Exception in thread "main" io.fabric8.kubernetes.client.KubernetesClientException: pods is forbidden: User "system:anonymous" cannot watch pods in the namespace "spark"</p>
</blockquote>
| <p>You need to create a <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole" rel="nofollow noreferrer">Role</a> for the system:anonymous user to watch pods in your namespace, similar to the YAML below:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: spark # your namespace
  name: pod-reader # Role name will be needed for binding to user
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
</code></pre>
<p>Then you need to create a <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole" rel="nofollow noreferrer">RoleBinding</a> to bind this role to the <strong>system:anonymous</strong> user, similar to the YAML below:</p>
<pre><code># This role binding allows "system:anonymous" to read pods in the "spark" namespace.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: read-pods
  namespace: spark # your namespace
subjects:
- kind: User
  name: system:anonymous # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role #this must be Role or ClusterRole
  name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
</code></pre>
<p><a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#anonymous-requests" rel="nofollow noreferrer">Documentation</a> for more info about Anonymous requests </p>
|
<p>I'm trying to setup an Ingress to talk to MinIO on my cluster. </p>
<p>My MinIO service is called <code>storage</code>, and exposes port 9000:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  labels:
    app: storage
  name: storage
  namespace: abc
spec:
  ports:
  - name: "http-9000"
    protocol: TCP
    port: 9000
    targetPort: 9000
  type: NodePort
  selector:
    app: storage
</code></pre>
<p>And my Ingress is configured like so:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingress
  namespace: abc
spec:
  backend:
    serviceName: storage
    servicePort: http-9000
</code></pre>
<p>Yet its not responding to the health-checks, and instead my Ingress is redirecting me to a different service (which is actually listening on port 80). How do I get the ingress to redirect me to my <code>storage</code> service on 9000?</p>
| <p>Turns out the config was fine, it was just taking ages to propagate :D</p>
|
<p>Hi, I am trying to set up communication between a Mongo database and a Node.js application using Kubernetes. Everything is running fine, but I am unable to access my API from outside the environment. I am also not able to telnet the port.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: node
  labels:
    app: node
    tier: backend
spec:
  type: NodePort
  ports:
  - port: 3000
    nodePort: 30005
  externalIPs:
  - 34.73.154.127
  # # Replace with the IP of your minikube node / master node
  # selector:
  #   app: node
  #   tier: backend
</code></pre>
<p>This is my service YAML file.</p>
<p>When I check the status of the port using the command<br>
<code>sudo lsof -i:30005</code><br>
I am able to see the results below:</p>
<pre><code>COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
kube-prox 2925 root 8u IPv6 32281 0t0 TCP *:30005 (LISTEN)
</code></pre>
<p>Now I should be able to telnet the port with the IP, like<br>
<code>telnet 34.73.154.127 30005</code>, but I am getting the result below.</p>
<pre><code>Trying 34.73.154.127...
telnet: Unable to connect to remote host: Connection refused
</code></pre>
<p>If anyone is going to suggest that the port is not open, please note that I have opened the whole port range from anywhere.</p>
<p>One more thing I want to let you know is that I deployed a sample Node application natively using npm on port 30006 and I am able to telnet on that port. So the conclusion is that the whole port range is open and working.</p>
<p>This is the describe command result of the service<br>
<code>kubectl describe service/node</code>
result:</p>
<pre><code>Name: node
Namespace: default
Labels: app=node
tier=backend
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"node","tier":"backend"},"name":"node","namespace":"defau...
Selector: <none>
Type: NodePort
IP: 10.102.42.145
External IPs: 34.73.154.127
Port: <unset> 3000/TCP
TargetPort: 3000/TCP
NodePort: <unset> 30005/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>Please let me know what I am doing wrong.</p>
<pre><code>ENVIRONMENT:
cloud :google cloud platform
container :using docker and kubernetes
ubuntu 16.04 LTS
kubernetes 1.13.0
</code></pre>
| <p>Hi, I was making a silly mistake.<br>
I just uncommented this in my service YAML file and it started working:</p>
<pre><code># # Replace with the IP of your minikube node / master node
# selector:
# app: node
# tier: backend
</code></pre>
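<p>For reference, a sketch of the corrected Service with the selector uncommented (assuming the pods carry the <code>app: node</code> and <code>tier: backend</code> labels; the NodePort already exposes the service on every node, so <code>externalIPs</code> is optional):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: node
  labels:
    app: node
    tier: backend
spec:
  type: NodePort
  ports:
  - port: 3000
    nodePort: 30005
  selector:
    app: node
    tier: backend
</code></pre>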
|
<p>I'd like to pass env vars into the <code>exec</code> command.</p>
<p>So far I have tried the following</p>
<pre><code>SOME_VAR="A String"
kubectl exec -it a-pod-name -- sh -c 'env NEW_VAR=$SOME_VAR; echo $NEW_VAR > some-file-name.txt'
</code></pre>
<p>I realise I could use <code>kubectl cp</code> if I wanted to copy files, but that doesn't fit my use case.</p>
| <p>You need to put the command in double quotes (and escape the variable that should be expanded inside the pod), and it will work like the following:</p>
<pre><code>kubectl exec -it a-pod-name -- sh -c "NEW_VAR='$SOME_VAR'; echo \$NEW_VAR > some-file-name.txt"
</code></pre>
<p>The reason behind that is that the shell doesn't expand variables inside single quotes, so the double quotes are needed for <code>$SOME_VAR</code> to be expanded locally before the command is sent to the pod; the escaped <code>\$NEW_VAR</code> is then expanded by the shell inside the pod.</p>
|
<p>A simplified version of the code</p>
<pre><code>trap 'rm /tmp/not_done; echo "trap command got executed"; sleep 10' SIGTERM
wait_sigterm() {
touch /tmp/not_done
while [ -f /tmp/not_done ]
do
sleep 60 &
wait $!
done
}
wait_sigterm
</code></pre>
<p>How can I make sure trap command got executed?</p>
<p>One way would be if I can get access to the logs of the killed container and check for my echo message.</p>
<p>I tried getting the logs (kubectl -n namespace-name logs pod-name container-name) in a while loop, so that I would be able to pick the last written logs till the container is alive.</p>
<p>But the echo message was not present.
My guess is logs are not written once we kill the container or maybe something else.
Is there any way I can get the logs of a killed container or test the above scenario?</p>
| <p><strong>Approach 1</strong></p>
<p>If you want to check the logs of the previously terminated container from pod POD_NAME,
you can use the following command with the -p flag:</p>
<pre><code>kubectl logs POD_NAME -c CONTAINER_NAME -p
</code></pre>
<p>You can get further information, such as <strong>options and flags</strong>, with the following command:</p>
<pre><code>kubectl logs --help
</code></pre>
<p><strong>Approach 2</strong></p>
<p>Another approach is to read the termination message from the Status field of the Pod object.</p>
<p><code>kubectl get pod $POD_NAME -o yaml</code></p>
<p>check the field <strong>lastState</strong> of <strong>containerStatuses</strong>:</p>
<pre><code>status:
  conditions:
  containerStatuses:
  - containerID:
    image: registry.access.redhat.com/rhel7:latest
    imageID:
    lastState: {}
    state:
      running:
</code></pre>
<p>Here is more detail: <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/determine-reason-pod-failure/#writing-and-reading-a-termination-message" rel="nofollow noreferrer">reading-a-termination-message</a>.</p>
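<p>As a small addition, the last terminated state can also be pulled out directly with jsonpath (a sketch, assuming the container you care about is the first one in the list):</p>
<pre><code># whole lastState block of the first container
kubectl get pod $POD_NAME -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'
# just the termination message, if one was written
kubectl get pod $POD_NAME -o jsonpath='{.status.containerStatuses[0].lastState.terminated.message}'
</code></pre>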
|
<p>GKE uses the <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#kubenet" rel="nofollow noreferrer">kubenet</a> network plugin for setting up container interfaces and configures routes in the VPC so that containers can reach eachother on different hosts.</p>
<p>Wikipedia defines an <a href="https://en.wikipedia.org/wiki/Overlay_network" rel="nofollow noreferrer">overlay</a> as <code>a computer network that is built on top of another network</code>.</p>
<p>Should GKE's network model be considered an overlay network? It is built on top of another network in the sense that it relies on the connectivity between the nodes in the cluster to function properly, but the Pod IPs are natively routable within the VPC as the routes inform the network which node to go to to find a particular Pod.</p>
| <p>VPC-native and non-VPC-native GKE clusters both use <a href="https://cloud.google.com/vpc/docs/overview" rel="nofollow noreferrer">GCP virtual networking</a>. It is not strictly an overlay network by definition. An overlay network would be one that's isolated to just the GKE cluster.</p>
<p>VPC-native clusters work like this: </p>
<p>Each node VM is given a primary internal address and two alias IP ranges. One alias IP range is for pods and the other is for services.
The GCP subnet used by the cluster must have at least two secondary IP ranges (one for the pod alias IP range on the node VMs and the other for the services alias IP range on the node VMs).</p>
<p>Non-VPC-native clusters:</p>
<p>GCP creates custom static routes whose destinations match pod IP space and services IP space. The next hops of these routes are node VMs by name, so there is instance based routing that happens as a "next step" within each VM. </p>
<p>I could see where some might consider this to be an overlay network. I don’t believe this is the best definition because the pod and service IPs are addressable from other VMs, outside of GKE cluster, in the network.</p>
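<p>If you want to check which mode a given cluster is using, one way (an assumption on my part, not part of the original answer) is to look at the cluster's IP allocation policy:</p>
<pre><code># prints True for VPC-native (alias IP) clusters, nothing/False otherwise
gcloud container clusters describe <cluster-name> --zone <zone> \
  --format="value(ipAllocationPolicy.useIpAliases)"
</code></pre>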
<p>For a deeper dive on GCP’s network infrastructure, <a href="https://www.usenix.org/system/files/conference/nsdi18/nsdi18-dalton.pdf" rel="nofollow noreferrer">GCP’s network virtualization whitepaper</a> can be found <a href="https://www.usenix.org/system/files/conference/nsdi18/nsdi18-dalton.pdf" rel="nofollow noreferrer">here</a>.</p>
|
<p>I am following this <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">concept guide</a> on the kubernetes docs to connect to a service in a different namespace using the fully qualified domain name for the service.</p>
<p><strong>service.yml</strong></p>
<pre><code>---
# declare front service
kind: Service
apiVersion: v1
metadata:
  name: traefik-frontend-service
  namespace: traefik
spec:
  selector:
    k8s-app: traefik-ingress-lb
    tier: reverse-proxy
  ports:
    - port: 80
      targetPort: 8080
  type: NodePort
</code></pre>
<p><strong>ingress.yml</strong></p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui-ingress
  namespace: traefik
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.passHostHeader: "false"
    traefik.frontend.priority: "1"
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-frontend-service.traefik.svc.cluster.local
          servicePort: 80
</code></pre>
<p>But I keep getting this error:</p>
<blockquote>
<p>The Ingress "traefik-web-ui-ingress" is invalid: spec.rules[0].http.backend.serviceName: Invalid value: "traefik-frontend-service.traefik.svc.cluster.local": a DNS-1035 label must consist of lower case alphanumeric characters or '-', start with an alphabetic character, and end with an alphanumeric character (e.g. 'my-name', or 'abc-123', regex used for validation is 'a-z?')</p>
</blockquote>
<p>The service name of <code>traefik-frontend-service.traefik.svc.cluster.local</code>:</p>
<ul>
<li>starts with an alphanumeric character</li>
<li>ends with an alphanumeric character</li>
<li>only contains alphanumeric numbers or <code>-</code></li>
</ul>
<p>Not sure what I'm doing wrong here... <strong>unless a new ingress has to be created for each namespace</strong>.</p>
| <p>This is by design, to avoid cross-namespace exposure. This <a href="https://github.com/kubernetes/kubernetes/issues/17088" rel="nofollow noreferrer">thread</a> explains why this limitation on the Ingress specification was intentional.</p>
<p>That means the <strong>Ingress can only expose services within the same namespace</strong>.</p>
<p><strong><em>The values provided should be the service name, not the FQDN.</em></strong></p>
<p>If you really need to design this way, your other alternatives are:</p>
<ul>
<li>Expose Traefik as a LB Service and then create a data service to provide the routing rules to traefik.</li>
<li><p>Use <a href="https://github.com/heptio/contour/blob/master/docs/ingressroute.md#across-namespaces" rel="nofollow noreferrer">Contour Ingress (by heptio)</a> to delegate the routing to other namespaces.</p>
<p>Using Contour would be something like this:</p>
<pre><code># root.ingressroute.yaml
apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: namespace-delegation-root
  namespace: default
spec:
  virtualhost:
    fqdn: ns-root.bar.com
  routes:
    - match: /
      services:
        - name: s1
          port: 80
    # delegate the subpath, `/blog` to the IngressRoute object in the marketing namespace with the name `blog`
    - match: /blog
      delegate:
        name: blog
        namespace: marketing

------------------------------------------------------------

# blog.ingressroute.yaml
apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: blog
  namespace: marketing
spec:
  routes:
    - match: /blog
      services:
        - name: s2
          port: 80
</code></pre></li>
</ul>
|
<p>When accessing the k8s api endpoint (FQDN:6443), a successful retrieval will return a JSON object containing REST endpoint paths. I have a user who is granted cluster-admin privileges on the cluster who is able to successfully interact with that endpoint.</p>
<p>I've created a certificate for another user and granted them a subset of privileges in my cluster. The error I'm attempting to correct: They cannot access FQDN:6443, but instead get a 403 with a message that "User cannot get path /". I get the same behavior whether I specify as FQDN:6443/ or FQDN:6443 (no trailing slash). I've examined the privileges granted to cluster-admin role users, but have not recognized the gap. </p>
<p>Other behavior: They CAN access FQDN:6443/api, which I have not otherwise explicitly granted them, as well as the various endpoints I have explicitly granted. I believe they reach the api endpoint via the system:discovery role granted to the system:authenticated group. Also, if I attempt to interact with the cluster without a certificate, I am correctly identified as an anonymous user. If I interact with the cluster with a certificate whose user name does not match my rolebindings, I get the expected behaviors for all but the FQDN:6443 endpoint.</p>
| <p>I had a similar issue. I was trying to curl the base URL <a href="https://api_server_ip:6443" rel="noreferrer">https://api_server_ip:6443</a> with the correct certificates.</p>
<p>I got this error:</p>
<pre><code> {
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "forbidden: User \"kubernetes\" cannot get path \"/\"",
"reason": "Forbidden",
"details": {
},
"code": 403
}
</code></pre>
<p>It appears the system:discovery role doesn't grant access to the base URL <a href="https://api_server_ip:6443/" rel="noreferrer">https://api_server_ip:6443/</a>. The system:discovery role only gives access to the following paths:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:discovery
rules:
- nonResourceURLs:
  - /api
  - /api/*
  - /apis
  - /apis/*
  - /healthz
  - /openapi
  - /openapi/*
  - /swagger-2.0.0.pb-v1
  - /swagger.json
  - /swaggerapi
  - /swaggerapi/*
  - /version
  - /version/
  verbs:
  - get
</code></pre>
<p>No access to / granted. So I created the following ClusterRole which I called discover_base_url. It grants access to the / path:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: discover_base_url
rules:
- nonResourceURLs:
  - /
  verbs:
  - get
</code></pre>
<p>Then I created a ClusterRoleBinding binding the forbidden user "kubernetes" (it could be any user) to the above cluster role. The following is the yaml for the ClusterRoleBinding (replace "kubernetes" with your user):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: discover-base-url
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: discover_base_url
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: kubernetes
</code></pre>
<p>After creating these two resources, the curl request works:</p>
<pre><code>curl --cacert ca.pem --cert kubernetes.pem --key kubernetes-key.pem https://api_server_ip:6443
{
"paths": [
"/api",
"/api/v1",
"/apis",
"/apis/",
"/apis/admissionregistration.k8s.io",
"/apis/admissionregistration.k8s.io/v1beta1",
"/apis/apiextensions.k8s.io",
"/apis/apiextensions.k8s.io/v1beta1",
"/apis/apiregistration.k8s.io",
"/apis/apiregistration.k8s.io/v1",
"/apis/apiregistration.k8s.io/v1beta1",
"/apis/apps",
"/apis/apps/v1",
"/apis/apps/v1beta1",
"/apis/apps/v1beta2",
"/apis/authentication.k8s.io",
"/apis/authentication.k8s.io/v1",
"/apis/authentication.k8s.io/v1beta1",
"/apis/authorization.k8s.io",
"/apis/authorization.k8s.io/v1",
"/apis/authorization.k8s.io/v1beta1",
"/apis/autoscaling",
"/apis/autoscaling/v1",
"/apis/autoscaling/v2beta1",
"/apis/autoscaling/v2beta2",
"/apis/batch",
"/apis/batch/v1",
"/apis/batch/v1beta1",
"/apis/certificates.k8s.io",
"/apis/certificates.k8s.io/v1beta1",
"/apis/coordination.k8s.io",
"/apis/coordination.k8s.io/v1beta1",
"/apis/events.k8s.io",
"/apis/events.k8s.io/v1beta1",
"/apis/extensions",
"/apis/extensions/v1beta1",
"/apis/networking.k8s.io",
"/apis/networking.k8s.io/v1",
"/apis/policy",
"/apis/policy/v1beta1",
"/apis/rbac.authorization.k8s.io",
"/apis/rbac.authorization.k8s.io/v1",
"/apis/rbac.authorization.k8s.io/v1beta1",
"/apis/scheduling.k8s.io",
"/apis/scheduling.k8s.io/v1beta1",
"/apis/storage.k8s.io",
"/apis/storage.k8s.io/v1",
"/apis/storage.k8s.io/v1beta1",
"/healthz",
"/healthz/autoregister-completion",
"/healthz/etcd",
"/healthz/log",
"/healthz/ping",
"/healthz/poststarthook/apiservice-openapi-controller",
"/healthz/poststarthook/apiservice-registration-controller",
"/healthz/poststarthook/apiservice-status-available-controller",
"/healthz/poststarthook/bootstrap-controller",
"/healthz/poststarthook/ca-registration",
"/healthz/poststarthook/generic-apiserver-start-informers",
"/healthz/poststarthook/kube-apiserver-autoregistration",
"/healthz/poststarthook/rbac/bootstrap-roles",
"/healthz/poststarthook/scheduling/bootstrap-system-priority-classes",
"/healthz/poststarthook/start-apiextensions-controllers",
"/healthz/poststarthook/start-apiextensions-informers",
"/healthz/poststarthook/start-kube-aggregator-informers",
"/healthz/poststarthook/start-kube-apiserver-admission-initializer",
"/healthz/poststarthook/start-kube-apiserver-informers",
"/logs",
"/metrics",
"/openapi/v2",
"/swagger-2.0.0.json",
"/swagger-2.0.0.pb-v1",
"/swagger-2.0.0.pb-v1.gz",
"/swagger-ui/",
"/swagger.json",
"/swaggerapi",
"/version"
]
}
</code></pre>
|
<p>I just completely setup prometheus and grafana dashboard using this tutorial <a href="https://kubernetes.github.io/ingress-nginx/user-guide/monitoring/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/monitoring/</a>.</p>
<p>I tried to query something in Prometheus and it successfully plotted the graph. But when I access my Grafana dashboard connected to the Prometheus data source, it returns empty charts, like in the pic below.</p>
<p><a href="https://i.stack.imgur.com/WpQvU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WpQvU.png" alt="Grafana Result"></a></p>
<p>Do I miss something in the step?</p>
| <p>Probably you didn't create a data source in Grafana before importing the dashboard. It is not specified in the manual, but the dashboard will not work correctly without it.</p>
<p>How to create Data Source in Grafana:</p>
<ol>
<li>Open Configuration(gear) -> Data Sources</li>
<li>Press "Add data source"</li>
<li>Select Prometheus</li>
<li>Specify Prometheus server URL: (e.g: <a href="http://10.22.0.3:32150/" rel="nofollow noreferrer">http://10.22.0.3:32150/</a>)</li>
<li>Press "Save & Test"</li>
<li>See the confirmation about passed test.</li>
</ol>
<p>Now, select the existing data source from the drop-down list during the import of the Nginx Ingress dashboard from <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/grafana/dashboards/nginx.json" rel="nofollow noreferrer">JSON</a>. The URL to the dashboard didn't work for me, so I just copy-pasted the whole JSON content except the first two lines with comments.</p>
<p>For existing dashboard you can change the data source:</p>
<ol>
<li>Open Dashboards -> Manage</li>
<li>Click on "Nginx Ingress Controller"</li>
<li>Open its settings (gear picture on the top)</li>
<li>Select "JSON Model"</li>
<li>Update all lines with "datasource": "old_data_source_name", to the desired name</li>
<li>Press "Save changes"</li>
<li>Press green "Save" button on the left, under "Settings"</li>
</ol>
<p>Alternatively, you can edit every element on the dashboard and select the desired data source from the drop-down list. That is not a very convenient way, so I'd prefer to import the dashboard again.</p>
|
<p>We have a bunch of services that run off of the same Docker image: some long running services, some cron jobs, and a webservice. </p>
<p>I'm wondering what the current best practice here is? I essentially want some basic templating for reusing an image and its config, keeping all of them at the same revision (so sirensoftitan-image:{gitsha1hash} is used where gitsha1hash isn't repeated everywhere). </p>
<p>Should I be using a helm chart? Kustomize? Some other type of yaml templating? I want something light with as little added complexity as possible. </p>
| <p>I found Helm charts heavy compared to kustomize. Give kustomize a try; it is very simple and easy to use.
You can deploy the same template to different environments by adding new labels and updating the deployment objects' names with an environment-specific prefix, so you can have a unique naming convention for each environment (see the sketch below).</p>
<p>Moreover, it uses the YAML format, which makes it easy to learn and adopt.
All custom configuration goes into one YAML file, unlike Helm, where you manage multiple files. I personally like kustomize as it is simple and flexible and, not least, comes from the Google community. Give it a try.</p>
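<p>A minimal sketch of what that can look like (the file names and label are assumptions; the image name and tag placeholder are taken from the question):</p>
<pre><code># kustomization.yaml
namePrefix: staging-          # environment-specific prefix for object names
commonLabels:
  app: sirensoftitan
images:
- name: sirensoftitan-image
  newTag: "{gitsha1hash}"     # pin every workload to the same image revision
resources:
- deployment.yaml
- cronjob.yaml
- service.yaml
</code></pre>
<p>Then <code>kustomize build .</code> (or <code>kubectl apply -k .</code> on newer kubectl versions) renders all of those manifests with the same image revision.</p>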
|
<p><strong>Edit</strong> This question used to be focused on the Kubernetes side of things. It is now apparent that the problem is at the Digitalocean Loadbalancer level.</p>
<p>I'm in the process of moving our service from Docker Swarm to a Kubernetes setup. The new K8S environment is up and running and I am starting to switch over traffic to the new K8S setup. However, when the traffic seems to be ramping up, it slows to a halt. The browser just spins for a while and then it loads snappily. </p>
<p>Running a simple <code>curl -vvv https://thehostname.com</code> and this happens</p>
<pre><code>* Trying 12.123.123.123...
* Connected to thehostname.com (12.123.123.123) port 443 (#0)
* found 148 certificates in /etc/ssl/certs/ca-certificates.crt
* found 594 certificates in /etc/ssl/certs
* ALPN, offering http/1.1
</code></pre>
<p>Then it pauses on that line for a while and after around 30 seconds it loads the rest of the request.</p>
<p>The symptom is that when the amount of traffic increases, the response time increases. It starts at 0.5 seconds, and then steadily increases to 30 seconds and is caped there. When I turn off traffic, the response time goes back to normal. The number of requests per second isn't more than 20-30 at most when this starts happening.</p>
<p>It seems that the act of opening a TCP connection is the slow part. I'm in contact with DigitalOcean support, but so far it has not yielded anything as it probably needs to be escalated.</p>
| <p>The issue is most likely with the number of certificates in /etc/ssl/certs.
It says that 594 certs were found. Do you really need all of them? Validate them and remove the unwanted ones. Also try to copy all certs into one file instead of maintaining one file for each cert.</p>
|
<p>I know there are multiple different solutions to do what I am looking for, but I am looking for a/the proper way to perform some requests in parallel. I am new to Go but it feels cumbersome what I am doing at the moment.</p>
<p><strong>Use case:</strong></p>
<p>I need to query 4 different REST endpoints (kubernetes client requests) in parallel. Once I got all these 4 results I need to do some processing.</p>
<p><strong>My problem:</strong></p>
<p>I know I need to use go routines for that, but what's the best way to collect the results. What I am doing at the moment (see code sample below) is probably very cumbersome, but I am not sure what else I could do to improve the code.</p>
<p><strong>Code:</strong></p>
<p>This code is the easiest to understand but I don't want to perform the requests in sequence:</p>
<pre><code>// Get node resource usage metrics
nodeMetricsList, err := c.kubernetesClient.NodeMetricses()
if err != nil {
log.Warn("Failed to get node usage list from Kubernetes", err)
return err
}
// Get pod resource usage metrics
podMetricsList, err := c.kubernetesClient.PodMetricses()
if err != nil {
log.Warn("Failed to get pod usage list from Kubernetes", err)
return err
}
</code></pre>
<p>This is how I would run the requests in parallel. This makes the code so much less readable and it feels very cumbersome to implement too:</p>
<pre><code>var nodeMetricsList *v1beta1.NodeMetricsList
var nodeMetricsListError error
var podMetricsList *v1beta1.PodMetricsList
var podMetricsListError error
go func() {
nodeMetricsList, nodeMetricsListError = c.kubernetesClient.NodeMetricses()
}()
if nodeMetricsListError != nil {
log.Warn("Failed to get podList from Kubernetes", err)
return err
}
// Get pod resource usage metrics
go func() {
podMetricsList, podMetricsListError = c.kubernetesClient.PodMetricses()
}()
if podMetricsListError != nil {
log.Warn("Failed to get pod usage list from Kubernetes", err)
return err
}
</code></pre>
<p>What's the proper way to perform the requests in my given example in parallel?</p>
| <p>Your code has 2 race conditions, and likely will never correctly report an error.</p>
<p>You need to wait for the goroutines to complete before you can read the values they operate on, which is easily done with a <code>sync.WaitGroup</code> like so:</p>
<pre><code>var nodeMetricsList *v1beta1.NodeMetricsList
var podMetricsList *v1beta1.PodMetricsList
var nodeMetricsListError, podMetricsListError error
var wg sync.WaitGroup
// Get node resource usage metrics
wg.Add(1)
go func() {
defer wg.Done()
nodeMetricsList, nodeMetricsListError = c.kubernetesClient.NodeMetricses()
}()
// Get pod resource usage metrics
wg.Add(1)
go func() {
defer wg.Done()
podMetricsList, podMetricsListError = c.kubernetesClient.PodMetricses()
}()
wg.Wait()
if nodeMetricsListError != nil {
    log.Warn("Failed to get node usage list from Kubernetes", nodeMetricsListError)
    return nodeMetricsListError
}
if podMetricsListError != nil {
    log.Warn("Failed to get pod usage list from Kubernetes", podMetricsListError)
    return podMetricsListError
}
// both lists are now safe to use for further processing
</code></pre>
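<p>As a side note, if you would rather not juggle separate error variables, a common alternative is the <code>golang.org/x/sync/errgroup</code> package, which waits for all goroutines and hands you the first non-nil error. The sketch below keeps the <code>c.kubernetesClient</code> calls and <code>log.Warn</code> from your snippet, so treat it as an outline rather than a drop-in replacement:</p>

<pre><code>// import "golang.org/x/sync/errgroup"
var nodeMetricsList *v1beta1.NodeMetricsList
var podMetricsList *v1beta1.PodMetricsList

var g errgroup.Group

// Each Go call runs in its own goroutine; Wait blocks until both return
// and yields the first non-nil error, if any.
g.Go(func() error {
    var err error
    nodeMetricsList, err = c.kubernetesClient.NodeMetricses()
    return err
})
g.Go(func() error {
    var err error
    podMetricsList, err = c.kubernetesClient.PodMetricses()
    return err
})

if err := g.Wait(); err != nil {
    log.Warn("Failed to get usage metrics from Kubernetes", err)
    return err
}
// nodeMetricsList and podMetricsList are now safe to use.
</code></pre>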
|
<p>I have got a Google Cloud IAM service account key file (in json format) that contains below data.</p>
<pre><code>{
"type": "service_account",
"project_id": "****",
"private_key_id":"****",
"private_key": "-----BEGIN PRIVATE KEY----blah blah -----END PRIVATE KEY-----\n",
"client_email": "*****",
"client_id": "****",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth/v1/certs",
"client_x509_cert_url": "****"
}
</code></pre>
<p>I can use this service account to access kubernetes API server by passing this key file to kube API client libraries. </p>
<p>But I'm not finding any way to pass this service account to the kubectl binary so that kubectl is authenticated to the project this service account was created for.</p>
<p>Is there any way to make kubectl use this service account file for authentication?</p>
| <p>This answer provides some guidance: <a href="https://stackoverflow.com/questions/48400966/access-kubernetes-gke-cluster-outside-of-gke-cluster-with-client-go/48412272#48412272">Access Kubernetes GKE cluster outside of GKE cluster with client-go?</a> but it's not complete.</p>
<p>You need to do two things:</p>
<ol>
<li><p>set the <code>GOOGLE_APPLICATION_CREDENTIALS</code> environment variable to the path of your JSON key file for the IAM service account, and use <code>kubectl</code> while this variable is set; you should then be authenticated with the token.</p></li>
<li><p>(this may be optional, not sure) Create a custom <code>KUBECONFIG</code> that only contains your cluster IP and CA certificate, save this file, and use it to connect to the cluster.</p></li>
</ol>
<p>Step 2 looks like this:</p>
<pre><code>cat > kubeconfig.yaml <<EOF
apiVersion: v1
kind: Config
current-context: cluster-1
contexts: [{name: cluster-1, context: {cluster: cluster-1, user: user-1}}]
users: [{name: user-1, user: {auth-provider: {name: gcp}}}]
clusters:
- name: cluster-1
cluster:
server: "https://$(eval "$GET_CMD --format='value(endpoint)'")"
certificate-authority-data: "$(eval "$GET_CMD --format='value(masterAuth.clusterCaCertificate)'")"
EOF
</code></pre>
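<p>One thing to watch: the snippet above references a <code>GET_CMD</code> variable that is not defined here. Presumably it is something along these lines (the cluster name and zone are placeholders you would replace with your own):</p>
<pre><code>GET_CMD="gcloud container clusters describe <your-cluster-name> --zone=<your-zone>"
</code></pre>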
<p>So with this, you should do</p>
<pre><code>export GOOGLE_APPLICATION_CREDENTIALS=<path-to-key.json>
export KUBECONFIG=kubeconfig.yaml
kubectl get nodes
</code></pre>
|
<p>I'm trying to deploy a postgres service to google cloud kubernetes with a persistent volume and persistent volume claim to provide storage for my application.</p>
<p>When I deploy, the pod gets stuck in a <code>CrashLoopBackOff</code>.</p>
<p>One of the pod's events fails with the message:</p>
<p><code>Error: failed to start container "postgres": Error response from daemon: error while creating mount source path '/data/postgres-pv': mkdir /data: read-only file system</code></p>
<p>This is the yaml I am trying to deploy using kubectl:</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: postgres-pv
labels:
type: local
app: postgres
spec:
capacity:
storage: 5Gi
storageClassName: standard
accessModes:
- ReadWriteOnce
hostPath:
path: /data/postgres-pv
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: postgres-pvc
labels:
type: local
app: postgres
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
volumeName: postgres-pv
---
apiVersion: v1
kind: Secret
metadata:
name: postgres-credentials
type: Opaque
data:
user: YWRtaW4=
password: password==
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: postgres-deployment
spec:
replicas: 1
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres-container
image: postgres:9.6.6
env:
- name: POSTGRES_USER
valueFrom:
secretKeyRef:
name: postgres-credentials
key: user
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-credentials
key: password
- name: POSTGRES_DB
value: kubernetes_django
ports:
- containerPort: 5432
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgres-volume-mount
volumes:
- name: postgres-volume-mount
persistentVolumeClaim:
claimName: postgres-pvc
---
apiVersion: v1
kind: Service
metadata:
name: postgres
labels:
app: postgres
spec:
ports:
- protocol: TCP
port: 5432
targetPort: 5432
selector:
app: postgres
</code></pre>
<p>Nothing fails to deploy, but the pod gets stuck in a CrashLoopBackOff.</p>
<p>Thanks for the help!</p>
| <p>I had the same problem. Following <a href="https://medium.com/@markgituma/kubernetes-local-to-production-with-django-1-introduction-d73adc9ce4b4" rel="nofollow noreferrer">this tutorial</a> I got it all to work with <code>minikube</code> but it gave the same error on GCP. </p>
<p>As mentioned by Patrick W, <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes" rel="nofollow noreferrer">the docs</a> say: </p>
<blockquote>
<h1>Types of Persistent Volumes</h1>
<ul>
<li>...</li>
<li>HostPath (Single node testing only – local
storage is not supported in any way and WILL NOT WORK in a multi-node
cluster)</li>
<li>...</li>
</ul>
</blockquote>
<p>To solve this, I found a solution in the kubernetes <a href="https://docs.okd.io/latest/install_config/persistent_storage/persistent_storage_gce.html" rel="nofollow noreferrer">docs</a></p>
<p>You'll first have to create a <code>gcePersistentDisk</code>:</p>
<pre><code>gcloud compute disks create --size=[SIZE] --zone=[ZONE] [DISK_NAME]
</code></pre>
<p>and then the configuration as described in the link should do the trick:</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: postgres-pv
labels:
type: local
spec:
capacity:
storage: 4Gi
storageClassName: standard
accessModes:
- ReadWriteMany
gcePersistentDisk:
    pdName: data-disk
fsType: ext4
    readOnly: false  # the Postgres data directory needs write access to this disk
</code></pre>
|
<p>I want to execute equivalent of</p>
<p><code>kubectl get all -l app=myapp -n mynamespace</code></p>
<p>or</p>
<p><code>kubectl label all -l version=1.2.0,app=myapp track=stable --overwrite</code></p>
<p>using client-go</p>
<p>I looked at <a href="https://github.com/kubernetes/client-go/blob/master/dynamic" rel="noreferrer">dynamic</a> package, but it seems like it needs <code>GroupVersionResource</code>, which is different for, say, Service objects and Deployment objects. Also when I pass <code>schema.GroupVersionResource{Group: "apps", Version: "v1"}</code> it doesn't find anything, when I pass <code>schema.GroupVersionResource{Version: "v1"}</code> it finds only namespace object and also doesn't looks for labels, though I provided label options:</p>
<pre><code>resource := schema.GroupVersionResource{Version: "v1"}
listOptions := metav1.ListOptions{LabelSelector: fmt.Sprintf("app=%s", AppName), FieldSelector: ""}
res, listErr := dynamicClient.Resource(resource).Namespace("myapps").List(listOptions)
</code></pre>
<p>I also looked at the runtime package, but didn't find anything useful. I took a look at how <code>kubectl</code> implements this, but haven't figured it out yet: too many levels of abstraction.</p>
| <p>You can't list "all objects" with one call.</p>
<p>Unfortunately the way Kubernetes API is architected is via API groups, which have multiple APIs under them.</p>
<p>So you need to:</p>
<ol>
<li>Query all API groups (<code>apiGroup</code>)</li>
<li>Visit each API group to see what APIs (<code>kind</code>) it exposes.</li>
<li>Actually query that <code>kind</code> to get all the objects (here you may actually filter the list query with the label).</li>
</ol>
<p>Fortunately, <code>kubectl api-versions</code> and <code>kubectl api-resources</code> commands do these.</p>
<p>So to learn how kubectl finds all "kinds" of API resources, run:</p>
<pre><code>kubectl api-resources -v=6
</code></pre>
<p>and you'll see kubectl making calls like:</p>
<ul>
<li><code>GET https://IP/api</code></li>
<li><code>GET https://IP/apis</code></li>
<li>then it visits every api group:
<ul>
<li><code>GET https://IP/apis/metrics.k8s.io/v1beta1</code></li>
<li><code>GET https://IP/apis/storage.k8s.io/v1</code></li>
<li>...</li>
</ul>
</li>
</ul>
<p>So if you're trying to clone this behavior with client-go, you should use the same API calls, <strike>or better, just write a script that shells out to <code>kubectl api-resources -o=json</code> and build around that.</strike></p>
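<p>For reference, a rough sketch of that flow with client-go's discovery and dynamic clients could look like the following. Treat it as an illustration rather than a drop-in solution: error handling is minimal, the namespace and label selector are placeholders, and newer client-go releases also expect a <code>context.Context</code> as the first argument to <code>List</code>:</p>

<pre><code>import (
    "fmt"
    "strings"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime/schema"
    "k8s.io/client-go/discovery"
    "k8s.io/client-go/dynamic"
    "k8s.io/client-go/rest"
)

// listAllByLabel walks every preferred API resource and lists the namespaced,
// listable kinds matching the given label selector.
func listAllByLabel(config *rest.Config, namespace, selector string) error {
    discoveryClient, err := discovery.NewDiscoveryClientForConfig(config)
    if err != nil {
        return err
    }
    dynamicClient, err := dynamic.NewForConfig(config)
    if err != nil {
        return err
    }

    resourceLists, err := discoveryClient.ServerPreferredResources()
    if err != nil {
        return err
    }

    opts := metav1.ListOptions{LabelSelector: selector}
    for _, list := range resourceLists {
        gv, err := schema.ParseGroupVersion(list.GroupVersion)
        if err != nil {
            continue
        }
        for _, r := range list.APIResources {
            // Skip subresources (e.g. "deployments/status"), cluster-scoped
            // kinds and anything that cannot be listed.
            if strings.Contains(r.Name, "/") || !r.Namespaced || !hasVerb(r.Verbs, "list") {
                continue
            }
            gvr := gv.WithResource(r.Name)
            objs, err := dynamicClient.Resource(gvr).Namespace(namespace).List(opts)
            if err != nil {
                continue
            }
            for _, item := range objs.Items {
                fmt.Printf("%s %s/%s\n", gvr.String(), item.GetNamespace(), item.GetName())
            }
        }
    }
    return nil
}

func hasVerb(verbs []string, verb string) bool {
    for _, v := range verbs {
        if v == verb {
            return true
        }
    }
    return false
}
</code></pre>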
<p>If you aren't required to use client-go, there's a <a href="https://github.com/corneliusweig/ketall" rel="nofollow noreferrer">kubectl plugin called <code>get-all</code></a> which exists to do this task.</p>
|
<p>I've installed Istio 1.1 RC on a fresh GKE cluster, using Helm, and enabled mTLS (some options omitted like Grafana and Kiali):</p>
<pre><code>helm template istio/install/kubernetes/helm/istio \
--set global.mtls.enabled=true \
--set global.controlPlaneSecurityEnabled=true \
--set istio_cni.enabled=true \
--set istio-cni.excludeNamespaces={"istio-system"} \
--name istio \
--namespace istio-system >> istio.yaml
kubectl apply -f istio.yaml
</code></pre>
<p>Next, I installed the Bookinfo example app like this:</p>
<pre><code>kubectl label namespace default istio-injection=enabled
kubectl apply -f istio/samples/bookinfo/platform/kube/bookinfo.yaml
kubectl apply -f istio/samples/bookinfo/networking/bookinfo-gateway.yaml
kubectl apply -f istio/samples/bookinfo/networking/destination-rule-all-mtls.yaml
</code></pre>
<p>Then I've gone about testing by following the examples at: <a href="https://istio.io/docs/tasks/security/mutual-tls/" rel="nofollow noreferrer">https://istio.io/docs/tasks/security/mutual-tls/</a> </p>
<p>My results show the config is incorrect, but the verification guide above doesn't provide any hints about how to fix or diagnose issues. Here's what I see:</p>
<pre><code>istio/bin/istioctl authn tls-check productpage.default.svc.cluster.local
Stderr when execute [/usr/local/bin/pilot-discovery request GET /debug/authenticationz ]: gc 1 @0.015s 6%: 0.016+1.4+1.0 ms clock, 0.064+0.31/0.45/1.6+4.0 ms cpu, 4->4->1 MB, 5 MB goal, 4 P
gc 2 @0.024s 9%: 0.007+1.4+1.0 ms clock, 0.029+0.15/1.1/1.1+4.3 ms cpu, 4->4->2 MB, 5 MB goal, 4 P
HOST:PORT STATUS SERVER CLIENT AUTHN POLICY DESTINATION RULE
productpage.default.svc.cluster.local:9080 OK mTLS mTLS default/ default/istio-system
</code></pre>
<p>This appears to show that mTLS is OK. And the previous checks all pass, like checking the cachain is present, etc. The above check passes for all the bookinfo components.</p>
<p>However, the following checks show an issue:</p>
<pre><code>1: Confirm that plain-text requests fail as TLS is required to talk to httpbin with the following command:
kubectl exec $(kubectl get pod -l app=productpage -o jsonpath={.items..metadata.name}) -c istio-proxy -- curl http://productpage:9080/productpage -o /dev/null -s -w '%{http_code}\n'
200 <== Error. Should fail.
2: Confirm TLS requests without client certificate also fail:
kubectl exec $(kubectl get pod -l app=productpage -o jsonpath={.items..metadata.name}) -c istio-proxy -- curl https://productpage:9080/productpage -o /dev/null -s -w '%{http_code}\n' -k
000 <=== Correct behaviour
command terminated with exit code 35
3: Confirm TLS request with client certificate succeed:
kubectl exec $(kubectl get pod -l app=productpage -o jsonpath={.items..metadata.name}) -c istio-proxy -- curl https://productpage:9080/productpage -o /dev/null -s -w '%{http_code}\n' --key /etc/certs/key.pem --cert /etc/certs/cert-chain.pem --cacert /etc/certs/root-cert.pem -k
000 <=== Incorrect. Should succeed.
command terminated with exit code 35
</code></pre>
<p>What else can I do to debug my installation? I've followed the installation process quite carefully. Here's my cluster info:</p>
<pre><code>Kubernetes master is running at https://<omitted>
calico-typha is running at https://<omitted>/api/v1/namespaces/kube-system/services/calico-typha:calico-typha/proxy
GLBCDefaultBackend is running at https://<omitted>/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
Heapster is running at https://<omitted>/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://<omitted>/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://<omitted>/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
Kubernetes version: 1.11.7-gke.4
</code></pre>
<p>I guess I'm after either a more comprehensive guide or some specific things I can check.</p>
<p>Edit: Additional info:
Pod status:</p>
<pre><code>default ns:
NAME READY STATUS RESTARTS AGE
details-v1-68868454f5-l7srt 2/2 Running 0 3h
productpage-v1-5cb458d74f-lmf7x 2/2 Running 0 2h
ratings-v1-76f4c9765f-ttstt 2/2 Running 0 2h
reviews-v1-56f6855586-qszpm 2/2 Running 0 2h
reviews-v2-65c9df47f8-ztrss 2/2 Running 0 3h
reviews-v3-6cf47594fd-hq6pc 2/2 Running 0 2h
istio-system ns:
NAME READY STATUS RESTARTS AGE
grafana-7b46bf6b7c-2qzcv 1/1 Running 0 3h
istio-citadel-5bf5488468-wkmvf 1/1 Running 0 3h
istio-cleanup-secrets-release-1.1-latest-daily-zmw7s 0/1 Completed 0 3h
istio-egressgateway-cf8d6dc69-fdmw2 1/1 Running 0 3h
istio-galley-5bcd455cbb-7wjkl 1/1 Running 0 3h
istio-grafana-post-install-release-1.1-latest-daily-vc2ff 0/1 Completed 0 3h
istio-ingressgateway-68b6767bcb-65h2d 1/1 Running 0 3h
istio-pilot-856849455f-29nvq 2/2 Running 0 2h
istio-policy-5568587488-7lkdr 2/2 Running 2 3h
istio-sidecar-injector-744f68bf5f-h22sp 1/1 Running 0 3h
istio-telemetry-7ffd6f6d4-tsmxv 2/2 Running 2 3h
istio-tracing-759fbf95b7-lc7fd 1/1 Running 0 3h
kiali-5d68f4c676-qrxfd 1/1 Running 0 3h
prometheus-c4b6997b-6d5k9 1/1 Running 0 3h
</code></pre>
<p>Example destinationrule:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
creationTimestamp: "2019-02-21T15:15:09Z"
generation: 1
name: productpage
namespace: default
spec:
host: productpage
subsets:
- labels:
version: v1
name: v1
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
</code></pre>
| <p>If you are using Istio 1.1 RC, you should be looking at the docs at <a href="https://preliminary.istio.io/" rel="nofollow noreferrer">https://preliminary.istio.io/</a> instead of <a href="https://istio.io/" rel="nofollow noreferrer">https://istio.io/</a>. The preliminary.istio.io site is always the working copy of the docs, corresponding to the next to be Istio release (1.1 currently).</p>
<p>That said, those docs are currently changing a lot day-to-day as they are being cleaned up and corrected during final testing before 1.1 is released, probably in the next couple of weeks.</p>
<p>A possible explanation for the plain text http request returning 200 in your test is that you may be running with <a href="https://preliminary.istio.io/docs/concepts/security/#permissive-mode" rel="nofollow noreferrer">permissive mode</a>.</p>
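<p>To confirm whether permissive mode is what you are seeing, you can inspect the mesh-wide authentication policy. The commands below are only a sketch based on the Istio 1.0/1.1 authentication API, so adjust them to your install:</p>

<pre><code># Show the mesh-wide policy; look for "mode: PERMISSIVE" under peers/mtls
kubectl get meshpolicy default -o yaml

# A strict mesh-wide mTLS policy looks like this:
apiVersion: authentication.istio.io/v1alpha1
kind: MeshPolicy
metadata:
  name: default
spec:
  peers:
  - mtls: {}
</code></pre>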
|
<p>When I use helm, It creates a <code>.helm</code> folder in my home directory. Is it important? Should it be committed to source control? Poking around I only see cache related information, that makes me think the whole folder can be deleted. And helm will re-create it if needed. Am I wrong in this thinking? Or is there something important in those folders that is worth putting into source control?</p>
| <p>In simple terms, no.</p>
<p>The <code>.helm</code> directory contains user specific data that will depend on the version of helm, the OS being used, the layout of the user’s system.</p>
<p>However, the main reason to not add it is that it also can contain TLS secrets which would then be disclosed to other users. Worse, if you use Git, these secrets would remain in the history and would be hard to remove, even if you deleted the <code>.helm</code> directory at a later date.</p>
|
<p>I am using Ansible version 2.7 for Kubernetes deployment.
For sending logs to Datadog on Kubernetes, one of the ways is to configure annotations like the one below:</p>
<pre><code>template:
metadata:
annotations:
ad.datadoghq.com/nginx.logs: '[{"source":"nginx","service":"webapp"}]'
</code></pre>
<p>this works fine and I could see logs in DataDog.</p>
<p>However, I would like to achieve the above configuration via an Ansible deployment on Kubernetes, for which I have used the code below:</p>
<pre><code> template:
metadata:
annotations:
ad.datadoghq.com/xxx.logs: "{{ lookup('template', './datadog.json.j2')}}"
</code></pre>
<p>and datadog.json.j2 looks like below</p>
<pre><code>'[{{ '{' }}"source":"{{ sourcea }}"{{ ',' }} "service":"{{ serviceb }}"{{ '}' }}]' **--> sourcea and serviceb are defined as vars**
</code></pre>
<p>However, the resulting config on deployment is as below:</p>
<pre><code>template:
metadata:
annotations:
ad.datadoghq.com/yps.logs: |
'[{"source":"test", "service":"test"}]'
</code></pre>
<p>and this config does not allow the Datadog agent to parse logs, failing with the error below:</p>
<pre><code>[ AGENT ] 2019-xx-xx xx10:50 UTC | ERROR | (kubelet.go:97 in parseKubeletPodlist) | Can't parse template for pod xxx-5645f7c66c-s9zj4: could not extract logs config: in logs: invalid character '\'' looking for beginning of value
</code></pre>
<p>If I use Ansible code as below (using replace):</p>
<pre><code>template:
metadata:
annotations:
ad.datadoghq.com/xxx.logs: "{{ lookup('template', './datadog.json.j2', convert_data=False) | string | replace('\n','')}}"
</code></pre>
<p>it generates deployment config as below</p>
<pre><code>template:
metadata:
annotations:
ad.datadoghq.com/yps.logs: '''[{"source":"test", "service":"test"}]'''
creationTimestamp: null
labels:
</code></pre>
<p>Which also fails.</p>
<p>To get a working config with Ansible, I have to either remove the leading pipe (|) or the three quotes that appear when using replace.</p>
<p>I would like to have Jinja variable substitution in place so that I can configure the deployment with the desired source and service at deployment time.</p>
<p>kindly suggest</p>
| <p>By introducing a space at the start of the datadog.json.j2 template definition, i.e.</p>
<pre><code> [{"source":"{{ sourcea }}"{{ ',' }} "service":"{{ serviceb }}"}] (space at start)
</code></pre>
<p>and running the deployment again, I got the working config below:</p>
<pre><code>template:
metadata:
annotations:
ad.datadoghq.com/yps.logs: ' [{"source":"test", "service":"test"}]'
</code></pre>
<p>However, I am not able to fully explain the behaviour. My best guess is that the original template included the surrounding single quotes as literal text, so the rendered annotation value itself contained single-quote characters, which the Datadog agent's JSON parser rejected (hence the <code>invalid character '\''</code> error); the reworked template drops those quotes, and the leading space simply makes the value render as a quoted scalar on a single line. If anyone can confirm or correct this, please do.</p>
|
<p>I have a kubernetes cluster (v 1.9.0) in which I deployed 3 zookeeper pods (working correctly) and I want to have 3 kafka replicas.</p>
<p>The following statefulset works only if I comment the readinessProbe section.</p>
<pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: kafka
spec:
selector:
matchLabels:
app: kafka
serviceName: kafka
replicas: 3
template:
metadata:
labels:
app: kafka
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "app"
operator: In
values:
- kafka
topologyKey: "kubernetes.io/hostname"
podAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
labelSelector:
matchExpressions:
- key: "app"
operator: In
values:
- zookeeper
topologyKey: "kubernetes.io/hostname"
terminationGracePeriodSeconds: 30
containers:
- name: kafka
imagePullPolicy: IfNotPresent
image: sorintdev/kafka:20171204a
resources:
requests:
memory: 500Mi
cpu: 200m
ports:
- containerPort: 9092
name: server
command:
- sh
- -c
- "exec kafka-server-start.sh /opt/kafka/config/server.properties --override broker.id=${HOSTNAME##*-} \
--override listeners=PLAINTEXT://:9092 \
--override zookeeper.connect=zookeeper-0.zookeeper:2181,zookeeper-1.zookeeper:2181,zookeeper-2.zookeeper:2181 \
--override log.dirs=/var/lib/kafka \
--override auto.create.topics.enable=true \
--override auto.leader.rebalance.enable=true \
--override background.threads=10 \
--override broker.id.generation.enable=true \
--override compression.type=producer \
--override delete.topic.enable=false \
--override leader.imbalance.check.interval.seconds=300 \
--override leader.imbalance.per.broker.percentage=10 \
--override log.flush.interval.messages=9223372036854775807 \
--override log.flush.offset.checkpoint.interval.ms=60000 \
--override log.flush.scheduler.interval.ms=9223372036854775807 \
--override log.message.format.version=1.0 \
--override log.retention.bytes=-1 \
--override log.retention.hours=168 \
--override log.roll.hours=168 \
--override log.roll.jitter.hours=0 \
--override log.segment.bytes=1073741824 \
--override log.segment.delete.delay.ms=60000 \
--override message.max.bytes=1000012 \
--override min.insync.replicas=1 \
--override num.io.threads=8 \
--override num.network.threads=3 \
--override num.recovery.threads.per.data.dir=1 \
--override num.replica.fetchers=1 \
--override offset.metadata.max.bytes=4096 \
--override offsets.commit.required.acks=-1 \
--override offsets.commit.timeout.ms=5000 \
--override offsets.load.buffer.size=5242880 \
--override offsets.retention.check.interval.ms=600000 \
--override offsets.retention.minutes=1440 \
--override offsets.topic.compression.codec=0 \
--override offsets.topic.num.partitions=50 \
--override offsets.topic.replication.factor=1 \
--override offsets.topic.segment.bytes=104857600 \
--override queued.max.requests=500 \
--override quota.consumer.default=9223372036854775807 \
--override quota.producer.default=9223372036854775807 \
--override request.timeout.ms=30000 \
--override socket.receive.buffer.bytes=102400 \
--override socket.request.max.bytes=104857600 \
--override socket.send.buffer.bytes=102400 \
--override unclean.leader.election.enable=true \
--override connections.max.idle.ms=600000 \
--override controlled.shutdown.enable=true \
--override controlled.shutdown.max.retries=3 \
--override controlled.shutdown.retry.backoff.ms=5000 \
--override controller.socket.timeout.ms=30000 \
--override default.replication.factor=1 \
--override fetch.purgatory.purge.interval.requests=1000 \
--override group.max.session.timeout.ms=300000 \
--override group.min.session.timeout.ms=6000 \
--override inter.broker.protocol.version=1.0 \
--override log.cleaner.backoff.ms=15000 \
--override log.cleaner.dedupe.buffer.size=134217728 \
--override log.cleaner.delete.retention.ms=86400000 \
--override log.cleaner.enable=true \
--override log.cleaner.io.buffer.load.factor=0.9 \
--override log.cleaner.io.buffer.size=524288 \
--override log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 \
--override log.cleaner.min.cleanable.ratio=0.5 \
--override log.cleaner.min.compaction.lag.ms=0 \
--override log.cleaner.threads=1 \
--override log.cleanup.policy=delete \
--override log.index.interval.bytes=4096 \
--override log.index.size.max.bytes=10485760 \
--override log.message.timestamp.difference.max.ms=9223372036854775807 \
--override log.message.timestamp.type=CreateTime \
--override log.preallocate=false \
--override log.retention.check.interval.ms=300000 \
--override max.connections.per.ip=2147483647 \
--override num.partitions=1 \
--override producer.purgatory.purge.interval.requests=1000 \
--override replica.fetch.backoff.ms=1000 \
--override replica.fetch.min.bytes=1 \
--override replica.fetch.max.bytes=1048576 \
--override replica.fetch.response.max.bytes=10485760 \
--override replica.fetch.wait.max.ms=500 \
--override replica.high.watermark.checkpoint.interval.ms=5000 \
--override replica.lag.time.max.ms=10000 \
--override replica.socket.receive.buffer.bytes=65536 \
--override replica.socket.timeout.ms=30000 \
--override reserved.broker.max.id=1000 \
--override zookeeper.session.timeout.ms=6000 \
--override zookeeper.set.acl=false "
env:
- name: KAFKA_HEAP_OPTS
value: "-Xmx512M -Xms512M"
- name: KAFKA_OPTS
value: "-Dlogging.level=WARN"
- name: ZOOKEEPER_CONNECT
value: "zookeeper"
volumeMounts:
- name: data
mountPath: /var/lib/kafka
#
# ATTENTION: readinessProbe causes the statefulset not to deploy correctly, don't use it!
#
readinessProbe:
initialDelaySeconds: 10
timeoutSeconds: 5
exec:
command:
- sh
- -c
- "/opt/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server=localhost:9092"
livenessProbe:
initialDelaySeconds: 10
timeoutSeconds: 5
exec:
command:
- sh
- -c
- "/opt/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server=localhost:9092"
securityContext:
runAsUser: 1000
fsGroup: 1000
volumeClaimTemplates:
- metadata:
name: data
spec:
resources:
requests:
storage: 250Mi
accessModes:
- ReadWriteOnce
storageClassName: nfs
</code></pre>
<p>With readinessProbe I get only 2 of 3 replicas, but only one is running:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
kafka-0 1/1 Running 0 40s
kafka-1 0/1 Running 0 20s
</code></pre>
<p>kafka-1 logs are:</p>
<pre><code>[2017-12-21 11:23:24,964] INFO Starting the log cleaner (kafka.log.LogCleaner)
[2017-12-21 11:23:24,966] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner)
[2017-12-21 11:23:25,394] INFO [controller-event-thread]: Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
[2017-12-21 11:23:25,403] DEBUG [Controller id=1] Broker 0 has been elected as the controller, so stopping the election process. (kafka.controller.KafkaController)
</code></pre>
<p>And kafka-0 ones are:</p>
<pre><code>[2017-12-21 11:25:48,184] WARN [Controller-0-to-broker-1-send-thread]: Controller 0's connection to broker kafka-1.kafka.default.svc.cluster.local:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
java.io.IOException: Connection to kafka-1.kafka.default.svc.cluster.local:9092 (id: 1 rack: null) failed.
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:68)
at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:269)
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:223)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:64)
</code></pre>
<p>Producing messages on kafka-0 works, but consuming doesn't.</p>
<p>If I remove readinessProbe, all 3 replicas are created and kafka works like a charm.
It seems that the readinessProbe prevents the election process from completing correctly.
I can't figure out why the third replica is not created.
Any suggestions are very very welcome.</p>
| <p>I ran into a very similar issue with Kafka. The following changes resolved it for me; maybe they will help.
For readiness, instead of executing a script, I used a tcpSocket check, and kept the liveness probe as an exec command.</p>
<p>You can adjust <code>initialDelaySeconds</code> as per your requirements, though.</p>
<pre><code> readinessProbe:
tcpSocket:
port: 9092
timeoutSeconds: 5
periodSeconds: 5
initialDelaySeconds: 40
livenessProbe:
exec:
command:
- sh
- -c
- "kafka-broker-api-versions.sh --bootstrap-server=localhost:9093"
timeoutSeconds: 5
periodSeconds: 5
initialDelaySeconds: 70
</code></pre>
<p>With these probes in place, all three replicas come up:</p>
<pre><code>$ kubectl get pods -w -l app=kafka
NAME READY STATUS RESTARTS AGE
kafka-0 1/1 Running 0 3m
kafka-1 1/1 Running 0 2m
kafka-2 1/1 Running 0 1m
</code></pre>
|
<p>I am just curious to know how k8s master/scheduler will handle this.</p>
<p>Lets consider I have a k8s master with 2 nodes. Assume that each node has 8GB RAM and each node running a pod which consumes 3GB RAM.</p>
<pre><code>node A - 8GB
- pod A - 3GB
node B - 8GB
- pod B - 3GB
</code></pre>
<p>Now I would like to schedule another pod, say pod C, which requires 6GB RAM. </p>
<p><strong>Question:</strong></p>
<ol>
<li>Will the k8s master shift pod A or B to other node to accommodate the pod C in the cluster or will the pod C be in the pending status? </li>
<li>If the pod C is going to be in pending status, how to use the resources efficiently with k8s?</li>
</ol>
<p>Unfortunately I could not try this with my minikube. If you know how k8s scheduler assigns the nodes, please clarify.</p>
| <p>Most of the Kubernetes components are split by responsibility and workload assignment is no different. We could define the workload assignment process as <strong><em>Scheduling</em></strong> and <strong><em>Execution</em></strong>.</p>
<p>The <strong>Scheduler</strong>, as the name suggests, is responsible for the <strong><em>Scheduling</em></strong> step. The process can be briefly described as "<em>get a list of pods; if a pod is not scheduled to a node, assign it to a node with capacity to run the pod</em>". There is a nice blog post from Julia Evans <a href="https://jvns.ca/blog/2017/07/27/how-does-the-kubernetes-scheduler-work/" rel="nofollow noreferrer">here</a> explaining schedulers.</p>
<p>And <strong>Kubelet</strong> is responsible for the <strong><em>Execution</em></strong> of pods scheduled to its node. It will get the list of POD definitions allocated to its node and make sure they are running with the right configuration; if they are not running, it starts them.</p>
<p>With that in mind, the scenario you described will have the expected behavior: the POD will not be scheduled, because you don't have a node with capacity available for the POD.</p>
<p>Resource balancing is mainly decided at the scheduling level; a nice way to see it is that when you add a new node to the cluster, if there are no PODs pending allocation, the node will not receive any pods. A brief view of the logic used for resource balancing can be seen in <a href="https://github.com/kubernetes/kubernetes/pull/6150" rel="nofollow noreferrer">this PR</a>.</p>
<p>The solutions,</p>
<p>Kubernetes ships with a default scheduler. If the default scheduler does not suit your needs, you can implement your own scheduler as described <a href="https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/" rel="nofollow noreferrer">here</a>. The idea would be to implement an extension of the Scheduler that reschedules PODs already running when the cluster has capacity but it is not well distributed enough to allocate the new load.</p>
<p>Another option is to use tools created for scenarios like this; the <a href="https://github.com/kubernetes-incubator/descheduler" rel="nofollow noreferrer">Descheduler</a> is one. It will monitor the cluster and evict pods from nodes so that the scheduler re-allocates the PODs with a better balance. There is a nice blog post <a href="https://itnext.io/keep-you-kubernetes-cluster-balanced-the-secret-to-high-availability-17edf60d9cb7" rel="nofollow noreferrer">here</a> describing these scenarios.</p>
<p>PS:
Keep in mind that not all of a node's total memory is allocatable; depending on which provider you use, the allocatable capacity can be noticeably lower than the total. Take a look at this SO question: <a href="https://stackoverflow.com/questions/54786341/cannot-create-a-deployment-that-requests-more-than-2gi-memory/54786781#54786781">Cannot create a deployment that requests more than 2Gi memory</a></p>
|
<p>So I recently installed stable/redis-ha cluster (<a href="https://github.com/helm/charts/tree/master/stable/redis-ha" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/stable/redis-ha</a>) on my G-Cloud based kubernetes cluster. The cluster was installed as a "Headless Service" without a ClusterIP. There are 3 pods that make up this cluster one of which is elected master. </p>
<p>The cluster has installed with no issues and can be accessed via redis-cli from my local pc (after port-forwarding with kubectl).</p>
<p>The output from the cluster install provided me with a DNS name for the cluster. Because the service is headless, I am using the following DNS name</p>
<p>port_name.port_protocol.svc.namespace.svc.cluster.local (As specified by the documentation)</p>
<p>When attempting to connect I get the following error:</p>
<blockquote>
<p>"redis.exceptions.ConnectionError: Error -2 connecting to
port_name.port_protocol.svc.namespace.svc.cluster.local :6379. Name does not
resolve."</p>
</blockquote>
<p>This is not working.</p>
<p>Not sure what to do here. Any help would be greatly appreciated.</p>
| <p>The DNS name appears to be incorrect. It should be in the format below:</p>
<pre><code><redis-service-name>.<namespace>.svc.cluster.local:6379
say, redis service name is redis and namespace is default then it should be
redis.default.svc.cluster.local:6379
</code></pre>
<p>you can also use pod dns, like below</p>
<pre><code><redis-pod-name>.<redis-service-name>.<namespace>.svc.cluster.local:6379
say, redis pod name is redis-0 and redis service name is redis and namespace is default then it should be
redis-0.redis.default.svc.cluster.local:6379
</code></pre>
<p>assuming the service port is the same as the container port, which is 6379.</p>
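<p>As a quick check from another pod in the cluster, something like the following should answer with <code>PONG</code>. The service name <code>redis-ha</code> and namespace <code>default</code> here are placeholders; substitute the headless service name your Helm release actually created:</p>
<pre><code>redis-cli -h redis-ha.default.svc.cluster.local -p 6379 ping
</code></pre>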
|
<p>I have an ingress definition in Kubernetes. </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: dev
annotations:
kubernetes.io/ingress.class: nginx
#nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
tls:
- hosts:
- xyz.org
secretName: ingress-tls
rules:
- host: xyz.org
http:
paths:
- path: /configuration/*
backend:
serviceName: dummysvc
servicePort: 80
</code></pre>
<p>I need that whenever I hit the URL <a href="https://example.com/configuration/" rel="nofollow noreferrer">https://example.com/configuration/</a>, it should serve whatever file or entity the service sends as a response, but this does not happen; instead it gives me an error page: "No webpage was found for above address".
Is this an issue with the ingress?</p>
<p>Below is my service spec:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: dummysvc
spec:
#type: LoadBalancer
type: NodePort
ports:
- port: 80
targetPort: 80
protocol: TCP
selector:
app: configurationservice
</code></pre>
<p>Below is my <strong><em>deployment spec</em></strong>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: dummy-deployment
labels:
app: configurationservice
spec:
replicas: 3
selector:
matchLabels:
app: configurationservice
template:
metadata:
labels:
app: configurationservice
spec:
volumes:
- name: appinsights
secret:
secretName: appinsightngm-secrets
- name: cosmosdb
secret:
secretName: cosmosdbngm-secrets
- name: blobstorage
secret:
secretName: blobstoragengm-secrets
- name: azuresearch
secret:
secretName: azuresearchngm-secrets
containers:
- name: configurationservice
image: xyz.azurecr.io/xyz.configurationservice:develop
imagePullPolicy: Always
ports:
- containerPort: 80
volumeMounts:
- name: appinsights
mountPath: "/appinsights/"
readOnly: true
- name: cosmosdb
mountPath: "/cosmosdb/"
readOnly: true
- name: blobstorage
mountPath: "/blobstorage/"
readOnly: true
- name: azuresearch
mountPath: "/azuresearch/"
readOnly: true
---
apiVersion: v1
kind: Service
metadata:
name: dummysvc
spec:
#type: LoadBalancer
type: NodePort
ports:
- port: 80
targetPort: 80
protocol: TCP
selector:
app: configurationservice
</code></pre>
| <p>You can try this example on:
<a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md#rewrite-target" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/rewrite/README.md#rewrite-target</a></p>
<pre><code>$ echo "
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
name: rewrite
namespace: default
spec:
rules:
- host: rewrite.bar.com
http:
paths:
- backend:
serviceName: http-svc
servicePort: 80
path: /something/?(.*)
" | kubectl create -f -
</code></pre>
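<p>Adapting that quoted example to your manifest could look like the sketch below. It keeps your host, TLS secret, service name and port as-is; note that capture-group rewrites like <code>/$1</code> need a reasonably recent nginx ingress controller (0.22 or later):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dev
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  tls:
  - hosts:
    - xyz.org
    secretName: ingress-tls
  rules:
  - host: xyz.org
    http:
      paths:
      - path: /configuration/?(.*)
        backend:
          serviceName: dummysvc
          servicePort: 80
</code></pre>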
|
<p>I have some dilemma to choose what should be the right request and limit setting for a pod in Openshift. Some data:</p>
<ol>
<li>during start up, the application requires at least 600 millicores to be able to fulfill the readiness check within 150 seconds.</li>
<li>after start up, 200 millicores should be sufficient for the application to stay in idle state.</li>
</ol>
<p>So my understanding from documentation:</p>
<p><strong>CPU Requests</strong></p>
<blockquote>
<p>Each container in a pod can specify the amount of CPU it requests on a node. The scheduler uses CPU requests to find a node with an appropriate fit for a container.
The CPU request represents a minimum amount of CPU that your container may consume, but if there is no contention for CPU, it can use all available CPU on the node. If there is CPU contention on the node, CPU requests provide a relative weight across all containers on the system for how much CPU time the container may use.
On the node, CPU requests map to Kernel CFS shares to enforce this behavior.</p>
</blockquote>
<p>Note that the scheduler refers to the requested CPU to perform allocation on the node, and it is then a guaranteed resource once allocated.
On the other hand, I might be allocating extra CPU, as the 600 millicores might only be required during start up.</p>
<p>So should i go for</p>
<pre><code>resources:
limits:
cpu: 1
requests:
cpu: 600m
</code></pre>
<p>for guarantee resource or</p>
<pre><code>resources:
limits:
cpu: 1
requests:
cpu: 200m
</code></pre>
<p>for better cpu saving</p>
| <p>I think you didn't get the idea of <em>Requests vs Limits</em>, I would recommend you take a look on the <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-limits-are-run" rel="noreferrer">docs</a> before you take that decision.</p>
<p>In a brief explanation,</p>
<p><strong>Request</strong> is how much resource will be virtually allocated to the container. It is a guarantee that you can use it when you need it, but it does not mean it is kept reserved exclusively for the container. With that said, if you request 200MB of RAM but only use 100MB, the other 100MB will be "borrowed" by other containers when they consume all of their requested memory, and will be "claimed back" when your container needs it.</p>
<p><strong>Limit</strong>, in simple terms, is how much the container can consume (requested plus borrowed from other containers) before it is shut down for consuming too many resources.</p>
<ol>
<li>If a Container exceeds its memory <strong><em>limit</em></strong>, it will <em>probably</em> be terminated.</li>
<li>If a Container exceeds its memory <strong><em>request</em></strong>, it is <em>likely</em> that its Pod will be evicted whenever the <em>node runs out of memory</em>.</li>
</ol>
<p>In simple terms, <strong>the limit is an absolute value; it should be equal to or higher than the request</strong>. Good practice is to avoid having limits higher than the request for all containers, and to allow it only for workloads that really need it. This is because when most of the containers consume more resources (i.e. memory) than they requested, the PODs suddenly start to be evicted from the node in an unpredictable way, which is worse than having a fixed limit for each one.</p>
<p>There is also a nice post in the <a href="https://docs.docker.com/config/containers/resource_constraints/" rel="noreferrer">docker docs</a> about resources limits.</p>
<p>The <strong>scheduling</strong> rule is the same for CPU and Memory, K8s will only assign a POD to a the node if the node has enough CPU and Memory allocatable to fit all resources <strong>requested</strong> by the containers within a pod.</p>
<p>The <strong>execution</strong> rule is a bit different:</p>
<p>Memory is a limited resource in the node and the capacity is an absolute limit, the containers can't consume more than the node have capacity.</p>
<p>CPU, on the other hand, is measured as CPU time. When you reserve CPU capacity, you are telling how much CPU time a container can use; if the container needs more time than requested, it can be throttled and sent to an execution queue until other containers have consumed their allocated time or finished their work. In summary it is very similar to memory, but it is very unlikely that the container will be killed for consuming too much CPU. The container will be able to use more CPU when the other containers do not use the full CPU time allocated to them. The main issue is that when a container uses more CPU than was allocated, the throttling will degrade the performance of the application and at a certain point it might stop working properly. If you do not provide limits, the container will start affecting other resources in the node.</p>
<p>Regarding the values to be used, there is no right value or right formula; each application requires a different approach, and only by measuring multiple times can you find the right value. The advice I give you is to identify the min and the max and adjust to somewhere in the middle, then keep monitoring to see how it behaves; if you feel it is wasting or lacking resources, you can reduce or increase to an optimal value. If the service is something crucial, start with higher values and reduce afterwards.</p>
<p>For the readiness check, you should not use it as a parameter to specify these values; you can delay the readiness using the <code>initialDelaySeconds</code> parameter in the probe to give extra time for the POD containers to start.</p>
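<p>For your specific numbers, a middle-ground sketch could look like the snippet below. The probe type, path and port are placeholders for whatever your readiness check actually does; the idea is to request close to the idle need, allow bursts up to the limit, and delay readiness rather than sizing the request for start-up:</p>
<pre><code>resources:
  requests:
    cpu: 300m        # a bit above the ~200m idle need
  limits:
    cpu: 1           # allows the ~600m burst during start up
readinessProbe:
  httpGet:
    path: /health    # placeholder endpoint
    port: 8080       # placeholder port
  initialDelaySeconds: 150
  periodSeconds: 10
</code></pre>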
<p>PS: I quoted the terms "Borrow" and "Claimed back" because the container is not actually borrowing from another container, in general, the node have a pool of memory and give you chunk of the memory to the container when they need it, so the memory is not technically borrowed from the container but from the Pool.</p>
|
<p>How can we auto-update (<em>delete, create, change</em>) entries in <code>/etc/hosts</code> file of running Pod without actually entering the pod?</p>
<p>We working on containerisation of <em>SAP</em> application server and so far succeeded in achieving this using <em>Kubernetes</em>.</p>
<pre><code>apiVersion: v1
kind: Pod
spec:
hostNetwork: true
</code></pre>
<p>Since we are using host network approach, all entries of our VMs <code>/etc/hosts</code> file are getting copied whenever a new pod is created.</p>
<p>However, once pod has been created and in running state, any changes to VMs <code>/etc/hosts</code> file are not getting transferred to already running pod.</p>
<p>We would like to achieve this for our project requirement.</p>
| <p>Kubernetes does have several different ways of affecting name resolution; your request is most similar to what is described <a href="https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/" rel="nofollow noreferrer">here</a> and on related pages.</p>
<p>Here is an extract, emphasis mine.</p>
<blockquote>
<p>Adding entries to a Pod’s /etc/hosts file provides Pod-level override of hostname resolution when DNS and other options are not applicable. In 1.7, users can add these custom entries with the HostAliases field in PodSpec.</p>
<p><strong>Modification not using HostAliases is not suggested because the file is managed by Kubelet and can be overwritten on during Pod creation/restart.</strong></p>
</blockquote>
<p>An example Pod specification using <code>HostAliases</code> is as follows:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: hostaliases-pod
spec:
restartPolicy: Never
hostAliases:
- ip: "127.0.0.1"
hostnames:
- "foo.local"
- "bar.local"
- ip: "10.1.2.3"
hostnames:
- "foo.remote"
- "bar.remote"
containers:
- name: cat-hosts
image: busybox
command:
- cat
args:
- "/etc/hosts"
</code></pre>
<p>One issue here is that you will need to update and restart the Pods with a new set of <code>HostAliases</code> if your network IPs change. That might cause downtime in your system.</p>
<p>Are you sure you need this mechanism and not <a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="nofollow noreferrer">a service that points to an external endpoint</a>?</p>
|
<p>I have installed bookinfo on EKS according to the instructions <a href="https://istio.io/docs/setup/kubernetes/helm-install/#option-2-install-with-helm-and-tiller-via-helm-install" rel="nofollow noreferrer">here</a> and <a href="https://istio.io/docs/examples/bookinfo/" rel="nofollow noreferrer">here</a>. </p>
<p>While verifying that the application was installed correctly, I received <code>000</code> when trying to bring up the product page. After checking my network connections (VPC/Subnets/Routing/SecurityGroups), I have narrowed the issue down to being an Istio networking issue.</p>
<p>Upon further investigation, I logged into the istio-sidecar container for productpage and have noticed the following error.</p>
<pre><code>[2019-01-21 09:06:01.039][10][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:41] Unable to establish new stream
[2019-01-21 09:06:28.150][10][warning][upstream] external/envoy/source/common/config/grpc_mux_impl.cc:240] gRPC config stream closed: 14, no healthy upstream
</code></pre>
<p>This led me to to notice that the istio-proxy is pointing to the <code>istio-pilot.istio-system:15007</code> address for discovery. Only the strange thing was, the kubernetes <code>istio-pilot.istio-system</code> service does not seem to be exposing port <code>15007</code> as shown below.</p>
<pre><code>[procyclinsur@localhost Downloads]$ kubectl get svc istio-pilot --namespace=istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-pilot ClusterIP 172.20.185.72 <none> 15010/TCP,15011/TCP,8080/TCP,9093/TCP 1d
</code></pre>
<p>In fact, none of the services from the <code>istio-system</code> namespace seem to expose that port.
I am assuming that this <code>istio-pilot.istio-system</code> address is the one used for gRPC and would like to know how to fix this as it seems to be pointing to the wrong address; please correct me if I am wrong.</p>
<p><strong>Relevant Logs</strong></p>
<p>istio-proxy</p>
<pre><code>2019-01-21T09:04:58.949152Z info Version [email protected]/istio-1.0.5-c1707e45e71c75d74bf3a5dec8c7086f32f32fad-Clean
2019-01-21T09:04:58.949283Z info Proxy role: model.Proxy{ClusterID:"", Type:"sidecar", IPAddress:"10.20.228.89", ID:"productpage-v1-54b8b9f55-jpz8g.default", Domain:"default.svc.cluster.local", Metadata:map[string]string(nil)}
2019-01-21T09:04:58.949971Z info Effective config: binaryPath: /usr/local/bin/envoy
configPath: /etc/istio/proxy
connectTimeout: 10s
discoveryAddress: istio-pilot.istio-system:15007
discoveryRefreshDelay: 1s
drainDuration: 45s
parentShutdownDuration: 60s
proxyAdminPort: 15000
serviceCluster: productpage
zipkinAddress: zipkin.istio-system:9411
</code></pre>
| <p>I wanted to post the solution to my issue.</p>
<p><strong>Problem:</strong></p>
<p>EKS DNS was not properly working which is why none of the other solutions (while very good!!) worked for me.</p>
<p><strong>Cause:</strong></p>
<p>When an AWS VPC is first created, its DNS settings are not set up the way EKS requires. EKS needs both of the following settings enabled; note that the second one is disabled by default:</p>
<ul>
<li>DNS resolution: Enabled</li>
<li>DNS hostnames: Disabled <code><-- Default VPC Settings</code></li>
</ul>
<p><strong>Solution:</strong></p>
<p>Set <code>DNS hostnames</code> to <code>Enabled</code> and DNS begins to work as expected.</p>
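<p>If you prefer the CLI over the console, enabling the setting should look something like this (the VPC ID is a placeholder):</p>
<pre><code>aws ec2 modify-vpc-attribute --vpc-id vpc-xxxxxxxx --enable-dns-hostnames "{\"Value\":true}"
</code></pre>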
|
<p>I have a MySQL pod running in my cluster.<br> I need to temporarily pause the pod from working without deleting it, something similar to docker where the <code>docker stop container-id</code> cmd will stop the container not delete the container.<br> Are there any commands available in kubernetes to pause/stop a pod?</p>
| <p>So, like others have pointed out, Kubernetes doesn't support stopping/pausing the current state of a pod and resuming it when needed. However, you can still achieve a similar effect by scaling the deployment down, i.e. setting the number of replicas to 0.</p>
<pre><code>kubectl scale --replicas=0 deployment/<your-deployment>
</code></pre>
<p>see the help</p>
<pre><code># Set a new size for a Deployment, ReplicaSet, Replication Controller, or StatefulSet.
kubectl scale --help
</code></pre>
<p>Scale also allows users to specify one or more preconditions for the scale action.</p>
<p>If <code>--current-replicas</code> or <code>--resource-version</code> is specified, it is validated before the scale is attempted, and it is
guaranteed that the precondition holds true when the scale is sent to the server.</p>
<p><strong>Examples:</strong></p>
<pre><code> # Scale a replicaset named 'foo' to 3.
kubectl scale --replicas=3 rs/foo
# Scale a resource identified by type and name specified in "foo.yaml" to 3.
kubectl scale --replicas=3 -f foo.yaml
# If the deployment named mysql's current size is 2, scale mysql to 3.
kubectl scale --current-replicas=2 --replicas=3 deployment/mysql
# Scale multiple replication controllers.
kubectl scale --replicas=5 rc/foo rc/bar rc/baz
# Scale statefulset named 'web' to 3.
kubectl scale --replicas=3 statefulset/web
</code></pre>
|
<p>I am running a database inside a Kubernetes pod. I am planning to run a K8s job to take automatic backups of the databases from the pod.</p>
<p>Alternatively, I could write a shell script to take a snapshot of the volume (PV).</p>
<p>Which method is better to use? In an emergency, which one will save time when restoring data?</p>
| <p>You can use <a href="https://appscode.com/products/stash/" rel="nofollow noreferrer">Stash by AppsCode</a> which is a great solution to backup Kubernetes volumes.</p>
<p>For supported versions check <a href="https://github.com/appscode/stash#supported-versions" rel="nofollow noreferrer">here</a></p>
<blockquote>
<p>Stash by AppsCode is a Kubernetes operator for restic. If you are
running production workloads in Kubernetes, you might want to take
backup of your disks. Traditional tools are too complex to setup and
maintain in a dynamic compute environment like Kubernetes. restic is a
backup program that is fast, efficient and secure with few moving
parts. Stash is a CRD controller for Kubernetes built around restic to
address these issues.
Using Stash, you can backup Kubernetes volumes mounted in following
types of workloads:</p>
<p>Deployment, DaemonSet, ReplicaSet, ReplicationController, StatefulSet</p>
</blockquote>
<p>After installing stash using <a href="https://appscode.com/products/stash/0.8.3/setup/install/#using-script" rel="nofollow noreferrer">Script</a> or <a href="https://appscode.com/products/stash/0.8.3/setup/install/#using-helm" rel="nofollow noreferrer">HELM</a> you would want to follow
Instructions for <a href="https://appscode.com/products/stash/0.8.3/guides/backup/" rel="nofollow noreferrer">Backup</a> and <a href="https://appscode.com/products/stash/0.8.3/guides/restore/" rel="nofollow noreferrer">Restore</a> if you are not familiar</p>
<p>I find it very useful</p>
|
<p>I'm trying to bring up a RabbitMQ cluster on Kubernetes using the rabbitmq-peer-discovery-k8s plugin, but I always have only one pod running and ready; the next one always fails.</p>
<p>I tried multiple changes to my configuration, and this is what got at least one pod running:</p>
<pre><code>---
apiVersion: v1
kind: ServiceAccount
metadata:
name: rabbitmq
namespace: namespace-dev
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: endpoint-reader
namespace: namespace-dev
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: endpoint-reader
namespace: namespace-dev
subjects:
- kind: ServiceAccount
name: rabbitmq
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: endpoint-reader
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: "rabbitmq-data"
labels:
name: "rabbitmq-data"
release: "rabbitmq-data"
namespace: "namespace-dev"
spec:
capacity:
storage: 5Gi
accessModes:
- "ReadWriteMany"
nfs:
path: "/path/to/nfs"
server: "xx.xx.xx.xx"
persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: "rabbitmq-data-claim"
namespace: "namespace-dev"
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 5Gi
selector:
matchLabels:
release: rabbitmq-data
---
# headless service Used to access pods using hostname
kind: Service
apiVersion: v1
metadata:
name: rabbitmq-headless
namespace: namespace-dev
spec:
clusterIP: None
# publishNotReadyAddresses, when set to true, indicates that DNS implementations must publish the notReadyAddresses of subsets for the Endpoints associated with the Service. The default value is false. The primary use case for setting this field is to use a StatefulSet's Headless Service to propagate SRV records for its Pods without respect to their readiness for purpose of peer discovery. This field will replace the service.alpha.kubernetes.io/tolerate-unready-endpoints when that annotation is deprecated and all clients have been converted to use this field.
# Since access to the Pod using DNS requires Pod and Headless service to be started before launch, publishNotReadyAddresses is set to true to prevent readinessProbe from finding DNS when the service is not started.
publishNotReadyAddresses: true
ports:
- name: amqp
port: 5672
- name: http
port: 15672
selector:
app: rabbitmq
---
# Used to expose the dashboard to the external network
kind: Service
apiVersion: v1
metadata:
namespace: namespace-dev
name: rabbitmq-service
spec:
type: NodePort
ports:
- name: http
protocol: TCP
port: 15672
targetPort: 15672
nodePort: 31672
- name: amqp
protocol: TCP
port: 5672
targetPort: 5672
nodePort: 30672
selector:
app: rabbitmq
---
apiVersion: v1
kind: ConfigMap
metadata:
name: rabbitmq-config
namespace: namespace-dev
data:
enabled_plugins: |
[rabbitmq_management,rabbitmq_peer_discovery_k8s].
rabbitmq.conf: |
cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s
cluster_formation.k8s.host = kubernetes.default.svc.cluster.local
cluster_formation.k8s.address_type = hostname
cluster_formation.node_cleanup.interval = 10
cluster_formation.node_cleanup.only_log_warning = true
cluster_partition_handling = autoheal
queue_master_locator=min-masters
loopback_users.guest = false
cluster_formation.randomized_startup_delay_range.min = 0
cluster_formation.randomized_startup_delay_range.max = 2
cluster_formation.k8s.service_name = rabbitmq-headless
cluster_formation.k8s.hostname_suffix = .rabbitmq-headless.namespace-dev.svc.cluster.local
vm_memory_high_watermark.absolute = 1.6GB
disk_free_limit.absolute = 2GB
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: rabbitmq
namespace: rabbitmq
spec:
serviceName: rabbitmq-headless # Must be the same as the name of the headless service, used for hostname propagation access pod
selector:
matchLabels:
app: rabbitmq # In apps/v1, it needs to be the same as .spec.template.metadata.label for hostname propagation access pods, but not in apps/v1beta
replicas: 3
template:
metadata:
labels:
app: rabbitmq # In apps/v1, the same as .spec.selector.matchLabels
# setting podAntiAffinity
annotations:
scheduler.alpha.kubernetes.io/affinity: >
{
"podAntiAffinity": {
"requiredDuringSchedulingIgnoredDuringExecution": [{
"labelSelector": {
"matchExpressions": [{
"key": "app",
"operator": "In",
"values": ["rabbitmq"]
}]
},
"topologyKey": "kubernetes.io/hostname"
}]
}
}
spec:
serviceAccountName: rabbitmq
terminationGracePeriodSeconds: 10
containers:
- name: rabbitmq
image: rabbitmq:3.7.10
resources:
limits:
cpu: "0.5"
memory: 2Gi
requests:
cpu: "0.3"
memory: 2Gi
volumeMounts:
- name: config-volume
mountPath: /etc/rabbitmq
- name: rabbitmq-data
mountPath: /var/lib/rabbitmq/mnesia
ports:
- name: http
protocol: TCP
containerPort: 15672
- name: amqp
protocol: TCP
containerPort: 5672
livenessProbe:
exec:
command: ["rabbitmqctl", "status"]
initialDelaySeconds: 60
periodSeconds: 60
timeoutSeconds: 5
readinessProbe:
exec:
command: ["rabbitmqctl", "status"]
initialDelaySeconds: 20
periodSeconds: 60
timeoutSeconds: 5
imagePullPolicy: IfNotPresent
env:
- name: HOSTNAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: RABBITMQ_USE_LONGNAME
value: "true"
- name: RABBITMQ_NODENAME
value: "rabbit@$(HOSTNAME).rabbitmq-headless.namespace-dev.svc.cluster.local"
# If service_name is set in ConfigMap, there is no need to set it again here.
# - name: K8S_SERVICE_NAME
# value: "rabbitmq-headless"
- name: RABBITMQ_ERLANG_COOKIE
value: "mycookie"
volumes:
- name: config-volume
configMap:
name: rabbitmq-config
items:
- key: rabbitmq.conf
path: rabbitmq.conf
- key: enabled_plugins
path: enabled_plugins
- name: rabbitmq-data
persistentVolumeClaim:
claimName: rabbitmq-data-claim
</code></pre>
<p>I only get one pod running and ready instead of the 3 replicas</p>
<pre><code>[admin@devsvr3 yaml]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
rabbitmq-0 1/1 Running 0 2m2s
rabbitmq-1 0/1 Running 1 43s
</code></pre>
<p>Inspecting the failing pod, I got this:</p>
<pre><code>[admin@devsvr3 yaml]$ kubectl logs rabbitmq-1
## ##
## ## RabbitMQ 3.7.10. Copyright (C) 2007-2018 Pivotal Software, Inc.
########## Licensed under the MPL. See http://www.rabbitmq.com/
###### ##
########## Logs: <stdout>
Starting broker...
2019-02-06 21:09:03.303 [info] <0.211.0>
Starting RabbitMQ 3.7.10 on Erlang 21.2.3
Copyright (C) 2007-2018 Pivotal Software, Inc.
Licensed under the MPL. See http://www.rabbitmq.com/
2019-02-06 21:09:03.315 [info] <0.211.0>
node : rabbit@rabbitmq-1.rabbitmq-headless.namespace-dev.svc.cluster.local
home dir : /var/lib/rabbitmq
config file(s) : /etc/rabbitmq/rabbitmq.conf
cookie hash : XhdCf8zpVJeJ0EHyaxszPg==
log(s) : <stdout>
database dir : /var/lib/rabbitmq/mnesia/rabbit@rabbitmq-1.rabbitmq-headless.namespace-dev.svc.cluster.local
2019-02-06 21:09:10.617 [error] <0.219.0> Unable to parse vm_memory_high_watermark value "1.6GB"
2019-02-06 21:09:10.617 [info] <0.219.0> Memory high watermark set to 103098 MiB (108106919116 bytes) of 257746 MiB (270267297792 bytes) total
2019-02-06 21:09:10.690 [info] <0.221.0> Enabling free disk space monitoring
2019-02-06 21:09:10.690 [info] <0.221.0> Disk free limit set to 2000MB
2019-02-06 21:09:10.698 [info] <0.224.0> Limiting to approx 1048476 file handles (943626 sockets)
2019-02-06 21:09:10.698 [info] <0.225.0> FHC read buffering: OFF
2019-02-06 21:09:10.699 [info] <0.225.0> FHC write buffering: ON
2019-02-06 21:09:10.702 [info] <0.211.0> Node database directory at /var/lib/rabbitmq/mnesia/rabbit@rabbitmq-1.rabbitmq-headless.namespace-dev.svc.cluster.local is empty. Assuming we need to join an existing cluster or initialise from scratch...
2019-02-06 21:09:10.702 [info] <0.211.0> Configured peer discovery backend: rabbit_peer_discovery_k8s
2019-02-06 21:09:10.702 [info] <0.211.0> Will try to lock with peer discovery backend rabbit_peer_discovery_k8s
2019-02-06 21:09:10.702 [info] <0.211.0> Peer discovery backend does not support locking, falling back to randomized delay
2019-02-06 21:09:10.702 [info] <0.211.0> Peer discovery backend rabbit_peer_discovery_k8s does not support registration, skipping randomized startup delay.
2019-02-06 21:09:10.710 [info] <0.211.0> Failed to get nodes from k8s - {failed_connect,[{to_address,{"kubernetes.default.svc.cluster.local",443}},
{inet,[inet],nxdomain}]}
2019-02-06 21:09:10.711 [error] <0.210.0> CRASH REPORT Process <0.210.0> with 0 neighbours exited with reason: no case clause matching {error,"{failed_connect,[{to_address,{\"kubernetes.default.svc.cluster.local\",443}},\n {inet,[inet],nxdomain}]}"} in rabbit_mnesia:init_from_config/0 line 164 in application_master:init/4 line 138
2019-02-06 21:09:10.711 [info] <0.43.0> Application rabbit exited with reason: no case clause matching {error,"{failed_connect,[{to_address,{\"kubernetes.default.svc.cluster.local\",443}},\n {inet,[inet],nxdomain}]}"} in rabbit_mnesia:init_from_config/0 line 164
{"Kernel pid terminated",application_controller,"{application_start_failure,rabbit,{bad_return,{{rabbit,start,[normal,[]]},{'EXIT',{{case_clause,{error,\"{failed_connect,[{to_address,{\\"kubernetes.default.svc.cluster.local\\",443}},\n {inet,[inet],nxdomain}]}\"}},[{rabbit_mnesia,init_from_config,0,[{file,\"src/rabbit_mnesia.erl\"},{line,164}]},{rabbit_mnesia,init_with_lock,3,[{file,\"src/rabbit_mnesia.erl\"},{line,144}]},{rabbit_mnesia,init,0,[{file,\"src/rabbit_mnesia.erl\"},{line,111}]},{rabbit_boot_steps,'-run_step/2-lc$^1/1-1-',1,[{file,\"src/rabbit_boot_steps.erl\"},{line,49}]},{rabbit_boot_steps,run_step,2,[{file,\"src/rabbit_boot_steps.erl\"},{line,49}]},{rabbit_boot_steps,'-run_boot_steps/1-lc$^0/1-0-',1,[{file,\"src/rabbit_boot_steps.erl\"},{line,26}]},{rabbit_boot_steps,run_boot_steps,1,[{file,\"src/rabbit_boot_steps.erl\"},{line,26}]},{rabbit,start,2,[{file,\"src/rabbit.erl\"},{line,815}]}]}}}}}"}
Kernel pid terminated (application_controller) ({application_start_failure,rabbit,{bad_return,{{rabbit,start,[normal,[]]},{'EXIT',{{case_clause,{error,"{failed_connect,[{to_address,{\"kubernetes.defau
Crash dump is being written to: /var/log/rabbitmq/erl_crash.dump...done
[admin@devsvr3 yaml]$
</code></pre>
<p>What did I do wrong here? </p>
| <p>Finally, I fixed it by adding this to the /etc/resolv.conf of my pods:</p>
<pre><code>[my-rabbit-svc].[my-rabbitmq-namespace].svc.[cluster-name]
</code></pre>
<p>To add this to my pods, I used the following setting in my StatefulSet:</p>
<pre><code>dnsConfig:
searches:
- [my-rabbit-svc].[my-rabbitmq-namespace].svc.[cluster-name]
</code></pre>
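<p>For context, here is roughly where that block sits in the StatefulSet pod spec — a sketch only; the search domain must use the actual headless service name and the namespace the pods run in (the question's config mixes <code>rabbitmq</code> and <code>namespace-dev</code>, and these need to agree):</p>
<pre><code>spec:
  template:
    spec:
      dnsConfig:
        searches:
          - rabbitmq-headless.rabbitmq.svc.cluster.local
</code></pre>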
<p>Full documentation is available <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config" rel="noreferrer">here</a>.</p>
|
<p>I am trying to transfer logs over to S3 just before the pod is terminated. For this, we need to</p>
<ol>
<li><p>Configure our container to have the AWS CLI. I did this successfully
using a script in a postStart hook.</p></li>
<li><p>Execute an AWS S3 command to transfer files from a hostPath to an S3
bucket. Almost had this one!</p></li>
</ol>
<p>Here is my Kube Deployment (running on minikube):</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: logtransfer-poc
spec:
replicas: 1
template:
metadata:
labels:
app: logs
spec:
volumes:
- name: secret-resources
secret:
secretName: local-secrets
- name: testdata
hostPath:
path: /data/testdata
containers:
- name: logtransfer-poc
image: someImage
ports:
- name: https-port
containerPort: 8443
command: ["/bin/bash","-c","--"]
args: ["while true; do sleep 30; done;"]
volumeMounts:
- name: secret-resources
mountPath: "/data/apache-tomcat/tomcat/resources"
- name: testdata
mountPath: "/data/testdata"
lifecycle:
postStart:
exec:
command: ["/bin/sh", "-c", "cd /data/testdata/ && chmod u+x installS3Script.sh && ./installS3Script.sh > postInstall.logs"]
preStop:
exec:
command: ["/bin/sh", "-c", "cd /data/testdata/ && chmod u+x transferFilesToS3.sh && ./transferFilesToS3.sh > postTransfer.logs"]
terminationMessagePath: /data/testdata/termination-log
terminationGracePeriodSeconds: 30
imagePullSecrets:
- name: my-docker-credentials</code></pre>
</div>
</div>
</p>
<p>installS3Script.sh</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>#!/bin/bash
apt-get update
curl -O https://bootstrap.pypa.io/get-pip.py
python3 get-pip.py --user
chmod u+x get-pip.py
echo "PATH=$PATH:/root/.local/bin" >> ~/.bashrc && echo "Path Exported !!"
source ~/.bashrc && echo "Refreshed profile !"
pip3 install awscli --upgrade --user
mkdir -p ~/.aws
cp /data/testdata/config/config ~/.aws
cp /data/testdata/config/credentials ~/.aws</code></pre>
</div>
</div>
</p>
<p>transferFilesToS3.sh</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>#!/bin/bash
# export AWS_DEFAULT_PROFILE=admin
echo "Transfering files to S3.."
aws s3 cp /data/testdata/data s3://testpratham --recursive --profile admin
aws s3 ls s3://testpratham --profile admin
echo "Transfer to S3 successfull !!"</code></pre>
</div>
</div>
</p>
<p>What failed: transferFilesToS3.sh runs successfully, BUT it does NOT execute the AWS commands.</p>
<p>What works: I created test log files and put the AWS commands in the postStart hook (installS3Script.sh), and it works fine!</p>
<p>I think I might be looking at preStop hooks the wrong way. I read a few articles on <a href="https://blog.openshift.com/kubernetes-pods-life/" rel="nofollow noreferrer">lifecycle</a> and <a href="https://pracucci.com/graceful-shutdown-of-kubernetes-pods.html" rel="nofollow noreferrer">preStop</a> hooks. I also had a related question on the use of a <a href="https://stackoverflow.com/questions/54676124/relation-between-prestop-hook-and-terminationgraceperiodseconds/54680573#54680573">preStop hook with a grace period</a>.</p>
<p>Any suggestions/help on what I might be missing are appreciated.</p>
| <p>Maybe it would be easier to use <a href="https://github.com/nuvo/skbn" rel="nofollow noreferrer">Skbn</a>.</p>
<blockquote>
<p><strong>Skbn</strong> is a tool for copying files and directories between Kubernetes and cloud storage providers. It is named after the 1981 video game <a href="https://en.wikipedia.org/wiki/Sokoban" rel="nofollow noreferrer">Sokoban</a>. Skbn uses an in-memory buffer for the copy process, to avoid excessive memory consumption. Skbn currently supports the following providers:
- AWS S3
- Minio S3
- Azure Blob Storage</p>
</blockquote>
<p>You could use:</p>
<pre><code>skbn cp \
--src k8s://<namespace>/<podName>/<containerName>/<path> \
--dst s3://<bucket>/<path>
</code></pre>
<p>You should look at <a href="https://github.com/nuvo/skbn/tree/master/examples/in-cluster" rel="nofollow noreferrer">in-cluster</a> usage, as it will require setting up ClusterRole, ClusterRoleBinding and ServiceAccount.</p>
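<p>The general shape of that setup is a ServiceAccount bound to a ClusterRole — a rough sketch only; the exact rules Skbn needs are defined in the linked in-cluster example, so treat the ones below as placeholders:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: skbn
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: skbn
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec"]   # assumption: adjust to the resources/verbs in the Skbn in-cluster example
  verbs: ["get", "list", "create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: skbn
subjects:
- kind: ServiceAccount
  name: skbn
  namespace: default
roleRef:
  kind: ClusterRole
  name: skbn
  apiGroup: rbac.authorization.k8s.io
</code></pre>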
|
<p>With a Kubernetes cluster in place, what would be an alternative way to send configurations/passwords into containers? I know about Secrets, but what I'm looking for is a centralised solution that keeps passwords encrypted, not just base64-encoded.</p>
| <p>You could also consider <strong>Kamus</strong> (and <strong><a href="https://kamus.soluto.io/docs/user/crd/" rel="nofollow noreferrer">KamusSecret</a></strong>, see at the end):</p>
<blockquote>
<p>An open source, GitOps, zero-trust secrets encryption and decryption solution for Kubernetes applications.</p>
<p>Kamus enables users to easily encrypt secrets that can be decrypted only by the application running on Kubernetes.<br />
The encryption is done using strong encryption providers (currently supported: Azure KeyVault, Google Cloud KMS and AES).<br />
To learn more about Kamus, check out the <a href="https://blog.solutotlv.com/can-kubernetes-keep-a-secret?utm_source=github" rel="nofollow noreferrer">blog post</a> and <a href="https://www.slideshare.net/SolutoTLV/can-kubernetes-keep-a-secret" rel="nofollow noreferrer">slides</a>.</p>
<pre><code>helm repo add soluto https://charts.soluto.io
helm upgrade --install kamus soluto/kamus
</code></pre>
<p>Architecture: Kamus has 3 components:</p>
<ul>
<li>Encrypt API</li>
</ul>
</blockquote>
<ul>
<li>Decrypt API</li>
<li>Key Management System (KMS)</li>
</ul>
<blockquote>
<p>The encrypt and decrypt APIs handle encryption and decryption requests. The KMS is a wrapper for various cryptographic solutions. Currently supported:</p>
<ul>
<li>AES - uses one key for all secrets</li>
<li>Azure KeyVault - creates one key per service account.</li>
<li>Google Cloud KMS - creates one key per service account.</li>
</ul>
</blockquote>
<hr />
<p>As noted by <a href="https://stackoverflow.com/users/4792970/omer-levi-hevroni">Omer Levi Hevroni</a> in <a href="https://stackoverflow.com/questions/54542638/kubernetes-with-secrets-alternative/54832340#comment102010506_54832340">the comments</a>:</p>
<blockquote>
<p>We are not planning to support env vars directly, as there are some security issues with using them.<br />
As an alternative, you can use <strong><a href="https://kamus.soluto.io/docs/user/crd/" rel="nofollow noreferrer">KamusSecret</a></strong> to create a regular secret and mount it</p>
</blockquote>
<blockquote>
<p>KamusSecret works very similarly to the regular secret encryption flow with Kamus.<br />
The encrypted data is represented in a format that is identical to regular Kubernetes Secrets.<br />
Kamus will create an identical secret with the decrypted content.</p>
</blockquote>
|
<p>I am trying to create a mongodb-replicaset with Kubernetes Helm on DigitalOcean using do-block-storage. Since MongoDB recommends the XFS format, I tried to format the do-block-storage volume with XFS using the configuration below, but it doesn't seem to work. Can you help? Thank you.</p>
<pre><code>persistentVolume:
enabled: true
## mongodb-replicaset data Persistent Volume Storage Class
## If defined, storageClassName: <storageClass>
## If set to "-", storageClassName: "", which disables dynamic provisioning
## If undefined (the default) or set to null, no storageClassName spec is
## set, choosing the default provisioner. (gp2 on AWS, standard on
## GKE, AWS & OpenStack)
##
storageClass: "do-block-storage"
accessModes:
- ReadWriteOnce
size: 10Gi
parameters:
fsType: xfs
annotations: {}
</code></pre>
| <p>There are two issues with your custom parameters (values.yaml):</p>
<ol>
<li>The stable MongoDB Helm chart does not know anything about the user-defined field "parameters", because it simply is not defined in any template file (mongodb/templates/*.yaml). In your case Helm will render something similar to this:</li>
</ol>
<pre><code>volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
    spec:
      accessModes:
        - "ReadWriteOnce"
      resources:
        requests:
          storage: "10Gi"
      storageClassName: "do-block-storage"
</code></pre>
<ol start="2">
<li>You can't specify "fsType" in volumeClaimTemplates, although it was once requested (see <a href="https://github.com/kubernetes/kubernetes/issues/44478" rel="nofollow noreferrer">this</a> github issue).</li>
</ol>
<p>I can see two possible workarounds for your issue:</p>
<ol>
<li>Use a separate StorageClass with xfs as the default filesystem format, and then reference its name in the Helm chart's values, e.g. create a <strong>do-block-storage-xfs</strong> StorageClass like this:</li>
</ol>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: do-block-storage-xfs
  namespace: kube-system
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: com.digitalocean.csi.dobs
parameters:
  fstype: xfs
</code></pre>
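<p>With such a class in place, the chart values from the question would simply reference it — the same values.yaml, minus the unsupported <code>parameters</code> block:</p>
<pre><code>persistentVolume:
  enabled: true
  storageClass: "do-block-storage-xfs"
  accessModes:
    - ReadWriteOnce
  size: 10Gi
</code></pre>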
<ol start="2">
<li>Create a PersistentVolume with xfs fsType in DigitalOcean in advance, plus a matching PVC in Kubernetes, and reference it as an existing claim in the Helm chart (see the <em>persistence.existingClaim</em> configurable parameter <a href="https://github.com/helm/charts/tree/master/stable/mongodb" rel="nofollow noreferrer">here</a>).</li>
</ol>
|
<p>I have a Spring Boot and PostgreSQL communication issue using service name.</p>
<p>I have created a cluster using Calico for networking, as follows:</p>
<pre><code>sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=178.12.1.10
</code></pre>
<p>and joined the worker node using the output produced by the above command.</p>
<p>When I deploy the pod and the service, they end up running on different subnets. Am I missing something?</p>
<pre><code>kubectl get svc
</code></pre>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend-service ClusterIP 10.110.149.43 <none> 8091/TCP 12s
postgres ClusterIP 10.108.1.52 <none> 5432/TCP 14m
</code></pre>
<p>The service endpoint is not getting generated.</p>
<pre><code>kubectl get endpoints
</code></pre>
<pre><code>NAME ENDPOINTS AGE
backend-service 20m
postgres <none> 21m
</code></pre>
<pre><code>kubectl get pods --all-namespaces
</code></pre>
<pre><code>default backend-service-6dbd64ff4d-gqkq8 0/1 CrashLoopBackOff 12 57m
default postgres-7564bcc778-578fx 1/1 Running 0 57m
kube-system calico-etcd-b7wqf 1/1 Running 1 2d3h
kube-system calico-kube-controllers-74887d7bdf-wxhkd 1/1 Running 1 2d3h
kube-system calico-node-689b5 0/1 Running 0 47h
kube-system calico-node-smkq5 0/1 Running 1 47h
kube-system coredns-86c58d9df4-7ncdk 1/1 Running 1 2d3h
kube-system coredns-86c58d9df4-g4jcp 1/1 Running 1 2d3h
kube-system etcd-kmaster 1/1 Running 1 2d3h
kube-system kube-apiserver-kmaster 1/1 Running 1 2d3h
kube-system kube-controller-manager-kmaster 1/1 Running 3 2d3h
kube-system kube-proxy-njx5c 1/1 Running 1 2d3h
kube-system kube-proxy-pkxx5 1/1 Running 1 2d3h
kube-system kube-scheduler-kmaster 1/1 Running 3 2d3h
kube-system kubernetes-dashboard-57df4db6b-zcvcc 1/1 Running 1 2d3h
</code></pre>
| <p>You need to check the service-cluster-ip-range CIDR, which is specified by a flag on the API server. Service ClusterIPs are assigned from service-cluster-ip-range.</p>
<p>You have initialized the cluster with <code>--pod-network-cidr=192.168.0.0/16</code>.</p>
<p>Note that the pod-network-cidr range is used for assigning IP addresses to pods; it is different from the service ClusterIP range.</p>
<p>You should be checking the service-cluster-ip-range that is defined in the API server startup parameters.</p>
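<p>On a kubeadm cluster you can check it in the API server static pod manifest (the path below is the kubeadm default):</p>
<pre><code>grep service-cluster-ip-range /etc/kubernetes/manifests/kube-apiserver.yaml
# e.g. --service-cluster-ip-range=10.96.0.0/12 (the kubeadm default)
</code></pre>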
|
<p>I am having issues deploying JupyterHub on a Kubernetes cluster. The issue I am getting is that the hub pod is stuck in Pending.</p>
<p>Stack:</p>
<ul>
<li>kubeadm</li>
<li>flannel</li>
<li>weave</li>
<li>helm</li>
<li>jupyterhub</li>
</ul>
<p>Runbook:</p>
<pre><code>$kubeadm init --pod-network-cidr="10.244.0.0/16"
$sudo cp /etc/kubernetes/admin.conf $HOME/ && sudo chown $(id -u):$(id -g) $HOME/admin.conf && export KUBECONFIG=$HOME/admin.conf
$kubectl create -f pvc.yml
$kubectl create -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel-aliyun.yml
$kubectl apply --filename https://git.io/weave-kube-1.6
$kubectl taint nodes --all node-role.kubernetes.io/master-
</code></pre>
<p>Helm installations as per <a href="https://zero-to-jupyterhub.readthedocs.io/en/latest/setup-helm.html" rel="nofollow noreferrer">https://zero-to-jupyterhub.readthedocs.io/en/latest/setup-helm.html</a></p>
<p>Jupyter installations as per <a href="https://zero-to-jupyterhub.readthedocs.io/en/latest/setup-jupyterhub.html" rel="nofollow noreferrer">https://zero-to-jupyterhub.readthedocs.io/en/latest/setup-jupyterhub.html</a></p>
<p><strong>config.yml</strong></p>
<pre><code>proxy:
secretToken: "asdf"
singleuser:
storage:
dynamic:
storageClass: local-storage
</code></pre>
<p><strong>pvc.yml</strong></p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: standard
spec:
capacity:
storage: 100Gi
# volumeMode field requires BlockVolume Alpha feature gate to be enabled.
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Delete
storageClassName: local-storage
local:
path: /dev/vdb
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- example-node
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: standard
spec:
storageClassName: local-storage
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
</code></pre>
<p>The warning is:</p>
<pre><code>$kubectl --namespace=jhub get pod
NAME READY STATUS RESTARTS AGE
hub-fb48dfc4f-mqf4c 0/1 Pending 0 3m33s
proxy-86977cf9f7-fqf8d 1/1 Running 0 3m33s
$kubectl --namespace=jhub describe pod hub
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 35s (x3 over 35s) default-scheduler pod has unbound immediate PersistentVolumeClaims
$kubectl --namespace=jhub describe pv
Name: standard
Labels: type=local
Annotations: pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: manual
Status: Bound
Claim: default/standard
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 10Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /dev/vdb
HostPathType:
Events: <none>
$kubectl --namespace=kube-system describe pvc
Name: hub-db-dir
Namespace: jhub
StorageClass:
Status: Pending
Volume:
Labels: app=jupyterhub
chart=jupyterhub-0.8.0-beta.1
component=hub
heritage=Tiller
release=jhub
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 13s (x7 over 85s) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
Mounted By: hub-fb48dfc4f-mqf4c
</code></pre>
<p>I tried my best to follow the local storage volume configuration on the official Kubernetes website, but with no luck.</p>
<p>-G</p>
| <p>Managed to fix it using the following configuration. Key points:</p>
<ul>
<li>I forgot to add the node in <code>nodeAffinity</code></li>
<li>it works without putting in <code>volumeBindingMode</code></li>
</ul>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: standard
spec:
capacity:
storage: 2Gi
# volumeMode field requires BlockVolume Alpha feature gate to be enabled.
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /temp
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- INSERT_NODE_NAME_HERE
</code></pre>
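<p>To find the value to substitute for INSERT_NODE_NAME_HERE, list the nodes and their hostname labels:</p>
<pre><code>kubectl get nodes --show-labels | grep kubernetes.io/hostname
# or simply:
kubectl get nodes -o wide
</code></pre>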
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
annotations:
storageclass.kubernetes.io/is-default-class: "true"
name: local-storage
provisioner: kubernetes.io/no-provisioner
</code></pre>
<p>config.yaml</p>
<pre><code>proxy:
secretToken: "token"
singleuser:
storage:
dynamic:
storageClass: local-storage
</code></pre>
<p>Make sure your storage/PV looks like this:</p>
<pre><code>root@asdf:~# kubectl --namespace=kube-system describe pv
Name: standard
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"standard"},"spec":{"accessModes":["ReadWriteOnce"],"capa...
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: local-storage
Status: Bound
Claim: jhub/hub-db-dir
Reclaim Policy: Retain
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 2Gi
Node Affinity:
Required Terms:
Term 0: kubernetes.io/hostname in [asdf]
Message:
Source:
Type: LocalVolume (a persistent volume backed by local storage on a node)
Path: /temp
Events: <none>
</code></pre>
<pre><code>root@asdf:~# kubectl --namespace=kube-system describe storageclass
Name: local-storage
IsDefaultClass: Yes
Annotations: storageclass.kubernetes.io/is-default-class=true
Provisioner: kubernetes.io/no-provisioner
Parameters: <none>
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
</code></pre>
<p>Now the hub pod looks something like this:</p>
<pre><code>root@asdf:~# kubectl --namespace=jhub describe pod hub
Name: hub-5d4fcd8fd9-p6crs
Namespace: jhub
Priority: 0
PriorityClassName: <none>
Node: asdf/192.168.0.87
Start Time: Sat, 23 Feb 2019 14:29:51 +0800
Labels: app=jupyterhub
component=hub
hub.jupyter.org/network-access-proxy-api=true
hub.jupyter.org/network-access-proxy-http=true
hub.jupyter.org/network-access-singleuser=true
pod-template-hash=5d4fcd8fd9
release=jhub
Annotations: checksum/config-map: --omitted
checksum/secret: --omitted--
Status: Running
IP: 10.244.0.55
Controlled By: ReplicaSet/hub-5d4fcd8fd9
Containers:
hub:
Container ID: docker://d2d4dec8cc16fe21589e67f1c0c6c6114b59b01c67a9f06391830a1ea711879d
Image: jupyterhub/k8s-hub:0.8.0
Image ID: docker-pullable://jupyterhub/k8s-hub@sha256:e40cfda4f305af1a2fdf759cd0dcda834944bef0095c8b5ecb7734d19f58b512
Port: 8081/TCP
Host Port: 0/TCP
Command:
jupyterhub
--config
/srv/jupyterhub_config.py
--upgrade-db
State: Running
Started: Sat, 23 Feb 2019 14:30:28 +0800
Ready: True
Restart Count: 0
Requests:
cpu: 200m
memory: 512Mi
Environment:
PYTHONUNBUFFERED: 1
HELM_RELEASE_NAME: jhub
POD_NAMESPACE: jhub (v1:metadata.namespace)
CONFIGPROXY_AUTH_TOKEN: <set to the key 'proxy.token' in secret 'hub-secret'> Optional: false
Mounts:
/etc/jupyterhub/config/ from config (rw)
/etc/jupyterhub/secret/ from secret (rw)
/srv/jupyterhub from hub-db-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from hub-token-bxzl7 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: hub-config
Optional: false
secret:
Type: Secret (a volume populated by a Secret)
SecretName: hub-secret
Optional: false
hub-db-dir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: hub-db-dir
ReadOnly: false
hub-token-bxzl7:
Type: Secret (a volume populated by a Secret)
SecretName: hub-token-bxzl7
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
</code></pre>
|
<p>I have built a sample application to deploy Jenkins on Kubernetes, exposing it using an Ingress. When I access the Jenkins pod via NodePort it works, but when I try to access it via the Ingress / NGINX setup I get a 404.</p>
<p>I googled around and tried a few workarounds, but none have worked so far. Here are the details of the files:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose -f ../docker-compose.yml -f ../docker-compose.utils.yml -f
../docker-compose.demosite.yml convert
kompose.version: 1.17.0 (a74acad)
creationTimestamp: null
labels:
io.kompose.service: ci
name: ci
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: ci
spec:
containers:
- image: jenkins/jenkins:lts
name: almsmart-ci
ports:
- containerPort: 8080
env:
- name: JENKINS_USER
value: admin
- name: JENKINS_PASS
value: admin
- name: JAVA_OPTS
value: -Djenkins.install.runSetupWizard=false
- name: JENKINS_OPTS
value: --prefix=/ci
imagePullPolicy: Always
resources: {}
restartPolicy: Always
status: {}
---
apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: kompose -f ../docker-compose.yml -f ../docker-compose.utils.yml -f
../docker-compose.demosite.yml convert
kompose.version: 1.17.0 (a74acad)
creationTimestamp: null
labels:
io.kompose.service: ci
name: ci
spec:
type : NodePort
ports:
- port: 8080
protocol: TCP
targetPort: 8080
selector:
io.kompose.service: ci
status:
loadBalancer: {}
</code></pre>
<p>Here is my ingress definition </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: demo-ingress
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
nginx.ingress.kubernetes.io/cors-allow-headers: Authorization, origin, accept
nginx.ingress.kubernetes.io/cors-allow-methods: GET, OPTIONS
nginx.ingress.kubernetes.io/enable-cors: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- http:
paths:
- path: /ci
backend:
serviceName: ci
servicePort: 8080
</code></pre>
<p>When I checked the logs in the NGINX controller, I saw the following:</p>
<pre><code>I0222 19:59:45.826062 6 controller.go:172] Configuration changes detected, backend reload required.
I0222 19:59:45.831627 6 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"almsmart-ingress", UID:"444858e5-36d9-11e9-9e29-080027811fa3", APIVersion:"extensions/v1beta1", ResourceVersion:"198832", FieldPath:""}): type: 'Normal' reason: 'DELETE' Ingress default/almsmart-ingress
I0222 19:59:46.063220 6 controller.go:190] Backend successfully reloaded.
[22/Feb/2019:19:59:46 +0000]TCP200000.000
W0222 20:00:00.870990 6 endpoints.go:76] Error obtaining Endpoints for Service "default/ci": no object matching key "default/ci" in local store
W0222 20:00:00.871023 6 controller.go:842] Service "default/ci" does not have any active Endpoint.
I0222 20:00:00.871103 6 controller.go:172] Configuration changes detected, backend reload required.
I0222 20:00:00.872556 6 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"almsmart-ingress", UID:"6fc5272c-36dc-11e9-9e29-080027811fa3", APIVersion:"extensions/v1beta1", ResourceVersion:"198872", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/almsmart-ingress
I0222 20:00:01.060291 6 controller.go:190] Backend successfully reloaded.
[22/Feb/2019:20:00:01 +0000]TCP200000.000
W0222 20:00:04.205398 6 controller.go:842] Service "default/ci" does not have any active Endpoint.
[22/Feb/2019:20:00:09 +0000]TCP200000.000
10.244.0.0 - [10.244.0.0] - - [22/Feb/2019:20:00:36 +0000] "GET /ci/ HTTP/1.1" 404 274 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:65.0) Gecko/20100101 Firefox/65.0" 498 0.101 [default-ci-8080] 10.244.1.97:8080 315 0.104 404 b5b849647749e2b626f00c011c15bc4e
10.244.0.0 - [10.244.0.0] - - [22/Feb/2019:20:00:46 +0000] "GET /ci HTTP/1.1" 404 274 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:65.0) Gecko/20100101 Firefox/65.0" 497 0.003 [default-ci-8080] 10.244.1.97:8080 315 0.004 404 ac8fbe2faa37413f5e533ed3c8d98a7d
10.244.0.0 - [10.244.0.0] - - [22/Feb/2019:20:00:49 +0000] "GET /ci/ HTTP/1.1" 404 274 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:65.0) Gecko/20100101 Firefox/65.0" 498 0.003 [default-ci-8080] 10.244.1.97:8080 315 0.004 404 865cb82af7f570f2144ef27fdea850c9
I0222 20:00:54.871828 6 status.go:388] updating Ingress default/almsmart-ingress status from [] to [{ }]
I0222 20:00:54.877693 6 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"almsmart-ingress", UID:"6fc5272c-36dc-11e9-9e29-080027811fa3", APIVersion:"extensions/v1beta1", ResourceVersion:"198974", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress default/almsmart-ingress
</code></pre>
<p>When I try <code>kubectl get endpoints</code>, I get the following:</p>
<pre><code>NAME ENDPOINTS AGE
ci 10.244.1.97:8080 31m
</code></pre>
<p>The default 404 page is available, so I assume the Ingress Controller is working fine, but I am not sure why it is not able to find the service. I have all the objects in the default namespace and they are working, but I am still unable to access the service through the NGINX ingress.</p>
| <p>+1 for this well-asked question.</p>
<p>Your setup seems OK to me. One problem is that you have <code>--prefix=/ci</code> configured for your Jenkins, but you configured <code>nginx.ingress.kubernetes.io/rewrite-target: /</code> for your ingress. This causes a rewrite of your route: <code>xxx/ci => xxx/</code>. I think the 404 is returned by your Jenkins.</p>
<p>You could try to modify your rewrite rule to <code>nginx.ingress.kubernetes.io/rewrite-target: /ci</code> and see if this works for you.</p>
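<p>In other words, the Ingress annotations from the question would become something like this (a sketch; the other annotations stay unchanged):</p>
<pre><code>metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /ci
</code></pre>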
|
<p>I need to build an application that has many <strong>bare-metal</strong> nodes joined in a Kubernetes cluster, and I need a shared persistent file system between those <strong>nodes</strong>. The nodes should be able to read and write to this file system simultaneously.</p>
<p>Bonus: is there a way to keep the file system alive even if the cluster crashes?</p>
<p>I read <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="noreferrer">this</a> article but cant find answers to this question.</p>
<p>This problem is very important to me because it is a requirement to my undergraduate paper.</p>
| <p>Yes, it does. What you're looking for is to set your access mode to <code>ReadWriteMany</code>.
Note that not all volume plugins provide <code>ReadWriteMany</code>.</p>
<p>Multiple pods can then read and write to the volume at the same time, and if a node or pod restarts, you still have access to the volume.</p>
<p>To get a full list of which volume plugins support it, refer to the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="noreferrer">official documentation</a>.</p>
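<p>A minimal PVC sketch requesting <code>ReadWriteMany</code> — the <code>storageClassName</code> here is an assumption and must point at an RWX-capable backend available on your bare-metal nodes (NFS, CephFS, GlusterFS, etc.):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs-client   # assumption: any RWX-capable provisioner
</code></pre>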
|
<p>I have been trying to get a basic Kubernetes cluster running according to the following tutorial <a href="https://kubernetes.io/docs/setup/independent/install-kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/install-kubeadm/</a></p>
<p>I started with an up-to-date ubuntu 16.04 system and installed docker.</p>
<p><code>wget -qO- https://get.docker.com/ | sed 's/docker-ce/docker-ce=18.06.3~ce~3-0~ubuntu/' | sh</code></p>
<p>After that I installed the kubelet / Kubeadm and kubectl modules</p>
<pre><code>apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
</code></pre>
<p>Made sure that swap etc was off <code>sudo swapoff -a</code></p>
<p>Performed the init using <code>sudo kubeadm init</code></p>
<pre><code>[init] Using Kubernetes version: v1.13.3
...
To start using your cluster
...
mkdir ...
You can now join any ...
...
</code></pre>
<p>I make the .kube folder and config</p>
<pre><code>mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</code></pre>
<p><code>kubectl cluster-info</code> then shows</p>
<pre><code>To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 10.117.xxx.xxx:6443 was refused - did you specify the right host or port?
</code></pre>
<p>After giving it a few attempts, I once received:</p>
<pre><code>sudo kubectl cluster-info
Kubernetes master is running at https://10.117.xxx.xxx:6443
KubeDNS is running at https://10.117.xxx.xxx:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
</code></pre>
<p>But a second later its back to the permission denied</p>
<pre><code>sudo kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 10.117.xxx.xxx:6443 was refused - did you specify the right host or port?
</code></pre>
<p>I tried with and without sudo... and <code>sudo kubectl get nodes</code> also refused to work.</p>
<pre><code>The connection to the server 10.117.xxx.xxx:6443 was refused - did you specify the right host or port?
</code></pre>
<p>What am I missing that it won't connect?</p>
<p><code>ping 10.117.xxx.xxx</code> works fine and even <code>ssh</code> to this address works and is the same server.</p>
<p><strong>Edit</strong></p>
<p><code>sudo systemctl restart kubelet.service</code> shows that the cluster comes online but for some reason goes offline within minutes.</p>
<pre><code>kubectl cluster-info
Kubernetes master is running at https://10.117.0.47:6443
KubeDNS is running at https://10.117.0.47:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
...
kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 10.117.0.47:6443 was refused - did you specify the right host or port?
</code></pre>
<p><strong>edit2</strong></p>
<p>After doing a full reset and using the following init...</p>
<pre><code>sudo kubeadm init --pod-network-cidr=10.244.0.0/16
</code></pre>
<p>Followed by</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
</code></pre>
<p>This allowed me to install the pod network add-on, but it was only short-lived.</p>
<pre><code>clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
</code></pre>
<pre><code>kubectl cluster-info
Kubernetes master is running at https://10.117.xxx.xxx:6443
KubeDNS is running at https://10.117.xxx.xxx:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
➜ ~ kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 10.117.xxx.xxx:6443 was refused - did you specify the right host or port?
</code></pre>
<p><strong>edit3</strong></p>
<p>Removing all docker images, containers etc. and then performing a restart using <code>sudo systemctl restart kubelet.service</code> seems to do the trick for a few minutes, but then all docker containers are killed and removed without any signal. How can I look into the logging of these containers to perhaps find out why they are killed?</p>
<p><a href="https://pastebin.com/wWSWXM31" rel="nofollow noreferrer">https://pastebin.com/wWSWXM31</a> log file</p>
| <p>I don't know the specifics of Kubernetes, but I can explain what you are seeing.</p>
<p>The connection refused is not a permission denied. It means: "I contacted the server at that IP address and port and no server was running on that port."</p>
<p>So you will have to go to the remote system (the one that you keep calling 10.117.xxx.xxx) and double-check that the server is running, and on which port.</p>
<p>For example, the "netstat -a" tool will list all open ports and connections. You should see listening servers as </p>
<pre><code>tcp 0 0 0.0.0.0:9090 0.0.0.0:* LISTEN
</code></pre>
<p>Here, in my case, it is listening on port 9090. You are looking for an entry with 6443. It probably won't be there, because that's what "connection refused" is already telling you. You need to start the server process that's supposed to provide that service and watch for errors. Check for errors in /var/log/syslog if you don't see them on your terminal.</p>
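<p>Since kubeadm runs the control plane as containers, one way to see why the API server keeps dying is to look at its container and at the kubelet logs — a sketch, the container ID is a placeholder:</p>
<pre><code>sudo docker ps -a | grep kube-apiserver
sudo docker logs <apiserver-container-id>
# the kubelet itself logs to systemd:
sudo journalctl -u kubelet -f
</code></pre>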
|
<p>I have deployed a kops Kubernetes cluster in AWS, with everything in the same namespace.</p>
<p>The NGINX ingress controller routes traffic to HTTPS backends (WordPress apps).</p>
<p>I'm able to reach the website, but unfortunately, for every ~10 calls only 1 gets HTTP 200; the other 9 get a 404 "nginx not found". I have tried searching everywhere but had no luck :(</p>
<p>My configuration:
DNS -> AWS NLB -> 2 Nodes</p>
<p>ingress.yaml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-nginx
namespace: example-ns
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/ssl-passthrough: "True"
nginx.org/ssl-services: test-service
nginx.ingress.kubernetes.io/affinity: "cookie"
spec:
rules:
- host: "test.example.com"
http:
paths:
- path: /
backend:
serviceName: test-service
servicePort: 8443
</code></pre>
<p>nginx-service.yaml:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: ingress-nginx
namespace: example-ns
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
spec:
externalTrafficPolicy: Local
type: LoadBalancer
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
ports:
- name: http
port: 80
targetPort: http
- name: https
port: 443
targetPort: https
</code></pre>
<p>nginx-daemonset.yaml:</p>
<pre><code>kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
name: nginx-ingress-controller
namespace: example-ns
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
spec:
serviceAccountName: nginx-ingress-serviceaccount
imagePullSecrets:
- name: private-repo
containers:
- name: nginx-ingress-controller
image: private_repo/private_image
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
- --default-ssl-certificate=$(POD_NAMESPACE)/tls-cert
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
runAsUser: 33
resources:
limits:
cpu: 500m
memory: 300Mi
requests:
cpu: 400m
memory: 200Mi
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: https
containerPort: 443
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
</code></pre>
<p>wordpress.yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test-example
namespace: example-ns
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
strategy:
type: RollingUpdate
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
restartPolicy: Always
volumes:
- name: volume
persistentVolumeClaim:
claimName: volume-claim
imagePullSecrets:
- name: private-repo
containers:
- name: test-example-httpd
image: private_repo/private_image
imagePullPolicy: Always
ports:
- containerPort: 8443
name: https
- name: test-example-php-fpm
image: private_repo/private_image
imagePullPolicy: Always
securityContext:
runAsUser: 82
securityContext:
allowPrivilegeEscalation: false
---
apiVersion: v1
kind: Service
metadata:
name: test-service
namespace: example-ns
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
ports:
- name: https-web
targetPort: 8443
port: 8443
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
</code></pre>
<p>---UPDATE---</p>
<pre><code>kubectl get endpoints,services -n example-ns
NAME ENDPOINTS AGE
endpoints/ingress-nginx 100.101.0.1:8443,100.100.0.4:443,100.101.0.2:443 1d
endpoints/test-service 100.100.0.1:8443,100.101.0.1:8443,100.101.0.2:8443 4h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx LoadBalancer SOME-IP sometext.elb.us-west-3.amazonaws.com 80:31541/TCP,443:31017/TCP 1d
service/test-service ClusterIP SOME-IP <none> 8443/TCP 4h
</code></pre>
<p>Thanks!</p>
| <p>Apparently changing the annotation nginx.ingress.kubernetes.io/ssl-passthrough from "True" to "False" solved it. </p>
<p>It probably has something to do with SSL termination happening in NGINX rather than in Apache.</p>
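<p>That is, in the Ingress from the question the annotation would change like this (a sketch; the other annotations stay unchanged):</p>
<pre><code>metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/ssl-passthrough: "false"
</code></pre>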
|
<p>I am trying to run k8s via minikube. I followed the <a href="https://kubernetes.io/docs/setup/minikube/" rel="nofollow noreferrer">official article</a>, and when I run the command <code>minikube start</code>, the error says I cannot pull the Docker images (k8s.gcr.io):</p>
<pre><code>tianyu@ubuntu:~$ minikube start
😄 minikube v0.34.1 on linux (amd64)
🔥 Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
📶 "minikube" IP address is 192.168.99.100
🌐 Found network options:
▪ HTTP_PROXY=http://127.0.0.1:35033
▪ HTTPS_PROXY=http://127.0.0.1:35033/
▪ NO_PROXY=localhost,127.0.0.0/8,::1
🐳 Configuring Docker as the container runtime ...
✨ Preparing Kubernetes environment ...
🚜 Pulling images required by Kubernetes v1.13.3 ...
❌ Unable to pull images, which may be OK: running cmd: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml: command failed: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml
stdout:
stderr: failed to pull image "k8s.gcr.io/kube-apiserver:v1.13.3": output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
: Process exited with status 1
🚀 Launching Kubernetes v1.13.3 using kubeadm ...
💣 Error starting cluster: kubeadm init:
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI
[init] Using Kubernetes version: v1.13.3
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.13.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.13.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.13.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
....
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
: Process exited with status 1
</code></pre>
<p>I'm in China, so I access the gcr.io registry through a VPN.
I followed this link (<a href="https://docs.docker.com/config/daemon/systemd/#httphttps-proxy" rel="nofollow noreferrer">https://docs.docker.com/config/daemon/systemd/#httphttps-proxy</a>), so I can pull images (k8s.gcr.io) with Docker. The info is as follows:</p>
<pre><code>tianyu@ubuntu:~$ sudo docker pull k8s.gcr.io/kube-apiserver:v1.13.3
v1.13.3: Pulling from kube-apiserver
73e3e9d78c61: Pull complete
d261e2f8ca5b: Pull complete
Digest: sha256:cdb80dc78f3c25267229012a33800b8b203e8e8b9fa59f9fe93e156cc859f89c
Status: Downloaded newer image for k8s.gcr.io/kube-apiserver:v1.13.3
tianyu@ubuntu:~$ sudo docker pull k8s.gcr.io/kube-controller-manager:v1.13.3
v1.13.3: Pulling from kube-controller-manager
73e3e9d78c61: Already exists
846fc1deb4d7: Pull complete
Digest: sha256:408350880946f037be82d6ad2ed7dc3746b221932f0eeb375aef935d62392031
Status: Downloaded newer image for k8s.gcr.io/kube-controller-manager:v1.13.3
</code></pre>
<p>Why can't I pull images via minikube?</p>
<p>Thanks for your time.</p>
| <p>I'm in China too. Please use these params to start minikube:</p>
<pre><code>--docker-env HTTP_PROXY=http://10.0.2.2:35033 --docker-env HTTPS_PROXY=http://10.0.2.2:35033
</code></pre>
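<p>Put together with the proxy from the question, the full command looks roughly like this — the NO_PROXY value is an assumption added so that cluster-internal traffic skips the proxy:</p>
<pre><code>minikube start \
  --docker-env HTTP_PROXY=http://10.0.2.2:35033 \
  --docker-env HTTPS_PROXY=http://10.0.2.2:35033 \
  --docker-env NO_PROXY=localhost,127.0.0.1,192.168.99.0/24
</code></pre>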
|
<p>I'm trying to deploy Logstash to Kubernetes. Before, I was running it with Docker Compose and it was working. In Compose I have:</p>
<pre><code> volumes:
- ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro
- ./logstash/pipeline:/usr/share/logstash/pipeline:ro
</code></pre>
<p>To deploy it on Kubernetes I created this Dockerfile:</p>
<pre><code>FROM docker.elastic.co/logstash/logstash-oss:6.2.3
ADD ./config/logstash.yml /usr/share/logstash/config/logstash.yml:ro
ADD ./pipeline /usr/share/logstash/pipeline:ro
</code></pre>
<p>But after running that image, Logstash doesn't see the config file (even locally).</p>
| <p>That Dockerfile works fine for me:</p>
<pre><code>FROM docker.elastic.co/logstash/logstash-oss:6.2.3
RUN rm -f /usr/share/logstash/pipeline/logstash.conf
ADD pipeline/ /usr/share/logstash/pipeline/
ADD config/ /usr/share/logstash/config/
</code></pre>
<p>I think the problem before was with overriding the logstash.conf file.</p>
|
<p>I am deploying a sample Spring Boot application using the fabric8 Maven plugin. The build fails with an SSLHandshakeException.</p>
<pre><code>F8: Cannot access cluster for detecting mode: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
Failed to execute goal io.fabric8:fabric8-maven-plugin:3.1.80.redhat-000010:build (default) on project fuse-camel-sb-rest: Execution default of goal io.fabric8:fabric8-maven-plugin:3.1.80.redhat-000010:build failed: An error has occurred. sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target -> [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal io.fabric8:fabric8-maven-plugin:3.1.80.redhat-000010:build (default) on project fuse-camel-sb-rest: Execution default of goal io.fabric8:fabric8-maven-plugin:3.1.80.redhat-000010:build failed: An error has occurred.
</code></pre>
<p>So, I downloaded the public certificate from the OpenShift web console and added it to the JVM using</p>
<pre><code>C:\...\jdk.\bin>keytool -import -alias rootcert -file C:\sample\RootCert.cer -keystore cacerts
</code></pre>
<p>and got a message that it was successfully added to the keystore; the list command shows the certificates added.</p>
<pre><code> C:\...\jdk.\bin>keytool -list -keystore cacerts
Enter keystore password:
Keystore type: JKS
Keystore provider: SUN
Your keystore contains 2 entries
rootcert, May 18, 2018, trustedCertEntry,
Certificate fingerprint (SHA1): XX:XX:XX:..........
</code></pre>
<p>But the mvn:fabric8 deploy build still fails with the same exception.</p>
<p>Can someone shed some light on this issue? Am I missing anything?</p>
| <p>The following works on MacOS:</p>
<p>The certificate to install is the one shown in the browser URL bar.
On Firefox (at least) click the padlock <a href="https://i.stack.imgur.com/5H4i9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5H4i9.png" alt="padlock icon"></a> to the left of the URL, then proceed down <em>Connection ></em> / <em>More Information</em> / <em>View Certificate</em> / <em>Details</em>; finally, <strong>Export...</strong> allows you to save the certificate locally.</p>
<p>On the command-line, determine which JRE maven is using:</p>
<pre><code>$ mvn --version
Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T19:33:14+01:00)
Maven home: /Users/.../apache-maven-3.5.4
Java version: 1.8.0_171, vendor: Oracle Corporation, runtime: /Library/Java/JavaVirtualMachines/jdk1.8.0_171.jdk/Contents/Home/jre
Default locale: en_GB, platform encoding: UTF-8
OS name: "mac os x", version: "10.12.6", arch: "x86_64", family: "mac"
</code></pre>
<p>You will likely need to be 'root' to update the cacerts file. </p>
<pre><code>$ sudo keytool -import -alias my-openshift-clustername -file /Users/.../downloads/my-cluster-cert.crt -keystore $JAVA_HOME/jre/lib/security/cacerts
Password: {your password for sudo}
Enter keystore password: {JRE cacerts password, default is changeit}
... keytool prints certificate details...
Trust this certificate? [no]: yes
Certificate was added to keystore
</code></pre>
<p>Verify that the certificate was indeed added successfully:</p>
<pre><code>$ keytool -list -keystore $JAVA_HOME/jre/lib/security/cacerts
Enter keystore password: changeit
Keystore type: JKS
Keystore provider: SUN
Your keystore contains 106 entries
...other entries, in no particular order...
my-openshift-clustername, 25-Feb-2019, trustedCertEntry,
Certificate fingerprint (SHA1): F4:17:3B:D8:E1:4E:0F:AD:16:D3:FF:0F:22:73:40:AE:A2:67:B2:AB
...other entries...
</code></pre>
|
<p><strong>Kubernetes</strong> is unable to launch a <strong>container</strong> using an <strong>image</strong> from a private gcr.io <strong>container registry</strong>.
The error says "ImagePullBackOff".
Both Kubernetes and Container registry are in the same Google Cloud project.</p>
| <p>The issue was with permissions.
It turns out that the <strong>service account</strong> that is used to launch <strong>Kubernetes</strong> needs read permissions for <em>Google Cloud Storage</em> (this is important because the <strong>registry</strong> itself uses <strong>buckets</strong> to store images).</p>
<p>Exact details <a href="https://cloud.google.com/container-registry/docs/access-control#granting_users_and_other_projects_access_to_a_registry" rel="nofollow noreferrer">here</a></p>
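<p>For example, read access can be granted on the registry's backing bucket — a sketch only; SERVICE_ACCOUNT and PROJECT_ID are placeholders, and the bucket name assumes the plain gcr.io registry:</p>
<pre><code>gsutil iam ch serviceAccount:SERVICE_ACCOUNT:objectViewer gs://artifacts.PROJECT_ID.appspot.com
</code></pre>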
|
<p>What is the significance of having this section - <code>spec.template.metadata</code>? It does not seem to be mandatory. However I would like to know where it would be very useful! Otherwise what is the point of repeating all the selectors?</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: hello
spec:
selector:
matchLabels:
app: hello
tier: backend
track: stable
replicas: 7
template:
metadata:
labels:
app: hello
tier: backend
track: stable
spec:
containers:
- name: hello
image: "gcr.io/google-samples/hello-go-gke:1.0"
ports:
- name: http
containerPort: 80
</code></pre>
| <p>What makes you think it is not required?</p>
<p>If you don't provide the <code>Metadata</code> for a deployment template, it will fail with a message like this:</p>
<pre><code>The Deployment "nginx" is invalid: spec.template.metadata.labels:
Invalid value: map[string]string(nil): `selector` does not match template `lab
els`
</code></pre>
<p>Or if the metadata does not match the selector, will fail with a message like this:</p>
<pre><code>The Deployment "nginx" is invalid: spec.template.metadata.labels:
Invalid value: map[string]string{"run":"nginxv1"}: `selector` does not match template `labels`
</code></pre>
<p>Also, if you do not provide the <code>selector</code> it will error with a message like this:</p>
<pre><code>error validating "STDIN": error validating data: ValidationError(Deployment.spec):
missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec;
if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<p>The yaml used is the below:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
run: nginx
name: nginx
spec:
replicas: 2
selector:
matchLabels:
run: nginx
strategy: {}
template:
metadata:
labels:
run: nginxv1
spec:
containers:
- image: nginx
name: nginx
</code></pre>
<p>When you read the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment" rel="nofollow noreferrer">docs</a>, the description for the <code>selector</code> says:</p>
<blockquote>
<p>The selector field defines how the Deployment finds which Pods to manage. In this case, you simply select a label that is defined in the Pod template (app: nginx). However, more sophisticated selection rules are possible, <strong>as long as the Pod template itself satisfies the rule</strong>.</p>
</blockquote>
<h1><a href="https://kubernetes.io/docs/reference/federation/v1/definitions/#_v1_objectmeta" rel="nofollow noreferrer">Metadata</a></h1>
<p>Most objects in Kubernetes have metadata; it is responsible for storing information about the resource, such as name, labels, annotations and so on.</p>
<p>When you create a Deployment, the template is needed for the creation and update of the ReplicaSet and Pods. They need to match the selector, otherwise you would end up with orphan resources around your cluster, and the metadata stores the data used to link them.</p>
<p>It was designed this way so that resources are loosely coupled from each other. If you make a simple change to the label of a Pod created by this Deployment/ReplicaSet, you will notice that the old Pod keeps running, but a new one is created, because the old one no longer matches the selector, and the ReplicaSet creates a new one to keep the desired number of replicas.</p>
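<p>You can see this decoupling in action by relabeling one of the Pods of a (correctly matching) Deployment and watching the ReplicaSet create a replacement — the pod name below is a placeholder:</p>
<pre><code>kubectl label pod <pod-name> run=something-else --overwrite
kubectl get pods -l run=nginx   # a new pod appears; the relabeled one keeps running
</code></pre>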
|
<p>I want to measure overall k8s cluster CPU/memory usage (not per-pod usage) with Prometheus, so that I can show it in Grafana.</p>
<p>I use <code>sum (container_memory_usage_bytes{id="/"})</code> to get the k8s cluster's used memory, and <code>topk(1, sum(kube_node_status_capacity_memory_bytes) by (instance))</code> to get the whole k8s cluster's memory, but they cannot be divided, since the <code>topk</code> function does not return a scalar value but a vector.</p>
<p>How can I do this?</p>
| <p>I have installed Prometheus on Google Cloud through the gcloud default applications. The dashboards were deployed automatically with the installation. The following queries are what is used for memory and CPU usage of the cluster:</p>
<p>CPU usage by namespace:</p>
<pre><code>sum(irate(container_cpu_usage_seconds_total[1m])) by (namespace)
</code></pre>
<p>Memory usage (no cache) by namespace:</p>
<pre><code>sum(container_memory_rss) by (namespace)
</code></pre>
<p>CPU request commitment:</p>
<pre><code>sum(kube_pod_container_resource_requests_cpu_cores) / sum(node:node_num_cpu:sum)
</code></pre>
<p>Memory request commitment:</p>
<pre><code>sum(kube_pod_container_resource_requests_memory_bytes) / sum(node_memory_MemTotal)
</code></pre>
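<p>To answer the division question directly: instead of <code>topk</code>, dividing one <code>sum()</code> by another works, because both sides are then single-element vectors with matching (empty) label sets. A sketch, assuming the metric names from the question are available:</p>
<pre><code>sum(container_memory_usage_bytes{id="/"})
  / sum(kube_node_status_capacity_memory_bytes)
</code></pre>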
|
<p>I am following this tutorial: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/</a></p>
<p>I have created the memory pod demo and I am trying to get the metrics from the pod but it is not working.</p>
<p>I installed the metrics server by cloning: <a href="https://github.com/kubernetes-incubator/metrics-server" rel="noreferrer">https://github.com/kubernetes-incubator/metrics-server</a></p>
<p>And then running this command from top level:</p>
<pre><code>kubectl create -f deploy/1.8+/
</code></pre>
<p>I am using kubernetes version 1.10.11.</p>
<p>The pod is definitely created:</p>
<pre><code>λ kubectl get pod memory-demo --namespace=mem-example
NAME READY STATUS RESTARTS AGE
memory-demo 1/1 Running 0 6m
</code></pre>
<p>But the metrics command does not work and gives an error:</p>
<pre><code>λ kubectl top pod memory-demo --namespace=mem-example
Error from server (NotFound): podmetrics.metrics.k8s.io "mem-example/memory-demo" not found
</code></pre>
<p>What did I do wrong?</p>
| <p>There are some patches to be done to the metrics-server deployment to get the metrics working.</p>
<h1>Follow the steps below</h1>
<pre><code>kubectl delete -f deploy/1.8+/
# wait till the metrics server gets undeployed, then run:
kubectl create -f https://raw.githubusercontent.com/epasham/docker-repo/master/k8s/metrics-server.yaml
</code></pre>
<pre><code>master $ kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-78fcdf6894-6zg78 1/1 Running 0 2h
coredns-78fcdf6894-gk4sb 1/1 Running 0 2h
etcd-master 1/1 Running 0 2h
kube-apiserver-master 1/1 Running 0 2h
kube-controller-manager-master 1/1 Running 0 2h
kube-proxy-f5z9p 1/1 Running 0 2h
kube-proxy-ghbvn 1/1 Running 0 2h
kube-scheduler-master 1/1 Running 0 2h
metrics-server-85c54d44c8-rmvxh 2/2 Running 0 1m
weave-net-4j7cl 2/2 Running 1 2h
weave-net-82fzn 2/2 Running 1 2h
</code></pre>
<pre><code>master $ kubectl top pod -n kube-system
NAME CPU(cores) MEMORY(bytes)
coredns-78fcdf6894-6zg78 2m 11Mi
coredns-78fcdf6894-gk4sb 2m 9Mi
etcd-master 14m 90Mi
kube-apiserver-master 24m 425Mi
kube-controller-manager-master 26m 62Mi
kube-proxy-f5z9p 2m 19Mi
kube-proxy-ghbvn 3m 17Mi
kube-scheduler-master 8m 14Mi
metrics-server-85c54d44c8-rmvxh 1m 19Mi
weave-net-4j7cl 2m 59Mi
weave-net-82fzn 1m 60Mi
</code></pre>
<p>Check and verify the lines below in the metrics-server deployment manifest.</p>
<pre><code> command:
- /metrics-server
- --metric-resolution=30s
- --kubelet-preferred-address-types=InternalIP
- --kubelet-insecure-tls
</code></pre>
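<p>Once the metrics-server pod is running it can take a minute or two to collect data; after that the command from the question should start returning values:</p>
<pre><code>kubectl top pod memory-demo --namespace=mem-example
</code></pre>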
|
<p>I'm currently running on AWS and use <a href="https://github.com/kube-aws/kube-spot-termination-notice-handler" rel="noreferrer">kube-aws/kube-spot-termination-notice-handler</a> to intercept an AWS spot termination notice and gracefully evict the pods.</p>
<p>I'm reading <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/preemptible-vm" rel="noreferrer">this GKE documentation page</a> and I see:</p>
<blockquote>
<p>Preemptible instances terminate after 30 seconds upon receiving a preemption notice.</p>
</blockquote>
<p>Going into the Compute Engine documentation, I see that a ACPI G2 Soft Off is sent 30 seconds before the termination happens but <a href="https://github.com/kubernetes/kubernetes/issues/22494" rel="noreferrer">this issue</a> suggests that the kubelet itself doesn't handle it.</p>
<p><strong>So, how does GKE handle preemption?</strong> Will the node do a drain/cordon operation or does it just do a hard shutdown?</p>
| <p><strong>More recent and relevant answer</strong></p>
<p>There's a GitHub project (not mine) that catches this ACPI signal and has the node cordon and drain itself, and then restart itself, which in our tests results in a much cleaner preemption experience; it's almost not noticeable with highly available deployments on your cluster.</p>
<p>See: <a href="https://github.com/GoogleCloudPlatform/k8s-node-termination-handler" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/k8s-node-termination-handler</a></p>
|
<p>In Kubernetes, we have ClusterIP/NodePort/LoadBalancer as the <strong>service</strong> types to expose pods.
When there are <strong>multiple</strong> endpoints bound to one service (like a deployment), what policy does Kubernetes use to route the traffic to one of the endpoints? Will it always try to respect a <code>load balancing</code> policy, or is the selection random?</p>
| <p>Kubernetes uses <a href="https://en.wikipedia.org/wiki/Iptables" rel="noreferrer">iptables</a> to distribute traffic across a set of pods, as officially <a href="https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies" rel="noreferrer">explained by kubernetes.io</a>. Basically what happens is when you create a <code>kind: service</code> object, K8s creates a virtual ClusterIP and instructs the kube-proxy daemonset to update iptables on each node so that requests matching that virtual IP will get load balanced across a set of pod IPs. The word "virtual" here means that ClusterIPs, unlike pod IPs, are not real IP addresses allocated by a network interface, and are merely used as a "filter" to match traffic and forward them to the right destination.</p>
<p>Kubernetes documentation says the load balancing method by default is round robin, but this is not entirely accurate. If you look at iptables on any of the worker nodes, you can see that for a given service <code>foo</code> with ClusterIP of 172.20.86.5 and 3 pods, the [overly simplified] iptables rules look like this:</p>
<pre><code>$ kubectl get service foo
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
foo ClusterIP 172.20.86.5 <none> 443:30937/TCP 12m
</code></pre>
<pre><code>Chain KUBE-SERVICES (2 references)
target prot opt source destination
KUBE-SVC-4NIQ26WEGJLLPEYD tcp -- anywhere 172.20.86.5 /* default/foo:https cluster IP */ tcp dpt:https
</code></pre>
<p>This <code>KUBE-SERVICES</code> chain rule looks for all traffic whose <code>destination</code> is 172.20.86.5, and applies rules defined in another chain called <code>KUBE-SVC-4NIQ26WEGJLLPEYD</code>:</p>
<pre><code>Chain KUBE-SVC-4NIQ26WEGJLLPEYD (2 references)
target prot opt source destination
KUBE-SEP-4GQBH7D5EV5ANHLR all -- anywhere anywhere /* default/foo:https */ statistic mode random probability 0.33332999982
KUBE-SEP-XMNJYETXA5COSMOZ all -- anywhere anywhere /* default/foo:https */ statistic mode random probability 0.50000000000
KUBE-SEP-YGQ22DTWGVO4D4MM all -- anywhere anywhere /* default/foo:https */
</code></pre>
<p>This chain uses <code>statistic mode random probability</code> to randomly send traffic to one of the three chains defined (since I have three pods, I have three chains here each with 33.3% chance of being chosen to receive traffic). Each one of these chains is the final rule in sending the traffic to the backend pod IP. For example looking at the first one:</p>
<pre><code>Chain KUBE-SEP-4GQBH7D5EV5ANHLR (1 references)
target prot opt source destination
DNAT tcp -- anywhere anywhere /* default/foo:https */ tcp to:10.100.1.164:12345
</code></pre>
<p>the <code>DNAT</code> directive forwards packets to IP address 10.100.1.164 (real pod IP) and port 12345 (which is what <code>foo</code> listens on). The other two chains (<code>KUBE-SEP-XMNJYETXA5COSMOZ</code> and <code>KUBE-SEP-YGQ22DTWGVO4D4MM</code>) are similar except each will have a different IP address.</p>
<p>Similarly, if your service type is <code>NodePort</code>, Kubernetes assigns a random port (from 30000-32767 by default) on the node. What's interesting here is that there is no process on the worker node actively listening on this port - instead, this is yet another iptables rule to match traffic and send it to the right set of pods:</p>
<pre><code>Chain KUBE-NODEPORTS (1 references)
target prot opt source destination
KUBE-SVC-4NIQ26WEGJLLPEYD tcp -- anywhere anywhere /* default/foo:https */ tcp dpt:30937
</code></pre>
<p>This rule matches inbound traffic going to port 30937 (<code>tcp dpt:30937</code>), and forwards it to chain <code>KUBE-SVC-4NIQ26WEGJLLPEYD</code>. But guess what: <code>KUBE-SVC-4NIQ26WEGJLLPEYD</code> is the same exact chain that cluster ip 172.20.86.5 matches on and sends traffic to, as shown above.</p>
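<p>If you want to inspect these rules on your own cluster, they can be listed on any worker node (a sketch; the chain names are generated and will differ per service):</p>
<pre><code># find the chain for a given service
sudo iptables -t nat -L KUBE-SERVICES -n | grep my-service

# list the per-endpoint rules of that service chain
sudo iptables -t nat -L KUBE-SVC-XXXXXXXXXXXXXXXX -n
</code></pre>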
|
<p>I have deployed a MongoDB, a Spring Boot BE, and an Angular app within GKE. My FE service is a load balancer; it needs to connect to my BE to get data, but I'm getting a console error in my browser: <strong>GET <a href="http://contactbe.default.svc.cluster.local/contacts" rel="nofollow noreferrer">http://contactbe.default.svc.cluster.local/contacts</a> net::ERR_NAME_NOT_RESOLVED</strong>. My FE needs to consume the <em>/contacts</em> endpoint to get data. I'm using the DNS name of my BE service (contactbe.default.svc.cluster.local) within my Angular app. This is the yml file that I used to create my deployment:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mongo
labels:
run: mongo
spec:
type: NodePort
ports:
- port: 27017
targetPort: 27017
protocol: TCP
selector:
run: mongo
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: mongo
spec:
template:
metadata:
labels:
run: mongo
spec:
containers:
- name: mongo
image: mongo
ports:
- containerPort: 27017
---
apiVersion: v1
kind: Service
metadata:
name: contactbe
labels:
app: contactbe
spec:
type: NodePort
ports:
- port: 8181
targetPort: 8181
protocol: TCP
selector:
app: contactbe
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: contactbe
spec:
template:
metadata:
labels:
app: contactbe
spec:
containers:
- name: contactbe
image: glgelopfalcon/k8s_contactbe:latest
ports:
- containerPort: 8181
---
apiVersion: apps/v1beta1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: angular-deployment
spec:
selector:
matchLabels:
app: angular
replicas: 2 # tells deployment to run 2 pods matching the template
template:
metadata:
labels:
app: angular
spec:
containers:
- name: angular
image: glgelopfalcon/my-angular-app:latest
ports:
- containerPort: 80
---
# https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service
kind: Service
apiVersion: v1
metadata:
name: angular-service
spec:
selector:
app: angular
ports:
- port: 80
targetPort: 80
protocol: TCP
type: LoadBalancer
</code></pre>
<p>I googled a lot but I still have not found how to solve this problem. I would appreciate it if someone could give a hand. </p>
<p><a href="https://i.stack.imgur.com/O2aBS.png" rel="nofollow noreferrer">Console error</a></p>
| <p>Check that your load balancer has port 27017 open, as you are sending requests to port 27017.</p>
<p>Otherwise, you can change the service's port to 80 and keep the target port the same.</p>
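<p>As a sketch of that suggestion, the backend Service from the question could expose port 80 while still targeting the container's 8181 (only the <code>port</code> field changes):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: contactbe
spec:
  type: NodePort
  ports:
  - port: 80          # port the Service listens on
    targetPort: 8181  # port the Spring Boot container listens on
    protocol: TCP
  selector:
    app: contactbe
</code></pre>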
|
<p>Are there any working samples of using cert-manager on AKS with an Nginx ingress where multiple domains have been granted SSL via LetsEncrypt, and then those dns names are directed to separate containers?</p>
<p>I’ve had a single SSL setup for a while, but upon adding a second everything stopped working. </p>
<p>I have several clusters I’ll need to apply this to, so I’m hoping to find a bullet-proof example. </p>
| <p>I don't think it should matter (I didn't really test this), but if you add two individual Ingress resources with different domains/secrets, it should work (at least I don't see any reason why it shouldn't):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: tls-example-ingress
annotations:
ingress.kubernetes.io/ssl-redirect: "true"
kubernetes.io/tls-acme: "true"
certmanager.k8s.io/issuer: letsencrypt-production
    kubernetes.io/ingress.class: "nginx"
spec:
tls:
- hosts:
- sslexample.foo.com
secretName: testsecret-tls
rules:
- host: sslexample.foo.com
http:
paths:
- path: /
backend:
serviceName: service1
servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: tls-example-ingress
annotations:
ingress.kubernetes.io/ssl-redirect: "true"
kubernetes.io/tls-acme: "true"
certmanager.k8s.io/issuer: letsencrypt-production
    kubernetes.io/ingress.class: "nginx"
spec:
tls:
- hosts:
- sslexample1.foo.com
secretName: testsecret-tls1
rules:
- host: sslexample1.foo.com
http:
paths:
- path: /
backend:
serviceName: service2
servicePort: 80
</code></pre>
<p><code>tls</code> is an array, so it should take more than one item. I'm not sure about the interaction with cert-manager, though:</p>
<pre><code>tls:
- hosts:
- sslexample.foo.com
secretName: testsecret-tls
- hosts:
- sslexample1.foo.com
secretName: testsecret1-tls
</code></pre>
|
<p>I am testing out some Terraform code to create a Kubernetes cluster so I chose the smallest/cheapest VM</p>
<pre><code>resource "azurerm_kubernetes_cluster" "k8s" {
name = "${var.cluster_name}"
location = "${azurerm_resource_group.resource_group.location}"
resource_group_name = "${azurerm_resource_group.resource_group.name}"
dns_prefix = "${var.dns_prefix}"
agent_pool_profile {
name = "agentpool"
count = "${var.agent_count}"
vm_size = "Standard_B1s"
os_type = "Linux"
os_disk_size_gb = "${var.agent_disk_size}"
}
service_principal {
client_id = "${var.client_id}"
client_secret = "${var.client_secret}"
}
}
</code></pre>
<p>However, when I <code>terraform apply</code> I get this error message back from azure:</p>
<blockquote>
<p>"The VM SKU chosen for this cluster <code>Standard_B1s</code> does not have enough CPU/memory to run as an AKS node."</p>
</blockquote>
<p>How do I list the valid VM SKUs for AKS nodes and sort them by cost?</p>
| <p>You need to select an instance with <strong>at least 3.5 GB of memory</strong>. Read <em>A note on node size</em> from this <a href="https://stackify.com/azure-container-service-kubernetes/" rel="noreferrer">blog</a>. You can list the VM sizes and prices on the <a href="https://azure.microsoft.com/en-ca/pricing/details/virtual-machines/linux/" rel="noreferrer">Azure sales site</a>.</p>
<p>Currently, the cheapest is <code>Standard_B2s</code> with 4 GB RAM. You can also sort it directly in the Azure portal.
<a href="https://i.stack.imgur.com/p3uFf.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/p3uFf.jpg" alt="enter image description here"></a></p>
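<p>If you prefer the CLI, the sizes available in a region can be listed with the Azure CLI (this output does not include prices, so cost still has to be looked up on the pricing page); the region below is just an example:</p>
<pre><code>az vm list-sizes --location westeurope --output table
</code></pre>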
|
<p>I am deploying k8s in a private lab and using --external-ip option in the k8s service:</p>
<pre><code>Name: my-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=myapp
Type: ClusterIP
IP: 10.98.4.250
External IPs: 10.10.16.21
Port: http 80/TCP
TargetPort: 80/TCP
Endpoints: 192.168.237.3:80
Session Affinity: None
Events: <none>
user@k8s-master:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 22h
my-service ClusterIP 10.98.4.250 10.10.16.21 80/TCP 7m
</code></pre>
<p>But I can only curl the endpoint from the same node (k8s-master) via the External IP. If I use other node (which is the same subnet as k8s-master), curl will not work.</p>
<p>Running tcpdump and I can see the http request is coming thru but there is no reply.</p>
<p>How does External IP work in service?</p>
| <p>If you check the kubectl source code in <a href="https://github.com/kubernetes/kubernetes/search?q=external-ip&unscoped_q=external-ip" rel="nofollow noreferrer">github</a>, you would find that <code>external-ip</code> is only documented in <code>kubectl expose</code> with the description as following:</p>
<blockquote>
<p>cmd.Flags().String("external-ip", "", i18n.T("Additional external IP
address (not managed by Kubernetes) to accept for the service. If this
IP is routed to a node, the service can be accessed by this IP in
addition to its generated service IP."))</p>
</blockquote>
<p>When you add the <code>external-ip</code> parameter to a service, Kubernetes will likely do nothing, as it manages neither the behavior of this IP nor its routing inside the cluster. It just assumes that this IP is routed to a node in the cluster, and it is treated the same as the generated service IP. </p>
<p>To have an external IP working, a configuration similar to Service type <code>LoadBalancer</code> or <code>NodePort</code> should be applied. I think it is mostly there for annotation purposes.</p>
|
<p>I have a pretty simple Kubernetes pod. I want a stateful set and want the following process:</p>
<ol>
<li>I want to have an initcontainer download and uncompress a tarball from s3 into a volume mounted to the initcontainer</li>
<li>I want to mount that volume to my main container to be used </li>
</ol>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: app
namespace: test
labels:
name: app
spec:
serviceName: app
replicas: 1
selector:
matchLabels:
app: app
template:
metadata:
labels:
app: app
spec:
initContainers:
- name: preparing
image: alpine:3.8
imagePullPolicy: IfNotPresent
command:
- "sh"
- "-c"
- |
echo "Downloading data"
wget https://s3.amazonaws.com/.........
tar -xvzf xxxx-........ -C /root/
volumeMounts:
- name: node-volume
mountPath: /root/data/
containers:
- name: main-container
image: ecr.us-west-2.amazonaws.com/image/:latest
imagePullPolicy: Always
volumeMounts:
- name: node-volume
mountPath: /root/data/
volumeClaimTemplates:
- metadata:
name: node-volume
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: gp2-b
resources:
requests:
storage: 80Gi
</code></pre>
<p>I continue to get the following error:</p>
<p>At first, when I run this, I can see the logs of my tarball being downloaded by the initContainer. About halfway through, it terminates and gives me the following error:</p>
<pre><code>Multi-Attach error for volume "pvc-faedc8" Volume is
already exclusively attached to one node and can't be
attached to another
</code></pre>
| <p>Looks like you have a dangling <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims" rel="noreferrer">PVC</a> and/or <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="noreferrer">PV</a> that is attached to one of your nodes. You can ssh into the node and run a <code>df</code> or <code>mount</code> to check.</p>
<p>If you look at <a href="https://stackoverflow.com/questions/46442238/can-i-rely-on-volumeclaimtemplates-naming-convention">this</a> the PVCs in a StatefulSet are always mapped to their pod names, so it may be possible that you still have a dangling pod(?)</p>
<p>If you have a dangling pod:</p>
<pre><code>$ kubectl -n test delete pod <pod-name>
</code></pre>
<p>You may have to force it:</p>
<pre><code>$ kubectl -n test delete pod <pod-name> --grace-period=0 --force
</code></pre>
<p>Then, you can try deleting the PVC and it's corresponding PV:</p>
<pre><code>$ kubectl delete pvc pvc-faedc8
$ kubectl delete pv <pv-name>
</code></pre>
|
<p>I'm running a kubernetes cluster on docker-desktop (mac).</p>
<p>It has a local docker registry inside it.</p>
<p>I'm able to query the registry no problem via the API calls to get the list of tags.</p>
<p>I was able to push an image before, but it took multiple attempts to push.</p>
<p>I can't push new changes now. It looks like it pushes successfully for layers, but then doesn't acknowledge the layer has been pushed and then retries.</p>
<p>Repo is called localhost:5000 and I am correctly port forwarding as per instructions on <a href="https://blog.hasura.io/sharing-a-local-registry-for-minikube-37c7240d0615/" rel="noreferrer">https://blog.hasura.io/sharing-a-local-registry-for-minikube-37c7240d0615/</a></p>
<p>I'm not using SSL certs as this is for development on a local machine.</p>
<p>(The port forwarding is proven to work otherwise API call would fail)</p>
<pre><code>e086a4af6e6b: Retrying in 1 second
35c20f26d188: Layer already exists
c3fe59dd9556: Pushing [========================> ] 169.3MB/351.5MB
6ed1a81ba5b6: Layer already exists
a3483ce177ce: Retrying in 16 seconds
ce6c8756685b: Layer already exists
30339f20ced0: Retrying in 1 second
0eb22bfb707d: Pushing [==================================================>] 45.18MB
a2ae92ffcd29: Waiting
received unexpected HTTP status: 502 Bad Gateway
</code></pre>
<p><em>Workaround (this will suffice but is not ideal, as I have to build each container):</em>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: producer
namespace: aetasa
spec:
containers:
- name: kafkaproducer
image: localhost:5000/aetasa/cta-user-create-app
imagePullPolicy: Never // this line uses the built container in docker
ports:
- containerPort: 5005
</code></pre>
<p>Kubectl logs for registry </p>
<pre><code>10.1.0.1 - - [20/Feb/2019:19:18:03 +0000] "POST /v2/aetasa/cta-user-create-app/blobs/uploads/ HTTP/1.1" 202 0 "-" "docker/18.09.2 go/go1.10.6 git-commit/6247962 kernel/4.9.125-linuxkit os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.2 \x5C(darwin\x5C))" "-"
2019/02/20 19:18:03 [warn] 12#12: *293 a client request body is buffered to a temporary file /var/cache/nginx/client_temp/0000000011, client: 10.1.0.1, server: localhost, request: "PATCH /v2/aetasa/cta-user-create-app/blobs/uploads/16ad0e41-9af3-48c8-bdbe-e19e2b478278?_state=qjngrtaLCTal-7-hLwL9mvkmhOTHu4xvOv12gxYfgPx7Ik5hbWUiOiJhZXRhc2EvY3RhLXVzZXItY3JlYXRlLWFwcCIsIlVVSUQiOiIxNmFkMGU0MS05YWYzLTQ4YzgtYmRiZS1lMTllMmI0NzgyNzgiLCJPZmZzZXQiOjAsIlN0YXJ0ZWRBdCI6IjIwMTktMDItMjBUMTk6MTg6MDMuMTU2ODYxNloifQ%3D%3D HTTP/1.1", host: "localhost:5000"
2019/02/20 19:18:03 [error] 12#12: *293 connect() failed (111: Connection refused) while connecting to upstream, client: 10.1.0.1, server: localhost, request: "PATCH /v2/aetasa/cta-user-create-app/blobs/uploads/16ad0e41-9af3-48c8-bdbe-e19e2b478278?_state=qjngrtaLCTal-7-hLwL9mvkmhOTHu4xvOv12gxYfgPx7Ik5hbWUiOiJhZXRhc2EvY3RhLXVzZXItY3JlYXRlLWFwcCIsIlVVSUQiOiIxNmFkMGU0MS05YWYzLTQ4YzgtYmRiZS1lMTllMmI0NzgyNzgiLCJPZmZzZXQiOjAsIlN0YXJ0ZWRBdCI6IjIwMTktMDItMjBUMTk6MTg6MDMuMTU2ODYxNloifQ%3D%3D HTTP/1.1", upstream: "http://10.104.68.90:5000/v2/aetasa/cta-user-create-app/blobs/uploads/16ad0e41-9af3-48c8-bdbe-e19e2b478278?_state=qjngrtaLCTal-7-hLwL9mvkmhOTHu4xvOv12gxYfgPx7Ik5hbWUiOiJhZXRhc2EvY3RhLXVzZXItY3JlYXRlLWFwcCIsIlVVSUQiOiIxNmFkMGU0MS05YWYzLTQ4YzgtYmRiZS1lMTllMmI0NzgyNzgiLCJPZmZzZXQiOjAsIlN0YXJ0ZWRBdCI6IjIwMTktMDItMjBUMTk6MTg6MDMuMTU2ODYxNloifQ%3D%3D", host: "localhost:5000"
</code></pre>
| <p>Try configuring <code>--max-concurrent-uploads=1</code> for your Docker client. You are pushing quite large layers (350MB), so you are probably hitting some limits (request sizes, timeouts) somewhere. A single concurrent upload may help you, but it is only a workaround. The real solution will eventually be configuration (buffer sizes, timeouts, ...) of the registry plus the reverse proxy in front of it.</p>
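<p>One way to set this is through the Docker daemon configuration followed by a daemon restart; as an assumption about your setup, on Linux this lives in <code>/etc/docker/daemon.json</code>, while Docker Desktop for Mac exposes the same JSON under its daemon preferences:</p>
<pre><code>{
  "max-concurrent-uploads": 1
}
</code></pre>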
|
<p>Does Kubernetes implement its own container or use Docker containers or Both?</p>
<p>Can Kubernetes implement a container that is not a Docker container?</p>
| <p>Kubernetes is a cluster technology and a container orchestration tool. It helps in deploying containers, managing their life cycle, rolling updates, rollbacks, scaling up, scaling down, networking, routing and much more: everything you need to run your application services inside containers.</p>
<p>Docker is a virtualization technology that bundles the app, its runtime environment and its dependencies together into an image that can be deployed as a container.</p>
<p>K8s under the hood uses Docker to deploy containers. In addition to Docker, other container technologies like rkt and CRI-O are also supported.</p>
|
<p>I am using the NetworkPolicy below to allow egress on HTTP and HTTPS ports, but running <code>wget https://google.com</code> doesn't work when the network policy is applied. The domain name is resolved (DNS egress rule works) but connecting to the external host times out.</p>
<p>I've tried on minikube with cilium and on Azure with azure-npm in case it was some quirk with the network policy controller, but it behaves the same on both. I'm confused since I use the same method for DNS egress (which works) but this fails for other ports.</p>
<p>What's preventing egress on HTTP/HTTPS ports?</p>
<p>Kubernetes version 1.11.5</p>
<pre><code>apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
name: my-netpolicy
spec:
egress:
- ports:
- port: 53
protocol: UDP
- port: 53
protocol: TCP
- ports:
- port: 443
protocol: UDP
- port: 443
protocol: TCP
- port: 80
protocol: UDP
- port: 80
protocol: TCP
podSelector:
matchLabels:
my-label: my-app
</code></pre>
<p>(Yes, the UDP rules are probably unnecessary, but trying everything here)</p>
<p>(I've also tried <code>wget</code> on a private server in case Google/etc. block Azure IPs, same result)</p>
<p>(I've also tried matching ingress rules because "why not", same result)</p>
<hr>
<p><code>kubectl describe</code> on the network policy:</p>
<pre><code>Name: my-netpolicy
Namespace: default
Created on: 2019-01-21 19:00:04 +0000 UTC
Labels: ...
Annotations: <none>
Spec:
PodSelector: ...
Allowing ingress traffic:
To Port: 8080/TCP
From: <any> (traffic not restricted by source)
----------
To Port: https/UDP
To Port: https/TCP
To Port: http/TCP
To Port: http/UDP
From: <any> (traffic not restricted by source)
Allowing egress traffic:
To Port: 53/UDP
To Port: 53/TCP
To: <any> (traffic not restricted by source)
----------
To Port: https/UDP
To Port: https/TCP
To Port: http/UDP
To Port: http/TCP
To: <any> (traffic not restricted by source)
Policy Types: Ingress, Egress
</code></pre>
<hr>
<p>Minimal reproducible example:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: netpolicy-poc-pod
labels:
name: netpolicy-poc-pod
spec:
containers:
- name: poc
image: ubuntu:18.04
command: ["bash", "-c", "while true; do sleep 1000; done"]
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: netpolicy-poc
spec:
podSelector:
matchLabels:
name: netpolicy-poc-pod
egress:
- ports:
- port: 80
protocol: UDP
- port: 80
protocol: TCP
- port: 443
protocol: UDP
- port: 443
protocol: TCP
- port: 53
protocol: UDP
- port: 53
protocol: TCP
ingress: []
</code></pre>
<p>Then:</p>
<pre><code>kubectl exec -it netpolicy-poc /bin/bash
apt update
apt install wget -y
wget https://google.com
</code></pre>
| <p>Turns out the policy I gave works fine, it's just that the controllers implementing the policy had some bugs. On Minikube+Cilium it just didn't work for IPv6 but worked fine for IPv4, and on AKS the feature is still generally in beta and there are other options that we could try. I haven't found anything on my specific issue when using the azure-npm implementation but since it works fine in Minikube on IPv4 I'll assume that it would work fine in Azure as well once a "working" controller is set up.</p>
<p>Some resources I found for the Azure issue:</p>
<ul>
<li><a href="https://github.com/Azure/AKS/issues/435" rel="nofollow noreferrer">https://github.com/Azure/AKS/issues/435</a></li>
<li><a href="https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md" rel="nofollow noreferrer">https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md</a></li>
<li><a href="https://learn.microsoft.com/en-us/azure/aks/use-network-policies" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/use-network-policies</a></li>
</ul>
|
<p>Unfortunately I lost my local </p>
<blockquote>
<p>~/.kube/config</p>
</blockquote>
<p>where I had configuration for my namespace.</p>
<p>Is there a way to get this config if I have access to master nodes?</p>
<p>Thanks in advance</p>
| <p>I believe you're using kubeadm to start your Kubernetes cluster. You can generate a new kubeconfig file using the following command:</p>
<pre><code>kubeadm alpha phase kubeconfig admin --kubeconfig-dir /etc/kubernetes --cert-dir /etc/kubernetes/pki
</code></pre>
<p>This will generate a new config file in <code>/etc/kubernetes/admin.conf</code>. Then you can copy the file in following way:</p>
<pre><code>mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</code></pre>
|
<p>How does a Kubernetes pod get the IP, instead of the container, given that the CNI plugin works at the container level?</p>
<p>How do all containers of the same pod share the same network stack?</p>
| <p>Containers use a kernel feature called a <strong><em>virtual network interface</em></strong>. A virtual network interface (let's name it <strong>veth0</strong>) is created and then assigned to a namespace. When a container is created, it is also assigned to a namespace; when multiple containers are created within the same namespace, only a single network interface, veth0, is created.</p>
<p>A pod is just the term used to specify a set of resources and features, one of them being the namespace and the containers running in it.</p>
<p>When you say the pod gets an IP, what actually gets the IP is the veth0 interface; container apps see veth0 the same way applications outside a container see a single physical network card on a server.</p>
<p>CNI is just the technical specification of how this should work, so that multiple network plugins can be used without changes to the platform. The process above should be the same for all network plugins.</p>
<p>There is a nice explanation in <a href="https://medium.com/google-cloud/understanding-kubernetes-networking-pods-7117dd28727" rel="nofollow noreferrer">this blog post</a></p>
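<p>A rough way to see the same mechanism outside of Kubernetes is to create a network namespace and a veth pair by hand (a sketch using iproute2; all names and the address are arbitrary):</p>
<pre><code># create a network namespace, similar to what a pod sandbox gets
sudo ip netns add demo-pod

# create a veth pair and move one end into the namespace
sudo ip link add veth0 type veth peer name veth0-host
sudo ip link set veth0 netns demo-pod

# give the in-namespace end an address: this plays the role of "the pod IP"
sudo ip netns exec demo-pod ip addr add 10.244.0.10/24 dev veth0
sudo ip netns exec demo-pod ip link set veth0 up
</code></pre>
<p>Any process started inside that namespace sees the same veth0, which is why all containers of one pod share one IP and one network stack.</p>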
|
<p>I have 3 different Kubernetes Secrets and I want to mount each one into its own Pod managed by a StatefulSet with 3 replicas.</p>
<p>Is it possible to configure the StatefulSet such that each Secret is mounted into its own Pod?</p>
| <p>Not really. A StatefulSet (and any workload controller for that matter) allows only a single pod definition template (it could have multiple containers). The issue with this is that a StatefulSet is designed to have N replicas from one template, so there is no built-in way to map N different Secrets to the pods one-to-one. It would have to be a "SecretStatefulSet": a different controller. </p>
<p>Some solutions:</p>
<ul>
<li><p>You could define a single <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">Kubernetes secret</a> that contains all your required secrets for all of your pods. The downside is that you will have to share the secret between the pods. For example:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
data:
pod1: xxx
pod2: xxx
pod3: xxx
...
podN: xxx
</code></pre></li>
<li><p>Use something like <a href="https://www.vaultproject.io/" rel="nofollow noreferrer">Hashicorp's Vault</a> and store your secret remotely with keys such as <code>pod1</code>, <code>pod2</code>, <code>pod3</code>,...<code>podN</code>. You can also use an <a href="https://en.wikipedia.org/wiki/Hardware_security_module" rel="nofollow noreferrer">HSM</a>. This seems to be the more solid solution IMO but it might take longer to implement.</p></li>
</ul>
<p>In all cases, you will have to make sure that the number of secrets matches your number of pods in your StatefulSet.</p>
|
<p>I'm following the tutorial in <a href="https://github.com/kubernetes/kops/blob/master/docs/aws.md" rel="nofollow noreferrer">https://github.com/kubernetes/kops/blob/master/docs/aws.md</a>
with a bootstrap EC2 instance with Amazon Linux installed.</p>
<p>And everything seems to be working fine until I need to start configuring the cluster. </p>
<p>This error occurs when running the kops command to create a configuration for the cluster. I couldn't find any post on how to solve this issue.</p>
<p>Any help?</p>
<pre><code>[ec2-user@ip-172-31-19-231 ~]$ kops create cluster --zones us-west-2a,us-west-2b,us-west-2c,us-west-2d ${NAME}
I0224 22:43:29.639232 3292 create_cluster.go:496] Inferred --cloud=aws from zone "us-west-2a"
I0224 22:43:29.699503 3292 subnets.go:184] Assigned CIDR 172.20.32.0/19 to subnet us-west-2a
I0224 22:43:29.699582 3292 subnets.go:184] Assigned CIDR 172.20.64.0/19 to subnet us-west-2b
I0224 22:43:29.699632 3292 subnets.go:184] Assigned CIDR 172.20.96.0/19 to subnet us-west-2c
I0224 22:43:29.699677 3292 subnets.go:184] Assigned CIDR 172.20.128.0/19 to subnet us-west-2d
error assigning default machine type for nodes: error finding default machine type: could not find a suitable supported instance type for the instance group "nodes" (type "Node") in region "us-west-2"
</code></pre>
| <p>The error is stating you haven't specified an instance type for the EC2 nodes that will act as master and worker nodes.</p>
<p>The following is an example command: </p>
<pre><code> kops create cluster --name=my-cluster.k8s.local \
--state=s3://kops-state-1234 --zones=eu-west-1a \
--node-count=2 --node-size=t2.micro --master-size=t2.micro
</code></pre>
|
<p>I am asked to create a pod with 5 containers. What is the best solution for this situation: should I create a single pod with 5 containers, or split it into multiple pods? Any suggestions?</p>
| <p>As a rule of thumb, multiple containers should reside in the same pod only if they share the same lifecycle. For example, suppose you have an application along with another helper/bridge service which serves as a window to the outside world; then it might make sense for these two containers to stay together in one pod. However, I am really not sure which sort of use case would require 5 containers to be put together in one pod.</p>
<p>Please understand that this does not provide any sort of advantage from a resource point of view: whether you have 5 containers in 1 pod or 5 pods, the resource utilization (CPU and memory) will still be the cumulative numbers of all the containers together.</p>
|
<p>I'm trying to create two deployments, one for Wordpress the other for MySQL which refer to two different Persistent Volumes.</p>
<p><strong>Sometimes</strong>, while deleting and recreating volumes and deployments, the MySQL deployment populates the Wordpress volume (ending up with a database in the <code>wordpress-volume</code> directory).</p>
<p>This is clearer when you do <code>kubectl get pv --namespace my-namespace</code>:</p>
<pre><code>mysql-volume 2Gi RWO Retain Bound flashart-it/wordpress-volume-claim manual 1h
wordpress-volume 2Gi RWO Retain Bound flashart-it/mysql-volume-claim manual
</code></pre>
<p>.</p>
<p>I'm pretty sure the settings are ok. Please find the yaml file below.</p>
<h3>Persistent Volume Claims + Persistent Volumes</h3>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
namespace: my-namespace
name: mysql-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
path: /path/to/mount/mysql-volume
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
namespace: my-namespace
name: mysql-volume-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
---
kind: PersistentVolume
apiVersion: v1
metadata:
namespace: my-namespace
name: wordpress-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
hostPath:
path: /path/to/mount/wordpress-volume
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
namespace: my-namespace
name: wordpress-volume-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
</code></pre>
<h3>Deployments</h3>
<pre><code>kind: Deployment
apiVersion: apps/v1
metadata:
name: wordpress
namespace: my-namespace
labels:
app: wordpress
spec:
selector:
matchLabels:
app: wordpress
tier: frontend
strategy:
type: Recreate
template:
metadata:
namespace: my-namespace
labels:
app: wordpress
tier: frontend
spec:
containers:
- image: wordpress:5.0-php7.1-apache
name: wordpress
env:
# ...
ports:
# ...
volumeMounts:
- name: wordpress-volume
mountPath: /var/www/html
volumes:
- name: wordpress-volume
persistentVolumeClaim:
claimName: wordpress-volume-claim
---
kind: Deployment
apiVersion: apps/v1
metadata:
namespace: my-namespace
name: wordpress-mysql
labels:
app: wordpress
spec:
selector:
matchLabels:
app: wordpress
tier: mysql
strategy:
type: Recreate
template:
metadata:
namespace: my-namespace
labels:
app: wordpress
tier: mysql
spec:
containers:
- image: mysql:5.7
name: mysql
env:
# ...
ports:
# ...
volumeMounts:
- name: mysql-volume
mountPath: /var/lib/mysql
volumes:
- name: mysql-volume
persistentVolumeClaim:
claimName: mysql-volume-claim
</code></pre>
| <p>It's expected behavior in Kubernetes. PVC can bind to any available PV, given that storage class is matched, access mode is matched, and storage size is sufficient. Names are not used to match PVC and PV.</p>
<p>A possible solution for your scenario is to use <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#selector" rel="noreferrer">label selector</a> on PVC to filter qualified PV.</p>
<p>First, add a label to PV (in this case: app=mysql)</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: mysql-volume
labels:
app: mysql
</code></pre>
<p>Then, add a label selector in PVC to filter PV.</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
namespace: my-namespace
name: mysql-volume-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 2Gi
selector:
matchLabels:
app: mysql
</code></pre>
|
<p>I have a helm chart for an application that needs some kind of database.
Either MySQL or PostgreSQL would be fine.</p>
<p>I would like to give the chart user the option to install one of these as a dependency like this:</p>
<pre><code>dependencies:
- name: mysql
version: 0.10.2
repository: https://kubernetes-charts.storage.googleapis.com/
condition: mysql.enabled
- name: postgresql
version: 3.11.5
repository: https://kubernetes-charts.storage.googleapis.com/
condition: postgresql.enabled
</code></pre>
<p>However this makes it possible to enable both of them.</p>
<p>Is there an easy way to make sure only one is selected?</p>
<p>I was thinking of a single variable selecting one of <code>[mysql, postgres, manual]</code> and depend on a specific database if it is selected. - Is there a way to do so?</p>
| <p>I don't think there's a straightforward way to do this. In particular, it looks like the <code>requirements.yaml</code> <code>condition:</code> field only takes a boolean value (or a list of them) and not an arbitrary expression. From <a href="https://helm.sh/docs/developing_charts/#tags-and-condition-fields-in-requirements-yaml" rel="nofollow noreferrer">the Helm documentation</a>:</p>
<blockquote>
<p>The condition field holds one or more YAML paths (delimited by commas). If this path exists in the top parent’s values and resolves to a boolean value, the chart will be enabled or disabled based on that boolean value. Only the first valid path found in the list is evaluated and if no paths exist then the condition has no effect.</p>
</blockquote>
<p>(The tags mechanism described below that is extremely similar and doesn't really help.)</p>
<p>When it comes down to actually writing your Deployment spec you have a more normal conditional system and can test that only one of the values is set; so I don't think you can prevent having redundant databases installed, but you'll at least only use one of them. You could also put an after-the-fact warning to this effect in your <code>NOTES.txt</code> file.</p>
<pre><code>{{ if and .Values.mysql.enabled .Values.postgresql.enabled -}}
WARNING: you have multiple databases enabled in your Helm values file.
Both MySQL and PostgreSQL are installed as part of this chart, but only
PostgreSQL is being used. You can update this chart installation setting
`--set mysql.enabled=false` to remove the redundant database.
{{ end -}}
</code></pre>
|
<p>I am hosting a jupyterhub with kubernetes on my google cloud account. I noticed that google cloud charges me for the runtime that the jupyterhub instance is running. <br/>
I am wondering if I can sorta shut down the jupyterhub instance or the kubernetes when we are not using the jupyterhub to save money? <br/>
If I restart the instance, will the data be wiped out? I want to do an experiment on this but I am afraid of doing something irreversible.<br/>
Also, where can I learn more about administration tips for using Google Cloud?
Thanks!</p>
| <p>You can resize your GKE cluster to "0", when you don't need it, with the below command </p>
<pre><code>gcloud container clusters resize CLUSTERNAME --size=0
</code></pre>
<p>Then you won't be charged, GKE charges only for worker nodes and not for master nodes.</p>
<p>And if you want to make sure your data is persistent after each time you are scaling your cluster, then you will need to use <a href="https://kubernetes.io/docs/concepts/storage/volumes/#gcepersistentdisk" rel="nofollow noreferrer">gcePersistentDisk</a>.
You can create PD using gcloud before mounting it to your deployment.</p>
<pre><code>gcloud compute disks create --size=500GB --zone=us-central1-a my-data-disk
</code></pre>
<p>Then you can configure your Pod configuration like in example <a href="https://kubernetes.io/docs/concepts/storage/volumes/#example-pod-1" rel="nofollow noreferrer">here</a></p>
<p>Just make sure to mount all necessary paths of containers on Persistent Disk.</p>
<p>For more information on Kubernetes Engine pricing, <a href="https://cloud.google.com/kubernetes-engine/pricing" rel="nofollow noreferrer">check the pricing page</a>.</p>
|
<p>I'm currently investigating using dynamically provisioned persistent disks in the GCE application: In my application I have 1-n pods, where each pod contains a single container that needs rw access to a persistent volume. The volume needs to be pre-populated with some data which is copied from a bucket.</p>
<p>What I'm confused about is; if the persistent disk is dynamically allocated, how do I ensure that data is copied onto it before it is mounted to my pod? The copying of the data is infrequent but regular, the only time I might need to do this out of sequence is if a pod falls over and I need a new persistent disk and pod to take it's place.</p>
<p>How do I ensure the persistent disk is pre populated before it is mounted to my pod?</p>
<p>My current thought is to have the bucket mounted to the pod, and as part of the startup of the pod, copy from the bucket to the persistent disk. This creates another problem, in that the bucket cannot be write enabled and mounted to multiple pods.</p>
<p>Note: I'm using a seperate persistent disk as I need it to be an ssd for speed.</p>
| <p>Looks like the copy is a good candidate to be done as an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">"init container"</a>.</p>
<p>That way on every pod start, the "init container" would connect to the GCS bucket and check the status of the data, and if required, copy the data to the dynamically assigned PersistentDisk. </p>
<p>When completed, the main container of the pod starts, with data ready for it to use. By using an "init container" you are guaranteeing that: </p>
<ol>
<li><p>The copy is complete before your main pod container starts.</p></li>
<li><p>The main container does not need access to the GCS, just the dynamically created PV.</p></li>
<li><p>If the "init container" fails to complete successfully then your pod would fail to start and be in an error state.</p></li>
</ol>
<p>Used in conjunction with a <code>StatefulSet</code> of N pods this approach works well, in terms of being able to initialize a new replica with a new disk, and keep persistent data across main container image (code) updates.</p>
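<p>A minimal sketch of what the init container could look like, assuming the data sits in a GCS bucket and using the google/cloud-sdk image for gsutil; the bucket name, claim name and paths are placeholders, and the pod needs credentials for the bucket (for example a service-account key or Workload Identity):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-data
spec:
  initContainers:
  - name: fetch-data
    image: google/cloud-sdk
    command: ["gsutil", "-m", "rsync", "-r", "gs://my-bucket/data", "/data"]
    volumeMounts:
    - name: data
      mountPath: /data
  containers:
  - name: app
    image: my-app:latest        # placeholder for the real application image
    volumeMounts:
    - name: data
      mountPath: /data
      readOnly: true
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-ssd-claim   # the dynamically provisioned SSD-backed claim
</code></pre>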
|
<p>I'm trying out AWS EKS following this guide <a href="https://learn.hashicorp.com/terraform/aws/eks-intro" rel="nofollow noreferrer">https://learn.hashicorp.com/terraform/aws/eks-intro</a></p>
<p>I understand all the steps except for the last one where the instructions say to <code>apply a configmap</code>. Before this step, I couldn't see my worker nodes from the cluster <code>kubectl get nodes</code>. But, I can see my worker nodes after applying this configmap. Can someone please explain to me how this configmap actually accomplishes this feat.</p>
<p>Here is the configmap: </p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: aws-auth
namespace: kube-system
data:
mapRoles: |
- rolearn: ${aws_iam_role.demo-node.arn}
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
</code></pre>
<p>Thanks,</p>
<p>Ashutosh Singh</p>
| <p>The data in that configmap are what enables the worker node to join the cluster. Specifically, it needs the proper role ARN permissions. In the tutorial you are following, look at how <code>aws_iam_role.demo-node.arn</code> is defined, then look up the permissions associated with those policies. You can experiment around and remove/add other policies and see how it affects the node's ability to interact with the cluster.</p>
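<p>You can inspect what is currently applied (and confirm the node role ARN made it in) with:</p>
<pre><code>kubectl describe configmap aws-auth -n kube-system
</code></pre>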
|
<p>What is the difference between an L2 CNI plugin and an L3 CNI plugin? </p>
<p>Does an L2 CNI plugin not provide public access to the pods? What are examples of L2 and L3 plugins?</p>
| <p>Usually when one refers to L2 vs L3 CNI plugins, they are talking less about the reach-ability of their pods (public vs private), and more about the OSI network model layer of connectivity the networking plugin provides between that pod and other Kubernetes pods.</p>
<p>For example, if all pods can send L2 traffic to each other (e.g., ARP) then the CNI plugin is providing L2 connectivity. Most CNI plugins provide IP (L3) networking to Kubernetes pods, since that is what is defined by the <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/#the-kubernetes-network-model" rel="noreferrer">Kubernetes networking model.</a></p>
<p>Some examples of Kubernetes network implementations that provide L3 networking across hosts: Calico, flannel, Canal, kube-router, etc. </p>
<p>The only example I can think of that can provide L2 networking across hosts is Weave Net, but I expect there are likely others I'm forgetting. </p>
<p>Note that many of the above can use encapsulation methods like VXLAN to provide pod-to-pod networking across hosts. This is commonly misunderstood to mean that they provide L2 networking between pods. However, they often still use an IP routing step between the pod and its host, meaning it provides L3 pod-to-pod connectivity. </p>
<p>Also note that many of the above connect pods on the same host using a linux bridge, meaning that pods on the same host will get L2 connectivity but pods on other hosts will be routed (L3). It's much easier to scale L3 networks to large number of endpoints, so this makes sense. Calico is a slight exception here, using an L3 routed network even for pods on the same host.</p>
|
<p>I want to build an application/service which uses a static application data which can get updated over time. Currently, I implemented this by having both the application and the data within the same container. However, this requires redeployment when either application or data change. I want to separate the app and data volume implementation so that I can update the application and the data independently, meaning I won't have to rebuild the application layer when the application data is updated and vice versa.</p>
<p>Here are the characteristics of the Application Data and its Usage:</p>
<ul>
<li>Data is <strong>not frequently updated</strong>, but <strong>read very frequently</strong> by the application</li>
<li>Data is not a database, it's a collection of <strong>file objects</strong> with size ranging from 100MB to 4GB which is initially stored in a cloud storage</li>
<li>Data stored in the cloud storage serves as a single source of truth for the application data</li>
<li>The application will <strong>only read</strong> from the Data. The process of updating data in cloud storage is an external process outside the application scope.</li>
</ul>
<p>So here, we are interested in sync-ing the data in cloud storage to the volume in Kubernetes deployment. What's the best way to achieve this objective in Kubernetes?</p>
<p>I have several options in mind:</p>
<ol>
<li><p>Using one app container in one deployment, in which the app will also include the logic for data loading and update which pulls data
from cloud storage to the container --> simple but tightly coupled with the storage read-write implementation</p></li>
<li><p>Using the cloud store directly from the app --> this doesn't require container volume, but I was concerned with the huge file size because the app is an interactive service which requires a quick response</p></li>
<li><p>Using two containers in one deployment sharing the same volume --> allow great flexibility for the storage read-write implementation</p>
<ul>
<li>one container for application service reading from the shared volume</li>
<li>one container for updating data and listening to update data request which writes data to the shared volume --> this process will pull data from cloud storage to the shared volume</li>
</ul></li>
<li><p>Using one container with a Persistent Disk</p>
<ul>
<li>an external process which writes to the persistent disk (not sure how to do this yet with cloud storage/file objects, need to find a way to sync gcs to persistent disk)</li>
<li>one container for application service which reads from the mounted volume</li>
</ul></li>
<li><p>Using Fuse mounts</p>
<ul>
<li>an external process which writes to cloud storage</li>
<li>a container which uses fuse mounts</li>
</ul></li>
</ol>
<p>I am currently leaning towards option 3, but I am not sure if it's the common practice of achieving my objective. Please let me know if you have better solutions.</p>
| <p>Yes. 3. is the most common option but make sure you use an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">initContainer</a> to copy the data from your cloud storage to a local volume. That local volume could be any of the <a href="https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes" rel="nofollow noreferrer">types</a> supported by Kubernetes.</p>
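<p>A bare-bones sketch of that pattern, with an initContainer that loads the data once and a main container that only reads it; the images, command and volume type are placeholders (the emptyDir could equally be a PVC):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      initContainers:
      - name: load-data
        image: data-loader:latest            # placeholder: copies objects from cloud storage
        command: ["sh", "-c", "copy-from-bucket /data"]   # placeholder command
        volumeMounts:
        - name: app-data
          mountPath: /data
      containers:
      - name: app
        image: my-app:latest                 # placeholder
        volumeMounts:
        - name: app-data
          mountPath: /data
          readOnly: true
      volumes:
      - name: app-data
        emptyDir: {}
</code></pre>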
|
<p>I have a Kubernetes deployment for a Scylla database with a volume attached. It has one replica, with the manifest similar to the following:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: scylla
labels:
app: myapp
role: scylla
spec:
replicas: 1
selector:
matchLabels:
app: myapp
role: scylla
template:
metadata:
labels:
app: myapp
role: scylla
spec:
containers:
- name: scylla
image: scylladb/scylla
imagePullPolicy: Always
volumeMounts:
- mountPath: /var/lib/scylla/data
name: scylladb
volumes:
- name: scylladb
hostPath:
path: /var/myapp/scylla/
type: DirectoryOrCreate
</code></pre>
<p>When I perform an update, it will terminate the old pod and start a new pod before the old pod has stopped. This causes the database on the new pod to fail because it can't access the database files stored in the volume (because the old pod is still using it). How can I make it so that only one pod uses the volume at a time? (Short downtime is okay)</p>
| <p>You can use <strong>Recreate Strategy</strong> in Deployment to do that. This will kill all the existing Pods before new ones are created. Ref: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#recreate-deployment" rel="nofollow noreferrer">Kubernetes doc</a>. So their will be some downtime because of that.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: scylla
labels:
app: myapp
role: scylla
spec:
replicas: 1
selector:
matchLabels:
app: myapp
role: scylla
strategy:
type: Recreate
template:
metadata:
labels:
app: myapp
role: scylla
spec:
containers:
- name: scylla
image: scylladb/scylla
imagePullPolicy: Always
volumeMounts:
- mountPath: /var/lib/scylla/data
name: scylladb
volumes:
- name: scylladb
hostPath:
path: /var/myapp/scylla/
type: DirectoryOrCreate
</code></pre>
|
<p>I need to create a shell script which examines the cluster status.</p>
<p>I saw that <code>kubectl describe nodes</code> provides lots of data.
I can output it to JSON and then parse it, but maybe that’s just overkill.
Is there a simple way, with a <code>kubectl</code> command, to get the status of the cluster? Just whether it's up or down.</p>
| <p>The least expensive way to check if you can reach the API server is <code>kubectl version</code>. In addition <code>kubectl cluster-info</code> gives you some more info.</p>
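<p>A minimal script built on that; the <code>--request-timeout</code> flag keeps it from hanging when the API server is unreachable:</p>
<pre><code>#!/bin/sh
if kubectl version --request-timeout=5s >/dev/null 2>&1; then
  echo "cluster: up"
else
  echo "cluster: down"
  exit 1
fi
</code></pre>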
|
<p>I want to change <code>namespace</code> in <code>alok-pod.json</code> in the below json using jsonnet.</p>
<pre><code>{
"apiVersion": "v1",
"items": [
{
"apiVersion": "v1",
"data": {
"alok-pod.json": "{\n \"namespace\": \"alok\",\n \"editable\": true,\n}"
}
},
]
}
</code></pre>
<p>Please suggest how this can be done using jsonnet?</p>
| <p>NOTE you'll need a <code>jsonnet</code> binary built from master, as <code>std.parseJson()</code> is not yet released as of 2019-02-26.</p>
<blockquote>
<p><strong>input.json</strong></p>
</blockquote>
<pre><code>{
"apiVersion": "v1",
"items": [
{
"apiVersion": "v1",
"data": {
"alok-pod.json": "{\n \"namespace\": \"alok\",\n \"editable\": true\n}"
}
},
]
}
</code></pre>
<blockquote>
<p><strong>edit_ns.jsonnet</strong></p>
</blockquote>
<pre><code>// edit_ns.jsonnet for https://stackoverflow.com/questions/54880959/make-changes-to-json-string-using-jsonnet
//
// NOTE: as of 2019-02-26 std.parseJson() is unreleased,
// need to build jsonnet from master.
local input = import "input.json";
local edit_ns_json(json, ns) = (
std.manifestJson(std.parseJson(json) { namespace: ns })
);
local edit_ns(x, ns) = (
x {
local d = super.data,
data+: {
[key]: edit_ns_json(d[key], ns) for key in std.objectFields(d)
}
}
);
[edit_ns(x, "foo") for x in input.items]
</code></pre>
<blockquote>
<p>Example run:</p>
</blockquote>
<pre><code>$ jsonnet-dev edit_ns.jsonnet
[
{
"apiVersion": "v1",
"data": {
"alok-pod.json": "{\n \"editable\": true,\n \"namespace\": \"foo\"\n}"
}
}
]
</code></pre>
|
<p>I'm trying to set up my mosquitto server inside a Kubernetes cluster and somehow I'm getting the following error and I can't figure out why.
Could someone help me?</p>
<p><strong>Error:</strong></p>
<pre><code>1551171948: mosquitto version 1.4.10 (build date Wed, 13 Feb 2019 00:45:38 +0000) starting
1551171948: Config loaded from /etc/mosquitto/mosquitto.conf.
1551171948: |-- *** auth-plug: startup
1551171948: |-- ** Configured order: http
1551171948: |-- with_tls=false
1551171948: |-- getuser_uri=/api/mosquitto/users
1551171948: |-- superuser_uri=/api/mosquitto/admins
1551171948: |-- aclcheck_uri=/api/mosquitto/permissions
1551171948: |-- getuser_params=(null)
1551171948: |-- superuser_params=(null)
1551171948: |-- aclcheck_paramsi=(null)
1551171948: Opening ipv4 listen socket on port 1883.
1551171948: Error: Cannot assign requested address
</code></pre>
<p><strong>Mosquitto.conf:</strong></p>
<pre><code>allow_duplicate_messages false
connection_messages true
log_dest stdout stderr
log_timestamp true
log_type all
persistence false
listener 1883 mosquitto
allow_anonymous true
# Public
# listener 8883 0.0.0.0
listener 9001 0.0.0.0
protocol websockets
allow_anonymous false
auth_plugin /usr/lib/mosquitto-auth-plugin/auth-plugin.so
auth_opt_backends http
auth_opt_http_ip 127.0.0.1
auth_opt_http_getuser_uri /api/mosquitto/users
auth_opt_http_superuser_uri /api/mosquitto/admins
auth_opt_http_aclcheck_uri /api/mosquitto/permissions
auth_opt_acl_cacheseconds 1
auth_opt_auth_cacheseconds 0
</code></pre>
<p><strong>Kubernetes.yaml:</strong></p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: mosquitto
spec:
replicas: 1
template:
metadata:
labels:
app: mosquitto
spec:
imagePullSecrets:
- name: abb-login
containers:
- name: mosquitto
image: ****mosquitto:develop
imagePullPolicy: Always
ports:
- containerPort: 9001
protocol: TCP
- containerPort: 1883
protocol: TCP
- containerPort: 8883
protocol: TCP
resources: {}
---
apiVersion: v1
kind: Service
metadata:
name: mosquitto
spec:
ports:
- name: "9001"
port: 9001
targetPort: 9001
protocol: TCP
- name: "1883"
port: 1883
targetPort: 1883
protocol: TCP
- name: "8883"
port: 8883
targetPort: 8883
protocol: TCP
selector:
app: mosquitto
</code></pre>
| <p>The problem is with the listener on port 1883; this can be determined because the log hasn't reached the 9001 listener yet.</p>
<p>The problem is most likely because mosquitto can not resolve the IP address of the hostname <code>mosquitto</code>. When passing a hostname the name must resolve to a valid IP address. The same problem has been discussed in <a href="https://stackoverflow.com/questions/54863408/facing-error-while-using-tls-with-mosquitto/54865869#54865869">this</a> recent answer. It could also be that <code>mosquitto</code> is resolving to an address that is not bound to any of the interfaces on the actual machine (e.g. if Address Translation is being used).</p>
<p>Also for the 9001 listener rather than passing <code>0.0.0.0</code> you can just not include a bind address and the default is to listen on all interfaces.</p>
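<p>Applied to the config from the question, the listeners would become something like this (a sketch: either drop the bind address entirely or use one that actually resolves inside the pod):</p>
<pre><code># 1883: bind to all interfaces instead of the unresolvable hostname "mosquitto"
listener 1883

# 9001: the explicit 0.0.0.0 can also simply be dropped
listener 9001
protocol websockets
</code></pre>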
|
<p>I have a back-end service deployed in Kubernetes (at <a href="http://purser.default.svc.cluster.local:3030" rel="nofollow noreferrer">http://purser.default.svc.cluster.local:3030</a>) and a front-end angular 6 application with <code>nginx.conf</code> as </p>
<pre><code>upstream purser {
server purser.default.svc.cluster.local:3030;
}
server {
listen 4200;
location / {
proxy_pass http://purser;
root /usr/share/nginx/html/appDApp;
index index.html index.htm;
try_files $uri $uri/ /index.html =404;
}
}
</code></pre>
<p>In angular code we are using <code>http.get('http://purser.default.svc.cluster.local:3030', {observe: 'body', responseType: 'json'})</code></p>
<p>Case1: With <code>proxy_pass</code> set in <code>nginx.conf</code> when we hit the ui service it redirects to back-end and gives <code>json</code> output directly from back-end.</p>
<p>Case2: Without <code>proxy_pass</code>, when we hit the front-end service it shows the UI but no data comes from the backend, i.e. the browser is not able to understand <code>http://purser.default.svc.cluster.local:3030</code></p>
<p><a href="https://i.stack.imgur.com/WeKAz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WeKAz.png" alt="enter image description here"></a></p>
| <p>Solved it using this <code>nginx.conf</code></p>
<pre><code>upstream purser {
server purser.default.svc.cluster.local:3030;
}
server {
listen 4200;
location /api {
proxy_pass http://purser;
}
location / {
root /usr/share/nginx/html/purser;
index index.html index.htm;
try_files $uri $uri/ /index.html =404;
}
}
</code></pre>
<p>and calling backend from frontend using <code>BACKEND_URL = window.location.protocol + '//' + window.location.host + '/api/'</code></p>
<p><strong>Explanation:</strong>
When the frontend needs data from the backend, it calls itself at the path <code>/api</code>; nginx matches this path and, according to the configuration, forwards the request to the backend Kubernetes service <code>purser.default.svc.cluster.local:3030</code> using <code>proxy_pass</code>.</p>
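<p>For illustration, the frontend call then looks something like this (a sketch assuming an Angular <code>HttpClient</code> injected as <code>http</code>; the <code>items</code> path is hypothetical):</p>
<pre><code>const BACKEND_URL = window.location.protocol + '//' + window.location.host + '/api/';

// The browser requests /api/items on the frontend's own host;
// nginx then proxies it to purser.default.svc.cluster.local:3030
this.http.get(BACKEND_URL + 'items', {observe: 'body', responseType: 'json'})
  .subscribe(data => console.log(data));
</code></pre>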
|
<p>When deploying a service via a Helm Chart, the installation failed because the <code>tiller</code> serviceaccount was not allowed to create a <code>ServiceMonitor</code> resource.</p>
<p>Note:</p>
<ul>
<li><code>ServiceMonitor</code> is a CRD defined by the Prometheus Operator to automagically get metrics of running containers in Pods.</li>
<li>Helm Tiller is installed in a single namespace and the RBAC has been setup using Role and RoleBinding.</li>
</ul>
<p>I wanted to verify the permissions of the <code>tiller</code> serviceaccount.<br>
<code>kubectl</code> has the <code>auth can-i</code> command, but queries like these (see below) always return <code>no</code>.</p>
<ul>
<li><code>kubectl auth can-i list deployment --as=tiller</code></li>
<li><code>kubectl auth can-i list deployment --as=staging:tiller</code></li>
</ul>
<p>What is the proper way to check permissions for a serviceaccount?<br>
How to enable the <code>tiller</code> account to create a ServiceMonitor resource?</p>
| <p>After trying lots of things and Googling all over the universe I finally found <a href="https://docs.giantswarm.io/guides/securing-with-rbac-and-psp/#verifying-if-you-have-access" rel="noreferrer">this blogpost about Securing your cluster with RBAC and PSP</a> where an example is given how to check access for serviceaccounts.</p>
<p>The correct command is:<br>
<code>kubectl auth can-i <verb> <resource> --as=system:serviceaccount:<namespace>:<serviceaccountname> [-n <namespace>]</code></p>
<p>To check whether the <code>tiller</code> account has the right to create a <code>ServiceMonitor</code> object:<br>
<code>kubectl auth can-i create servicemonitor --as=system:serviceaccount:staging:tiller -n staging</code></p>
<p>Note: to solve my issue with the <code>tiller</code> account, I had to add rights to the <code>servicemonitors</code> resource in the <code>monitoring.coreos.com</code> apiGroup. After that change, the above command returned <code>yes</code> (finally) and the installation of our Helm Chart succeeded.</p>
<p>Updated <code>tiller-manager</code> role:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: tiller-manager
labels:
org: ipos
app: tiller
annotations:
description: "Role to give Tiller appropriate access in namespace"
ref: "https://docs.helm.sh/using_helm/#example-deploy-tiller-in-a-namespace-restricted-to-deploying-resources-only-in-that-namespace"
rules:
- apiGroups: ["", "batch", "extensions", "apps"]
resources: ["*"]
verbs: ["*"]
- apiGroups:
- monitoring.coreos.com
resources:
- servicemonitors
verbs:
- '*'
</code></pre>
|
<p>I have an Nginx-based service which is configured to accept HTTPS only.
However, the GKE ingress answers HTTP requests over HTTP. I know that GKE Ingress cannot enforce an HTTP -> HTTPS redirect, but is it possible to at least make it return HTTPS from the service?</p>
<pre><code>rules:
- http:
paths:
- path: /*
backend:
serviceName: dashboard-ui
servicePort: 8443
</code></pre>
<p>UPDATE: I do have TLS configured on the GKE ingress and my K8S service. When a request comes in over HTTPS everything works nicely. But HTTP requests get an HTTP response. I implemented the HTTP->HTTPS redirect in my service, but it didn't help. In fact, for now all communication between the ingress and my service is HTTPS because the service exposes only the HTTPS port.</p>
<p>SOLUTION - thanks to Paul Annetts: Nginx should check original protocol inside <em>HTTPS</em> block and redirect, like this</p>
<pre><code>if ($http_x_forwarded_proto = "http") {
return 301 https://$host$request_uri;
}
</code></pre>
| <p>Yes, you can configure the GKE Kubernetes Ingress to both terminate HTTPS for external traffic, and also to use HTTPS internally between Google HTTP(S) Load Balancer and your service inside the GKE cluster.</p>
<p>This is documented <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">here</a>, but it is fairly complex.</p>
<p>For HTTPS to work you will need a TLS certificate and key.</p>
<p>If you have your own TLS certificate and key in the cluster as a secret, you can provide it using the <code>tls</code> section of <code>Ingress</code>:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-ingress-2
spec:
tls:
- secretName: my-secret
rules:
- http:
paths:
- path: /*
backend:
serviceName: my-metrics
servicePort: 60000
</code></pre>
<p>You can also upload your TLS certificate and key directly to Google Cloud and provide a <code>ingress.gcp.kubernetes.io/pre-shared-cert</code> annotation that tells GKE Ingress to use it.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: my-psc-ingress
annotations:
ingress.gcp.kubernetes.io/pre-shared-cert: "my-domain-tls-cert"
...
</code></pre>
<p>To use HTTPS for traffic inside Google Cloud, from the Load Balancer to your GKE cluster, you need the <code>cloud.google.com/app-protocols: '{"my-https-port":"HTTPS","my-http-port":"HTTP"}'</code> annotation on your <code>NodePort</code> service. Note that your ports must be named for the HTTPS to work.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-service-3
annotations:
cloud.google.com/app-protocols: '{"my-https-port":"HTTPS","my-http-port":"HTTP"}'
spec:
type: NodePort
selector:
app: metrics
department: sales
ports:
- name: my-https-port
port: 443
targetPort: 8443
- name: my-http-port
port: 80
targetPort: 50001
</code></pre>
<p>The load balancer itself doesn’t support redirection from HTTP to HTTPS; you need to find another way to do that.</p>
<p>As you have NGINX as the entry point into your cluster, you can detect the protocol used to connect to the load balancer with the <code>X-Forwarded-Proto</code> HTTP header and do a redirect, something like this:</p>
<pre><code>if ($http_x_forwarded_proto = "http") {
return 301 https://$host$request_uri;
}
</code></pre>
|
<p>I am trying to create roles in an automated way in Google Kubernetes Engine (GKE).</p>
<p>For that, I use the python client library, but I don't want to have any dependency on kubectl, kubeconfig, or gcloud.</p>
<p>I use a service account (with a json key file from GCP) which has the permissions to create roles in namespaces (it is a cluster admin). When I use the access token given by this command :</p>
<pre><code>gcloud auth activate-service-account --key-file=credentials.json
gcloud auth print-access-token
</code></pre>
<p>It works.</p>
<p>But when I try to generate the token by myself, I can create namespaces and other standard resources, but I have this error when it comes to roles :</p>
<pre><code>E kubernetes.client.rest.ApiException: (403)
E Reason: Forbidden
E HTTP response headers: HTTPHeaderDict({'Audit-Id': 'b89b0fc2-9350-456e-9eca-730e7ad2cea1', 'Content-Type': 'application/json', 'Date': 'Tue, 26 Feb 2019 20:35:20 GMT', 'Content-Length': '1346'})
E HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"roles.rbac.authorization.k8s.io \"developers\" is forbidden: attempt to grant extra privileges: [{[*] [apps] [statefulsets] [] []} {[*] [apps] [deployments] [] []} {[*] [autoscaling] [horizontalpodautoscalers] [] []} {[*] [] [pods] [] []} {[*] [] [pods/log] [] []} {[*] [] [pods/portforward] [] []} {[*] [] [serviceaccounts] [] []} {[*] [] [containers] [] []} {[*] [] [services] [] []} {[*] [] [secrets] [] []} {[*] [] [configmaps] [] []} {[*] [extensions] [ingressroutes] [] []} {[*] [networking.istio.io] [virtualservices] [] []}] user=\u0026{100701357824788592239 [system:authenticated] map[user-assertion.cloud.google.com:[AKUJVp+KNvF6jw9II+AjCdqjbC0vz[...]hzgs0JWXOyk7oxWHkaXQ==]]} ownerrules=[{[create] [authorization.k8s.io] [selfsubjectaccessreviews selfsubjectrulesreviews] [] []} {[get] [] [] [] [/api /api/* /apis /apis/* /healthz /openapi /openapi/* /swagger-2.0.0.pb-v1 /swagger.json /swaggerapi /swaggerapi/* /version /version/]}] ruleResolutionErrors=[]","reason":"Forbidden","details":{"name":"developers","group":"rbac.authorization.k8s.io","kind":"roles"},"code":403}
</code></pre>
<p>I'm using the same service account, so I guess gcloud is doing something more than my script.</p>
<p>Here the python code I use to generate the token :</p>
<pre><code>def _get_token(self) -> str:
# See documentation here
# https://developers.google.com/identity/protocols/OAuth2ServiceAccount
epoch_time = int(time.time())
# Generate a claim from the service account file.
claim = {
"iss": self._service_account_key["client_email"],
"scope": "https://www.googleapis.com/auth/cloud-platform",
"aud": "https://www.googleapis.com/oauth2/v4/token",
"exp": epoch_time + 3600,
"iat": epoch_time
}
# Sign claim with JWT.
assertion = jwt.encode(
claim,
self._service_account_key["private_key"],
algorithm='RS256'
).decode()
# Create payload for API.
data = urlencode({
"grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
"assertion": assertion
})
# Request the access token.
result = requests.post(
url="https://www.googleapis.com/oauth2/v4/token",
headers={
"Content-Type": "application/x-www-form-urlencoded"
},
data=data
)
result.raise_for_status()
return json.loads(result.text)["access_token"]
def _get_api_client(self) -> client.ApiClient:
configuration = client.Configuration()
configuration.host = self._api_url
configuration.verify_ssl = self._tls_verify
configuration.api_key = {
"authorization": f"Bearer {self._get_token()}"
}
return client.ApiClient(configuration)
</code></pre>
<p>And the function to create the role (which generates the 403 error):</p>
<pre><code>def _create_role(self, namespace: str, body: str):
api_client = self._get_api_client()
rbac = client.RbacAuthorizationV1Api(api_client)
rbac.create_namespaced_role(
namespace,
body
)
</code></pre>
<p>If I short-circuit the _get_token method with the token extracted from gcloud, it works.</p>
<p>I guess it has something to do with the way I create my token (missing scope ?), but I don't find any documentation about it.</p>
<p><strong>ANSWER :</strong></p>
<p>Adding a scope does the job ! Thanks a lot :</p>
<pre><code># Generate a claim from the service account file.
claim = {
"iss": self._service_account_key["client_email"],
"scope": " ".join([
"https://www.googleapis.com/auth/cloud-platform",
"https://www.googleapis.com/auth/userinfo.email"
]),
"aud": "https://www.googleapis.com/oauth2/v4/token",
"exp": epoch_time + 3600,
"iat": epoch_time
}
</code></pre>
| <p>So if you look at the code <a href="https://github.com/google-cloud-sdk/google-cloud-sdk/blob/af9de150b5107cd0dc66cb1cf3de417f06d56691/lib/surface/auth/application_default/print_access_token.py#L67" rel="nofollow noreferrer">here</a> for <code>print-access-token</code> you can see that the access token is generally printed without a scope. You see:</p>
<pre><code>try:
creds = client.GoogleCredentials.get_application_default()
except client.ApplicationDefaultCredentialsError as e:
log.debug(e, exc_info=True)
raise c_exc.ToolException(str(e))
if creds.create_scoped_required():
...
</code></pre>
<p>and then on this <a href="https://github.com/google-cloud-sdk/google-cloud-sdk/blob/af9de150b5107cd0dc66cb1cf3de417f06d56691/lib/third_party/oauth2client/client.py#L1163" rel="nofollow noreferrer">file</a> you see:</p>
<pre><code>def create_scoped_required(self):
"""Whether this Credentials object is scopeless.
create_scoped(scopes) method needs to be called in order to create
a Credentials object for API calls.
"""
return False
</code></pre>
<p>Apparently, in your code, you are getting the token with the <code>https://www.googleapis.com/auth/cloud-platform</code> scope. You could try removing it or try with the <a href="https://github.com/google-cloud-sdk/google-cloud-sdk/blob/5514f695e0f3068d72f6ee7ee0de57e89aa259d1/lib/googlecloudsdk/api_lib/auth/util.py#L42" rel="nofollow noreferrer">USER_EMAIL_SCOPE</a> since you are specifying: <code>"iss": self._service_account_key["client_email"]</code>.</p>
<p>You can always check what <code>gcloud auth activate-service-account --key-file=credentials.json</code> stores under <code>~/.config</code>. So you know what <code>gcloud auth print-access-token</code> uses. Note that as per <a href="https://github.com/google-cloud-sdk/google-cloud-sdk/blob/5514f695e0f3068d72f6ee7ee0de57e89aa259d1/lib/googlecloudsdk/core/credentials/store.py#L415" rel="nofollow noreferrer">this</a> and <a href="https://github.com/google-cloud-sdk/google-cloud-sdk/blob/5514f695e0f3068d72f6ee7ee0de57e89aa259d1/lib/googlecloudsdk/core/credentials/creds.py#L311" rel="nofollow noreferrer">this</a> it looks like the store is in <a href="https://www.sqlite.org/index.html" rel="nofollow noreferrer">sqlite</a> format.</p>
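<p>To see exactly what the activated service account stored, you can poke around in that directory (the <code>credentials.db</code> filename is an assumption based on recent gcloud versions, and <code>sqlite3</code> must be installed locally):</p>
<pre><code>ls ~/.config/gcloud/
# credentials.db is a sqlite database; inspect its tables and schema
sqlite3 ~/.config/gcloud/credentials.db '.tables'
sqlite3 ~/.config/gcloud/credentials.db '.schema'
</code></pre>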
|
<p>I'm trying to implement a <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#streaming-sidecar-container" rel="nofollow noreferrer">Streaming Sidecar Container</a> logging architecture in Kubernetes using Fluentd.</p>
<p>In a single pod I have:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">emptyDir</a> Volume (as log storage)</li>
<li>Application container</li>
<li>Fluent log-forwarder container</li>
</ul>
<p>Basically, the Application container logs are stored in the shared emptyDir volume. The Fluentd log-forwarder container tails this log file in the shared emptyDir volume and forwards it to an external log-aggregator.</p>
<p>The Fluentd log-forwarder container uses the following config in <code>td-agent.conf</code>:</p>
<pre><code><source>
@type tail
tag "#{ENV['TAG_VALUE']}"
path (path to log file in volume)
pos_file /var/log/td-agent/tmp/access.log.pos
format json
time_key time
time_format %iso8601
keep_time_key true
</source>
<match *.*>
@type forward
@id forward_tail
heartbeat_type tcp
<server>
host (server-host-address)
</server>
</match>
</code></pre>
<p>I'm using an environment variable to set the <code>tag</code> value so I can change it dynamically, e.g. when I have to use this container side-by-side with a different Application container, I don't have to modify this config and rebuild the image again.</p>
<p>Now, I set the environment variable value during pod creation in Kubernetes:</p>
<pre><code> .
.
spec:
containers:
- name: application-pod
image: application-image:1.0
ports:
- containerPort: 1234
volumeMounts:
- name: logvolume
mountPath: /var/log/app
- name: log-forwarder
image: log-forwarder-image:1.0
env:
- name: "TAG_VALUE"
value: "app.service01"
volumeMounts:
- name: logvolume
mountPath: /var/log/app
volumes:
- name: logvolume
emptyDir: {}
</code></pre>
<p>After deploying the pod, I found that the tag value in the Fluentd log-forwarder container comes out empty (expected value: "app.service01"). I imagine it's because Fluentd's td-agent initializes first before the <code>TAG_VALUE</code> environment variable gets assigned.</p>
<p>So, the main question is...<br>
<strong>How can I dynamically set the td-agent's tag value?</strong></p>
<p>But really, what I'm wondering is:<br>
<strong>Is it possible to assign an environment variable before a container's initialization in Kubernetes?</strong></p>
| <p>You can use the combo <a href="https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter" rel="nofollow noreferrer">fluent-plugin-kubernetes_metadata_filter</a> and <a href="https://github.com/fluent/fluent-plugin-rewrite-tag-filter" rel="nofollow noreferrer">fluent-plugin-rewrite-tag-filter</a> to set the container name (or other pod metadata) as the tag.</p>
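<p>A rough sketch of what that combo can look like in <code>td-agent.conf</code> (assuming the logs are tailed with a <code>kubernetes.*</code> tag so the metadata filter can enrich them; adjust the key and tag pattern to your needs):</p>
<pre><code># Enrich records with pod/container metadata
<filter kubernetes.**>
  @type kubernetes_metadata
</filter>

# Rewrite the tag based on the container name from the metadata
<match kubernetes.**>
  @type rewrite_tag_filter
  <rule>
    key $.kubernetes.container_name
    pattern ^(.+)$
    tag app.$1
  </rule>
</match>
</code></pre>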
|
<p>I've found documentation about how to configure your NginX ingress controller using a ConfigMap: <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/" rel="noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/</a></p>
<p>Unfortunately I have no idea, and couldn't find anywhere, how to load that ConfigMap from my Ingress controller.</p>
<p>My ingress controller:</p>
<pre><code>helm install --name ingress --namespace ingress-nginx --set rbac.create=true,controller.kind=DaemonSet,controller.service.type=ClusterIP,controller.hostNetwork=true stable/nginx-ingress
</code></pre>
<p>My config map:</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: ingress-configmap
data:
proxy-read-timeout: "86400s"
client-max-body-size: "2g"
use-http2: "false"
</code></pre>
<p>My ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress
annotations:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
tls:
- hosts:
- my.endpoint.net
secretName: ingress-tls
rules:
- host: my.endpoint.net
http:
paths:
- path: /
backend:
serviceName: web
servicePort: 443
- path: /api
backend:
serviceName: api
servicePort: 443
</code></pre>
<p>How do I make my Ingress load the configuration from the ConfigMap?</p>
| <p>I've managed to display what YAML gets executed by Helm using the <code>--dry-run --debug</code> options at the end of the <code>helm install</code> command. Then I noticed that the controller is executed with: <code>--configmap={namespace-where-the-nginx-ingress-is-deployed}/{name-of-the-helm-chart}-nginx-ingress-controller</code>.
In order to load your ConfigMap you need to override it with your own (check out the namespace).</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: {name-of-the-helm-chart}-nginx-ingress-controller
namespace: {namespace-where-the-nginx-ingress-is-deployed}
data:
proxy-read-timeout: "86400"
proxy-body-size: "2g"
use-http2: "false"
</code></pre>
<p>The list of config properties can be found <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md" rel="noreferrer">here</a>.</p>
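<p>If you are unsure of the exact name, you can check which ConfigMap the controller was started with (a quick check, assuming the chart's default <code>app: nginx-ingress</code> label):</p>
<pre><code>kubectl -n ingress-nginx describe pods -l app=nginx-ingress | grep -- --configmap
</code></pre>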
|
<p><strong>I have the following json</strong></p>
<pre><code>{
"namespace": "monitoring",
"name": "alok",
"spec": {
"replicas": 1,
"template": {
"metadata": "aaa",
"spec": {
"containers": [
{
"image": "practodev/test:test",
"env": [
{
"name":"GF_SERVER_HTTP_PORT",
"value":"3000"
},
{
"name":"GF_SERVER_HTTPS_PORT",
"value":"443"
},
]
}
]
}
}
}
}
</code></pre>
<p><strong>How do I add <code>deployment_env.json</code> using jsonnet?</strong></p>
<pre><code>{
"env": [
{
"name":"GF_AUTH_DISABLE_LOGIN_FORM",
"value":"false"
},
{
"name":"GF_AUTH_BASIC_ENABLED",
"value":"false"
},
]
}
</code></pre>
<p>I need to add it under spec.template.containers[0].env = deployment_env.json</p>
<p>I wrote the below jsonnet to do that. It appends a new element, but I need to change the existing 0th container element in the JSON. <strong>Please suggest how to do it.</strong></p>
<pre><code>local grafana_envs = (import 'custom_grafana/deployment_env.json');
local grafanaDeployment = (import 'nested.json') + {
spec+: {
template+: {
spec+: {
containers+: [{
envs: grafana_envs.env,
}]
}
}
},
};
grafanaDeployment
</code></pre>
| <p>See below for an implementation that allows adding <code>env</code> to an existing container by its index in the <code>containers[]</code> array.</p>
<p>Do note that <code>jsonnet</code> is much better suited to work with objects (i.e. dictionaries / maps) rather than arrays, thus it needs contrived handling via <code>std.mapWithIndex()</code>, to be able to modify an entry from its matching index.</p>
<pre><code>local grafana_envs = (import 'deployment_env.json');
// Add extra_env to a container by its idx passed containers array
local override_env(containers, idx, extra_env) = (
local f(i, x) = (
if i == idx then x {env+: extra_env} else x
);
std.mapWithIndex(f, containers)
);
local grafanaDeployment = (import 'nested.json') + {
spec+: {
template+: {
spec+: {
containers: override_env(super.containers, 0, grafana_envs.env)
}
}
},
};
grafanaDeployment
</code></pre>
|
<p>When I run my application from IntelliJ, I get a <code>warning / error</code>, though the application builds fine and runs all tests. Is this something that I should ignore, or can this be fixed?</p>
<p>A Kubernetes secret is used to create a random password, therefore I have a placeholder for that particular variable.</p>
<pre><code>2019-02-26 19:45:29.600 INFO 38918 --- [ main] j.LocalContainerEntityManagerFactoryBean : Initialized JPA EntityManagerFactory for persistence unit 'default'
2019-02-26 19:45:29.684 WARN 38918 --- [ main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'actuatorSecurity': Injection of autowired dependencies failed; nested exception is java.lang.IllegalArgumentException: Could not resolve placeholder 'ACTUATOR_PASSWORD' in value "${ACTUATOR_PASSWORD}"
2019-02-26 19:45:29.685 INFO 38918 --- [ main] j.LocalContainerEntityManagerFactoryBean : Closing JPA EntityManagerFactory for persistence unit 'default'
2019-02-26 19:45:29.685 INFO 38918 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown initiated...
2019-02-26 19:45:29.707 INFO 38918 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Shutdown completed.
2019-02-26 19:45:29.713 INFO 38918 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat]
</code></pre>
<p><code>qronicle-deployment.yaml</code></p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: actuator
namespace: {{ .Release.Namespace }}
type: Opaque
data:
actuator-password: {{ randAlphaNum 10 | b64enc | quote }}
....
- name: ACTUATOR_PASSWORD
valueFrom:
secretKeyRef:
name: actuator
key: actuator-password
</code></pre>
<p><code>application.properties</code></p>
<pre><code># spring boot actuator access control
management.endpoints.web.exposure.include=*
security.user.actuator-username=admin
security.user.actuator-password=${ACTUATOR_PASSWORD}
</code></pre>
<p><code>ACTUATOR_PASSWORD</code> is consumed here</p>
<pre><code>@Configuration
@EnableWebSecurity
class ActuatorSecurity : WebSecurityConfigurerAdapter() {
@Value("\${security.user.actuator-username}")
private val actuatorUsername: String? = null
@Value("\${security.user.actuator-password}")
private val actuatorPassword: String? = null
....
}
</code></pre>
| <p>Usually, secrets are created outside of the deployment yaml.</p>
<p>Here you could run <code>kubectl create secret generic <secret_name> --from-literal=<secret_key>='<password>'</code> under the k8s context where the nodes are going to be running.</p>
<p>This will create the secret there, and the deployment yaml you have above will map it to an environment variable.</p>
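<p>For example, using the names from your manifest, something like this would create the secret that the deployment references (replace the namespace and password as appropriate):</p>
<pre><code>kubectl create secret generic actuator \
  --from-literal=actuator-password='<some-random-password>' \
  -n <your-namespace>

# Verify it
kubectl get secret actuator -n <your-namespace> -o yaml
</code></pre>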
|
<p>I'm attempting to run Minikube in a VMWare Workstation guest, running Ubuntu 18.04.</p>
<p><code>kubectl version</code> results in:</p>
<p><code>Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T17:05:32Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
</code></p>
<p><code>minikube version</code> results in:</p>
<pre><code>minikube version: v0.29.0
</code></pre>
<p>I have enabled Virtualize Intel VT-x/EPT or AMD-V/RVI on the VMWare guest configuration. I have 25GB of hard drive space. Yet, regardless of how I attempt to start Minikube, I get the following error:</p>
<pre><code>Starting local Kubernetes v1.10.0 cluster...
Starting VM...
E1005 11:02:32.495579 5913 start.go:168] Error starting host: Error creating host: Error creating machine: Error in driver during machine creation: Error creating VM: virError(Code=1, Domain=10, Message='internal error: qemu unexpectedly closed the monitor: 2018-10-05T09:02:29.926633Z qemu-system-x86_64: error: failed to set MSR 0x38d to 0x0
qemu-system-x86_64: /build/qemu-11gcu0/qemu-2.11+dfsg/target/i386/kvm.c:1807: kvm_put_msrs: Assertion `ret == cpu->kvm_msr_buf->nmsrs' failed.').
Retrying.
</code></pre>
<p>Commands I've tried:</p>
<pre><code>minikube start --vm-driver=kvm2
minikube start --vm-driver=kvm
minikube start --vm-driver=none
</code></pre>
<p>All result in the same thing.</p>
<p>I notice that on the Ubuntu guest, the network will shortly disconnect and re-connect when I run <code>minikube start</code>. Is it a problem with the network driver? How would I debug this?</p>
| <p>I observed a similar issue on an Ubuntu 18.04.1 VM (Intel); the solution I found is:</p>
<ol>
<li>Run this from the console:</li>
</ol>
<pre><code>$ sudo cat > /etc/modprobe.d/qemu-system-x86.conf << EOF
options kvm_intel nested=1 enable_apicv=n
options kvm ignore_msrs=1
EOF
</code></pre>
<ol start="2">
<li>Reboot the VM</li>
</ol>
|
<p>Is it possible to describe just 1 container in a pod? </p>
<pre><code>$ kubectl describe pod <pod_name>
</code></pre>
<p>describes all the containers in that pod.</p>
<p>Thanks.</p>
| <p>No, there is no dedicated command to do that. But you can use <code>kubectl get pods</code> to show the info per container. For example, with <a href="https://stedolan.github.io/jq/" rel="nofollow noreferrer"><code>jq</code></a> you can get the info for a container by its name:</p>
<pre class="lang-none prettyprint-override"><code>kubectl get pod <pod_name> -n <namespace_name> -o json | jq -c '.spec.containers[] | select( .name | contains("<container_name>"))'
</code></pre>
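<p>If <code>jq</code> is not available, kubectl's built-in JSONPath support can do a similar per-container selection (a sketch):</p>
<pre class="lang-none prettyprint-override"><code>kubectl get pod <pod_name> -n <namespace_name> -o jsonpath='{.spec.containers[?(@.name=="<container_name>")]}'
</code></pre>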
|
<p>I have a Docker image and I'd like to share an entire directory on a volume (a Persistent Volume) on Kubernetes.</p>
<h3>Dockerfile</h3>
<pre><code>FROM node:carbon
WORKDIR /node-test
COPY hello.md /node-test/hello.md
VOLUME /node-test
CMD ["tail", "-f", "/dev/null"]
</code></pre>
<p>Basically it copies a file <code>hello.md</code> and makes it part of the image (lets call it <code>my-image</code>).</p>
<p>On the Kubernetes deployment config I create a container from <code>my-image</code> and I share a specific directory to a volume.</p>
<h3>Kubernetes deployment</h3>
<pre><code># ...
spec:
containers:
- image: my-user/my-image:v0.0.1
name: node
volumeMounts:
- name: node-volume
mountPath: /node-test
volumes:
- name: node-volume
persistentVolumeClaim:
claimName: node-volume-claim
</code></pre>
<p>I'd expect to see the <code>hello.md</code> file in the directory of the persistent volume, but nothing shows up.</p>
<p>If I don't bind the container to a volume I can see the <code>hello.md</code> file (with <code>kubectl exec -it my-container bash</code>).</p>
<p>I'm not doing anything different from what this <a href="https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/" rel="nofollow noreferrer">official Kubernetes example</a> does. As a matter of fact, I can change <code>mountPath</code>, switch to the official Wordpress image, and it works as expected. </p>
<p>How can <a href="https://hub.docker.com/_/wordpress" rel="nofollow noreferrer">Wordpress image</a> copy all files into the volume directory?</p>
<p>What's in the Wordpress Dockerfile that is missing on mine?</p>
| <p>In order not to overwrite the existing files/content, you can use <code>subPath</code> to mount the testdir directory (in the example below) into the existing container file system. </p>
<pre><code> volumeMounts:
- name: node-volume
mountPath: /node-test/testdir
subPath: testdir
volumes:
- name: node-volume
persistentVolumeClaim:
claimName: node-volume-claim
</code></pre>
<p>You can find more information here: <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="nofollow noreferrer">using-subpath</a></p>
|
<p>Can I set the default namespace? That is:</p>
<pre><code>$ kubectl get pods -n NAMESPACE
</code></pre>
<p>It saves me having to type it in each time especially when I'm on the one namespace for most of the day.</p>
| <p>Yes, you can set the namespace <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#setting-the-namespace-preference" rel="noreferrer">as per the docs</a> like so:</p>
<pre><code>$ kubectl config set-context --current --namespace=NAMESPACE
</code></pre>
<p>Alternatively, you can use <a href="https://github.com/ahmetb/kubectx" rel="noreferrer">kubectx</a> for this.</p>
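<p>To double-check which namespace is currently set for the active context, something like this works:</p>
<pre><code>$ kubectl config view --minify | grep namespace:
</code></pre>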
|
<p>When creating a kubernetes service of type <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">nodePort</a>, kube-proxy configures each worker node to listen on a particular port.</p>
<p>How does kube-proxy (in the iptables proxy mode) actually configure this? Is it just done using iptables which opens a port? (not sure if that is even possible) </p>
| <p>kube-proxy uses iptables and netfilter rules to forward traffic from NodePorts to pods. Mark Betz's article series on Kubernetes networking is a good read:
<a href="https://medium.com/google-cloud/understanding-kubernetes-networking-services-f0cb48e4cc82" rel="nofollow noreferrer">https://medium.com/google-cloud/understanding-kubernetes-networking-services-f0cb48e4cc82</a></p>
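<p>You can see those rules for yourself on a worker node; in the iptables proxy mode, kube-proxy programs a <code>KUBE-NODEPORTS</code> chain in the <code>nat</code> table (a quick look, run as root on a node):</p>
<pre><code># List the NodePort entry points
iptables -t nat -L KUBE-NODEPORTS -n

# The per-service chains have generated names, so grep for the port
iptables -t nat -L -n | grep <node_port>
</code></pre>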
|
<p>How to schedule a cronjob which executes a kubectl command?</p>
<p>I would like to run the following kubectl command every 5 minutes:</p>
<pre><code>kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test
</code></pre>
<p>For this, I have created a cronjob as below:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hello
spec:
schedule: "*/5 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: hello
image: busybox
args:
- /bin/sh
- -c
- kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test
restartPolicy: OnFailure
</code></pre>
<p>But it is failing to start the container, showing the message : </p>
<pre><code>Back-off restarting failed container
</code></pre>
<p>And with the error code 127:</p>
<pre><code>State: Terminated
Reason: Error
Exit Code: 127
</code></pre>
<p>From what I checked, the error code 127 says that the command doesn't exist. How could I run the kubectl command then as a cron job ? Am I missing something?</p>
<p><em>Note: I had posted a similar question ( <a href="https://stackoverflow.com/questions/54763195/scheduled-restart-of-kubernetes-pod-without-downtime">Scheduled restart of Kubernetes pod without downtime</a> ) , but that was more of having the main deployment itself as a cronjob, here I'm trying to run a kubectl command (which does the restart) using a CronJob - so I thought it would be better to post separately</em></p>
<p>kubectl describe cronjob hello -n jp-test:</p>
<pre><code>Name: hello
Namespace: jp-test
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"batch/v1beta1","kind":"CronJob","metadata":{"annotations":{},"name":"hello","namespace":"jp-test"},"spec":{"jobTemplate":{"spec":{"templ...
Schedule: */5 * * * *
Concurrency Policy: Allow
Suspend: False
Starting Deadline Seconds: <unset>
Selector: <unset>
Parallelism: <unset>
Completions: <unset>
Pod Template:
Labels: <none>
Containers:
hello:
Image: busybox
Port: <none>
Host Port: <none>
Args:
/bin/sh
-c
kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test
Environment: <none>
Mounts: <none>
Volumes: <none>
Last Schedule Time: Wed, 27 Feb 2019 14:10:00 +0100
Active Jobs: hello-1551273000
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 6m cronjob-controller Created job hello-1551272700
Normal SuccessfulCreate 1m cronjob-controller Created job hello-1551273000
Normal SawCompletedJob 16s cronjob-controller Saw completed job: hello-1551272700
</code></pre>
<p>kubectl describe job hello -v=5 -n jp-test</p>
<pre><code>Name: hello-1551276000
Namespace: jp-test
Selector: controller-uid=fa009d78-3a97-11e9-ae31-ac1f6b1a0950
Labels: controller-uid=fa009d78-3a97-11e9-ae31-ac1f6b1a0950
job-name=hello-1551276000
Annotations: <none>
Controlled By: CronJob/hello
Parallelism: 1
Completions: 1
Start Time: Wed, 27 Feb 2019 15:00:02 +0100
Pods Statuses: 0 Running / 0 Succeeded / 0 Failed
Pod Template:
Labels: controller-uid=fa009d78-3a97-11e9-ae31-ac1f6b1a0950
job-name=hello-1551276000
Containers:
hello:
Image: busybox
Port: <none>
Host Port: <none>
Args:
/bin/sh
-c
kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 7m job-controller Created pod: hello-1551276000-lz4dp
Normal SuccessfulDelete 1m job-controller Deleted pod: hello-1551276000-lz4dp
Warning BackoffLimitExceeded 1m (x2 over 1m) job-controller Job has reached the specified backoff limit
Name: hello-1551276300
Namespace: jp-test
Selector: controller-uid=ad52e87a-3a98-11e9-ae31-ac1f6b1a0950
Labels: controller-uid=ad52e87a-3a98-11e9-ae31-ac1f6b1a0950
job-name=hello-1551276300
Annotations: <none>
Controlled By: CronJob/hello
Parallelism: 1
Completions: 1
Start Time: Wed, 27 Feb 2019 15:05:02 +0100
Pods Statuses: 1 Running / 0 Succeeded / 0 Failed
Pod Template:
Labels: controller-uid=ad52e87a-3a98-11e9-ae31-ac1f6b1a0950
job-name=hello-1551276300
Containers:
hello:
Image: busybox
Port: <none>
Host Port: <none>
Args:
/bin/sh
-c
kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 2m job-controller Created pod: hello-1551276300-8d5df
</code></pre>
| <p>Long story short, <a href="https://busybox.net/downloads/BusyBox.html" rel="noreferrer">BusyBox</a> doesn't have <code>kubectl</code> installed.</p>
<p>You can check it yourself using <code>kubectl run -i --tty busybox --image=busybox -- sh</code> which will run a BusyBox pod as interactive shell.</p>
<p>I would recommend using <a href="https://hub.docker.com/r/bitnami/kubectl/" rel="noreferrer">bitnami/kubectl:latest</a>.</p>
<p>Also keep in mind that you will need to set up proper <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="noreferrer">RBAC</a>, as otherwise you will get <code>Error from server (Forbidden): services is forbidden</code></p>
<p>You could use something like this:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: jp-test
name: jp-runner
rules:
- apiGroups:
- extensions
- apps
resources:
- deployments
verbs:
- 'patch'
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: jp-runner
namespace: jp-test
subjects:
- kind: ServiceAccount
name: sa-jp-runner
namespace: jp-test
roleRef:
kind: Role
name: jp-runner
apiGroup: ""
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: sa-jp-runner
namespace: jp-test
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hello
spec:
schedule: "*/5 * * * *"
jobTemplate:
spec:
template:
spec:
serviceAccountName: sa-jp-runner
containers:
- name: hello
image: bitnami/kubectl:latest
command:
- /bin/sh
- -c
- kubectl patch deployment runners -p '{"spec":{"template":{"spec":{"containers":[{"name":"jp-runner","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}' -n jp-test
restartPolicy: OnFailure
</code></pre>
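<p>Once this is applied, you can verify that the service account has the required permission without waiting for the next schedule (using the names from the manifest above):</p>
<pre><code>kubectl auth can-i patch deployments --as=system:serviceaccount:jp-test:sa-jp-runner -n jp-test
</code></pre>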
|
<p>I am using helm and the file <a href="https://github.com/vitessio/vitess/blob/master/examples/helm/101_initial_cluster.yaml" rel="nofollow noreferrer">101_initial_cluster.yaml</a> from the Vitess example to setup my initial cluster. The example has a schema initialization using SQL string as shown below:</p>
<pre><code>schema:
initial: |-
create table product(
sku varbinary(128),
description varbinary(128),
price bigint,
primary key(sku)
);
create table customer(
customer_id bigint not null auto_increment,
email varbinary(128),
primary key(customer_id)
);
create table corder(
order_id bigint not null auto_increment,
customer_id bigint,
sku varbinary(128),
price bigint,
primary key(order_id)
);
</code></pre>
<p>I would like to replace this with a file <code>initial: my_initial_keyspace_schema.sql</code>. From the Vitess documentation I can see Vitess does allow for this using <code>ApplySchema -sql_file=user_table.sql user</code>, but I would like to initialize using the helm file.</p>
<p>This would be very helpful as it is very tedious to organize and paste the schema as a <code>string</code>. Tables that depend on others have to be pasted first and the rest follow. Forgetting makes Vitess throw an error. </p>
| <p>Welcome to StackOverflow.</p>
<p>I'm afraid there is no out-of-the-box feature for initializing the DB schema directly from an SQL file in the current state of the Vitess Helm chart. You can list its configurable parameters via the <code>helm inspect <chart_name></code> command.</p>
<p>However, you can try customizing it to match your needs in the following ways:</p>
<li><p>Stay with ApplySchema {-sql=} mode<br><br>
but let the SQL schema be slurped from a static file shipped as part of the Helm chart
template<br> (e.g. from the <em>static/initial_schema.sql</em> location):
<p>So just add a piece of control flow code like this one:</p></li>
</ol>
<pre><code>{{ if .Values.initialSchemaSqlFile.enabled }}
{{- $files := .Files }}
{{- range tuple "static/initial_schema.sql" }}
{{ $schema := $files.Get . }}
{{- end }}
{{ else }}
# Default inline schema from Values.topology.cells.keyspaces[0].schema
{{ end }}
</code></pre>
<p>Check more on using Helm built-in Objects like File <a href="https://helm.sh/docs/chart_template_guide/#glob-patterns" rel="nofollow noreferrer">here</a></p>
<ol start="2">
<li>Use ApplySchema {-sql-file=} mode<br><br>
Adapt the piece of code <a href="https://github.com/vitessio/vitess/blob/d4176b97bcd95072ae452c399d90e4eefe135273/helm/vitess/templates/_keyspace.tpl#L79" rel="nofollow noreferrer">here</a>, where the vtctlclient command is constructed.<br>
This would also require introducing a new Kubernetes Volume object (<a href="https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/nfs-pv.yaml" rel="nofollow noreferrer">nfs</a>
or <a href="https://stackoverflow.com/questions/53683594/how-to-clone-a-private-git-repository-into-a-kubernetes-pod-using-ssh-keys-in-se?noredirect=1&lq=1">Git</a> repo based volumes are good options here), which you could <a href="https://kubernetes.io/docs/concepts/storage/volumes/#example-pod-2" rel="nofollow noreferrer">mount</a> on
the Job under a specific path, from where the initial_schema.sql file would be read.</li>
</ol>
|
<p>I am currently running a django app under python3 through kubernetes by going through <code>skaffold dev</code>. I have hot reload working with the Python source code. Is it currently possible to do interactive debugging with python on kubernetes?</p>
<p>For example,</p>
<pre><code>def index(request):
import pdb; pdb.set_trace()
return render(request, 'index.html', {})
</code></pre>
<p>Usually, outside a container, hitting the endpoint will drop me in the <code>(pdb)</code> shell. </p>
<p>In the current setup, I have set <code>stdin</code> and <code>tty</code> to <code>true</code> in the <code>Deployment</code> file. The code does stop at the breakpoint but it doesn't give me access to the <code>(pdb)</code> shell.</p>
| <p>There is a <code>kubectl</code> command that allows you to <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#attach" rel="nofollow noreferrer">attach</a> to a running container in a pod:</p>
<pre><code>kubectl attach <pod-name> -c <container-name> [-n namespace] -i -t
-i (default:false) Pass stdin to the container
-t (default:false) Stdin is a TTY
</code></pre>
<p>It should allow you to interact with the debugger in the container.
You may need to adjust your pod to use a debugger, so the following article might be helpful:</p>
<ul>
<li><a href="https://medium.com/@vladyslav.krylasov/how-to-use-pdb-inside-a-docker-container-eeb230de4d11" rel="nofollow noreferrer">How to use PDB inside a docker container.</a></li>
</ul>
<p>There is also <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/local-debugging/" rel="nofollow noreferrer"><strong>telepresence</strong></a> tool that helps you to use different approach of application debugging:</p>
<blockquote>
<p>Using <strong>telepresence</strong> allows you to use custom tools, such as a debugger and IDE, for a local service and provides the service full access to ConfigMap, secrets, and the services running on the remote cluster.</p>
<p>Use the <code>--swap-deployment</code> option to swap an existing deployment with the Telepresence proxy. Swapping allows you to run a service locally and connect to the remote Kubernetes cluster. The services in the remote cluster can now access the locally running instance.</p>
</blockquote>
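<p>As a rough example of that workflow (a sketch; the deployment name <code>my-django-app</code> and port are hypothetical and assume telepresence is installed locally):</p>
<pre><code>telepresence --swap-deployment my-django-app --expose 8000 \
  --run python manage.py runserver 0.0.0.0:8000
</code></pre>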
|
<p><strong>Some quick background:</strong> creating an app in golang, running on minikube on MacOS 10.14.2</p>
<pre><code>karlewr [0] $ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-18T11:37:06Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.4", GitCommit:"f49fa022dbe63faafd0da106ef7e05a29721d3f1", GitTreeState:"clean", BuildDate:"2018-12-14T06:59:37Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p><strong>The Issue:</strong>
I cannot access my pod via its pod IP from inside my cluster. This problem is only happening with this one pod, which leads me to believe I have a misconfiguration somewhere.</p>
<p>My pods spec is as follows:</p>
<pre><code>containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
ports:
- name: http
containerPort: 8080
protocol: TCP
livenessProbe:
httpGet:
path: /ping
port: 8080
initialDelaySeconds: 60
readinessProbe:
httpGet:
path: /ping
port: 8080
initialDelaySeconds: 60
</code></pre>
<p>What's weird is that I can access it by port-forwarding that pod on port <code>8080</code> and running <code>curl localhost:8080/ping</code> before the liveness and readiness probes run and after the pod has been initialized. This returns 200 OK.</p>
<p>Also during this time before <code>CrashLoopBackoff</code>, if I ssh into my minikube node and run <code>curl http://172.17.0.21:8080/ping</code> I get <code>curl: (7) Failed to connect to 172.17.0.21 port 8080: Connection refused</code>. The IP used is my pod's IP.</p>
<p>But then when I describe the pod after the <code>initialDelaySeconds</code> period, I see this:</p>
<pre><code> Warning Unhealthy 44s (x3 over 1m) kubelet, minikube Readiness probe failed: Get http://172.17.0.21:8080/ping: dial tcp 172.17.0.21:8080: connect: connection refused
Warning Unhealthy 44s (x3 over 1m) kubelet, minikube Liveness probe failed: Get http://172.17.0.21:8080/ping: dial tcp 172.17.0.21:8080: connect: connection refused
</code></pre>
<p>Why would my connection be getting refused only from the pod's IP?</p>
<p><strong>Edit</strong> I am not running any custom networking things, just minikube out of the box</p>
| <blockquote>
<p>Why would my connection be getting refused only from the pod's IP?</p>
</blockquote>
<p>Because your program is apparently only listening on localhost (aka <code>127.0.0.1</code> aka <code>lo0</code>)</p>
<p>Without knowing more about your container we can't advise you further, but that's almost certainly the problem based on your description.</p>
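<p>Since the question mentions a Go app, here is the usual difference (a sketch, not your actual code):</p>
<pre><code>package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/ping", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("pong"))
	})

	// Only reachable via localhost / kubectl port-forward, NOT via the pod IP:
	// log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))

	// Reachable via the pod IP, so probes and other pods can connect:
	log.Fatal(http.ListenAndServe(":8080", nil))
}
</code></pre>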
|
<p>I’ve created a <code>Cronjob</code> in kubernetes with schedule (<code>8 * * * *</code>), with the job’s <code>backoffLimit</code> defaulting to 6 and the pod’s <code>RestartPolicy</code> set to <code>Never</code>; the pods are deliberately configured to FAIL. As I understand it (for a podSpec with <code>restartPolicy : Never</code>), the Job controller will try to create <code>backoffLimit</code> number of pods and then mark the job as <code>Failed</code>, so I expected that there would be 6 pods in <code>Error</code> state.</p>
<p>This is the actual Job’s status:</p>
<pre><code>status:
conditions:
- lastProbeTime: 2019-02-20T05:11:58Z
lastTransitionTime: 2019-02-20T05:11:58Z
message: Job has reached the specified backoff limit
reason: BackoffLimitExceeded
status: "True"
type: Failed
failed: 5
</code></pre>
<p>Why were there only 5 failed pods instead of 6? Or is my understanding about <code>backoffLimit</code> in-correct?</p>
| <p>In short: you might not be seeing all of the created pods because the schedule period in the cronjob is too short.</p>
<p>As described in <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#pod-backoff-failure-policy" rel="noreferrer">documentation</a>:</p>
<blockquote>
<p>Failed Pods associated with the Job are recreated by the Job
controller with an exponential back-off delay (10s, 20s, 40s …) capped
at six minutes. The back-off count is reset if no new failed Pods
appear before the Job’s next status check.</p>
</blockquote>
<p>If a new job is scheduled before the Job controller has a chance to recreate a pod (keeping in mind the delay after the previous failure), the Job controller starts counting from one again.</p>
<p>I reproduced your issue in GKE using the following <code>.yaml</code>:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: hellocron
spec:
schedule: "*/3 * * * *" #Runs every 3 minutes
jobTemplate:
spec:
template:
spec:
containers:
- name: hellocron
image: busybox
args:
- /bin/cat
- /etc/os
restartPolicy: Never
backoffLimit: 6
suspend: false
</code></pre>
<p>This job will fail because file <code>/etc/os</code> doesn't exist.</p>
<p>And here is an output of <code>kubectl describe</code> for one of the jobs:</p>
<pre><code>Name: hellocron-1551194280
Namespace: default
Selector: controller-uid=b81cdfb8-39d9-11e9-9eb7-42010a9c00d0
Labels: controller-uid=b81cdfb8-39d9-11e9-9eb7-42010a9c00d0
job-name=hellocron-1551194280
Annotations: <none>
Controlled By: CronJob/hellocron
Parallelism: 1
Completions: 1
Start Time: Tue, 26 Feb 2019 16:18:07 +0100
Pods Statuses: 0 Running / 0 Succeeded / 6 Failed
Pod Template:
Labels: controller-uid=b81cdfb8-39d9-11e9-9eb7-42010a9c00d0
job-name=hellocron-1551194280
Containers:
hellocron:
Image: busybox
Port: <none>
Host Port: <none>
Args:
/bin/cat
/etc/os
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 26m job-controller Created pod: hellocron-1551194280-4lf6h
Normal SuccessfulCreate 26m job-controller Created pod: hellocron-1551194280-85khk
Normal SuccessfulCreate 26m job-controller Created pod: hellocron-1551194280-wrktb
Normal SuccessfulCreate 26m job-controller Created pod: hellocron-1551194280-6942s
Normal SuccessfulCreate 25m job-controller Created pod: hellocron-1551194280-662zv
Normal SuccessfulCreate 22m job-controller Created pod: hellocron-1551194280-6c6rh
Warning BackoffLimitExceeded 17m job-controller Job has reached the specified backoff limit
</code></pre>
<p>Note the delay between creation of pods <code>hellocron-1551194280-662zv</code> and <code>hellocron-1551194280-6c6rh</code>.</p>
|
<p>I am trying to configure a highly available cluster with 3 master nodes. I followed the
<a href="https://kazuhisya.netlify.com/2018/02/08/how-to-install-k8s-on-el7/" rel="nofollow noreferrer">https://kazuhisya.netlify.com/2018/02/08/how-to-install-k8s-on-el7/</a>
tutorial.</p>
<p>kubeadm version.</p>
<pre><code>kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-01T20:05:53Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>config file</p>
<pre><code>apiVersion: kubeadm.k8s.io/v1alpha3
kind: MasterConfiguration
api:
advertiseAddress: 10.1.1.20
etcd:
endpoints:
- https://${PEER_HOST1IP}:2379
- https://${PEER_HOST2IP}:2379
- https://${PEER_HOST3IP}:2379
caFile: /etc/kubernetes/pki/etcd/ca.pem
certFile: /etc/kubernetes/pki/etcd/client.pem
keyFile: /etc/kubernetes/pki/etcd/client-key.pem
networking:
podSubnet: 10.244.0.0/16
apiServerCertSANs:
- 10.1.1.20
apiServerExtraArgs:
apiserver-count: "3"
</code></pre>
<p>When I try to run the command below, <code>kubeadm init</code> fails with an error.</p>
<p>command.</p>
<pre><code>kubeadm init --config=config.yaml
</code></pre>
<p>error.</p>
<pre><code>W0227 18:22:25.467977 6564 strict.go:47] unknown configuration schema.GroupVersionKind{Group:"kubeadm.k8s.io", Version:"v1alpha3", Kind:"MasterConfiguration"} for scheme definitions in "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/scheme/scheme.go:31" and "k8s.io/kubernetes/cmd/kubeadm/app/componentconfigs/scheme.go:28"
</code></pre>
<p>I just need to init kubeadm with a config.yaml file. I don't know what goes wrong, and I could not find a proper answer.</p>
<p>Your help will be highly appreciated.</p>
| <p>From the Kubernetes <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/" rel="nofollow noreferrer">docs</a>:</p>
<blockquote>
<p>Kubernetes 1.11 and later, the default configuration can be printed out using the kubeadm config print command. It is recommended that you migrate your old v1alpha3 configuration to v1beta1 using the kubeadm config migrate command, because v1alpha3 will be removed in Kubernetes 1.14.</p>
</blockquote>
<p>From Kubernetes v1.13 onwards, <code>v1alpha3</code> has been deprecated. You need to change the apiVersion to <code>v1beta1</code>:</p>
<pre><code>kubeadm config migrate --old-config config.yaml
</code></pre>
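<p>After migration, the configuration is split into the v1beta1 kinds. A rough sketch of what it may end up looking like (field names from the v1beta1 API, values taken from your original file; verify against the actual output of <code>kubeadm config migrate</code>):</p>
<pre><code>apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.1.1.20
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
etcd:
  external:
    endpoints:
    - https://${PEER_HOST1IP}:2379
    - https://${PEER_HOST2IP}:2379
    - https://${PEER_HOST3IP}:2379
    caFile: /etc/kubernetes/pki/etcd/ca.pem
    certFile: /etc/kubernetes/pki/etcd/client.pem
    keyFile: /etc/kubernetes/pki/etcd/client-key.pem
networking:
  podSubnet: 10.244.0.0/16
apiServer:
  certSANs:
  - 10.1.1.20
  extraArgs:
    apiserver-count: "3"
</code></pre>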
|