prompt | response
---|---|
<p>OS: Mac OS 10.13.6 Terminal</p>
<p>Kubectl for Remote Access</p>
<p>When I execute the command with "--insecure-skip-tls-verify" it works fine. </p>
<pre><code>dev-env at balabimac in ~/kthw
$ kubectl --insecure-skip-tls-verify --context=kubernetes-me get pods
No resources found.
dev-env at balabimac in ~/kthw
$ kubectl --insecure-skip-tls-verify --context=kubernetes-me get nodes
NAME STATUS ROLES AGE VERSION
balab29123.mylabserver.com NotReady <none> 4h v1.10.2
balab29124.mylabserver.com NotReady <none> 4h v1.10.2
dev-env at balabimac in ~/kthw
$ kubectl --insecure-skip-tls-verify --context=kubernetes-me version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-28T15:20:58Z", GoVersion:"go1.11", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>But I am unable to access the cluster using the commands below; I am stuck.</p>
<pre><code>dev-env at balabimac in ~/kthw
$ kubectl config use-context kubernetes-me
Switched to context "kubernetes-me".
dev-env at balabimac in ~/kthw
$ kubectl get pods
Unable to connect to the server: x509: certificate is valid for balab29121.mylabserver.com, balab29122.mylabserver.com, balab29126.mylabserver.com, 127.0.0.1.localhost, kubernetes.default, not localhost
dev-env at balabimac in ~/kthw
$ kubectl get nodes
Unable to connect to the server: x509: certificate is valid for balab29121.mylabserver.com, balab29122.mylabserver.com, balab29126.mylabserver.com, 127.0.0.1.localhost, kubernetes.default, not localhost
dev-env at balabimac in ~/kthw
</code></pre>
| <p>It looks like when you generated the Kubernetes API server certificate, you put <strong>127.0.0.1.localhost</strong> instead of <strong>127.0.0.1,localhost</strong>. It is just a small typo, but as a result the cert is not properly signed for localhost, which leads to this error when you try to connect. Regenerate the Kube API server cert with the correct values, then copy the cert files to your control nodes and put them in the correct place, replacing the old files.</p>
<pre><code>dev-env at balabimac in ~/kthw
$ kubectl get pods Unable to connect to the server: x509: certificate is valid for balab29121.mylabserver.com, balab29122.mylabserver.com, balab29126.mylabserver.com, **127.0.0.1.localhost**, kubernetes.default, not localhost
</code></pre>
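<p>For reference, a sketch of the regeneration step, assuming you are following the Kubernetes the Hard Way <code>cfssl</code> workflow (the hostnames/IPs below are illustrative and must match your own control plane):</p>
<pre><code>cfssl gencert \
  -ca=ca.pem \
  -ca-key=ca-key.pem \
  -config=ca-config.json \
  -hostname=balab29121.mylabserver.com,balab29122.mylabserver.com,balab29126.mylabserver.com,127.0.0.1,localhost,kubernetes.default \
  -profile=kubernetes \
  kubernetes-csr.json | cfssljson -bare kubernetes
</code></pre>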
|
<p>Does anyone know if and how I can access the original IP address of the client that performed the request when I have a LoadBalancer service in OpenShift? It is overwritten by the load balancer's IP address.</p>
<p>There should be a way to access that information, maybe in the optional TCP data or similar. Does anyone know?</p>
<p>Or maybe I can use a different approach to forward the packet without losing that information while still being able to scale the service to multiple Pods. Thank you for any hint!</p>
<p>Best regards, Dominic</p>
<p>OpenShift Master: v3.9.41
Kubernetes Master: v1.9.1</p>
| <p>IP addresses are source-NATed by default in Kubernetes services as of K8s 1.5, so you won't see the source IP. For a <code>LoadBalancer</code> type of Service you can set the <a href="https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer" rel="nofollow noreferrer"><code>externalTrafficPolicy</code> field to <code>Local</code></a> in its spec to preserve the source IP address:</p>
<pre><code>$ kubectl patch svc yourservice -p '{"spec":{"externalTrafficPolicy":"Local"}}'
</code></pre>
<p>It should work out of the box for the GCE and Azure <a href="https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/" rel="nofollow noreferrer">cloud providers</a>. For others, follow the 'Cross platform support' section <a href="https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer" rel="nofollow noreferrer">here</a>.</p>
<p>Quoted from the docs:</p>
<blockquote>
<p>As of Kubernetes 1.5, support for source IP preservation through Services with Type=LoadBalancer is only implemented in a subset of cloudproviders (GCP and Azure). </p>
</blockquote>
<p>Note that when you set the <code>externalTrafficPolicy</code> field to <code>Local</code>, only the nodes where your pod is running will show as healthy, because they are the ones replying for your service.</p>
<p>OpenShift uses the same Kubernetes Service syntax.</p>
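<p>Equivalently, the field can be declared directly in the Service manifest; a minimal sketch (service name, ports and selector are placeholders):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: yourservice
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: yourapp
  ports:
  - port: 80
    targetPort: 8080
</code></pre>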
|
<p>I have a problem with my ingress and my service: I cannot get it to work so that when I connect to my server's IP, I am redirected to the service I have associated with port 80, which is my website. Here are the configuration files and the description of the ingress:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
  name: bookstack
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    service: mysql
  name: mysql
  namespace: bookstack
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: mysql
    spec:
      containers:
      - env:
        - name: MYSQL_DATABASE
          value: bookstack
        - name: MYSQL_PASS
          value: pass
        - name: MYSQL_ROOT_PASSWORD
          value: root
        - name: MYSQL_USER
          value: user
        image: mysql:5.7
        name: mysql
        ports:
        - containerPort: 3306
      restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: mysql
  name: mysql
  namespace: bookstack
spec:
  type: NodePort
  ports:
  - name: "3306"
    port: 3306
    targetPort: 3306
  selector:
    service: mysql
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: bookstack
  name: bookstack
  namespace: bookstack
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: bookstack
    spec:
      containers:
      - env:
        - name: namespace
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: podname
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: nodename
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: DB_DATABASE
          value: bookstack
        - name: DB_HOST
          value: mysql
        - name: DB_PASSWORD
          value: root
        - name: DB_USERNAME
          value: root
        image: solidnerd/bookstack:latest
        name: bookstack
        ports:
        - name: http
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: bookstack
  name: bookstack
  namespace: bookstack
spec:
  type: NodePort
  ports:
  - name: http-port
    port: 80
    protocol: TCP
  selector:
    app: bookstack
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: http
  namespace: bookstack
spec:
  backend:
    serviceName: bookstack
    servicePort: http-port
</code></pre>
<p>This is what appears on my ingress:</p>
<pre><code>Name: http
Namespace: bookstack
Address:
Default backend: bookstack:http-port (10.36.0.22:80)
Rules:
Host Path Backends
---- ---- --------
* * bookstack:http-port (10.36.0.22:80)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"http","namespace":"bookstack"},"spec":{"backend":{"serviceName":"bookstack","servicePort":"http-port"}}}
Events: <none>
</code></pre>
<p>It doesn't return any external IP to connect to. Why could that be? I want to avoid using LoadBalancer as the service type.</p>
| <p>The main problem was that I had not activated the load balancer that Google Kubernetes Engine offers by default; without it active, an external IP could not be generated because there was no balancer. There are two solutions: either activate GKE's default load balancer or create a Service of type LoadBalancer.</p>
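<p>A minimal sketch of the second option, reusing the bookstack service from the question (only the <code>type</code> changes):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: bookstack
  namespace: bookstack
spec:
  type: LoadBalancer
  ports:
  - name: http-port
    port: 80
    protocol: TCP
  selector:
    app: bookstack
</code></pre>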
<p>It is also important to define the readinessProbe and livenessProbe within the deployment.</p>
<p>An example:</p>
<pre><code>readinessProbe:
  httpGet:
    path: /login
    port: 80
  initialDelaySeconds: 5
  timeoutSeconds: 1
  periodSeconds: 15
livenessProbe:
  httpGet:
    path: /login
    port: 80
  initialDelaySeconds: 15
  timeoutSeconds: 1
  periodSeconds: 15
</code></pre>
|
<p>I'm hoping someone can help me here because I'm stuck. </p>
<p>I'm moving over our nginx config from a traditional nginx/node server config whereby both nginx and node server are on the same machine.</p>
<p>In Kubernetes, the ingress controller (nginx) obviously lives in another container. </p>
<p>Where I'm getting stuck is reimplementing our rules that disable access logging for images and assets using location blocks.</p>
<p>Our configuration looks like </p>
<pre><code>location ~* \.(?:jpg|jpeg|gif|png|ico|cur|mp4|ogg|ogv|webm|htc)$ {
access_log off;
expires 2M;
add_header Cache-Control "public, max-age=5184000"; # 5184000 is 60 days
}
</code></pre>
<p>When I implement this same block in a <code>server-snippet</code> it matches, but all the assets throw a 404. </p>
<p>I did some Googling and found an answer that might explain why here <a href="https://stackoverflow.com/a/52711388/573616">https://stackoverflow.com/a/52711388/573616</a></p>
<p>but the suggested answer hints to use an <code>if</code> block instead of a <code>location</code> block because the location interferes with the proxy upstream, however, disabling access logs is not possible from inside the <code>if</code> block, only from a <code>location</code> context.</p>
<p>The rest of my ingress looks like (everything else is default)</p>
<pre><code>real_ip_header X-Forwarded-For;
real_ip_recursive on;
underscores_in_headers on;
gzip_types text/css application/x-javascript application/javascript application/json image/svg+xml;
client_max_body_size 5M;
proxy_buffers 8 16k;
proxy_set_header X-Request-Start "t=${msec}";
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $http_host;
proxy_redirect off;
</code></pre>
<p>The images live at /images/ on the upstream server path.</p>
<p>So I'm back to trying to figure out how to get these location blocks working so I can actually disable the access logs for these images from a <code>server-snippet</code></p>
<p>So can anyone tell me how to get the above location block to not throw 404's for assets in an ingress controller?</p>
| <p>I'm assuming that your backend is serving your assets, so I think the problem is that your <code>location {}</code> block doesn't have an upstream like the regular paths defined in the nginx ingress. </p>
<p>There's a lot of lua code in the <code>nginx.conf</code> of your nginx-ingress-controller so it might take time to understand, but you can copy your <code>nginx.conf</code> locally:</p>
<pre><code>$ kubectl cp nginx-ingress-controller-xxxxxxxxx-xxxxx:nginx.conf .
</code></pre>
<p>Check the <code>location {}</code> blocks that are defined for your current services and copy their contents at the bottom of your <code>server-snippet</code> <code>location {}</code> block, like this:</p>
<p>I believe a <code>server-snippet</code> like this would do:</p>
<pre><code>location ~* \.(?:jpg|jpeg|gif|png|ico|cur|mp4|ogg|ogv|webm|htc)$ {
access_log off;
expires 2M;
add_header Cache-Control "public, max-age=5184000"; # 5184000 is 60 days
<== add what you copied here
set $namespace "k8s-namespace";
set $ingress_name "ingress-name";
set $service_name "service-name";
set $service_port "80";
set $location_path "/images";
...
...
...
proxy_http_version 1.1;
proxy_cookie_domain off;
proxy_cookie_path off;
# In case of errors try the next upstream server before returning an error
proxy_next_upstream error timeout;
proxy_next_upstream_tries 3;
proxy_pass http://upstream_balancer;
proxy_redirect off;
}
</code></pre>
|
<p>My ingress controller is working and I am able to access the service outside the cluster at <code>http://(externalIP)/path</code> using an HTTP GET request from a REST client. However, I had to specify the <code>"Host"</code> header, set to the <code>host</code> value of my Ingress resource, for this to work. Because of this, I am not able to hit <code>http://(externalIP)/path</code> from my web browser. Is there any way I can enable access from my external web browser without having to specify <code>"Host"</code> in the request header?</p>
<p><strong>Ingress Resource :</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: master1.saas.com
    http:
      paths:
      - backend:
          serviceName: gen-devops
          servicePort: 10311
        path: /*
</code></pre>
<p><strong>Ingress Service :</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  externalIPs:
  - 172.16.32.85
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
</code></pre>
| <p>I assume you want to use this for testing.</p>
<p>If you are using any *nix flavor OS (MacOS, Linux) you can add an entry to your <code>/etc/hosts</code> file, something like this:</p>
<pre><code>172.16.32.85 master1.saas.com
</code></pre>
<p>If you are using any Windows box you can add the same entry in <code>C:\Windows\System32\Drivers\etc\hosts</code></p>
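<p>If you just want a quick test without touching the hosts file, sending the Host header explicitly should behave the same way (the path is illustrative):</p>
<pre><code>curl -H "Host: master1.saas.com" http://172.16.32.85/path
</code></pre>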
|
<p>I'm having trouble rendering a template for a Helm Chart that needs to have easily expandable amounts of nodes and replicas. I get the below error message. What's odd is I do not get the below error message if I remove the inner loop to not nest loops. I'm completely new to go, but this seems valid. I'm at a loss.</p>
<p>Error:</p>
<pre><code>$ helm install . --dry-run --debug
Error: render error in "app-on-k8s/templates/configmap_configd.yaml": template: app-on-k8s/templates/configmap_configd.yaml:18:77: executing "app-on-k8s/templates/configmap_configd.yaml" at <.Values.nodeCount>: can't evaluate field Values in type int
</code></pre>
<p>This is the relevant section from my values.yaml file:</p>
<pre><code># number of nodes / shards
nodeCount: 5
replicaCount: 3
</code></pre>
<p>And the relevant section from my template file:</p>
<pre><code> <default>
{{range $i, $e := until (atoi (printf "%d" (int64 .Values.nodeCount))) }}
<node>
{{range $j, $k := until (atoi (printf "%d" (int64 .Values.replicaCount))) }} #line 18
<replica>
<host>{{ $.Release.Name }}-{{$j}}</host>
<port>{{ $.Values.service.rpc_port }}</port>
</replica>
{{end}}
</node>
{{end}}
</default>
</code></pre>
| <p>The problem is that when you use <code>.Values.replicaCount</code> in the second loop, the <code>.</code> scope has changed: inside <code>range</code> it points to the current loop element produced by <code>until</code>, which is an <code>int</code>. So <code>.Values.replicaCount</code> is evaluated against that <code>int</code>, and since an <code>int</code> has no <code>Values</code> field you get this error.</p>
<p>Use <code>$.Values.replicaCount</code> instead of <code>.Values.replicaCount</code> in second loop.</p>
<p>Ref: <a href="https://docs.helm.sh/chart_template_guide/#looping-with-the-range-action" rel="noreferrer">helm.sh</a></p>
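<p>For reference, the inner loop from the question with only that one change applied (line 18 becomes):</p>
<pre><code>{{range $j, $k := until (atoi (printf "%d" (int64 $.Values.replicaCount))) }} #line 18
</code></pre>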
|
<p>What is the reasoning for using service discovery like <a href="https://www.consul.io/discovery.html" rel="nofollow noreferrer">Consul</a> instead of <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">Kubernetes internal DNS</a>, like <code>my-svc.my-namespace.svc.cluster.local</code>? I am wondering because I am starting on a fairly large project and want to make sure I'm not setting myself up for failure using the build in Kubernetes DNS.</p>
| <p>The service discovery by DNS name itself (inside the cluster) is somewhat similar. </p>
<p>Other technologies add more functionality on top of that, such as encrypted traffic, connection authorization, proxy sidecars, etc. </p>
<p>You can take a look here: <a href="https://www.hashicorp.com/blog/consul-1-2-service-mesh" rel="nofollow noreferrer">https://www.hashicorp.com/blog/consul-1-2-service-mesh</a> as an example. </p>
<p>Those frameworks help reduce code in each container (for example: retry/circuit breaker) by adding this 'plumbing' at the service level. </p>
<p>--M</p>
|
<p>I'm trying to follow this <a href="https://blog.inkubate.io/install-and-configure-a-multi-master-kubernetes-cluster-with-kubeadm/" rel="nofollow noreferrer">tutorial</a>. </p>
<ol>
<li>What would be the advantage of generating the certs yourself instead of depending on kubeadm? </li>
<li>if you create the certs yourself, does the auto-rotation happens after setting up the cluster from kubeadm?</li>
</ol>
<p>Thanks!</p>
| <ol>
<li><p>No major advantage; kubeadm does the same thing: it generates self-signed certs. The only minor advantage is that you could add some custom values in the <a href="https://en.wikipedia.org/wiki/Certificate_signing_request" rel="nofollow noreferrer">CSR</a>, such as a City, Organization, etc.</p></li>
<li><p>Not really.</p>
<ul>
<li>There's a <a href="https://kubernetes.io/docs/tasks/tls/certificate-rotation/#enabling-client-certificate-rotation" rel="nofollow noreferrer">kubelet certificate</a> rotation flag <code>--rotate-certificates</code> that needs to be enabled.</li>
<li><p>There's also the certificate rotation from the masters and <code>kubeadm</code> can help with that with these <a href="https://stackoverflow.com/questions/46360361/invalid-x509-certificate-for-kubernetes-master?answertab=votes#tab-top">commands</a>:</p>
<pre><code>mkdir /etc/kubernetes/pkibak
mv /etc/kubernetes/pki/* /etc/kubernetes/pkibak
rm /etc/kubernetes/pki/*
kubeadm init phase certs all --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=x.x.x.x,x.x.x.x
systemctl restart docker
</code></pre></li>
</ul></li>
</ol>
<p>If you'd like to regenerate the <code>admin.conf</code> file, you can also use <code>kubeadm</code>:</p>
<pre><code>$ kubeadm init phase kubeconfig admin \
--cert-dir /etc/kubernetes/pki \
--kubeconfig-dir /tmp/.
</code></pre>
|
<p>I have a Kubernetes installation with a master and 1 node. </p>
<p>It is configured and everything is working very well. </p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mantis-gfs 1/1 Running 1 22h
mongodb-gfs 1/1 Running 0 14h
</code></pre>
<p>I exposed the pod mongodb-gfs:</p>
<pre><code>$ kubectl expose pod mongodb-gfs --port=27017 --external-ip=10.9.8.100 --name=mongodb --labels="env=development"
</code></pre>
<p>The external IP 10.9.8.100 is the IP of the Kubernetes master node.</p>
<p>The service was created successfully.</p>
<pre><code>$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
glusterfs-cluster ClusterIP 10.111.96.254 <none> 1/TCP 23d
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 29d
mongodb ClusterIP 10.100.149.90 10.9.8.100 27017/TCP 1m
</code></pre>
<p>Now i am able to access the mongo using:</p>
<pre><code>mongo 10.9.8.100:27017
</code></pre>
<p>And here is the problem. It works sometimes, but sometimes not.
I connect once and I get the shell; I connect a second time and get:</p>
<pre><code>$ mongo 10.9.8.100:27017
MongoDB shell version v3.4.17
connecting to: mongodb://10.9.8.100:27017/test
2018-11-01T09:27:23.524+0100 W NETWORK [thread1] Failed to connect to 10.9.8.100:27017, in(checking socket for error after poll), reason: Connection refused
2018-11-01T09:27:23.524+0100 E QUERY [thread1] Error: couldn't connect to server 10.9.8.100:27017, connection attempt failed :
connect@src/mongo/shell/mongo.js:240:13
@(connect):1:6
exception: connect failed
</code></pre>
<p>Then I try again and it works, try again and it works, try again and it does not work...</p>
<p>Any clues what may cause the problem?</p>
| <p>I found the problem and the solution. The problem was the pod definitions: both pods, mongodb-gfs and mantis-gfs, had the same label settings, and I then exposed services with the same label "env=development". As a result, the traffic that I expected to always go to one pod was "load-balanced" between pods of different types (because they share the same labels). </p>
<p>Changing the label in the mongodb-gfs pod definition solved the connection issues.</p>
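<p>A minimal sketch of the fix, assuming only the pod metadata changes (the label key/value are illustrative); with a unique label on the pod, the selector that <code>kubectl expose</code> generates then matches only that pod:</p>
<pre><code>metadata:
  name: mongodb-gfs
  labels:
    app: mongodb-gfs   # unique to this pod, no longer shared with mantis-gfs
</code></pre>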
|
<p>Let's say I have a Flask app, a PostgreSQL database, and a Redis app. What is the best-practice way to develop those apps locally when they will later be deployed to Kubernetes? </p>
<p>I have tried to develop in minikube with ksync, but I have difficulties getting detailed debug log information. </p>
<p>Any ideas?</p>
| <p>What we do with our systems is develop and test them locally. I am not very knowledgeable about Flask and ksync, but if, for example, you are using the <a href="https://www.lagomframework.com/" rel="nofollow noreferrer">Lagom Microservices Framework</a> in Java, you run your app locally using the SBT shell, where you can view all your logs. We then automate the deployment using <a href="https://developer.lightbend.com/docs/lightbend-orchestration/current/" rel="nofollow noreferrer">Lightbend Orchestration</a>.</p>
<p>When you then decide to test the app on Kubernetes, you can choose to use minikube, but you have to configure the logging properly. You can configure centralised logging for Kubernetes using the <a href="https://blog.ptrk.io/how-to-deploy-an-efk-stack-to-kubernetes/" rel="nofollow noreferrer">EFK</a> stack. This will collect all the logs from the various components of your app and store them in Elasticsearch. You can then view these logs in the Kibana dashboard, which lets you do a lot: view logs for a given period, or search logs by k8s namespace or by container. </p>
|
<p>In a Kubernetes cluster I have a per-node HTTP node-specific service deployed (using a DaemonSet). This service returns node-specific data (which is otherwise not available via the cluster/remote API). I cannot make use of a Kubernetes Service, as this would result in kind of a service roulette, as the client cannot control the exact node to which to connect (forward the HTTP request) to. As the service needs to return node-specific data, this would cause data to be returned for a random node, but not for the node the client wants.</p>
<p>My suspicion is that I need a reverse proxy that uses part of its own URL path to relay an incoming HTTP request deterministically to exactly the node the client indicates. This proxy in turn could be accessed by clients through the cluster/remote API's service proxy functionality.</p>
<ul>
<li><code>http://myservice/node1/..</code> --> <code>http://node1:myservice/...</code></li>
<li><code>http://myservice/node2/...</code> --> <code>http://node2:myservice/...</code></li>
<li>...</li>
</ul>
<p>Is there a ready-made pod (or Helm chart) available that maps a service running on all cluster nodes to a single proxy URL, with some path component specifying the node whose service instance to relay to? Is there some way to restrict the reverse proxy to relay only to those nodes being specified in the DaemonSet of the pod spec defining my per-node service?</p>
<p>Additionally, is there some ready-made "hub page" available for the reverse proxy, listing/linking to only those nodes where my service is currently running?</p>
<p>Or is this something where I need to create my own reverse proxy setup specifically? Is there some integration between, e.g. nginx and Kubernetes?</p>
| <p>It is almost impossible if you use a DaemonSet, because you can't add a unique label to each pod in a DaemonSet. If you need to distribute one pod per node you can use <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="nofollow noreferrer">pod (anti-)affinity</a> with a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> or <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployments</a>.</p>
<p>Then create a service for each node:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: svc-for-node1
spec:
  selector:
    nodename: unique-label-for-pod-on-node
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9376
</code></pre>
<p>And final setup <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress:</a></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /svc-for-node1
        backend:
          serviceName: svc-for-node1
          servicePort: 80
      - path: /svc-for-node2
        backend:
          serviceName: svc-for-node2
          servicePort: 80
</code></pre>
|
<p>The configuration I have is Jenkins on Kubernetes and the project is written in PHP.</p>
<p>The issue here is that the pod is attached to an ingress (which is then exposed through a load balancer using GCE), and when the pod is unhealthy it won't be added.</p>
<p>The first time I deploy the project from scratch it works; after I update it, it fails because it's unhealthy.</p>
<p>When I describe the pod I get the following warning:</p>
<blockquote>
<p>Readiness probe failed: Get <a href="http://10.32.1.71:80/settings" rel="noreferrer">http://10.32.1.71:80/settings</a>: net/http:
request canceled (Client.Timeout exceeded while awaiting headers)</p>
</blockquote>
<p>My production configuration:</p>
<pre><code># Configuration for the SQL connection
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: wobbl-main-backend-production
spec:
  replicas: 1
  template:
    metadata:
      name: backend
      labels:
        app: wobbl-main
        role: backend
        env: production
    spec:
      containers:
      - name: backend
        image: gcr.io/cloud-solutions-images/wobbl-mobile-backend:1.0.0
        resources:
          limits:
            memory: "500Mi"
            cpu: "100m"
        imagePullPolicy: Always
        readinessProbe:
          httpGet: # make an HTTP request
            port: 80 # port to use
            path: /settings # endpoint to hit
            scheme: HTTP # or HTTPS
          initialDelaySeconds: 3 # how long to wait before checking
          periodSeconds: 5 # how long to wait between checks
          successThreshold: 1 # how many successes to hit before accepting
          failureThreshold: 2 # how many failures to accept before failing
          timeoutSeconds: 10 # how long to wait for a response
        ports:
        - name: backend
          containerPort: 80
</code></pre>
<p>Any hints on how to solve this.</p>
| <p>The error message implies your HTTP request is not successful. The readiness probe needs to succeed for the pod to be added as an endpoint for the service exposing it.</p>
<blockquote>
<p>1) kubectl get po -o wide</p>
</blockquote>
<p>This is so you can get the pod's cluster IP</p>
<blockquote>
<p>2) kubectl exec -t [another_pod] -- curl -I [pod's cluster IP]</p>
</blockquote>
<p>If you get a 200 response, you know the path is configured properly and the readiness probe should be passing. If you get anything other than a 200 response, this is why the readiness probe fails and you need to check your image.</p>
|
<p>I am having trouble enabling webhook authentication for the kubelet API. My cluster is deployed with kubeadm. <a href="https://stackoverflow.com/questions/44855609/enabling-kubelet-server-bearer-token-authentication">This post is similar, but not the same issue</a></p>
<p>I can authenticate to my API server with a bearer token just fine:</p>
<pre><code>curl -k https://localhost:6443/api --header "Authorization: Bearer $TOKEN"
</code></pre>
<p>I cannot authenticate against the kubelet api with the same header. I have enabled the following on the API server:</p>
<pre><code>--authorization-mode=Node,RBAC
--anonymous-auth=false
--runtime-config=authentication.k8s.io/v1beta1=true,authorization.k8s.io/v1beta1=true
</code></pre>
<p>The following is enabled on the kubelet node(s) (via /var/lib/kubelet/config.yaml)</p>
<pre><code>address: 0.0.0.0
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
</code></pre>
<p>Despite this, I get a "403 forbidden" when curling the /metrics endpoint on the kubelet. Something to note, I can perform the same API call against a cluster deployed with KOPS just fine. I am not sure what the difference is. </p>
| <p>The 403 indicates you successfully authenticated (or you would have gotten a 401 error), the kubelet checked with the apiserver if you were authorized to access kubelet metrics (otherwise it would have just allowed it), it got a definitely response from the apiserver (otherwise you would have gotten a 500 error), and the apiserver indicated the authenticated user is not authorized to access kubelet metrics. </p>
<p>See <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/#kubelet-authorization" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet-authentication-authorization/#kubelet-authorization</a> for details about what permission needs to be granted to access various endpoints on the kubelet's API. For metrics, the <code>nodes/metrics</code> resource in the <code>""</code> apiGroup must be granted. </p>
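<p>As a sketch, an RBAC rule granting that permission could look like the following (the role/binding names and the subject are illustrative; bind it to whatever user or service account owns the bearer token you are using):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubelet-metrics-reader
rules:
- apiGroups: [""]
  resources: ["nodes/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-metrics-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubelet-metrics-reader
subjects:
- kind: ServiceAccount
  name: metrics-client
  namespace: default
</code></pre>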
|
<p>I am trying to install istio using helm. I get an error "forbidden: attempt to grant extra privileges". I am using Azure AKS cluster. </p>
<p>Here is what I've tried with no luck.</p>
<ul>
<li>Using --set rbac.create=false</li>
<li>Using a brand new cluster with RBAC turned off</li>
<li>Created cluster role binding for cluster admin for the current user</li>
</ul>
<blockquote>
<p>[root@59373cb6f571 codebase]# helm install k8s/istio/helm/istio --name
istio --namespace istio-system --set servicegraph.enabled=true --set
grafana.enabled=true Error: release istio failed:
clusterroles.rbac.authorization.k8s.io "istio-galley-istio-system" is
forbidden: attempt to grant extra privileges: [{[*]
[admissionregistration.k8s.io] [validatingwebhookconfigurations] []
[]} {[get] [config.istio.io] [*] [] []} {[list] [config.istio.io] [*]
[] []} {[watch] [config.istio.io] [*] [] []} {[get] [*] [deployments]
[istio-galley] []} {[get] [*] [endpoints] [istio-galley] []}]
user=&{system:serviceaccount:kube-system:default
8134fa11-dd8d-11e8-967b-56582c65801d [system:serviceaccounts
system:serviceaccounts:kube-system system:authenticated] map[]}
ownerrules=[] ruleResolutionErrors=[]</p>
</blockquote>
| <p>From the error message, Tiller, the Helm component running in the cluster, uses the default service account in the kube-system namespace to create resources in the istio-system namespace, but that account doesn't have enough privileges.</p>
<p>So you can either configure Tiller to use another service account and grant cluster-admin privileges to that service account, or keep using the default service account and grant cluster-admin to it. Since every Pod started in this namespace uses the default service account by default, giving full privileges to the default service account is not recommended.</p>
<p>For example, an excerpt from the Helm documentation: </p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
</code></pre>
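<p>After creating these objects, Tiller has to be (re)installed so that it actually runs under that service account; with Helm 2 this is typically:</p>
<pre><code>$ helm init --service-account tiller --upgrade
</code></pre>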
|
<p>I have an ingress controller and ingress resource running with all /devops mapped to devopsservice in the backend. When I try to hit "<a href="http://hostname/devops" rel="noreferrer">http://hostname/devops</a>" things work and I get a page (although without CSS and styles) with a set of hyperlinks for e.g one of them is "logs".</p>
<p>When I click on the "logs" hyperlink, it is redirecting me to <a href="http://hostname/logs" rel="noreferrer">http://hostname/logs</a> whereas I need it to be <a href="http://hostname/devops/logs" rel="noreferrer">http://hostname/devops/logs</a>.</p>
<p>Any idea what I can do?</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/add-base-url: "true"
spec:
  rules:
  - host: master1.dev.local
    http:
      paths:
      - backend:
          serviceName: devops1
          servicePort: 10311
        path: /devops
</code></pre>
| <p>Looks like your ingress is not serving anything <code>/devops/*</code>. Try adding another path <code>/devops/*</code> with the same backend. Basically this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/add-base-url: "true"
spec:
  rules:
  - host: master1.dev.local
    http:
      paths:
      - backend:
          serviceName: devops1
          servicePort: 10311
        path: /devops/*
      - backend:
          serviceName: devops1
          servicePort: 10311
        path: /devops
</code></pre>
<p>Update: the above has been <a href="https://github.com/kubernetes/ingress-nginx/pull/3174" rel="noreferrer">deprecated</a> in favor of something like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: rewrite
  namespace: default
spec:
  rules:
  - host: master1.dev.local
    http:
      paths:
      - backend:
          serviceName: devops1
          servicePort: 10311
        path: /devops(/|$)(.*)
</code></pre>
|
<p>Is it possible to share a ServiceAccount between namespaces or somehow start a pod with a ServiceAccount from a different namespace? </p>
<p>We are looking to use Vault to store common secret data between dynamic development environments. Following the very good walkthrough <a href="https://medium.com/@gmaliar/dynamic-secrets-on-kubernetes-pods-using-vault-35d9094d169" rel="noreferrer">HERE</a> we were able to authenticate and pull secrets for a single namespace. However, in our use case we will be creating a new namespace for each development environment during its lifetime. </p>
<p>If possible we would like to avoid having to also configure vault with a new auth backend for each namespace.</p>
| <p>When you create the Vault role, you can <a href="https://www.vaultproject.io/api/auth/kubernetes/index.html#bound_service_account_namespaces" rel="noreferrer">configure <code>bound_service_account_namespaces</code> to be the special value <code>*</code></a>, and allow a fixed service account name from any namespace. To adapt the "create role" <a href="https://www.vaultproject.io/docs/auth/kubernetes.html#configuration" rel="noreferrer">example from the documentation</a>:</p>
<pre><code>vault write auth/kubernetes/role/demo \
bound_service_account_names=vault-auth \
bound_service_account_namespaces='*' \
policies=default \
ttl=1h
</code></pre>
<p>You have to recreate the Kubernetes service account in every namespace, and it must have the exact name specified in the role. However, the Kubernetes service account is a single k8s object and it's not any harder than the Deployments, Services, ConfigMaps, and Secrets you already have; this pattern doesn't require any Vault reconfiguration.</p>
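<p>A minimal sketch of that per-namespace object, matching the <code>bound_service_account_names</code> above (the namespace name is a placeholder for whichever dynamic environment you create):</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: vault-auth
  namespace: dev-environment-1
</code></pre>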
<p>(If you're using a templating tool like Helm, the service account can't follow a naming convention like <code>{{ .Release.Name }}-{{ .Chart.Name }}</code>: Vault doesn't appear to have any sort of pattern matching on this name.)</p>
|
<p>I am running a HashiCorp Vault StatefulSet in K8s with 3 pods on three nodes.</p>
<p>After deployment I manually unseal Vault. Vault then stays unsealed all the time.</p>
<p>The problem is that when one of the nodes is restarted, the Vault pod comes back up in sealed mode.
<em>Is there any way to automatically unseal the Vault node by server-server-communication with one of the already unsealed pods?</em></p>
<p>I do not want to have to unseal the Vault pods manually when my Kubernetes environment gets updated and all nodes get restarted (rolling update - one by one).</p>
<p>I also do not want to store the unseal keys in a K8s secret or even a file as this makes encryption of my secrets useless.</p>
<p><em>This is my yaml:</em></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: vault
spec:
  clusterIP: None
  ports:
  - name: http
    port: 8200
  - name: server
    port: 8201
  selector:
    xyz.service: vault
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: vault
  labels:
    xyz.service: vault
spec:
  serviceName: "vault"
  selector:
    matchLabels:
      xyz.service: vault
  replicas: 3
  template:
    metadata:
      labels:
        xyz.service: vault
    spec:
      imagePullSecrets:
      - name: reg-dhc-xyzoms-pull-secret
      securityContext:
        runAsUser: 100
        fsGroup: 1000
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: xyz.service
                operator: In
                values:
                - vault
            topologyKey: kubernetes.io/hostname
      containers:
      - name: vault
        image: vault:0.11.0
        resources:
          requests:
            memory: "100Mi"
        env:
        - name: SKIP_SETCAP
          value: dontcare
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: "status.podIP"
        - name: "VAULT_CLUSTER_ADDR"
          value: "https://$(POD_IP):8201"
        ports:
        - name: http
          containerPort: 8200
          protocol: "TCP"
        - name: server
          containerPort: 8201
          protocol: "TCP"
</code></pre>
| <p>After some more digging I learned that what I want is impossible:
whenever a Vault instance is restarted it comes back sealed, and there is no way to unseal it automatically using Vault's own techniques.</p>
<p>You can find a lot of <em>"vault-unsealer"</em> implementations in GitHub and <a href="https://hub.docker.com/search/?q=unsealer" rel="nofollow noreferrer">Docker store</a> which try to fill this gap by regularly checking the Vault pods state and unsealing it if necessary.</p>
<p>It is suggested to use a K8s readinessProbe to prevent services from accessing a sealed Vault pod.</p>
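<p>A sketch of such a probe, assuming Vault listens for plain HTTP on 8200 (Vault's <code>sys/health</code> endpoint returns a non-200 status code while the instance is sealed, so the pod is only marked ready once unsealed):</p>
<pre><code>readinessProbe:
  httpGet:
    path: /v1/sys/health
    port: 8200
    scheme: HTTP
  initialDelaySeconds: 5
  periodSeconds: 10
</code></pre>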
<p>As there is no official "vault-unsealer" image, the public implementations must be used with caution. I ended up writing my own "vault-unsealer" to avoid security flaws and licensing problems.</p>
<p>My solution is a sidecar-container with each Vault pod. The unseal keys first have to be entered once manually with <code>kubectl exec ...</code> at one sidecar.
The sidecars regularly check all Vault pods and communicate the unseal keys to the other sidecar if sealed. If a sidecar receives unseal keys, they are stored in memory and are used to unseal its own Vault instance.</p>
<ol>
<li><code>kubectl apply -f vault.yaml</code> -> vault-0 starting</li>
<li><code>kubectl exec vault-0 -c sidecar ...</code> to enter unseal keys -> vault-0 sidecar unseals vault-0 and is ready</li>
<li>vault-1 starting</li>
<li>vault-0 sidecar detects that vault-1 is sealed and calls the vault-1 sidecar to transmit the unseal keys -> vault-1 sidecar unseals vault-1 and is ready</li>
<li>and so on...</li>
</ol>
|
<p>My Deployment and Pods have specific annotations:</p>
<p><strong>Deployment Annotations:</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.owners: '[email protected],[email protected],[email protected]'
</code></pre>
<p> </p>
<p><strong>Pod Annotations:</strong></p>
<pre><code>metadata:
  annotations:
    pod.owners: '[email protected],[email protected],[email protected]'
</code></pre>
<p>I cannot create labels for these because labels have a size limit of 63 characters and they do not allow special characters like ",". (<a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set</a>)</p>
<p>I have a kube-state-metrics pod which scrapes all the metrics from kube-api. However, in kube-state-metrics/metrics, I do not see the <code>deployment.owners</code> or <code>pod.owners</code> annotations anywhere. I see <code>kube_namespace_annotations</code> metric, but I do not see any annotations related to deployments or pods.</p>
<p>Are annotations information not captured by kube-state-metrics? How do I get those info?</p>
| <p>Annotation information is not collected by kube-state-metrics. You can check their documentation to see which metrics are collected for each resource. Here are the links to the documentation for <code>Deployment</code> and <code>Pod</code> metrics:</p>
<ol>
<li><a href="https://github.com/kubernetes/kube-state-metrics/blob/master/Documentation/deployment-metrics.md" rel="nofollow noreferrer">Deployment Metrics</a></li>
<li><a href="https://github.com/kubernetes/kube-state-metrics/blob/master/Documentation/pod-metrics.md" rel="nofollow noreferrer">Pod Metrics</a></li>
</ol>
|
<p>I am trying to get Kibana 6.2.4 in my GKE Kubernetes cluster running under www.mydomain.com/kibana without success. Though, I can run it perfectly fine with <code>kubectl proxy</code> and the default <code>SERVER_BASEPATH</code>.</p>
<p>Here is my Kibana deployment with the <code>SERVER_BASEPATH</code> removed.</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: logging
  labels:
    k8s-app: kibana-logging
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana-logging
  template:
    metadata:
      labels:
        k8s-app: kibana-logging
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      containers:
      - name: kibana-logging
        image: docker.elastic.co/kibana/kibana-oss:6.2.4
        resources:
          # need more cpu upon initialization, therefore burstable class
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        env:
        - name: ELASTICSEARCH_URL
          value: http://elasticsearch-logging:9200
        # - name: SERVER_BASEPATH
        #   value: /api/v1/namespaces/logging/services/kibana-logging/proxy
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
</code></pre>
<p>My nginx ingress definition (nginx-ingress-controller:0.19.0):</p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: logging-ingress
  namespace: logging
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^/kibana/(.*)$ /$1 break;
spec:
  tls:
  - hosts:
    - dev.mydomain.net
    secretName: mydomain-net-tls-secret
  rules:
  - host: dev.mydomain.net
    http:
      paths:
      - path: /kibana
        backend:
          serviceName: kibana-logging
          servicePort: 5601
</code></pre>
<p>This results in this nginx location</p>
<pre><code> location /kibana {
set $namespace "logging";
set $ingress_name "logging-ingress";
set $service_name "kibana-logging";
set $service_port "5601";
set $location_path "/kibana";
rewrite_by_lua_block {
balancer.rewrite()
}
log_by_lua_block {
balancer.log()
monitor.call()
}
port_in_redirect off;
set $proxy_upstream_name "logging-kibana-logging-5601";
# enforce ssl on server side
if ($redirect_to_https) {
return 308 https://$best_http_host$request_uri;
}
client_max_body_size "1m";
proxy_set_header Host $best_http_host;
# Pass the extracted client certificate to the backend
# Allow websocket connections
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Request-ID $req_id;
proxy_set_header X-Real-IP $the_real_ip;
proxy_set_header X-Forwarded-For $the_real_ip;
proxy_set_header X-Forwarded-Host $best_http_host;
proxy_set_header X-Forwarded-Port $pass_port;
proxy_set_header X-Forwarded-Proto $pass_access_scheme;
proxy_set_header X-Original-URI $request_uri;
proxy_set_header X-Scheme $pass_access_scheme;
# Pass the original X-Forwarded-For
proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
# mitigate HTTPoxy Vulnerability
# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
proxy_set_header Proxy "";
# Custom headers to proxied server
proxy_connect_timeout 5s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
proxy_buffering "off";
proxy_buffer_size "4k";
proxy_buffers 4 "4k";
proxy_request_buffering "on";
proxy_http_version 1.1;
proxy_cookie_domain off;
proxy_cookie_path off;
# In case of errors try the next upstream server before returning an error
proxy_next_upstream error timeout;
proxy_next_upstream_tries 3;
rewrite ^/kibana/(.*)$ /$1 break;
proxy_pass http://upstream_balancer;
proxy_redirect off;
}
</code></pre>
<p>However, going to <code>/kibana</code> results in a 404.</p>
<p>Stackdriver</p>
<pre><code>2018-10-30 08:30:48.000 MDT
GET /kibana 404 61ms - 9.0B
</code></pre>
<p>Web page</p>
<pre><code>{
statusCode: 404,
error: "Not Found",
message: "Not Found"
}
</code></pre>
<p>I feel as though I am missing some sort of setting with either <code>SERVER_BASEPATH</code> and/or my nginx ingress configuration.</p>
| <p>I believe what you want is the <code>nginx.ingress.kubernetes.io/rewrite-target: /</code> annotation in your ingress.</p>
<p>This way the <code>location {}</code> block will look something like this:</p>
<pre><code>location ~* ^/kibana\/?(?<baseuri>.*) {
...
rewrite (?i)/kibana/(.*) /$1 break;
rewrite (?i)/kibana$ / break;
...
}
</code></pre>
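<p>In other words, a sketch of the metadata for the ingress from the question, with the annotation swapped in for the <code>configuration-snippet</code> (everything else stays the same):</p>
<pre><code>metadata:
  name: logging-ingress
  namespace: logging
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
</code></pre>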
|
<p>Is it possible to visualize kubernetes topology and see it update on-the-fly as objects are added/deleted/linked?</p>
<p>I saw a video at <a href="https://www.youtube.com/watch?v=38SNQPhsGBk" rel="nofollow noreferrer">https://www.youtube.com/watch?v=38SNQPhsGBk</a> where service/pods show up as icons on a graph. For example, see</p>
<p><a href="https://i.stack.imgur.com/Ol4SQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ol4SQ.png" alt="enter image description here"></a></p>
<p>I am new to kubernetes and have installed minikube. How do I visualize my cluster's topology? Thank you.</p>
| <p>There are many options but the one I like most is <a href="https://www.weave.works/oss/scope/" rel="nofollow noreferrer">Weave Scope</a> where you get visualizations such as:</p>
<p><a href="https://i.stack.imgur.com/lTFKn.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lTFKn.jpg" alt="Weave scope screen shot" /></a><br />
<sub>(source: <a href="https://images.contentstack.io/v3/assets/blt300387d93dabf50e/blt94bf2945b508e588/5b8fb098fffba0957b6ea7e6/download" rel="nofollow noreferrer">contentstack.io</a>)</sub></p>
|
<p>I'm trying to follow this <a href="https://blog.inkubate.io/install-and-configure-a-multi-master-kubernetes-cluster-with-kubeadm/" rel="nofollow noreferrer">tutorial</a>. </p>
<ol>
<li>What would be the advantage of generating the certs yourself instead of depending on kubeadm? </li>
<li>if you create the certs yourself, does the auto-rotation happens after setting up the cluster from kubeadm?</li>
</ol>
<p>Thanks!</p>
| <p>I am creating all the certs by myself; the reasons behind that are:</p>
<ol>
<li><p>The Kubernetes cluster we use might not be updated every year, so we need certificates with a longer expiry. Our applications don't tolerate random Docker restarts, and we do not accept running the kubeadm phase command to regenerate the certificates and restart Docker. Hence we created all the certificates with a 5-year expiry and provided them to kubeadm, and it is working fine. Now we don't have to worry about certificate expiry every year.</p></li>
<li><p>No, kubeadm doesn't provide an auto-rotation facility for certificates; this is the reason we needed a longer certificate expiry in the first place.</p></li>
</ol>
<p>Hope this helps.</p>
|
<p>I'm migrating an application to Docker/Kubernetes. This application has 20+ well-known ports it needs to be accessed on. It needs to be accessed from outside the kubernetes cluster. For this, the application writes its public accessible IP to a database so the outside service knows how to access it. The IP is taken from the downward API (<code>status.hostIP</code>). </p>
<p>One solution is defining the well-known ports as (static) nodePorts in the service, but I don't want this, because it will limit the usability of the node: if another service has already started and happens to have taken one of the known ports, the application will not be able to start. Also, because Kubernetes opens the ports on all nodes in the cluster, I can only run 1 instance of the application per cluster.</p>
<p>Now I want to make the application aware of the port mappings done by the NodePort-service. How can this be done? As I don't see a hard link between the <code>Service</code> and the <code>Statefulset</code> object in Kubernetes.</p>
<p>Here is my (simplified) Kubernetes config: </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
  labels:
    app: my-app
spec:
  ports:
  - port: 6000
    targetPort: 6000
    protocol: TCP
    name: debug-port
  - port: 6789
    targetPort: 6789
    protocol: TCP
    name: traffic-port-1
  selector:
    app: my-app
  type: NodePort
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app-sf
spec:
  serviceName: my-app-svc
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-repo/myapp/my-app:latest
        imagePullPolicy: Always
        env:
        - name: K8S_ServiceAccountName
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        - name: K8S_ServerIP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: serverName
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        ports:
        - name: debug
          containerPort: 6000
        - name: traffic1
          containerPort: 6789
</code></pre>
| <p>This can be done with an initContainer. </p>
<p>You can define an initContainer that fetches the nodePort and saves it into a directory shared with the main container; the container can then read the nodePort from that directory later. A simple demo looks like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
  - name: my-app
    image: busybox
    command: ["sh", "-c", "cat /data/port; while true; do sleep 3600; done"]
    volumeMounts:
    - name: config-data
      mountPath: /data
  initContainers:
  - name: config-data
    image: tutum/curl
    command: ["sh", "-c", "TOKEN=`cat /var/run/secrets/kubernetes.io/serviceaccount/token`; curl -kD - -H \"Authorization: Bearer $TOKEN\" https://kubernetes.default:443/api/v1/namespaces/test/services/app 2>/dev/null | grep nodePort | awk '{print $2}' > /data/port"]
    volumeMounts:
    - name: config-data
      mountPath: /data
  volumes:
  - name: config-data
    emptyDir: {}
</code></pre>
|
<p>When doing a <code>helm reset</code> I get:</p>
<pre><code>helm reset
Error: Get https://10.96.0.1:443/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%!D(MISSING)TILLER: dial tcp 10.96.0.1:443: i/o timeout
</code></pre>
<p>Any suggestions?</p>
| <p>The <a href="https://github.com/helm/helm/issues/2464" rel="nofollow noreferrer">issue</a> on GitHub looks pretty close to your case.</p>
<p>The solution provided by <a href="https://github.com/fossxplorer" rel="nofollow noreferrer">fossxplorer </a> and improved by <a href="https://github.com/johnhamelink" rel="nofollow noreferrer">johnhamelink </a> is to set <code>automountServiceAccountToken</code> parameter to "<code>true</code>" in the <code>tiller</code> deployment:</p>
<pre><code>$ kubectl -n kube-system patch deployment tiller-deploy -p '{"spec": {"template": {"spec": {"automountServiceAccountToken": true}}}}'
</code></pre>
<p>If after that you have the following error:</p>
<blockquote>
<p>Error: configmaps is forbidden: User
"system:serviceaccount:kube-system:default" cannot list configmaps in
the namespace "kube-system"</p>
</blockquote>
<p>you should create <code>ClusterRoleBinding</code> for service account <code>kube-system:default</code></p>
<pre><code>$ kubectl --namespace=kube-system create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
</code></pre>
<p>I recommend to create separate service account and select it during <code>Helm</code> initialization:</p>
<pre><code>$ kubectl create serviceaccount --namespace kube-system tiller
$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller
</code></pre>
<p>If you want secure <code>Helm</code> installation please follow <a href="https://docs.helm.sh/using_helm/#best-practices-for-securing-helm-and-tiller" rel="nofollow noreferrer">the manual</a>.</p>
|
<p>I am trying to debug why pod security policy (psp) isn't applying. Running the following shows no resources found. Not sure if this is sufficient to confirm psp is enabled.</p>
<pre><code>$ kubectl get psp
No resources found.
</code></pre>
<p>Thanks. </p>
| <p><a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#enabling-pod-security-policies" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/policy/pod-security-policy/#enabling-pod-security-policies</a></p>
<blockquote>
<p>Pod security policy control is implemented as an optional (but
recommended) admission controller. PodSecurityPolicies are enforced by
enabling the admission controller, but doing so without authorizing
any policies will prevent any pods from being created in the cluster.</p>
<p>Since the pod security policy API (policy/v1beta1/podsecuritypolicy)
is enabled independently of the admission controller, for existing
clusters it is recommended that policies are added and authorized
before enabling the admission controller.</p>
</blockquote>
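<p>In other words, an empty <code>kubectl get psp</code> only tells you that no policies have been created yet; whether they are enforced depends on the <code>PodSecurityPolicy</code> admission plugin being listed in the kube-apiserver's <code>--enable-admission-plugins</code> flag. As a sketch, a deliberately permissive policy you could create (and authorize via RBAC) before turning the admission controller on might look like this:</p>
<pre><code>apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: permissive
spec:
  privileged: true
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
</code></pre>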
|
<p>Is setting a resource request in a container allocates a resource? Let us say I set a 1 CPU request, will that allocate a 1 CPU to that pod?</p>
<p>Or is resource request just like a threshold or an identifier if that pod with a resource request can be deployed to this instance based on available resources?</p>
| <p>There are separately resource <em>requests</em> and resource <em>limits</em>.</p>
<p>Resource <em>requests</em> are <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-requests-are-scheduled" rel="nofollow noreferrer">mostly used for scheduling</a>. If you have a node with 8 CPU cores and 32 GB of RAM, Kubernetes won't schedule pods to run on that node that, combined, request more than 8.0 CPU cores and 32 GB memory. (That includes any number of pods that have no resource requests at all.) The one exception is that, if you request some amount of CPU, that gets <a href="https://docs.docker.com/engine/reference/run/#cpu-share-constraint" rel="nofollow noreferrer">passed on to Docker</a> and if the system is overloaded, process time is allocated proportional to the requested CPU count. Neither cores nor memory are "reserved" in any meaningful way (you're not guaranteed to run on a single specific core if you request 1 CPU unit).</p>
<p>Resource <em>limits</em> <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-limits-are-run" rel="nofollow noreferrer">affect the pod's operation</a>. Kubernetes will limit a pod's CPU consumption based on its CPU limit, and if it goes over its memory limit, the Linux kernel will kill it (described in <a href="https://docs.docker.com/engine/reference/run/#user-memory-constraints" rel="nofollow noreferrer">extended discussion of the <code>docker run -m</code> option</a>). Remember that pods are scheduled based only on their resource requests, so it's possible to have a collection of pods that together don't request more than available memory but do wind up using too much memory, and in this case the kernel OOM-killer will wind up killing something off too.</p>
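<p>For reference, a sketch of how both values are declared on a container (the numbers are arbitrary):</p>
<pre><code>resources:
  requests:
    cpu: "1"          # used by the scheduler to decide where the pod fits
    memory: 256Mi
  limits:
    cpu: "2"          # CPU usage is throttled above this
    memory: 512Mi     # exceeding this gets the container OOM-killed
</code></pre>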
|
<p>I have a manifest as the following</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-redis
spec:
  selector:
    matchLabels:
      app: my-redis
  replicas: 1
  template:
    metadata:
      labels:
        app: my-redis
    spec:
      containers:
      - name: my-redis
        image: redis
        ports:
        - name: redisport1
          containerPort: 6379
          hostPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  labels:
    app: my-redis
spec:
  type: NodePort
  selector:
    name: my-redis
  ports:
  - name: redisport1
    port: 6379
    targetPort: 6379
    nodePort: 30036
    protocol: TCP
</code></pre>
<p>This is a sample that reproduces my problem. My intention here is to create a simple cluster that has a pod with a redis container in it, and it should be exposed to my localhost. Still, get services gives me the following output:</p>
<pre><code>redis-service NodePort 10.107.233.66 <none> 6379:30036/TCP 10s
</code></pre>
<p>If I swap NodePort with LoadBalancer, I get an external IP, but the port still doesn't work. </p>
<p>Can you help me identify why I'm failing to map the 6379 port to my localhost, please?</p>
<p>Thanks,</p>
| <p>In order to access your app through node port, you have to use this url
<code>http://{node ip}:{node port}</code>. </p>
<p>If you are using minikube, your minikube ip is the node ip. You can retrieve it using <code>minikube ip</code> command.</p>
<p>You can also use <code>minikube service redis-service --url</code> command to get the url to access your application through node port.</p>
|
<p>Is there a possibility to deploy a <strong>kubernetes Pod</strong> using <strong>golang</strong> code instead of the Linux command line with <strong>kubectl</strong>?</p>
| <p>Yes, you definitely can. Kubernetes is written in golang and has an official client for Go. You can create, delete, patch, list and control almost any Kubernetes resource with this go client.</p>
<p>Here is the official client repository: <a href="https://github.com/kubernetes/client-go" rel="noreferrer">kubernetes/client-go</a></p>
<p>I have created a sample repository <a href="https://github.com/hossainemruz/k8s-client-go-practice" rel="noreferrer">here</a> to show how to create a pod using <code>client-go</code>.</p>
<p>You have to do the following things to create a pod with this client. We are going to create a simple busybox pod; a combined sketch of these steps follows the list.</p>
<ol>
<li>Create a <code>configuration</code> using your kube-config file. Generally this is the <code>$HOME/.kube/config</code> file. See example <a href="https://github.com/hossainemruz/k8s-client-go-practice/blob/56eeb12f1ce54d3fe4d428e7546d70cd5bde3b50/main.go#L26" rel="noreferrer">here</a>.</li>
<li>Create a <code>clientset</code> using this <code>configuration</code>. See example <a href="https://github.com/hossainemruz/k8s-client-go-practice/blob/56eeb12f1ce54d3fe4d428e7546d70cd5bde3b50/main.go#L31" rel="noreferrer">here</a>.</li>
<li>Now, generate a pod definition that we want to deploy. See example <a href="https://github.com/hossainemruz/k8s-client-go-practice/blob/56eeb12f1ce54d3fe4d428e7546d70cd5bde3b50/main.go#L37" rel="noreferrer">here</a>.</li>
<li>Finally, create the pod in kubernetes cluster using the <code>clientset</code>. See example <a href="https://github.com/hossainemruz/k8s-client-go-practice/blob/56eeb12f1ce54d3fe4d428e7546d70cd5bde3b50/main.go#L40" rel="noreferrer">here</a>.</li>
</ol>
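<p>Putting those steps together, a minimal sketch could look roughly like the following. This is not a drop-in implementation: error handling, the namespace, the image and the <code>client-go</code> version in use (an older release whose <code>Create</code> call takes no context argument) are all assumptions.</p>
<pre><code>package main

import (
    "fmt"
    "path/filepath"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
    "k8s.io/client-go/util/homedir"
)

func main() {
    // 1. Build a client configuration from the local kubeconfig file.
    kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        panic(err)
    }

    // 2. Create a clientset from that configuration.
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }

    // 3. Define a simple busybox pod.
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "busybox", Namespace: "default"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "busybox",
                Image:   "busybox",
                Command: []string{"sleep", "3600"},
            }},
        },
    }

    // 4. Create the pod in the cluster through the clientset.
    created, err := clientset.CoreV1().Pods("default").Create(pod)
    if err != nil {
        panic(err)
    }
    fmt.Println("created pod", created.Name)
}
</code></pre>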
|
<p>I'm learning k8s and am struggling with writing Helm charts to generate config files for an application I'm using to ramp up on the ecosystem. I've hit an interesting issue where I need to generate configs that are common to all nodes, and some that are unique to each node. Any idea how I would do this?</p>
<p>From my values.xml file:</p>
<pre><code># number of nodes / replicas
nodeCount: 5
replicaCount: 3
</code></pre>
<p>The common config across all nodes called node_map.xml:</p>
<pre><code> <default>
<node>
<replica>
<host>wild-wallaby-0</host>
<port>8000</port>
</replica>
<replica>
<host>scary-rapids-1</host>
<port>8000</port>
</replica>
</node>
<node>
<replica>
<host>wild-wallaby-1</host>
<port>8000</port>
</replica>
<replica>
<host>scary-rapids-2</host>
<port>8000</port>
</replica>
</node>
<node>
<replica>
<host>wild-wallaby-2</host>
<port>8000</port>
</replica>
<replica>
<host>scary-rapids-0</host>
<port>8000</port>
</replica>
</node>
</default>
</code></pre>
<p>The above is easy enough to generate, and this config is ready be every pod that is a single container, but now each pod also needs to have an additional config file written letting that pod know what node and replica that instance is, called instance.xml. Note that the file doesn't need to be called instance.xml... I have the flexibility to point at and load any named file as long as I know what name to include in the start command.</p>
<p>For example...</p>
<p>Two instances would run on node wild-wallaby-0, node 0 replica 1, and node 0 replica 2. Each instance would need config files generated as such:</p>
<p>The first instance...</p>
<pre><code> <!-- node 0 replica 1 instance.xml -->
<id>
<node>0</node>
<replica>1</replica>
</id>
</code></pre>
<p>And the second instance ...</p>
<pre><code><!-- node 0 replica 2 instance.xml -->
<id>
<node>0</node>
<replica>2</replica>
</id>
</code></pre>
<p>This of course can follow some convention based on the number of nodes and replicas defined in my values file. While it's easy to generate the file that's common across all nodes, it's not clear to me how I can generate a custom config file for each node from a helm chart for the instance.xml files. </p>
<p>Any ideas or pointers? </p>
| <p>You can deploy this as a StatefulSet, and use an <code>initContainers:</code> block to create the config file before the main task of the pod really starts up.</p>
<p>The Kubernetes documentation has <a href="https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/" rel="nofollow noreferrer">a fairly detailed example of this</a> oriented around a replicated MySQL cluster, but with the same essential setup: there is a master node and some number of replicas, each needs to know its own ID, and the config files are different on the master and replicas.</p>
<p>It looks like the important detail you can work from is that the pod's <code>hostname</code> (as in the shell command) is <code>statefulsetname-123</code>, where the numbers are sequential and the individual pods are guaranteed to be started in order. The same detail is in <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-name-label" rel="nofollow noreferrer">a <code>statefulset.kubernetes.io/pod-name</code> label</a>, which you can retrieve via the <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">downward API</a>.</p>
<p>I might create a ConfigMap like:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: config-templates
data:
config.xml.tmpl: >-
<id>
<node>NODE</node>
<replica>REPLICA</replica>
</id>
</code></pre>
<p>And then my StatefulSet spec could look like, in part:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
...
spec:
...
template:
spec:
volumes:
- name: config
emptyDir: {}
- name: templates
configMap:
name: config-templates
initContainers:
- name: configfiles
image: ubuntu:16.04
command:
- sh
- -c
- |
POD_NUMBER=$(hostname | sed 's/.*-//')
NODE=$(( $POD_NUMBER / 5 ))
REPLICA=$(( $POD_NUMBER % 5 ))
sed -e "s/NODE/$NODE/g" -e "s/REPLICA/$REPLICA/g" \
/templates/config.xml.tmpl > /config/config.xml
volumeMounts:
- name: templates
mountPath: /templates
- name: config
mountPath: /config
containers:
- name: ...
...
volumeMounts:
- name: config
mountPath: /opt/myapp/etc/config
</code></pre>
<p>In that setup you ask Kubernetes to create an empty temporary volume (<code>config</code>) that's shared between the containers, and make the config map available as a volume too. The init container extracts the sequential pod ID, splits it into the two numbers, and writes the actual config file into the temporary volume. Then the main container mounts the shared config directory into wherever it expects its config files to be.</p>
|
<p>I have terraformed (Terraform version 11.10) a private Kubernetes cluster on the Google Kubernetes Engine (GKE) using the following <code>.tf</code>:</p>
<pre><code>module "nat" {
source = "GoogleCloudPlatform/nat-gateway/google"
region = "europe-west1"
network = "default"
subnetwork = "default"
}
resource "google_container_node_pool" "cluster_1_np" {
name = "cluster-1-np"
region = "europe-west1"
cluster = "${google_container_cluster.cluster_1.name}"
initial_node_count = 1
lifecycle {
ignore_changes = ["node_count"]
}
autoscaling {
min_node_count = 1
max_node_count = 50
}
management {
auto_repair = true
auto_upgrade = true
}
node_config {
oauth_scopes = [
"https://www.googleapis.com/auth/compute",
"https://www.googleapis.com/auth/devstorage.read_only",
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
"https://www.googleapis.com/auth/pubsub",
]
tags = ["${module.nat.routing_tag_regional}"]
}
}
resource "google_container_cluster" "cluster_1" {
provider = "google-beta"
name = "cluster-1"
region = "europe-west1"
remove_default_node_pool = true
private_cluster_config {
enable_private_endpoint = false
enable_private_nodes = true
master_ipv4_cidr_block = "172.16.0.0/28"
}
ip_allocation_policy {
create_subnetwork = true
}
lifecycle {
ignore_changes = ["initial_node_count", "network_policy", "node_config", "node_pool"]
}
node_pool {
name = "default-pool"
}
addons_config {
http_load_balancing {
disabled = false
}
horizontal_pod_autoscaling {
disabled = false
}
}
master_authorized_networks_config {
cidr_blocks = [
{
cidr_block = "<MY_OFFICE_CIDR>"
display_name = "Office"
},
]
}
}
</code></pre>
<p>Which works great, giving me a private cluster (and the NAT works, giving the nodes access to the internet), and machines in my office can run <code>kubectl</code> commands to interact with it no bother.</p>
<p>The problem I now face is integrating any web-based Continuous Integration (CI) or Continuous Deployment (CD). Private clusters are a new feature of the Google Cloud Platform (GCP), and the documentation is a bit lacking in this area.</p>
<p>My attempts thus far have completely failed, my networking knowledge is simply insufficient. I tried <a href="https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies" rel="noreferrer">this solution</a> but it seems the automation machine must be on the same network as the proxy.</p>
<p>I found <a href="https://stackoverflow.com/questions/51944817/google-cloud-build-deploy-to-gke-private-cluster">this similar SO question</a> (almost exactly the same but his is Cloud Build specific). In the comments to one of the answers of that question the OP mentions he found a workaround where he temporarily modifies the master authorized networks of the build machine but he has not stated exactly how he is carrying this out.</p>
<p>I attempted to replicate his workaround but the relevant <code>gcloud</code> commands seem to be able to <code>update</code> the list of networks, or completely remove all of them, not add/remove one at a time as his comment suggests.</p>
<p>Help from networking wizards would be much appreciated.</p>
| <p>This is a common problem while interfacing with CI systems like <a href="https://circleci.com/" rel="nofollow noreferrer">CircleCI</a> or <a href="https://travis-ci.org/" rel="nofollow noreferrer">Travis</a> that live in the public cloud. You can use this command to update your <code>master authorized networks</code></p>
<pre><code>gcloud container clusters update [CLUSTER_NAME] \
--enable-master-authorized-networks \
--master-authorized-networks=<MY_OFFICE_CIDR>,<NEW-CIDR-FROM-CI> \
--zone=<your-zone>
</code></pre>
<p>To remove the CI system network you can do something like this (just remove the network from the cli):</p>
<pre><code>gcloud container clusters update [CLUSTER_NAME] \
--enable-master-authorized-networks \
--master-authorized-networks=<MY_OFFICE_CIDR> \
--zone=<your-zone>
</code></pre>
<p>To completely remove all authorized networks (disable):</p>
<pre><code>gcloud container clusters update [CLUSTER_NAME] \
--no-enable-master-authorized-networks
</code></pre>
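<p>If the goal is to let a hosted CI job in, one hedged approach (assuming the job can discover its own egress IP, for example via a "what is my IP" service) is to wrap the deployment step like this:</p>
<pre><code># add the CI runner's IP for the duration of the job
RUNNER_CIDR="$(curl -s https://checkip.amazonaws.com)/32"
gcloud container clusters update [CLUSTER_NAME] \
  --enable-master-authorized-networks \
  --master-authorized-networks=<MY_OFFICE_CIDR>,${RUNNER_CIDR} \
  --region=europe-west1   # --region for a regional cluster, --zone otherwise

# ... run the kubectl deploy steps here ...

# put the list back the way it was
gcloud container clusters update [CLUSTER_NAME] \
  --enable-master-authorized-networks \
  --master-authorized-networks=<MY_OFFICE_CIDR> \
  --region=europe-west1
</code></pre>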
<p>You can also do it from the UI:</p>
<p><a href="https://i.stack.imgur.com/3pl9L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3pl9L.png" alt="authorized networks"></a></p>
<p>It's actually documented <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/authorized-networks" rel="nofollow noreferrer">here</a>.</p>
|
<p>I have built a flask app that I would like to add to a Kubernetes ingress. Currently, I have 2 questions I cannot seem to get my head around: </p>
<ol>
<li>In order for the flask app to be able to handle several requests, I figured I would add gunicorn. Do I need this, or can I mitigate this by using some kind of automatic horizontal scaling and the ingress routing layer handle it? I am new to Kubernetes, and perhaps the solution is simpler than what I am trying below.</li>
<li><p>With the presumption that I do need gunicorn, I have proceeded and added it to the flask docker. The problem I have with this is that I now get a 502 Bad Gateway Error nginx and the log of the pod have not printed any error. If I create a load balancer service instead of the clusterIP I use with the ingress, the flask app with unicorn works fine, just as the flask app does on the ingress <strong>without</strong> adding gunicorn. I have no idea why hence writing this question. The dockerfile installs all dependencies to run flask and finishes with: </p>
<pre><code>EXPOSE 8080
CMD ["gunicorn", "--config", "/flaskapp/gunicorn_config.py", "run:app"]
</code></pre>
<p>I have configured my ingress like this:</p>
<pre><code>apiVersion: v1
items:
- apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
ingress.bluemix.net/client-max-body-size: 128m
ingress.bluemix.net/rewrite-path: serviceName=flask-service rewrite=/;
spec:
rules:
- host: <my-domain>
http:
paths:
- backend:
serviceName: flask-service
servicePort: 8080
path: /flask/
tls:
- hosts:
- <my-domain>
secretName: <my-secret>
status:
loadBalancer:
ingress:
- ip: <ip>
</code></pre>
<p>The service looks like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: flask-service
labels:
app: flask-service
spec:
type: ClusterIP
ports:
- port: 8080
protocol: TCP
selector:
app: flask
</code></pre>
<p>The deployment is also very simple specifying the correct image and port.</p></li>
</ol>
<p>Given that I need gunicorn(or similar), how can I solve the 502 Bad Gateway Error I get?</p>
| <ol>
<li><p>IMO, you don't need gunicorn for scaling (it's overkill), since an HPA will already do the scaling of your single application instances for you, depending on CPU, memory or <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-custom-metrics" rel="nofollow noreferrer">custom metrics</a>.</p></li>
<li><p>The 502 errors look to me more like a gunicorn configuration issue (is there a limit on the workers? can you set the workers to just 1 to test? how is it scaling inside the container? what are the resource limits on the container?). It's hard to tell without looking at logs or the environment, but it could be that your gunicorn workers are thrashing in the container and thus returning an invalid response. You might want to try <a href="http://docs.gunicorn.org/en/stable/settings.html#loglevel" rel="nofollow noreferrer">--log-level debug</a> on the gunicorn command line; a small config sketch follows this list.</p></li>
</ol>
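<p>As a starting point for that debugging, a minimal <code>gunicorn_config.py</code> sketch could look like this (the values are assumptions for troubleshooting, not a known-good configuration for your app):</p>
<pre><code># gunicorn_config.py (sketch)
bind = "0.0.0.0:8080"   # listen on all interfaces so the Service can reach it
workers = 1             # start with a single worker while debugging
loglevel = "debug"      # verbose output to see what the worker is doing
</code></pre>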
<p>Hope it helps.</p>
|
<p>I am starting to experiment with Oauth2 authorisation for a Kubernetes cluster. </p>
<p>I have found a good Oauth2 identity provider using <a href="https://github.com/hortonworks/docker-cloudbreak-uaa" rel="nofollow noreferrer">UAA</a></p>
<p>My original intention was to deploy this into a Kubernetes cluster, and then allow it to provide authentication over that cluster. This would provide a single sign on solution hosted in the cloud, and enable that solution to manage Kubernetes access as well as access to the applications running on my cluster. </p>
<p>However, when thinking this solution through, there would seem to be some edge cases where this kind of configuration could be catastrophic. For instance if my cluster stops then I do not think I will be able to restart that cluster, as the Oauth2 provider would not be running, and thus I could not be authenticated to perform any restart operations.</p>
<ul>
<li>Has anybody else encountered this conundrum ? </li>
<li>Is this a real risk ? </li>
<li>Is there a 'standard' approach to circumvent this issue ? </li>
</ul>
<p>Many Thanks for taking the time to read this !</p>
| <p>Kubernetes supports multiple authentication methods (ref: <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/authentication/</a>).</p>
<p>You can enable several of them at once and log into the kubernetes cluster using any of them (if they are enabled and configured correctly).</p>
<p>According to kubernetes documentation: <strong>When multiple authenticator modules are enabled, the first module to successfully authenticate the request short-circuits evaluation. The API server does not guarantee the order authenticators run in.</strong></p>
<p>So, if you enable multiple authentication methods, I think you are fine. I am using a kubernetes cluster in which both certificate authentication and <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#webhook-token-authentication" rel="nofollow noreferrer">webhook token authentication</a> using <a href="https://github.com/appscode/guard" rel="nofollow noreferrer">guard</a> are enabled, and guard itself runs inside that same cluster.</p>
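<p>As an illustration (the flag values are placeholders, not a tested UAA setup), enabling client-certificate authentication alongside an OIDC provider is just a matter of passing both sets of flags to the API server:</p>
<pre><code>kube-apiserver \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --oidc-issuer-url=https://uaa.example.com/oauth/token \
  --oidc-client-id=kubernetes \
  --oidc-username-claim=user_name \
  ...
</code></pre>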
|
<p>I upgraded my mac to version 10.13.6 yesterday, restarted my mac laptop. Now, minikube won't start. Docker Community Edition 18.06.1-ce is running. The logs are below. Can anyone spot what is wrong. How do I debug this more?</p>
<pre><code>$ minikube version
minikube version: v0.30.0
$ minikube update-check
CurrentVersion: v0.30.0
LatestVersion: v0.30.0
$ minikube start --v=999 --logtostderr --vm-driver=hyperkit
W1101 12:02:25.674822 24809 root.go:146] Error reading config file at /Users/user/.minikube/config/config.json: open /Users/user/.minikube/config/config.json: no such file or directory
I1101 12:02:25.675028 24809 notify.go:121] Checking for updates...
I1101 12:02:25.971633 24809 start.go:99] Viper configuration:
Aliases:
map[string]string{}
Override:
map[string]interface {}{"v":"999"}
PFlags:
map[string]viper.FlagValue{"apiserver-names":viper.pflagValue{flag:(*pflag.Flag)(0xc42033a460)}, "network-plugin":viper.pflagValue{flag:(*pflag.Flag)(0xc42033a8c0)}, "registry-mirror":viper.pflagValue{flag:(*pflag.Flag)(0xc42033a6e0)}, "vm-driver":viper.pflagValue{flag:(*pflag.Flag)(0xc420367c20)}, "cpus":viper.pflagValue{flag:(*pflag.Flag)(0xc420367d60)}, "disk-size":viper.pflagValue{flag:(*pflag.Flag)(0xc420367e00)}, "feature-gates":viper.pflagValue{flag:(*pflag.Flag)(0xc42033a960)}, "hyperkit-vsock-ports":viper.pflagValue{flag:(*pflag.Flag)(0xc42033ac80)}, "disable-driver-mounts":viper.pflagValue{flag:(*pflag.Flag)(0xc420367ae0)}, "gpu":viper.pflagValue{flag:(*pflag.Flag)(0xc42033ad20)}, "nfs-share":viper.pflagValue{flag:(*pflag.Flag)(0xc42033a140)}, "uuid":viper.pflagValue{flag:(*pflag.Flag)(0xc42033ab40)}, "apiserver-name":viper.pflagValue{flag:(*pflag.Flag)(0xc42033a3c0)}, "cache-images":viper.pflagValue{flag:(*pflag.Flag)(0xc42033aa00)}, "xhyve-disk-driver":viper.pflagValue{flag:(*pflag.Flag)(0xc42033a0a0)}, "dns-domain":viper.pflagValue{flag:(*pflag.Flag)(0xc42033a5a0)}, "docker-opt":viper.pflagValue{flag:(*pflag.Flag)(0xc42033a320)}, "host-only-cidr":viper.pflagValue{flag:(*pflag.Flag)(0xc420367ea0)}, "memory":viper.pflagValue{flag:(*pflag.Flag)(0xc420367cc0)}, "nfs-shares-root":viper.pflagValue{flag:(*pflag.Flag)(0xc42033a1e0)}, "iso-url":viper.pflagValue{flag:(*pflag.Flag)(0xc420367b80)}, "bootstrapper":viper.pflagValue{flag:(*pflag.Flag)(0xc420366fa0)}, "container-runtime":viper.pflagValue{flag:(*pflag.Flag)(0xc42033a780)}, "docker-env":viper.pflagValue{flag:(*pflag.Flag)(0xc42033a280)}, "extra-config":viper.pflagValue{flag:(*pflag.Flag)(0xc42033aaa0)}, "insecure-registry":viper.pflagValue{flag:(*pflag.Flag)(0xc42033a640)}, "apiserver-ips":viper.pflagValue{flag:(*pflag.Flag)(0xc42033a500)}, "keep-context":viper.pflagValue{flag:(*pflag.Flag)(0xc420367900)}, "kvm-network":viper.pflagValue{flag:(*pflag.Flag)(0xc42033a000)}, "mount":viper.pflagValue{flag:(*pflag.Flag)(0xc4203679a0)}, "mount-string":viper.pflagValue{flag:(*pflag.Flag)(0xc420367a40)}, "profile":viper.pflagValue{flag:(*pflag.Flag)(0xc420366f00)}, "hyperkit-vpnkit-sock":viper.pflagValue{flag:(*pflag.Flag)(0xc42033abe0)}, "hyperv-virtual-switch":viper.pflagValue{flag:(*pflag.Flag)(0xc420367f40)}, "kubernetes-version":viper.pflagValue{flag:(*pflag.Flag)(0xc42033a820)}}
Env:
map[string]string{}
Key/Value Store:
map[string]interface {}{}
Config:
map[string]interface {}{}
Defaults:
map[string]interface {}{"log_dir":"", "reminderwaitperiodinhours":24, "wantnonedriverwarning":true, "showdriverdeprecationnotification":true, "showbootstrapperdeprecationnotification":true, "v":"0", "alsologtostderr":"false", "wantupdatenotification":true, "wantreporterror":false, "wantreporterrorprompt":true, "wantkubectldownloadmsg":true}
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
I1101 12:02:25.972050 24809 utils.go:100] retry loop 0
I1101 12:02:25.972089 24809 cluster.go:73] Skipping create...Using existing machine configuration
Found binary path at /usr/local/bin/docker-machine-driver-hyperkit
Launching plugin server for driver hyperkit
Plugin server listening at address 127.0.0.1:56021
() Calling .GetVersion
Using API Version 1
() Calling .SetConfigRaw
() Calling .GetMachineName
(minikube) Calling .GetState
I1101 12:02:26.011670 24809 cluster.go:82] Machine state: Stopped
(minikube) Calling .Start
(minikube) Using UUID fa403fab-dbb5-11e8-812b-8c859058d45f
(minikube) Generated MAC 86:5b:e6:33:fc:a1
(minikube) Starting with cmdline: loglevel=3 user=docker console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes base host=minikube
(minikube) Calling .GetConfigRaw
(minikube) Calling .DriverName
Waiting for SSH to be available...
Getting to WaitForSSH function...
(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x140f940] 0x140f910 [] 0s} 192.168.64.4 22 <nil> <nil>}
About to run SSH command:
exit 0
Error dialing TCP: dial tcp 192.168.64.4:22: connect: operation timed out
Error dialing TCP: dial tcp 192.168.64.4:22: connect: operation timed out
Error dialing TCP: dial tcp 192.168.64.4:22: connect: operation timed out
</code></pre>
<p>Update: I created an a json file containing {} at /Users/user/.minikube/config/config.json but it still hangs.</p>
<p>I printed the process tree and found that the child process</p>
<pre><code>/usr/local/bin/docker-machine-driver-hyperkit
</code></pre>
<p>is the one hanging.</p>
| <p>I have been fixing such issues by:</p>
<pre><code>minikube stop
rm -rf ~/.minikube
minikube start
</code></pre>
|
<p>Currently, there are two eks cluster a prod and dev. I am trying to access the dev cluster which exists in a different aws account and it gives me an error "You must be logged in to the server"</p>
<p>When I try to get the kubectl version I am getting an error. Please point my mistake. This happens only with the dev cluster. Please also let me know the steps to correct if I am wrong anywhere.</p>
<pre><code>AWS_PROFILE=eks_admin_dev kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-07-26T20:40:11Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
error: You must be logged in to the server (the server has asked for the client to provide credentials)
AWS_PROFILE=eks_admin_dev kubectl get pods
error: You must be logged in to the server (Unauthorized)
</code></pre>
<p>I have created access key and secret access key for my dev user( which are admin credentials). I created two profiles dev and eks_admin_dev.
I understand that the source_profile part is telling it to use the dev profile to do an sts:AssumeRole for the eks-admin role.</p>
<pre><code>$ aws --version
aws-cli/1.16.45 Python/2.7.12 Linux/4.4.0-1066-aws botocore/1.12.35
$ kubectl config current-context
dev
$ cat ~/.aws/config
[default] ---> prod account
region = us-east-1
[profile eks_admin_dev] ---> dev account
role_arn = arn:aws:iam::xxxxxxxx:role/eks-admin
source_profile = dev
region = us-east
[profile dev] ---> dev account
region = us-east-1
</code></pre>
<p>my credentials:</p>
<pre><code>$ cat ~/.aws/credentials
[old]
aws_secret_access_key = xxxxxxxxxxxxxx
aws_access_key_id = xxxxxxxxx
[default]
aws_access_key_id = xxxxxx
aws_secret_access_key = xxx
[dev]
aws_secret_access_key = xxx
aws_access_key_id = xxx
[eks_admin_dev]
aws_access_key_id = xx
aws_secret_access_key = xx
</code></pre>
<p><code>cat ~/.kube/kubeconfig</code>, I tried specifying the role here, same error.</p>
<pre><code>users:
- name: aws
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
args:
- token
- -i
- dev-0
command: aws-iam-authenticator
env:
- name: AWS_PROFILE
value: eks_admin_dev
</code></pre>
| <p>This works for me using both the <code>AWS_PROFILE</code> env on the command line and also setting the env in the <code>~/.kube/config</code> file.</p>
<p>The only thing I can think may be happening is that you already have the AWS credentials for your prod account predefined in the bash environment (those take precedence over what's in <code>~/.aws/credentials</code>). You can check with this:</p>
<pre><code>$ env | grep AWS
AWS_SECRET_ACCESS_KEY=xxxxxxxx
AWS_ACCESS_KEY_ID=xxxxxxxxx
</code></pre>
<p>If that's the case you can unset them or remove them from whatever init file you may be sourcing on your shell.</p>
<pre><code>$ unset AWS_SECRET_ACCESS_KEY
$ unset AWS_ACCESS_KEY_ID
</code></pre>
|
<p>I am running a 4 worker node cluster in GCP.</p>
<p>And the current status of my nodes is :</p>
<p><strong>Node A</strong></p>
<pre><code>Non-terminated Pods: (4 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system calico-node-bfpbd 250m (6%) 500m (12%) 100Mi (0%) 700Mi (5%)
kube-system kube-proxy-7br2g 50m (1%) 100m (2%) 64Mi (0%) 256Mi (1%)
kube-system node-exporter-7kvcm 10m (0%) 20m (0%) 10Mi (0%) 50Mi (0%)
kube-system tiller-deploy-56c4cf647b-5vsvb 0 (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 310m (7%) 620m (15%)
memory 174Mi (1%) 1006Mi (7%)
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Thu, 01 Nov 2018 21:31:08 -0400 Thu, 01 Nov 2018 21:31:08 -0400 RouteCreated RouteController created a route
OutOfDisk False Thu, 01 Nov 2018 21:48:50 -0400 Thu, 01 Nov 2018 21:30:46 -0400 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 01 Nov 2018 21:48:50 -0400 Thu, 01 Nov 2018 21:30:46 -0400 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 01 Nov 2018 21:48:50 -0400 Thu, 01 Nov 2018 21:30:46 -0400 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 01 Nov 2018 21:48:50 -0400 Thu, 01 Nov 2018 21:30:46 -0400 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 01 Nov 2018 21:48:50 -0400 Thu, 01 Nov 2018 21:31:06 -0400 KubeletReady kubelet is posting ready status
</code></pre>
<p><strong>Node B</strong></p>
<pre><code>Non-terminated Pods: (4 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
jenkins-test jenkins-master 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system calico-node-qglbv 250m (6%) 500m (12%) 100Mi (0%) 700Mi (5%)
kube-system kube-proxy-g74ff 50m (1%) 100m (2%) 64Mi (0%) 256Mi (1%)
kube-system node-exporter-bvczb 10m (0%) 20m (0%) 10Mi (0%) 50Mi (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 310m (7%) 620m (15%)
memory 174Mi (1%) 1006Mi (7%)
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Thu, 01 Nov 2018 21:31:06 -0400 Thu, 01 Nov 2018 21:31:06 -0400 RouteCreated RouteController created a route
OutOfDisk False Thu, 01 Nov 2018 21:48:49 -0400 Thu, 01 Nov 2018 21:30:46 -0400 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 01 Nov 2018 21:48:49 -0400 Thu, 01 Nov 2018 21:30:46 -0400 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 01 Nov 2018 21:48:49 -0400 Thu, 01 Nov 2018 21:30:46 -0400 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 01 Nov 2018 21:48:49 -0400 Thu, 01 Nov 2018 21:30:46 -0400 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 01 Nov 2018 21:48:49 -0400 Thu, 01 Nov 2018 21:31:06 -0400 KubeletReady kubelet is posting ready status
</code></pre>
<p><strong>Node C</strong></p>
<pre><code>Non-terminated Pods: (3 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system calico-node-w9px6 250m (6%) 500m (12%) 100Mi (0%) 700Mi (5%)
kube-system kube-proxy-4r2ck 50m (1%) 100m (2%) 64Mi (0%) 256Mi (1%)
kube-system node-exporter-r92xs 10m (0%) 20m (0%) 10Mi (0%) 50Mi (0%)
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Thu, 01 Nov 2018 21:31:01 -0400 Thu, 01 Nov 2018 21:31:01 -0400 RouteCreated RouteController created a route
OutOfDisk False Thu, 01 Nov 2018 21:48:42 -0400 Thu, 01 Nov 2018 21:30:49 -0400 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 01 Nov 2018 21:48:42 -0400 Thu, 01 Nov 2018 21:30:49 -0400 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 01 Nov 2018 21:48:42 -0400 Thu, 01 Nov 2018 21:30:49 -0400 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 01 Nov 2018 21:48:42 -0400 Thu, 01 Nov 2018 21:30:49 -0400 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 01 Nov 2018 21:48:42 -0400 Thu, 01 Nov 2018 21:31:09 -0400 KubeletReady kubelet is posting ready status
</code></pre>
<p><strong>Node D</strong></p>
<pre><code>Non-terminated Pods: (9 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
kube-system addons-kubernetes-dashboard-8656b6fc5f-68wzm 50m (1%) 200m (5%) 50Mi (0%) 256Mi (1%)
kube-system addons-nginx-ingress-controller-77579b6d64-sqzl7 100m (2%) 300m (7%) 100Mi (0%) 512Mi (3%)
kube-system addons-nginx-ingress-nginx-ingress-k8s-backend-5d6d4598ff-nfzt4 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system calico-node-h2t5b 250m (6%) 500m (12%) 100Mi (0%) 700Mi (5%)
kube-system coredns-5c554d9f6f-fnwqq 100m (2%) 200m (5%) 15Mi (0%) 80Mi (0%)
kube-system kube-proxy-bfhjr 50m (1%) 100m (2%) 64Mi (0%) 256Mi (1%)
kube-system metrics-server-7f4cbf557d-985sj 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system node-exporter-frdrd 10m (0%) 20m (0%) 10Mi (0%) 50Mi (0%)
kube-system vpn-shoot-7bcd5f4bb-88sc7 100m (2%) 300m (7%) 128Mi (0%) 512Mi (3%)
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Thu, 01 Nov 2018 21:30:54 -0400 Thu, 01 Nov 2018 21:30:54 -0400 RouteCreated RouteController created a route
OutOfDisk False Thu, 01 Nov 2018 21:48:45 -0400 Thu, 01 Nov 2018 21:30:42 -0400 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Thu, 01 Nov 2018 21:48:45 -0400 Thu, 01 Nov 2018 21:30:42 -0400 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Thu, 01 Nov 2018 21:48:45 -0400 Thu, 01 Nov 2018 21:30:42 -0400 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Thu, 01 Nov 2018 21:48:45 -0400 Thu, 01 Nov 2018 21:30:42 -0400 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Thu, 01 Nov 2018 21:48:45 -0400 Thu, 01 Nov 2018 21:31:02 -0400 KubeletReady kubelet is posting ready status
</code></pre>
<p>As all the results show, my nodes are all healthy and have ample resources with 4CPUs each and 16GB memory each.</p>
<p>Now when I try to deploy my second statefulSet in my namespaces, the Pod remains in <code>Pending</code> state. The describe shows the below message:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 2m (x123 over 7m) default-scheduler pod has unbound PersistentVolumeClaims (repeated 2 times)
Normal NotTriggerScaleUp 12s (x26 over 6m) cluster-autoscaler pod didn't trigger scale-up (it wouldn't fit if a new node is added)
</code></pre>
<p>I also drained 2 of the nodes, labeled the two nodes and attached a <code>nodeSelector</code> to my statefulset to only deploy on those 2 almost empty nodes but the result is the same.</p>
<p>I'm not sure why my pod is trying to <code>scale-up</code>. That's not the intention. Any help will be greatly appreciated. </p>
<pre><code>kubectl get po -n jenkins-test -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
jenkins-agent 0/1 Pending 0 12m <none> <none> <none>
jenkins-master 1/1 Running 0 22m 100.96.1.2 shoot--t--csp-worker-hqh6g-z1-6df8f7dc66-bcj6t <none>
</code></pre>
| <p><code>pod has unbound PersistentVolumeClaims</code>: I think that message is the key to this problem, since the pod cannot be scheduled until its PersistentVolumeClaim is bound to a volume. As for the scale-up message, that comes from the cluster-autoscaler creating nodes ("scale from 0 to any instance count you defined"); it is not your pod trying to scale itself up.</p>
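<p>A reasonable next step (the namespace is taken from your output, the claim name is a placeholder) would be to look at why the claim is not binding:</p>
<pre><code>kubectl get pvc -n jenkins-test
kubectl describe pvc <claim-name> -n jenkins-test
kubectl get storageclass
</code></pre>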
|
<p>So basically I am starting with Kubernetes and wanted to try some things out. At this point I want to deploy a Webserver, a Database, a NodeJs Server and so on... And now, how do I decide how many instances of each of these services I need across my servers?</p>
| <p>This is a question with a complex answer depending on your particular application behavior and resource utilization. Put simply, the "short answer" is going to be: "It depends". It depends on these main factors:</p>
<ul>
<li>Application Resource Utilization
<ul>
<li>How much RAM, CPU, Disk, sockets,
etc... does your application generally use on: Average? Max? Min?</li>
<li>What bottlenecks or resource limits does the application bump into first?</li>
<li>What routines in the application might cause higher than normal utilization? (This is where a lot of complexity comes in... applications are all different and perform many functions in response to inputs such as client requests. Not every function has the same behavior w.r.t. resource utilization.)</li>
</ul></li>
<li>High Availability / Failover
<ul>
<li>One of the reasons you chose Kubernetes was probably for the ease of scaling an application and making it highly available with no single point of failure.</li>
<li>This comes down to: How available do you need your application to be?</li>
<li>On the Cluster / Server level: How many nodes can go down or be unhealthy and still maintain enough working nodes to handle requests?</li>
<li>On the Application / Container level: How many <code>Pod</code>s can go down and still handle the requests or intended operation?</li>
<li>What level of service degradation is acceptable?</li>
</ul></li>
<li>How do the separate applications interact & behave together?
<ul>
<li>Another really complicated issue that is hard to determine without observing their behavior together</li>
<li>You can try to do some analysis on metrics like "Requests Per Second" vs. resource utilization & spikes. However, this can be hard to simplify down to a single number or constant / linear cause / effect relationship.</li>
<li>Do some requests or input cause a "fan out" or amplification of load on sub-components?</li>
<li>For example:
<ul>
<li>Are there some SQL queries that result in higher DB load than others?</li>
<li>Are there some operations that can cause higher resource utilization in <code>Pod</code>s backing other <code>Service</code>s?</li>
<li>How do the systems behave together in a "max load" situation?</li>
</ul></li>
</ul></li>
</ul>
<p>This kind of thing is very hard to answer without doing load testing. Not many companies I've seen even do this at all! Sadly enough, any problems like this usually end up happening in production and having to be dealt with after the fact. It ends up being DevOps, Ops, or the on-call engineers having to deal with it, which isn't the greatest scenario because usually that person does not have full knowledge of the application's code in order to diagnose and introspect it fully.</p>
|
<p>I have created an instance of Azure Kubernetes Service (AKS) and have discovered that apart from the resource group I created the AKS instance in, two other resource groups were created for me. Here is what my resource groups and their contents looks like:</p>
<ul>
<li><code>MyResourceGroup-Production</code>
<ul>
<li><code>MyAKSInstance</code> - Azure Kubernetes Service (AKS)</li>
</ul></li>
<li><code>DefaultResourceGroup-WEU</code>
<ul>
<li><code>ContainerInsights(MyAKSInstance)</code> - Solution</li>
<li><code>MyAKSInstance</code> - Log Analytics</li>
</ul></li>
<li><code>MC_MyResourceGroup-Production_MyAKSInstance_westeurope</code>
<ul>
<li><code>agentpool-availabilitySet-36219400</code> - Availability set</li>
<li><code>aks-agentpool-36219400-0</code> - Virtual machine</li>
<li><code>aks-agentpool-36219400-0_OsDisk_1_09469b24b1ff4526bcfd5d00840cfbbc</code> - Disk</li>
<li><code>aks-agentpool-36219400-nic-0</code> - Network interface</li>
<li><code>aks-agentpool-36219400-nsg</code> - Network security group</li>
<li><code>aks-agentpool-36219400-routetable</code> - Route table</li>
<li><code>aks-vnet-36219400</code> - Virtual network</li>
</ul></li>
</ul>
<p>I have a few questions about these two separate resource groups:</p>
<ol>
<li>Can I rename the resource groups or control how they are named from my ARM template in the first place at the time of creation?</li>
<li>Can I move the contents of <code>DefaultResourceGroup-WEU</code> into <code>MyResourceGroup-Production</code>?</li>
<li>Can I safely edit their settings?</li>
<li>The <code>DefaultResourceGroup-WEU</code> seems to be created if you enable Log Analytics. Can I use this instance for accepting logs from other instances?</li>
</ol>
<h2>UPDATE</h2>
<p>I managed to pre-create a log analytics resource and use that for Kubernetes. However, there is a third resource that I'm having trouble moving into my resource group:</p>
<pre><code> {
"type": "Microsoft.Resources/deployments",
"name": "SolutionDeployment",
"apiVersion": "2017-05-10",
"resourceGroup": "[split(parameters('omsWorkspaceId'),'/')[4]]",
"subscriptionId": "[split(parameters('omsWorkspaceId'),'/')[2]]",
"properties": {
"mode": "Incremental",
"template": {
"$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
"contentVersion": "1.0.0.0",
"parameters": {},
"variables": {},
"resources": [
{
"apiVersion": "2015-11-01-preview",
"type": "Microsoft.OperationsManagement/solutions",
"location": "[parameters('workspaceRegion')]",
"name": "[concat('ContainerInsights', '(', split(parameters('omsWorkspaceId'),'/')[8], ')')]",
"properties": {
"workspaceResourceId": "[parameters('omsWorkspaceId')]"
},
"plan": {
"name": "[concat('ContainerInsights', '(', split(parameters('omsWorkspaceId'),'/')[8], ')')]",
"product": "[concat('OMSGallery/', 'ContainerInsights')]",
"promotionCode": "",
"publisher": "Microsoft"
}
}
]
}
},
"dependsOn": []
}
</code></pre>
| <ol>
<li>No, you cant.</li>
<li>Yes, but I'd advice against it. I'd advice remove the health metrics from AKS, delete that resource group, create OMS in the same resource group with AKS (or wherever you need your OMS to be) and then use that OMS. it will just create container solution for you in the same resource group where oms is in.</li>
<li>To extent, if you break anything AKS wont fix it</li>
<li>Yes you can, but you better rework it like I mention in point 2.</li>
</ol>
|
<p>I have kubernetes running on version 1.5 with two nodes and one master nodes. I would like to deploy fluentd as a daemon set onto all nodes, but the master node (the master node spams warning messages as it can't find logs). How can I avoid deploying to the master node?</p>
| <p>So to make a pod not schedule on the master node you need to add the following to the DaemonSet's pod spec:</p>
<pre><code>nodeSelector:
kubernetes.io/role: node
</code></pre>
<p>This will make the pod schedule only on worker nodes. The above example shows the default label for a node in a kops-provisioned cluster; please verify the key and value if you have provisioned the cluster with a different provider.</p>
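<p>For completeness, here is a sketch of where that selector sits in a fluentd DaemonSet manifest (the image and names are placeholders):</p>
<pre><code>apiVersion: extensions/v1beta1   # DaemonSet API group on a 1.5-era cluster
kind: DaemonSet
metadata:
  name: fluentd
spec:
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      nodeSelector:
        kubernetes.io/role: node   # skip the master
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.2-debian
</code></pre>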
|
<p>I've defined this method, it should create a Kubernetes job for me.</p>
<pre><code>def make_job():
job = client.V1Job()
job.metadata = client.V1ObjectMeta()
job.metadata.name = "process"
job.spec = client.V1JobSpec()
job.spec.template = client.V1PodTemplate()
job.spec.template.spec = client.V1PodTemplateSpec()
job.spec.template.spec.restart_policy = "Never"
job.spec.template.spec.containers = [
make_container()
]
return job
</code></pre>
<p>However, it returns an error on this line.</p>
<pre><code>job.spec = client.V1JobSpec()
</code></pre>
<p>Saying</p>
<pre><code>ValueError: Invalid value for `template`, must not be `None`
</code></pre>
<p>I wonder if I am doing something wrong here, and if so, what am I doing wrong here?</p>
<p>EDIT:</p>
<p>I've solved the error with this change</p>
<pre><code>job.spec = client.V1JobSpec(template=client.V1PodTemplate)
</code></pre>
| <p>As you have already figured out, it is not possible to construct a <code>Job.Spec</code> without injecting its <code>template</code>, something that is set in stone in the Job <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">documentation</a>.</p>
<blockquote>
<p>The .spec.template is the only required field of the .spec. </p>
<p>The .spec.template is a pod template. It has exactly the same schema as a
pod, except it is nested and does not have an apiVersion or kind.</p>
</blockquote>
<p>Taking a look at the Python Kubernetes client's implementation of <code>V1JobSpec</code>, it's possible to verify that the <code>template</code> property is marked as <a href="https://github.com/kubernetes-client/python/blob/e057f273069de445a2d5a250ac5fe37d79671f3b/kubernetes/docs/V1JobSpec.md" rel="nofollow noreferrer">non-optional</a>, contrary to the other properties.</p>
<p>So constructing the <code>template</code> beforehand and injecting it while constructing the <code>JobSpec</code> solves the issue:</p>
<pre><code># build the pod template first
template = client.V1PodTemplateSpec()
template.spec = client.V1PodSpec(
    restart_policy="Never",
    containers=[make_container()],
)
# then inject it while constructing the JobSpec
job.spec = client.V1JobSpec(template=template)
</code></pre>
<p>Following this reasoning, it does seem weird that the same does not apply at a higher scope to the <code>Spec</code> property of the <code>Job</code>, since it's a mandatory section of the definition of a Job object.</p>
<p>But taking a look once again at the <a href="https://github.com/kubernetes-client/python/blob/e057f273069de445a2d5a250ac5fe37d79671f3b/kubernetes/docs/V1Job.md" rel="nofollow noreferrer">documentation</a> of the client, it is possible to observe that the <code>Spec</code> property is marked as optional, which explains why we are able to create a <code>Job</code> instance without having to inject the <code>Spec</code>.</p>
|
<p>I try to get some basic routing between 2 apps deployed on a Google Cloud Kubernetes cluster with an lb ratio and I have this config:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: kubeapp
labels:
app: kubeapp
spec:
ports:
- port: 8080
name: http
selector:
app: kubeapp
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kubeapp-v1
spec:
replicas: 1
template:
metadata:
labels:
app: kubeapp
version: kubeapp-v1
spec:
containers:
- name: kubeapp-v1
image: .......
ports:
- name: kubeapp-v1
containerPort: 8080
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kubeapp-v2
spec:
replicas: 1
template:
metadata:
labels:
app: kubeapp
version: kubeapp-v2
spec:
containers:
- name: kubeapp-v2
image: .......
ports:
- name: kubeapp-v2
containerPort: 8080
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: kubeapp-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: kubeapp
spec:
hosts:
- "*"
gateways:
- kubeapp-gateway
http:
- route:
- destination:
host: kubeapp
port: 8080
</code></pre>
<p>which works perfectly and traffic goes 50/50 but when I try to add some basic rules for lb like:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: kubeapp
spec:
hosts:
- "*"
gateways:
- kubeapp-gateway
http:
- route:
- destination:
host: kubeapp
port:
number: 8080
subset: kubeapp-v1
weight: 90
- destination:
host: kubeapp
port:
number: 8080
subset: kubeapp-v2
weight: 10
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: kubeapp
spec:
host: kubeapp
subsets:
- name: kubeapp-v1
labels:
version: kubeapp-v1
- name: kubeapp-v2
labels:
version: kubeapp-v2
</code></pre>
<p>I got <code>upstream connect error or disconnect/reset before headers</code></p>
<p>I've tried to install Istio in all 3 modes and deploy it on different cluster nodes size (I saw that sometimes Istio has some bugs on some specific cluster size) and without success.</p>
| <p>A very common reason for this kind of problem is that your DestinationRule is causing an mTLS conflict. The issue is documented <a href="https://istio.io/help/ops/traffic-management/troubleshooting/#503-errors-after-setting-destination-rule" rel="nofollow noreferrer">here</a>.</p>
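<p>If that is what is happening (i.e. the mesh has mutual TLS enabled), the fix described on the linked page is to add a matching <code>trafficPolicy</code> to the DestinationRule, along these lines (a sketch, assuming Istio-managed mTLS):</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: kubeapp
spec:
  host: kubeapp
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL   # use Istio's mutual TLS certificates for this host
  subsets:
  - name: kubeapp-v1
    labels:
      version: kubeapp-v1
  - name: kubeapp-v2
    labels:
      version: kubeapp-v2
</code></pre>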
|
<p>I have configured EFK stack with Fluent-bit on my Kubernetes cluster. I can see the logs in Kibana.</p>
<p>I also have deployed nginx pod, I can see the logs of this nginx pod also in Kibana. But all the log data are sent to a single field "log" as shown below.</p>
<p><a href="https://i.stack.imgur.com/llL9t.png" rel="noreferrer"><img src="https://i.stack.imgur.com/llL9t.png" alt="enter image description here"></a></p>
<p>How can I extract each field into a separate field. There is a solution for fluentd already in this question. <a href="https://stackoverflow.com/questions/42270621/kibana-how-to-extract-fields-from-existing-kubernetes-logs">Kibana - How to extract fields from existing Kubernetes logs</a></p>
<p>But how can I achieve the same with fluent-bit?</p>
<p>I have tried the below by adding one more FILTER section under the default FILTER section for Kubernetes, but it didn't work.</p>
<blockquote>
<pre><code>[FILTER]
Name parser
Match kube.*
Key_name log
Parser nginx
</code></pre>
</blockquote>
<p>From this (<a href="https://github.com/fluent/fluent-bit/issues/723" rel="noreferrer">https://github.com/fluent/fluent-bit/issues/723</a>), I can see there is no grok support for fluent-bit.</p>
| <p>In our official documentation for the Kubernetes filter we have an example of how to make your Pod suggest a parser for your data based on an annotation:</p>
<p><a href="https://docs.fluentbit.io/manual/filter/kubernetes" rel="noreferrer">https://docs.fluentbit.io/manual/filter/kubernetes</a></p>
|
<p>I am a Kubernetes newbie. I am running out ideas in solving the Pod status being stuck at <code>ContainerCreating</code>. I am working on a sample application from AWS (<a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#eks-guestbook" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#eks-guestbook</a>), the sample is very similar to the official sample (<a href="https://kubernetes.io/docs/tutorials/stateless-application/guestbook/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/stateless-application/guestbook/</a>). </p>
<p>Many thanks for anyone giving guidance in finding the root causes:</p>
<p>Why do I get conn refused error, what does port 50051 do? Thanks.</p>
<pre><code>$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default guestbook-8k9pp 0/1 ContainerCreating 0 15h
default guestbook-b2n49 0/1 ContainerCreating 0 15h
default guestbook-gtjnj 0/1 ContainerCreating 0 15h
default redis-master-rhwnt 0/1 ContainerCreating 0 15h
default redis-slave-b284x 0/1 ContainerCreating 0 15h
default redis-slave-vnlj4 0/1 ContainerCreating 0 15h
kube-system aws-node-jkfg8 0/1 CrashLoopBackOff 273 1d
kube-system aws-node-lpvn9 0/1 CrashLoopBackOff 273 1d
kube-system aws-node-nmwzn 0/1 Error 274 1d
kube-system kube-dns-64b69465b4-ftlm6 0/3 ContainerCreating 0 4d
kube-system kube-proxy-cxdj7 1/1 Running 0 1d
kube-system kube-proxy-g2js4 1/1 Running 0 1d
kube-system kube-proxy-rhq6v 1/1 Running 0 1d
$ kubectl describe pod guestbook-8k9pp
Name: guestbook-8k9pp
Namespace: default
Node: ip-172-31-91-242.ec2.internal/172.31.91.242
Start Time: Wed, 31 Oct 2018 04:59:11 -0800
Labels: app=guestbook
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicationController/guestbook
Containers:
guestbook:
Container ID:
Image: k8s.gcr.io/guestbook:v3
Image ID:
Port: 3000/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-jb75l (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-jb75l:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-jb75l
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SandboxChanged 11m (x19561 over 13h) kubelet, ip-172-31-91-242.ec2.internal Pod sandbox changed, it will be killed and re-created.
Warning FailedCreatePodSandBox 74s (x19368 over 13h) kubelet, ip-172-31-91-242.ec2.internal Failed create pod sandbox: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "guestbook-8k9pp_default" network: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: **desc = "transport: Error while dialing dial tcp 127.0.0.1:50051: connect: connection refused"**
</code></pre>
| <p>The Kubernetes cluster that I created is on AWS EKS. The EKS cluster was created manually by me through the EKS console.</p>
<p>I have created a second cluster using the official VPC sample for EKS clusters (<a href="https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-08-30/amazon-eks-vpc-sample.yaml" rel="nofollow noreferrer">https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-08-30/amazon-eks-vpc-sample.yaml</a>), and it seems to be working now.</p>
<p>So the problem should be in the VPC configuration. Once I figure out what actually went wrong, I will post the info here, thank you.</p>
|
<p>The JSON output returned to me after running this command </p>
<pre><code>kubectl get pods -o json | jq '.items[].spec.containers[].env'
</code></pre>
<p>on my kuberntes cluster is this</p>
<pre><code>[
{
"name": "USER_NAME",
"value": "USER_NAME_VALUE_A"
},
{
"name": "USER_ADDRESS",
"value": "USER_ADDRESS_VALUE_A"
}
]
[
{
"name": "USER_NAME",
"value": "USER_NAME_VALUE_B"
},
{
"name": "USER_ADDRESS",
"value": "USER_ADDRESS_VALUE_B"
}
]
</code></pre>
<p>I'd like to create a unified array/dictionary (Using <strong>Bash</strong> script) which looks like the example below and how can I get the value of each key?</p>
<pre><code>[
{
"USER_NAME": "USER_NAME_VALUE_A",
"USER_ADDRESS": "USER_ADDRESS_VALUE_A"
},
{
"USER_NAME": "USER_NAME_VALUE_B",
"USER_ADDRESS": "USER_ADDRESS_VALUE_B"
}
]
</code></pre>
| <p>Use the jsonpath output format:</p>
<pre><code>C02W84XMHTD5:~ iahmad$ kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}'
coredns-c4cffd6dc-nsd2k
etcd-minikube
kube-addon-manager-minikube
kube-apiserver-minikube
kube-controller-manager-minikube
kube-dns-86f4d74b45-d5njm
kube-proxy-pg89s
kube-scheduler-minikube
kubernetes-dashboard-6f4cfc5d87-b7n7v
storage-provisioner
tiller-deploy-778f674bf5-vt4mj
</code></pre>
<p><a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubectl/jsonpath/</a></p>
<p>It can output key/value pairs as well:</p>
<pre><code>C02W84XMHTD5:~ iahmad$ kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.startTime}{"\n"}{end}'
coredns-c4cffd6dc-nsd2k 2018-10-16T21:44:19Z
etcd-minikube 2018-10-29T17:30:56Z
kube-addon-manager-minikube 2018-10-29T17:30:56Z
kube-apiserver-minikube 2018-10-29T17:30:56Z
kube-controller-manager-minikube 2018-10-29T17:30:56Z
kube-dns-86f4d74b45-d5njm 2018-10-16T21:44:16Z
kube-proxy-pg89s 2018-10-29T17:32:05Z
kube-scheduler-minikube 2018-10-29T17:30:56Z
kubernetes-dashboard-6f4cfc5d87-b7n7v 2018-10-16T21:44:19Z
storage-provisioner 2018-10-16T21:44:19Z
tiller-deploy-778f674bf5-vt4mj 2018-11-01T13:45:23Z
</code></pre>
<p>then you can split those by space and form json or list</p>
|
<p>I created a service, which has 3 pods assigned. </p>
<p><a href="https://i.stack.imgur.com/pOX5I.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pOX5I.png" alt="enter image description here"></a></p>
<p>I would like to access the service through its hostname by an other service in the same project. How can I do that?</p>
<p>Tried:</p>
<pre><code>alxtbk@dns-test:~$ ping elassandra-0.elassandra
ping: elassandra-0.elassandra: Name or service not known
alxtbk@dns-test:~$ ping elassandra-0.default.svc.cluster.local
ping: elassandra-0.default.svc.cluster.local: Name or service not known
alxtbk@dns-test:~$ ping elassandra.default.svc.cluster.local
ping: elassandra.default.svc.cluster.local: Name or service not known
</code></pre>
<p>What is the correct way to resolve the ip adresses of the headless service?</p>
| <blockquote>
<p>For such Services, a cluster IP is not allocated, kube-proxy does not
handle these services, and there is no load balancing or proxying done
by the platform for them. How DNS is automatically configured depends
on whether the service has selectors defined.</p>
<p><strong>With selectors</strong></p>
<p>For headless services that define selectors, the endpoints controller
creates Endpoints records in the API, and modifies the DNS
configuration to return A records (addresses) that point directly to
the Pods backing the Service.</p>
<p><strong>Without selectors</strong></p>
<p>For headless services that do not define selectors, the endpoints
controller does not create Endpoints records. However, the DNS system
looks for and configures either:</p>
<p>CNAME records for ExternalName-type services.</p>
<p>A records for any Endpoints that share a name with the service, for all other types.</p>
</blockquote>
<p>so you maybe be able to do:</p>
<pre><code>kubectl get ep
</code></pre>
<p>to get the endpoints and then use them inside another kubernetes service.</p>
|
<p>I would like to run Docker instances in my local Kubernetes cloud.</p>
<p>I activated Hyper-V on my Windows 10 Pro to accommodate for Docker. Docker runs fine, I can use the CLI perfectly.</p>
<p>Now I'm trying to run Kubernetes / Minikube. Unfortunately, Minikube gives me an error if I have Hyper-V activated.</p>
<p>If I deactivate Hyper-V and reboot, Docker says that it cannot run without Hyper-V. That seems like a conundrum.</p>
<p>Any tips or suggestions to have both running? I'd like to spin docker images in my local Kubernetes cluster.</p>
<p>Thanks!</p>
| <p>Seems like you have a problem with the hypervisor usage. I've explained details about using Docker and Kubernetes with each other in one of my recent answers which I will link below.</p>
<ul>
<li><p>You can't use Kubernetes in Docker and minikube together (or maybe you can if you play with contexts, but I haven't tested it yet and for simplicity let's say you can't). If you use Docker for your k8s cluster you will interact with your cluster using kubectl, so there is no need for minikube. Just go to Kubernetes -> enable Kubernetes in the Docker app and use it according to the Docker documentation, <a href="https://docs.docker.com/docker-for-windows/kubernetes/" rel="nofollow noreferrer">here</a> and <a href="https://docs.docker.com/docker-for-windows/#kubernetes" rel="nofollow noreferrer">here in section Kubernetes</a>:
<a href="https://i.stack.imgur.com/W89AF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W89AF.png" alt="enter image description here"></a></p></li>
<li><p>If you want to use Docker for Windows and minikube, you have to
 specify the right arguments when you run minikube start. In your case you
 need to use the standard way of running minikube for Windows; you can
 follow this <a href="https://medium.com/@JockDaRock/minikube-on-windows-10-with-hyper-v-6ef0f4dc158c" rel="nofollow noreferrer">guide</a> for example. When the setup is ready,
 start it with <code>minikube start --vm-driver hyperv
 --hyperv-virtual-switch "vSwitch name"</code>. <strong>Note that Hyper-V should be the only hypervisor active.</strong> You can then keep using Docker as you
 did before.</p></li>
<li>A third option is using Docker Toolbox for your containers and
VirtualBox for minikube, which I explained in detail in this answer,
but it is not a recommended setup unless you have a specific need.</li>
</ul>
<p>So the important part here is to decide exactly which tools you want to use.
One more important thing: if you get stuck with errors now, they might be caused by leftovers from a previous minikube setup. Before you go further, remember to revert Docker to factory defaults and delete the .minikube and .kube directories if you run into errors.</p>
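<p>A minimal retry sequence could look like this (a sketch: it assumes Hyper-V stays the only active hypervisor and that you substitute the name of your own virtual switch):</p>
<pre><code># Remove the existing minikube VM and its cached cluster state
minikube delete

# Recreate the cluster against Hyper-V
minikube start --vm-driver hyperv --hyperv-virtual-switch "Primary Virtual Switch"
</code></pre>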
|
<p>I'm after a very simple requirement - yet it seems impossible to make Traefik redirect traffic from HTTP to HTTPS when behind an external load balancer.</p>
<p>This is my GCE ingress</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
namespace: platform
name: public-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: "kubernetes-cluster-dev-ip"
kubernetes.io/ingress.class: "gce"
ingress.gcp.kubernetes.io/pre-shared-cert: "application-dev-ssl,application-dev-graphql-ssl"
spec:
backend:
serviceName: traefik-ingress-service
servicePort: 80
</code></pre>
<p>This receives traffic over HTTP(S) and then forwards it to Traefik on port 80.</p>
<p>I initially tried using Traefik's own way of redirecting by matching the scheme, with this configuration:</p>
<pre><code>[entryPoints]
[entryPoints.http]
address = ":80"
compress = true
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
compress = true
[entryPoints.https.tls]
</code></pre>
<p>But this obviously gets into an infinite redirect loop, because the load balancer always proxies traffic to Traefik on port 80.</p>
<p>The simple solution to make this work is exactly what GCE suggests
<a href="https://github.com/kubernetes/ingress-gce#ingress-cannot-redirect-http-to-https" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-gce#ingress-cannot-redirect-http-to-https</a></p>
<p>Being able to check for the <code>http_x_forwarded_proto</code> header and redirect based on that.</p>
<p>Nginx equivalent</p>
<pre><code># Replace '_' with your hostname.
server_name _;
if ($http_x_forwarded_proto = "http") {
return 301 https://$host$request_uri;
}
</code></pre>
<p>Can someone advise what's the best way of handling this with Traefik, please!</p>
| <p>To recap, you have a GCE L7 (Layer 7) load balancer proxying another L7 load balancer in Traefik that you can potentially use it to proxy to another backend service. So looks like you have something like this:</p>
<pre><code>GCE L7 LB HTTP 80
=> Forwarded to Traefik HTTP 80
=> Redirect initial request to HTTPS 443
=> The client thinks it needs to talk to GCE L7 LB HTTPS 443
=> GCE L7 LB HTTP 443
=> Forwarded to Traefik HTTP 80
=> Infinite loop
</code></pre>
<p>and you need to have something like this:</p>
<pre><code>GCE L7 LB HTTP 80
=> Forwarded to Traefik HTTP 80
=> Redirect initial request to HTTPS 443
=> The client thinks it needs to talk to GCE L7 LB HTTPS 443
=> GCE L7 LB HTTP 443
=> Forwarded to Traefik HTTP 443
</code></pre>
<p>It's not documented anywhere whether Traefik redirects to HTTPS based on the value of <code>http_x_forwarded_proto</code> being <code>http</code>, but that would be the general assumption. In any case, the <code>Ingress</code> doesn't know anything about an HTTPS backend (you didn't specify how you configured the HTTPS GCE LB endpoint).</p>
<p>It's documented <a href="https://github.com/kubernetes/ingress-gce#backend-https" rel="nofollow noreferrer">here</a> how to make the GCE LB create an HTTPS endpoint that forwards directly to your HTTPS backend. Basically, you can try adding the <code>service.alpha.kubernetes.io/app-protocols</code> annotation to the HTTPS Traefik service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: traefik-https
annotations:
service.alpha.kubernetes.io/app-protocols: '{"my-https-port":"HTTPS"}'
labels:
    app: traefik
spec:
type: NodePort
ports:
- port: 443
protocol: TCP
name: my-https-port
selector:
app: traefik
</code></pre>
<p>So you would have something like this:</p>
<pre><code>GCE L7 LB HTTP 80
=> Forwarded to Traefik HTTP 80
=> Redirect initial request to HTTPS 443
=> The client thinks it needs to talk to GCE L7 LB HTTPS 443
=> GCE L7 LB HTTP 443
=> Forwarded to Traefik HTTPS service
=> Service forward to Traefik port 443
</code></pre>
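<p>If you'd rather patch an existing Service than redefine it, the same annotation can be applied from the command line (a sketch: <code>traefik-https</code> and the port name match the example above, so substitute your own names):</p>
<pre><code># Tell the GCE load balancer to speak HTTPS to the named service port
kubectl annotate service traefik-https \
  'service.alpha.kubernetes.io/app-protocols={"my-https-port":"HTTPS"}' --overwrite

# Verify the annotation is in place
kubectl describe service traefik-https | grep app-protocols
</code></pre>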
<p>Hope this helps.</p>
|
<p><a href="https://cloud.google.com/composer/docs/how-to/using/using-kubernetes-pod-operator" rel="nofollow noreferrer">The Cloud Composer documentation</a> explicitly states that:</p>
<blockquote>
<p>Due to an issue with the Kubernetes Python client library, your Kubernetes pods should be designed to take no more than an hour to run.</p>
</blockquote>
<p>However, it doesn't provide any more context than that, and I can't find a definitively relevant issue on the Kubernetes Python client project.</p>
<p>To test it, I ran a pod for two hours and saw no problems. What issue creates this restriction, and how does it manifest? </p>
| <p><a href="https://issues.apache.org/jira/browse/AIRFLOW-3253" rel="nofollow noreferrer">https://issues.apache.org/jira/browse/AIRFLOW-3253</a> is the reason (and hopefully, my fix will be merged soon). As the others suggested, this affects anyone using the Kubernetes Python client with GCP auth. If you are authenticating with a Kubernetes service account, you should see no problem.</p>
<p>If you are authenticating via a GCP service account with gcloud (e.g. using the GKEPodOperator), you will generally see this problem with jobs that take longer than an hour because the auth token expires after an hour.</p>
|
<p>I'm trying to expose a SignalR hub hosted in a Kubernetes (Azure) pod. Basically, the authentication and the handshake steps work fine, but when I trigger some action, the clients connected via the k8s Ingress don't receive the message. Has anybody experienced this issue, or has anybody shared SignalR hubs through a Kubernetes Ingress? </p>
<p><strong>ingress.yml</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: endpoints
annotations:
kubernetes.io/ingress.class: addon-http-application-routing
ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.org/websocket-services: "myservice"
spec:
rules:
- host: api.[MY-DOMAIN].com
http:
paths:
- backend:
serviceName: myservice
servicePort: 80
path: /myservice
</code></pre>
| <p>Try: </p>
<pre><code>annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/session-cookie-hash: sha1
nginx.ingress.kubernetes.io/session-cookie-name: REALTIMESERVERID
</code></pre>
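<p>If you prefer to apply those settings to the existing Ingress (named <code>endpoints</code> in the question) from the command line, something like this should be equivalent (a sketch of the same annotations as above):</p>
<pre><code># Switch the Ingress to the nginx class and enable cookie-based sticky sessions
kubectl annotate ingress endpoints \
  kubernetes.io/ingress.class=nginx \
  nginx.ingress.kubernetes.io/ssl-redirect=false \
  nginx.ingress.kubernetes.io/affinity=cookie \
  nginx.ingress.kubernetes.io/session-cookie-hash=sha1 \
  nginx.ingress.kubernetes.io/session-cookie-name=REALTIMESERVERID \
  --overwrite
</code></pre>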
<p>I wrote a sample project a while back, if you want a working example: <a href="https://github.com/DenisBiondic/RealTimeMicroservices" rel="noreferrer">DenisBiondic/RealTimeMicroservices</a></p>
<p>As a side note, consider using Azure SignalR Service, it should remove many headaches (also in the example above).</p>
|
<p>I'm following <a href="https://linuxconfig.org/how-to-install-kubernetes-on-ubuntu-18-04-bionic-beaver-linux" rel="nofollow noreferrer">this guide</a> on how to set up a kubernetes swarm.</p>
<p>My swarm will be as follows:</p>
<ul>
<li>10.0.2.1: swarm1 (master)</li>
<li>10.0.2.2: swarm2 (worker) (currently trying to setup a single node swarm)</li>
<li>10.0.2.3: swarm2 (worker) (not yet provisioned)</li>
</ul>
<p>I followed it and got up to the part where I enter the command: </p>
<p><code>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml</code></p>
<p>And once I enter that, I get an error: </p>
<blockquote>
<p>unable to recognize "<a href="https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml" rel="nofollow noreferrer">https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml</a>": Get <a href="https://10.0.2.1:6443/api?timeout=32s" rel="nofollow noreferrer">https://10.0.2.1:6443/api?timeout=32s</a>: dial tcp 10.0.2.1:6443: connect: connection refused
unable to recognize </p>
</blockquote>
<p>Which seems to mean that the kubernetes master isn't running properly. How can I start the server so it can connect?</p>
| <p>This is a "not enough privileges" error. Either you did not run these commands:</p>
<pre><code>mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</code></pre>
<p>Or there is an admin.conf left over from a previous init. In that case, when the second command asks <code>cp: overwrite '/home/aurelius/.kube/config'?</code>, make sure you type <code>yes</code> and press Enter.</p>
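<p>Once the kubeconfig is in place, a quick sanity check (not part of the original guide, just a way to confirm kubectl points at the right API server):</p>
<pre><code># Show which API server address kubectl is configured to use
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'; echo

# Confirm the control plane is answering
kubectl cluster-info
</code></pre>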
|
<p>The JSON output returned to me after running this command </p>
<pre><code>kubectl get pods -o json | jq '.items[].spec.containers[].env'
</code></pre>
<p>on my kubernetes cluster is this</p>
<pre><code>[
{
"name": "USER_NAME",
"value": "USER_NAME_VALUE_A"
},
{
"name": "USER_ADDRESS",
"value": "USER_ADDRESS_VALUE_A"
}
]
[
{
"name": "USER_NAME",
"value": "USER_NAME_VALUE_B"
},
{
"name": "USER_ADDRESS",
"value": "USER_ADDRESS_VALUE_B"
}
]
</code></pre>
<p>I'd like to create a unified array/dictionary (using a <strong>Bash</strong> script) that looks like the example below. How can I get the value of each key?</p>
<pre><code>[
{
"USER_NAME": "USER_NAME_VALUE_A",
"USER_ADDRESS": "USER_ADDRESS_VALUE_A"
},
{
"USER_NAME": "USER_NAME_VALUE_B",
"USER_ADDRESS": "USER_ADDRESS_VALUE_B"
}
]
</code></pre>
| <p>This will do it in bash. You'd be surprised how much you can do with bash:</p>
<pre><code>#!/bin/bash
NAMES=`kubectl get pods -o=jsonpath='{range .items[*]}{.spec.containers[*].env[*].name}{"\n"}{end}' | tr -d '\011\012\015'`
VALUES=`kubectl get pods -o=jsonpath='{range .items[*]}{.spec.containers[*].env[*].value}{"\n"}{end}' | tr -d '\011\012\015'`
IFS=' ' read -ra NAMESA <<< "$NAMES"
IFS=' ' read -ra VALUESA <<< "$VALUES"
MAXINDEX=`expr ${#NAMESA[@]} - 1`
printf "[\n"
for i in "${!NAMESA[@]}"; do
printf " {\n"
printf " \"USER_NAME\": \"${NAMESA[$i]}\",\n"
printf " \"USER_ADDRESS\": \"${VALUESA[$i]}\"\n"
if [ "$i" == "${MAXINDEX}" ]; then
printf " }\n"
else
printf " },\n"
fi
done
printf "]\n"
</code></pre>
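<p>Since the question already pipes through jq, a single jq filter can also produce the desired shape directly; this is a sketch that assumes each container's <code>env</code> list should become one object (<code>from_entries</code> understands the <code>name</code>/<code>value</code> keys used here):</p>
<pre><code># Turn every container's env list into a {NAME: value, ...} object
kubectl get pods -o json \
  | jq '[.items[].spec.containers[].env | from_entries]'
</code></pre>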
|
<p>I am running a Kubernetes v1.11.1 cluster; sometimes my kube-apiserver starts throwing the 'too many open files' message. I noticed too many open TCP connections to the node kubelet port 10250.</p>
<p>My server is configured with 65536 file descriptors. Do I need to increase the number of open files for the container host? What are the recommended ulimit settings for the container host?</p>
<p><strong>api server log message</strong></p>
<pre><code>I1102 13:57:08.135049 1 logs.go:49] http: Accept error: accept tcp [::]:6443: accept4: too many open files; retrying in 1s
I1102 13:57:09.135191 1 logs.go:49] http: Accept error: accept tcp [::]:6443: accept4: too many open files; retrying in 1s
I1102 13:57:10.135437 1 logs.go:49] http: Accept error: accept tcp [::]:6443: accept4: too many open files; retrying in 1s
I1102 13:57:11.135589 1 logs.go:49] http: Accept error: accept tcp [::]:6443: accept4: too many open files; retrying in 1s
I1102 13:57:12.135755 1 logs.go:49] http: Accept error: accept tcp [::]:6443: accept4: too many open files; retrying in 1s
</code></pre>
<p>my host ulimit values:</p>
<pre><code># ulimit -a
-f: file size (blocks) unlimited
-t: cpu time (seconds) unlimited
-d: data seg size (kb) unlimited
-s: stack size (kb) 8192
-c: core file size (blocks) unlimited
-m: resident set size (kb) unlimited
-l: locked memory (kb) 64
-p: processes unlimited
-n: file descriptors 65536
-v: address space (kb) unlimited
-w: locks unlimited
-e: scheduling priority 0
-r: real-time priority 0
</code></pre>
<p>Thanks
SR</p>
| <p><code>65536</code> seems a bit low, although there are many apps that recommend that number. This is what I have on one K8s cluster for the kube-apiserver:</p>
<pre><code># kubeapi-server-container
# |
# \|/
# ulimit -a
-f: file size (blocks) unlimited
-t: cpu time (seconds) unlimited
-d: data seg size (kb) unlimited
-s: stack size (kb) 8192
-c: core file size (blocks) unlimited
-m: resident set size (kb) unlimited
-l: locked memory (kb) 16384
-p: processes unlimited
-n: file descriptors 1048576 <====
-v: address space (kb) unlimited
-w: locks unlimited
-e: scheduling priority 0
-r: real-time priority 0
</code></pre>
<p>Different from a regular bash process system limits:</p>
<pre><code># ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 15447
max locked memory (kbytes, -l) 16384
max memory size (kbytes, -m) unlimited
open files (-n) 1024 <===
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 15447
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
</code></pre>
<p>But yet the total max of the whole system:</p>
<pre><code>$ cat /proc/sys/fs/file-max
394306
</code></pre>
<p>As you can see in <a href="https://serverfault.com/questions/122679/how-do-ulimit-n-and-proc-sys-fs-file-max-differ">this</a> answer, nothing can exceed <code>/proc/sys/fs/file-max</code> on the system, so I would also check that value. I would also check the number of file descriptors being used (first column); this will give you an idea of how many open files you have:</p>
<pre><code>$ cat /proc/sys/fs/file-nr
2176 0 394306
</code></pre>
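<p>If you want to see how many descriptors the API server process itself is holding, and optionally raise the system-wide ceiling, something along these lines should work on the master node (the sysctl value is only an illustrative number):</p>
<pre><code># Count the file descriptors currently open by kube-apiserver
sudo ls /proc/$(pgrep -f kube-apiserver | head -1)/fd | wc -l

# Raise the system-wide ceiling (add it to /etc/sysctl.conf to persist)
sudo sysctl -w fs.file-max=1048576
</code></pre>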
|
<p>I have a deployment where the replicas scale up and down, all under a headless service. I am able to query ..svc.cluster.local, which returns a list of all pod IPs.</p>
<p>I wanted to know if it's possible to query each pod IP and get the hostname of the pod. It works for pods on the same host machine, but it's not resolving the pods from other hosts.</p>
<p>I noticed that it works for a StatefulSet, but it's not working for a Deployment.</p>
| <p>This has already been discussed <a href="https://stackoverflow.com/a/43256750/2989261">here</a> for <code>kube-dns</code>. There has been more discussion <a href="https://github.com/kubernetes/dns/issues/266" rel="nofollow noreferrer">here too</a>.</p>
<p>However, PTR records work fine for me with <code>coredns</code> and K8s 1.12:</p>
<pre><code>$ kubectl get pod helloworld-xxxxxxxxxx-xxxxx -o=jsonpath="{.metadata.annotations['cni\.projectcalico\.org/podIP']}" | cut -d "/" -f 1
192.168.11.28
# Connect to another pod
$ kubectl exec -it anotherpod-svc-xxxxxxxxxx-xxxxx bash
root@anotherpod-xxxxxxxxxx-xxxxx:/# dig +short -x 192.168.11.28
192-168-11-28.helloworld.default.svc.cluster.local.
root@anotherpod-xxxxxxxxxx-xxxxx:/# dig +short 192-168-11-28.helloworld.default.svc.cluster.local
192.168.11.28
# Another helloworld pod on a different physical machine
$ kubectl get pod helloworld-xxxxxxxxxx-xxxxx -o=jsonpath="{.metadata.annotations['cni\.projectcalico\.org/podIP']}" | cut -d "/" -f 1
192.168.4.6
# Connect to another pod
$ kubectl exec -it anotherpod-svc-xxxxxxxxxx-xxxxx bash
root@anotherpod-svc-xxxxxxxxxx-xxxxx:/# dig +short -x 192.168.4.6
192-168-4-6.helloworld.default.svc.cluster.local.
root@anotherpod-xxxxxxxxxx-xxxxx:/# dig +short 192-168-4-6.helloworld.default.svc.cluster.local
192.168.4.6
</code></pre>
|
<p>When I try to create Deployment as Type Job, it's not pulling any image. </p>
<p>Below is .yaml:</p>
<pre>
apiVersion: batch/v1
kind: Job
metadata:
name: copyartifacts
spec:
backoffLimit: 1
template:
metadata:
name: copyartifacts
spec:
restartPolicy: "Never"
volumes:
- name: sharedvolume
persistentVolumeClaim:
claimName: shared-pvc
- name: dockersocket
hostPath:
path: /var/run/docker.sock
containers:
- name: copyartifacts
image: alpine:3.7
imagePullPolicy: Always
command: ["sh", "-c", "ls -l /shared; rm -rf /shared/*; ls -l /shared; while [ ! -d /shared/artifacts ]; do echo Waiting for artifacts to be copied; sleep 2; done; sleep 10; ls -l /shared/artifacts; "]
volumeMounts:
- mountPath: /shared
name: sharedvolume
</pre>
<p>Can you please guide here?</p>
<p>Regards,
Vikas</p>
| <p>There could be two possible reasons for not seeing the pod.</p>
<ol>
<li>The pod hasn't been created yet.</li>
<li>The pod has completed its task and terminated before you noticed.</li>
</ol>
<p><strong>1. Pod hasn't been created:</strong></p>
<p>If the pod hasn't been created yet, you have to find out why the job failed to create it. You can view the job's events to see if there are any failure events. Use the following command to describe the job.</p>
<pre><code>kubectl describe job <job-name> -n <namespace>
</code></pre>
<p>Then, check the <code>Events:</code> field. There might be some events showing the pod creation failure with the respective reason.</p>
<p><strong>2. Pod has completed and terminated:</strong></p>
<p>Jobs are used to perform a one-time task rather than serving an application that needs to maintain a desired state. When the task is complete, the pod goes to the completed state and then terminates (but is not deleted). If your Job is intended for a task that does not take much time, the pod may terminate after completing the task before you notice.</p>
<p>As the pod is terminated, <code>kubectl get pods</code> will not show that pod. However, you will be able to see the pod using the <code>kubectl get pods -a</code> command, as it hasn't been deleted.</p>
<p>You can also describe the job and check for a completion or success event.</p>
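<p>For the Job in the question (named <code>copyartifacts</code>), the checks might look like this:</p>
<pre><code># Look for pod-creation failures in the job's events
kubectl describe job copyartifacts

# If the pod ran and finished, its output is still available
kubectl logs job/copyartifacts
</code></pre>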
|
<p>I was wondering if I should have the pod level nginx in the implementations below:
I was previously using a normal ingress and kube-lego after migrating from VMs and now I am using cert-manager and GKE.
My Ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: myapp-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: myapp-static-ip
kubernetes.io/ingress.class: nginx
kubernetes.io/ingress.allow-http: "false"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
ingress.kubernetes.io/rewrite-target: /
certmanager.k8s.io/cluster-issuer: letsencrypt
namespace: default
spec:
tls:
- hosts:
- myapp.com
secretName: myapp-crt
rules:
- host:
http:
paths:
- path: /
backend:
serviceName: myapp
servicePort: http
</code></pre>
<p>My service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myapp
labels:
app: myapp
spec:
type: NodePort
ports:
- port: 80
targetPort: 80
nodePort: 32111
protocol: "TCP"
name: http
selector:
app: myapp
</code></pre>
<p>My Deployment</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: myapp
labels:
app: myapp
spec:
replicas: 3
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: gcr.io/myapp-1/myapp:latest
imagePullPolicy: Always
env:
- name: DB_USER
valueFrom:
secretKeyRef:
name: cloudsql
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: cloudsql
key: password
-
name: STATIC_ROOT
value: https://storage.googleapis.com/myapp-api/static/
-
name: STATIC_URL
value: https://storage.googleapis.com/myapp-api/static/
-
name: MEDIA_ROOT
value: /myapp/media
-
name: MEDIA_URL
value: http://myapp.com/media/
      - name: nginx
        image: nginx
        command: [nginx, -g, 'daemon off;']
        imagePullPolicy: Always
        volumeMounts:
        - name: api-nginx-config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
        - name: myapp-media
          mountPath: /myapp/media/
        ports:
        - containerPort: 80
- image: b.gcr.io/cloudsql-docker/gce-proxy:1.05
name: cloudsql-proxy
command: ["/cloud_sql_proxy", "--dir=/cloudsql",
"-instances=myapp-1:europe-west1:myapp-api=tcp:5432",
"-credential_file=/secrets/cloudsql/credentials.json"]
volumeMounts:
- name: cloudsql-oauth-credentials
mountPath: /secrets/cloudsql
readOnly: true
- name: ssl-certs
mountPath: /etc/ssl/certs
- name: cloudsql
mountPath: /cloudsql
- name: myapp-media
mountPath: /myapp/media
volumes:
- name: cloudsql-oauth-credentials
secret:
secretName: cloudsql-oauth-credentials
- name: cloudsql
emptyDir:
- name: api-nginx-config
configMap:
name: api-nginx-config
-
name: myapp-media
persistentVolumeClaim:
claimName: myapp-media
</code></pre>
<p>my nginx conf:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: api-nginx-config
data:
nginx.conf: |
events {
worker_connections 1024;
}
http {
upstream api {
server 127.0.0.1:8080 fail_timeout=0;
}
server {
access_log /var/log/nginx/http-access.log;
error_log /var/log/nginx/http-error.log;
listen 80;
listen [::]:80;
server_name myapp.com;
location /media/ {
alias /myapp/media;
}
location = /favicon.ico {
access_log off;
log_not_found off;
}
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://127.0.0.1:8080/;
}
}
}
</code></pre>
<p>Is it serving any major purpose, given that I could map <code>myapp/media</code> directly to <code>/media</code> in the volume mount and my TLS is handled by the ingress? My major concern is with the pod-level nginx I highlighted earlier: is it useless in this case? Is it just baggage I am carrying over from previous implementations?</p>
| <p>You may need to run your own nginx as a deployment for the reasons listed in the above answer; plus, you may need to scale the nginx deployment to, say, 10 replicas. You can't scale an ingress like that. But in any case, you just need one of them.</p>
|
<p>I was wondering if I should have the pod level nginx in the implementations below:
I was previously using a normal ingress and kube-lego after migrating from VMs and now I am using cert-manager and GKE.
My Ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: myapp-ingress
annotations:
kubernetes.io/ingress.global-static-ip-name: myapp-static-ip
kubernetes.io/ingress.class: nginx
kubernetes.io/ingress.allow-http: "false"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
ingress.kubernetes.io/rewrite-target: /
certmanager.k8s.io/cluster-issuer: letsencrypt
namespace: default
spec:
tls:
- hosts:
- myapp.com
secretName: myapp-crt
rules:
- host:
http:
paths:
- path: /
backend:
serviceName: myapp
servicePort: http
</code></pre>
<p>My service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myapp
labels:
app: myapp
spec:
type: NodePort
ports:
- port: 80
targetPort: 80
nodePort: 32111
protocol: "TCP"
name: http
selector:
app: myapp
</code></pre>
<p>My Deployment</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: myapp
labels:
app: myapp
spec:
replicas: 3
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: gcr.io/myapp-1/myapp:latest
imagePullPolicy: Always
env:
- name: DB_USER
valueFrom:
secretKeyRef:
name: cloudsql
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: cloudsql
key: password
-
name: STATIC_ROOT
value: https://storage.googleapis.com/myapp-api/static/
-
name: STATIC_URL
value: https://storage.googleapis.com/myapp-api/static/
-
name: MEDIA_ROOT
value: /myapp/media
-
name: MEDIA_URL
value: http://myapp.com/media/
      - name: nginx
        image: nginx
        command: [nginx, -g, 'daemon off;']
        imagePullPolicy: Always
        volumeMounts:
        - name: api-nginx-config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
        - name: myapp-media
          mountPath: /myapp/media/
        ports:
        - containerPort: 80
- image: b.gcr.io/cloudsql-docker/gce-proxy:1.05
name: cloudsql-proxy
command: ["/cloud_sql_proxy", "--dir=/cloudsql",
"-instances=myapp-1:europe-west1:myapp-api=tcp:5432",
"-credential_file=/secrets/cloudsql/credentials.json"]
volumeMounts:
- name: cloudsql-oauth-credentials
mountPath: /secrets/cloudsql
readOnly: true
- name: ssl-certs
mountPath: /etc/ssl/certs
- name: cloudsql
mountPath: /cloudsql
- name: myapp-media
mountPath: /myapp/media
volumes:
- name: cloudsql-oauth-credentials
secret:
secretName: cloudsql-oauth-credentials
- name: cloudsql
emptyDir:
- name: api-nginx-config
configMap:
name: api-nginx-config
-
name: myapp-media
persistentVolumeClaim:
claimName: myapp-media
</code></pre>
<p>my nginx conf:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: api-nginx-config
data:
nginx.conf: |
events {
worker_connections 1024;
}
http {
upstream api {
server 127.0.0.1:8080 fail_timeout=0;
}
server {
access_log /var/log/nginx/http-access.log;
error_log /var/log/nginx/http-error.log;
listen 80;
listen [::]:80;
server_name myapp.com;
location /media/ {
alias /myapp/media;
}
location = /favicon.ico {
access_log off;
log_not_found off;
}
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_set_header X-NginX-Proxy true;
proxy_pass http://127.0.0.1:8080/;
}
}
}
</code></pre>
<p>Is it serving any major purpose, given that I could map <code>myapp/media</code> directly to <code>/media</code> in the volume mount and my TLS is handled by the ingress? My major concern is with the pod-level nginx I highlighted earlier: is it useless in this case? Is it just baggage I am carrying over from previous implementations?</p>
| <p>Generally, there is not much point in having an extra nginx pod. If you do, you end up with something of a double ingress. An nginx ingress controller pod already has nginx in it, and you can scale that up/down.</p>
<p>One reason you might want to keep it is backward compatibility: if, for example, you want to adopt an ingress but roll it out gradually in this sort of fashion: create the new nginx ingress -> flip traffic so it flows through both the new nginx ingress and your own nginx, until you have flipped all your pods -> then gradually remove your own nginx until it is gone entirely.</p>
<p>Another reason is to support a very specific nginx configuration that is not supported by the nginx ingress controller yet.</p>
|
<p>Kubernetes has many networking solutions (flannel, calico, contrail).
I'm using Kubernetes installed from docker-for-windows and I cannot figure out what networking solution is applied out of the box.</p>
<p>Can anyone please point me to how to find this?</p>
| <p>Out-of-the-box networking is a work in progress as of this writing, so you'll have to configure your own, for example ToR (Top of Rack). The solutions are described <a href="https://kubernetes.io/docs/getting-started-guides/windows/#networking" rel="nofollow noreferrer">here</a>.</p>
<p>Basically, this (quoted from the docs):</p>
<ol>
<li><p>Upstream L3 Routing - IP routes configured in upstream ToR</p></li>
<li><p>Host-Gateway - IP routes configured on each host</p></li>
<li><p>Open vSwitch (OVS) & Open Virtual Network (OVN) with Overlay -
overlay networks (supports STT and Geneve tunneling types)</p></li>
<li><p>[Future - In Review] Overlay - VXLAN or IP-in-IP encapsulation using
Flannel</p></li>
<li><p>[Future] Layer-3 Routing with BGP (Calico)</p></li>
</ol>
|
<p>I am getting below error while doing the installation of RabbitMQ through helm install.</p>
<blockquote>
<p>MountVolume.SetUp failed for volume "config-volume" : couldn't
propagate object cache: timed out waiting for the condition</p>
</blockquote>
<p>Below is the details of kubectl version:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Pl
atform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Pla
tform:"linux/amd64"}
</code></pre>
<p>And below is the command I used to install stable rabbitmq.</p>
<pre><code>helm install --name coa-rabbitmq --set rabbitmq.username=#Username#,rabbitmq.password=#Password#,rabbitmq.erlangCookie=#Cookie#,livenessProbe.periodSeconds=120,readinessProbe.periodSeconds=120 stable/rabbitmq
</code></pre>
<p>Any help will be appreciated.</p>
<p>Thanks in advance.</p>
| <p>This works fine for me. It looks like an issue related to <a href="https://github.com/kubernetes/kubernetes/issues/70044" rel="nofollow noreferrer">this</a>; in this case it can't mount the ConfigMap volume where the rabbitmq config is: the <a href="https://github.com/helm/charts/blob/master/stable/rabbitmq/templates/statefulset.yaml#L200" rel="nofollow noreferrer"><code>config-volume</code></a>. It may also be the case that something is preventing volume mounts on your nodes (a process, file descriptors, etc).</p>
<p>You didn't specify where you are running this, but you can try bouncing your node components: kubelet, docker, and ultimately your node. Keep in mind that all other containers running on the node will restart somewhere else in the cluster.</p>
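<p>If your nodes run systemd-managed services (an assumption; adjust to however kubelet and Docker are started on your nodes), bouncing them would look like this:</p>
<pre><code># Restart the components responsible for mounting volumes into pods
sudo systemctl restart kubelet
sudo systemctl restart docker
</code></pre>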
<p><strong>Edit:</strong></p>
<p>There was a mismatch between kubectl client, kubectl version and kubeadm version.</p>
|
<p>I am currently working with GPUs, and since they are expensive I want them to scale down and up depending on the load. However, scaling up the cluster and preparing the node takes around 8 minutes, since it installs the drivers and does some other preparation. </p>
<p>So to solve this problem, I want to let one node stay in idle state and autoscale the rest of the nodes. Is there any way to do it?</p>
<p>This way when a request comes, the idle node will take it and a new idle node will be created.</p>
<p>Thanks!</p>
| <p>There are three different approaches:</p>
<p>1 - The first approach is entirely manual. This will help you keep a node in an idle state without incurring downtime for your application during the autoscaling process. </p>
<p>You would have to prevent one specific node from autoscaling (let's call it "node A"). Create a new node and replicate node A's pods onto that new node.
Node A will keep running while it is not part of the autoscaling process.
Once the autoscaling process is complete and the boot is finished, you may safely drain that node.</p>
<pre><code> a. Create a new node.
b. Prevent node A from evicting its pods by adding the annotation "cluster-autoscaler.kubernetes.io/safe-to-evict": "false"
c. Copy a replica of node A, make replicas of the pods into that new node.
d. Once the autoscaler has scaled all the nodes, and the boot time has
completed, you may safely drain node A, and delete it.
</code></pre>
<p>2 - You could run a <a href="https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#how-disruption-budgets-work" rel="nofollow noreferrer">Pod Disruption Budget</a>.</p>
<p>3 - If you would like to block the node A from being deleted when the autoscaler scales down, <a href="https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-can-i-prevent-cluster-autoscaler-from-scaling-down-a-particular-node" rel="nofollow noreferrer">you could set the annotation</a> "cluster-autoscaler.kubernetes.io/scale-down-disabled": "true" on one particular node. This only works during a scaling down process. </p>
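<p>For reference, both annotations can be applied with kubectl (the node and pod names below are placeholders for your own):</p>
<pre><code># Keep the cluster autoscaler from removing one particular node during scale-down
kubectl annotate node <your-idle-node> \
  cluster-autoscaler.kubernetes.io/scale-down-disabled=true

# Keep a specific pod from being evicted by the autoscaler
kubectl annotate pod <your-pod> \
  cluster-autoscaler.kubernetes.io/safe-to-evict=false
</code></pre>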
|
<p>I followed Istio's official documentation to setup Istio for sample bookinfo app with minikube. but I'm getting <strong>Unable to connect to the server: net/http: TLS handshake timeout</strong> error. these are the steps that I have followed(I have kubectl & minikube installed).</p>
<pre><code>minikube start
curl -L https://git.io/getLatestIstio | sh -
cd istio-1.0.3
export PATH=$PWD/bin:$PATH
kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
kubectl apply -f install/kubernetes/istio-demo-auth.yaml
kubectl get pods -n istio-system
</code></pre>
<p>This is the terminal output I'm getting</p>
<pre><code>$ kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
grafana-9cfc9d4c9-xg7bh 1/1 Running 0 4m
istio-citadel-6d7f9c545b-lwq8s 1/1 Running 0 3m
istio-cleanup-secrets-69hdj 0/1 Completed 0 4m
istio-egressgateway-75dbb8f95d-k6xj2 1/1 Running 0 4m
istio-galley-6d74549bb9-mdc97 0/1 ContainerCreating 0 4m
istio-grafana-post-install-xz9rk 0/1 Completed 0 4m
istio-ingressgateway-6bd4957bc-vhbct 1/1 Running 0 4m
istio-pilot-7f8c49bbd8-x6bmm 0/2 Pending 0 4m
istio-policy-6c65d8cff4-hx2c7 2/2 Running 0 4m
istio-security-post-install-gjfj2 0/1 Completed 0 4m
istio-sidecar-injector-74855c54b9-nnqgx 0/1 ContainerCreating 0 3m
istio-telemetry-65cdd46d6c-rqzfw 2/2 Running 0 4m
istio-tracing-ff94688bb-hgz4h 1/1 Running 0 3m
prometheus-f556886b8-chdxw 1/1 Running 0 4m
servicegraph-778f94d6f8-9xgw5 1/1 Running 0 3m
$kubectl describe pod istio-galley-6d74549bb9-mdc97
Error from server (NotFound): pods "istio-galley-5bf4d6b8f7-8s2z9" not found
</code></pre>
<p>pod describe output</p>
<pre><code> $ kubectl -n istio-system describe pod istio-galley-6d74549bb9-mdc97
Name: istio-galley-6d74549bb9-mdc97
Namespace: istio-system
Node: minikube/172.17.0.4
Start Time: Sat, 03 Nov 2018 04:29:57 +0000
Labels: istio=galley
pod-template-hash=1690826493
Annotations: scheduler.alpha.kubernetes.io/critical-pod=
sidecar.istio.io/inject=false
Status: Pending
IP:
Controlled By: ReplicaSet/istio-galley-5bf4d6b8f7
Containers:
validator:
Container ID:
Image: gcr.io/istio-release/galley:1.0.0 Image ID:
Ports: 443/TCP, 9093/TCP Host Ports: 0/TCP, 0/TCP
Command: /usr/local/bin/galley
validator --deployment-namespace=istio-system
--caCertFile=/etc/istio/certs/root-cert.pem
--tlsCertFile=/etc/istio/certs/cert-chain.pem
--tlsKeyFile=/etc/istio/certs/key.pem
--healthCheckInterval=2s
--healthCheckFile=/health
--webhook-config-file
/etc/istio/config/validatingwebhookconfiguration.yaml
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Requests:
cpu: 10m
Liveness: exec [/usr/local/bin/galley probe --probe-path=/health --interval=4s] delay=4s timeout=1s period=4s #success=1 #failure=3
Readiness: exec [/usr/local/bin/galley probe --probe-path=/health --interval=4s] delay=4s timeout=1s period=4s #success=1 #failure=3
Environment: <none>
Mounts:
/etc/istio/certs from certs (ro)
/etc/istio/config from config (ro)
/var/run/secrets/kubernetes.io/serviceaccount from istio-galley-service-account-token-9pcmv(ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
certs:
Type: Secret (a volume populated by a Secret)
SecretName: istio.istio-galley-service-account
Optional: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: istio-galley-configuration
Optional: false
istio-galley-service-account-token-9pcmv:
Type: Secret (a volume populated by a Secret)
SecretName: istio-galley-service-account-token-9pcmv
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 1m default-scheduler Successfully assigned istio-galley-5bf4d6b8f7-8t8qz to minikube
Normal SuccessfulMountVolume 1m kubelet, minikube MountVolume.SetUp succeeded for volume "config"
Normal SuccessfulMountVolume 1m kubelet, minikube MountVolume.SetUp succeeded for volume "istio-galley-service-account-token-9pcmv"
Warning FailedMount 27s (x7 over 1m) kubelet, minikube MountVolume.SetUp failed for volume "certs" : secrets "istio.istio-galley-service-account" not found
</code></pre>
<p>after some time :-</p>
<pre><code> $ kubectl describe pod istio-galley-6d74549bb9-mdc97
Unable to connect to the server: net/http: TLS handshake timeout
</code></pre>
<p>so I wait for istio-sidecar-injector and istio-galley containers to get created. If I again run <strong>kubectl get pods -n istio-system</strong> or any other <strong>kubectl</strong> commands gives <strong>Unable to connect to the server: net/http: TLS handshake timeout</strong> error. </p>
<p>Please help me with this issue.
ps: I'm running minikube on ubuntu 16.04</p>
<p>Thanks in advance.</p>
| <p>Looks like you are running into <a href="https://github.com/istio/istio/issues/7338" rel="nofollow noreferrer">this</a> and <a href="https://github.com/istio/istio/issues/7174" rel="nofollow noreferrer">this</a>: the secret <code>istio.istio-galley-service-account</code> is missing in your <code>istio-system</code> namespace. You can try the workaround as <a href="https://github.com/istio/istio/issues/7174#issuecomment-429233168" rel="nofollow noreferrer">described</a>:</p>
<blockquote>
<p>Install as outlined in the docs: <a href="https://istio.io/docs/setup/kubernetes/minimal-install/" rel="nofollow noreferrer">https://istio.io/docs/setup/kubernetes/minimal-install/</a> the missing secret is created by the citadel pod which isn't running due to the --set security.enabled=false flag, setting that to true starts citadel and the secret is created.</p>
</blockquote>
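<p>To verify that citadel is now running and has created the secret that galley is waiting for, you can check (the secret name comes from the error in the question; the label follows the same convention as the <code>istio=galley</code> label shown in your describe output):</p>
<pre><code># The galley pod mounts this secret; it should exist once citadel is up
kubectl get secret istio.istio-galley-service-account -n istio-system

# Citadel itself should show up as Running
kubectl get pods -n istio-system -l istio=citadel
</code></pre>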
|
<p>I am trying to setup HDFS on minikube (for now) and later on a DEV kubernetes cluster so I can use it with Spark. I want Spark to run locally on my machine so I can run in debug mode during development so it should have access to my HDFS on K8s.</p>
<p>I have already set up 1 namenode deployment and a datanode statefulset (3 replicas) and those work fine when I am using HDFS from within the cluster. I am using a headless service for the datanodes and a cluster-ip service for the namenode.</p>
<p>The problem starts when I am trying to expose hdfs. I was thinking of using an ingress for that but that only exposes port 80 outside of the cluster and maps paths to different services inside the cluster which is not what I'm looking for. As far as I understand, my local spark jobs (or hdfs client) talk to the namenode which replies with an address for each block of data. That address though is something like <code>172.17.0.x:50010</code> and of course my local machine can't see those.</p>
<p>Is there any way I make this work? Thanks in advance!</p>
| <p>I know this question is about just getting it to run in a dev environment, but HDFS is very much a work in progress on K8s, so I wouldn't by any means run it in production (as of this writing). It's quite tricky to get it working on a container orchestration system because:</p>
<ol>
<li>You are talking about a lot of data and a lot of nodes (namenodes/datanodes) that are not meant to start/stop in different places in your cluster. </li>
<li>You have the risk of having a constantly unbalanced cluster if you are not pinning your namenodes/datanodes to a K8s node (which defeats the purpose of having a container orchestration system)</li>
<li>If you run your namenodes in HA mode and for any reason they die and restart, you run the risk of corrupting the namenode metadata, which would make you lose all your data. It's also risky if you have a single namenode and you don't pin it to a K8s node.</li>
<li>You can't scale up and down easily without running in an unbalanced cluster. Running an unbalanced cluster defeats one of the main purposes of HDFS.</li>
</ol>
<p>If you look at <a href="https://docs.mesosphere.com/services/hdfs/" rel="noreferrer">DC/OS</a> they were able to make it work on their platform, so that may give you some guidance.</p>
<p>In K8s you basically need to create services for all your namenode ports and all your datanode ports. Your client needs to be able to find every namenode and datanode so that it can read/write from them. Also, some ports cannot go through an Ingress because they are layer 4 (TCP) ports, for example the IPC port <code>8020</code> on the namenode and <code>50020</code> on the datanodes.</p>
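<p>As a rough illustration, exposing a single namenode port outside the cluster could look like this (a sketch: <code>hdfs-namenode</code> is a placeholder for whatever your namenode deployment is called, and every namenode/datanode port your client uses would need the same treatment):</p>
<pre><code># Expose the namenode RPC port on every cluster node via a NodePort service
kubectl expose deployment hdfs-namenode --name=hdfs-namenode-external \
  --type=NodePort --port=8020 --target-port=8020
</code></pre>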
<p>Hope it helps!</p>
|
<p>I'm attempting to configure a Horizontal Pod Autoscaler to scale a deployment based on the duty cycle of attached GPUs.</p>
<p>I'm using GKE, and my Kubernetes master version is 1.10.7-gke.6 .</p>
<p>I'm working off the tutorial at <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/external-metrics-autoscaling" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/external-metrics-autoscaling</a> . In particular, I ran the following command to set up custom metrics:</p>
<pre><code>kubectl create -f https://raw.githubusercontent.com/GoogleCloudPlatform/k8s-stackdriver/master/custom-metrics-stackdriver-adapter/deploy/production/adapter.yaml
</code></pre>
<p>This appears to have worked, or at least I can access a list of metrics at /apis/custom.metrics.k8s.io/v1beta1 .</p>
<p>This is my YAML:</p>
<pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: images-srv-hpa
spec:
minReplicas: 1
maxReplicas: 10
metrics:
- type: External
external:
metricName: container.googleapis.com|container|accelerator|duty_cycle
targetAverageValue: 50
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: images-srv-deployment
</code></pre>
<p>I believe that the metricName exists because it's listed in /apis/custom.metrics.k8s.io/v1beta1 , and because it's described on <a href="https://cloud.google.com/monitoring/api/metrics_gcp" rel="nofollow noreferrer">https://cloud.google.com/monitoring/api/metrics_gcp</a> .</p>
<p>This is the error I get when describing the HPA:</p>
<pre><code> Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedGetExternalMetric 18s (x3 over 1m) horizontal-pod-autoscaler unable to get external metric prod/container.googleapis.com|container|accelerator|duty_cycle/nil: no metrics returned from external metrics API
Warning FailedComputeMetricsReplicas 18s (x3 over 1m) horizontal-pod-autoscaler failed to get container.googleapis.com|container|accelerator|duty_cycle external metric: unable to get external metric prod/container.googleapis.com|container|accelerator|duty_cycle/nil: no metrics returned from external metrics API
</code></pre>
<p>I don't really know how to go about debugging this. Does anyone know what might be wrong, or what I could do next?</p>
| <p>This problem went away on its own once I placed the system under load. It's working fine now with the same configuration.</p>
<p>I'm not sure why. My best guess is that StackMetrics wasn't reporting a duty cycle value until it went above 1%.</p>
|
<p>I am running a deployment which contains three containers: the app, nginx and a cloud sql instance. I have a lot of print statements in my python based app.</p>
<p>Every time a user interacts with the app, outputs are printed. I want to know if these logs are saved by default at any location.</p>
<p>I am worried that these logs might consume the space on the nodes in the cluster running it. Does this happen? Or do Kubernetes deployments not save any logs by default?</p>
| <p>The applications run in containers, usually under Docker, and the stdout/stderr logs are saved for the lifetime of the container in the graph directory (usually <code>/var/lib/docker</code>).</p>
<p>You can look at the logs with either:</p>
<pre><code>$ kubectl logs <pod-name> -c <container-in-pod>
</code></pre>
<p>Or:</p>
<pre><code>$ ssh <node>
$ docker logs <container>
</code></pre>
<p>If you'd like to know more where they are stored you can go into the <code>/var/lib/docker</code> directory and see the logs stored in JSON format:</p>
<pre><code>$ cd /var/lib/docker/containers
$ find . | grep json.log
./3454a0681100986248fd81856fadfe7cd95a1a6467eba32adb33da74c2c5443d/3454a0681100986248fd81856fadfe7cd95a1a6467eba32adb33da74c2c5443d-json.log
./80a87a9529a55f8d3fb9b814f0158dc91686704222e252b256455bcde48f56a5/80a87a9529a55f8d3fb9b814f0158dc91686704222e252b256455bcde48f56a5-json.log
...
</code></pre>
<p>If you'd like to do garbage collection on 'Exited' containers you can read more about it <a href="https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/" rel="nofollow noreferrer">here</a>. </p>
<p>Another way is to set up a cron job that runs periodically on your nodes that does this:</p>
<pre><code>$ docker system prune -a --force
</code></pre>
|
<p>I'm having trouble using nodeAntiAffinity... in my use case I need to prevent the instances of a StatefulSet from running on the same node, and that's it. I don't have labels on my nodes, which the docs seem to list as a requirement. Is it possible to rely purely on unique values of the built-in label "kubernetes.io/hostname"?</p>
<p>What I am trying to do in my StatefulSet:</p>
<pre><code>spec:
podManagementPolicy: OrderedReady
affinity:
nodeAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
topologyKey: "kubernetes.io/hostname"
</code></pre>
<p>What the examples in the doc say I have to do:</p>
<pre><code>spec:
podManagementPolicy: OrderedReady
affinity:
nodeAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: <some key>
operator: In
values:
- <some value>
topologyKey: "kubernetes.io/hostname"
</code></pre>
| <p>To prevent the instances of a StatefulSet from running on the same node, you need a podAntiAffinity (there is no nodeAntiAffinity). Here is an excerpt from the ZooKeeper tutorial in the <a href="https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/" rel="nofollow noreferrer">Kubernetes documentation</a>:</p>
<pre><code> affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "app"
operator: In
values:
- zk
topologyKey: "kubernetes.io/hostname"
</code></pre>
|
<p>I currently have kubectl v1.10.6, which I need to access my cluster; however, I'm also trying to connect to a different cluster that's running on v1.5.</p>
<p>What's the best practice for having multiple versions of a package on my computer? I could downgrade my package to v1.5, but that would require me to upgrade my kubectl back to v1.10 every time I need to access my other cluster. I'm currently running Ubuntu 16.04 (if that helps).</p>
| <p>They're statically linked, and have no dependencies, so there's no need to use a dependency manager for them:</p>
<pre><code>$ curl -sSfo /usr/local/bin/kubectl-1.9 \
https://storage.googleapis.com/kubernetes-release/release/v1.9.11/bin/linux/amd64/kubectl
$ chmod 755 /usr/local/bin/kubectl-1.9
</code></pre>
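<p>You can then point each binary at the matching cluster, for example (the context name is a placeholder):</p>
<pre><code>$ kubectl-1.9 --context my-old-cluster get nodes
$ kubectl version --client    # the default 1.10 binary stays untouched
</code></pre>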
|
<p>I'm trying to understand how/why the scheduler behaves under certain circumstances. Can someone explain what the scheduler would do (and why) in these scenarios?</p>
<p>Assume I have a 10GB memory box</p>
<p>I have a container with memory request set to 1G. I run 10 replicas of it, I expect to see all 10 on the same box (ignore for this case, any kube-system style pods)</p>
<p>Now assume I also add memory limit set to 2G. What happens? To me, this says to the scheduler "this pod is asking for 1G but can grow to 2G" -- would the scheduler still put all 10 on the same box, knowing that it might very well have to kick half of them off? Or will it allocate 2G as that's the limit described?</p>
<p>Would I also be correct in assuming that if I don't declare a limit, that the pod will grow until the node runs out of memory then kills pods that have exceeded their request resource? Or would it assume some kind of default?</p>
| <p>Requests are what must be available on the node, exclusively for that pod, in order for it to schedule. This is what is taken off the available resource count. Limits are, well, limits. The pod's usage will get limited to that value.</p>
<p>So, if you have a 10G node and want to fit <code>req: 1G, limit: 2G</code> pods on it, you will be able to fit 10 of them, and they will be able to burst to 2G memory usage if there is enough unused memory from the others (i.e. if each pod requests 1G but really uses 700M, that gives you roughly 3G of requested-but-unused space, which is available for pods bursting towards their 2G limit).</p>
|
<p>I'm using GKE and I'm facing a strange problem with k8s jobs.
When I create a new job I get the status created, but no pods run for this job, so the job pod status stays at 0 running, 0 success, 0 fail.</p>
<p>Note: It was working before and suddenly stopped working.</p>
| <p>Very little to go by from your description, but use <code>kubectl describe jobs/[name]</code> and <code>kubectl logs</code> to investigate further. </p>
|
<p>Currently, I have deployed a Hadoop cluster in Kubernetes. There are three datanodes (statefulset) and a namenode for HDFS.
I want to access data in HDFS externally. Thus, I created a service with nodePort type to export the namenode. When I tried to download the file inside HDFS, the namenode redirected me to the datanode. The problem is, the domain of redirect url was the domain in Kubernetes like <code>hadoop-hdfs-dn-0.hadoop-hdfs-dn.hadoop.svc.cluster.local:50075</code>, which was not able to be accessed externally.</p>
<p>The first thought of mine was to resolve the domain by client itself. Like</p>
<pre><code>hadoop-hdfs-dn-0.hadoop-hdfs-dn.hadoop.svc.cluster.local:50075 => IP0:50075
hadoop-hdfs-dn-1.hadoop-hdfs-dn.hadoop.svc.cluster.local:50075 => IP1:50075
hadoop-hdfs-dn-2.hadoop-hdfs-dn.hadoop.svc.cluster.local:50075 => IP2:50075
</code></pre>
<p>However, the nodePort applies to all nodes in Kubernetes cluster, so all three IPs above will go to the same service and may go to wrong datanode.</p>
<p>Is there any solution for this situation?
Either from the aspect of Hadoop or Kubernetes.
Like forcing the namenode to redirect like this?</p>
<pre><code>hadoop-hdfs-dn-0.hadoop-hdfs-dn.hadoop.svc.cluster.local:50075 => <node IP>:50001
hadoop-hdfs-dn-1.hadoop-hdfs-dn.hadoop.svc.cluster.local:50075 => <node IP>:50002
hadoop-hdfs-dn-2.hadoop-hdfs-dn.hadoop.svc.cluster.local:50075 => <node IP>:50003
</code></pre>
<p>So that I can create three services for each pod in the statefulset.</p>
| <p>I would suggest you try externalIP out.</p>
<p>Suppose your datanode is listening on port 50000. You can create a separate service for every datanode and use the IP of the node it is running on as the externalIP, something like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: datanode-1
spec:
externalIPs:
- node1-ip
ports:
- name: datanode
port: 50000
selector:
app: datanode
id: "1"
---
apiVersion: v1
kind: Service
metadata:
name: datanode-2
spec:
externalIPs:
- node2-ip
ports:
- name: datanode
port: 50000
selector:
app: datanode
id: "2"
---
apiVersion: v1
kind: Service
metadata:
name: datanode-3
spec:
externalIPs:
- node3-ip
ports:
- name: datanode
port: 50000
selector:
app: datanode
id: "3"
</code></pre>
<p>Then you can resolve those pod domain names to the IP of the node each one is running on.</p>
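<p>On the external client, that resolution could be as simple as static host entries (a sketch: the node IPs below are placeholders for the real IPs of the nodes each datanode is pinned to):</p>
<pre><code># Map the cluster-internal datanode hostnames to the node IPs on the client machine
cat <<'EOF' | sudo tee -a /etc/hosts
10.0.0.1 hadoop-hdfs-dn-0.hadoop-hdfs-dn.hadoop.svc.cluster.local
10.0.0.2 hadoop-hdfs-dn-1.hadoop-hdfs-dn.hadoop.svc.cluster.local
10.0.0.3 hadoop-hdfs-dn-2.hadoop-hdfs-dn.hadoop.svc.cluster.local
EOF
</code></pre>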
|
<p>I opened a Kubernetes NodePort on a machine and blocked all traffic to this port with the following rule:</p>
<pre><code>sudo ufw deny 30001
</code></pre>
<p>But I can still access that port via browser. Is it common? I can't find any information on that in the docs.</p>
| <p>Finally found the issue: <code>kube-proxy</code> is writing <code>iptables</code> rules (<a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#is-kube-proxy-writing-iptables-rules" rel="noreferrer">https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#is-kube-proxy-writing-iptables-rules</a>) which are caught before the <code>ufw</code> rules added manually. This can be confirmed by checking the order in the output of <code>iptables -S -v</code>.</p>
|
<p>Hi Everyone,
I want to restrict my developers to be able to see only the required resources on the Kubernetes dashboard (for example, only their namespace, not all the namespaces). Is it possible to do that? If yes, can someone point me to the right documents? Many thanks</p>
<p>I am using the below RBAC for the <code>kube-system</code> namespace. However the user is able to see all the namespaces on the dashboard rather than seeing only the namespaces he has access to.</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: kube-system
name: dashboard-reader-role
rules:
- apiGroups: [""]
resources: ["service/proxy"]
verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: dashboard-reader-ad-group-rolebinding
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: dashboard-reader-role
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: "****************"
</code></pre>
| <p>Please see the k8s RBAC documentation.</p>
<p>Example:
create a developer role in the development namespace:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
namespace: development
name: developer
rules:
- apiGroups: ["", "extensions", "apps"]
resources: ["deployments", "replicasets", "pods"]
verbs: ["list", "get", "watch"]
# You can use ["*"] for all verbs
</code></pre>
<p>then bind it:</p>
<pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: developer-role-binding
namespace: development
subjects:
- kind: User
name: DevDan
apiGroup: ""
roleRef:
kind: Role
name: developer
apiGroup: ""
</code></pre>
<p>Also, there is a built-in view-only role that you can bind to a user:</p>
<p><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#default-roles-and-role-bindings" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/rbac/#default-roles-and-role-bindings</a></p>
<pre><code>C02W84XMHTD5:~ iahmad$ kubectl get clusterroles --all-namespaces | grep view
system:aggregate-to-view 17d
view 17d
</code></pre>
<p>But this is a cluster-wide view role; if you want users to see only the resources in a specific namespace, create a view role in that namespace and bind it there, as in the example above.</p>
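<p>Alternatively, the built-in <code>view</code> ClusterRole can be bound inside a single namespace straight from the command line (a sketch; the group name is a placeholder for your AD group):</p>
<pre><code># Grant read-only access limited to the development namespace
kubectl create rolebinding dev-view \
  --clusterrole=view \
  --group="<your-ad-group>" \
  --namespace=development
</code></pre>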
|
<p>I have a Prometheus pod running along with my Kube-State-Metrics (KSM) pod. The KSM collects all the metrics from all the pods across all the namespaces in the cluster. Prometheus simply scrapes the metrics from KSM - this way Prometheus doesn't need to scrape the individual pods.</p>
<p>When pods are deployed, their deployment has certain pod-related labels as shown below. They have two important labels: <strong>APP</strong> and <strong>TEAM</strong>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
APP: AppABC
TEAM: TeamABC
...
</code></pre>
<p> </p>
<p>Within Prometheus, my scrape configuration looks like this:</p>
<pre><code>scrape_configs:
- job_name: 'pod monitoring'
honor_labels: true
kubernetes_sd_configs:
- role: pod
relabel_configs:
- action: labelmap
regex: __meta_kubernetes_pod_label_(.+)
...
</code></pre>
<p> </p>
<p>Problem is, when Prometheus scrapes the information from kube-state-metrics, it overwrites the <code>APP</code> with <code>kube-state-metrics</code>. e.g. this metric below is actually for an app called <strong>"AppABC"</strong>, yet Prometheus overwrote the <code>app</code> label to <code>kube-state-metrics</code>.</p>
<pre><code>kube_pod_container_status_restarts_total{
app="kube-state-metrics",
container="appabccontainer",
job="pod monitoring",
namespace="test-namespace",
pod="appabc-766cbcb68d-29smr"
}
</code></pre>
<p>Is there anyway for me to scrape the metrics from kube-state-metrics BUT keep the <strong>APP</strong> and <strong>TEAM</strong> labels together without overwriting them?</p>
<p> </p>
<p><strong>EDIT - I figured it out</strong></p>
<p><strong>My Issue:</strong> My deployment and pods have certain labels defined (APP, TEAM). Kube-state-metrics gets these from K8 API. When Prometheus scrapes from kube-state-metrics, it doesn't have those labels.</p>
<p><strong>My Objective:</strong> Expose those labels into Prometheus.</p>
<p><strong>My Solution:</strong> Using PromQL you can do group by. So in my prometheus-rules.yaml, I changed this:</p>
<pre><code>expr: kube_pod_status_phase{phase="Failed"} > 0
</code></pre>
<p>to this:</p>
<pre><code>expr: kube_pod_status_phase{phase="Failed"} * on (pod,namespace) group_right kube_pod_labels > 0
</code></pre>
<p>So my new alert rule looks like this:</p>
<pre><code>- name: Pod_Failed
rules:
- alert: pod_failed
expr: kube_pod_status_phase{phase="Failed"} * on (pod,namespace) group_right kube_pod_labels > 0
labels:
appname: '{{ $labels.label_APP }}' # This is what I wanted to capture
teamname: '{{ $labels.label_TEAM }}' # This is what I wanted to capture
annotations:
summary: 'Pod: {{ $labels.pod }} is down'
description: 'Pod: {{ $labels.pod }} is down in {{ $labels.namespace }} namespace.'
</code></pre>
| <p><strong>Solution:</strong> Using PromQL you can do group by. So in my prometheus-rules.yaml, I changed this:</p>
<pre><code>expr: kube_pod_status_phase{phase="Failed"} > 0
</code></pre>
<p>to this:</p>
<pre><code>expr: kube_pod_status_phase{phase="Failed"} * on (pod,namespace) group_right kube_pod_labels > 0
</code></pre>
<p>So my new alert rule looks like this:</p>
<pre><code>- name: Pod_Failed
rules:
- alert: pod_failed
expr: kube_pod_status_phase{phase="Failed"} * on (pod,namespace) group_right kube_pod_labels > 0
labels:
appname: '{{ $labels.label_APP }}' # This is what I wanted to capture
teamname: '{{ $labels.label_TEAM }}' # This is what I wanted to capture
annotations:
summary: 'Pod: {{ $labels.pod }} is down'
description: 'Pod: {{ $labels.pod }} is down in {{ $labels.namespace }} namespace.'
</code></pre>
|
<p>I'm a novice regarding YAML format and kubernetes.</p>
<p>The following is a <em>dep_prom.yml</em> file.</p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
name: prometheus-deployment
name: prometheus
#namespace: prometheus
spec:
replicas: 1
template:
metadata:
labels:
app: prometheus
spec:
containers:
- image: prom/prometheus:master
name: prometheus
command:
- "/bin/prometheus"
args:
- "-config.file=/etc/prometheus/prometheus.yml"
- "-storage.local.path=/prometheus"
- "-storage.local.retention=24h"
ports:
- containerPort: 9090
protocol: TCP
volumeMounts:
- mountPath: "/prometheus"
name: data
- mountPath: "/etc/prometheus"
name: config-volume
resources:
requests:
cpu: 100m
memory: 100Mi
limits:
cpu: 500m
memory: 2500Mi
volumes:
- name: data
hostPath:
path: /data/prometheus
- name: config-volume
configMap:
name: prometheus
nodeSelector: westporch-kubeminion-1
kubernetes.io/hostname: 10.0.24.52
---
</code></pre>
<p>However... When I executed <strong>kubectl create -f dep_prom.yml</strong> </p>
<p><em>error: error converting YAML to JSON: yaml: line 47: mapping values are not allowed in this context</em></p>
<p>Line 47 is <em>nodeSelector: westporch-kubeminion-1</em></p>
<p>I think that YAML file format is normal. </p>
<p>What is causing this error?</p>
| <p>You indicate you think the YAML format is normal, but it is not. The error comes from the line at the end of the first document, starting with <code>kubernetes.io/hostname</code>, being indented relative to the line before it. Both are key-value pair lines (i.e. item pairs of a block-style mapping), so instead of</p>
<pre><code> nodeSelector: westporch-kubeminion-1
kubernetes.io/hostname: 10.0.24.52
</code></pre>
<p>you either need to align them:</p>
<pre><code> nodeSelector: westporch-kubeminion-1
kubernetes.io/hostname: 10.0.24.52
</code></pre>
<p>or, more likely, <code>nodeSelector</code> needs a mapping as its argument instead of the scalar <code>westporch-kubeminion-1</code> it has now:</p>
<pre><code> nodeSelector:
kubernetes.io/hostname: 10.0.24.52
</code></pre>
<hr>
<p>This error might mask a second one in the file, depending on how lenient kubernetes is. The <code>---</code> is the end-of-directives marker, and since directives are optional it can occur at the start of a YAML document. The one at the bottom of your example therefore indicates the start of a new (empty) document. Once you address the first error, you might get a warning about that. (The end-of-document marker consists of three dots, <code>...</code>, at the beginning of a line followed by whitespace.)</p>
<p>Of course any changes you make should confirm to what kubernetes is expecting, but the above stream is clearly non-valid as YAML in itself.</p>
|
<p>I want to control/intercept the load balancer traffic using Istio. Istio gives you the ability to add a mixer on a service level but I want to add some code on a higher level just before the request traffic rules get executed.
Thus instead of adding actions per service I want to have some actions executed just after the request was received from the load balancer.</p>
| <p>As per the official Istio <a href="https://istio.io/docs/tasks/traffic-management/ingress/" rel="nofollow noreferrer">documentation</a>, <code>istio-ingressgateway</code> is the main entry point for exposing services in the mesh outside the cluster. An Istio Gateway describes the incoming/outgoing HTTP/TCP connections at that edge and specifies the set of ports that should be exposed, the type of protocol to use, etc. The Gateway configuration is applied to the corresponding Envoy proxy Pods through label selectors.</p>
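<p>For illustration, a minimal Gateway could look roughly like this (the name, port and hosts here are placeholders, not taken from your setup):</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: my-gateway            # illustrative name
spec:
  selector:
    istio: ingressgateway     # selects the default istio-ingressgateway pods by label
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"                     # accept traffic for any host on port 80
</code></pre>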
<p>Keep in mind that Istio Gateway operates within L4-L6 layers of load balancing and it's not aware of network source provider.</p>
<p>You can find more information about the Istio load balancing architecture <a href="https://istio.io/docs/concepts/traffic-management/#discovery-and-load-balancing" rel="nofollow noreferrer">here</a>.</p>
|
<p>I have dockerized and created deployment and service for both front(REACT) and backend (EXPRESS NODE JS) project in kubernetes. I have successfully deployed in Kubernetes(Single node cluster) in Same Node with Two Pods(i.e <strong>One Pod --> REACT APP and SECOND POD --> EXPRESS NODE JS</strong>) in Google cloud Platform. </p>
<p><strong><em>Question:</em></strong></p>
<p>1.) How to communicate from one pod to another pod inside the Node in Kubernetes cluster?</p>
<p>2.) I have exposed my REACT app to the Outer world by creating <strong>LoadBalancer</strong> Type Service in <strong>kubernetes</strong> and i am able to access the React App Endpoint from the Browser. Now, Is it possible to access EXPRESS app from REACT app inside the node without exposing my EXPRESS app to outer world. How to achieve this? </p>
<p>Thanks in Advance.</p>
| <p>When the frontend is a browser-based JavaScript app, the JavaScript resources may be hosted from a Pod in the cluster but the logic doesn't run there. The frontend JavaScript runs in the user's browser. Calling any backend endpoints in the cluster from the user's browser therefore requires an external URL somewhere along the chain, not just an internal one.</p>
<p>A typical way to do this is to set up a Service of type LoadBalancer and put the external endpoint into the backend's config. Another is to set up an Ingress Controller and deploy both Service and Ingress along with the backend. With Ingress you can know what the external URL will be before you deploy the Service (and this is easiest if you use DNS). Cluster-internal communication doesn't need Ingress and can be done with Services of type ClusterIP but I think you need external communication. </p>
<p>You will need to expose an external entry point for users to hit the UI anyway (the place where the JS is hosted). With ingress you could configure the route to the backend as a different path on the same (external) host. </p>
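<p>As a rough sketch (the host and service names are assumptions, not from your setup), a single Ingress serving the React app on <code>/</code> and the Express API on <code>/api</code> could look like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress                 # illustrative name
spec:
  rules:
  - host: app.example.com           # assumption: your external DNS name
    http:
      paths:
      - path: /api
        backend:
          serviceName: express-svc  # ClusterIP Service in front of the Express pods
          servicePort: 3000
      - path: /
        backend:
          serviceName: react-svc    # ClusterIP Service serving the React build
          servicePort: 80
</code></pre>
<p>With this, the browser only ever talks to the external host, and the Express app never needs its own public endpoint.</p>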
|
<p>I have a workload deployed in kubernetes. I have exposed it using a load balancer service because I need an external IP to communicate with the workload.
The external IP is now publicly accessible. How do I secure it so that only I will be able to access it from an external application?</p>
| <p>Kubernetes doesn't come with out-of-the-box authentication for external services. If you have more services and security is important to you, I would take a look at the Istio project. You can configure authentication for your services in a declarative way using an authentication policy:
<a href="https://istio.io/docs/tasks/security/authn-policy/#end-user-authentication" rel="nofollow noreferrer">https://istio.io/docs/tasks/security/authn-policy/#end-user-authentication</a>
Using Istio you can secure not only incoming connections, but also outgoing and internal traffic.</p>
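<p>For example, at the time of Istio 1.0 an end-user authentication policy looked roughly like this (the service name, issuer and jwksUri are placeholders, not values from your cluster):</p>
<pre><code>apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: require-jwt               # illustrative name
  namespace: default
spec:
  targets:
  - name: my-service              # the service behind your load balancer
  origins:
  - jwt:
      issuer: "https://your-issuer.example.com"
      jwksUri: "https://your-issuer.example.com/.well-known/jwks.json"
  principalBinding: USE_ORIGIN    # requests without a valid token are rejected
</code></pre>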
<p>If you are new to service mesh concept and you don't know how to start, you can check <a href="https://github.com/kyma-project/kyma" rel="nofollow noreferrer">kyma-project</a> where istio is already configured and you can apply token validation with one click in UI or single kubectl command. Check the example:
<a href="https://github.com/kyma-project/examples/tree/master/gateway" rel="nofollow noreferrer">https://github.com/kyma-project/examples/tree/master/gateway</a></p>
|
<p>I'm using the instructions from <a href="https://zero-to-jupyterhub.readthedocs.io/en/stable/" rel="nofollow noreferrer">Zero to jupyterhub with kubernetes</a> to install a jupyterhub in a minikube:</p>
<p>When I run the command in step 2 shown below:</p>
<pre><code>RELEASE=jhub
NAMESPACE=jhub
~/minik$ helm upgrade --install $RELEASE jupyterhub/jupyterhub --namespace $NAMESPACE --version 0.7.0 --values config.yaml --debug --dry-run
</code></pre>
<p>I get this error:</p>
<pre><code>[debug] Created tunnel using local port: '42995'
[debug] SERVER: "127.0.0.1:42995"
[debug] Fetched jupyterhub/jupyterhub to
/home1/chrisj/.helm/cache/archive/jupyterhub-0.7.0.tgz
Release "jhub" does not exist. Installing it now.
[debug] CHART PATH: /home1/chrisj/.helm/cache/archive/jupyterhub-0.7.0.tgz
Error: render error in "jupyterhub/templates/proxy/autohttps/service.yaml": template: jupyterhub/templates/proxy/autohttps/service.yaml:1:26: executing "jupyterhub/templates/proxy/autohttps/service.yaml" at <.Values.proxy.https....>: can't evaluate field https in type interface {}
</code></pre>
| <p>I have deployed Jupyterhub on minikube correctly using the provided tutorial, then deleted it using <code>helm delete</code> and tried to deploy it again with <code>helm upgrade --install</code>. I got a similar error to the one you posted. For me, running
<code>helm delete --purge jhub</code> solved the problem.</p>
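<p>In other words, something along these lines (release name, namespace, chart version and values file taken from your question):</p>
<pre><code># remove the half-installed release completely, then install it again
helm delete --purge jhub
helm upgrade --install jhub jupyterhub/jupyterhub \
  --namespace jhub --version 0.7.0 --values config.yaml
</code></pre>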
<p>PS: If this will not help you, please provide some more details, like <code>helm version</code>, <code>kubectl get pods --all-namespaces</code> </p>
|
<p>I am using jmdns to broadcast a service over mdns which is then running as a docker image inside a kubernetes pod. The pod yaml looks something like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mdns-broadcaster
spec:
hostNetwork: true
containers:
- name: mdns-broadcasting-pod
image: ...
</code></pre>
<p>The application will start up and broadcast some service type <code>_example._tcp</code>. However, running avahi-browse from the single node hosting this pod, I cannot see such a service being broadcast.</p>
<p>Any help would be appreciated, thanks</p>
| <p>In case anybody cares, I resolved this by moving from mDNS for the kubernetes implementation to avahi. This allows you to then share the dbus directory on the host file system with the pod in order to perform mDNS announcements.</p>
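<p>A minimal sketch of that D-Bus mount (the socket path is the usual default; the image and names are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: avahi-broadcaster        # illustrative name
spec:
  hostNetwork: true
  containers:
  - name: broadcasting-pod
    image: ...                   # your image, publishing via the host's avahi-daemon
    volumeMounts:
    - name: dbus
      mountPath: /var/run/dbus   # lets the pod talk to the host's avahi over D-Bus
  volumes:
  - name: dbus
    hostPath:
      path: /var/run/dbus
</code></pre>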
|
<p>I have a statefulSet that has a VolumeClaim.</p>
<p>The volume section of <strong>StatefulSet1</strong> is </p>
<pre><code> volumes:
- name: artifact
persistentVolumeClaim:
claimName: artifacts
</code></pre>
<p>The PVC definition is</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: artifacts
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "5Gi"
storageClassName: default
</code></pre>
<p>Now when I spin up StatefulSet1, everything is ok. The pod get the claim and is successfully mounted.</p>
<p>Now I want to bring up another Stateful set i.e. StatefulSet2 which needs to attach to the PV.</p>
<p>So my volume section of <strong>StatefulSet2</strong> is the same.</p>
<pre><code> volumes:
- name: artifact
persistentVolumeClaim:
claimName: artifacts
</code></pre>
<p>But when I spin up StatefulSet2, my original PVC goes into terminating state.</p>
<pre><code>kubectl get pvc artifacts
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
artifacts Terminating pvc-b55f729d-e115-11e8-953e-02000a1bef39 5Gi RWO rbd-mario 31m
</code></pre>
<p>And the new pod is continuously in Pending state.</p>
<p>Not sure what I'm doing wrong here, but my aim is to connect multiple StatefulSets/Pods to the same PV.</p>
| <blockquote>
<p>When the <strong>accessMode</strong> for the PVC is set to <strong>ReadWriteMany</strong>, Kubernetes
allows mounting that PVC on multiple pods.</p>
</blockquote>
<p><a href="https://docs.portworx.com/scheduler/kubernetes/shared-volumes.html" rel="nofollow noreferrer">https://docs.portworx.com/scheduler/kubernetes/shared-volumes.html</a></p>
<p><a href="https://docs.okd.io/latest/install_config/storage_examples/shared_storage.html" rel="nofollow noreferrer">https://docs.okd.io/latest/install_config/storage_examples/shared_storage.html</a></p>
<p>More likely this should work:</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: artifacts
spec:
accessModes:
- "ReadWriteMany"
resources:
requests:
storage: "5Gi"
storageClassName: default
</code></pre>
|
<p>I need to configure kube-proxy to serve only from the pods running on the current node, and avoid connections bouncing around to different nodes.</p>
| <p>Found a solution in docs:</p>
<p>As of Kubernetes 1.5, packets sent to Services with Type=NodePort are source NAT’d by default. You can test this by creating a NodePort Service:</p>
<pre><code>$ kubectl expose deployment source-ip-app --name=nodeport --port=80 --target-port=8080 --type=NodePort
service/nodeport exposed
$ NODEPORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services nodeport)
$ NODES=$(kubectl get nodes -o jsonpath='{ $.items[*].status.addresses[?(@.type=="ExternalIP")].address }')
</code></pre>
<p>If you’re running on a cloudprovider, you may need to open up a firewall-rule for the nodes:nodeport reported above. Now you can try reaching the Service from outside the cluster through the node port allocated above.</p>
<pre><code>$ for node in $NODES; do curl -s $node:$NODEPORT | grep -i client_address; done
client_address=10.180.1.1
client_address=10.240.0.5
client_address=10.240.0.3
</code></pre>
<p>Note that these are not the correct client IPs, they’re cluster internal IPs. This is what happens:</p>
<ul>
<li>Client sends packet to node2:nodePort</li>
<li>node2 replaces the source IP address (SNAT) in the packet with its own IP address</li>
<li>node2 replaces the destination IP on the packet with the pod IP</li>
<li>packet is routed to node 1, and then to the endpoint</li>
<li>the pod’s reply is routed back to node2</li>
<li>the pod’s reply is sent back to the client</li>
</ul>
<p>Visually:</p>
<pre><code> client
\ ^
\ \
v \
node 1 <--- node 2
| ^ SNAT
| | --->
v |
endpoint
</code></pre>
<p>To avoid this, Kubernetes has a feature to preserve the client source IP (check here for feature availability). Setting service.spec.externalTrafficPolicy to the value Local will only proxy requests to local endpoints, never forwarding traffic to other nodes, thereby preserving the original source IP address. If there are no local endpoints, packets sent to the node are dropped, so you can rely on the correct source IP in any packet processing rules you might apply to a packet that makes it through to the endpoint.
Set the <strong>service.spec.externalTrafficPolicy</strong> field as follows:</p>
<pre><code>$ kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}'
service/nodeport patched
</code></pre>
<p>Now, re-run the test:</p>
<pre><code>$ for node in $NODES; do curl --connect-timeout 1 -s $node:$NODEPORT | grep -i client_address; done
client_address=104.132.1.79
</code></pre>
<p>Note that you only got one reply, with the right client IP, from the one node on which the endpoint pod is running.
This is what happens:</p>
<ul>
<li>client sends packet to node2:nodePort, which doesn’t have any endpoints</li>
<li>packet is dropped</li>
<li>client sends packet to node1:nodePort, which does have endpoints</li>
<li>node1 routes packet to endpoint with the correct source IP</li>
</ul>
<p>Visually:</p>
<pre><code> client
^ / \
/ / \
/ v X
node 1 node 2
^ |
| |
| v
endpoint
</code></pre>
|
<p>I want to create a K8s cluster (1 master node and 2 slave nodes) with Vagrant on W10.</p>
<p>I have a problem when starting my master node. </p>
<p>I do a <code>sudo kubeadm init</code> to start my master node, but the command fails. </p>
<pre><code>"/etc/kubernetes/manifests/etcd.yaml" [init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests" [init] this might take a minute or longer if the control plane images have to be pulled
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime. To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker. Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
    - 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster
</code></pre>
<p>I check with <code>systemctl status kubelet</code> that kubelet is running: </p>
<pre><code>● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
  Drop-In: /etc/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Mon 2018-11-05 13:55:48 UTC; 36min ago
     Docs: https://kubernetes.io/docs/home/
 Main PID: 24683 (kubelet)
    Tasks: 18 (limit: 1135)
   CGroup: /system.slice/kubelet.service
           └─24683 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-dr
Nov 05 14:32:07 master-node kubelet[24683]: E1105 14:32:07.605330 24683 kubelet.go:2236] node "master-node" not found Nov 05 14:32:07 master-node kubelet[24683]: E1105 14:32:07.710945 24683 kubelet.go:2236] node "master-node" not found Nov 05 14:32:07 master-node kubelet[24683]: W1105 14:32:07.801125 24683 cni.go:188] Unable to update cni config: No networks found in /etc/cni/net.d Nov 05 14:32:07 master-node kubelet[24683]: E1105 14:32:07.804756 24683 kubelet.go:2167] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docke Nov 05 14:32:07 master-node kubelet[24683]: E1105 14:32:07.813349 24683 kubelet.go:2236] node "master-node" not found Nov 05 14:32:07 master-node kubelet[24683]: E1105 14:32:07.916319 24683 kubelet.go:2236] node "master-node" not found Nov 05 14:32:08 master-node kubelet[24683]: E1105 14:32:08.030146 24683 kubelet.go:2236] node "master-node" not found Nov 05 14:32:08 master-node kubelet[24683]: E1105 14:32:08.136622 24683 kubelet.go:2236] node "master-node" not found Nov 05 14:32:08 master-node kubelet[24683]: E1105 14:32:08.238376 24683 kubelet.go:2236] node "master-node" not found Nov 05 14:32:08 master-node kubelet[24683]: E1105 14:32:08.340852 24683 kubelet.go:2236] node "master-node" not found
</code></pre>
<p>and after I check logs with <code>journalctl -xeu kubelet</code>:</p>
<pre><code>Nov 05 14:32:39 master-node kubelet[24683]: E1105 14:32:39.328035 24683 kubelet.go:2236] node "master-node" not found Nov 05 14:32:39 master-node kubelet[24683]: E1105 14:32:39.632382 24683 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Get https://10.0.2.15:6 Nov 05 14:32:39 master-node kubelet[24683]: E1105 14:32:39.657289 24683 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list
*v1.Pod: Get https://10.0.2. Nov 05 14:32:39 master-node kubelet[24683]: E1105 14:32:39.752441 24683 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Get https://10.0.2.15:6443 Nov 05 14:32:39 master-node kubelet[24683]: I1105 14:32:39.804026 24683 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach Nov 05 14:32:39 master-node kubelet[24683]: I1105 14:32:39.835423 24683 kubelet_node_status.go:70] Attempting to register node master-node Nov 05 14:32:41 master-node kubelet[24683]: I1105 14:32:41.859955 24683 kubelet_node_status.go:276] Setting node annotation to enable volume controller attach/detach Nov 05 14:32:41 master-node kubelet[24683]: E1105 14:32:41.881897 24683 pod_workers.go:186] Error syncing pod e808f2bea99d167c3e91a819362a586b ("kube-apiserver-master-node_kube-system(e80
</code></pre>
<p>I don't understand the error. Should I start a CNI (like weave) before starting my master node? </p>
<p>You can find my vagrantfile here, maybe I forget something: </p>
<pre><code>Vagrant.configure("2") do |config|
  config.vm.box = "bento/ubuntu-18.04"
  config.vm.box_check_update = true
  config.vm.network "public_network"
  config.vm.hostname = "master-node"
  config.vm.provider :virtualbox do |vb|
vb.name = "master-node"
end
config.vm.provision "shell", inline: <<-SHELL
echo "UPDATE"
apt-get -y update
echo "INSTALL PREREQUIER"
apt-get install -y apt-transport-https ca-certificates curl software-properties-common
echo "START INSTALL DOCKER"
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
apt-get -y update
apt-get install -y docker-ce
systemctl start docker
systemctl enable docker
usermod -aG docker vagrant
curl -L "https://github.com/docker/compose/releases/download/1.22.0/docker-compose-$(uname
-s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
chown vagrant /var/run/docker.sock
docker-compose --version
docker --version
echo "END INSTALL DOCKER"
echo "START INSTALL KUBENETES"
curl -s "https://packages.cloud.google.com/apt/doc/apt-key.gpg" | apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" >> /etc/apt/sources.list.d/kubernetes.list
apt-get -y update
swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
apt-get install -y kubelet kubeadm kubectl
systemctl enable kubelet
systemctl start kubelet
echo "END INSTALL KUBENETES"
kubeadm config images pull #pre-download kubeadm config FOR MASTER ONLY
IPADDR=`hostname -I`
echo "This VM has IP address $IPADDR"
SHELL
end
</code></pre>
<p>If i do a docker ps -a after the error, i can see two kube-apiserver, one is up and another is exited.</p>
<pre><code>CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
befac2364452 51a9c329b7c5 "kube-apiserver --au…" 45 seconds ago Up 42 seconds k8s_kube-apiserver_kube-apiserver-master-node_kube-system_de7285496ca374bf069328c290f65db8_2
dab8889cada8 51a9c329b7c5 "kube-apiserver --au…" 3 minutes ago Exited (137) 46 seconds ago k8s_kube-apiserver_kube-apiserver-master-node_kube-system_de7285496ca374bf069328c290f65db8_1
87d74bdeb62b 3cab8e1b9802 "etcd --advertise-cl…" 5 minutes ago Up 5 minutes k8s_etcd_etcd-master-node_kube-system_2dba96180d17235a902e739497ef2f50_0
4d869d0be44f 15548c720a70 "kube-controller-man…" 5 minutes ago Up 5 minutes k8s_kube-controller-manager_kube-controller-manager-master-node_kube-system_7c81d10c743d19c292e161476cf2b945_0
1f72b9b636b4 d6d57c76136c "kube-scheduler --ad…" 5 minutes ago Up 5 minutes k8s_kube-scheduler_kube-scheduler-master-node_kube-system_ee7b1077c61516320f4273309e9b4690_0
6116a35a7ec7 k8s.gcr.io/pause:3.1 "/pause" 5 minutes ago Up 5 minutes k8s_POD_etcd-master-node_kube-system_2dba96180d17235a902e739497ef2f50_0
5de762296ece k8s.gcr.io/pause:3.1 "/pause" 5 minutes ago Up 5 minutes k8s_POD_kube-controller-manager-master-node_kube-system_7c81d10c743d19c292e161476cf2b945_0
156544886f28 k8s.gcr.io/pause:3.1 "/pause" 5 minutes ago Up 5 minutes k8s_POD_kube-scheduler-master-node_kube-system_ee7b1077c61516320f4273309e9b4690_0
1f6c396fc6e0 k8s.gcr.io/pause:3.1 "/pause" 5 minutes ago Up 5 minutes k8s_POD_kube-apiserver-master-node_kube-system_de7285496ca374bf069328c290f65db8_0
</code></pre>
<p><strong>EDIT:</strong>
If I check the logs of the k8s_kube-apiserver container that has exited, I see this:</p>
<pre><code>Flag --insecure-port has been deprecated, This flag will be removed in a future version.
I1107 10:35:23.236063 1 server.go:681] external host was not specified, using 192.168.1.49
I1107 10:35:23.237046 1 server.go:152] Version: v1.12.2
I1107 10:35:42.690715 1 plugins.go:158] Loaded 8 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I1107 10:35:42.691369 1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I1107 10:35:42.705302 1 plugins.go:158] Loaded 8 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I1107 10:35:42.709912 1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I1107 10:35:59.955297 1 master.go:240] Using reconciler: lease
W1107 10:36:31.566656 1 genericapiserver.go:325] Skipping API batch/v2alpha1 because it has no resources.
W1107 10:36:41.454087 1 genericapiserver.go:325] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W1107 10:36:41.655602 1 genericapiserver.go:325] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W1107 10:36:42.148577 1 genericapiserver.go:325] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W1107 10:36:59.451535 1 genericapiserver.go:325] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
[restful] 2018/11/07 10:37:00 log.go:33: [restful/swagger] listing is available at https://192.168.1.49:6443/swaggerapi
[restful] 2018/11/07 10:37:00 log.go:33: [restful/swagger] https://192.168.1.49:6443/swaggerui/ is mapped to folder /swagger-ui/
[restful] 2018/11/07 10:37:37 log.go:33: [restful/swagger] listing is available at https://192.168.1.49:6443/swaggerapi
[restful] 2018/11/07 10:37:37 log.go:33: [restful/swagger] https://192.168.1.49:6443/swaggerui/ is mapped to folder /swagger-ui/
I1107 10:37:38.920238 1 plugins.go:158] Loaded 8 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I1107 10:37:38.920985 1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
</code></pre>
<p>I also notice that the k8s_kube-apiserver containers start and exit in a loop.</p>
<p>Thanks a lot! </p>
| <p>Your <code>kubelet</code> is running but looks like it can't talk to the API server.</p>
<p>I would check on the VM:</p>
<pre><code>docker ps | grep apiserver
</code></pre>
<p>You should get something like this:</p>
<pre><code>$ docker ps | grep api
2f15a11f65f4 dcb029b5e3ad "kube-apiserver --au…" 2 weeks ago Up 2 weeks k8s_kube-apiserver_kube-apiserver-xxxx.internal_kube-system_acd8011fdf93688f6391aaca470a1fe8_2
8a1a5ce855aa k8s.gcr.io/pause:3.1 "/pause" 2 weeks ago Up 2 weeks k8s_POD_kube-apiserver-xxxx.internal_kube-system_acd8011fdf93688f6391aaca470a1fe8_2
</code></pre>
<p>Then look at the logs to see if you see any failures:</p>
<pre><code>$ docker logs 2f15a11f65f4
</code></pre>
<p>If you don't see the kube-apiserver containers you might want to try <code>docker ps -a</code> which would mean that at some point it crashed.</p>
<p>Hope it helps.</p>
|
<p>I am new to prometheus/alertmanager.</p>
<p>I have created a cron job which executes shell script every minute. This shell script generates "test.prom" file (with a gauge metric in it) in the same directory which is assigned to <code>--textfile.collector.directory</code> argument (to node-exporter). I verified (using curl <a href="http://localhost:9100/metrics" rel="nofollow noreferrer">http://localhost:9100/metrics</a>) that the node-exporter exposes that custom metric correctly.</p>
<p>When I tried to run a query against that custom metric in prometheus dashboard, it does not show up any results (it says no data found). </p>
<p>I could not figure out why the query against the metric exposed via node-exporter textfile collector fails. <strong>Any clues what I missed ? Also please let me know how to check and ensure that prometheus scraped my custom metric 'test_metric` ?</strong></p>
<p>My query in prometheus dashboard is <code>test_metric != 0</code> (in prometheus dashboard) which did not give any results. But I exposed <code>test_metric</code> via node-exporter textfile. </p>
<p>Any help is appreciated !!</p>
<p>BTW, the node-exporter is running as docker container in Kubernetes environment.</p>
| <p>I had a similar situation, but it was not a configuration problem.</p>
<p>Instead, my data included timestamps:</p>
<pre><code># HELP network_connectivity_rtt Round Trip Time to each node
# TYPE network_connectivity_rtt gauge
network_connectivity_rtt{host="home"} 53.87 1541426242
network_connectivity_rtt{host="hop_1"} 58.8 1541426242
network_connectivity_rtt{host="hop_2"} 21.93 1541426242
network_connectivity_rtt{host="hop_3"} 71.69 1541426242
</code></pre>
<p>PNE was picking them up without any problem <em>once I reloaded it</em>. As prometheus is running under systemd, I had to check the logs like this:</p>
<pre><code>journalctl --system -u prometheus.service --follow
</code></pre>
<p>There I read this line:</p>
<pre><code>msg="Error on ingesting samples that are too old or are too far into the future"
</code></pre>
<p>Once I removed the timestamps, values started appearing. This led me to read about timestamps in more detail, and I found out they have to be in <em>milliseconds</em>. So this format now is ok:</p>
<pre><code># HELP network_connectivity_rtt Round Trip Time to each node
# TYPE network_connectivity_rtt gauge
network_connectivity_rtt{host="home"} 50.47 1541429581376
network_connectivity_rtt{host="hop_1"} 3.38 1541429581376
network_connectivity_rtt{host="hop_2"} 11.2 1541429581376
network_connectivity_rtt{host="hop_3"} 20.72 1541429581376
</code></pre>
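<p>If you do want to keep the timestamps, the cron script can emit them in milliseconds; a rough sketch (the directory, metric name and value are placeholders):</p>
<pre><code>#!/bin/sh
# write the metric atomically with a millisecond timestamp
TS=$(( $(date +%s) * 1000 ))
DIR=/path/to/textfile/dir        # the textfile collector directory of node_exporter
cat &lt;&lt;EOF &gt; "$DIR/test.prom.$$"
# HELP test_metric Example gauge written by cron
# TYPE test_metric gauge
test_metric 42 $TS
EOF
mv "$DIR/test.prom.$$" "$DIR/test.prom"
</code></pre>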
<p>I hope it helps someone else.</p>
|
<p>I am working on migrating one of the application to kubernetes. I want to discard the result if the health check returns http(100-199).</p>
<p>Similar to one which we have in marathon </p>
<blockquote>
<p>IgnoreHttp1xx (Optional. Default: false): Ignore HTTP informational
status codes 100 to 199. If the HTTP health check returns one of
these, the result is discarded and the health status of the task
remains unchanged.</p>
</blockquote>
<p>How can i achieve this in kubernetes? Does it accept if i pass like this? </p>
<pre><code> livenessProbe:
httpGet:
path: /v1/health
port: 9102
scheme: HTTP
httpHeaders:
- name: ignoreHttp1xx
value: false
</code></pre>
<p>Unfortunately i have no way to test this in our environment. Does it ignore such requests? If not what is the alternative i can use for this.</p>
| <blockquote>
<p>Any code greater than or equal to 200 and less than 400 indicates
success. Any other code indicates failure. You can see the source code
for the server in server.go.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes</a></p>
<p>Maybe you can change your healthcheck to return something in the 200-399 success range when it would otherwise return 100-199, i.e. overwrite the status code on the application side.</p>
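<p>If you can't change the endpoint itself, another option (just a sketch, and it assumes <code>curl</code> exists in your image) is an <code>exec</code> probe that calls the endpoint and decides how to treat 1xx codes, e.g. counting them as success so they never cause a restart:</p>
<pre><code>livenessProbe:
  exec:
    command:
    - sh
    - -c
    - |
      # probe the same endpoint and treat 100-399 as healthy
      code=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:9102/v1/health)
      [ "$code" -ge 100 ] && [ "$code" -lt 400 ]
</code></pre>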
|
<p>I am trying to get the auditbeat, filebeat, and metricbeat logs together using Fluentd in the Kibana dashboard of my kubernetes cluster. I am able to get the auditbeat, filebeat and metricbeat logs separately under specific indexes like filebeat-*, auditbeat-* and metricbeat-* in my Kibana dashboard.</p>
<p>Could anybody advise me? Is there any way to get the above 3 types of logs within a single index? </p>
| <p>Yes, assuming you are talking about an EFK stack and not an ELK stack. In your Fluentd configs you can have something like this:</p>
<pre><code><match *.**>
type copy
<store>
type elasticsearch
host localhost
port 9200
include_tag_key true
tag_key @log_name
logstash_format true
flush_interval 10s
index_name fluentd.common.%Y%m%d
</store>
</match>
</code></pre>
<p>They will all go to the same index <code>fluentd.common.%Y%m%d</code>, as opposed to having <code>index_name fluentd.${tag}.%Y%m%d</code>.</p>
|
<p>I can't find a way to make an nginx pod resolve another kubernetes services URLs.</p>
<p><strong>I am NOT using kube-dns</strong> , we are using kube2sky solely and we are not going to implement kube-dns yet, so I need to fix in this scenario.</p>
<p>For example, I want nginx to resolve a service URL <code>app.mynamespace.svc.skydns.local</code> but if I run a ping to that URL it resolves successfully.</p>
<p>My nginx config part is:</p>
<pre><code>location /api/v1/namespaces/mynamespace/services/app/proxy/ {
resolver 127.0.0.1
set \$endpoint \"http://app.mynamespace.svc.skydns.local/\";
proxy_pass \$endpoint;
proxy_http_version 1.1;
proxy_set_header Connection \"upgrade\";
}
</code></pre>
<p>I need to specify the target upstream in a variable because I want nginx to starts even if the target is not available, if I don't specify in variable nginx crashes when starting up because the upstream needs to be available and resolvable.</p>
<p>The problem I think is the resolver value, I've tried with <code>127.0.0.1</code>, with <code>127.0.0.11</code>, and with the skydns IP specified in configuration <code>172.40.0.2:53</code>:</p>
<pre><code>etcdctl get /skydns/config
{"dns_addr":"0.0.0.0:53","ttl":4294967290,"nameservers":["172.40.0.2:53"]}
</code></pre>
<p>But nginx cannot resolve the URL yet.</p>
<p>What IP should I specify in the resolver field in nginx config for kubernetes and skydns config?</p>
<p>Remember that we don't have kube-dns.</p>
<p>Thank you.</p>
| <p>I don't think resolving <code>app.mynamespace.svc.skydns.local</code> has anything to do with configuring the upstream DNS servers. Generally, for that, you configure a well-known DNS server like <code>8.8.8.8</code> or your cloud infrastructure DNS server which would be perhaps <code>172.40.0.2</code>. For example as described in the <a href="https://github.com/skynetservices/skydns" rel="nofollow noreferrer">docs</a>:</p>
<pre><code>$ curl -XPUT http://127.0.0.1:4001/v2/keys/skydns/config \
-d value='{"dns_addr":"127.0.0.1:5354","ttl":3600, "nameservers": ["8.8.8.8:53","8.8.4.4:53"]}'
</code></pre>
<p>You might want to check the logs of your <code>kube2sky2</code> pod, for any guidance and that all the <a href="https://github.com/gravitational/kube2sky" rel="nofollow noreferrer">config options</a> are specified like <code>--kube-master-url</code>, <code>--etcd-server</code>. Maybe it can't talk to the Kubernetes master and receive updates of running pods so that the SRV entries get updates.</p>
|
<p>I'm trying to figure out how to organize K8s namespaces for the development cluster.</p>
<p>Now we have multiple development namespaces (per team).</p>
<p>There are tons of pods (about 100-200) in a single namespace.</p>
<p>1-5 pods per feature-branch deployment.</p>
<p>We use Helm to make deployments. But some of the teammates say that it's hard to manage it.</p>
<p>The new idea is making a namespace per feature-branch deployment.</p>
<p>Now, I see the main issue is in TLS (and others) secrets sync sharing across namespaces. But it can be resolved by making a CronJob.</p>
<p>Are there any advantages or disadvantages to this approach?</p>
| <p>It's definitely a good approach to use namespaces for restricting the deployments to feature teams.<br/>
But deploying 50+ pods per namespace becomes difficult to manage, especially if the pods contain 10+ containers each, so you end up managing around 50x10=500 containers per deployment team.<br/></p>
<blockquote>
<p>1-5 pods per feature-branch deployment.</p>
</blockquote>
<p>This is really a great way to go about using namespaces, but you will still have lots and lots of namespaces to keep track of, given that you initially said you have around 100-200 pods.</p>
<p><strong>Hope you are using rbac in k8s</strong></p>
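<p>For example, a per-namespace binding to the built-in <code>edit</code> ClusterRole could look roughly like this (the namespace and group names are placeholders):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-edit
  namespace: feature-login-page        # one namespace per feature-branch deployment
subjects:
- kind: Group
  name: team-a                         # assumption: your teams map to groups in your identity provider
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: edit                           # built-in ClusterRole, scoped here to this namespace only
  apiGroup: rbac.authorization.k8s.io
</code></pre>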
|
<p>I have Istio (including citadel) running in minikube using the instructions at <a href="https://istio.io/docs/setup/kubernetes/helm-install" rel="nofollow noreferrer">https://istio.io/docs/setup/kubernetes/helm-install</a> .</p>
<pre><code>$ helm template install/kubernetes/helm/istio --name istio --namespace istio-system > $HOME/istio.yaml
$ kubectl create namespace istio-system
$ kubectl apply -f $HOME/istio.yaml
</code></pre>
<p>When I try to get a shell into the citadel container, I am getting an error:</p>
<pre><code>$ kubectl exec -it istio-citadel-6d7f9c545b-bkvnx -- /bin/bash
OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory": unknown
command terminated with exit code 126
</code></pre>
<p>However, I can exec into other containers like pilot fine.</p>
<p>These are my pods and containers, if it helps.</p>
<pre><code>shell-demo: nginx,
istio-citadel-6d7f9c545b-bkvnx: docker.io/istio/citadel:1.0.3,
istio-cleanup-secrets-rp4wv: quay.io/coreos/hyperkube:v1.7.6_coreos.0,
istio-egressgateway-866885bb49-6jz9q: docker.io/istio/proxyv2:1.0.3,
istio-galley-6d74549bb9-7nhcl: docker.io/istio/galley:1.0.3,
istio-ingressgateway-6c6ffb7dc8-bvp6b: docker.io/istio/proxyv2:1.0.3,
istio-pilot-685fc95d96-fphc9: docker.io/istio/pilot:1.0.3, docker.io/istio/proxyv2:1.0.3,
istio-policy-688f99c9c4-bpl9w: docker.io/istio/mixer:1.0.3, docker.io/istio/proxyv2:1.0.3,
istio-security-post-install-s6dft: quay.io/coreos/hyperkube:v1.7.6_coreos.0,
istio-sidecar-injector-74855c54b9-6v5xg:docker.io/istio/sidecar_injector:1.0.3,
istio-telemetry-69b794ff59-f7dv4: docker.io/istio/mixer:1.0.3, docker.io/istio/proxyv2:1.0.3,
prometheus-f556886b8-lhdt8: docker.io/prom/prometheus:v2.3.1,
coredns-c4cffd6dc-6xblf: k8s.gcr.io/coredns:1.2.2,
etcd-minikube: k8s.gcr.io/etcd-amd64:3.1.12,
kube-addon-manager-minikube: k8s.gcr.io/kube-addon-manager:v8.6,
kube-apiserver-minikube: k8s.gcr.io/kube-apiserver-amd64:v1.10.0,
kube-controller-manager-minikube: k8s.gcr.io/kube-controller-manager-amd64:v1.10.0,
kube-dns-86f4d74b45-bjk54: k8s.gcr.io/k8s-dns-kube-dns-amd64:1.14.8, k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64:1.14.8, k8s.gcr.io/k8s-dns-sidecar-amd64:1.14.8,
kube-proxy-mqfb9: k8s.gcr.io/kube-proxy-amd64:v1.10.0,
kube-scheduler-minikube: k8s.gcr.io/kube-scheduler-amd64:v1.10.0,
kubernetes-dashboard-6f4cfc5d87-zwk2c: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0,
storage-provisioner: gcr.io/k8s-minikube/storage-provisioner:v1.8.1,
</code></pre>
<p>When I do minikube ssh and then try to exec into the citadel container, I am getting similar error:</p>
<pre><code>$ docker ps | grep citadel
f173453f843c istio/citadel "/usr/local/bin/isti…" 3 hours ago Up 3 hours k8s_citadel_istio-citadel-6d7f9c545b-bkvnx_istio-system_3d7b4f08-e120-11e8-bc40-ee7dbbb8f91b_0
7e96617d81ff k8s.gcr.io/pause-amd64:3.1 "/pause" 3 hours ago Up 3 hours k8s_POD_istio-citadel-6d7f9c545b-bkvnx_istio-system_3d7b4f08-e120-11e8-bc40-ee7dbbb8f91b_0
$ docker exec -it f173453f843c sh
OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: \"sh\": executable file not found in $PATH": unknown
$ docker exec -it f173453f843c /bin/sh
OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: \"/bin/sh\": stat /bin/sh: no such file or directory": unknown
$ docker exec -it f173453f843c ls
OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: \"ls\": executable file not found in $PATH": unknown
</code></pre>
<p>I can see the citadel containers logs fine. The logs are available at <a href="https://pastebin.com/xTy9vSz2" rel="nofollow noreferrer">https://pastebin.com/xTy9vSz2</a></p>
<p>Do you know why we can't exec into citadel container?</p>
<p>Thanks for reading.</p>
| <p>You can't shell in because neither <code>sh</code> nor <code>bash</code> are available in the container. A lot of times these are removed for the sake of efficiency and having a minimal container image. </p>
<p>If you'd like to shell into the container I recommend you build your own image in include <code>bash</code> or <code>sh</code> in it.</p>
<p>You can see here that the <a href="https://github.com/istio/istio/blob/master/security/docker/Dockerfile.citadel" rel="noreferrer">Dockerfile</a> builds an image that has nothing but the static binary. For that, you want to change the base image. For example:</p>
<pre><code>FROM alpine
</code></pre>
<p>instead of: </p>
<pre><code>FROM scratch
</code></pre>
<p>Hope it helps.</p>
|
<p>I have a VPN tunnel from gcloud to our local site.
The local site has 2 nameservers running on <code>172.16.248.32</code> and <code>172.16.248.32</code></p>
<p>These nameservers resolve our local domain names such as mycompany.local</p>
<p>How can I use these nameservers from gcloud, so the pods in my Kubernetes cluster do resolve mycompany.local as well?</p>
| <p>You'll have to configure your upstream DNS servers to be <code>172.16.248.32</code> and the other IP.</p>
<p>You can do it on a <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pod-s-dns-config" rel="nofollow noreferrer">per pod basis</a> like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
namespace: default
name: dns-example
spec:
containers:
- name: test
image: nginx
dnsPolicy: "None"
dnsConfig:
nameservers:
- 172.16.248.32
searches:
- ns1.svc.cluster.local
- mycompany.local
options:
- name: ndots
value: "2"
- name: edns0
</code></pre>
<p>So when the pods are created they include an <code>/etc/resolv.conf</code> like this:</p>
<pre><code>nameserver 172.16.248.32
search ns1.svc.cluster.local mycompany.local
options ndots:2 edns0
</code></pre>
<p>The other option will vary whether you are using coredns or kube-dns, and that is configuring stub-domains (these configs will also propagate to the <code>/etc/resolv.conf</code> file in your pods, all documented <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/" rel="nofollow noreferrer">here</a>:</p>
<p><strong>coredns</strong></p>
<pre><code># coredns in the coredns ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
data:
Corefile: |
.:53 {
errors
health
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
upstream 172.16.0.1
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
proxy . 172.16.0.1
cache 30
loop
reload
loadbalance
}
mycompany.local:53 {
errors
cache 30
proxy . 172.16.248.32
}
</code></pre>
<p><strong>kube-dns</strong></p>
<pre><code># kube-dns in the kube-dns ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-dns
namespace: kube-system
data:
stubDomains: |
{"mycompany.local": ["172.16.248.32"]}
upstreamNameservers: |
["8.8.8.8", "8.8.4.4"]
</code></pre>
|
<p>Is there a way you can run kubectl in a 'session' such that it gets its kubeconfig from a local directory rather then from <code>~/.kubeconfig</code>?</p>
<h2>Example Use Case</h2>
<p>Given the abstract nature of the question, it's worth describing why this may be valuable in an example. If someone had an application, call it 'a', and they had 4 kubernetes clusters, each running a, they may have a simple script which did some <code>kubectl</code> actions in each cluster to smoke test a new deployment of A, for example, they may want to deploy the app, and see how many copies of it were autoscaled in each cluster afterward. </p>
<h2>Example Solution</h2>
<p>As in git, maybe there could be a "try to use a local kubeconfig file if you can find one" as a git-style global setting:</p>
<pre><code>kubectl global set-precedence local-kubectl
</code></pre>
<p>Then, in one terminal:</p>
<pre><code>cd firstcluster
cat << EOF > kubeconfig
firstcluster
...
EOF
kubectl get pods
p4
</code></pre>
<p>Then, in another terminal:</p>
<pre><code>cd secondcluster/
cat << EOF > kubeconfig
secondcluster
...
EOF
kubectl get pods
p1
p2
p3
</code></pre>
<p>Thus, the exact same kubectl commands (without having to set context) actually run against new clusters depending on the directory you are in.</p>
<h2>Some ideas for solutions</h2>
<ul>
<li><p>One idea I had for this, was to write a kubectl-context plugin which somehow made kubectl always check for local kubeconfig, setting context behind the scenes if it could before running, to a context in a global config that matched the directory name.</p></li>
<li><p>Another idea I've had along these lines would be to create different users which each had different kubeconfig home files.</p></li>
<li><p>And of course, using something like virtualenv, you might be able to do something where kubeconfig files had their own different value.</p></li>
</ul>
<h1>Final thought</h1>
<p>Ultimately I think the goal here is to subvert the idea that a ~/.kubeconfig file has any particular meaning, and instead look at ways that many kubeconfig files can be used in the same machine, however, not just using the <code>--kubeconfig</code> option but rather, in such a way that state is still maintained in a directory local manner.</p>
| <p>AFAIK, the config file is under <code>~/.kube/config</code> and not <code>~/.kubeconfig</code>. I suppose you are looking for an opinion on your question, and you gave me the great idea of creating <a href="https://github.com/raravena80/kubevm" rel="nofollow noreferrer"><code>kubevm</code></a>, inspired by <a href="https://github.com/raravena80/awsvm" rel="nofollow noreferrer"><code>awsvm</code></a> for the <a href="https://aws.amazon.com/cli/" rel="nofollow noreferrer">AWS CLI</a>, <a href="https://github.com/trobrock/chefvm" rel="nofollow noreferrer"><code>chefvm</code></a> for managing multiple Chef servers and <a href="https://github.com/rvm/rvm" rel="nofollow noreferrer"><code>rvm</code></a> for managing multiple Ruby versions.</p>
<p>So, in essence, you could have a <code>kubevm</code> setup that switches between different <code>~/.kube</code> configs. You can use a CLI like this:</p>
<pre><code># Use a specific config
kubevm use {YOUR_KUBE_CONFIG|default}
# or
kubevm YOUR_KUBE_CONFIG
# Set your default config
kubevm default YOUR_KUBE_CONFIG
# List your configurations, including current and default
kubevm list
# Create a new config
kubevm create YOUR_KUBE_CONFIG
# Delete a config
kubevm delete YOUR_KUBE_CONFIG
# Copy a config
kubevm copy SRC_CONFIG DEST_CONFIG
# Rename a config
kubevm rename OLD_CONFIG NEW_CONFIG
# Open a config directory in $EDITOR
kubevm edit YOUR_KUBE_CONFIG
# Update kubevm to the latest
kubevm update
</code></pre>
<p>Let me know if it's useful!</p>
|
<p>Hadoop server is in Kubernetes. And the Hadoop client is located on an external network. So I try to use the Hadoop server using a kubernetes-service. But <code>hadoop fs -put</code> does not work for the Hadoop client. As I know, the namenode gives the datanode IP to Hadoop client. If yes, where does the namenode get IP from?</p>
| <p>You can check my <a href="https://stackoverflow.com/a/53137701/2989261">other answer</a>. HDFS is not production ready in K8s yet (as of this writing)</p>
<p>The namenode gives the client the IP addresses of the datanodes and it knows those when they join the cluster as shown below:</p>
<p><a href="https://i.stack.imgur.com/BjMtd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BjMtd.png" alt="datanodes"></a></p>
<p>The issue in K8s is that you have to expose each data node as a service or external IP, but the namenode sees the datanodes with their pod IP addresses that are not available to the outside world. Also, HDFS doesn't provide a <a href="https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsMultihoming.html" rel="nofollow noreferrer">publish IP for each datanode config</a> where you could force to use a service IP, so you'll have to do fancy custom networking or your client has to be inside the podCidr (Which kind of defeats the purpose of HDFS being a distributed filesystem).</p>
|
<p>I'm trying to deploy Gitlab-runner(s) on Openshift/K8S, but can't succeed in having runners communicating with Gitlab (itself deployed in pods on OCP).</p>
<p>I followed couple of different instructions like these ones:</p>
<ul>
<li><a href="https://www.youtube.com/watch?v=EwbhA53Jpp4" rel="nofollow noreferrer">https://www.youtube.com/watch?v=EwbhA53Jpp4</a></li>
<li><a href="https://docs.gitlab.com/runner/install/kubernetes.html" rel="nofollow noreferrer">https://docs.gitlab.com/runner/install/kubernetes.html</a></li>
<li><a href="https://github.com/oprudkyi/openshift-templates/blob/master/gitlab-runner/gitlab-runner.yaml" rel="nofollow noreferrer">https://github.com/oprudkyi/openshift-templates/blob/master/gitlab-runner/gitlab-runner.yaml</a></li>
</ul>
<p>My gitlab-runner pod is starting properly, but it always receives HTTP 404 Not Found error messages.</p>
<p>Here is my toml config file:</p>
<pre><code># cat /etc/gitlab-runner/config.toml
concurrent = 6
check_interval = 0
[[runners]]
name = "GitLab Runner"
url = "http://gitlab-ce.MY_COMAIN.com/ci"
token = "WHO_CARES?"
executor = "kubernetes"
[runners.kubernetes]
namespace = "MINE"
privileged = false
host = ""
cert_file = ""
key_file = ""
ca_file = ""
image = ""
cpus = ""
memory = ""
service_cpus = ""
service_memory = ""
helper_cpus = ""
helper_memory = ""
helper_image = ""
[runners.cache]
Type = "s3"
ServerAddress = "minio-service:80"
AccessKey = "GENERATED"
SecretKey = "GENERATED"
BucketName = "bkt-gitlab-runner"
Insecure = true
</code></pre>
<p>And as soon as the pod starts, I have this in my logs:</p>
<pre><code>Starting multi-runner from /etc/gitlab-runner/config.toml ... builds=0
Running in system-mode.
Configuration loaded builds=0
Metrics server disabled
WARNING: Checking for jobs... failed runner=WHO_CARES? status=404 Not Found
WARNING: Checking for jobs... failed runner=WHO_CARES? status=404 Not Found
WARNING: Checking for jobs... failed runner=WHO_CARES? status=404 Not Found
</code></pre>
<p>And in Gitlab, in the runners page (<a href="https://gitlab-ce.MY_COMAIN.com/group/project/settings/ci_cd" rel="nofollow noreferrer">https://gitlab-ce.MY_COMAIN.com/group/project/settings/ci_cd</a>) there is no "Runners activated for this project".</p>
<p>I can log in to my pod in its terminal and launch <code>gitlab-runner register</code> to register a new runner</p>
<pre><code>/ # gitlab-runner register
Running in system-mode.
Please enter the gitlab-ci coordinator URL (e.g. https://gitlab.com/):
https://gitlab-ce.MY_COMAIN.com
Please enter the gitlab-ci token for this runner:
WHO_CARES?
Please enter the gitlab-ci description for this runner:
[dc-gitlab-runner-service-1-ktw6v]: test
Please enter the gitlab-ci tags for this runner (comma separated):
test
Registering runner... succeeded runner=WHO_CARES?
Please enter the executor: docker+machine, kubernetes, ssh, docker-ssh, parallels, shell, virtualbox, docker-ssh+machine, docker:
kubernetes
Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!
</code></pre>
<p>But when I try to run it... I'm facing the exact same issue.</p>
<pre><code>/ # gitlab-runner run
Starting multi-runner from /etc/gitlab-runner/config.toml ... builds=0
Running in system-mode.
Configuration loaded builds=0
Metrics server disabled
WARNING: Checking for jobs... failed runner=WHO_CARES? status=404 Not Found
WARNING: Checking for jobs... failed runner=WHO_CARES? status=404 Not Found
</code></pre>
<p>Of course, I checked if I can access Gitlab from the Runner terminal, and it works</p>
<pre><code>/ # ping
BusyBox v1.27.2 (2018-06-06 09:08:44 UTC) multi-call binary.
Usage: ping [OPTIONS] HOST
/ # ping gitlab-ce.MY_COMAIN.com
PING gitlab-ce.MY_COMAIN.com (1.2.3.4): 56 data bytes
64 bytes from 1.2.3.4: seq=0 ttl=63 time=0.268 ms
64 bytes from 1.2.3.4: seq=1 ttl=63 time=0.261 ms
64 bytes from 1.2.3.4: seq=2 ttl=63 time=0.288 ms
^C
</code></pre>
<p>Side note: I'm running OCP 3.9 / K8S 1.9</p>
<p>Do you see anything I'm doing wrong?</p>
<p>Cheers,
Olivier</p>
| <p>OK. Here is the solution.</p>
<p>The problem comes from a confusing behavior around the <code>token</code>. There are several tokens in GitLab and one must carefully choose the right one to use.</p>
<p><strong>This problem is also related here: <a href="https://gitlab.com/gitlab-org/gitlab-ce/issues/37807" rel="nofollow noreferrer">https://gitlab.com/gitlab-org/gitlab-ce/issues/37807</a></strong></p>
<p>You must first use the Runner token available in the admin page: <a href="https://gitlab-instance/admin/runners" rel="nofollow noreferrer">https://gitlab-instance/admin/runners</a>. This token goes into the <code>token</code> field of the <code>[[runners]]</code> section in your <code>config.toml</code>.</p>
<p>Deploy and start the runner.</p>
<p>It appears in GitLab, but is unavailable. Simply click on its name in the admin area: <a href="https://gitlab-instance/admin/runners/38" rel="nofollow noreferrer">https://gitlab-instance/admin/runners/38</a></p>
<p>There, in the runner's details, you will find the token associated with this specific Runner.</p>
<p>Copy it back into the <code>config.toml</code> file, replacing the previous value of the same <code>token</code> field under <code>[[runners]]</code>.
Redeploy your runner.</p>
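<p>In other words, the relevant part of <code>config.toml</code> ends up looking like this (the URL and token value are placeholders):</p>
<pre><code>[[runners]]
  name = "GitLab Runner"
  url = "https://gitlab-instance/"
  token = "PER_RUNNER_TOKEN"   # the token from /admin/runners/<id>, not the registration token
  executor = "kubernetes"
</code></pre>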
<p>Should work.</p>
|
<p>I would like to execute an operation on Kubernetes like <code>kubectl apply -f stuff.yaml</code> from a java program. I don't want to invoke kubectl from my Java program, instead, I would like to use the Java <a href="https://github.com/kubernetes-client/java" rel="nofollow noreferrer">Kubernetes client</a>. After looking at the API classes in the project I wasn't able to figure what methods I could use to achieve functionality similar to <code>kubectly apply</code>.</p>
<p>Does anyone have any pointers on how to achieve it?</p>
| <p>There are no methods per se, or a silver bullet really; essentially you would be rewriting part of <code>kubectl</code> in Java. </p>
<p>You should be able to achieve it by decoding the YAML using something like <a href="https://stackoverflow.com/a/38274671/2989261">Jackson</a> or <a href="https://bitbucket.org/asomov/snakeyaml/src/default/" rel="nofollow noreferrer">SnakeYAML</a> and then using the different components of the <a href="https://github.com/kubernetes-client/java" rel="nofollow noreferrer">Kubernetes client</a> to create namespaces, pods, deployments, etc. </p>
<p>You can also do a <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/" rel="nofollow noreferrer">brute force</a> approach through the <code>kube-apiserver</code> on <code>https://kube-apiserver-address:6443/api/...</code> by sending an authenticated/authorized GET/POST/DELETE request with a JSON payload, which you can get from converting the YAML to JSON (perhaps with a bit of cleanup). You can use something like the <a href="http://hc.apache.org/httpclient-3.x/" rel="nofollow noreferrer">Apache HTTP client library</a> or <a href="https://jersey.github.io/" rel="nofollow noreferrer">Jersey</a>.</p>
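<p>As an illustration of that brute force approach (not a drop-in solution; the endpoint, token and resource path are placeholders), the API server also accepts YAML payloads directly, so from the command line the equivalent would be roughly:</p>
<pre><code># create the resource described in stuff.yaml via the REST API
# (here: a Pod in the default namespace; other kinds live under different paths)
TOKEN="..."   # a service account or user token with enough RBAC permissions
curl -k -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/yaml" \
  --data-binary @stuff.yaml \
  https://kube-apiserver-address:6443/api/v1/namespaces/default/pods
# note: -k skips TLS verification and is only for illustration
</code></pre>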
|
<p>It seems that TCP and UDP support is going to be deprecated in the next version of the ingress-nginx controller. Do any other ingress controllers support TCP and UDP?
Or are there any other solutions for exposing non-HTTP ports outside Kubernetes?</p>
<p><em>Kubernetes beginner here.</em></p>
| <p>The nginx-ingress (and the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> resource in K8s, for that matter) is a layer 7 facility and doesn't support layer 4; layer 4 might be supported at some point in the future. Note that <a href="https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/" rel="nofollow noreferrer">nginx itself supports layer 4 traffic</a>, just not through the K8s Ingress.</p>
<p>If you'd like to directly terminate TCP or UDP, you can use standard <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Kubernetes services</a>. The <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort</a> and <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">LoadBalancer</a> (depending on the cloud provider) types of services also support TCP/UDP.</p>
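<p>For example, a plain Service exposing a UDP port might look like this (the app, ports and nodePort are placeholders):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-udp-service        # illustrative name
spec:
  type: NodePort              # or LoadBalancer, depending on your environment
  selector:
    app: my-app
  ports:
  - name: syslog
    protocol: UDP
    port: 514
    targetPort: 514
    nodePort: 30514           # must be in the cluster's NodePort range
</code></pre>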
<p>Update:</p>
<p>There's a tutorial on how to support TCP/UDP with an nginx ingress (from the NGINX company) <a href="https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/tcp-udp" rel="nofollow noreferrer">here</a>.</p>
|
<p>Now I want to integrate Azure AD with AKS as <a href="https://learn.microsoft.com/en-us/azure/aks/aad-integration" rel="nofollow noreferrer">Integrate Azure Active Directory with Azure Kubernetes Service</a>. </p>
<p>It is necessary to set these attributes to the AKS cluster:</p>
<ul>
<li>aad-server-app-id</li>
<li>aad-server-app-secret</li>
<li>aad-client-app-id</li>
<li>aad-tenant-id</li>
</ul>
<p>It can be done like this:</p>
<pre><code>az aks create \
--resource-group myResourceGroup \
--name myAKSCluster \
--generate-ssh-keys \
--aad-server-app-id b1536b67-29ab-4b63-b60f-9444d0c15df1 \
--aad-server-app-secret wHYomLe2i1mHR2B3/d4sFrooHwADZccKwfoQwK2QHg= \
--aad-client-app-id 8aaf8bd5-1bdd-4822-99ad-02bfaa63eea7 \
--aad-tenant-id 72f988bf-0000-0000-0000-2d7cd011db47
</code></pre>
<p>From the <a href="https://learn.microsoft.com/en-us/cli/azure/aks?view=azure-cli-latest" rel="nofollow noreferrer">az aks</a> command list I didn't find an <code>edit</code> feature. So if I have created an AKS cluster, isn't there a way to set the <code>Azure AD</code> application IDs on the Kubernetes cluster?</p>
| <blockquote>
<p>Unfortunately enabling RBAC on existing clusters is not supported at
this time. You will need to explicitly create new clusters.</p>
</blockquote>
<p>There is something you would want to know when you start to work with AKS. Follow this <a href="https://learn.microsoft.com/en-us/azure/aks/troubleshooting#i-am-trying-to-enable-rbac-on-an-existing-cluster-can-you-tell-me-how-i-can-do-that" rel="nofollow noreferrer">link</a> to see more details.</p>
|
<p>I'd like to expose some web services to be accessed from an external client in Kubernetes, many people recommended to use ingress. I've deployed an ingress controller following the guide: <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md</a>. </p>
<p>I don't understand what's the next step to do, could anyone help explain the step with an example?</p>
| <p>You need to create an Ingress resource and a Service tied to that Ingress. For example, for the nginx ingress controller:</p>
<pre><code>cat <<EOF | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
name: foo-boo
namespace: default
spec:
rules:
- host: foo.domain
http:
paths:
- backend:
serviceName: http-svc
servicePort: 80
path: /mypath
EOF
</code></pre>
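<p>The <code>serviceName</code> in the Ingress must point at an existing Service in the same namespace. A minimal sketch of such a backing Service; the selector label is a placeholder that must match your own pod labels:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: http-svc
  namespace: default
spec:
  selector:
    app: http-svc        # placeholder -- must match your pod labels
  ports:
  - port: 80             # targetPort defaults to port; set it if the container listens elsewhere
</code></pre>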
<p>Then you will be able to see the Ingress:</p>
<pre><code>$ kubectl get ingress foo-boo
NAME HOSTS ADDRESS PORTS AGE
foo-boo foo.domain someloadbalancer.com 80 6d11h
</code></pre>
<p>Then you can test it with something like <code>curl</code>:</p>
<pre><code>$ curl -H 'Host: foo.domain' http://someloadbalancer.com/mypath
</code></pre>
<p>More about a Kubernetes Ingress <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">here</a>.</p>
|
| <p>I followed Istio's official documentation to set up Istio for the sample bookinfo app with minikube, but I'm getting an <strong>Unable to connect to the server: net/http: TLS handshake timeout</strong> error. These are the steps that I have followed (I have kubectl & minikube installed):</p>
<pre><code>minikube start
curl -L https://git.io/getLatestIstio | sh -
cd istio-1.0.3
export PATH=$PWD/bin:$PATH
kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
kubectl apply -f install/kubernetes/istio-demo-auth.yaml
kubectl get pods -n istio-system
</code></pre>
<p>This is the terminal output I'm getting</p>
<pre><code>$ kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
grafana-9cfc9d4c9-xg7bh 1/1 Running 0 4m
istio-citadel-6d7f9c545b-lwq8s 1/1 Running 0 3m
istio-cleanup-secrets-69hdj 0/1 Completed 0 4m
istio-egressgateway-75dbb8f95d-k6xj2 1/1 Running 0 4m
istio-galley-6d74549bb9-mdc97 0/1 ContainerCreating 0 4m
istio-grafana-post-install-xz9rk 0/1 Completed 0 4m
istio-ingressgateway-6bd4957bc-vhbct 1/1 Running 0 4m
istio-pilot-7f8c49bbd8-x6bmm 0/2 Pending 0 4m
istio-policy-6c65d8cff4-hx2c7 2/2 Running 0 4m
istio-security-post-install-gjfj2 0/1 Completed 0 4m
istio-sidecar-injector-74855c54b9-nnqgx 0/1 ContainerCreating 0 3m
istio-telemetry-65cdd46d6c-rqzfw 2/2 Running 0 4m
istio-tracing-ff94688bb-hgz4h 1/1 Running 0 3m
prometheus-f556886b8-chdxw 1/1 Running 0 4m
servicegraph-778f94d6f8-9xgw5 1/1 Running 0 3m
$ kubectl describe pod istio-galley-6d74549bb9-mdc97
Error from server (NotFound): pods "istio-galley-5bf4d6b8f7-8s2z9" not found
</code></pre>
<p>pod describe output</p>
<pre><code> $ kubectl -n istio-system describe pod istio-galley-6d74549bb9-mdc97
Name: istio-galley-6d74549bb9-mdc97
Namespace: istio-system
Node: minikube/172.17.0.4
Start Time: Sat, 03 Nov 2018 04:29:57 +0000
Labels: istio=galley
pod-template-hash=1690826493
Annotations: scheduler.alpha.kubernetes.io/critical-pod=
sidecar.istio.io/inject=false
Status: Pending
IP:
Controlled By: ReplicaSet/istio-galley-5bf4d6b8f7
Containers:
validator:
Container ID:
    Image:         gcr.io/istio-release/galley:1.0.0
    Image ID:
    Ports:         443/TCP, 9093/TCP
    Host Ports:    0/TCP, 0/TCP
Command: /usr/local/bin/galley
validator --deployment-namespace=istio-system
--caCertFile=/etc/istio/certs/root-cert.pem
--tlsCertFile=/etc/istio/certs/cert-chain.pem
--tlsKeyFile=/etc/istio/certs/key.pem
--healthCheckInterval=2s
--healthCheckFile=/health
--webhook-config-file
/etc/istio/config/validatingwebhookconfiguration.yaml
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Requests:
cpu: 10m
Liveness: exec [/usr/local/bin/galley probe --probe-path=/health --interval=4s] delay=4s timeout=1s period=4s #success=1 #failure=3
Readiness: exec [/usr/local/bin/galley probe --probe-path=/health --interval=4s] delay=4s timeout=1s period=4s #success=1 #failure=3
Environment: <none>
Mounts:
/etc/istio/certs from certs (ro)
/etc/istio/config from config (ro)
/var/run/secrets/kubernetes.io/serviceaccount from istio-galley-service-account-token-9pcmv(ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
certs:
Type: Secret (a volume populated by a Secret)
SecretName: istio.istio-galley-service-account
Optional: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: istio-galley-configuration
Optional: false
istio-galley-service-account-token-9pcmv:
Type: Secret (a volume populated by a Secret)
SecretName: istio-galley-service-account-token-9pcmv
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 1m default-scheduler Successfully assigned istio-galley-5bf4d6b8f7-8t8qz to minikube
Normal SuccessfulMountVolume 1m kubelet, minikube MountVolume.SetUp succeeded for volume "config"
Normal SuccessfulMountVolume 1m kubelet, minikube MountVolume.SetUp succeeded for volume "istio-galley-service-account-token-9pcmv"
Warning FailedMount 27s (x7 over 1m) kubelet, minikube MountVolume.SetUp failed for volume "certs" : secrets "istio.istio-galley-service-account" not found
</code></pre>
<p>after some time :-</p>
<pre><code> $ kubectl describe pod istio-galley-6d74549bb9-mdc97
Unable to connect to the server: net/http: TLS handshake timeout
</code></pre>
<p>So I waited for the istio-sidecar-injector and istio-galley containers to get created, but if I run <strong>kubectl get pods -n istio-system</strong> again, or any other <strong>kubectl</strong> command, it gives the <strong>Unable to connect to the server: net/http: TLS handshake timeout</strong> error. </p>
<p>Please help me with this issue.
PS: I'm running minikube on Ubuntu 16.04.</p>
<p>Thanks in advance.</p>
| <p>Problem resolved when I ran <code>minikube start --memory=4048</code>, so it appears to have been a memory issue: the default minikube memory allocation is not enough for the full Istio demo install.</p>
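<p>Note that, as a general minikube behaviour (not specific to Istio), <code>--memory</code> only takes effect when the VM is created, so an already-existing minikube VM may need to be recreated. The values below are illustrative, along the lines of what the Istio docs suggest for minikube:</p>
<pre><code>minikube delete
minikube start --memory=8192 --cpus=4
</code></pre>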
|
<p>Getting <code>error: the server doesn't have a resource type "svc"</code> when testing <code>kubectl</code> configuration whilst following this guide:</p>
<p><a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html" rel="noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html</a></p>
<h2>Detailed Error</h2>
<p><code>$ kubectl get svc -v=8</code></p>
<pre><code>I0712 15:30:24.902035 93745 loader.go:357] Config loaded from file /Users/matt.canty/.kube/config-test
I0712 15:30:24.902741 93745 round_trippers.go:383] GET https://REDACTED.yl4.us-east-1.eks.amazonaws.com/api
I0712 15:30:24.902762 93745 round_trippers.go:390] Request Headers:
I0712 15:30:24.902768 93745 round_trippers.go:393] User-Agent: kubectl/v1.10.3 (darwin/amd64) kubernetes/2bba012
I0712 15:30:24.902773 93745 round_trippers.go:393] Accept: application/json, */*
I0712 15:30:25.425614 93745 round_trippers.go:408] Response Status: 401 Unauthorized in 522 milliseconds
I0712 15:30:25.425651 93745 round_trippers.go:411] Response Headers:
I0712 15:30:25.425657 93745 round_trippers.go:414] Content-Type: application/json
I0712 15:30:25.425662 93745 round_trippers.go:414] Content-Length: 129
I0712 15:30:25.425670 93745 round_trippers.go:414] Date: Thu, 12 Jul 2018 14:30:25 GMT
I0712 15:30:25.426757 93745 request.go:874] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
I0712 15:30:25.428104 93745 cached_discovery.go:124] skipped caching discovery info due to Unauthorized
I0712 15:30:25.428239 93745 round_trippers.go:383] GET https://REDACTED.yl4.us-east-1.eks.amazonaws.com/api
I0712 15:30:25.428258 93745 round_trippers.go:390] Request Headers:
I0712 15:30:25.428268 93745 round_trippers.go:393] Accept: application/json, */*
I0712 15:30:25.428278 93745 round_trippers.go:393] User-Agent: kubectl/v1.10.3 (darwin/amd64) kubernetes/2bba012
I0712 15:30:25.577788 93745 round_trippers.go:408] Response Status: 401 Unauthorized in 149 milliseconds
I0712 15:30:25.577818 93745 round_trippers.go:411] Response Headers:
I0712 15:30:25.577838 93745 round_trippers.go:414] Content-Type: application/json
I0712 15:30:25.577854 93745 round_trippers.go:414] Content-Length: 129
I0712 15:30:25.577868 93745 round_trippers.go:414] Date: Thu, 12 Jul 2018 14:30:25 GMT
I0712 15:30:25.578876 93745 request.go:874] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
I0712 15:30:25.579492 93745 cached_discovery.go:124] skipped caching discovery info due to Unauthorized
I0712 15:30:25.579851 93745 round_trippers.go:383] GET https://REDACTED.yl4.us-east-1.eks.amazonaws.com/api
I0712 15:30:25.579864 93745 round_trippers.go:390] Request Headers:
I0712 15:30:25.579873 93745 round_trippers.go:393] Accept: application/json, */*
I0712 15:30:25.579879 93745 round_trippers.go:393] User-Agent: kubectl/v1.10.3 (darwin/amd64) kubernetes/2bba012
I0712 15:30:25.729513 93745 round_trippers.go:408] Response Status: 401 Unauthorized in 149 milliseconds
I0712 15:30:25.729541 93745 round_trippers.go:411] Response Headers:
I0712 15:30:25.729547 93745 round_trippers.go:414] Content-Type: application/json
I0712 15:30:25.729552 93745 round_trippers.go:414] Content-Length: 129
I0712 15:30:25.729557 93745 round_trippers.go:414] Date: Thu, 12 Jul 2018 14:30:25 GMT
I0712 15:30:25.730606 93745 request.go:874] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
I0712 15:30:25.731228 93745 cached_discovery.go:124] skipped caching discovery info due to Unauthorized
I0712 15:30:25.731254 93745 factory_object_mapping.go:93] Unable to retrieve API resources, falling back to hardcoded types: Unauthorized
F0712 15:30:25.731493 93745 helpers.go:119] error: the server doesn't have a resource type "svc"
</code></pre>
<h2>Screenshot of EKS Cluster in AWS</h2>
<p><a href="https://i.stack.imgur.com/ztyry.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ztyry.png" alt="enter image description here"></a></p>
<h2>Version</h2>
<p><code>kubectl version</code></p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-28T20:03:09Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
error: You must be logged in to the server (the server has asked for the client to provide credentials)
</code></pre>
<h2>Config</h2>
<h3>Kubctl Config</h3>
<p><code>$ kubectl config view</code></p>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: REDACTED
server: https://REDACTED.yl4.us-east-1.eks.amazonaws.com
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: aws
name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
args:
- token
- -i
- test
command: heptio-authenticator-aws
env:
- name: AWS_PROFILE
value: personal
</code></pre>
<h3>AWS Config</h3>
<p><code>cat .aws/config</code></p>
<pre><code>[profile personal]
source_profile = personal
</code></pre>
<h3>AWS Credentials</h3>
<p><code>$ cat .aws/credentials</code></p>
<pre><code>[personal]
aws_access_key_id = REDACTED
aws_secret_access_key = REDACTED
</code></pre>
<h3> ~/.kube/config-test</h3>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority-data: REDACETED
server: https://REDACTED.yl4.us-east-1.eks.amazonaws.com
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: aws
name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
args:
- token
- -i
- test
command: heptio-authenticator-aws
env:
- name: AWS_PROFILE
value: personal
</code></pre>
<h2>Similar issues</h2>
<ul>
<li><a href="https://stackoverflow.com/questions/51135795/error-the-server-doesnt-have-resource-type-svc">error-the-server-doesnt-have-resource-type-svc</a></li>
<li><a href="https://stackoverflow.com/questions/51121136/the-connection-to-the-server-localhost8080-was-refused-did-you-specify-the-ri">the-connection-to-the-server-localhost8080-was-refused-did-you-specify-the-ri</a></li>
</ul>
| <p>I just had a similar issue which I managed to resolve with AWS support. The issue was that the cluster had been created with a role that the user assumed, but kubectl was not assuming this role with the default kube config created by the AWS CLI.</p>
<p>I fixed the issue by providing the role in the users section of the kube config:</p>
<pre><code>users:
- name: aws
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
args:
- token
- -i
- test
- -r
- <arn::of::your::role>
command: aws-iam-authenticator
env:
- name: AWS_PROFILE
value: personal
</code></pre>
<p>I believe the heptio-authenticator-aws binary has since been renamed to aws-iam-authenticator, but this change to the kube config was what allowed me to use the cluster.</p>
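<p>To check the role assumption outside of kubectl, you can generate a token by hand with the same arguments that are in the kube config; the role ARN below is only a placeholder for whichever role was used to create the cluster:</p>
<pre><code>AWS_PROFILE=personal aws-iam-authenticator token -i test -r arn:aws:iam::111111111111:role/eks-admin-role
</code></pre>
<p>If that command fails, the problem is on the AWS/IAM side rather than in kubectl.</p>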
|
<p>I was trying to <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">create a single master cluster with kubeadm</a> in a CentOS VM.</p>
<p>I would like to schedule pods on the master node, so I run the following</p>
<pre><code>kubectl taint nodes --all node-role.kubernetes.io/master-
</code></pre>
<p>But then, when I try to run </p>
<pre><code>kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>
</code></pre>
<p>with the proper input of token, master-ip, master-port and hash. The pre-flight checks give the following errors:</p>
<pre><code>/etc/kubernetes/manifests is not empty
/etc/kubernetes/kubelet.config already exists
Port 10250 is in use
/etc/kubernetes/pki/ca.crt already exists
</code></pre>
<p>How can I fix the errors so that pods can still be scheduled on master node? Thanks</p>
| <p>You basically don't need <code>kubeadm join</code> on the master since it's already set up by <code>kubeadm init</code>. Also, the fact that you removed the taint on your master node to run pods should be enough for you to run pods on the master (use this just for test).</p>
<p>If you want a K8s node to join a cluster to run your pods you would use <code>kubeadm join</code>, in this case, you could taint your master to not run any pods. (You could remove the taint if you wanted to, but it's not recommended to run workloads on your master, especially in production)</p>
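<p>If you need additional worker nodes later, run <code>kubeadm join</code> on a fresh machine rather than on the master. And if you ever want the master to stop scheduling regular workloads again, you can re-apply the taint; the node name below is a placeholder:</p>
<pre><code>kubectl taint nodes <your-master-node-name> node-role.kubernetes.io/master=:NoSchedule
</code></pre>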
|
| <p>Given the config below, I am trying to deploy on Google Kubernetes Engine. But after deployment, I can't access the service on the ingress external IP. </p>
<p>I can access the service if I do:</p>
<pre><code>$ kubectl exec POD_NAME
# curl GET localhost:6078/todos
</code></pre>
<p>But I can't access it through ingress. GKE UI show errors like:</p>
<ul>
<li>Error during sync: error while evaluating the ingress spec: could not find service "default/todo"</li>
</ul>
<p>OR</p>
<ul>
<li>Some backend services are in UNHEALTHY state</li>
</ul>
<p>Even though the backend pod is up and running.</p>
<p>I believe there is something wrong with the service. </p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: todo
labels:
app: todo
spec:
replicas: 1
selector:
matchLabels:
app: todo
template:
metadata:
labels:
app: todo
spec:
containers:
- image: eu.gcr.io/xxxxx/todo
name: todo
ports:
- containerPort: 6078
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: todo
labels:
app: todo
spec:
type: NodePort
ports:
- port: 6078
selector:
app: todo
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: todo-ingress
spec:
rules:
- http:
paths:
- path: /*
backend:
serviceName: todo
servicePort: 6078
</code></pre>
| <p>Hard to tell without knowing what 'todo' does, but there are a few things:</p>
<ol>
<li><p>There's an indentation error in the Ingress definition. I'm not sure if it's a typo or if it just didn't get applied.</p>
<p>Instead, it should be something like:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: todo-ingress
spec:
rules:
- http:
paths:
- path: /*
backend:
serviceName: todo
servicePort: 6078
</code></pre></li>
<li><p>If you really want <code>/*</code> with no <code>host</code> then the default backend is overriding you, since it's the last rule in the <code>nginx.conf</code>, so you might as well configure:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: todo-ingress
spec:
backend:
serviceName: todo
servicePort: 6078
</code></pre></li>
<li><p>Make sure your application binds to <code>0.0.0.0</code> and not <code>127.0.0.1</code>. Listening on <code>127.0.0.1</code> will cause it to serve locally inside the pod but not to anything outside; see the quick check sketched after this list.</p></li>
</ol>
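<p>A quick way to check the listen address, assuming the container image ships <code>netstat</code> or <code>ss</code> (many minimal images don't, so treat this as a sketch):</p>
<pre><code>kubectl exec -it POD_NAME -- netstat -tlnp
# or, if netstat is not available in the image:
kubectl exec -it POD_NAME -- ss -lntp
</code></pre>
<p>You want to see the process listening on <code>0.0.0.0:6078</code> (or <code>:::6078</code>), not on <code>127.0.0.1:6078</code>.</p>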
|
| <p>I want to use a pod IP environment variable in my K8s deployment so that the pod IP can be passed as an argument to a container which listens on this IP. I tried to fetch the pod IP via "status.podIP" and reference it in the args section as follows:</p>
<pre><code>env:
- name: MY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
</code></pre>
<p>the container is a proxy application which is listening on the pod IP and its own port number. </p>
<pre><code>- args:
- --listen=MY_POD_IP:XXXX
</code></pre>
<p>But this setup sometimes returns a binding error as:</p>
<blockquote>
<p>bind: cannot assign requested address</p>
</blockquote>
<p>and sometimes the server error as:</p>
<blockquote>
<p>listen tcp: lookup MY_POD_IP: server misbehaving</p>
</blockquote>
<p>If I replace the MY_POD_IP with the actual pod IP, the setup works fine, but since this pod IP is generated dynamically in every deployment, I need to have a general solution to assign this IP to my argument. Any idea or workaround? </p>
<p>Thank you in advance.</p>
| <p>Try it this way:</p>
<pre><code>- args:
- --listen=$(MY_POD_IP):XXXX
</code></pre>
<p>Ref: <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#use-environment-variables-to-define-arguments" rel="nofollow noreferrer">Use environment variables to define arguments</a></p>
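<p>Putting both pieces together, a sketch of the relevant part of the container spec; the container name, image and port are placeholders, and the important detail is that the variable is declared under <code>env</code> on the same container and referenced as <code>$(MY_POD_IP)</code> rather than as a bare <code>MY_POD_IP</code>:</p>
<pre><code>containers:
- name: proxy
  image: your-proxy-image          # placeholder image
  env:
  - name: MY_POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
  args:
  - --listen=$(MY_POD_IP):8080     # the port number is illustrative
</code></pre>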
|
| <p>I am using Ambassador to manage my Kubernetes services. My Kubernetes services consist of a few web servers and a few Postgres instances. I followed the instructions <a href="https://www.getambassador.io/user-guide/getting-started/" rel="nofollow noreferrer">here</a> to establish routes to my web servers. Here is an example:</p>
<pre><code> annotations:
getambassador.io/config: |
---
apiVersion: ambassador/v1
kind: Mapping
name: somewebservice
prefix: /somewebservice
service: somewebservice:80
</code></pre>
<p>This works perfectly for my webserver. I can do <code>curl localhost/somewebservice</code> and I get the expected response.</p>
<p>I have set up the same annotation in my postgres container, but I cannot do a psql.</p>
<pre><code> annotations:
getambassador.io/config: |
---
apiVersion: ambassador/v1
kind: Mapping
name: atlas
prefix: /somepostgres
service: somepostgres:5432
</code></pre>
<p>I see the following:</p>
<pre><code>$ psql -h 'localhost/somepostgres' -p 5432
psql: could not translate host name "localhost/somepostgres" to address: nodename nor servname provided, or not known
</code></pre>
<p>My goal is to have Ambassador accept both HTTP/HTTPS & postgres requests.
Thanks for your time.</p>
| <p>Postgres is a TCP service (Layer 4) and not an HTTP(s) service (Layer 7). <a href="https://www.getambassador.io/reference/mappings#services" rel="nofollow noreferrer">It doesn't look like Ambassador supports TCP-only services</a>, even though the <a href="https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/tcp_proxy" rel="nofollow noreferrer">Envoy proxy supports it</a>. So you'll have to make do with a regular Kubernetes TCP service, something like this:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: postgres-svc
spec:
selector:
app: postgres
ports:
- protocol: TCP
port: 5432
</code></pre>
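<p>With a Service like that, other pods in the cluster can reach Postgres at <code>postgres-svc:5432</code>. From your workstation, one option (just a sketch; the user name is a placeholder) is to port-forward through kubectl instead of going through Ambassador:</p>
<pre><code>kubectl port-forward svc/postgres-svc 5432:5432
psql -h 127.0.0.1 -p 5432 -U postgres
</code></pre>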
|
| <p>In the book <a href="https://www.goodreads.com/book/show/26759355-kubernetes" rel="nofollow noreferrer">Kubernetes: Up & Running</a>, in the section "Creating Deployments", there is a YAML file that starts like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
run: nginx
</code></pre>
<p>What is the use of applying a label to a deployment? I understand how the pods and a service interact, but when do the labels in a deployment spring into action?</p>
| <p>Labels are useful for grouping inter-related objects. For example, say you have an application that requires a Deployment, a Service and a database (maybe deployed with a Deployment or StatefulSet). If you apply the same label to all of these resources (say <code>app: my-app</code>), then you can list, delete, etc. them based on this label.</p>
<p>For example, if you want to list all resources for your particular application, then you can use <code>kubectl get all -l app=my-app</code>.</p>
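<p>As a small sketch (the names are placeholders; only the <code>metadata.labels</code> fields matter here), both objects carry the same label and can then be managed together:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
  labels:
    app: my-app
# ...rest of the Deployment...
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
  labels:
    app: my-app
# ...rest of the Service...
</code></pre>
<p>Similarly, <code>kubectl delete all -l app=my-app</code> removes them together.</p>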
<p>For more details, please read this <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="nofollow noreferrer">article</a>.</p>
|
<p>I have a scaler service that was working fine, until my recent kubernetes version upgrade. Now I keep getting the following error. (some info redacted)</p>
<p><code>Error from server (Forbidden): deployments.extensions "redacted" is forbidden: User "system:serviceaccount:namesspace:saname" cannot get resource "deployments/scale" in API group "extensions" in the namespace "namespace"</code></p>
<p>I have the below cluster role:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app: redacted
chart: redacted
heritage: Tiller
release: redacted
name: redacted
rules:
- apiGroups:
- '*'
resources: ["configmaps", "endpoints", "services", "pods", "secrets", "namespaces", "serviceaccounts", "ingresses", "daemonsets", "statefulsets", "persistentvolumeclaims", "replicationcontrollers", "deployments", "replicasets"]
verbs: ["get", "list", "watch", "edit", "delete", "update", "scale", "patch", "create"]
- apiGroups:
- '*'
resources: ["nodes"]
verbs: ["list", "get", "watch"]
</code></pre>
| <p>scale is a subresource, not a verb. Include "deployments/scale" in the resources list. </p>
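<p>For example, a minimal sketch of a rule that covers the scale subresource; merge it into the existing ClusterRole and keep your other resources and verbs as they are (<code>get</code>/<code>update</code>/<code>patch</code> are the verbs typically needed to read and change a scale):</p>
<pre><code>- apiGroups:
  - '*'
  resources: ["deployments", "deployments/scale"]
  verbs: ["get", "list", "watch", "update", "patch"]
</code></pre>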
|