<p>I have a Google Kubernetes Engine cluster with an application that has a public service endpoint. But I would like to change the endpoint from http://... to a secure https://.</p>
<p>What is the easiest way of going about this?</p>
<p>I assume I will need to somehow get an entity to issue me some sort of certificate for my domain. If that's the case, should I use the 35.X.X.X endpoint IP or use a domain such as mydomain.com?</p>
<p>Thanks</p>
| <p>There are multiple ways of doing this. Here are some:</p>
<ol>
<li>An easy way is to create an external layer 7 GCP load balancer and have it terminate your SSL with your own certificates, then create a <code>NodePort</code> service to listen for your (non-TLS) traffic and point the layer 7 load balancer at that <code>NodePort</code> on all your cluster machines. The downside of this solution is that your internal cluster traffic would be non-TLS, but you may not care about that unless you are implementing compliance requirements.</li>
</ol>
<p><a href="https://i.stack.imgur.com/9L5an.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9L5an.png" alt="LB"></a></p>
<ol start="2">
<li><p>Another way is to just let your application in the pod handle SSL directly and set the pod to listen on port 443 and expose it on your layer 4 GCP load balancer on port 443.</p></li>
<li><p>Another way (which is preferred on later Kubernetes versions) is to use the same <code>35.X.X.X</code> external IP, which essentially is a layer 4 load balancer, and have a <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Kubernetes Ingress</a> listen for your traffic and handle <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#tls" rel="nofollow noreferrer">TLS</a> (see the sketch below).</p></li>
</ol>
<p>Note that this all gets a bit trickier if you want to implement TLS end to end.</p>
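<p>For example, a minimal sketch of option 3 (names are placeholders; it assumes you already created a TLS secret, e.g. with <code>kubectl create secret tls my-tls-secret --cert=tls.crt --key=tls.key</code>, and that the certificate was issued for your domain rather than the raw IP):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
spec:
  tls:
  - secretName: my-tls-secret
  rules:
  - host: mydomain.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: my-service
          servicePort: 80
</code></pre>
<p>Since certificates are issued for domain names, not IPs, you would point <code>mydomain.com</code> at the load balancer's <code>35.X.X.X</code> address and request the certificate for that domain.</p>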
|
<p>I'm trying to dynamically specify the name of a file to include in a configmap.yaml, using the Helm templating language.</p>
<p>Here is an example:</p>
<pre><code>{{- $filename := .Values.KRB5_REALM -}}
apiVersion: v1
data:
  # When the config map is mounted as a volume, these will be created as files.
  krb5.conf: |
{{ .Files.Get $filename".krb5.conf" | indent 4 }}
kind: ConfigMap
metadata:
  name: {{ template "myapp.fullname" . }}
  labels:
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version }}
    app: {{ template "myapp.name" . }}
    environment: {{ .Values.environment }}
</code></pre>
<p>The above code results in an error.</p>
<p>I've tried several variations but without any success, such as:</p>
<pre><code>{{ .Files.Get .Values.KRB5_REALM".krb5.conf" | indent 4 }}
</code></pre>
<p>How can I resolve this issue?</p>
| <p>The usual way to assemble strings like this is with the <a href="https://godoc.org/text/template" rel="noreferrer">Go text/template</a> <code>printf</code> function. One way to do it could be:</p>
<pre><code>{{ printf "%s.krb5.conf" .Values.KRB5_REALM | .Files.Get | indent 4 }}
</code></pre>
<p>Or you could parenthesize the expression:</p>
<pre><code>{{ .Files.Get (printf "%s.krb5.conf" .Values.KRB5_REALM) | indent 4 }}
</code></pre>
|
<p>An individual ingress backed by a service requires an annotation <code>ingress.bluemix.net/rewrite-path</code> to get rewriting of the url. Managing multiple services in code is error-prone. Is there a way to define multiple individual ingress objects and have them all merged by the loadbalancer? It does not seem to work this way out of the box.</p>
| <p>Yes. You can have them in different namespaces; you can read more about it <a href="https://itnext.io/save-on-your-aws-bill-with-kubernetes-ingress-148214a79dcb" rel="nofollow noreferrer">here</a>.</p>
<p>You can also read about it in this <a href="https://github.com/kubernetes/kubernetes/issues/17088" rel="nofollow noreferrer">GitHub issue</a>.</p>
<p>You can also have multiple ingress controllers for example with <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">nginx ingress controller</a> by specifying the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/cli-arguments/#command-line-arguments" rel="nofollow noreferrer"><code>--ingress-class</code></a> option on the nginx ingress controller command line. More information <a href="https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/multiple-ingress-controllers" rel="nofollow noreferrer">here</a>.</p>
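<p>As a sketch, an Ingress resource is then claimed by a specific controller via the class annotation (names are illustrative):</p>
<pre><code># Handled only by the controller started with --ingress-class=nginx-internal
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: internal-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx-internal"
spec:
  rules:
  - host: internal.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: internal-service
          servicePort: 80
</code></pre>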
|
<p>I want exposing various services with a single ingress.</p>
<pre><code>rules:
- http:
    paths:
    # The path is the URL prefix for the service, e.g. /api/* or just /*
    # Note that the service will receive the entire URL with the prefix
    - path: /service1/*
      backend:
        serviceName: service1
        servicePort: 5000
    - path: /service2/*
      backend:
        serviceName: service2
        servicePort: 5000
</code></pre>
<p>The problem is that the whole URL, including the prefix, is passed to the underlying services, so all requests return 404 errors: <code>service1</code> and its API don't respond on <code>/service1/some/path</code> but directly on <code>/some/path</code>.</p>
<p>How can I specify a prefix to the underlying services?</p>
<p><strong>UPDATE</strong></p>
<p>I tried using rewrite-target as follows. Requests are sent to the <code>rasa-nlu</code> service, but they all trigger 404 because <code>rasa-nlu</code> still gets the <code>/nlu</code> </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /nlu
        backend:
          serviceName: rasa-nlu
          servicePort: 5000
</code></pre>
| <p>This might be what you are looking for;</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/rewrite-target: /
  name: rewrite
  namespace: default
spec:
  rules:
  - host: rewrite.bar.com
    http:
      paths:
      - backend:
          serviceName: echoheaders
          servicePort: 80
        path: /something
</code></pre>
<p>Note the <strong>rewrite-target</strong> annotation: it rewrites the matched <code>path</code> (here <code>/something</code>) to <code>/</code> before the request reaches the backend.</p>
<p>Found this <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/rewrite" rel="noreferrer">here</a></p>
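<p>To sanity-check the rewrite (assuming the <code>echoheaders</code> backend is deployed and you know the controller's external IP), a request like this should reach the backend with the path rewritten to <code>/</code>:</p>
<pre><code>curl -H "Host: rewrite.bar.com" http://<ingress-controller-ip>/something
</code></pre>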
|
<p>I am setting up a namespace for my application that has statefulsets, deployments, and secrets into that namespace. Using RBAC, I am defining specific roles and binding them to a service account that is used by the deployment/statefulset. This works as expected. </p>
<p>Now when I try to test if the secrets are secure by not assigning any service account to the deployment, it still pulls down the secrets. The default service account in the namespace is bound with the view clusterrole which should not have access to secrets. </p>
<p>Any clue what is happening here? </p>
<p>Thanks in advance. </p>
| <p>I believe you need to assign a <code>RoleBinding</code> to the default service account on your namespace. For example:</p>
<pre><code>kubectl create rolebinding myapp-view-binding --clusterrole=view --serviceaccount=default:default --namespace=default
</code></pre>
<p>The view role should prevent you from reading secrets.</p>
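<p>You can also check what the default service account is actually allowed to do, for example:</p>
<pre><code>kubectl auth can-i get secrets --as=system:serviceaccount:default:default -n default
</code></pre>
<p>If that returns <code>yes</code>, some other binding is granting the access.</p>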
|
<p>I have a Kubernetes cluster <strong>v1.10</strong> on <strong>CentOS 7</strong>, running on <strong>OVH</strong> cloud servers.</p>
<p>As far as I know, <strong>OVH</strong> does not provide a <strong>LoadBalancer</strong> component directly to <strong>Kubernetes</strong>.</p>
<p>I want to buy the Load Balancer product from <strong>OVH</strong> (<strong><a href="https://www.ovh.com/world/solutions/load-balancer/" rel="nofollow noreferrer">Link</a></strong>) and connect it to my Kubernetes cluster.</p>
<p>Can I connect the load balancer to Kubernetes?</p>
<p>And is there any tutorial?</p>
<p>Thank You :D</p>
| <p>Yes.</p>
<p>You can follow <a href="https://docs.ovh.com/gb/en/load-balancer/loadbalancer-introduction/" rel="nofollow noreferrer">this guide from OVH</a> in terms of setting up your load balancer.</p>
<p>In terms of Kubernetes, you'd either want to create a <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Kubernetes Ingress</a> exposed on a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort</a> (this is a <a href="https://akomljen.com/kubernetes-nginx-ingress-controller/" rel="nofollow noreferrer">good tutorial</a> for that), or you can expose your services directly on a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort</a> and point your load balancer's backend at all the nodes in your cluster on that specific NodePort; see the sketch below.</p>
<p>I would also familiarize yourself with the <a href="https://kubernetes.io/docs/concepts/services-networking/service" rel="nofollow noreferrer">Services</a> abstraction in Kubernetes.</p>
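<p>A minimal sketch of the NodePort approach mentioned above (names and ports are illustrative):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080  # point the load balancer's backend at every node on this port
</code></pre>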
|
<p>I read this post about how to share kube config.</p>
<p><a href="https://stackoverflow.com/questions/34270076/how-to-share-kube-config/34271748#34271748">How to share .kube/config?</a></p>
<p>It says that <code>kubectl config view --flatten --minify</code> is the way to get a kubeconfig file.<br>
But when it comes to using this config file, I am confused.<br>
For example, if the output is saved as <code>config-yuta</code>, do I always specify the config file like this?
<code>kubectl --kubeconfig=config-yuta cluster-info</code></p>
<p>It feels too cumbersome to always specify <code>--kubeconfig=xxx</code>.
I just want to switch the context and run e.g. <code>kubectl cluster-info</code> without specifying a certain file when I have multiple clusters.</p>
<p>Should I merge ~/.kube/config with the output?<br>
If so, how can I do it correctly? </p>
| <p>You can set multiple clusters in the same kubeconfig file, see <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="nofollow noreferrer">this doc</a></p>
<p>Another solution for using multiple kubeconfig files is to set the <code>KUBECONFIG</code> environment variable:</p>
<pre><code>export KUBECONFIG=<path to config-yuta>
</code></pre>
<p>And finally it's also possible to merge the file config-yuta with the default kubeconfig. Write to a temporary file first, because the shell truncates the redirection target before <code>kubectl</code> gets to read it:</p>
<pre><code>KUBECONFIG=~/.kube/config:<path to config-yuta> kubectl config view --flatten > /tmp/config && mv /tmp/config ~/.kube/config
</code></pre>
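<p>Once merged, you can switch clusters without <code>--kubeconfig</code>:</p>
<pre><code>kubectl config get-contexts         # list contexts from the merged file
kubectl config use-context <context-name>
kubectl cluster-info                # now runs against the selected cluster
</code></pre>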
|
<p>I have set up a custom Nginx ingress controller with an Ingress resource in Kubernetes, and instead of the "default-http-backend service" I used a custom application as the default backend service for default requests. I have also used a custom SSL certificate, set as a Kubernetes secret, for my service. The issue is that when I request the hostnames mentioned in the rules, the https redirection works. But when requests are made for hosts other than those mentioned in the rules, the default app is served and the https redirection does not work.</p>
<p>How can I redirect requests from http to https for all the requests including default requests. In other words, how to setup https redirection for wildcard domains in ingress resource. </p>
<p>Please find my yaml files for ingress resource.</p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-resource
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
    kubernetes.io/ingress.allow-http: "false"
    ingress.kubernetes.io/rewrite-target: /
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/proxy-connect-timeout: "14400"
    ingress.kubernetes.io/proxy-send-timeout: "14400"
    ingress.kubernetes.io/proxy-read-timeout: "14400"
spec:
  tls:
  - secretName: tls-secret
  rules:
  - host: service1.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service1
          servicePort: 80
  - host: service2.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: service2
          servicePort: 80
---
</code></pre>
| <p>I needed to configure a custom service (not the default-http-backend service) for default requests that don't match any rule, and this custom service should use the custom SSL certificate. At present the nginx-ingress-controller doesn't do anything if the domain names are omitted from the Ingress rules (with the intention of the "wildcard" TLS cert being used).
Therefore I added the following to the ingress yaml I used, and this works perfectly. I added the wildcard TLS host to the ingress rules at the bottom for the custom default service. Please find the code below:</p>
<pre><code>rules:
- host: service1.example.com
  http:
    paths:
    - path: /
      backend:
        serviceName: service1
        servicePort: 80
- host: service2.example.com
  http:
    paths:
    - path: /
      backend:
        serviceName: service2
        servicePort: 80
- host: '*.example.com'
  http:
    paths:
    - path: /
      backend:
        serviceName: custom-backend-service
        servicePort: 80
</code></pre>
|
<p>When using GKE, I found that all the nodes in the Kubernetes cluster must be in the same network and the same subnet. So, I wanted to understand the correct way to design networking.</p>
<p>I have two services <code>A</code> and <code>B</code> and they have no relation between them. My plan was to use a single cluster in a single region and have two nodes for each of the services <code>A</code> and <code>B</code> in different subnets in the same network.</p>
<p>However, it seems like that can't be done. The other way to partition a cluster is using <code>namespaces</code>, however I am already partitioning by development environment using namespaces.</p>
<p>I read about cluster federation <a href="https://kubernetes.io/docs/concepts/cluster-administration/federation/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/federation/</a>, however my services are small and I don't need them in multiple clusters and in sync.</p>
<p>What is the correct way to set up networking for these services? Should I just use the same network and subnet for all 4 nodes to serve the two services <code>A</code> and <code>B</code>?</p>
| <p>You can restrict the incoming (or outgoing) traffic by making use of labels and network policies.</p>
<p>In this way the pods will be able to receive traffic only if it has been generated by a pod belonging to the same application, or according to any logic you want to implement.</p>
<p>You can follow <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/network-policy" rel="nofollow noreferrer">this</a> step-by-step tutorial that guides you through the implementation of a POC.</p>
<pre><code>kubectl run hello-web --labels app=hello \
--image=gcr.io/google-samples/hello-app:1.0 --port 8080 --expose
</code></pre>
<p>Example of Network policy</p>
<pre><code>kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: hello-allow-from-foo
spec:
  policyTypes:
  - Ingress
  podSelector:
    matchLabels:
      app: hello
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: foo
</code></pre>
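<p>As a quick test of the policy (following the tutorial's pattern), a pod carrying the allowed label should reach the service, while an unlabelled pod should time out:</p>
<pre><code># allowed: carries the app=foo label matched by the ingress rule
kubectl run test-allowed --labels app=foo --image=alpine --restart=Never --rm -i -t -- wget -qO- --timeout=2 http://hello-web:8080

# blocked: no app=foo label, so the request should time out
kubectl run test-blocked --image=alpine --restart=Never --rm -i -t -- wget -qO- --timeout=2 http://hello-web:8080
</code></pre>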
|
<p>We have a CentOS-based infrastructure for Kubernetes and are also using OpenShift on top of this. We have terminated a pod and now it's not visible on the master controller any more. However, we would like to analyze its logs. Can we still access its logs? How?</p>
| <p>Containers, together with their logs, get deleted when you issue a <code>kubectl delete pod <pod-name></code>. You can use something like <a href="https://www.fluentd.org/" rel="noreferrer">Fluentd</a> or <a href="https://github.com/gliderlabs/logspout" rel="noreferrer">logspout</a> to pipe your logs to, say, an <a href="https://www.elastic.co/elk-stack" rel="noreferrer">ELK</a> or an <a href="https://docs.fluentd.org/v0.12/articles/docker-logging-efk-compose" rel="noreferrer">EFK</a> stack.</p>
|
<p>I am using Prometheus tool for monitoring my Kubernetes cluster. </p>
<p>I have set a resource limit(memory limit) in my deployments and need to configure a panel for showing the total memory available. Please let me know the query needed to run in Prometheus for getting the total memory limit available for my deployment.</p>
| <p>It is possible using the metrics <code>kube_pod_container_resource_limits_memory_bytes</code> (provided by <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube-state-metrics</a>) and <code>container_memory_usage_bytes</code> (provided by kubelet/cAdvisor):</p>
<pre><code>label_replace(
  label_replace(
    kube_pod_container_resource_limits_memory_bytes{},
    "pod_name",
    "$1",
    "pod",
    "(.+)"
  ),
  "container_name",
  "$1",
  "container",
  "(.+)"
)
-
on(pod_name,namespace,container_name)
avg(
  container_memory_usage_bytes{pod_name=~".+"}
)
by (pod_name,namespace,container_name)
</code></pre>
<p>A little explanation of the query: It is a subtraction of the memory limit and the actual usage. <code>label_replace</code> functions are needed to match the label names of both metrics, as they are obtained from different targets. <code>avg</code> is used to get the average between pod restarts, as every pod restart creates a new metric. <code>{pod_name=~".+"}</code> is used to filter metrics from <code>container_memory_usage_bytes</code> that are not useful for this case</p>
|
<p>I deployed my apps using helm. Mistakenly, I used the same helm release name. Now how can I delete a specific deployment, e.g. the first one with chart <code>my-chart-1.0.0</code>?</p>
<pre><code>$ helm ls
NAME REVISION UPDATED STATUS CHART NAMESPACE
appname 1 Mon Oct 1 10:36:17 2018 DEPLOYED my-chart-1.0.0 default
appname 1 Mon Sep 10 17:18:49 2018 DEPLOYED my-chart-2.0.0 default
appname 1 Mon Sep 10 18:18:49 2018 DEPLOYED my-chart-3.0.0 default
</code></pre>
<p>Any help is appreciated.</p>
<p>Thanks</p>
| <p>I think this should help:</p>
<p><code>helm delete $(helm ls | awk '$9 ~ /SEARCH/ { print $1 }')</code></p>
<p>Replace <code>SEARCH</code> with any chart pattern, in your case <code>my-chart-1.0.0</code>.
I would also add a <code>--dry-run</code> and check if this is indeed the deployment you want to remove.</p>
<p>You can read the Helm documentation regarding <a href="https://docs.helm.sh/helm/#helm-delete" rel="nofollow noreferrer">helm delete</a>.</p>
|
<p>I am trying to use a configmap in my deployment with helm charts. It seems files can be accessed with Helm according to the docs here: <a href="https://github.com/helm/helm/blob/master/docs/chart_template_guide/accessing_files.md" rel="nofollow noreferrer">https://github.com/helm/helm/blob/master/docs/chart_template_guide/accessing_files.md</a></p>
<p>This is my deployment:</p>
<pre><code>kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: "{{ template "service.fullname" . }}"
  labels:
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: "{{ template "service.fullname" . }}"
    spec:
      containers:
      - name: "{{ .Chart.Name }}"
        image: "{{ .Values.registryHost }}/{{ .Values.userNamespace }}/{{ .Values.projectName }}/{{ .Values.serviceName }}:{{.Chart.Version}}"
        volumeMounts:
        - name: {{ .Values.configmapName}}configmap-volume
          mountPath: /app/config
        ports:
        - containerPort: 80
          name: http
        livenessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 10
          timeoutSeconds: 5
      volumes:
      - name: {{ .Values.configmapName}}configmap-volume
        configMap:
          name: "{{ .Values.configmapName}}-configmap"
</code></pre>
<p>My configmap is accessing a config file. Here's the configmap:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: "{{ .Values.configmapName}}-configmap"
  labels:
    app: "{{ .Values.configmapName}}"
data:
{{ .Files.Get "files/{{ .Values.configmapName}}-config.json" | indent 2}}
</code></pre>
<p>The charts directory looks like this:</p>
<pre><code>files/
--runtime-config.json
templates/
--configmap.yaml
--deployment.yaml
--ingress.yaml
--service.yaml
Chart.yaml
values.yaml
</code></pre>
<p>And this is what my runtime-config.json file looks like:</p>
<pre><code>{
  "GameModeConfiguration": {
    "command": "xx",
    "modeId": 10,
    "sessionId": 11
  }
}
</code></pre>
<p>The problem is, when I install my chart (even with a dry-run mode), the data for my configmap is empty. It doesn't add the data from the config file into my configmap declaration. This is how it looks like when I do a dry-run:</p>
<pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
  name: "runtime-configmap"
  labels:
    app: "runtime"
data:
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: "whimsical-otter-runtime-service"
  labels:
    chart: "runtime-service-unknown/version"
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: "whimsical-otter-runtime-service"
    spec:
      containers:
      - name: "runtime-service"
        image: "gcr.io/xxx-dev/xxx/runtime_service:unknown/version"
        volumeMounts:
        - name: runtimeconfigmap-volume
          mountPath: /app/config
        ports:
        - containerPort: 80
          name: http
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        livenessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 10
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /health
            port: http
          initialDelaySeconds: 10
          timeoutSeconds: 5
      volumes:
      - name: runtimeconfigmap-volume
        configMap:
          name: "runtime-configmap"
---
</code></pre>
<p>What am I doing wrong that I don't get data?</p>
| <p>The replacement of the variable within the string does not work: </p>
<pre><code>{{ .Files.Get "files/{{ .Values.configmapName}}-config.json" | indent 2}}
</code></pre>
<p>But you can generate a string using the <code>printf</code> function like this:</p>
<pre><code>{{ .Files.Get (printf "files/%s-config.json" .Values.configmapName) | indent 2 }}
</code></pre>
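<p>For reference, a minimal sketch of the corrected ConfigMap template (note the file content is placed under an explicit key, since <code>data</code> expects a map of file names to file contents):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: "{{ .Values.configmapName }}-configmap"
  labels:
    app: "{{ .Values.configmapName }}"
data:
  config.json: |-
{{ .Files.Get (printf "files/%s-config.json" .Values.configmapName) | indent 4 }}
</code></pre>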
|
<p>I would like to run Node-RED as a service on Kubernetes to be able to build a custom API using the HTTP IN nodes. The goal is to be able to push any number of different flows to an arbitrary container running Node-RED using the Node-RED API.</p>
<p>I have tried running Node-RED as a service with 5 replicas and built a flow through the UI that has an HTTP in and HTTP out node. When I try hitting the service using curl on the minikube ip (e.g. curl <a href="http://192.168.64.2:30001/test" rel="nofollow noreferrer">http://192.168.64.2:30001/test</a>), it will only return the results if the load balancer happens to land on the container that has the flow. Otherwise, it will return an error with HTML.</p>
<p>Any advice on how I should go about solving this issue? Thanks!</p>
| <p>This is working as expected. If you are interacting with the Node-RED editor via the load balancer you are only editing the flow on that instance.</p>
<p>If you have 5 instances of Node-RED and only one of them is running a flow with the HTTP endpoints defined then calls to that endpoint will only succeed 1 time in 5.</p>
<p>You need to make sure that all instances have the same endpoints defined in their flows.</p>
<p>There are several ways you can do this, some examples would be:</p>
<ul>
<li>Use the Node-RED Admin API to push the flows to each of the Node-RED instances in turn (see the sketch below). You will probably need to do this via the private IP address of each instance to prevent the load balancer getting in the way.</li>
<li>Use a custom Storage plugin to store the flow in a database and have all the Node-RED instances load the same flow. You would need to restart the instances to force the flow to be reloaded should you change it.</li>
</ul>
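<p>For the first option, Node-RED's Admin HTTP API can set the active flows; a sketch of pushing a saved flow file to one instance (the address is a placeholder, and an access token is needed if <code>adminAuth</code> is enabled):</p>
<pre><code>curl -X POST http://<instance-private-ip>:1880/flows \
  -H "Content-Type: application/json" \
  -d @flows.json
</code></pre>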
|
<p>I want to try and configure a Filter in Envoy Proxy to block ingress and egress to the service based on some IP's, hostname, routing table, etc.</p>
<p>I have searched the documentation and see it's possible, but I didn't find any examples of its usage.</p>
<p>Can someone point out some example of how It can be done?</p>
<ul>
<li><p>One configuration example is present on this page:
<a href="https://www.envoyproxy.io/docs/envoy/latest/api-v2/config/rbac/v2alpha/rbac.proto" rel="nofollow noreferrer">https://www.envoyproxy.io/docs/envoy/latest/api-v2/config/rbac/v2alpha/rbac.proto</a></p>
<ul>
<li>But this is for a service account, like in Kubernetes.</li>
</ul></li>
<li><p>The closest to what I want, I can see here in this page:
<a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/network_filters/rbac_filter#statistics" rel="nofollow noreferrer">https://www.envoyproxy.io/docs/envoy/latest/configuration/network_filters/rbac_filter#statistics</a><br/></p>
<ul>
<li>Mentioned as, <em>"The filter supports configuration with either a safe-list (ALLOW) or block-list (DENY) set of policies based on properties of the connection (IPs, ports, SSL subject)."</em> </li>
<li>But it doesn't show how to do it.</li>
</ul></li>
</ul>
<p>I have figured out something like this:</p>
<pre><code>network_filters:
- name: service-access
  config:
    rules:
      action: ALLOW
      policies:
        "service-access":
          principals:
            source_ip: 192.168.135.211
          permissions:
          - destination_ip: 0.0.0.0
          - destination_port: 443
</code></pre>
<p><strong>But I am not able to apply this network filter. All the configurations give me configuration error.</strong></p>
| <p>This is a complete RBAC filter config given to me by the Envoy team in their GitHub issue. I haven't tested it out though.</p>
<pre><code>static_resources:
  listeners:
  - name: "ingress listener"
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 9001
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: local_service
                per_filter_config:
                  envoy.filters.http.rbac:
                    rbac:
                      rules:
                        action: ALLOW
                        policies:
                          "per-route-rule":
                            permissions:
                            - any: true
                            principals:
                            - any: true
          http_filters:
          - name: envoy.filters.http.rbac
            config:
              rules:
                action: ALLOW
                policies:
                  "general-rules":
                    permissions:
                    - any: true
                    principals:
                    - any: true
          - name: envoy.router
            config: {}
          access_log:
            name: envoy.file_access_log
            config: {path: /dev/stdout}
  clusters:
  - name: local_service
    connect_timeout: 0.250s
    type: static
    lb_policy: round_robin
    http2_protocol_options: {}
    hosts:
    - socket_address:
        address: 127.0.0.1
        port_value: 9000
admin:
  access_log_path: "/dev/null"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8080
</code></pre>
|
<p>The scenario:</p>
<p>I have two K8s clusters. One is on-prem, the other is hosted in AWS. I could use Istio to make communication painless and do things like balloon capacity in AWS, but I'm getting hung up on trying to connect them. Reading the documentation, it looks like I need a VPN deployed inside of K8s if I want to have encrypted tunnels so that each internal network can talk to the other side. They're both non-overlapping 10-dots so I have that part done. </p>
<p>Is that correct or am I missing something on how to connect the two K8s clusters? </p>
| <p>Having Istio in your cluster is independent of setting up basic communication in between your two clusters. There are a few options that I can think of here:</p>
<ol>
<li>VPN between some nodes in both clusters like you mentioned.</li>
<li>BGP peering with <a href="https://docs.projectcalico.org/v3.2/usage/configuration/bgp" rel="nofollow noreferrer">Calico</a> and your existing infrastructure.</li>
<li>A router in between your two clusters that understand the internal cluster IPs (This could be with BGP or static routes)</li>
<li><a href="https://kubernetes.io/docs/concepts/cluster-administration/federation/" rel="nofollow noreferrer">Kubernetes Federation</a>. V1 is in alpha and V2 is in the implementation phase as of this writing. Not prod ready yet IMO. </li>
</ol>
|
<p>Is it possible to use VDO on Kubernetes (with Docker containers)?</p>
<p>As far as I know, block devices are mountable - the problem here (I think) would be loading the VDO modules into the Docker container. I assume it's not possible to do it within the Docker container, so the responsibility lies on the host.</p>
| <p>Correct, it's not directly supported by <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">Kubernetes</a> but you can always manage your VDO modules and <a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/storage_administration_guide/vdo-quick-start" rel="nofollow noreferrer">volumes</a> at the host level. For example, mount the volumes under /mnt/vdo0 and then use them in a container with the <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer"><code>HostPath</code></a> volume option. </p>
<p>You can also, for example, specify a VDO volume as the main graph directory for your <a href="https://docs.docker.com/v17.09/engine/reference/commandline/dockerd/" rel="nofollow noreferrer">docker daemon</a> with something like <code>/usr/bin/dockerd -g /mnt/vdo0</code>. That will make your images and your non-external volume container storage be stored in that directory.</p>
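<p>A minimal sketch of the <code>HostPath</code> approach mentioned above, assuming the VDO volume is already formatted and mounted at <code>/mnt/vdo0</code> on the node:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: vdo-test
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: vdo-volume
      mountPath: /data
  volumes:
  - name: vdo-volume
    hostPath:
      path: /mnt/vdo0  # the VDO-backed mount on the host
</code></pre>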
|
<p>Is it possible to attach ~30 persistent disks to single k8s node (e.g. n1-standard-4)?</p>
<p>According to the documentation 2-4 core node can support up to 64 attached disks in Beta: <a href="https://cloud.google.com/compute/docs/disks/#increased_persistent_disk_limits" rel="nofollow noreferrer">Link</a>.</p>
<p>Is it supported by GKE? Is there any limit in GKE Kubernetes?</p>
| <p>GKE has the same limitation as vanilla Kubernetes on GCP per se. The Kubernetes limits for the largest public cloud providers are documented <a href="https://kubernetes.io/docs/concepts/storage/storage-limits/#kubernetes-default-limits" rel="nofollow noreferrer">here</a> </p>
<p>You can also change those limits using the <code>KUBE_MAX_PD_VOLS</code> environment variable on the <code>kube-scheduler</code> (after restarting it). Unfortunately, you won't be able to change this on GKE, because GKE doesn't give you access to the master(s) configuration yet.</p>
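<p>On a self-managed cluster, that could look like the following in a kubeadm-style static pod manifest for the scheduler (a sketch; the path and manifest layout depend on your setup):</p>
<pre><code># /etc/kubernetes/manifests/kube-scheduler.yaml (excerpt)
spec:
  containers:
  - name: kube-scheduler
    env:
    - name: KUBE_MAX_PD_VOLS
      value: "64"
</code></pre>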
<p>Also documented <a href="https://kubernetes.io/docs/concepts/storage/storage-limits/#dynamic-volume-limits" rel="nofollow noreferrer">here</a> is Dynamic Volume Limits introduced in Kubernetes 1.11 and currently in Beta.</p>
<p>I believe you self-answered your first question, the <code>n1-standard-4</code> VM has 4 vCPUs and per the <a href="https://cloud.google.com/compute/docs/disks/#increased_persistent_disk_limits" rel="nofollow noreferrer">link</a> that you provided you can attach up to 64 disks. So yes, you should be able to attach 30 persistent disks, a PVC/PV in the GCE storage class maps to GCP VM disk.</p>
|
<p>We have a relatively standard Kubernetes cluster which is hosted in the cloud behind a load balancer. We have found that most of the system pods only run as a single instance, the really concerning thing is that by default our nginx ingress controller will also run as a single instance. This means that in the event of a node failure there is a 1/n chance of every single application going down until the liveness probe kicks in and moves the ingress controller pod.</p>
<p>We have had to increase the number of replicas of our ingress controller because it is a single point of failure. However, I'm not particularly happy about how that makes our network diagram look and I'd imagine that this would cause issues if any of our applications were stateful.</p>
<p>Some pods (like heapster) you can probably only have a single instance of but I was wondering if anyone had any guidelines on what can and can't be scaled up and why this is the default behavior?</p>
<p>Thanks,</p>
<p>Joe</p>
| <p>I don't see any issues with scaling your ingress controllers: you just have more replicas, served by your external IPs or load balancer. In the event one of them goes down, your load balancer will stop forwarding requests to the ingress that is down.</p>
<p>As far as the backend, you can have one or more replicas too; it really depends on what kind of redundancy you want to have and also on the type of service. Having said that, I really don't recommend an ingress for stateful apps. An ingress operates at layer 7 (HTTP(s)); you'd be better off connecting directly using TCP in your cluster, for example when connecting to a MySQL or PostgreSQL instance. I suppose ElasticSearch is one of those exceptions where you would add data through HTTP(s), but I'd be careful about posting large amounts of data through an Ingress.</p>
|
<p>I'm testing Amazon EKS and I'd like to know if I need to remove kube-dns if I'd like to use external-dns instead.</p>
<p>Today I'm using KOPS to create K8S clusters in AWS. And I'm using the cluster-internal DNS server (kube-dns) with the flag <code>--watch-ingress=true</code> to automatically create route53 "hosts" based on my Ingress annotations.</p>
<p>I'd like to reproduce this behavior with EKS and I see this project : <a href="https://github.com/kubernetes-incubator/external-dns" rel="nofollow noreferrer">https://github.com/kubernetes-incubator/external-dns</a></p>
<p>But I don't know if it replaces kube-dns or if it works in addition to it.</p>
<p>Thank you for your help. </p>
| <p><code>kube-dns</code> is for DNS resolution inside the cluster. It doesn't interfere with external, public DNS resolution. So, don't delete <code>kube-dns</code>.</p>
<p>Kops' <a href="https://github.com/kubernetes/kops/blob/master/dns-controller/docs/flags.md" rel="nofollow noreferrer"><code>dns-controller</code> offers the <code>--watch-ingress</code> flag</a>, not <code>kube-dns</code>. Both the <code>dns-controller</code> & <code>external-dns</code> (Kubernetes incubator) can register public DNS names in AWS Route53.
<a href="https://github.com/kubernetes-incubator/external-dns/issues/221" rel="nofollow noreferrer"><code>external-dns</code> is aimed to replace <code>dns-controller</code> in the future</a>.</p>
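<p>With <code>external-dns</code>, records come either from your Ingress hosts or from a service annotation; for example (hostname is illustrative):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    external-dns.alpha.kubernetes.io/hostname: my-app.example.com
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
</code></pre>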
|
<p>I have tried to search for how to install Helm on Raspberry PI 3 (ARM), but I have just found fragments of information here and there.</p>
<p>What are the steps to install Helm on a Raspberry Pi 3 running Raspbian Stretch?</p>
| <pre><code>export HELM_VERSION=v2.9.1
export HELM_INSTALL_DIR=~/bin
# Create the install dir and make sure it is on your PATH
mkdir -p $HELM_INSTALL_DIR
# Download the ARM build of the Helm client and unpack it
wget https://kubernetes-helm.storage.googleapis.com/helm-$HELM_VERSION-linux-arm.tar.gz
tar xvzf helm-$HELM_VERSION-linux-arm.tar.gz
mv linux-arm/helm $HELM_INSTALL_DIR/helm
rm -rf linux-arm
# Install Tiller into the cluster using an ARM-compatible image
helm init --tiller-image=jessestuart/tiller:v2.9.1
# Verify that the client can talk to Tiller
helm list
</code></pre>
|
<p>I have two Ubuntu 18.04 bare metal servers. using devstack deployment I have stood up a multi-node (2 nodes) cluster where one server has the controller services and compute, while the second has only compute. In the controller node, I have enabled lbaas v2 with Octavia.</p>
<pre>
# LBaaS
enable_plugin neutron-lbaas https://git.openstack.org/openstack/neutron-lbaas stable/queens
enable_plugin octavia https://git.openstack.org/openstack/octavia stable/queens
enable_service q-lbaasv2 octavia o-cw o-hk o-hm o-api
</pre>
<p>I've created a kubernetes cluster with 1 master and 2 minion nodes. Some initial testing was successful: deploying WordPress via Helm created a load balancer and I was able to access the app as expected.</p>
<p>I'm now trying to set up a nginx-ingress controller. when I deploy my nginx-ingress controller LoadBalancer service, I can see the load balancer created in OpenStack. however, attempts to access the ingress controller using the external IP always result in an empty reply. </p>
<p>Using the CLI I can see the load balancer, pools, and members. The member entries indicate there is an error:</p>
<pre>
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| address | 10.0.0.9 |
| admin_state_up | True |
| created_at | 2018-09-28T22:15:51 |
| id | 109ad896-5953-4b2b-bbc9-d251d44c3817 |
| name | |
| operating_status | ERROR |
| project_id | 12b95a935dc3481688eb840249c9b167 |
| protocol_port | 31042 |
| provisioning_status | ACTIVE |
| subnet_id | 1e5efaa0-f95f-44a1-a271-541197f372ab |
| updated_at | 2018-09-28T22:16:33 |
| weight | 1 |
| monitor_port | None |
| monitor_address | None |
+---------------------+--------------------------------------+
</pre>
<p>However, there is no indication of what the error is. there is no corresponding error in the log that I can find.</p>
<p>Using kubectl port-forward I verified that the nginx ingress controller is up/running and correctly configured. the problem seems to be in the load balancer.</p>
<p>My question is how can I diagnose what the error is? </p>
<p>I found only one troubleshooting guide related to lbaas v2 and it claims I should be able to see q-lbaas- namespaces when I run: <code>ip netns list</code>. However, there are none defined. </p>
<p>Using helm --dry-run --debug the service yaml is:</p>
<pre><code># Source: nginx-ingress/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-ingress
    chart: nginx-ingress-0.25.1
    component: "controller"
    heritage: Tiller
    release: oslb2
  name: oslb2-nginx-ingress-controller
spec:
  clusterIP: ""
  externalTrafficPolicy: "Local"
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app: nginx-ingress
    component: "controller"
    release: oslb2
  type: "LoadBalancer"
</code></pre>
<p>Interestingly, comparing with a previous (wordpress) LoadBalancer service that worked, I noticed that the nginx-ingress <code>externalTrafficPolicy</code> is set to <strong>Local</strong>, while wordpress specified Cluster. I changed the values.yaml for the nginx-ingress chart to set <code>externalTrafficPolicy</code> to Cluster and now the load balancer is working.</p>
<p>We'd like to keep the policy at "Local" to preserve source IPs. Any thoughts on why it doesn't work?</p>
| <p>It turns out I was barking up the wrong tree (apologies). There is no issue with the load balancer. </p>
<p>The problem stems from Kubernetes' inability to match the minion/worker hostname with its node name. The nodes take the short form of the hostname, e.g.
<code>k8s-cluster-fj7cs2gokrnz-minion-1</code>, while kube-proxy does the lookup based on the fully qualified name: <code>k8s-cluster-fj7cs2gokrnz-minion-1.novalocal</code>.</p>
<p>i found this in the log for kube-proxy:</p>
<pre><code>Sep 27 23:26:20 k8s-cluster-fj7cs2gokrnz-minion-1.novalocal runc[2205]: W0927 23:26:20.050146 1 server.go:586]
Failed to retrieve node info: nodes "k8s-cluster-fj7cs2gokrnz-minion-1.novalocal" not found
Sep 27 23:26:20 k8s-cluster-fj7cs2gokrnz-minion-1.novalocal runc[2205]: W0927 23:26:20.050241 1 proxier.go:463] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
</code></pre>
<p>This has the effect of making Kubernetes fail to find "Local" endpoints for LoadBalancer (or other) services. When you specify <code>externalTrafficPolicy: "Local"</code> K8s will drop packets since it i) is restricted to routing only to endpoints local to the node and ii) it believes there are no local endpoints.</p>
<p>Other folks who have encountered this issue configure kube-proxy with <code>hostname-override</code> to make the two match up.</p>
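<p>For example (a sketch; the exact wiring depends on how kube-proxy is launched on your nodes):</p>
<pre><code># Make kube-proxy's node lookup use the short name Kubernetes registered
kube-proxy --hostname-override=$(hostname -s) <existing-flags>
</code></pre>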
|
<p>Because Kubernetes handles situations where there's a typo in the job spec, and therefore a container image can't be found, by leaving the job in a running state forever, I've got a process that monitors job events to detect cases like this and deletes the job when one occurs.</p>
<p>I'd prefer to just stop the job so there's a record of it. Is there a way to stop a job?</p>
| <p>Not really, no such mechanism exists in Kubernetes yet afaik.</p>
<p>A workaround is to ssh into the machine and run (if you're using Docker):</p>
<pre><code># Save the logs
$ docker logs <container-id-that-is-running-your-job> > save.log 2>&1
$ docker stop <main-container-id-for-your-job>
</code></pre>
<p>It's better to stream logs with something like <a href="https://www.fluentd.org/" rel="nofollow noreferrer">Fluentd</a>, or <a href="https://github.com/gliderlabs/logspout" rel="nofollow noreferrer">logspout</a>, or <a href="https://www.elastic.co/products/beats/filebeat" rel="nofollow noreferrer">Filebeat</a> and forward the logs to an ELK or EFK stack.</p>
<p>In any case, I've opened <a href="https://github.com/kubernetes/kubernetes/issues/69311" rel="nofollow noreferrer">this issue</a>.</p>
|
<p>I was trying to run <code>kubelet parameters add --feature-gates=ReadOnlyAPIDataVolumes=false</code> on my GKE node with node version 1.9.7.</p>
<p>Then I got the following error:</p>
<pre><code>I1002 00:56:53.617596 13469 feature_gate.go:226] feature gates: &{{} map[ReadOnlyAPIDataVolumes:false]}
I1002 00:56:53.617724 13469 controller.go:114] kubelet config controller: starting controller
I1002 00:56:53.617729 13469 controller.go:118] kubelet config controller: validating combination of defaults and flags
error: error reading /var/lib/kubelet/pki/kubelet.key, certificate and key must be supplied as a pair
</code></pre>
<p>If I run <code>sudo kubelet parameters add --feature-gates=ReadOnlyAPIDataVolumes=false</code>, then I got </p>
<pre><code>error: unrecognized key: ReadOnlyAPIDataVolumes
</code></pre>
<p>My questions:</p>
<ul>
<li>In general, should kubelet command be executed from as root or not?</li>
<li>Specifically how to run "kubelet parameters add" command successfully?</li>
</ul>
| <ol>
<li><p>Yes. Although it may be possible to run as <code>non-root</code>, the kubelet has control over so many different components on your system that it would be difficult to make it talk to all the components as <code>non-root</code>.</p></li>
<li><p>I'm really not sure where <code>kubelet parameters add</code> comes from, or how it's set up on GKE (I believe it's the good old <code>kube-up.sh</code> script). But generally, you can change your kubelet parameters at the <code>systemd</code> level. For example, I use kubeadm and change/add flags in <code>/var/lib/kubelet/kubeadm-flags.env</code> or just inline on the kubelet command line, according to this <code>systemd</code> service definition:</p>
<pre><code>$ systemctl cat kubelet
# /lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/home/
[Service]
ExecStart=/usr/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap- kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --hostname-override=ip-x-x-x-x.us-east-1.compute.internal"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
</code></pre></li>
</ol>
|
<p>Every time I initialize a new cluster, everything works perfectly for anywhere from 3 days to around a month. Then kube-dns simply stops functioning. I can shell into the kubedns container, and it seems to be running fine, although I don't really know what to look for. I can ping a hostname, it resolves and is reachable, so kubedns container itself still has dns service. It's just not providing it for other containers in the cluster. And the failure happens in both containers that have been running since before it started (so they used to be able to resolve+ping a hostname, but now cannot resolve it, but can still ping with IP), and new containers that are created.</p>
<p>I'm not sure if it's related to time, or the number of jobs or pods that have been created. The most recent incident happened after 32 pods had been created, and 20 jobs.</p>
<p>If I delete the kube-dns pod with:</p>
<pre><code>kubectl delete pod --namespace kube-system kube-dns-<pod_id>
</code></pre>
<p>A new kube-dns pod is created and things go back to normal (DNS works for all containers, new and old).</p>
<p>I have one master node and two worker nodes. They are all CentOS 7 machines.</p>
<p>To setup the cluster, on the master, I run:</p>
<pre><code>systemctl start etcd
etcdctl mkdir /kube-centos/network
etcdctl mk /kube-centos/network/config '{ "Network": "172.30.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }'
systemctl disable etcd && systemctl stop etcd
systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet
kubeadm init --kubernetes-version v1.10.0 --apiserver-advertise-address=$(hostname) --ignore-preflight-errors=DirAvailable--var-lib-etcd
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=1.10&env.WEAVE_NO_FASTDP=1&env.CHECKPOINT_DISABLE=1"
kubectl --kubeconfig=/etc/kubernetes/admin.conf proxy -p 80 --accept-hosts='.*' --address=<master_ip> &
</code></pre>
<p>And on the two workers, I run:</p>
<pre><code>kubeadm join --token <K8S_MASTER_HOST>:6443 --discovery-token-ca-cert-hash sha256: --ignore-preflight-errors cri
</code></pre>
<p>Here's some shell commands+output that I've run that could be useful:</p>
<p>Before the failure starts happening, this is a container that's running on one of the workers:</p>
<pre><code>bash-4.4# env
PACKAGES= dumb-init musl libc6-compat linux-headers build-base bash git ca-certificates python3 python3-dev
HOSTNAME=network-test
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_PORT=tcp://10.96.0.1:443
PWD=/
HOME=/root
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP_PORT=443
ALPINE_VERSION=3.7
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
TERM=xterm
SHLVL=1
KUBERNETES_SERVICE_PORT=443
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_SERVICE_HOST=10.96.0.1
_=/usr/bin/env
bash-4.4# ifconfig
eth0 Link encap:Ethernet HWaddr 8A:DD:6E:E8:C4:E3
inet addr:10.44.0.1 Bcast:10.47.255.255 Mask:255.240.0.0
UP BROADCAST RUNNING MULTICAST MTU:65535 Metric:1
RX packets:1 errors:0 dropped:0 overruns:0 frame:0
TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:42 (42.0 B) TX bytes:42 (42.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
bash-4.4# ip route
default via 10.44.0.0 dev eth0
10.32.0.0/12 dev eth0 scope link src 10.44.0.1
bash-4.4# cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local dc1.int.company.com dc2.int.company.com dc3.int.company.com
options ndots:5
-- Note that even when working, the DNS IP can't be pinged. --
bash-4.4# ping 10.96.0.10
PING 10.96.0.10 (10.96.0.10): 56 data bytes
-- Never unblocks. So we know it's fine that the container can't ping the DNS IP. --
bash-4.4# ping 10.44.0.0
PING 10.44.0.0 (10.44.0.0): 56 data bytes
64 bytes from 10.44.0.0: seq=0 ttl=64 time=0.139 ms
64 bytes from 10.44.0.0: seq=1 ttl=64 time=0.124 ms
--- 10.44.0.0 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.124/0.131/0.139 ms
bash-4.4# ping somehost.env.dc1.int.company.com
PING somehost.env.dc1.int.company.com (10.112.17.2): 56 data bytes
64 bytes from 10.112.17.2: seq=0 ttl=63 time=0.467 ms
64 bytes from 10.112.17.2: seq=1 ttl=63 time=0.271 ms
64 bytes from 10.112.17.2: seq=2 ttl=63 time=0.214 ms
64 bytes from 10.112.17.2: seq=3 ttl=63 time=0.241 ms
64 bytes from 10.112.17.2: seq=4 ttl=63 time=0.350 ms
--- somehost.env.dc1.int.company.com ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.214/0.308/0.467 ms
bash-4.4# ping 10.112.17.2
PING 10.112.17.2 (10.112.17.2): 56 data bytes
64 bytes from 10.112.17.2: seq=0 ttl=63 time=0.474 ms
64 bytes from 10.112.17.2: seq=1 ttl=63 time=0.404 ms
--- 10.112.17.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.404/0.439/0.474 ms
bash-4.4# ping worker1.env
PING worker1.env (10.112.5.50): 56 data bytes
64 bytes from 10.112.5.50: seq=0 ttl=64 time=0.051 ms
64 bytes from 10.112.5.50: seq=1 ttl=64 time=0.102 ms
--- worker1.env ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.051/0.076/0.102 ms
</code></pre>
<p>After failure starts, same container that's been running the whole time:</p>
<pre><code>bash-4.4# env
PACKAGES= dumb-init musl libc6-compat linux-headers build-base bash git ca-certificates python3 python3-dev
HOSTNAME=vda-test
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_PORT=tcp://10.96.0.1:443
PWD=/
HOME=/root
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP_PORT=443
ALPINE_VERSION=3.7
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
TERM=xterm
SHLVL=1
KUBERNETES_SERVICE_PORT=443
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_SERVICE_HOST=10.96.0.1
OLDPWD=/root
_=/usr/bin/env
bash-4.4# ifconfig
eth0 Link encap:Ethernet HWaddr 22:5E:D5:72:97:98
inet addr:10.44.0.2 Bcast:10.47.255.255 Mask:255.240.0.0
UP BROADCAST RUNNING MULTICAST MTU:65535 Metric:1
RX packets:1645 errors:0 dropped:0 overruns:0 frame:0
TX packets:1574 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:718909 (702.0 KiB) TX bytes:150313 (146.7 KiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
bash-4.4# ip route
default via 10.44.0.0 dev eth0
10.32.0.0/12 dev eth0 scope link src 10.44.0.2
bash-4.4# cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local dc1.int.company.com dc2.int.company.com dc3.int.company.com
options ndots:5
bash-4.4# ping 10.44.0.0
PING 10.44.0.0 (10.44.0.0): 56 data bytes
64 bytes from 10.44.0.0: seq=0 ttl=64 time=0.130 ms
64 bytes from 10.44.0.0: seq=1 ttl=64 time=0.097 ms
64 bytes from 10.44.0.0: seq=2 ttl=64 time=0.072 ms
64 bytes from 10.44.0.0: seq=3 ttl=64 time=0.102 ms
64 bytes from 10.44.0.0: seq=4 ttl=64 time=0.116 ms
64 bytes from 10.44.0.0: seq=5 ttl=64 time=0.099 ms
64 bytes from 10.44.0.0: seq=6 ttl=64 time=0.167 ms
64 bytes from 10.44.0.0: seq=7 ttl=64 time=0.086 ms
--- 10.44.0.0 ping statistics ---
8 packets transmitted, 8 packets received, 0% packet loss
round-trip min/avg/max = 0.072/0.108/0.167 ms
bash-4.4# ping somehost.env.dc1.int.company.com
ping: bad address 'somehost.env.dc1.int.company.com'
bash-4.4# ping 10.112.17.2
PING 10.112.17.2 (10.112.17.2): 56 data bytes
64 bytes from 10.112.17.2: seq=0 ttl=63 time=0.523 ms
64 bytes from 10.112.17.2: seq=1 ttl=63 time=0.319 ms
64 bytes from 10.112.17.2: seq=2 ttl=63 time=0.304 ms
--- 10.112.17.2 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.304/0.382/0.523 ms
bash-4.4# ping worker1.env
ping: bad address 'worker1.env'
bash-4.4# ping 10.112.5.50
PING 10.112.5.50 (10.112.5.50): 56 data bytes
64 bytes from 10.112.5.50: seq=0 ttl=64 time=0.095 ms
64 bytes from 10.112.5.50: seq=1 ttl=64 time=0.073 ms
64 bytes from 10.112.5.50: seq=2 ttl=64 time=0.083 ms
--- 10.112.5.50 ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.073/0.083/0.095 ms
</code></pre>
<p>And here are some commands in the kube-dns container:</p>
<pre><code>/ # ifconfig
eth0 Link encap:Ethernet HWaddr 9A:24:59:D1:09:52
inet addr:10.32.0.2 Bcast:10.47.255.255 Mask:255.240.0.0
UP BROADCAST RUNNING MULTICAST MTU:65535 Metric:1
RX packets:4387680 errors:0 dropped:0 overruns:0 frame:0
TX packets:4124267 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1047398761 (998.8 MiB) TX bytes:1038950587 (990.8 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:4352618 errors:0 dropped:0 overruns:0 frame:0
TX packets:4352618 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1
RX bytes:359275782 (342.6 MiB) TX bytes:359275782 (342.6 MiB)
/ # ping somehost.env.dc1.int.company.com
PING somehost.env.dc1.int.company.com (10.112.17.2): 56 data bytes
64 bytes from 10.112.17.2: seq=0 ttl=63 time=0.430 ms
64 bytes from 10.112.17.2: seq=1 ttl=63 time=0.252 ms
--- somehost.env.dc1.int.company.com ping statistics ---
2 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.208/0.274/0.430 ms
/ # netstat
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53152 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:sunproxyadmin 10.32.0.1:58424 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53174 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:sunproxyadmin 10.32.0.1:58468 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:sunproxyadmin 10.32.0.1:58446 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53096 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:sunproxyadmin 10.32.0.1:58490 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53218 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53100 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53158 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53180 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:sunproxyadmin 10.32.0.1:58402 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53202 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53178 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:sunproxyadmin 10.32.0.1:58368 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53134 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53200 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53136 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53130 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53222 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53196 TIME_WAIT
tcp 0 0 kube-dns-86f4d74b45-2kxdr:48230 10.96.0.1:https ESTABLISHED
tcp 0 0 kube-dns-86f4d74b45-2kxdr:10054 10.32.0.1:53102 TIME_WAIT
netstat: /proc/net/tcp6: No such file or directory
netstat: /proc/net/udp6: No such file or directory
netstat: /proc/net/raw6: No such file or directory
Active UNIX domain sockets (w/o servers)
Proto RefCnt Flags Type State I-Node Path
</code></pre>
<p>Version/OS info on master+worker nodes:</p>
<pre><code>kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
uname -a
Linux master1.env.dc1.int.company.com 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
</code></pre>
| <p>Hard to tell without access to the cluster, but when you create a service for your pods, <code>kube-proxy</code> creates several iptables rules on your nodes so that you can get to them. My guess is that one or more of the iptables rules is messing up your new and existing pods.</p>
<p>Then when you delete and re-create your kube-dns pod, those iptables rules get deleted and re-created, causing things to go back to normal.</p>
<p>Some things that you can try:</p>
<ol>
<li>Upgrade to K8s 1.11 which uses core-dns.</li>
<li>Try installing a different network overlay that uses a different <code>podCidr</code></li>
<li>Try restarting your overlay pods (for example, Calico or, in your case, Weave pods; see the sketch below)</li>
</ol>
<p>All of these will cause downtime and possibly screw up your cluster. So it might be a better idea to create a new cluster and test there first.</p>
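<p>For point 3, since this cluster appears to use Weave, restarting the overlay pods could look like this (the DaemonSet recreates them automatically; the label is taken from the standard Weave manifest):</p>
<pre><code>kubectl delete pod -n kube-system -l name=weave-net
</code></pre>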
|
<p>I'm trying to install Helm as <a href="https://github.com/ahmetb/gke-letsencrypt/blob/master/10-install-helm.md" rel="nofollow noreferrer">described here</a>.
But when I'm doing <code>kubectl create serviceaccount -n kube-system tiller
</code> I get a message saying <em>Error from server (AlreadyExists): serviceaccounts "tiller" already exists</em>. But I can't see it when I visit <a href="https://console.cloud.google.com/iam-admin/iam" rel="nofollow noreferrer">https://console.cloud.google.com/iam-admin/iam</a> nor <a href="https://console.cloud.google.com/iam-admin/serviceaccounts" rel="nofollow noreferrer">https://console.cloud.google.com/iam-admin/serviceaccounts</a>. How can that be? I had just made sure I was working the correct cluster:
<code>
gcloud container clusters get-credentials my-cluster
Fetching cluster endpoint and auth data.
kubeconfig entry generated for my-cluster.
</code></p>
| <p>You are confusing Google (Cloud IAM) service accounts with Kubernetes service accounts. The <code>tiller</code> account created by that command is a Kubernetes object that lives inside the cluster, so it will never show up in the GCP IAM console. You can inspect it with:</p>
<p><code>kubectl get serviceaccount -n kube-system tiller -o yaml</code></p>
<p><a href="https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/</a></p>
|
<p>I am trying to deploy Kong in GKE as per the documentation <a href="https://github.com/Kong/kong-dist-kubernetes" rel="nofollow noreferrer">https://github.com/Kong/kong-dist-kubernetes</a></p>
<p>I noticed <a href="https://github.com/Kong/kong-dist-kubernetes/blob/master/cassandra.yaml" rel="nofollow noreferrer">cassandra</a> is available as a StatefulSet but <a href="https://github.com/Kong/kong-dist-kubernetes/blob/master/postgres.yaml" rel="nofollow noreferrer">Postgres</a> as a ReplicationController. Could someone explain the difference? Also, can anyone suggest how to choose between these two?</p>
| <p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/" rel="nofollow noreferrer">ReplicationControllers</a> predate StatefulSets. They were the original way to manage your pod replicas. The 'newer' approach to managing replicas is <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">ReplicaSets</a>, which is used by <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployments</a>.</p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSets</a> is meant for applications that require your pods to start in an ordered way together with some sort of data stored on disk. So it's very suitable for master/slave datastore or ring topology datastores like Cassandra. I would strongly recommend using StatefulSets for these types of workloads.</p>
|
<h2>Objective</h2>
<p>I want to connect to and call Kubernetes REST APIs from inside a running pod, the Kubernetes in question is an AWS EKS cluster using IAM authentication. All of this using Kubernetes Python lib.</p>
<h2>What I have tried</h2>
<p>From inside my <code>python file</code>:</p>
<pre><code>from kubernetes import client, config
config.load_incluster_config()
v1 = client.CoreV1Api()
ret = v1.list_pod_for_all_namespaces(watch=False)
</code></pre>
<p>The above command throws a <code>403</code> error. This, I believe, is due to the different auth mechanism that AWS EKS uses.</p>
<h2>What I already know works</h2>
<pre><code>ApiToken = 'eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.xxx.yyy'
configuration = client.Configuration()
configuration.host = 'https://abc.sk1.us-east-1.eks.amazonaws.com'
configuration.verify_ssl = False
configuration.debug = True
configuration.api_key = {"authorization": "Bearer " + ApiToken}
client.Configuration.set_default(configuration)
</code></pre>
<p>While the above works, I have to hardcode a token that I generate locally via kubectl and check it into the code which is a security risk.</p>
<p>Is there a more proper way to authenticate the Kubernetes python lib with AWS EKS?</p>
| <p>You can use the following method to get the token. This assumes that you have successfully installed and configured <a href="https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html" rel="nofollow noreferrer">aws-iam-authenticator</a> on your pod/server/laptop.</p>
<pre><code>import subprocess

from kubernetes import client


def get_token(cluster_name):
    # Shell out to aws-iam-authenticator to obtain a short-lived bearer token
    args = ("/usr/local/bin/aws-iam-authenticator", "token", "-i", cluster_name, "--token-only")
    popen = subprocess.Popen(args, stdout=subprocess.PIPE)
    popen.wait()
    # Decode so the token concatenates cleanly on Python 3 as well
    return popen.stdout.read().rstrip().decode("utf-8")


api_token = get_token("<cluster_name>")

configuration = client.Configuration()
configuration.host = '<api_endpoint>'
configuration.verify_ssl = False  # consider pointing at the cluster CA instead
configuration.debug = True
configuration.api_key['authorization'] = "Bearer " + api_token
configuration.assert_hostname = True
client.Configuration.set_default(configuration)

v1 = client.CoreV1Api()
ret = v1.list_pod_for_all_namespaces(watch=False)
print(ret)
</code></pre>
<p>There is a PR for kubernetes-client/python-base that adds support for exec plugins, <a href="https://github.com/kubernetes-client/python-base/pull/75" rel="nofollow noreferrer">Attempt to implement exec-plugins support in kubeconfig</a>. </p>
|
<p>I am trying to fully purge my kube env, but sometimes when I run <code>helm delete --purge</code> some pods don't get deleted.
<br>
<br>
Are there any issues with using <code>kubectl delete pods --grace-period=0 --force</code>?
Or will using this command over and over lead to any issues on my cluster or nodes? </p>
| <p>According to the K8S documentation <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete" rel="nofollow noreferrer">here</a>, depending on the application, force deletion might lead to corruption or inconsistency of the data because of duplicated pods running until the node detects the deletion and kills one of them.</p>
<blockquote>
<p>Force deleting pods does not wait for confirmation that the pod's processes have been terminated, which can leave those processes running until the node detects the deletion and completes graceful deletion. If your processes use shared storage or talk to a remote API and depend on the name of the pod to identify themselves, force deleting those pods may result in multiple processes running on different machines using the same identification which may lead to data corruption or inconsistency. Only force delete pods when you are sure the pod is terminated, or if your application can tolerate multiple copies of the same pod running at once. Also, if you force delete pods the scheduler may place new pods on those nodes before the node has released those resources and causing those pods to be evicted immediately. </p>
</blockquote>
<p>So it depends on whether the pods are using any shared resources or not.</p>
|
<p>I'm trying to use kubernetes ingress annotation rules in order to enable X509 authentication.
My ingress yaml file is defined below:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
namespace: bdf-opengie-test
name: keycloak-opengie-test-ssl
labels:
app: keycloak-opengie
annotations:
nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
nginx.ingress.kubernetes.io/auth-tls-secret: "opengie-tls-secret"
nginx.ingress.kubernetes.io/auth-tls-verify-depth: "3"
nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
spec:
rules:
- host: keycloak-opengie-test-ssl.bdf-clu4.paas.eclair.local
http:
paths:
- path: /
backend:
serviceName: keycloak-opengie
servicePort: http
tls:
- hosts:
- keycloak-opengie-test-ssl.bdf-clu4.paas.eclair.local
</code></pre>
<p>When I invoke my application url, I'm expecting to see a popup requesting for a certificate, but nothing happens.
It seems like the annotations has no effect in the ingress definition.
Can someone tell me what's going wrong in my ingress definition.
I'm using Nginx Ingress: 0.15.0 and Kubernetes 1.10.5</p>
| <p>First of all, you are missing the secret with the SSL files issued for your domain (assuming native k8s secret management). Your secret should be created with:</p>
<pre><code>kubectl --namespace bdf-opengie-test create secret tls <secret_name> --key <key_path> --cert <cert_path>
</code></pre>
<p>Then your <code>Ingress</code> .yml file should contain this secret:</p>
<pre><code> ...
tls:
- hosts:
- keycloak-opengie-test-ssl.<domain>
secretName: <secret_name>
</code></pre>
<p>Only after this can you expect the annotations for auth (or anything else that is currently not working) to take effect.</p>
<p>Note: the secret is a namespaced object. </p>
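<p>One more caveat worth checking, based on how the nginx ingress handles client certificate auth: the secret referenced by <code>nginx.ingress.kubernetes.io/auth-tls-secret</code> is expected to contain the CA certificate (under a <code>ca.crt</code> key) used to verify client certificates, and the annotation value is usually given in <code>namespace/secretName</code> form. A sketch of creating such a secret:</p>
<pre><code>kubectl --namespace bdf-opengie-test create secret generic opengie-tls-secret --from-file=ca.crt=./ca.crt
</code></pre>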
|
<p>Because Kubernetes handles situations where there's a typo in the job spec, and therefore a container image can't be found, by leaving the job in a running state forever, I've got a process that monitors job events to detect cases like this and deletes the job when one occurs.</p>
<p>I'd prefer to just stop the job so there's a record of it. Is there a way to stop a job?</p>
| <p>1) According to the K8S documentation <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">here</a>.</p>
<blockquote>
<p>Finished Jobs are usually no longer needed in the system. Keeping them around in the system will put pressure on the API server. If the Jobs are managed directly by a higher level controller, such as CronJobs, the Jobs can be cleaned up by CronJobs based on the specified capacity-based cleanup policy.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.12/#cronjobspec-v1beta1-batch" rel="nofollow noreferrer">Here</a> are the details for the failedJobsHistoryLimit property in the CronJobSpec.</p>
<p>This is another way of retaining the details of the failed job for a specific duration. The <code>failedJobsHistoryLimit</code> property can be set based on the approximate number of jobs run per day and the number of days the logs have to be retained. Agreed, the Jobs will still be there and put pressure on the API server.</p>
<p>This is interesting. Once the job completes with failure, as in the case of a typo in the image name, the pod gets deleted and the resources are not blocked or consumed anymore. Not sure exactly what <code>kubectl job stop</code> would achieve in this case. But when the Job is run with a proper image and succeeds, I can still see the pod in <code>kubectl get pods</code>. </p>
<p>2) Another approach without using the CronJob is to specify the <code>ttlSecondsAfterFinished</code> as mentioned <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#ttl-mechanism-for-finished-jobs" rel="nofollow noreferrer">here</a>.</p>
<blockquote>
<p>Another way to clean up finished Jobs (either Complete or Failed) automatically is to use a TTL mechanism provided by a TTL controller for finished resources, by specifying the .spec.ttlSecondsAfterFinished field of the Job.</p>
</blockquote>
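<p>A minimal sketch of a Job using this field (the values are illustrative; note that at the time of writing the TTL controller is an alpha feature that has to be enabled via a feature gate):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
spec:
  ttlSecondsAfterFinished: 3600   # delete the Job object one hour after it finishes
  template:
    spec:
      containers:
      - name: my-job
        image: busybox
        command: ["echo", "done"]
      restartPolicy: Never
</code></pre>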
|
<p>I want to expose my kubernetes cluster with minikube. </p>
<p>consider my tree</p>
<pre><code>.
├── deployment.yaml
├── Dockerfile
├── server.js
└── service.yaml
</code></pre>
<p>I build my docker image locally and am able to run all pods via </p>
<pre><code>kubectl create -f deployment.yaml
kubectl create -f service.yaml
</code></pre>
<p>However, when I run</p>
<pre><code> $ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2h
nodeapp LoadBalancer 10.110.106.83 <pending> 80:32711/TCP 9m
</code></pre>
<p>There is no external IP to be able to connect to the cluster. I tried to expose one pod, but the external IP stays pending. Why is there no external IP? </p>
<pre><code> $ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nodeapp
labels:
app: nodeapp
spec:
replicas: 2
selector:
matchLabels:
app: nodeapp
template:
metadata:
labels:
app: nodeapp
spec:
containers:
- name: hello-node
image: hello-node:v2
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3000
</code></pre>
<p>and </p>
<pre><code> cat service.yaml
kind: Service
apiVersion: v1
metadata:
name: nodeapp
spec:
selector:
app: nodeapp
ports:
- name: http
port: 80
targetPort: 3000
protocol: TCP
type: LoadBalancer
$ cat server.js
var http = require('http');
var handleRequest = function(request, response) {
console.log('Received request for URL: ' + request.url);
response.writeHead(200);
response.end('Hello User');
};
var www = http.createServer(handleRequest);
</code></pre>
| <p>According to the K8S documentation <a href="https://kubernetes.io/docs/concepts/services-networking/#loadbalancer" rel="noreferrer">here</a>, <code>type=LoadBalancer</code> can be used on AWS, GCP and other supported clouds, but not on Minikube.</p>
<blockquote>
<p>On cloud providers which support external load balancers, setting the type field to LoadBalancer will provision a load balancer for your Service.</p>
</blockquote>
<p>Specify the type as NodePort as mentioned <a href="https://kubernetes.io/docs/concepts/services-networking/#nodeport" rel="noreferrer">here</a> and the service will be exposed on a port on the Minikube node. Then the service can be accessed from the host OS by using the URL printed by:</p>
<blockquote>
<p>minikube service nodeapp --url</p>
</blockquote>
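<p>For example, your <code>service.yaml</code> from the question would only need the type changed; everything else can stay as it is:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: nodeapp
spec:
  type: NodePort        # expose on a high port of the Minikube node
  selector:
    app: nodeapp
  ports:
  - name: http
    port: 80
    targetPort: 3000
    protocol: TCP
</code></pre>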
|
<p>I have been experimenting with setting up a RabbitMQ cluster with Kubernetes.
After doing some research, I stumbled upon the following 2 useful tutorials:
<a href="https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/</a> and <a href="https://github.com/rabbitmq/rabbitmq-peer-discovery-k8s/tree/master/examples/k8s_statefulsets" rel="nofollow noreferrer">https://github.com/rabbitmq/rabbitmq-peer-discovery-k8s/tree/master/examples/k8s_statefulsets</a></p>
<p>The first sets up a simple RabbitMQ Service with a ReplicationController and pushes through a test message, while the second goes a bit further and sets up a StatefulSet with RBAC etc.. So far, so good.</p>
<p>However, when I complete the second tutorial, I am trying to push a test message through, like in the first tutorial by running the slightly adjusted commands:</p>
<pre><code>export BROKER_URL=amqp://guest:guest@rabbitmq:5672
/usr/bin/amqp-declare-queue --url=$BROKER_URL -q foo -d
</code></pre>
<p>which then throws the error</p>
<pre><code>logging in to AMQP server: a socket error occurred
</code></pre>
<p>At first I thought the URL was wrong, but I verified it by substituting <code>rabbitmq</code> with the IP address fetched using <code>nslookup rabbitmq</code> (as demonstrated in the first tutorial).</p>
<p>Could anyone help out on what I am missing here?</p>
<p>Thanks</p>
| <p>I resolved the problem; it appears the Kubernetes guide is outdated. If you use a newer image of ubuntu (for example, ubuntu:18.04) when testing, it seems to work! </p>
|
<p>We're starting our move from D7 to D8, and are going to be using Docker and Kubernetes (with Jenkins) to manage and deploy our D8 environments.</p>
<p>Since with Docker and Kubernetes deploying as many identical nodes as you want / need is trivial, I'm looking for feedback re choosing the standard one-to-many design of Varnish and Drupal nodes (each Varnish node points at multiple Drupal backends), as opposed to a one-to-one design where each Varnish node points at a single Drupal backend.</p>
<p>We're leaning toward a one-to-one setup since with Kubernetes we could move the health check that Varnish would execute to test a backend as healthy to the Kubernetes layer, add another check in that layer to make sure Varnish itself is healthy, and if a pairing is marked as unhealthy Kubernetes will simply send the request to the next healthy pair.</p>
<p>I don't think this would result in any more pages being put in one Varnish node or another, since if it was one-to-many and a backend is sick Varnish will just try to get the page from the next healthy backend, but it still goes into the same Varnish cache.</p>
<p>In the case of a one-to-one setup, the testing of if something is "sick" simply moves from Varnish to Kubernetes, and since Varnish itself rarely goes down, this is an unlikely occurrence.</p>
<p>Roughed up proof of concept diagrams are attached.</p>
<p>I'd be very interested to see opinions / feedback on any potential disadvantages people can think of with the 1-to-1 design.</p>
<p>Thanks,
Pablo</p>
<p><a href="https://i.stack.imgur.com/SIEfA.png" rel="nofollow noreferrer">One-to-Many</a></p>
<p><a href="https://i.stack.imgur.com/AvR8K.png" rel="nofollow noreferrer">One-to-One</a></p>
| <p>A couple of things I can think of:</p>
<ol>
<li><p>Having a 1-1 mapping will couple Varnish with your Drupal backend, meaning you won't be able to add or reduce capacity for Varnish or Drupal individually.</p>
<p>That may be fine if you don't care about this, but what if you need to add more capacity to the caching layer, say for a really heavy-hitting request? You would have to double your capacity for Drupal too, and that will incur more costs.</p>
</li>
<li><p>If you have some sort of sticky sessions you may run into a situation where a 1-1 mapping will serve a large percentage of the traffic leaving the other 1-1 mapping somehow idle or underutilized. In other words, it could lead you to have some unbalanced traffic.</p>
</li>
</ol>
|
<p>I am building an application over a managed K8s cluster. I can create deployments & co with NodePort. Services are then accessible via these ports. </p>
<p>Managing all the exposed services and their ports is becoming a challenge and for the HTTP traffic, I was thinking of exposing a single nginx proxy that would proxy <code>https://someservice.someurl:someport</code> to <code>https://someservice:someport</code> and have <code>someurl</code> DNS mapped to my front-end Ubuntu IP as depicted below.</p>
<p><a href="https://i.stack.imgur.com/CiX3a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CiX3a.png" alt="enter image description here"></a></p>
<p>For the web traffic, I reckon that it should work as expected. However, I have some deployments that are accessible via ssh; git daemon for example. With these daemons I am currently doing commands like this <code>git clone ssh://git@someipsofthecluster:someport/git-server/repos/somerepos</code> and I'd like to use the same DNS name as for http traffic (i.e. <code>git clone ssh://[email protected]:someport/git-server/repos/somerepo)</code></p>
<p>I know about iptables where I can redirect traffic incoming from one port to another IP/port but I don't know how I would go about redirecting to a given machine/port with regards to the subdomain used.</p>
| <p>You'll probably have to re-think how to do this as TCP load balancing or proxying based on DNS name is not really possible. More on this <a href="https://serverfault.com/questions/643131/proxying-tcp-by-hostname">here</a>. Keep in mind that HTTP is a Layer 7 protocol so the proxy can use the 'Host' header to direct requests. </p>
<p>Filtering based on hostname is also not possible with iptables. More on that <a href="https://serverfault.com/questions/482221/how-do-i-filter-incoming-connections-based-on-server-hostname-port-using-iptable">here</a>.</p>
<p>You can, however, use a Layer 4 proxy, meaning a TCP proxy but this will be based on listening on a specific TCP port. <a href="https://docs.nginx.com/nginx/admin-guide/load-balancer/tcp-udp-load-balancer/" rel="nofollow noreferrer">Nginx</a> can do it or you can also use something else like <a href="http://www.haproxy.org/" rel="nofollow noreferrer">Haproxy</a>.</p>
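<p>As an illustration, a minimal nginx layer 4 (stream) block for the git daemon might look like this (the upstream name and ports here are assumptions, and nginx must be built with the stream module):</p>
<pre><code>stream {
    server {
        listen 2222;               # dedicated TCP port for git-over-ssh
        proxy_pass git-server:22;  # forward raw TCP to the git service
    }
}
</code></pre>
<p>The trade-off is that each TCP service needs its own listening port, since there is no hostname to route on at layer 4.</p>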
|
<p>I am running Docker containers containing JVM (java8u31). These containers are deployed as pods in a kubernetes cluster. Often I get OOM for the pods and Kubernetes kills the pods and restarts it. I am having issues in finding the root cause for these OOMs as I am new to Kubernetes.</p>
<ol>
<li><p>Here are the JVM parameters</p>
<pre><code>-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Xms700M -Xmx1000M -XX:MaxRAM=1536M -XX:MaxMetaspaceSize=250M
</code></pre></li>
<li><p>These containers are deployed as stateful set and following is the resource allocation</p>
<pre><code>resources:
requests:
memory: "1.5G"
cpu: 1
limits:
memory: "1.5G"
cpu: 1
</code></pre>
<p>so the total memory allocated to the container matches the MaxRam </p></li>
<li><p>If I use <code>-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/etc/opt/jmx/java_pid%p.hprof</code>, that doesn't help because the pod is killed, recreated and restarted as soon as there is an OOM, so everything within the pod is lost.</p>
<p>The only way to get a thread or heap dump is to SSH into the pod, which I am not able to do in time because the pod is recreated after an OOM, so I don't get the memory footprint at the time of the OOM. SSHing in after an OOM is not much help.</p></li>
<li><p>I also profiled the code using visualVM, jHat but couldn't find substantial memory footprint which could lead to a conclusion of too much memory consumption by the threads running within the JVM or a probable leak.</p></li>
</ol>
<p>Any help is appreciated to resolve the OOM thrown by Kubernetes.</p>
| <p>When your application in a pod reaches the memory limit you set via <code>resources.limits.memory</code> or the namespace limit, the container is OOM-killed and Kubernetes restarts the pod.</p>
<p>The Kubernetes part of limiting resources is described in the following articles:</p>
<ul>
<li><a href="https://cloudplatform.googleblog.com/2018/05/Kubernetes-best-practices-Resource-requests-and-limits.html" rel="nofollow noreferrer">Kubernetes best practices: Resource requests and limits</a></li>
<li><a href="https://kubernetes.io/docs/concepts/policy/resource-quotas/" rel="nofollow noreferrer">Resource Quotas</a></li>
<li><a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/resource-management/admission_control_resource_quota.md" rel="nofollow noreferrer">Admission control plugin: ResourceQuota</a></li>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/" rel="nofollow noreferrer">Assign Memory Resources to Containers and Pods</a></li>
</ul>
<p>Memory consumed by Java application is not limited to the size of the Heap that you can set by specifying the options:</p>
<pre><code>-Xmssize Specifies the initial heap size.
-Xmxsize Specifies the maximum heap size.
</code></pre>
<p>A Java application needs some additional memory for metaspace, class space and stack size, and the JVM itself needs even more memory to do its tasks, like garbage collection, JIT optimization, off-heap allocations and JNI code.
It is hard to predict the total memory usage of the JVM with reasonable precision, so the best way is to measure it on a real deployment under a usual load.</p>
<p>I would recommend you to set the Kubernetes pod limit to double <code>Xmx</code> size, check if you are not getting OOM anymore, and then gradually decrease it to the point when you start getting OOM. The final value should be in the middle between these points.<br />
You can get a more precise value from memory usage statistics in a monitoring system like Prometheus.</p>
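<p>For instance, as a starting point for the settings from the question (<code>-Xmx1000M</code>), such a sketch could be:</p>
<pre><code>resources:
  requests:
    memory: "2G"
  limits:
    memory: "2G"   # roughly double of -Xmx1000M, leaving headroom for metaspace, stacks and JVM internals
</code></pre>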
<p>On the other hand, you can try to limit Java memory usage by specifying a number of the available options, like the following:</p>
<pre><code>-Xms<heap size>[g|m|k] -Xmx<heap size>[g|m|k]
-XX:MaxMetaspaceSize=<metaspace size>[g|m|k]
-Xmn<young size>[g|m|k]
-XX:SurvivorRatio=<ratio>
</code></pre>
<p>More details on that can be found in these articles:</p>
<ul>
<li><a href="https://medium.com/@matt_rasband/dockerizing-a-spring-boot-application-6ec9b9b41faf" rel="nofollow noreferrer">Properly limiting the JVM’s memory usage (Xmx isn’t enough)</a></li>
<li><a href="https://plumbr.io/blog/memory-leaks/why-does-my-java-process-consume-more-memory-than-xmx" rel="nofollow noreferrer">Why does my Java process consume more memory than Xmx</a></li>
</ul>
<p>The second way to limit JVM memory usage is to calculate heap size based on the amount of RAM(or MaxRAM). There is a good explanation of how it works in the <a href="https://web.archive.org/web/20191216024717/http://what-when-how.com:80/Tutorial/topic-684cn3k/Java-Performance-The-Definitive-Guide-218.html" rel="nofollow noreferrer">article</a>:</p>
<blockquote>
<p>The default sizes are based on the amount of memory on a machine, which can be set with the <code>-XX:MaxRAM=N</code> flag.
Normally, that value is calculated by the JVM by inspecting the amount of memory on the machine.
However, the JVM limits <code>MaxRAM</code> to <code>1 GB</code> for the client compiler, <code>4 GB</code> for 32-bit server compilers, and <code>128 GB</code> for 64-bit compilers.
The maximum heap size is one-quarter of <code>MaxRAM</code> .
This is why the default heap size can vary: if the physical memory on a machine is less than <code>MaxRAM</code> , the default heap size is one-quarter of that.
But even if hundreds of gigabytes of RAM are available, the most the JVM will use by default is <code>32 GB</code>: one-quarter of <code>128 GB</code>. The default maximum heap calculation is actually this:</p>
</blockquote>
<blockquote>
<p><code>Default Xmx = MaxRAM / MaxRAMFraction</code></p>
</blockquote>
<blockquote>
<p>Hence, the default maximum heap can also be set by adjusting the value of the - <code>XX:MaxRAMFraction=N</code> flag, which defaults to <code>4</code>.
Finally, just to keep things interesting, the <code>-XX:ErgoHeapSizeLimit=N</code> flag can also be set to a maximum default value that the JVM should use.
That value is <code>0</code> by default (meaning to ignore it); otherwise, that limit is used if it is smaller than <code>MaxRAM / MaxRAMFraction</code> .</p>
</blockquote>
<blockquote>
<p>The initial heap size choice is similar, though it has fewer complications. The initial heap size value is determined like this:</p>
</blockquote>
<blockquote>
<p><code>Default Xms = MaxRAM / InitialRAMFraction</code></p>
</blockquote>
<blockquote>
<p>As can be concluded from the default minimum heap sizes, the default value of the <code>InitialRAMFraction</code> flag is <code>64</code>.
The one caveat here occurs if that value is less than <code>5 MB</code> —or, strictly speaking, less than the values specified by <code>-XX:OldSize=N</code> (which defaults to <code>4 MB</code>) plus -<code>XX:NewSize=N</code> (which defaults to <code>1 MB</code>).
In that case, the sum of the old and new sizes is used as the initial heap size.</p>
</blockquote>
<p>This article gives you a good point to start tuning your JVM for web-oriented application:</p>
<ul>
<li><a href="https://web.archive.org/web/20220326090501/https://blog.sokolenko.me/2014/11/javavm-options-production.html" rel="nofollow noreferrer">Java VM Options You Should Always Use in Production</a></li>
</ul>
|
<p>I have to deploy a <strong>Cronjob in Kubernetes</strong> that will create a Job pod every 15 minutes. The job will check if a service is ready to provide new data. Once this service is ready, the job will take more than 1 hour to complete its execution. The problem revolves around the fact that other jobs are going to be executed during this time.</p>
<p>In short, how can I prevent a job to be executed in <strong>Kubernetes Cronjobs</strong> when there's already a Job running? </p>
| <p><code>CronJob</code> resources have a property called <code>concurrencyPolicy</code>. Setting it to <code>Forbid</code> makes the CronJob skip a new run while a previous Job is still running; the other values are <code>Allow</code> (the default) and <code>Replace</code>, which cancels the currently running Job in favor of the new one. Here is an example:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: your-cron
spec:
schedule: "*/40 8-18 * * 1-6"
concurrencyPolicy: Forbid
jobTemplate:
spec:
template:
metadata:
labels:
app: your-periodic-job
spec:
containers:
- name: your_container
image: your_image
imagePullPolicy: IfNotPresent
restartPolicy: OnFailure
</code></pre>
|
<p>I have exposed Kube API through proxy, but I do not have permission to run the Dashboard on that cluster. Can I run the dashboard on a separate cluster, and point that dashboard to the desired cluster's API? </p>
| <p>Yes, you can. Below is the preferred deployment definition for the dashboard (from the dashboard Github page). You will have to uncomment the option <code>--apiserver-host=http://my-address:port</code>. You will also have to make sure you are using the right certificates and credentials to access your kube-apiserver. For security reasons, I would recommend opening your kube-apiserver proxy only to very specific hosts like the one where your dashboard would be running.</p>
<pre><code>kind: Deployment
apiVersion: apps/v1beta2
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kube-system
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
</code></pre>
|
<p>On my GCE Kubernetes cluster I can no longer create pods.</p>
<pre><code>Warning FailedScheduling pod (www.caveconditions.com-f1be467e31c7b00bc983fbe5efdbb8eb-438ef) failed to fit in any node
fit failure on node (gke-prod-cluster-default-pool-b39c7f0c-c0ug): Insufficient CPU
</code></pre>
<p>Looking at the allocated stats of that node</p>
<pre><code>Non-terminated Pods: (8 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
default dev.caveconditions.com-n80z8 100m (10%) 0 (0%) 0 (0%) 0 (0%)
default lamp-cnmrc 100m (10%) 0 (0%) 0 (0%) 0 (0%)
default mongo-2-h59ly 200m (20%) 0 (0%) 0 (0%) 0 (0%)
default www.caveconditions.com-tl7pa 100m (10%) 0 (0%) 0 (0%) 0 (0%)
kube-system fluentd-cloud-logging-gke-prod-cluster-default-pool-b39c7f0c-c0ug 100m (10%) 0 (0%) 200Mi (5%) 200Mi (5%)
kube-system kube-dns-v17-qp5la 110m (11%) 110m (11%) 120Mi (3%) 220Mi (5%)
kube-system kube-proxy-gke-prod-cluster-default-pool-b39c7f0c-c0ug 100m (10%) 0 (0%) 0 (0%) 0 (0%)
kube-system kubernetes-dashboard-v1.1.0-orphh 100m (10%) 100m (10%) 50Mi (1%) 50Mi (1%)
Allocated resources:
(Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md)
CPU Requests CPU Limits Memory Requests Memory Limits
------------ ---------- --------------- -------------
910m (91%) 210m (21%) 370Mi (9%) 470Mi (12%)
</code></pre>
<p>Sure, I have 91% allocated and cannot fit another 10% into it. But is it not possible to overcommit resources?</p>
<p>The usage of the server is at about 10% CPU average</p>
<p><a href="https://i.stack.imgur.com/wBSuc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wBSuc.png" alt="enter image description here" /></a></p>
<p>What changes do I need to make for my Kubernetes cluster to be able to create more pods?</p>
| <p>I recently had this same issue. After some research, I found that GKE has a default <code>LimitRange</code> with CPU requests limit set to <code>100m</code>.</p>
<p>You can validate this by running <code>kubectl get limitrange -o=yaml</code>.
It's going to display something like this:</p>
<pre><code>apiVersion: v1
items:
- apiVersion: v1
kind: LimitRange
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"LimitRange","metadata":{"annotations":{},"name":"limits","namespace":"default"},"spec":{"limits":[{"defaultRequest":{"cpu":"100m"},"type":"Container"}]}}
creationTimestamp: 2017-11-16T12:15:40Z
name: limits
namespace: default
resourceVersion: "18741722"
selfLink: /api/v1/namespaces/default/limitranges/limits
uid: dcb25a24-cac7-11e7-a3d5-42010a8001b6
spec:
limits:
- defaultRequest:
cpu: 100m
type: Container
kind: List
metadata:
resourceVersion: ""
selfLink: ""
</code></pre>
<p>This limit is applied to every container. So, for instance, if you have a 4 cores node and each pod creates 2 containers, it will allow only for around ~20 pods to be created (4 cpus = 4000m -> / 100m = 40 -> / 2 = 20).</p>
<p>The "fix" here is to change the default <code>LimitRange</code> to one that better fits your use-case and then remove old pods allowing them to be recreated with the updated values. Another (and probably better) option is to directly set the CPU limits on each deployment/pod definition you have.</p>
<p>Some reading material:</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#specify-a-cpu-request-and-a-cpu-limit" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#specify-a-cpu-request-and-a-cpu-limit</a></p>
<p><a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/#create-a-limitrange-and-a-pod" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/#create-a-limitrange-and-a-pod</a></p>
<p><a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-limits-are-run" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-limits-are-run</a></p>
<p><a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-resource-requests-and-limits" rel="nofollow noreferrer">https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-resource-requests-and-limits</a></p>
|
<p>I am using KOPS and I have a cluster with 3 masters. I deleted one master and the disks (root disk and etcd disks(main and events)). </p>
<p>Now kops recreated this master and the disks, but this new master node cannot join in the cluster. The error message on kube-apiserver is </p>
<pre><code>controller.go:135] Unable to perform initial IP allocation check: unable to refresh the service IP block: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 127.0.0.1:4001: getsockopt: connection refused
</code></pre>
<p>Any idea?</p>
| <p>Looks like your <code>etcd</code> server is down on that host. It might have not been able to sync with the <code>etcd</code> servers on the other masters.</p>
<p>You can check like this:</p>
<pre><code>$ sudo docker ps | grep etcd
</code></pre>
<p>If you don't see anything then it's down. Then you can check the logs for the 'Exited' etcd container:</p>
<pre><code>$ sudo docker ps -a | grep Exited | grep etcd
$ sudo docker logs <etcd-container-id>
</code></pre>
<p>Also check that your kube-apiserver options for <code>etcd</code> look ok under <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code></p>
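<p>If the etcd container is up but the new member still can't join, you can also check cluster health from inside it (assuming the etcd v2 tooling that kops ships):</p>
<pre><code>$ sudo docker exec <etcd-container-id> etcdctl cluster-health
</code></pre>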
|
<p>We are currently providing our software as a software-as-a-service on Amazon EC2 machines. Our software is a microservice-based application with around 20 different services.
For bigger customers we use dedicated installations on a dedicated set of VMs, the number of VMs (and number of instances of our microservices) depending on the customer's requirements. A common requirement of any larger customer is that our software needs access to the customer's datacenter (e.g., for LDAP access). So far, we solved this using Amazon's virtual private gateway feature.</p>
<p>Now we want to move our SaaS deployments to Kubernetes. Of course we could just create a Kubernetes cluster across an individual customer's VMs (e.g., using kops), but that would offer little benefit.
Instead, perspectively, we would like to run a single large Kubernetes cluster on which we deploy the individual customer installations into dedicated namespaces, that way increasing resource utilization and lowering cost compared to the fixed allocation of machines to customers that we have today.
From the Kubernetes side of things, our software works fine already, we can deploy multiple installations to one cluster just fine. An open topic is however the VPN access. What we would need is a way to allow <strong>all</strong> pods in a customer's namespace access to the customer's VPN, but not to any other customers' VPNs.</p>
<p>When googling for the topic, I found approaches that add a VPN client to the individual container (e.g., <a href="https://caveofcode.com/2017/06/how-to-setup-a-vpn-connection-from-inside-a-pod-in-kubernetes/" rel="noreferrer">https://caveofcode.com/2017/06/how-to-setup-a-vpn-connection-from-inside-a-pod-in-kubernetes/</a>), which is obviously not an option.
Other approaches seem to describe running a VPN <em>server</em> inside K8s (which is also not what we need).
Again others (like the "Strongswan IPSec VPN service", <a href="https://www.ibm.com/blogs/bluemix/2017/12/connecting-kubernetes-cluster-premises-resources/" rel="noreferrer">https://www.ibm.com/blogs/bluemix/2017/12/connecting-kubernetes-cluster-premises-resources/</a> ) use DaemonSets to "configure routing on each of the worker nodes". This also does not seem like a solution that is acceptable to us, since that would allow all pods (irrespective of the namespace they are in) on a worker node access to the respective VPN... and would also not work well if we have dozens of customer installations each requiring its own VPN setup on the cluster.</p>
<p>Is there any approach or solution that provides what we need, .i.e., VPN access for the pods in a specific namespace only?
Or are there any other approaches that could still satisfy our requirement (lower cost due to Kubernetes worker nodes being shared between customers)?</p>
<p>For LDAP access, one option might be to setup a kind of LDAP proxy, so that only this proxy would need to have VPN access to the customer network (by running this proxy on a small dedicated VM for each customer, and then configuring the proxy as LDAP endpoint for the application). However, LDAP access is only one out of many aspects of connectivity that our application needs depending on the use case.</p>
| <p>If your IPSec concentrator supports VTI, it's possible to route the traffic using firewall rules. For example, pfSense supports it: <a href="https://www.netgate.com/docs/pfsense/vpn/ipsec/ipsec-routed.html" rel="nofollow noreferrer">https://www.netgate.com/docs/pfsense/vpn/ipsec/ipsec-routed.html</a>. </p>
<p>Using VTI, you can direct traffic using some kind of policy routing: <a href="https://www.netgate.com/docs/pfsense/routing/directing-traffic-with-policy-routing.html" rel="nofollow noreferrer">https://www.netgate.com/docs/pfsense/routing/directing-traffic-with-policy-routing.html</a></p>
<p>However, I can see two big problems here:</p>
<ul>
<li><p>You cannot have two IPSec tunnels with conflicting networks. For example, your kube network is 192.168.0.0/24 and you have two customers: A (172.12.0.0/24) and B (172.12.0.0/12). Unfortunately, this can happen (unless your customers are able to NAT those networks).</p></li>
<li><p>Finding the ideal criteria for rule matching (to allow the routing), since your source network is always the same. Marking packets (using iptables mangle or even through the application) can be an option, but you will still get stuck on the first problem.</p></li>
</ul>
<p>A similar scenario is found in the WSO2 (API gateway provider) architecture. They solved it using a reverse proxy in each network (sad but true) <a href="https://docs.wso2.com/display/APICloud/Expose+your+On-Premises+Backend+Services+to+the+API+Cloud#ExposeyourOn-PremisesBackendServicestotheAPICloud-ExposeyourservicesusingaVPN" rel="nofollow noreferrer">https://docs.wso2.com/display/APICloud/Expose+your+On-Premises+Backend+Services+to+the+API+Cloud#ExposeyourOn-PremisesBackendServicestotheAPICloud-ExposeyourservicesusingaVPN</a></p>
<p>Regards,</p>
<p>UPDATE:</p>
<p>I don't know if you use GKE. If yes, Alias IPs may be an option: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/alias-ips</a>. The pods' IPs will be routable from the VPC, so you can apply some kind of routing policy based on their CIDR.</p>
|
<p>We are running a couple of k8s clusters on Azure AKS.
The service (ghost blog) is behind the Nginx ingress and secured with a cert from Letsencrypt. All of that works fine but the redirect behavior is what I am having trouble with.</p>
<blockquote>
<p>The Ingress correctly re-directs from <a href="http://whatever.com" rel="nofollow noreferrer">http://whatever.com</a> to
<a href="https://whatever.com" rel="nofollow noreferrer">https://whatever.com</a> — the issue is that it does so using a 308
redirect which strips all post/page Meta anytime a user shares a
page from the site.</p>
</blockquote>
<p>The issue results in users who share any page of the site on most social properties receiving a 'Preview Link' — where the title of the page and the page meta preview do not work and are instead replaced with '308 Permanent Redirect' text — which looks like this:</p>
<p><a href="https://i.stack.imgur.com/DtgR9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DtgR9.png" alt="enter image description here" /></a></p>
<p>From the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#server-side-https-enforcement-through-redirect" rel="nofollow noreferrer">ingress-nginx docs over here</a> I can see that this is the intended behavior (ie. 308 redirect) what I believe is not intended is the interaction with social sharing services when those services attempt to create a page preview.</p>
<p>While the issue would be solved by Facebook (or twitter, etc etc) pointing direct to the https site by default, I currently have no way to force those sites to look to https for the content that will be used to create the previews.</p>
<h2>Setting Permanent Re-Direct Code</h2>
<p>I can also see that it looks like I should be able to set the redirect code to whatever I want it to be (I believe a 301 redirect will allow Facebook et al. to correctly pull post/page snippet meta), <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#permanent-redirect-code" rel="nofollow noreferrer">docs on that found here</a>.</p>
<p>The problem is that when I add the redirect-code annotation as specified:</p>
<pre><code>nginx.ingress.kubernetes.io/permanent-redirect-code: "301"
</code></pre>
<p>I still get a 308 re-direct on my resources despite being able to see (from my kubectl proxy) that the redirect-code annotation correctly applied. For reference, my full list of annotations on my Ingress looks like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ghost-ingress
annotations:
kubernetes.io/tls-acme: "true"
nginx.ingress.kubernetes.io/permanent-redirect-code: "301"
</code></pre>
<blockquote>
<p>To reiterate — my question is; what is the correct way to force a redirect to https via a custom error code (in my case 301)?</p>
</blockquote>
| <p>My guess is the TLS redirect shadows the <code>nginx.ingress.kubernetes.io/permanent-redirect-code</code> annotation. </p>
<p>You can actually change the <code>ConfigMap</code> for your <code>nginx-configuration</code> so that the default redirect is 301. That's the configuration your nginx ingress controller uses for nginx itself. The <code>ConfigMap</code> looks like this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
name: nginx-configuration
namespace: ingress-nginx
data:
use-proxy-protocol: "true"
http-redirect-code: "301"
</code></pre>
<p>You can find more about the <code>ConfigMap</code> options <a href="https://github.com/kubernetes/ingress-nginx/blob/6393ca6aafe73d6f04cd2e1181cdd102e45fe75d/docs/user-guide/nginx-configuration/configmap.md" rel="noreferrer">here</a>. Note that if you change the <code>ConfigMap</code> you'll have to restart your <code>nginx-ingress-controller</code> pod.</p>
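<p>A simple way to restart it (label selectors may differ depending on how you installed the controller) is to delete the pod and let its Deployment recreate it:</p>
<pre><code>kubectl -n ingress-nginx delete pod -l app.kubernetes.io/name=ingress-nginx
</code></pre>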
<p>You can also shell into the <code>nginx-ingress-controller</code> pod and see the actual nginx configs that the controller creates:</p>
<pre><code>kubectl -n ingress-nginx exec -it nginx-ingress-controller-xxxxxxxxxx-xxxxx bash
www-data@nginx-ingress-controller-xxxxxxxxx-xxxxx:/etc/nginx$ cat /etc/nginx/nginx.conf
</code></pre>
|
<p>I have two kubernetes clusters that were set up by kops. They are both running <code>v1.10.8</code>. I have done by best to mirror the configuration between the two. They both have RBAC enabled. I have kubernetes-dashboard running on both. They both have a <code>/srv/kubernetes/known_tokens.csv</code> with an <code>admin</code> and a <code>kube</code> user:</p>
<p><code>$ sudo cat /srv/kubernetes/known_tokens.csv
ABCD,admin,admin,system:masters
DEFG,kube,kube
(... other users ...)
</code></p>
<p>My question is how do these users get authorized with consideration to RBAC? When authenticating to kubernetes-dashboard using tokens, the <code>admin</code> user's token works on both clusters and has full access. But the <code>kube</code> user's token only has access on one of the clusters. On one cluster, I get the following errors in the dashboard.</p>
<p><code>configmaps is forbidden: User "kube" cannot list configmaps in the namespace "default"
persistentvolumeclaims is forbidden: User "kube" cannot list persistentvolumeclaims in the namespace "default"
secrets is forbidden: User "kube" cannot list secrets in the namespace "default"
services is forbidden: User "kube" cannot list services in the namespace "default"
ingresses.extensions is forbidden: User "kube" cannot list ingresses.extensions in the namespace "default"
daemonsets.apps is forbidden: User "kube" cannot list daemonsets.apps in the namespace "default"
pods is forbidden: User "kube" cannot list pods in the namespace "default"
events is forbidden: User "kube" cannot list events in the namespace "default"
deployments.apps is forbidden: User "kube" cannot list deployments.apps in the namespace "default"
replicasets.apps is forbidden: User "kube" cannot list replicasets.apps in the namespace "default"
jobs.batch is forbidden: User "kube" cannot list jobs.batch in the namespace "default"
cronjobs.batch is forbidden: User "kube" cannot list cronjobs.batch in the namespace "default"
replicationcontrollers is forbidden: User "kube" cannot list replicationcontrollers in the namespace "default"
statefulsets.apps is forbidden: User "kube" cannot list statefulsets.apps in the namespace "default"
</code></p>
<p>As per the <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#users-in-kubernetes" rel="nofollow noreferrer">official docs</a>, "Kubernetes does not have objects which represent normal user accounts". </p>
<p>I can't find anywhere on the working cluster that would give authorization to <code>kube</code>. Likewise, I can't find anything that would restrict <code>kube</code> on the other cluster. I've checked all <code>ClusterRoleBinding</code> resources in the <code>default</code> and <code>kube-system</code> namespace. None of these reference the <code>kube</code> user. So why the discrepancy in access to the dashboard and how can I adjust it?</p>
<p>Some other questions:</p>
<ul>
<li>How do I debug authorization issues such as this? The dashboard logs just say this user doesn't have access. Is there somewhere I can see which <code>serviceAccount</code> a particular request or token is mapped to?</li>
<li>What are <code>groups</code> in k8s? The <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">k8s docs</a> mention groups a lot. Even the static token users can be assigned a group such as <code>system:masters</code>, which looks like a <code>role</code>/<code>clusterrole</code>, but there is no <code>system:masters</code> role in my cluster? What exactly are <code>groups</code>? As per <a href="https://stackoverflow.com/questions/44763590/create-user-group-using-rbac-api">Create user group using RBAC API?</a>, it appears groups are simply arbitrary labels that can be defined per user. What's the point of them? Can I map a group to a RBAC serviceAccount?</li>
</ul>
<p><strong>Update</strong></p>
<p>I restarted the working cluster and it no longer works; I now get the same authorization errors as on the other cluster. It looks like it was some sort of cached access. Sorry for the bogus question. I'm still curious about my follow-up questions, but they can be made into separate questions.</p>
| <p>Hard to tell without access to the cluster, but my guess is that you have a <code>Role</code> and a <code>RoleBinding</code> somewhere for the <code>kube</code> user on the cluster that works. Not a <code>ClusterRole</code> with <code>ClusterRoleBinding</code>.</p>
<p>Something like this:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: my-role
  namespace: default
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: kube-role-binding
namespace: default
subjects:
- kind: User
name: "kube"
apiGroup: ""
roleRef:
kind: Role
name: my-role
apiGroup: ""
</code></pre>
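<p>To hunt for such a binding across all namespaces (on reasonably recent kubectl versions, <code>-o wide</code> includes the subjects), something like this can help:</p>
<pre><code>kubectl get rolebindings,clusterrolebindings --all-namespaces -o wide | grep kube
</code></pre>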
<blockquote>
<p>How do I debug authorization issues such as this? The dashboard logs
just say this user doesn't have access. Is there somewhere I can see
which serviceAccount a particular request or token is mapped to?</p>
</blockquote>
<p>You can look at the kube-apiserver logs under <code>/var/log/kube-apiserver.log</code> on your leader master. Or if it's running in a container <code>docker logs <container-id-of-kube-apiserver></code></p>
|
<p>I am trying to mount a file probes.json to an image. I started with trying to create a configmap similar to my probes.json file by manually specifying the values. </p>
<p>However, when I apply the replicator controller, I am getting an error.</p>
<p>How should I pass my JSON file to my configmap / how can I specify my values in data parameter?</p>
<p>I tried the below steps, however, I got an error. </p>
<pre><code>$ cat probes.json
[
{
"id": "F",
"url": "http://frontend.stars:80/status"
},
{
"id": "B",
"url": "http://backend.stars:6379/status"
},
{
"id": "C",
"url": "http://client.stars:9000/status"
}
]
</code></pre>
<p>Configmap: </p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: my-vol-config
namespace: stars
data:
id: F
id: B
id: C
F: |
url: http://frontend.stars:80/status
B: |
url: http://backend.stars:6379/status
C: |
url: http://client.stars:9000/status
</code></pre>
<p>ReplicaContainer:</p>
<pre><code>apiVersion: v1
kind: ReplicationController
metadata:
name: management-ui
namespace: stars
spec:
replicas: 1
template:
metadata:
labels:
role: management-ui
spec:
containers:
- name: management-ui
image: calico/star-collect:v0.1.0
imagePullPolicy: Always
ports:
- containerPort: 9001
volumeMounts:
name: config-volume
- mountPath: /star/probes.json
volumes:
- name: config-volume
configMap:
name: my-vol-config
</code></pre>
<p>Error: </p>
<pre><code>kubectl apply -f calico-namespace/management-ui.yaml
service "management-ui" unchanged
error: error converting YAML to JSON: yaml: line 20: did not find expected key
</code></pre>
| <p>In this part, the <code>-</code> should be on the first line, together with <code>name:</code>, under <code>volumeMounts</code>:</p>
<pre><code> volumeMounts:
name: config-volume
- mountPath: /star/probes.json
</code></pre>
<p>Like so:</p>
<pre><code> volumeMounts:
- name: config-volume
mountPath: /star/probes.json
</code></pre>
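<p>Separately, since the goal is to mount <code>probes.json</code> unchanged, it may be simpler to skip hand-writing the data section and create the ConfigMap straight from the file:</p>
<pre><code>kubectl create configmap my-vol-config --from-file=probes.json -n stars
</code></pre>
<p>The file name becomes the key in the ConfigMap, so the mounted volume exposes it as <code>probes.json</code>.</p>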
|
<p>I am testing a Kubernetes cluster version 1.11 and need to make POD to be accessed externally by the master server IP and by the POD port (in this case an nginx image through port 80) and I am trying to enable and configure ingress-nginx to get this access.</p>
<p>To run the tests, I added ingress-nginx to the cluster with the command:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
</code></pre>
<p>Then I tried to create an ingress as described in this example:
<a href="https://koudingspawn.de/install-kubernetes-ingress/" rel="nofollow noreferrer">https://koudingspawn.de/install-kubernetes-ingress/</a> - I just did not do the LoadBalancer portion of Digital Ocean.</p>
<p>It did not work: I could not access the configured IP or host. Because of this, I am unsure whether I added ingress-nginx to the cluster correctly, whether the example is flawed, or whether I should follow another path.</p>
| <p>Neither of the canonical approaches will give you exactly what you want here.</p>
<p>The typical solution involves either using LoadBalancer service type or NodePort and manualy configuring your network LB to point to the ports of the NodePort service.</p>
<p>I will make 3 assumptions here :</p>
<ul>
<li>you have no LB service available so you want to connect with HTTP(S) to the IP of your master</li>
<li>your master hosts kube api on port like 6443, or anything else but 80/443 that you want to use for web traffic</li>
<li>you are talking about single master and using it for the traffic. It's an obvious SPOF, so I assume you do not care about HA that much</li>
</ul>
<p>With that in mind, you need to adapt your ingress deployment to fit your needs.</p>
<p>Nginx ingress, within its network namespace, listens on the standard ports (80/443). If, instead of exposing it with a <code>Service</code>, you run it with <code>hostNetwork: true</code>, you will see the ingress listening directly on 80/443. To be certain it's running on your master, you need to allow it to be scheduled on the master (probably via tolerations) and make sure it is scheduled on the master and not some other node (nodeSelector/NodeAffinity, or a DaemonSet to run it on ~every node in the cluster).</p>
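<p>A minimal sketch of the relevant pod spec fields for that approach (the hostname label value is an assumption; use your master's actual node name):</p>
<pre><code>spec:
  hostNetwork: true                       # bind directly to the node's 80/443
  nodeSelector:
    kubernetes.io/hostname: <master-node-name>
  tolerations:
  - key: node-role.kubernetes.io/master   # allow scheduling on the master
    effect: NoSchedule
</code></pre>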
<p>Another solution can be to actually go the canonical way and have the ingress listening on some nodeports, and then have another piece of software act as loadbalancer deployed to master either by means of kube (<code>hostNetwork</code>) or by completely autonomous mechanism (ie. as systemd service unit), that would listen on 80/443 and tcp forward the traffic to the nodeports.</p>
|
<p>I have a dotnet core pod in Kubernetes (minikube) that needs to access a local SQL Server (testing server).
It works in the container, but when I put it into a pod it can't find the SQL server on my machine.</p>
<p>But I can ping my SQL Server from the pod.</p>
<p>here is the error from log</p>
<pre><code>An error occurred using the connection to database 'ArcadiaAuthenServiceDB' on server '192.168.2.68'.
System.Data.SqlClient.SqlException (0x80131904): A network-related or instance-specific error occurred
while establishing a connection to SQL Server. The server was not found or was not accessible. Verify
that the instance name is correct and that SQL Server is configured to allow remote connections.
(provider: TCP Provider, error: 40 - Could not open a connection to SQL Server)
</code></pre>
<p>ping</p>
<pre><code>root@authenservice-dpm-57455f59cf-7rqvz:/app# ping 192.168.2.68
PING 192.168.2.68 (192.168.2.68) 56(84) bytes of data.
64 bytes from 192.168.2.68: icmp_seq=1 ttl=127 time=0.449 ms
64 bytes from 192.168.2.68: icmp_seq=2 ttl=127 time=0.361 ms
64 bytes from 192.168.2.68: icmp_seq=3 ttl=127 time=0.323 ms
64 bytes from 192.168.2.68: icmp_seq=4 ttl=127 time=0.342 ms
^C
--- 192.168.2.68 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3064ms
rtt min/avg/max/mdev = 0.323/0.368/0.449/0.053 ms
root@authenservice-dpm-57455f59cf-7rqvz:/app#
</code></pre>
<p>my Connection String in Container</p>
<pre><code>"DefaultConnection": "Server=mssql-s; Database=ArcadiaAuthenServiceDB; MultipleActiveResultSets=true;User Id=pbts;Password=pbts"
</code></pre>
<p>I tried creating a Service endpoint in Kubernetes, but no luck.</p>
<p>Thank you.</p>
<p>EDIT: Here is the Service.yml file:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: mssql-s
  namespace: default
spec:
  ports:
  - port: 1433
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mssql-s
  namespace: default
subsets:
- addresses:
  - ip: 192.168.2.68
  ports:
  - port: 1433
</code></pre>
<p>EDIT
I checked that SQL Server is listening on 1433 as well:</p>
<pre><code>PS C:\Windows\system32> netstat -aon | findstr 1433
TCP 0.0.0.0:1433 0.0.0.0:0 LISTENING 5028
TCP [::]:1433 [::]:0 LISTENING 5028
</code></pre>
<p>Is there anything I can do to solve this problem?</p>
| <p>Thank you for all your replies.
Today I found the solution. It is not about k8s but about a firewall setting.
I added an Inbound rule to allow port 1433, and that was it.</p>
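<p>For reference, the rule on the Windows host can be added with a one-liner like this (a sketch using the built-in firewall cmdlets; adjust the display name to taste):</p>
<pre><code>New-NetFirewallRule -DisplayName "SQL Server 1433" -Direction Inbound -Protocol TCP -LocalPort 1433 -Action Allow
</code></pre>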
|
<p>I have installed a k8s cluster on AWS with kops and I have followed instructions to deploy api-platform on that cluster with helm.</p>
<p>I don't understand why the php pod's log shows a 405 when the php pod tries to invalidate the cache in the varnish pod.</p>
<p>In the Varnish pod inside /usr/local/etc/varnish/default.vcl my whitelist is the default one</p>
<pre><code># Hosts allowed to send BAN requests
acl invalidators {
"localhost";
"php";
}
</code></pre>
<p>UPDATE: I think the problem can be generalized this way: from a pod A inside a service A, I want to call a service B. I need the request received in pod B to preserve the IP of service A, not the IP of pod A. </p>
| <p>My original problem can be summarized as:</p>
<p>from a pod A inside a service A, I want to call a service B, and I need the request received in pod B to carry the IP of service A as its origin, not the IP of pod A.</p>
<p>But the Kubernetes networking model requires that:</p>
<ul>
<li>all containers can communicate with all other containers without NAT</li>
<li>all nodes can communicate with all containers (and vice-versa) without NAT</li>
<li>the IP that a container sees itself as is the same IP that others see it as</li>
</ul>
<p>So my requirement can't be satisfied.</p>
<p>So my solution is to use <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/network-policies/</a> rather than relying on an IP-based whitelist in the varnish VCL config.</p>
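<p>A minimal sketch of such a policy, assuming the varnish pods carry the label <code>app: varnish</code> and the php pods <code>app: php</code> (adapt the labels and namespace to your deployment):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-php-to-varnish
spec:
  podSelector:
    matchLabels:
      app: varnish        # policy applies to the varnish pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: php        # only php pods may connect
</code></pre>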
|
<p>My environment:</p>
<p>CentOS7 linux</p>
<p><strong>/etc/hosts:</strong></p>
<p>192.168.0.106 master01</p>
<p>192.168.0.107 node02</p>
<p>192.168.0.108 node01</p>
<p>On master01 machine:</p>
<p><strong>/etc/hostname:</strong></p>
<p>master01</p>
<p>On master01 machine I execute commands as follows:</p>
<p>1)yum install docker-ce kubelet kubeadm kubectl </p>
<p>2)systemctl start docker.service</p>
<p>3)vim /etc/sysconfig/kubelet</p>
<p>EDIT the file:</p>
<p>KUBELET_EXTRA_ARGS="--fail-swap-on=false"</p>
<p>4)systemctl enable docker kubelet</p>
<p>5)kubeadm init --kubernetes-version=v1.12.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=all</p>
<p>THEN</p>
<p>The first error message: </p>
<p><strong>unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory</strong></p>
<p>kubelet.go:2236] node "master01" not found</p>
<p>kubelet_node_status.go:70] Attempting to register node master01
Oct 2 23:32:35 master01 kubelet: E1002 23:32:35.974275 49157 </p>
<p>kubelet_node_status.go:92] Unable to register node "master01" with API server: Post <a href="https://192.168.0.106:6443/api/v1/nodes" rel="nofollow noreferrer">https://192.168.0.106:6443/api/v1/nodes</a>: dial tcp 192.168.0.106:6443: connect: connection refused</p>
<p>I don't know why node master01 is not found.</p>
<p>I have tried a lot of ways but can't solve the problem.</p>
<p>Thank you!</p>
| <p>Your issue might also be caused by firewall rules restricting TCP connections to port 6443.
So you can temporarily disable the firewall on the master node to validate this:</p>
<pre><code>systemctl stop firewalld
</code></pre>
<p>and then try to perform kubeadm init once again.
Hope it helps.</p>
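<p>If disabling the firewall fixes it, re-enable it and open just the API server port instead, for example with firewalld (a sketch; other node ports such as 10250 may also be needed):</p>
<pre><code>systemctl start firewalld
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --reload
</code></pre>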
|
<p>I have created an HA Kubernetes cluster with kubeadm version 1.11.2 and installed the Calico CNI plugin, which is up and running. I am trying to create a deployment with a Docker image. The deployment is created successfully and a container starts on the node, but the container cannot communicate with anything outside the node hosting it (it is reachable only via that node's IP). </p>
<p>I have logged into the container and tried to ping the masters and other nodes; it fails.</p>
<p>Can anyone help me in resolving this issue? </p>
| <p>Hard to tell, but has to be an issue with Calico/CNI. Are all your Calico pods ready on all your nodes, like this:</p>
<pre><code>$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-node-xxxxx 2/2 Running 0 15h
</code></pre>
<p>You can check the CNI configs under <code>/etc/cni/net.d</code> Maybe your <code>install-cni.sh</code> container in your Calico pod didn't initialize the configs? For example:</p>
<pre><code>{
"name": "k8s-pod-network",
"cniVersion": "0.3.0",
"plugins": [
{
"type": "calico",
"log_level": "info",
"datastore_type": "kubernetes",
"nodename": "<node-name>",
"mtu": 1500,
"ipam": {
"type": "host-local",
"subnet": "usePodCidr"
},
"policy": {
"type": "k8s"
},
"kubernetes": {
"kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
}
},
{
"type": "portmap",
"snat": true,
"capabilities": {"portMappings": true}
}
]
}
</code></pre>
<p>Normally, your container will have an interface and podCidr IP assigned to it, so after you shell into the pod/containers you can check with <code>$ ifconfig</code></p>
|
<p>I'm trying to run spark-submit against a Kubernetes cluster with a Spark 2.3 Docker container image.</p>
<p>The challenge I'm facing is that the application has a mainapplication.jar plus other dependency files and jars located in a remote location like AWS S3. Per the Spark 2.3 documentation there is something called a Kubernetes init-container to download remote dependencies, but in this case I am not creating any PodSpec to include init-containers, since per the documentation Spark 2.3 on Kubernetes internally creates the pods (driver, executor) itself. So I'm not sure how I can use an init-container for spark-submit when there are remote dependencies.</p>
<p><a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html#using-remote-dependencies" rel="nofollow noreferrer">https://spark.apache.org/docs/latest/running-on-kubernetes.html#using-remote-dependencies</a></p>
<p>Please suggest</p>
| <p>It works as it should with s3a:// URLs. Unfortunately, getting s3a running on the stock spark-hadoop2.7.3 is problematic (authentication mainly), so I opted for building Spark with Hadoop 2.9.1, since S3A has seen significant development there.</p>
<p>I have created a <a href="https://gist.github.com/jalkjaer/4621d9034c5588880d854290c588004e" rel="nofollow noreferrer">gist</a> with the steps needed to </p>
<ul>
<li>build spark with new hadoop dependencies </li>
<li>build the docker image for k8s</li>
<li>push image to ECR</li>
</ul>
<p>The script also creates a second Docker image with the S3A dependencies added, plus base conf settings enabling S3A via IAM credentials, so running in AWS doesn't require putting an access/secret key in conf files or args.</p>
<p>I haven't run any production Spark jobs using the image yet, but I have tested that basic saving and loading to s3a URLs does work. </p>
<p>I have yet to experiment with <a href="https://hadoop.apache.org/docs/r2.9.1/hadoop-aws/tools/hadoop-aws/s3guard.html" rel="nofollow noreferrer">S3Guard</a> which uses DynamoDB to ensure that S3 writes/reads are consistent - similarly to EMRFS</p>
|
<p>We are running Kubernetes 1.9.1.
We are using a Python script based on the Kubernetes client library to connect to the Kubernetes server and generate some information related to pods, such as the list of pods currently in a Terminating state. We want to send this data to the Prometheus server as a metric and raise an alert in Prometheus.
Do I need to create a custom Prometheus metric to achieve this?
Will the Prometheus alert be created in my Python script using the Prometheus client? </p>
| <p>According to <a href="https://prometheus.io/docs/introduction/overview/" rel="nofollow noreferrer">prometheus documentation</a>:</p>
<blockquote>
<p>Prometheus is a monitoring platform that collects metrics from monitored targets by scraping metrics HTTP endpoints on these targets. </p>
</blockquote>
<p>So you don't send data to Prometheus; you have to expose the data in a format readable by Prometheus and configure Prometheus to scrape it.
If that's not possible, you can push data to the Prometheus <a href="https://prometheus.io/docs/instrumenting/pushing/" rel="nofollow noreferrer">push gateway</a>.
If you use the Python client, you can use <a href="https://github.com/prometheus/client_python#exporting-to-a-pushgateway" rel="nofollow noreferrer">this code</a> as an example.</p>
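<p>For instance, a minimal sketch with the Python client, where the metric name, the gateway address and the <code>terminating_pod_count</code> value are placeholders your script supplies:</p>
<pre><code>from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
# number of pods in Terminating state, computed by your kubernetes-client script
g = Gauge('terminating_pods_total', 'Pods in Terminating state', registry=registry)
g.set(terminating_pod_count)
push_to_gateway('pushgateway.example.com:9091', job='pod-monitor', registry=registry)
</code></pre>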
<p>You have to define <a href="https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/" rel="nofollow noreferrer">alert rules</a> as part of prometheus configuration. Please note - alerts are not sent by prometheus, but by the alertmanager which is a separate process.</p>
|
<p>I need to create ServiceAccounts that can access a GKE cluster. Internally I do this with the following commands:</p>
<pre><code>kubectl create serviceaccount onboarding --namespace kube-system
kubectl apply -f onboarding.clusterrole.yaml
kubectl create clusterrolebinding onboarding --clusterrole=onboarding --serviceaccount=kube-system:onboarding
</code></pre>
<p>Where the contents of the file <code>onboarding.clusterrole.yaml</code> are something like this:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: onboarding
rules:
- apiGroups:
- '*'
resources:
- 'namespace,role,rolebinding,resourcequota'
verbs:
- '*'
</code></pre>
<p>The ServiceAccount resource is created as expected and the ClusterRole and ClusterRoleBinding also look right, but when I attempt to access the API using this new role, I get an Authentication failure.</p>
<pre><code>curl -k -X GET -H "Authorization: Bearer [REDACTED]" https://36.195.83.167/api/v1/namespaces
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "namespaces is forbidden: User \"system:serviceaccount:kube-system:onboarding\" cannot list namespaces at the cluster scope: Unknown user \"system:serviceaccount:kube-system:onboarding\"",
"reason": "Forbidden",
"details": {
"kind": "namespaces"
},
"code": 403
</code></pre>
<p>The response suggests an unknown user, but I confirmed the ServiceAccount exists and is in the Subjects of the ClusterRoleBinding. Is it possible to define a ServiceAccount in this way for GKE?</p>
<p>I am using the exact process successfully on kubernetes clusters we run in our datacenters.</p>
| <p>GKE should have the same process. Does your <code>kubectl</code> version match that of the GKE cluster? Not sure if this is the issue but the <code>ClusterRole</code> needs plurals for the resources and the resources are represented as lists:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: onboarding
rules:
- apiGroups:
- '*'
resources:
- namespaces
- roles
- rolebindings
- resourcequotas
verbs:
- '*'
</code></pre>
<p>Works for me on K8s 1.11.x:</p>
<pre><code>curl -k -X GET -H "Authorization: Bearer [REDACTED]" https://127.0.0.1:6443/api/v1/namespaces
{
"kind": "NamespaceList",
"apiVersion": "v1",
"metadata": {
"selfLink": "/api/v1/namespaces",
"resourceVersion": "12345678"
},
...
</code></pre>
|
<p>After learning about arguments that can be passed to a Java 8 Virtual Machine to make it container-aware (i.e. -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap), I am trying to add these arguments to my Kubernetes deployment for a Spring Boot service.</p>
<p>In containers section of my deployment YAML file, I have the following:<p></p>
<pre>
resources:
requests:
memory: "256Mi"
cpu: "50m"
limits:<br/>
memory: "512Mi"
cpu: "200m"
env:
- name: JVM_OPTS
value: "-Xms256M -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1"
</pre>
<p>In my Dockerfile, I have:</p>
<pre>
ENV JVM_OPTS="-Xmx256M"
ENV JVM_ARGS="-Dspring.profiles.active=kubernetes"
EXPOSE 8080
ENTRYPOINT [ "sh", "-c", "java $JVM_ARGS $JVM_OPTS -jar testservice.jar" ]
</pre>
<p>I can't seem to figure out why the max heap size does not get set properly:</p>
<pre>
$ kubectl exec test-service-deployment-79c9d4bd54-trxgj -c test-service -- java -XshowSettings:vm -version'
VM settings:
Max. Heap Size (Estimated): 875.00M
Ergonomics Machine Class: server
Using VM: OpenJDK 64-Bit Server VM
openjdk version "1.8.0_171"
OpenJDK Runtime Environment (IcedTea 3.8.0) (Alpine 8.171.11-r0)
OpenJDK 64-Bit Server VM (build 25.171-b11, mixed mode)
</pre>
<p>What am I doing wrong here?</p>
<p>On a local Docker install, I can see the JVM max heap set correctly: </p>
<pre>
$ docker run openjdk:8-jre-alpine java -Xms256M -Xmx512M -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -XX:MaxRAMFraction=1 -XshowSettings:vm -version
VM settings:
Min. Heap Size: 256.00M
Max. Heap Size: 512.00M
Ergonomics Machine Class: server
Using VM: OpenJDK 64-Bit Server VM
openjdk version "1.8.0_171"
OpenJDK Runtime Environment (IcedTea 3.8.0) (Alpine 8.171.11-r0)
OpenJDK 64-Bit Server VM (build 25.171-b11, mixed mode)
</pre>
| <p>When running <code>java -XshowSettings:vm -version</code> in the container, <code>JVM_OPTS</code> is not included in your command.</p>
<p>Try with this one</p>
<pre><code>kubectl exec test-service-deployment-79c9d4bd54-trxgj -c test-service \
-- sh -c 'java $JVM_OPTS -XshowSettings:vm -version'
</code></pre>
|
<p>I have not seen any documentation of a multi-container pod application on helm charts. Can anyone point on how to do this?</p>
<p>Something like this <a href="https://linchpiner.github.io/k8s-multi-container-pods.html" rel="noreferrer">https://linchpiner.github.io/k8s-multi-container-pods.html</a> or <a href="https://www.mirantis.com/blog/multi-container-pods-and-container-communication-in-kubernetes/" rel="noreferrer">https://www.mirantis.com/blog/multi-container-pods-and-container-communication-in-kubernetes/</a></p>
<p>So basically have a pod with multiple containers.</p>
<p>Does helm charts support this?</p>
<p>UPDATE:
I have now been able to deploy a pod with the help of this sample pod definition:</p>
<p><a href="https://github.com/helm/charts/blob/master/stable/keycloak/templates/test/test-pod.yaml" rel="noreferrer">https://github.com/helm/charts/blob/master/stable/keycloak/templates/test/test-pod.yaml</a></p>
<p>But how do I get replicas, i.e. increase the number of pods I launch, the way a deployment.yaml file does?</p>
| <p>It is supported; essentially, use templates with multiple containers in the Pod spec. That Pod spec can also live in other abstractions like Deployments, DaemonSets, StatefulSets, etc.</p>
<p>Example:</p>
<p><a href="https://github.com/helm/charts/blob/master/stable/mysql/templates/deployment.yaml" rel="noreferrer">https://github.com/helm/charts/blob/master/stable/mysql/templates/deployment.yaml</a>
<a href="https://github.com/helm/charts/blob/master/stable/lamp/templates/deployment.yaml" rel="noreferrer">https://github.com/helm/charts/blob/master/stable/lamp/templates/deployment.yaml</a></p>
<p>and a few more here:</p>
<p><a href="https://github.com/helm/charts/tree/master/stable" rel="noreferrer">https://github.com/helm/charts/tree/master/stable</a></p>
<p>You can scale your deployment replicas like this:</p>
<pre><code>kubectl scale deployment mysql-deployment --replicas=10
</code></pre>
<p>More on that <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#scaling-a-deployment" rel="noreferrer">here</a></p>
<p>On the template you can specify <code>replicas</code> in the deployment spec.</p>
<p>For example:</p>
<pre><code>kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: my-dep
namespace: kube-system
labels:
k8s-app: my-app
spec:
replicas: 1 <= here
selector:
matchLabels:
k8s-app: my-app
template:
metadata:
labels:
k8s-app: my-app
name: my-app
spec:
serviceAccountName: mysa
terminationGracePeriodSeconds: 60
containers:
- image: mycontainer
name: myappcontainer
ports:
- name: http
containerPort: 80
- name: admin
containerPort: 8080
args:
- --opt1
- --opt2
- --opt3
- image: mycontainer2
name: myappcontainer2
</code></pre>
|
<p>I'm trying to figure out what the performance impact of Kubernetes is on my applications.</p>
<p>I understand my applications are just Docker containers running on the K8s host, but still, the K8s platform itself does have its cost. It manages networking, security, storage, etc. What impact does it have (if any...) on my applications' general performance?</p>
<p>Has anyone benchmarked it? Compared to just Docker or application running directly on a given host?</p>
| <p>You can get some kubelet metrics from the Node Performance Dashboard. More information <a href="https://kubernetes.io/blog/2016/11/visualize-kubelet-performance-with-node-dashboard/" rel="nofollow noreferrer">here</a></p>
<p>Here are some metrics for the <a href="https://docs.openstack.org/performance-docs/latest/test_results/container_cluster_systems/kubernetes/proxy/index.html" rel="nofollow noreferrer">kube-proxy</a>.</p>
<p>And the network overlay performance will depend on the type of network overlay that you are using here are some for <a href="https://www.projectcalico.org/calico-dataplane-performance/" rel="nofollow noreferrer">Calico</a> comparing throughput with CPU usage.</p>
<p>You can also compare some of those metrics with profiling the running containers on your nodes using <a href="https://github.com/google/cadvisor" rel="nofollow noreferrer">cAdvisor</a></p>
<p>By the way, on later K8s versions the kube-proxy and the overlay run in a container/pod.</p>
|
<p>I created a service account bound to the default cluster role "view", which gives it view access to all of our resources.</p>
<p>But I would like to add a limitation so that this service account can't access one of our namespace.</p>
<p>Any idea how can I achieve this?</p>
<p>Br,</p>
<p>Tim</p>
| <p>Kubernetes has only two permission scopes: cluster (<code>ClusterRole</code>) or namespace (<code>Role</code>), and there is no way to limit a cluster-wide grant by excluding specific namespaces. If you want to restrict your ServiceAccount to specific namespaces, you cannot use a <code>ClusterRoleBinding</code>; instead you must create a <code>RoleBinding</code> (which may reference a namespaced <code>Role</code>, or even reuse the <code>view</code> <code>ClusterRole</code>) in every namespace the ServiceAccount should have access in. </p>
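<p>For example, a sketch of a per-namespace grant that reuses the built-in <code>view</code> ClusterRole (account and namespace names are placeholders; repeat the binding for every namespace the account should see):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: view-team-a
  namespace: team-a
subjects:
- kind: ServiceAccount
  name: my-serviceaccount
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
</code></pre>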
|
<p>I am trying to enable a deployment in the gateway namespace to send metrics to an external service at <code>engine-report.apollodata.com</code></p>
<p>I have written the following service entry and virtual service rules, as per the <a href="https://istio.io/docs/tasks/traffic-management/egress/" rel="nofollow noreferrer">Istio documentation</a> yet no traffic is able to access the endpoint.</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: apollo-engine-ext
namespace: {{ .Release.Namespace }}
labels:
chart: {{ .Chart.Name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
hosts:
- '*.apollodata.com'
ports:
- number: 80
name: http
protocol: HTTP
- number: 443
name: https
protocol: HTTPS
resolution: DNS
location: MESH_EXTERNAL
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: apollo-engine-ext
namespace: {{ .Release.Namespace }}
labels:
chart: {{ .Chart.Name }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
hosts:
- '*.apollodata.com'
tls:
- match:
- port: 443
sni_hosts:
- '*.apollodata.com'
route:
- destination:
host: '*.apollodata.com'
port:
number: 443
weight: 100
</code></pre>
<p>What might be causing this issue?</p>
| <p>I think the problem is that you are using DNS resolution in a ServiceEntry with a wildcard host. According to the <a href="https://istio.io/docs/reference/config/istio.networking.v1alpha3/#ServiceEntry-Resolution" rel="nofollow noreferrer">documentation</a>, if there are no endpoints in the ServiceEntry the DNS resolution will only work if the host is not a wildcard.</p>
<p>If the endpoints are DNS resolvable by the application, then it should work if you set the resolution to NONE.</p>
|
<p>I am using KOPS and I have a cluster with 3 masters. I deleted one master and the disks (root disk and etcd disks(main and events)). </p>
<p>Now kops recreated this master and the disks, but this new master node cannot join in the cluster. The error message on kube-apiserver is </p>
<pre><code>controller.go:135] Unable to perform initial IP allocation check: unable to refresh the service IP block: client: etcd cluster is unavailable or misconfigured; error #0: dial tcp 127.0.0.1:4001: getsockopt: connection refused
</code></pre>
<p>Any idea?</p>
| <p>Issue Solved.</p>
<p>1 - I removed the old master from the etcd cluster using etcdctl. You will need to connect to the etcd-server container to do this.</p>
<p>2 - On the new master node I stopped kubelet and protokube services.</p>
<p>3 - Empty Etcd data dir. (data and data-events)</p>
<p>4 - Edit /etc/kubernetes/manifests/etcd.manifests and etcd-events.manifest changing ETCD_INITIAL_CLUSTER_STATE from new to existing.</p>
<p>5 - Get the name and PeerURLs from the new master and use etcdctl to add the new master to the cluster (<code>etcdctl member add "name" "PeerURL"</code>). You will need to connect to the etcd-server container to do this.</p>
<p>6 - Start kubelet and protokube services on the new master.</p>
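<p>For steps 1 and 5, the etcdctl (v2 API) commands look roughly like this; the member ID and peer URL are placeholders you must read from <code>member list</code> on your own cluster:</p>
<pre><code>etcdctl member list                                   # note the ID of the dead member
etcdctl member remove 8e9e05c52164694d                # step 1: drop the old master
etcdctl member add etcd-c http://203.0.113.12:2380    # step 5: register the new master
</code></pre>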
|
<p>Based on <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/" rel="noreferrer">this</a> it is possible to create environment variables that are the same across all the pods of the deployment that you define.</p>
<p>Is there a way to instruct Kubernetes deployment to create pods that have different environment variables?</p>
<p>Use case:</p>
<p>Let's say that I have a monitoring container and I want to create 4 replicas of it. This container runs a service that sends mail when an environment variable says so. E.g., if the env var IS_MASTER is true, then the service proceeds to send those e-mails.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
...
replicas: 4
...
template:
...
spec:
containers:
      - env:
        - name: IS_MASTER
          value: <------------- True only in one of the replicas
</code></pre>
<p>(In my case I'm using helm, but the same thing can be done without helm as well.)</p>
| <p>What you are looking for is, as far as I know, more like an anti-pattern than impossible.</p>
<p>From what I understand, you seem to be looking to deploy a scalable/HA monitoring platform that wouldn't mail X times on alerts, so you can either make a sidecar container that will talk to its siblings and "elect" the master-mailer (a StatefulSet will make it easier in this case), or just separate the mailer from the monitoring and make them talk to each other through a Service. That would allow you to load-balance both monitoring and mailing separately.</p>
<pre><code>monitoring-1 \ / mailer-1
monitoring-2 --- > mailer.svc -- mailer-2
monitoring-3 / \ mailer-3
</code></pre>
<p>Any mailing request will be handled by one and only one mailer from the pool, but that's assuming your Monitoring Pods aren't all triggered together on alerts... If that's not the case, then regardless of your "master" election for the mailer, you will have to tackle that first.</p>
<p>And by tackling that first I mean adding a master-election logic to your monitoring platform, to orchestrate master fail-overs on events, there are a few ways to do so, but it really depends on what your monitoring platform is and can do...</p>
<p>Although, if your replicas are just there to extend compute power somehow and your master is expected to be static, then simply use a StatefulSet, and add a one liner at runtime doing <code>if hostname == $statefulset-name-0 then MASTER</code>, but I feel like it's not the best idea.</p>
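<p>If you do go that route, the one-liner could be a small wrapper in the container command. A sketch, assuming the StatefulSet is named <code>monitoring</code> and your entrypoint is <code>/app/start.sh</code>:</p>
<pre><code>command: ["sh", "-c"]
args:
- |
  # pod ordinal 0 of the StatefulSet acts as the master
  if [ "$(hostname)" = "monitoring-0" ]; then export IS_MASTER=true; else export IS_MASTER=false; fi
  exec /app/start.sh
</code></pre>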
|
<p>I'm having issues opening up communication with my LDAP authentication. Locally logins work fine, but when running on Kubernetes I am receiving the error:</p>
<pre><code>2018.10.03 18:23:44 INFO web[][org.sonar.INFO] Security realm: LDAP
2018.10.03 18:23:44 INFO web[][o.s.p.l.LdapSettingsManager] User mapping: LdapUserMapping{baseDn=ou=bluepages,o=ibm.com, request=(&amp;(mail={0})(objectclass=person)), realNameAttribute=cn, emailAttribute=mail}
2018.10.03 18:23:45 INFO web[][o.s.p.l.LdapSettingsManager] Groups will not be synchronized, because property 'ldap.group.baseDn' is empty.
2018.10.03 18:23:45 INFO web[][o.s.p.l.LdapContextFactory] Test LDAP connection: FAIL
2018.10.03 18:23:45 ERROR web[][o.s.s.p.Platform] Background initialization failed. Stopping SonarQube
org.sonar.plugins.ldap.LdapException: Unable to open LDAP connection
at org.sonar.plugins.ldap.LdapContextFactory.testConnection(LdapContextFactory.java:211)
at org.sonar.plugins.ldap.LdapRealm.init(LdapRealm.java:63)
</code></pre>
<p>I'm a bit unsure how to open up the flow from the Kubernetes cluster to ldap.</p>
| <p>I totally think it's either an 'Egress' firewall rule not allowing your Kubernetes cluster to talk to port <code>636</code> or an 'Incoming' firewall rule on your LDAP deployment not allowing the external IP address range of your Kubernetes cluster into port <code>636</code>.</p>
<p><a href="https://console.bluemix.net/docs/containers/cs_firewall.html#firewall" rel="nofollow noreferrer">This might help</a> where it says "Allowing the cluster to access infrastructure resources and other services"</p>
|
<p>Any idea how to get a POD status via Kubernetes REST API for a POD with known name?
I can do it via kubectl by just typing "kubectl get pods --all-namespaces" since the output lists STATUS as a separate column but not sure which REST API to use to get the STATUS of a running pod.
Thank you</p>
| <p>You can just query the API server:</p>
<pre><code>curl -k -X GET -H "Authorization: Bearer [REDACTED]" \
https://127.0.0.1:6443/api/v1/pods
</code></pre>
<p>If you want to get the status you can pipe them through something like <a href="https://stedolan.github.io/jq/" rel="nofollow noreferrer"><code>jq</code></a>:</p>
<pre><code>curl -k -X GET -H "Authorization: Bearer [REDACTED]" \
https://127.0.0.1:6443/api/v1/pods \
| jq '.items[] | .metadata.name + " " + .status.phase'
</code></pre>
|
<p>Any idea how to get a POD status via Kubernetes REST API for a POD with known name?
I can do it via kubectl by just typing "kubectl get pods --all-namespaces" since the output lists STATUS as a separate column but not sure which REST API to use to get the STATUS of a running pod.
Thank you</p>
| <p>When you know the kubectl command but are not sure which REST API it uses, run the command with the <code>-v9</code> option as below. Note that kubectl supports only a subset of operations in the imperative style (get, delete, create, etc.), so it's better to get familiar with the REST API.</p>
<blockquote>
<p>kubectl -v9 get pods</p>
</blockquote>
<p>The above will output the REST API call. This can be modified appropriately, and the output can be piped to jq to get a subset of the data.</p>
|
<p>I have just installed an EFK stack on my Kubernetes cluster using the guide on <a href="https://medium.com/@timfpark/efk-logging-on-kubernetes-on-azure-4c54402459c4" rel="noreferrer">https://medium.com/@timfpark/efk-logging-on-kubernetes-on-azure-4c54402459c4</a></p>
<p>I have it working when accessing it through the proxy as stated in the guide on</p>
<p><a href="http://localhost:8001/api/v1/namespaces/kube-system/services/kibana-logging/proxy" rel="noreferrer">http://localhost:8001/api/v1/namespaces/kube-system/services/kibana-logging/proxy</a></p>
<p>However, I want it to work through my existing ingress controller so I have created a new ingress rule using the yaml below:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
generation: 2
labels:
app: kibana
name: kibana
namespace: kube-system
spec:
rules:
- host: kibana.dev.example1.com
http:
paths:
- backend:
serviceName: kibana-logging
servicePort: 5601
path: /
status:
loadBalancer:
ingress:
- {}
</code></pre>
<p>To my service which runs as:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
addonmanager.kubernetes.io/mode: Reconcile
k8s-app: kibana-logging
kubernetes.io/cluster-service: "true"
kubernetes.io/name: Kibana
name: kibana-logging
namespace: kube-system
spec:
clusterIP: X.X.195.49
ports:
- port: 5601
protocol: TCP
targetPort: ui
selector:
k8s-app: kibana-logging
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
</code></pre>
<p>However, when I try and access my URL:
<code>http://kibana.dev.example1.com</code></p>
<p>I get: <code>{"statusCode":404,"error":"Not Found","message":"Not Found"}</code></p>
<p>If I try and access: <code>http://kibana.dev.example1.com/app/kibana#</code></p>
<p>I get: "Kibana did not load properly. Check the server output for more information."</p>
<p>After looking through the logs for both Kibana pod and ingress pod and comparing the results between a successful request through the proxy and an unsuccessful request through the ingress I can see that...</p>
<p>for hitting /</p>
<pre><code>"GET / HTTP/1.1" 200 197 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Safari/605.1.15" 491 0.003 [kube-system-kibana-logging-5601] X.X.22.204:5601 197 0.003 200 6101a7003003d34636d2012e53c23ca7
"GET /api/v1/namespaces/kube-system/services/kibana-logging/proxy/app/kibana HTTP/1.1" 404 85 "http://kibana.dev.example1.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Safari/605.1.15" 612 0.003 [kube-system-kibana-logging-5601] X.X.22.204:5601 85 0.003 404 5809ac2b33d3e23b200b13c9971d8520
</code></pre>
<p>for hitting /app/kibana#</p>
<pre><code>"GET /app HTTP/1.1" 404 85 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Safari/605.1.15" 470 0.003 [kube-system-kibana-logging-5601] X.X.22.204:5601 85 0.003 404 54a4abe0cae6d3d4298847a0db0786d6
"GET /app/kibana HTTP/1.1" 200 13301 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Safari/605.1.15" 477 0.041 [kube-system-kibana-logging-5601] X.X.22.204:5601 13272 0.040 200 6cb7e7698f5c72e0cd06b3408d8d4673
"GET /api/v1/namespaces/kube-system/services/kibana-logging/proxy/bundles/kibana.style.css?v=16627 HTTP/1.1" 404 85 "https://kibana.dev.example1.com/app/kibana" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.0 Safari/605.1.15" 576 0.004 [kube-system-kibana-logging-5601] X.X.22.204:5601 85 0.004 404 0b825f03c36c2225ab082c2a0bab15f4
</code></pre>
<p>When hitting through the proxy most of these requests return 302s rather than 404s. Can the ingress not access these URLs in namespace kube-system?</p>
<p>Am I missing something obvious here - maybe hitting the wrong Kibana URL? I've done a lot of googling and can't find anything similar.</p>
| <p>Turns out the problem was with the kibana configuration.</p>
<p>In the kibana deployment yaml there is an environment variable called <code>SERVER_BASEPATH</code> which is set to point at the kibana service proxy. This was causing the URL to be rewritten each time I tried to access the endpoint externally.</p>
<p>If you comment out this variable and its value and redeploy Kibana, then it should work by just hitting the ingress address.</p>
<p>e.g. <a href="http://kibana.dev.example1.com/" rel="noreferrer">http://kibana.dev.example1.com/</a></p>
|
<pre><code>public void runKubernetes() {
KubernetesCluster k8sCluster = this.getKubernetesCluster("xyz-aks");
System.out.println("___________________________________________");
System.out.println("Kubernetes Cluster String: " + k8sCluster.name());
DefaultKubernetesClient kubeclient = new DefaultKubernetesClient();
System.out.println("Kube client Master URL :"+kubeclient.getMasterUrl());
NodeList kubenodes = kubeclient.nodes().list();
for (Node node : kubenodes.getItems()) {
System.out.println( node.getKind() + " => " + node.getMetadata().getName() +": " + node.getMetadata().getClusterName());
}
}
</code></pre>
<p>I get the client and the nodes. Now I have a YAML file and I want to deploy that YAML (create the service, deployment and pods) programmatically. </p>
<p>I can do the following: </p>
<pre><code>kubectl create -f pod-sample.yaml
</code></pre>
<p>but I want to do the same thing using the Java SDK.</p>
<p>I am using the following Java libraries for Kubernetes:</p>
<pre><code>io.fabric8.kubernetes
</code></pre>
| <p>I believe you can parse the YAML or JSON of the deployment definition. For example, for YAML you can use any of the Java libraries <a href="http://yaml.org/" rel="nofollow noreferrer">here</a></p>
<ul>
<li><a href="https://jvyaml.dev.java.net/" rel="nofollow noreferrer">JvYaml</a> # Java port of RbYaml</li>
<li><a href="http://www.snakeyaml.org/" rel="nofollow noreferrer">SnakeYAML</a> # Java 5 / YAML 1.1</li>
<li><a href="http://yamlbeans.sourceforge.net/" rel="nofollow noreferrer">YamlBeans</a> # To/from JavaBeans</li>
<li><a href="http://jyaml.sourceforge.net/" rel="nofollow noreferrer">JYaml</a> # Original Java Implementation</li>
<li><a href="https://www.github.com/decorators-squad/camel" rel="nofollow noreferrer">Camel</a> # YAML 1.2 for Java. A user-friendly OOP library.</li>
</ul>
<p><a href="https://github.com/FasterXML/jackson" rel="nofollow noreferrer">Jackson</a> seems to be the more popular for JSON which also supports a YAML extension.</p>
<p>Then, once you have parsed (say) the name, you can for example create a service:</p>
<pre><code>Service myservice = client.services().inNamespace(parsedNamespaceStr).createNew()
.withNewMetadata()
.withName(parsedServiceName)
.addToLabels(parsedLabel1, parseLabel2)
.endMetadata()
.done();
</code></pre>
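<p>If you just want the programmatic equivalent of <code>kubectl create -f pod-sample.yaml</code> without mapping fields yourself, recent fabric8 client versions can also load a manifest directly; a sketch, assuming your client version supports <code>load()</code>:</p>
<pre><code>import java.io.FileInputStream;

// load all resources from the YAML and create (or update) them in the cluster
try (FileInputStream yaml = new FileInputStream("pod-sample.yaml")) {
    kubeclient.load(yaml).createOrReplace();
}
</code></pre>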
|
<p>I'd like to allow our developers to pass dynamic arguments to a helm template (Kubernetes job). Currently my arguments in the helm template are somewhat static (apart from certain values) and look like this </p>
<pre><code> Args:
--arg1
value1
--arg2
value2
--sql-cmd
select * from db
</code></pre>
<p>If I were run a task using the docker container without Kubernetes, I would pass parameters like so:</p>
<pre><code>docker run my-image --arg1 value1 --arg2 value2 --sql-cmd "select * from db"
</code></pre>
<p>Is there any way to templatize arguments in a helm chart in such way that any number of arguments could be passed to a template.</p>
<p>For example.</p>
<pre><code>cat values.yaml
...
arguments: --arg1 value1 --arg2 value2 --sql-cmd "select * from db"
...
</code></pre>
<p>or </p>
<pre><code>cat values.yaml
...
arguments: --arg3 value3
...
</code></pre>
<p>I've tried a few approaches but was not successful. Here is one example: </p>
<pre><code> Args:
{{ range .Values.arguments }}
{{ . }}
{{ end }}
</code></pre>
| <p>Yes. In <code>values.yaml</code> you need to give it an array instead of a space delimited string.</p>
<pre><code>cat values.yaml
...
arguments: ['--arg3', 'value3', '--arg2', 'value2']
...
</code></pre>
<p>or</p>
<pre><code>cat values.yaml
...
arguments:
- --arg3
- value3
- --arg2
- value2
...
</code></pre>
<p>and then you like you mentioned in the template should do it:</p>
<pre><code> args:
{{ range .Values.arguments }}
- {{ . }}
{{ end }}
</code></pre>
<p>If you want to override the arguments on the command line you can pass an array with <code>--set</code> like this:</p>
<pre><code>--set arguments={--arg1, value1, --arg2, value2, --arg3, value3, ....}
</code></pre>
|
<p>I have dozens of secrets to pass into a k8 deployment which becomes very verbose, bellow is an example of passing the redis secrets in from the <code>redis-secrets</code> secret. </p>
<pre><code>- name: REDIS_HOST
valueFrom:
secretKeyRef:
name: redis-secrets
key: REDIS_HOST
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: redis-secrets
key: REDIS_PASSWORD
- name: REDIS_PORT
valueFrom:
secretKeyRef:
name: redis-secrets
key: REDIS_PORT
</code></pre>
<p>Is it possible to pass all the secrets from <code>redis-secrets</code> into the deployment, with the keys of the secrets being the env variable key?</p>
| <p>I used the same approach for ConfigMaps.</p>
<p>Put this at the same level as <code>env</code>, i.e. under <code>.spec.containers[]</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>envFrom:
- secretRef:
name: redis-secrets
</code></pre>
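<p>Every key in <code>redis-secrets</code> then becomes an environment variable of the same name inside the container, so <code>REDIS_HOST</code>, <code>REDIS_PASSWORD</code> and <code>REDIS_PORT</code> are all set without listing them individually.</p>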
|
<p><em>Note: Baking keys into an image is the worst you can do, I did this here to have a binary equal filesystem between Docker and Kubernetes while debugging.</em></p>
<p>I am trying to start up a flink-jobmanager that persists its state in GCS, so I added a <code>high-availability.storageDir: gs://BUCKET/ha</code> line to my <code>flink-conf.yaml</code> and I am building my Dockerfile as described <a href="https://data-artisans.com/blog/getting-started-with-da-platform-on-google-kubernetes-engine" rel="nofollow noreferrer">here</a></p>
<p>This is my Dockerfile:</p>
<pre><code>FROM flink:1.5-hadoop28
ADD https://storage.googleapis.com/hadoop-lib/gcs/gcs-connector-latest-hadoop2.jar /opt/flink/lib/gcs-connector-latest-hadoop2.jar
RUN mkdir /opt/flink/etc-hadoop
COPY flink-conf.yaml /opt/flink/conf/flink-conf.yaml
COPY key.json /opt/flink/etc-hadoop/key.json
COPY core-site.xml /opt/flink/etc-hadoop/core-site.xml
</code></pre>
<p>Now if I build this container via <code>docker build -t flink:dev .</code> and start an interactive shell in it like <code>docker run -ti flink:dev /bin/bash</code>, I am able to start the flink jobmanager via:</p>
<p><code>flink-console.sh jobmanager --configDir=/opt/flink/conf/ --executionMode=cluster</code></p>
<p>Flink is picking up the jar's and starting normally. However, when I use the following yaml for starting it on Kubernetes, based on the one <a href="https://ci.apache.org/projects/flink/flink-docs-stable/ops/deployment/kubernetes.html" rel="nofollow noreferrer">here</a>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: flink-jobmanager
spec:
replicas: 1
selector:
matchLabels:
app: flink
component: jobmanager
template:
metadata:
labels:
app: flink
component: jobmanager
spec:
containers:
- name: jobmanager
image: flink:dev
imagePullPolicy: Always
resources:
requests:
memory: "1024Mi"
cpu: "500m"
limits:
memory: "2Gi"
cpu: "1000m"
args: ["jobmanager"]
ports:
- containerPort: 6123
name: rpc
- containerPort: 6124
name: blob
- containerPort: 6125
name: query
- containerPort: 8081
name: ui
- containerPort: 46110
name: ha
env:
- name: GOOGLE_APPLICATION_CREDENTIALS
value: /opt/flink/etc-hadoop/key.json
- name: JOB_MANAGER_RPC_ADDRESS
value: flink-jobmanager
</code></pre>
<p>Flink seems to be unable to register the filesystem:</p>
<pre><code>2018-10-04 09:20:51,357 DEBUG org.apache.flink.runtime.util.HadoopUtils - Cannot find hdfs-default configuration-file path in Flink config.
2018-10-04 09:20:51,358 DEBUG org.apache.flink.runtime.util.HadoopUtils - Cannot find hdfs-site configuration-file path in Flink config.
2018-10-04 09:20:51,359 DEBUG org.apache.flink.runtime.util.HadoopUtils - Adding /opt/flink/etc-hadoop//core-site.xml to hadoop configuration
2018-10-04 09:20:51,767 DEBUG org.apache.hadoop.security.UserGroupInformation - PrivilegedActionException as:flink (auth:SIMPLE) cause:java.io.IOException: Could not create FileSystem for highly available storage (high-availability.storageDir)
2018-10-04 09:20:51,767 ERROR org.apache.flink.runtime.entrypoint.ClusterEntrypoint - Cluster initialization failed.
java.io.IOException: Could not create FileSystem for highly available storage (high-availability.storageDir)
at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:122)
at org.apache.flink.runtime.blob.BlobUtils.createBlobStoreFromConfig(BlobUtils.java:95)
at org.apache.flink.runtime.highavailability.HighAvailabilityServicesUtils.createHighAvailabilityServices(HighAvailabilityServicesUtils.java:115)
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.createHaServices(ClusterEntrypoint.java:402)
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.initializeServices(ClusterEntrypoint.java:270)
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.runCluster(ClusterEntrypoint.java:225)
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.lambda$startCluster$0(ClusterEntrypoint.java:189)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
at org.apache.flink.runtime.entrypoint.ClusterEntrypoint.startCluster(ClusterEntrypoint.java:188)
at org.apache.flink.runtime.entrypoint.StandaloneSessionClusterEntrypoint.main(StandaloneSessionClusterEntrypoint.java:91)
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Could not find a file system implementation for scheme 'gs'. The scheme is not directly supported by Flink and no Hadoop file system to support this scheme could be loaded.
at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:405)
at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:320)
at org.apache.flink.core.fs.Path.getFileSystem(Path.java:298)
at org.apache.flink.runtime.blob.BlobUtils.createFileSystemBlobStore(BlobUtils.java:119)
... 12 more
Caused by: org.apache.flink.core.fs.UnsupportedFileSystemSchemeException: Hadoop File System abstraction does not support scheme 'gs'. Either no file system implementation exists for that scheme, or the relevant classes are missing from the classpath.
at org.apache.flink.runtime.fs.hdfs.HadoopFsFactory.create(HadoopFsFactory.java:102)
at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:401)
... 15 more
Caused by: java.io.IOException: No FileSystem for scheme: gs
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2799)
at org.apache.flink.runtime.fs.hdfs.HadoopFsFactory.create(HadoopFsFactory.java:99)
... 16 more
</code></pre>
<p>As Kubernetes should be using the same image, I am confused how this is possible. Am I overseeing something here?</p>
| <p>The problem was using the <code>dev</code> tag. With <code>imagePullPolicy: Always</code>, the cluster pulls whatever image is currently published under that mutable tag, which need not be the one you just built. Using specific version tags fixed the issue.</p>
|
<p>I have the following snippet in my Helm deployment yaml file:</p>
<pre><code>{{if or .Values.ha.enabled .Values.checkpointing.enable_checkpointing .Values.enable_upgrade_hook}}
{{if eq .Values.pvc.file_prefix "file://"}}
- mountPath: {{ .Values.pvc.shared_storage_path }}/{{ template "fullname" . }}
name: shared-pvc
{{end}}
{{end}}
</code></pre>
<p>I would like to put all these <em>if</em> checks into a custom function and just call the function here. My new snippet using the function should look like this:</p>
<pre><code>{{if eq enable_mount_volume "true"}}
- mountPath: {{ .Values.pvc.shared_storage_path }}/{{ template "fullname" . }}
name: shared-pvc
{{end}}
</code></pre>
<p>How would I achieve this? I may have multiple deployment yaml files, each doing this conditional check, and it would be useful to just call a function instead of putting the logic-heavy if check in each yaml file(Just to make it less error-prone).</p>
<p>Also, I wouldn't want to define this function in every single template file, as that would defeat the purpose.</p>
| <p>You can create a <a href="https://helm.sh/docs/howto/charts_tips_and_tricks/#using-partials-and-template-includes" rel="nofollow noreferrer" title="partial template">partial template</a> named <em>conditional-mount</em> in a file starting with underscore, for example, <code>templates/_conditional-mount.tpl</code>:</p>
<pre><code>{{define "conditional-mount"}}
{{if or .Values.ha.enabled .Values.checkpointing.enable_checkpointing .Values.enable_upgrade_hook}}
{{if eq .thisPvc.file_prefix "file://"}}
- mountPath: {{ .thisPvc.shared_storage_path }}/{{ template "fullname" . }}
name: shared-pvc
{{end}}
{{end}}
{{end}}
</code></pre>
<p>And then use it you anywhere you need via:</p>
<pre><code>{{include "conditional-mount" (dict "Values" .Values "thisPvc" .Values.pvc)}}
</code></pre>
<p>The trick here is that you specify a pvc to be mounted via the scope object <strong>thisPvc</strong> pointing to the <strong>.Values.pvc</strong>. The <a href="https://github.com/Masterminds/sprig/blob/master/docs/dicts.md" rel="nofollow noreferrer" title="Sprig dict function">Sprig dict function</a> is used.
You can then call it for a different PVC, for example <code>.Values.pvcXYZ</code>:</p>
<pre><code>{{include "conditional-mount" (dict "Values" .Values "thisPvc" .Values.pvcXYZ)}}
</code></pre>
|
<p>I have created an autoscaling Kubernetes cluster on Google Cloud Platform. I have a use case where I want to launch dedicated pods on each node, i.e. each worker node can have only one such pod, and I want these dedicated pods to also be launched on new nodes created by autoscaling. Is there a way I can achieve this?
For example, if I have 3 worker nodes, I will specify the number of replicas to be 3 along with podAntiAffinity in my deployment file, so that each of these pods launches on a different node. But if my cluster autoscales and a 4th node is added, how can I ensure that a pod is added on the 4th node?</p>
| <p>If you need one pod on every node, a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">DaemonSet</a> is the tool you're looking for. If you add nodes, the DaemonSet controller will launch new copies of the pod there automatically.</p>
<p>This is a good approach for tools like monitoring services and service meshes, where the host itself is an important entity and you really do want one per host (to collect per-host disk I/O, to transparently encrypt inter-host network traffic). You should stick to an ordinary Deployment for more typical server-based workloads where the number of replicas should scale independently from the number of hosts (you could have 20 hosts and 4 pod replicas; or you could schedule 20 pod replicas on 4 8-core systems).</p>
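<p>A minimal sketch of a DaemonSet, where the name, labels and image are placeholders:</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: example/node-agent:1.0   # one copy runs on every (new) node
</code></pre>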
|
<p>I have deployments on one Kubernetes cluster that I might want to move to another Kubernetes cluster in the future. Is it possible to combine these two clusters or must I redeploy everything? If the answer is yes, what if there are StatefulSets?</p>
| <p>The short answer is no. </p>
<p>You can connect to clusters with something like <a href="https://kubernetes.io/docs/concepts/cluster-administration/federation/" rel="nofollow noreferrer">Kubernetes Federation</a> or if you have Calico, you can use something like <a href="https://docs.projectcalico.org/v3.2/usage/configuration/bgp" rel="nofollow noreferrer">BGP peering</a></p>
<p>You'll have to redeploy everything and in the case of StatefulSets, it really depends where you are storing your state. For example:</p>
<ul>
<li>Is it MySql? Backup your db and restore it in the new place.</li>
<li>Is it Cassandra? Can you reattach the same physical volumes in the cloud provider? if not, then you'll have to transfer your data.</li>
<li>Is it etcd, Consul or Zookeeper? Can you back it up or attach the same physical volumes?</li>
</ul>
|
<p>I am using a Horizontal Pod Autoscaler in Kubernetes as shown below. I want to use it for a service to scale between 4 and 40 replicas. Unfortunately due to the upscale delay it would take roughly an hour to scale from 4 to 40 replicas. Is there any chance I could provide something like a min/max Surge to upscale replicas? So that it would at least upscale by 2 or 4 replicas?</p>
<p>My API Object (helm):</p>
<pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: {{ required "A valid service.name entry required!" .Values.service.name }}
labels:
app: {{ .Values.service.name }}
version: {{ .Values.image.tag | quote }}
chart: {{ template "nodejs.chart" . }}
release: "{{ .Release.Name }}-{{ .Values.image.tag }}"
heritage: {{ .Release.Service }}
spec:
scaleTargetRef:
apiVersion: apps/v1beta1
kind: Deployment
name: {{ required "A valid service.name entry required!" .Values.service.name }}
minReplicas: {{ .Values.autoscaling.minReplicas }}
maxReplicas: {{ .Values.autoscaling.maxReplicas }}
metrics:
- type: Resource
resource:
name: cpu
targetAverageValue: {{ required "A valid autoscaling.cpuTargetValue entry is required" .Values.autoscaling.cpuTargetValue }}
- type: Resource
resource:
name: memory
targetAverageValue: {{ required "A valid autoscaling.memoryTargetValue entry is required" .Values.autoscaling.memoryTargetValue }}
</code></pre>
| <p>Not really. It seems that you are concerned about <em>thrashing</em>. There's not really a way to define an upscale <em>step</em> number combined with a cool off period. </p>
<p>Up until Kubernetes 1.11 you can specify the <a href="https://v1-11.docs.kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-cooldown-delay" rel="nofollow noreferrer"><code>--horizontal-pod-autoscaler-upscale-delay</code></a> on the kube-controller-manager which defaults to 3 minutes. This may not be enough, so I created this <a href="https://github.com/kubernetes/kubernetes/issues/69428" rel="nofollow noreferrer">issue</a></p>
<p>So starting with Kubernetes 1.12 that option has been removed in favor of a <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-cooldown-delay" rel="nofollow noreferrer">better scaling algorithm</a>.</p>
|
<p>I'm researching Kubernetes Services (one of the kinds of k8s components, much like Pods and ReplicaSets). They seem to function like reverse proxies, but I know k8s internally uses DNS, so perhaps they are some kind of load-balancing DNS? It would also somehow mean that, since a Pod can be relocated or exist on many nodes, a Service couldn't simply be a reverse proxy, since it too would need to be addressable yet share a single IP across many machines... (I'm obviously struggling to imagine how they were built without looking directly at the source code -- yet).</p>
<p>What makes up a K8s Service? DNS + Reverse Proxy, or something more/less? Some kind of networking trick?</p>
| <h3>Regular <code>ClusterIP</code> services</h3>
<p>Ensuring network connectivity for <code>type: ClusterIP</code> Services is the responsibility of the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/" rel="nofollow noreferrer">kube-proxy</a> -- a component that typically runs on each and every node of your cluster. The kube-proxy does this by intercepting outgoing traffic from Pods on each node and filtering traffic targeted at service IPs. Since it is connected to the Kubernetes API, the kube-proxy can determine which Pod IPs are associated with each service IP and can forward the traffic accordingly.</p>
<p>Conceptually, the kube-proxy might be considered similar to a reverse proxy (hence the name), but typically uses IPtables rules (or, starting at Kubernetes 1.9 optionally IPVS). Each created service will result in a set of IPtables rules on every node that intercepts and forwards traffic targeted at the service IP to the respective Pod IPs (service IPs are purely virtual and exist only in these IPtables rules; nowhere in the entire cluster you will find an actual network interface holding that IP).</p>
<p>Load balancing is also implemented via IPtables rules (or IPVS). Load balancing always occurs on the <em>source</em> node that the traffic originates from.</p>
<p>Here's an example from the <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/#is-kube-proxy-writing-iptables-rules" rel="nofollow noreferrer"><em>Debug Services</em></a> section of the documentation:</p>
<pre><code>u@node$ iptables-save | grep hostnames
-A KUBE-SEP-57KPRZ3JQVENLNBR -s 10.244.3.6/32 -m comment --comment "default/hostnames:" -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SEP-57KPRZ3JQVENLNBR -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.244.3.6:9376
-A KUBE-SEP-WNBA2IHDGP2BOBGZ -s 10.244.1.7/32 -m comment --comment "default/hostnames:" -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SEP-WNBA2IHDGP2BOBGZ -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.244.1.7:9376
-A KUBE-SEP-X3P2623AGDH6CDF3 -s 10.244.2.3/32 -m comment --comment "default/hostnames:" -j MARK --set-xmark 0x00004000/0x00004000
-A KUBE-SEP-X3P2623AGDH6CDF3 -p tcp -m comment --comment "default/hostnames:" -m tcp -j DNAT --to-destination 10.244.2.3:9376
-A KUBE-SERVICES -d 10.0.1.175/32 -p tcp -m comment --comment "default/hostnames: cluster IP" -m tcp --dport 80 -j KUBE-SVC-NWV5X2332I4OT4T3
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-WNBA2IHDGP2BOBGZ
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-X3P2623AGDH6CDF3
-A KUBE-SVC-NWV5X2332I4OT4T3 -m comment --comment "default/hostnames:" -j KUBE-SEP-57KPRZ3JQVENLNBR
</code></pre>
<p>For more information, have a look at the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies" rel="nofollow noreferrer">Virtual IPs and Service Proxies</a> section in the manual.</p>
<h3>Headless services</h3>
<p>Besides regular <code>ClusterIP</code> services, there are also <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer"><em>Headless Services</em></a> (which are declares by specifying the property <code>clusterIP: None</code> when creating the service). These will not use the kube-proxy; instead, their DNS hostname will directly resolve to all Pod IPs that are associated with the service. Load balancing is achieved via regular DNS round-robin.</p>
|
<p>I have a question about Kubernetes when deploying a new version.</p>
<p>My Kubernetes YAML configuration uses the RollingUpdate strategy. The problem comes when changing versions this way: if a php-fpm pod is in the middle of performing an action, does that action get lost when the pod is swapped for the new version?</p>
<p>My main question is whether Kubernetes, with this strategy, takes into account that a pod is currently in use and, if so, waits until it finishes what it is doing before replacing it.</p>
<p>Thanks!</p>
| <p>If something is dropping your sessions, it would be a bug. Generally speaking, if you have a 'Service' that forwards to multiple backend replicas, an update happens one replica at a time. Something like this:</p>
<ul>
<li>New pod created.</li>
<li>Wait for the new pod to be ready and serviceable.</li>
<li>Put the new pod in the Service pool.</li>
<li>Remove the old pod from the Service pool</li>
<li>Drain old pod. Don't take any more incoming connections and wait for existing connections to close (see the sketch after this list for tuning this window).</li>
<li>Take down the old pod.</li>
</ul>
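<p>If your workers hold long-running requests, you can stretch that drain window with a longer grace period and a <code>preStop</code> hook; a sketch, where the image, sleep duration and grace period are assumptions:</p>
<pre><code>spec:
  terminationGracePeriodSeconds: 120
  containers:
  - name: php-fpm
    image: php:7.2-fpm
    lifecycle:
      preStop:
        exec:
          # give endpoint removal time to propagate before shutdown begins
          command: ["sh", "-c", "sleep 10"]
</code></pre>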
|
<p>I have configured Istio ingress with a Let's Encrypt certificate.
I am able to access different services over HTTPS, running on different ports, by using gateways and virtual services.</p>
<p><strong>But kubernetes-dashboard run on 443 port in kube-system namespace and with its own certificate, How i can expose it through istio gateways and virtualservice.</strong></p>
<p>I have defined a subdomain for the dashboard and created a gateway and virtualservice directing 443 traffic to the Kubernetes dashboard service, but it's not working.</p>
<p><a href="http://istio.io/docs/reference/config/istio.networking.v1alpha3/#TLSRoute" rel="nofollow noreferrer">for https virtual service config i have taken reference from for istio doc</a></p>
| <p>It sounds like you want to configure an ingress gateway to perform SNI passthrough instead of TLS termination. You can do this by setting the tls mode in your Gateway configuration to PASSTHROUGH something like this:</p>
<pre class="lang-yml prettyprint-override"><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: dashboard
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 443
name: https-dashboard
protocol: HTTPS
tls:
mode: PASSTHROUGH
hosts:
- dashboard.example.com
</code></pre>
<p>A complete passthrough example can be found <a href="https://preliminary.istio.io/docs/examples/advanced-gateways/ingress-sni-passthrough/" rel="nofollow noreferrer">here</a>.</p>
|
<p>I am deploying the Spring Cloud Data Flow Kubernetes server. This deployment also deploys a MySQL server in the cluster, but I would like to write a service to connect to my MySQL server on Amazon AWS instead of deploying one on Kubernetes. Can I do this using the ExternalName service type or something similar? Also, how can I provide the username and password to connect to this DB? </p>
| <p>You can override the default <a href="https://github.com/spring-cloud/spring-cloud-dataflow-server-kubernetes/tree/master/src/kubernetes/mysql" rel="nofollow noreferrer"><code>mysql</code></a> specific deployment configurations. More details <a href="https://docs.spring.io/spring-cloud-dataflow-server-kubernetes/docs/1.6.3.RELEASE/reference/htmlsingle/#_deploy_services_and_dataflow" rel="nofollow noreferrer">here</a> and <a href="https://docs.spring.io/spring-cloud-dataflow-server-kubernetes/docs/1.6.3.RELEASE/reference/htmlsingle/#configuration-rdbms" rel="nofollow noreferrer">here</a>.</p>
<p>Once you have the above-taken care, you'd update Spring Datasource configuration defined in <a href="https://github.com/spring-cloud/spring-cloud-dataflow-server-kubernetes/blob/master/src/kubernetes/server/server-config-rabbit.yaml#L27" rel="nofollow noreferrer">SCDF deployment</a>, so you can point to the external cluster.</p>
<p>We will further clarify this in the reference guide.</p>
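<p>For the ExternalName idea raised in the question, a minimal sketch could look like this (the RDS hostname is a placeholder). Note that credentials cannot live in the Service itself; they would go into the Spring datasource configuration or a Kubernetes Secret:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: ExternalName
  # DNS name of the external AWS database (placeholder value)
  externalName: mydb.example-id.us-east-1.rds.amazonaws.com
</code></pre>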
|
<p>I just started to explore Kubernetes and I deployed a service in a container on Kubernetes which is running on a cloud.</p>
<p>My service needs to make a call to a database that requires a certificate for authentication. I am wondering what the best practice would be for storing/installing the certificate on Kubernetes.</p>
<p>I need to access the certificate from my code, which I use as follows:</p>
<pre><code>const (
serverCertificate = "./cert/api.cer"
serverPrivateKey = "./cert/api.key"
)
creds, err := credentials.NewServerTLSFromFile(serverCertificate, serverPrivateKey)
</code></pre>
| <p>You could store the certificate in a Kubernetes Secret: <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="noreferrer">https://kubernetes.io/docs/concepts/configuration/secret/</a></p>
<p>Here is an example on how to do so:
<a href="https://kubernetes.io/docs/concepts/configuration/secret/#creating-a-secret-using-kubectl-create-secret" rel="noreferrer">https://kubernetes.io/docs/concepts/configuration/secret/#creating-a-secret-using-kubectl-create-secret</a></p>
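<p>A hedged sketch of how this could look for the code above. First create the secret from your files (a generic secret preserves the original file names, unlike a TLS-type secret, which renames them to tls.crt/tls.key):</p>
<pre><code>kubectl create secret generic api-certs --from-file=api.cer --from-file=api.key
</code></pre>
<p>Then mount it into the container so the files appear where the code expects them (adjust <code>mountPath</code> so it matches the relative <code>./cert</code> path):</p>
<pre><code>spec:
  containers:
    - name: api
      # ... image, ports, etc.
      volumeMounts:
        - name: certs
          mountPath: /app/cert   # assumes the binary runs from /app
          readOnly: true
  volumes:
    - name: certs
      secret:
        secretName: api-certs
</code></pre>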
|
<p>So, I'm running a docker container that's getting killed.</p>
<pre><code>Memory cgroup out of memory: Kill process 1014588 (my-process) score 1974 or sacrifice child
</code></pre>
<p>The pid doesn't really help since the instance will be restarted. I'm not sure what to make of the <code>score 1974</code> portion. Is that some kind of rating? Is that the number of bytes it needs to drop to?</p>
<p>Could the kill be issued because of other things on the system squeezing this container, or can it <strong>only</strong> happen because this container itself has topped out?</p>
<p>And the sacrifice child part, I think that would be in regards to some kind of subprocess?</p>
| <p>I believe this is actually answered <a href="https://unix.stackexchange.com/questions/282155/what-is-the-out-of-memory-message-sacrifice-child">here</a></p>
<p>If you check the Linux kernel code <a href="https://github.com/torvalds/linux/blob/master/mm/oom_kill.c" rel="noreferrer">here</a>. You'll see:</p>
<pre><code>/*
* If any of p's children has a different mm and is eligible for kill,
* the one with the highest oom_badness() score is sacrificed for its
* parent. This attempts to lose the minimal amount of work done while
* still freeing memory.
*/
</code></pre>
<p>mm means 'Memory Management'.</p>
<p>The only difference here is that this kill is getting triggered by cgroups because you have probably run into memory limits.</p>
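<p>To confirm and adjust this on the Kubernetes side, you can check the container's memory limits and raise them if needed; a minimal sketch (the values are illustrative):</p>
<pre><code>resources:
  requests:
    memory: "256Mi"
  limits:
    memory: "512Mi"   # the cgroup limit the OOM killer enforces
</code></pre>
<p>When a container is killed this way, <code>kubectl describe pod</code> typically reports the container's last state as <code>OOMKilled</code>.</p>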
|
<p>As a learner of Kubernetes concepts, how they work, and how to deploy with them, I have a couple of cases that I don't know how to achieve. I am looking for advice or some guidelines on how to achieve them.</p>
<p>I am using the Google Cloud Platform. The current running flow is described below. A push to the google source repository triggers Cloud Build which creates a docker image and pushes the image to the running cluster nodes.</p>
<p>Case 1: I want traffic to be routed to the new pods once they are up and running, and the old pods killed only after each completes its in-flight requests. Zero downtime is what I'm looking to achieve.</p>
<p>Case 2: What will happen if the disk usage of a running pod reaches 100%, or (in the Debian case) the inode count reaches full capacity? Will Kubernetes create new pods to manage this?</p>
<p>Case 3: How to manage pod to database connection limits?</p>
| <ol>
<li><p>Like the other answer, use <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">Liveness and Readiness probes</a>. Basically, a new pod is added to the service pool, but it will only serve traffic after the readiness probe has passed. The old pod is removed from the Service pool, then drained and then terminated. This happens in a rolling fashion, one pod at a time (a minimal probe configuration is sketched after this list).</p></li>
<li><p>This really depends on the capacity of your cluster and the ability to schedule pods depending on the limits for the containers in them. For more about setting up limits for containers refer to <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/" rel="nofollow noreferrer">here</a>. In terms of the inode limit, if you reach it on a node, the kubelet won't be able to run any more pods on that node. The kubelet eviction manager also has a <a href="https://github.com/kubernetes/kubernetes/pull/35137" rel="nofollow noreferrer">mechanism</a> in where evicts some pods using the most inodes. You can also configure your <a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#eviction-policy" rel="nofollow noreferrer">eviction thresholds</a> on the kubelet.</p></li>
<li><p>This would be more a limitation at the OS level combined your stateful application configuration. You can keep this configuration in a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">ConfigMap</a>. And for example in something for MySql the option would be <a href="https://dev.mysql.com/doc/refman/8.0/en/server-system-variables.html#sysvar_max_connections" rel="nofollow noreferrer">max_connections</a>.</p></li>
</ol>
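<p>Regarding point 1, a minimal probe configuration might look like this (the path and port are assumptions about your application):</p>
<pre><code>containers:
  - name: app
    image: my-app:latest
    readinessProbe:          # gates traffic from the Service
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:           # restarts the container if it hangs
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
</code></pre>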
|
<p>I deployed a helm chart using <code>config-v1.yaml</code>. I added some data to my helm chart app via an api exposed in the helm chart</p>
<p>I want to deploy/update the current chart with values from <code>config-v2.yaml</code> because there is a feature I want to expose.</p>
<p>When I use <code>helm upgrade -f config-v2.yaml my-chart stable/chart</code>, the previous Helm release is blown away, meaning the data I added with the API is gone. So I figure I need to add a volume to my container.</p>
<p>When I add a PersistentVolume and PersistentVolumeClaim, the app fails to update with values from <code>config-v2.yaml</code> which means I don't get the new features I want.</p>
<p>What is the proper way to do these types of updates to helm charts?</p>
| <p>To upgrade, use the <code>--reuse-values</code> flag, as you are providing extra customization on top of the existing values.</p>
<p>In your case, you can use:</p>
<pre><code>helm upgrade --reuse-values -f config-v2.yaml my-chart stable/chart
</code></pre>
<p>Please refer to the <a href="https://helm.sh/docs/helm/helm_upgrade/" rel="nofollow noreferrer">docs</a>.</p>
|
<p>I have a Kubernetes cluster on a private network (a private server, not AWS or Google Cloud), and I created a Service to be able to access my application. However, I need to access it from outside the cluster, so I created an Ingress and added ingress-nginx to the cluster.</p>
<p>This is the YAML I'm using after making several attempts:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: demo-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- host: k8s.local
http:
paths:
- path: /
backend:
serviceName: nginx
servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx
spec:
type: ClusterIP
selector:
name: nginx
ports:
- port: 80
targetPort: 80
protocol: TCP
# selector:
# app: nginx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx
spec:
replicas: 1
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: echoserver
image: nginx
ports:
- containerPort: 80
</code></pre>
<p>I ran yaml like this: <strong>kubectl create -f file.yaml</strong></p>
<p>In the /etc/hosts file I added <strong>k8s.local</strong> to the ip of the master server.</p>
<p>When trying the command in or out of the master server a "Connection refused" message appears:
<strong>$ curl <a href="http://172.16.0.18:80/" rel="noreferrer">http://172.16.0.18:80/</a> -H 'Host: k8s.local'</strong></p>
<p>I do not know if it's important, but I'm using Flannel in the cluster.</p>
<p>My idea is just to create a 'hello world' and expose it out of the cluster!</p>
<p>Do I need to change anything in the configuration to allow this access?</p>
<hr>
<p>YAML file edited:</p>
<pre><code> apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: demo-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
# nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: k8s.local
http:
paths:
- path: /teste
backend:
serviceName: nginx
servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx
spec:
type: LoadBalancer # NodePort
selector:
app: nginx
ports:
- port: 80
targetPort: 80
protocol: TCP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx
spec:
replicas: 1
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: echoserver
image: nginx
ports:
- containerPort: 80
</code></pre>
| <p>You can deploy the ingress controller as a DaemonSet with host port 80. The Service of the controller will not matter then; you can point your domain at every node in your cluster.</p>
<p>You can use a NodePort-type service, but that will force you to use some port in the 30k vicinity; you will not be able to use port 80.</p>
<p>Of course the best solution is to use a cloud provider with a load balancer.</p>
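<p>A hedged sketch of the DaemonSet/hostPort idea from the first option (the image tag is illustrative, and the controller's usual args and RBAC are omitted):</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
spec:
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.20.0
          ports:
            - containerPort: 80
              hostPort: 80   # binds port 80 on every node
</code></pre>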
|
<p>I'm attempting to run Minikube in a VMWare Workstation guest, running Ubuntu 18.04.</p>
<p><code>kubectl version</code> results in:</p>
<p><code>Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T17:05:32Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
</code></p>
<p><code>minikube version</code> results in:</p>
<pre><code>minikube version: v0.29.0
</code></pre>
<p>I have enabled Virtualize Intel VT-x/EPT or AMD-V/RVI on the VMWare guest configuration. I have 25GB of hard drive space. Yet, regardless of how I attempt to start Minikube, I get the following error:</p>
<pre><code>Starting local Kubernetes v1.10.0 cluster...
Starting VM...
E1005 11:02:32.495579 5913 start.go:168] Error starting host: Error creating host: Error creating machine: Error in driver during machine creation: Error creating VM: virError(Code=1, Domain=10, Message='internal error: qemu unexpectedly closed the monitor: 2018-10-05T09:02:29.926633Z qemu-system-x86_64: error: failed to set MSR 0x38d to 0x0
qemu-system-x86_64: /build/qemu-11gcu0/qemu-2.11+dfsg/target/i386/kvm.c:1807: kvm_put_msrs: Assertion `ret == cpu->kvm_msr_buf->nmsrs' failed.').
Retrying.
</code></pre>
<p>Commands I've tried:</p>
<pre><code>minikube start --vm-driver=kvm2
minikube start --vm-driver=kvm
minikube start --vm-driver=none
</code></pre>
<p>All result in the same thing.</p>
<p>I notice that on the Ubuntu guest, the network will shortly disconnect and re-connect when I run <code>minikube start</code>. Is it a problem with the network driver? How would I debug this?</p>
| <p>Apparently installing VirtualBox as a dependency and using it as the VM driver works.</p>
<p><code>sudo apt install virtualbox virtualbox-ext-pack</code></p>
<p><code>minikube start --vm-driver=virtualbox</code></p>
<p>Now results in:</p>
<pre><code>Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Downloading kubeadm v1.10.0
Downloading kubelet v1.10.0
Finished Downloading kubelet v1.10.0
Finished Downloading kubeadm v1.10.0
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
</code></pre>
<p>etc</p>
|
<p>Imagine I am developing a microservices-based application. The services will be deployed to Kubernetes with the Helm package manager. Some microservices end up having pretty similar YAML configurations, while others differ. What is the best practice for this? I have a few options:</p>
<ol>
<li>Use a generic chart and pass different configuration using values.env.yaml for each microservice and then deploy this with a different release name. </li>
<li>Create a chart for every single microservice, no matter how similar they are in terms of configuration?</li>
</ol>
| <p>This is an opinion question, so I'll answer with an opinion.</p>
<ol>
<li><p>Upside: You would have to change just a few values in your values.yaml depending on the microservice and it would be easier to maintain your values.yml. Your Helm charts repo may not grow as fast.</p>
<p>Downside: It will be harder to create you <code>_helpers.tpl</code> file for example. That file will grow rapidly and it could get confusing for people creating microservices understand it.</p></li>
<li><p>Upside: Separation of your microservice as you scale to hundreds. Developers can work only on their microservice deployment.</p>
<p>Downside: File spread, too many files everywhere, and your Helm charts repo can grow rapidly. Also, a risk of large code duplication.</p></li>
</ol>
<p>The more general practice is number 2 for the official Helm charts but then again every chart is for a very different application.</p>
|
<p>I have an Apache Airflow working on Kubernetes (via Google Composer). I want to retrieve one variable store in Secret:</p>
<p><a href="https://i.stack.imgur.com/knUeI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/knUeI.png" alt="enter image description here"></a></p>
<p>I need to consume the variables stored in this Secret from a DAG in Airflow.(python).</p>
| <p>The variables are stored as "Environment Vars", so in Python it is quite easy:</p>
<pre><code>import os
db_user = os.environ['DB_USER']
</code></pre>
|
<p>I want to understand better how Kubernetes works, and there are some doubts I haven't found answered in the documentation.</p>
<p>I have a simple Kubernetes cluster: a master and 2 workers.
I have created a Docker image of my app, which is stored in Docker Hub.</p>
<p>I created a deployment_file.yaml, where I state that I want to deploy my app container in worker 3, thanks to node affinity.</p>
<p>If imagePullPolicy is set to Always:</p>
<ul>
<li><p>Who downloads the image from Docker Hub: the master itself, or the worker where this image will be deployed?
If it is the master that pulls the image, does it then transfer replicas of this image to the workers?</p></li>
<li><p>When the image is pulled, is it stored in any local folder in Kubernetes?</p></li>
</ul>
<p>I would like to understand better how data is transferred. Thanks.</p>
| <p>Each of the minions (workers) will pull the Docker image and store it locally. <code>docker image ls</code> will show the list of images on the minions.</p>
<p>To address where the images are stored, take a look at the SO answer <a href="https://stackoverflow.com/questions/19234831/where-are-docker-images-stored-on-the-host-machine">here</a>.</p>
|
<p>I have created a shared library and it has a groovy file named <code>myBuildPlugin.groovy</code>:</p>
<pre><code>def label = "worker-${UUID.randomUUID().toString()}"
podTemplate(label: label, yaml: """
apiVersion: v1
kind: Pod
metadata:
name: my-build
spec:
containers:
- name: jnlp
image: dtr.myhost.com/test/jenkins-build-agent:latest
ports:
- containerPort: 8080
- containerPort: 50000
resources:
limits:
cpu : 1
memory : 1Gi
requests:
cpu: 200m
memory: 256Mi
env:
- name: JENKINS_URL
value: http://jenkins:8080
- name: mongo
image: mongo
ports:
- containerPort: 8080
- containerPort: 50000
- containerPort: 27017
resources:
requests:
cpu: 200m
memory: 256Mi
limits:
cpu: 1
memory: 512Mi
imagePullSecrets:
- name: dtrsecret""")
{
node(label) {
pipelineParams.step1.call([label : label])
}
}
</code></pre>
<p>When in my project I use myBuildPlugin as below, the log shows it waits for an executor forever. When I look at Jenkins I can see the agent is being created but for some reason it can't talk to it via port <code>50000</code> (or perhaps the pod can't talk to the agent!)</p>
<p>Later I tried to remove <code>yaml</code> and instead used the following code:</p>
<pre><code>podTemplate(label: 'mypod', cloud: 'kubernetes', containers: [
containerTemplate(
name: 'jnlp',
image: 'dtr.myhost.com/test/jenkins-build-agent:latest',
ttyEnabled: true,
privileged: false,
alwaysPullImage: false,
workingDir: '/home/jenkins',
resourceRequestCpu: '1',
resourceLimitCpu: '100m',
resourceRequestMemory: '100Mi',
resourceLimitMemory: '200Mi',
envVars: [
envVar(key: 'JENKINS_URL', value: 'http://jenkins:8080'),
]
),
containerTemplate(name: 'maven', image: 'maven:3.5.0', command: 'cat', ttyEnabled: true),
containerTemplate(name: 'docker', image: 'docker', command: 'cat', ttyEnabled: true)
],
volumes: [
emptyDirVolume(mountPath: '/etc/mount1', memory: false),
hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock')
],
imagePullSecrets: [ 'dtrsecret' ],
)
{
node(label) {
pipelineParams.step1.call([label : label])
}
}
</code></pre>
<p>Still no luck. Interestingly if I define all these containers in Jenkins configuration, things work smoothly. This is my configuration:</p>
<p><a href="https://i.stack.imgur.com/EyqQe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EyqQe.png" alt="enter image description here"></a></p>
<p>and this is the pod template configuration:</p>
<p><a href="https://i.stack.imgur.com/L2574.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L2574.png" alt="enter image description here"></a></p>
<p>It appears that if I change the label to something other than <code>jenkins-jenkins-slave</code>, the issue happens. This is the case even if it's defined via Jenkins' configuration page. If that's the case, how am I supposed to create multiple pod templates for different types of projects?</p>
<p>Just today, I also tried to use pod inheritance as below without any success:</p>
<pre><code>def label = 'kubepod-test'
podTemplate(label : label, inheritFrom : 'default',
containers : [
containerTemplate(name : 'mongodb', image : 'mongo', command : '', ttyEnabled : true)
]
)
{
node(label) {
}
}
</code></pre>
<p>Please help me on this issue. Thanks</p>
| <p>There's something iffy about your pod configuration: you can't have your Jenkins and Mongo containers using the same port <code>50000</code>. Generally speaking, you want to specify a unique port since pods share the same <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#what-is-a-pod" rel="nofollow noreferrer">port space</a>.</p>
<p>In this case looks like you need port <code>50000</code> to set up a tunnel to the Jenkins agent. Keep in mind that the Jenkins plugin might be doing other things such as setting up a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Kubernetes Service</a> or using the internal <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">Kubernetes DNS</a>.</p>
<p>In the second example, I don't even see port <code>50000</code> exposed.</p>
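<p>For example, in the first pod template the mongo container arguably only needs its own port, leaving <code>50000</code> (the agent tunnel) to the <code>jnlp</code> container; a sketch:</p>
<pre><code>  - name: mongo
    image: mongo
    ports:
      - containerPort: 27017
</code></pre>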
|
<p>I was following this URL: <a href="https://stackoverflow.com/questions/42564058/how-to-use-local-docker-images-with-minikube">How to use local docker images with Minikube?</a>
I couldn't add a comment, so thought of putting my question here:</p>
<p>On my laptop, I have Linux Mint OS. Details as below:</p>
<pre><code>Mint version 19,
Code name : Tara,
PackageBase : Ubuntu Bionic
Cinnamon (64-bit)
</code></pre>
<p>As per one the answer on the above-referenced link:</p>
<ol>
<li>I started minikube and checked pods and deployments</li>
</ol>
<blockquote>
<pre><code>xxxxxxxxx:~$ pwd
/home/sj
xxxxxxxxxx:~$ minikube start
xxxxxxxxxx:~$ kubectl get pods
xxxxxxxxxx:~$ kubectl get deployments
</code></pre>
</blockquote>
<p>I ran command docker images</p>
<pre><code>xxxxxxxxx:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<username>/spring-docker-01 latest e10f88e1308d 6 days ago 640MB
openjdk 8 81f83aac57d6 4 weeks ago 624MB
mysql 5.7 563a026a1511 4 weeks ago 372MB
</code></pre>
<ol start="2">
<li>I ran below command: </li>
</ol>
<blockquote>
<p>eval $(minikube docker-env)</p>
</blockquote>
<ol start="3">
<li><p>Now when I check docker images, it looks like, as the <a href="https://github.com/kubernetes/minikube/blob/0c616a6b42b28a1aab8397f5a9061f8ebbd9f3d9/README.md#reusing-the-docker-daemon" rel="nofollow noreferrer">README</a> describes, it reuses the Docker daemon from Minikube with eval $(minikube docker-env).</p>
<p>xxxxxxxxxxxxx:~$ docker images</p></li>
</ol>
<blockquote>
<pre><code>REPOSITORY TAG IMAGE ID CREATED SIZE
nginx alpine 33c5c6e11024 9 days ago 17.7MB
k8s.gcr.io/coredns 1.2.2 367cdc8433a4 5 weeks ago 39.2MB
k8s.gcr.io/kubernetes-dashboard-amd64 v1.10.0 0dab2435c100 5 weeks ago 122MB
k8s.gcr.io/kube-proxy-amd64 v1.10.0 bfc21aadc7d3 6 months ago 97MB
k8s.gcr.io/kube-controller-manager-amd64 v1.10.0 ad86dbed1555 6 months ago 148MB
k8s.gcr.io/kube-apiserver-amd64 v1.10.0 af20925d51a3 6 months ago 225MB
k8s.gcr.io/kube-scheduler-amd64 v1.10.0 704ba848e69a 6 months ago 50.4MB
k8s.gcr.io/etcd-amd64 3.1.12 52920ad46f5b 6 months ago 193MB
k8s.gcr.io/kube-addon-manager v8.6 9c16409588eb 7 months ago 78.4MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64 1.14.8 c2ce1ffb51ed 9 months ago 41MB
k8s.gcr.io/k8s-dns-sidecar-amd64 1.14.8 6f7f2dc7fab5 9 months ago 42.2MB
k8s.gcr.io/k8s-dns-kube-dns-amd64 1.14.8 80cc5ea4b547 9 months ago 50.5MB
k8s.gcr.io/pause-amd64 3.1 da86e6ba6ca1 9 months ago 742kB
gcr.io/k8s-minikube/storage-provisioner v1.8.1 4689081edb10 11 months ago 80.8MB
k8s.gcr.io/echoserver 1.4 a90209bb39e3 2 years ago 140MB
</code></pre>
</blockquote>
<p><em>Note: as you may have noticed, the docker images command listed different images before and after step 2.</em></p>
<ol start="4">
<li>As I didn't see the image that I wanted to put on minikube, I pulled it from my docker hub.</li>
</ol>
<blockquote>
<pre><code>xxxxxxxxxxxxx:~$ docker pull <username>/spring-docker-01
Using default tag: latest
latest: Pulling from <username>/spring-docker-01
05d1a5232b46: Pull complete
5cee356eda6b: Pull complete
89d3385f0fd3: Pull complete
80ae6b477848: Pull complete
40624ba8b77e: Pull complete
8081dc39373d: Pull complete
8a4b3841871b: Pull complete
b919b8fd1620: Pull complete
2760538fe600: Pull complete
48e4bd518143: Pull complete
Digest: sha256:277e8f7cfffdfe782df86eb0cd0663823efc3f17bb5d4c164a149e6a59865e11
Status: Downloaded newer image for <username>/spring-docker-01:latest
</code></pre>
</blockquote>
<ol start="5">
<li>Verified if I can see that image using "docker images" command.</li>
</ol>
<blockquote>
<pre><code>xxxxxxxxxxxxx:~$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<username>/spring-docker-01 latest e10f88e1308d 6 days ago 640MB
nginx alpine 33c5c6e11024 10 days ago 17.7MB
</code></pre>
</blockquote>
<ol start="6">
<li>Then I tried to build the image as stated in the referenced link's steps.</li>
</ol>
<blockquote>
<pre><code>xxxxxxxxxx:~$ docker build -t <username>/spring-docker-01 .
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /home/sj/Dockerfile: no such file or directory
</code></pre>
</blockquote>
<p><strong>As the error states that the Dockerfile doesn't exist at that location, I am not sure where exactly I can see the Dockerfile for the image I pulled from Docker Hub.</strong></p>
<p>It looks like I have to go to the location where the image has been pulled and run the above-mentioned command from there. Please correct me if I'm wrong.</p>
<p>Below are the steps I will be doing after I fix the above-mentioned issue.</p>
<pre><code># Run in minikube
kubectl run hello-foo --image=myImage --image-pull-policy=Never
# Check that it's running
kubectl get pods
</code></pre>
<hr>
<p>UPDATE-1</p>
<p>There is a mistake in the above steps:
step 6 is not needed. The image has already been pulled from Docker Hub, so there is no need for the <code>docker build</code> command.</p>
<p>With that, I went ahead and followed instructions as mentioned by @aurelius in response.</p>
<pre><code>xxxxxxxxx:~$ kubectl run sdk-02 --image=<username>/spring-docker-01:latest --image-pull-policy=Never
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/sdk-02 created
</code></pre>
<p>Checked pods and deployments</p>
<pre><code>xxxxxxxxx:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
sdk-02-b6db97984-2znlt 1/1 Running 0 27s
xxxxxxxxx:~$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
sdk-02 1 1 1 1 35s
</code></pre>
<p>Then exposed deployment on port 8084 as I was using other ports like 8080 thru 8083</p>
<pre><code>xxxxxxxxx:~$ kubectl expose deployment sdk-02 --type=NodePort --port=8084
service/sdk-02 exposed
</code></pre>
<p>Then verified if service has been started, checked if no issue on kubernetes dashboard and then checked the url</p>
<pre><code>xxxxxxxxx:~$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7h
sdk-02 NodePort 10.100.125.120 <none> 8084:30362/TCP 13s
xxxxxxxxx:~$ minikube service sdk-02 --url
http://192.168.99.101:30362
</code></pre>
<p>When I tried to open URL: <a href="http://192.168.99.101:30362" rel="nofollow noreferrer">http://192.168.99.101:30362</a> in browser I got message:</p>
<pre><code>This site can’t be reached
192.168.99.101 refused to connect.
Search Google for 192 168 101 30362
ERR_CONNECTION_REFUSED
</code></pre>
<p><strong>So the question: Is there any issue with the steps performed?</strong></p>
<hr>
<p>UPDATE-2</p>
<p>The issue was with below step:</p>
<pre><code>xxxxxxxxx:~$ kubectl expose deployment sdk-02 --type=NodePort --port=8084
service/sdk-02 exposed
</code></pre>
<p>Upon checking the Dockerfile of my image <code><username>/spring-docker-01:latest</code>, I saw it was exposing port 8083 (<code>EXPOSE 8083</code>).
Maybe that was causing the issue,
so I went ahead and changed the expose command:</p>
<pre><code>xxxxxxxxx:~$ kubectl expose deployment sdk-02 --type=NodePort --port=8083
service/sdk-02 exposed
</code></pre>
<p>And then it started working.</p>
<p><em>If anyone has something to add to this, please feel free.</em></p>
<p><strong>However, I am still not sure where exactly I can see the Dockerfile for the image I pulled from Docker Hub.</strong></p>
| <p>The <code>docker build</code> does not know what you mean by your <a href="https://docs.docker.com/v17.09/edge/engine/reference/commandline/build/#usage" rel="nofollow noreferrer">command</a>, because the <code>-t</code> flag requires a specific format:</p>
<blockquote>
<p>--tag , -t Name and optionally a tag in the ‘<strong>name:tag</strong>’ format</p>
</blockquote>
<p><code>xxxxxxxxxx:~/Downloads$ docker build -t shivnilesh1109/spring-docker-01 .</code></p>
<p>So the proper command here should be:</p>
<pre><code>docker build -t shivnilesh1109/spring-docker-01:v1(1) .(2)
</code></pre>
<p>(1) desired name of your container:tag
(2) directory in which your dockerfile is.</p>
<p>After you proceed to minikube deployment, it will be enough just to run:
<code>kubectl run *desired name of deployment/pod* --image=*name of the container with tag* --image-pull-policy=Never</code></p>
<p>If this does not fix your issue, try adding the path to the Dockerfile manually. I've tested this on my machine: the error stopped after tagging the image properly, and it also worked with the full path to the Dockerfile; otherwise I had the same error as you.</p>
|
<p>when I'm running the command:</p>
<pre><code>gcloud beta compute instance-groups managed rolling-action start-update gke-playground-pool-test-1-420d5b80-grp --version template=elk-pool-template-us-west1-3 --zone us-west1-b --max-surge 1 --max-unavailable 1 --type opportunistic --force
</code></pre>
<p>I'm getting the following error:</p>
<pre><code>ERROR: (gcloud.beta.compute.instance-groups.managed.rolling-action.start-update) Could not fetch resource:
- Invalid value for field 'resource.instanceTemplate': ''. Unable to create an instance from instanceTemplate elk-pool-template-us-west1-3 in zone us-west1-b:
Invalid value for field 'instance.networkInterfaces[0].accessConfigs[0].natIP': The specified external IP address 'xx.xxx.xxx.xx' is not reserved in region 'us-west1'.
</code></pre>
| <blockquote>
<p>The specified external IP address 'xx.xxx.xxx.xx' is not reserved in
region 'us-west1' </p>
</blockquote>
<p>The issue you have described is usually caused by a template that tries to claim an unreserved external IP. To make those IPs available, you first have to reserve them in GCP. You can find more about external IP addresses <a href="https://cloud.google.com/compute/docs/ip-addresses/" rel="nofollow noreferrer">here.</a></p>
<p>If the reserved IP address is available, you can use it. Otherwise, the service will allocate an external IP that is available at that moment. You can find more about reserving IP addresses <a href="https://cloud.google.com/compute/docs/ip-addresses/reserve-static-external-ip-address" rel="nofollow noreferrer">here</a>.</p>
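<p>A sketch of the relevant commands (the address name is a placeholder):</p>
<pre><code>gcloud compute addresses create my-reserved-ip --region=us-west1
gcloud compute addresses list --filter="region:us-west1"
</code></pre>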
|
<p>I have tried to install Kubernetes on an Amazon Linux machine. I followed a lot of documents and videos; in those tutorials kubectl and kops install easily, but in my case, even following the same steps, kubectl is not working for me.</p>
<p>error: The connection to the server localhost:8080 was refused - did you specify the right host or port? I opened all required ports but am still getting the error.</p>
<p><a href="https://i.stack.imgur.com/3aqh7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3aqh7.png" alt="enter image description here"></a></p>
| <p>1) kubelet is not a service, it's just a binary executable file, so there is no service unit file for it on your system.</p>
<p>2) How did you use kops to deploy the cluster on AWS? I always use the following steps, which work for me:</p>
<p>Install <code>awscli</code></p>
<pre><code>sudo apt-get install python python-pip
sudo pip install awscli
</code></pre>
<p>Create aws credentials for your <strong>admin user</strong> (using IAM) and configure your awscli utility to use them</p>
<pre><code>aws configure
</code></pre>
<p>Install <code>kops</code></p>
<pre><code>curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops
</code></pre>
<p>as well as <code>kubectl</code></p>
<pre><code>apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubectl
</code></pre>
<p>Create <code>s3 bucket</code> for Kubernetes's storage with some name</p>
<pre><code>aws s3api create-bucket --bucket k8s --region eu-central-1 --create-bucket-configuration LocationConstraint=eu-central-1
aws s3api put-bucket-versioning --region eu-central-1 --bucket k8s --versioning-configuration Status=Enabled
aws s3 ls
</code></pre>
<p>Create a hosted zone (or subdomain) for the Kubernetes cluster in Route53, or use an existing one in <code>Route53</code>, for example <code>test.com</code>.</p>
<p>Create cluster via kops:</p>
<pre><code>kops create cluster --name=k8s.test.com \
--state=s3://k8s \
--zones=eu-central-1a \
--node-count=2 \
--node-size=t2.small \
--master-count=1 \
--master-size=t2.micro \
--master-zones=eu-central-1a \
--dns-zone=test.com \
--authorization=RBAC \
--yes
</code></pre>
<p>wait for a while and check if it's running:</p>
<pre><code>kops validate cluster --name=k8s.test.com --state=s3://k8s
</code></pre>
|
<p>I'm trying to set max.request.size for Kafka in Kubernetes, but it doesn't seem to work. How can I do that?</p>
<p>I also tried with <code>KAFKA_MAX_REQUEST_SIZE</code>, again with no positive result:</p>
<pre><code> spec:
containers:
- name: kafka
image: wurstmeister/kafka
...
env:
- name: KAFKA_ZOOKEEPER_CONNECT
value: "zookeeper:2181"
- name: KAFKA_ADVERTISED_PORT
value: "9092"
- name: KAFKA_DELETE_TOPIC_ENABLE
value: "true"
- name: KAFKA_CREATE_TOPICS
value: "tpinput:1:1,tpoutput:1:1,operinput:1:1,operoutput:1:1,authoutput:1:1"
- name: KAFKA_JMX_PORT
value: "7071"
- name: KAFKA_ZOOKEEPER_TIMEOUT_MS
value: "16000"
- name: KAFKA_MESSAGE_MAX_BYTES
value: "209715200"
- name: KAFKA_FETCH_MESSAGE_MAX_BYTES
value: "209715200"
- name: KAFKA_REPLICA_FETCH_MAX_BYTES
value: "209715200"
- name: KAFKA_PRODUCER_MAX_REQUEST_SIZE
value: "9651322321"
- name: KAFKA_ADVERTISED_HOST_NAME
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: KUBE_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
imagePullPolicy: Always
restartPolicy: Always
envFrom:
- configMapRef:
name: env-config-for-pods
</code></pre>
| <p>Based on the Docker entrypoint script <a href="https://github.com/wurstmeister/kafka-docker/blob/master/start-kafka.sh" rel="nofollow noreferrer">here</a> and this part in that file:</p>
<pre><code>EXCLUSIONS="|KAFKA_VERSION|KAFKA_HOME|KAFKA_DEBUG|KAFKA_GC_LOG_OPTS|KAFKA_HEAP_OPTS|KAFKA_JMX_OPTS|KAFKA_JVM_PERFORMANCE_OPTS|KAFKA_LOG|KAFKA_OPTS|"
# Read in env as a new-line separated array. This handles the case of env variables have spaces and/or carriage returns. See #313
IFS=$'\n'
for VAR in $(env)
do
env_var=$(echo "$VAR" | cut -d= -f1)
if [[ "$EXCLUSIONS" = *"|$env_var|"* ]]; then
echo "Excluding $env_var from broker config"
continue
fi
if [[ $env_var =~ ^KAFKA_ ]]; then
kafka_name=$(echo "$env_var" | cut -d_ -f2- | tr '[:upper:]' '[:lower:]' | tr _ .)
updateConfig "$kafka_name" "${!env_var}" "$KAFKA_HOME/config/server.properties"
fi
if [[ $env_var =~ ^LOG4J_ ]]; then
log4j_name=$(echo "$env_var" | tr '[:upper:]' '[:lower:]' | tr _ .)
updateConfig "$log4j_name" "${!env_var}" "$KAFKA_HOME/config/log4j.properties"
fi
done
</code></pre>
<p><code>KAFKA_MAX_REQUEST_SIZE</code> should be included in the <code>$KAFKA_HOME/config/server.properties</code> file as <code>max.request.size</code>. I wouldn't be surprised if there's a bug in the docker image. </p>
<p>You can always shell into your Kafka pod and check the <code>$KAFKA_HOME/config/server.properties</code> config file.</p>
<pre><code>kubectl exec -it <kafka-pod> -c <kafka-container> sh
</code></pre>
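<p>Once inside, something like this should show whether the setting landed (using the <code>$KAFKA_HOME</code> path referenced by the start script above):</p>
<pre><code>grep max.request.size "$KAFKA_HOME/config/server.properties"
</code></pre>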
|
<p>When I push my deployments, for some reason, I'm getting the error on my pods:</p>
<blockquote>
<p>pod has unbound PersistentVolumeClaims</p>
</blockquote>
<p>Here are my YAML below:</p>
<p>This is running locally, not on any cloud solution.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert
kompose.version: 1.16.0 ()
creationTimestamp: null
labels:
io.kompose.service: ckan
name: ckan
spec:
replicas: 1
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: ckan
spec:
      containers:
        - image: slckan/docker_ckan
          name: ckan
          ports:
            - containerPort: 5000
          resources: {}
          volumeMounts:
            - name: ckan-home
              mountPath: /usr/lib/ckan/
              subPath: ckan
volumes:
- name: ckan-home
persistentVolumeClaim:
claimName: ckan-pv-home-claim
restartPolicy: Always
status: {}
</code></pre>
<hr>
<pre class="lang-yaml prettyprint-override"><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: ckan-pv-home-claim
labels:
io.kompose.service: ckan
spec:
storageClassName: ckan-home-sc
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100Mi
volumeMode: Filesystem
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: ckan-home-sc
provisioner: kubernetes.io/no-provisioner
mountOptions:
- dir_mode=0755
- file_mode=0755
- uid=1000
- gid=1000
</code></pre>
| <p>You have to define a <strong>PersistentVolume</strong> providing disc space to be consumed by the <strong>PersistentVolumeClaim</strong>.</p>
<p>When using <code>storageClass</code>, Kubernetes is going to enable <strong>"Dynamic Volume Provisioning"</strong>, which does not work with the local file system.</p>
<hr />
<h3>To solve your issue:</h3>
<ul>
<li>Provide a <strong>PersistentVolume</strong> fulfilling the constraints of the claim (a size >= 100Mi)</li>
<li>Remove the <code>storageClass</code> from the <strong>PersistentVolumeClaim</strong> or provide it with an empty value (<code>""</code>)</li>
<li>Remove the <strong>StorageClass</strong> from your cluster</li>
</ul>
<hr />
<h3>How do these pieces play together?</h3>
<p>At creation of the deployment state-description it is usually known which kind (amount, speed, ...) of storage that application will need.<br />
To make a deployment versatile you'd like to avoid a hard dependency on storage. Kubernetes' volume-abstraction allows you to provide and consume storage in a standardized way.</p>
<p>The <strong>PersistentVolumeClaim</strong> is used to provide a storage-constraint alongside the deployment of an application.</p>
<p>The <strong>PersistentVolume</strong> offers cluster-wide volume-instances ready to be consumed ("<code>bound</code>"). One PersistentVolume will be bound to <em>one</em> claim. But since multiple instances of that claim may be run on multiple nodes, that volume may be <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="noreferrer">accessed</a> by multiple nodes.</p>
<p>A <strong>PersistentVolume without StorageClass</strong> is considered to be <strong>static</strong>.</p>
<p><a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning" rel="noreferrer"><strong>"Dynamic Volume Provisioning"</strong></a> alongside <strong>with</strong> a <strong>StorageClass</strong> allows the cluster to provision PersistentVolumes on demand.
In order to make that work, the given storage provider must support <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner" rel="noreferrer">provisioning</a> - this allows the cluster to request the provisioning of a "new" <strong>PersistentVolume</strong> when an unsatisfied <strong>PersistentVolumeClaim</strong> pops up.</p>
<hr />
<h3>Example PersistentVolume</h3>
<p>In order to find how to specify things you're best advised to take a look at the <a href="https://kubernetes.io/de/docs/reference/#api-referenz" rel="noreferrer">API for your Kubernetes version</a>, so the following example is build from the <a href="https://v1-17.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.17/#persistentvolume-v1-core" rel="noreferrer">API-Reference of K8S 1.17</a>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: ckan-pv-home
labels:
type: local
spec:
capacity:
storage: 100Mi
hostPath:
path: "/mnt/data/ckan"
</code></pre>
<p>The <strong>PersistentVolumeSpec</strong> allows us to define multiple attributes.
I chose a <code>hostPath</code> volume which maps a local directory as content for the volume. The capacity allows the resource scheduler to recognize this volume as applicable in terms of resource needs.</p>
<hr />
<h3>Additional Resources:</h3>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="noreferrer">Configure PersistentVolume Guide</a></li>
</ul>
|
<p>I can delete a deployment with the kubectl CLI, but is there a way to make my deployment auto-destroy itself once it has finished? For my situation, we are kicking off a long-running process in a Docker container on AWS EKS. When I check the status, it is 'running', and then sometime later the status is 'completed'. So is there any way to get the Kubernetes pod to auto-destroy once it has finished?</p>
<pre><code>kubectl run some_deployment_name --image=path_to_image
kubectl get pods
//the above command returns...
some_deployment_name1212-75bfdbb99b-vt622 0/1 Running 2 23s
//and then some time later...
some_deployment_name1212-75bfdbb99b-vt622 0/1 Completed 2 15m
</code></pre>
<p>Once it is complete, I would like for it to be destroyed, without me having to call another command.</p>
| <p>So the question is about running Jobs: not the Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">Deployments</a> abstraction that creates a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/" rel="nofollow noreferrer">ReplicaSet</a>, but rather Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Jobs</a>.</p>
<p>A <code>Job</code> is created with <code>kubectl run</code> when you specify the <code>--restart=OnFailure</code> option. These jobs are not cleaned up by the cluster unless you delete them manually with <code>kubectl delete job <job-name></code>. More info <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#clean-up-finished-jobs-automatically" rel="nofollow noreferrer">here</a>. </p>
<p>If you are using Kubernetes 1.12 or later a new Job spec was introduced: <code>ttlSecondsAfterFinished</code>. You can also use that to clean up your jobs. Another more time-consuming option would be to write your own Kubernetes controller that cleans up regular Jobs.</p>
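<p>A hedged sketch of such a Job (the TTL value is illustrative, and in 1.12 the <code>TTLAfterFinished</code> feature gate must be enabled for this alpha field to take effect):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: some-job
spec:
  ttlSecondsAfterFinished: 120   # delete the Job this long after it finishes
  template:
    spec:
      containers:
        - name: main
          image: path_to_image
      restartPolicy: OnFailure
</code></pre>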
<p>A <code>CronJob</code> is created if you specify both the <code>--restart=OnFailure</code> and <code>--schedule=""</code> options. These pods get deleted automatically because they run on a regular schedule.</p>
<p>More info on <code>kubectl run</code> <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run" rel="nofollow noreferrer">here</a>.</p>
|
<p>user1 and user2 have been assigned the "admin" role in the K8s cluster, where they can only work within the namespace they are assigned. In the cases below, ns1 and ns2 respectively.</p>
<p>user1 --> assigned namespace ns1</p>
<p>user2 --> assigned namespace ns2</p>
<p>user3 --> assigned namespace ns3, and also has the namespace-admin role assigned.
The namespace-admin role (user3) should be able to create any resource in namespace ns3 and in any new namespaces he creates in the cluster. This role should have the ability to dynamically create new namespaces. But user3 should NOT have access to the ns1 or ns2 namespaces, which were not created by user "user3".</p>
<p>user3 will be dynamically creating new namespaces and deploying workloads in those namespaces.</p>
<p>Can this be addressed? This is similar to OpenShift's "Projects" concept.</p>
| <p>Yes, you can restrict user3 to creating/deleting resources only in the namespace ns3 by using a <code>Role</code> and binding that role to user3.</p>
<p>Then you can use a <code>ClusterRole</code> with access only to the <code>namespaces</code> resource, allowing it to <code>create</code>, <code>delete</code>, etc.</p>
<p>Something like this:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: my-namespace
name: user-namespace-role
rules:
- apiGroups: [""]
resources: ["services", "endpoints", "pods"] # etc...
verbs: ["get", "list", "create"] # etc
</code></pre>
<p>Then:</p>
<pre><code>kubectl create rolebinding user-namespace-binding --role=user-namespace-role --user=user3 --namespace=my-namespace
</code></pre>
<p>Then:</p>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: cluster-role-all-namespaces
rules:
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"] # etc
</code></pre>
<p>Then:</p>
<pre><code>kubectl create clusterrolebinding all-namespaces-binding --clusterrole=cluster-role-all-namespaces --user=user3
</code></pre>
<p>For user1 and user2 you can create a <code>Role</code> and <code>RoleBinding</code> for their unique namespaces.</p>
|
<p>I have an OpenShift cluster, and periodically when accessing logs, I get:</p>
<pre><code>worker1-sass-on-prem-origin-3-10 on 10.1.176.130:53: no such host" kube doing a connection to 53 on a node.
</code></pre>
<p>I also tend to see <code>tcp: lookup postgres.myapp.svc.cluster.local on 10.1.176.136:53: no such host</code> errors from time to time in pods. Again, this makes me think that, when accessing internal service endpoints, pods, clients, and other Kubernetes-related services actually talk to a DNS server that is assumed to be running on the node that said pods are running on.</p>
<h1>Update</h1>
<p>Looking into one of my pods on a given node, I found the following in resolv.conf (I had to ssh and run <code>docker exec</code> to get this output - since oc exec isn't working due to this issue).</p>
<pre><code>/etc/cfssl $ cat /etc/resolv.conf
nameserver 10.1.176.129
search jim-emea-test.svc.cluster.local svc.cluster.local cluster.local bds-ad.lc opssight.internal
options ndots:5
</code></pre>
<p>Thus, it appears that in my cluster, containers have a self-referential resolv.conf entry. This cluster is created with <em>openshift-ansible</em>. I'm not sure if this is infra-specific, or if it's actually a fundamental aspect of how OpenShift nodes work, but I suspect the latter, as I haven't done any major customizations to my ansible workflow from the upstream openshift-ansible recipes.</p>
| <h1>Yes, DNS on every node is normal in openshift.</h1>
<p>It does appear that it's normal for an openshift-ansible deployment to deploy <code>dnsmasq</code> services on every node.</p>
<h2>Details.</h2>
<p>As an example of how this can affect things, the following <a href="https://github.com/openshift/openshift-ansible/pull/8187" rel="nofollow noreferrer">https://github.com/openshift/openshift-ansible/pull/8187</a> is instructive. In any case, if a local node's dnsmasq is acting flaky for any reason, it will prevent containers running on that node from properly resolving addresses of other containers in a cluster.</p>
<h2>Looking deeper at the dnsmasq 'smoking gun'</h2>
<p>After checking on an individual node, I found that in fact, there was a process indeed bounded to port 53, and it is dnsmasq. Hence, </p>
<pre><code>[enguser@worker0-sass-on-prem-origin-3-10 ~]$ sudo netstat -tupln | grep 53
tcp 0 0 127.0.0.1:53 0.0.0.0:* LISTEN 675/openshift
</code></pre>
<p>And, dnsmasq is running locally: </p>
<pre><code>[enguser@worker0-sass-on-prem-origin-3-10 ~]$ ps -ax | grep dnsmasq
 4968 pts/0 S+ 0:00 grep --color=auto dnsmasq
 6994 ? Ss 0:22 /usr/sbin/dnsmasq -k
[enguser@worker0-sass-on-prem-origin-3-10 ~]$ sudo ps -ax | grep dnsmasq
 4976 pts/0 S+ 0:00 grep --color=auto dnsmasq
 6994 ? Ss 0:22 /usr/sbin/dnsmasq -k
</code></pre>
<p>The final clue, resolv.conf itself is even adding the local IP address as a nameserver... And this is obviously borrowed into containers that start.</p>
<pre><code># nameserver updated by /etc/NetworkManager/dispatcher.d/99-origin-dns.sh
# Generated by NetworkManager
search cluster.local bds-ad.lc opssight.internal
# NOTE: the libc resolver may not support more than 3 nameservers.
# The nameservers listed below may not be recognized.
nameserver 10.1.176.129
</code></pre>
<h1>The solution (in my specific case)</h1>
<p>In my case, this was happening because the local nameserver was using an <code>ifcfg</code> (you can see these files in /etc/sysconfig/network-scripts/) with</p>
<pre><code>[enguser@worker0-sass-on-prem-origin-3-10 network-scripts]$ cat ifcfg-ens192
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens192
UUID=50936212-cb5e-41ff-bec8-45b72b014c8c
DEVICE=ens192
ONBOOT=yes
</code></pre>
<p>However, my internally configured Virtual Machines could not resolve IPs provided to them by the PEERDNS records.</p>
<p>Ultimately the fix was to work with our IT department to make sure our authoritative domain for our kube clusters had access to all IP addresses in our data center.</p>
<h1>The Generic Fix to :53 lookup errors...</h1>
<p>If you're seeing the :53 record errors come up when you try to kubectl or oc logs / exec, then it is likely that <em>your apiserver is not able to connect with kubelets via their IP address</em>.</p>
<p>If you're seeing :53 record errors in other places, for example <em>inside of pods</em>, then this is because your pod, using its own local DNS, isn't able to resolve internal cluster IP addresses. This might simply be because you have an outdated controller looking for services that don't exist anymore, or else you have flakiness at your Kubernetes DNS implementation level.</p>
|
<p>We are using Elasticsearch in a Kubernetes cluster (not exposed publicly) without X-Pack security, and had it working in 5.x with <code>elastic</code>/<code>changeme</code>, but after trying to get it set up with 6.x, it's now requiring a password, and the default of <code>elastic</code>/<code>changeme</code> no longer works.</p>
<p>We didn't explicitly configure it to require authentication, since it's not publicly exposed and only accessible internally, so not sure why it's requiring the password, or more importantly, how we can find out what it is or how to set/change it without using X-Pack security.</p>
<p>Will we end up needing to subscribe to X-Pack since we're trying to use it within a Kubernetes cluster?</p>
| <p>Not sure how you are deploying Elasticsearch in Kubernetes, but we had a similar issue and ended up passing this:</p>
<pre><code>xpack.security.enabled=false
</code></pre>
<p>through the environment to the container.</p>
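<p>In a Kubernetes manifest that could look something like this (assuming your image, like the official Elasticsearch one, reads this setting from the environment):</p>
<pre><code>env:
  - name: xpack.security.enabled
    value: "false"
</code></pre>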
|
<p>We would like to test some <code>Spark</code> submission on a <code>Kubernetes</code> cluster;</p>
<p>However, the <a href="https://spark.apache.org/docs/2.3.0/running-on-kubernetes.html" rel="nofollow noreferrer">official documentation</a> is kind of ambiguous.</p>
<blockquote>
<p>Spark can run on clusters managed by Kubernetes. This feature makes use of native Kubernetes scheduler that has been added to Spark.</p>
<p><strong>The Kubernetes scheduler is currently experimental. In future versions, there may be behavioral changes around configuration, container images and entrypoints.</strong></p>
</blockquote>
<p>Does this mean that the <code>kubernetes</code> scheduler itself is experimental, or some part of its implementation related to Spark?</p>
<p>Does it make sense to run spark on <code>Kubernetes</code> in production-grade environments?</p>
| <ol>
<li><p>Yes, it's experimental if you are using the Spark Kubernetes scheduler like you mentioned <a href="https://spark.apache.org/docs/2.3.0/running-on-kubernetes.html" rel="nofollow noreferrer">here</a>. Use it at your own risk.</p></li>
<li><p>Not really, if you are running a standalone cluster in Kubernetes without the Kubernetes scheduler. This means creating a master in a Kubernetes pod and then allocating a number of slave pods that talk to that master. You then submit your jobs with the good old <code>spark-submit</code>, without the <code>--master k8s://</code> option and with the usual <code>--master spark://</code> option (see the sketch after this list). The downside of this is basically that your Spark cluster in Kubernetes is static.</p></li>
</ol>
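<p>For point 2, a hedged sketch of such a submission (the master Service name and jar path depend on your setup and Spark distribution):</p>
<pre><code>./bin/spark-submit \
  --master spark://spark-master.default.svc.cluster.local:7077 \
  --class org.apache.spark.examples.SparkPi \
  ./examples/jars/spark-examples_2.11-2.3.0.jar 100
</code></pre>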
|
<p>Is it possible to pass a Service's external IP (NodePort or LoadBalancer) as an environment variable to a container in a different service's Deployment?</p>
<p>For a concrete example, consider a Kubernetes cluster with namespaces for multiple teammates so they each have their own environment for testing. In a single environment, there are at least two services:</p>
<ol>
<li>An API Gateway service, for routing traffic to other services</li>
<li>A service that can register DNS entries to the environment</li>
</ol>
<p>For Service #2, it needs to know the external IP address of service #1. So far I've only been able to find examples that make use of <code>kubectl</code> to describe service #1 to find this information. I was hoping it would be possible to do something like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: gateway
namespace: "${ env }"
labels:
app: gateway
spec:
type: LoadBalancer
ports:
- port: 8080
selector:
app: gateway
</code></pre>
<p>and</p>
<pre><code>apiVersion: extensions/v1
kind: Deployment
metadata:
name: svc2-deployment
namespace: "${ env }"
labels:
app: svc2
spec:
template:
metadata:
labels:
app: svc2
spec:
containers:
- name: app
env:
- name: GATEWAY_IP
valueFrom:
fieldRef:
fieldPath: service.gateway.----.ingressIp
</code></pre>
<p>instead of using say an <code>initContainers</code> with a script that does <code>kubectl</code> things. Especially since I'm very new to Kubernetes :)</p>
| <p>Not really, but you can use <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS</a> for that. For example:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: svc2-deployment
namespace: "${ env }"
labels:
app: svc2
spec:
template:
metadata:
labels:
app: svc2
spec:
containers:
- name: app
env:
- name: GATEWAY_IP
value: gateway.<namespace-where-gw-is-running>.svc.cluster.local
</code></pre>
|
<p>I have a multi-master Kubernetes cluster set up, with one worker node. I set up the cluster with kubeadm. On <code>kubeadm init</code>, I passed the <code>-pod-network-cidr=10.244.0.0/16</code> (using Flannel as the network overlay).</p>
<p>When using <code>kubeadm join</code> on the first worker node, everything worked properly. For some reason when trying to add more workers, none of the nodes are automatically assigned a podCidr.</p>
<p>I used <a href="https://github.com/coreos/flannel/blob/master/Documentation/troubleshooting.md" rel="nofollow noreferrer">this</a> document to manually patch each worker node, using the
<code>kubectl patch node <NODE_NAME> -p '{"spec":{"podCIDR":"<SUBNET>"}}'</code> command and things work fine.</p>
<p>But this is not ideal, I am wondering how I can fix my setup so that just adding the <code>kubeadm join</code> command will automatically assign the podCidr.</p>
<p>Any help would be greatly appreciated. Thanks!</p>
<p><strong>Edit:</strong></p>
<pre><code>I1003 23:08:55.920623 1 main.go:475] Determining IP address of default interface
I1003 23:08:55.920896 1 main.go:488] Using interface with name eth0 and address
I1003 23:08:55.920915 1 main.go:505] Defaulting external address to interface address ()
I1003 23:08:55.941287 1 kube.go:131] Waiting 10m0s for node controller to sync
I1003 23:08:55.942785 1 kube.go:294] Starting kube subnet manager
I1003 23:08:56.943187 1 kube.go:138] Node controller sync successful
I1003 23:08:56.943212 1 main.go:235] Created subnet manager:
Kubernetes Subnet Manager - kubernetes-worker-06
I1003 23:08:56.943219 1 main.go:238] Installing signal handlers
I1003 23:08:56.943273 1 main.go:353] Found network config - Backend type: vxlan
I1003 23:08:56.943319 1 vxlan.go:120] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
E1003 23:08:56.943497 1 main.go:280] Error registering network: failed to acquire lease: node "kube-worker-02" pod cidr not assigned
I1003 23:08:56.943513 1 main.go:333] Stopping shutdownHandler...
</code></pre>
| <p>I was able to solve my issue. In my multi-master setup, on <em>one</em> of my master nodes, the <code>kube-controller-manager.yaml</code> (in /etc/kubernetes/manifest) file was missing the two following fields:</p>
<ul>
<li><code>--allocate-node-cidrs=true</code></li>
<li><code>--cluster-cidr=10.244.0.0/16</code></li>
</ul>
<p>Once adding these fields to the yaml, I reset the <code>kubelet</code> service and everything was working great when trying to add a new worker node.</p>
<p>This was a mistake on my part, because when initializing one of my master nodes with <code>kubeadm init</code>, I must have forgotten to pass the <code>--pod-network-cidr</code>. Oops.</p>
<p>Hope this helps someone out there!</p>
|
<p>I have a Kubernetes service that exposes two ports as follows</p>
<pre><code>Name: m-svc
Namespace: m-ns
Labels:
Annotations: <none>
Selector: app=my-application
Type: ClusterIP
IP: 10.233.43.40
Port: first 8080/TCP
TargetPort: 8080/TCP
Endpoints: 10.233.115.178:8080,10.233.122.166:8080
Port: second 8888/TCP
TargetPort: 8888/TCP
Endpoints: 10.233.115.178:8888,10.233.122.166:8888
Session Affinity: None
Events: <none>
</code></pre>
<p>And here is the ingress definition:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: f5
    virtual-server.f5.com/http-port: "80"
    virtual-server.f5.com/ip: controller-default
    virtual-server.f5.com/round-robin: round-robin
  creationTimestamp: 2018-10-05T18:54:45Z
  generation: 2
  name: m-ingress
  namespace: m-ns
  resourceVersion: "39557812"
  selfLink: /apis/extensions/v1beta1/namespaces/m-ns
  uid: 20241db9-c8d0-11e8-9fac-0050568d4d4a
spec:
  rules:
  - host: www.myhost.com
    http:
      paths:
      - backend:
          serviceName: m-svc
          servicePort: 8080
        path: /first/path
      - backend:
          serviceName: m-svc
          servicePort: 8080
        path: /second/path
status:
  loadBalancer:
    ingress:
    - ip: 172.31.74.89
</code></pre>
<p>But when I go to <code>www.myhost.com/first/path</code> I end up at the service that is listening on port <code>8888</code> of <code>m-svc</code>. What might be going on?</p>
<p>Another piece of information: I am sharing this service between two ingresses that point to different ports on it. Is this a problem? A separate ingress pointing to port 8888 on this service works fine.</p>
<p>Also, I am using an F5 ingress controller.</p>
<p>After a lot of time investigating this, it looks like the root cause is in the F5s: because the name of the backend (the Kubernetes service) is the same, the controller creates only one entry in the pool and routes requests to that backend on the single port registered in the F5 policy. Is there a fix for this? A workaround is to create a unique service for each port, but I don't want to make that change. Is this possible at the F5 level?</p>
| <p>From what I see, you don't have a <code>Selector</code> field in your service. Without it, the service will not forward traffic to any backend pod. What makes you think it's going to port <code>8888</code>? What's strange is that you have <a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="nofollow noreferrer"><code>Endpoints</code></a> in your service. Did you create them manually?</p>
<p>The service would have to be something like this:</p>
<pre><code>Name: m-svc
Namespace: m-ns
Labels:
Annotations: <none>
Selector: app=my-application
Type: ClusterIP
IP: 10.233.43.40
Port: first 8080/TCP
TargetPort: 8080/TCP
Endpoints: 10.233.115.178:8080,10.233.122.166:8080
Port: second 8888/TCP
TargetPort: 8888/TCP
Endpoints: 10.233.115.178:8888,10.233.122.166:8888
Session Affinity: None
Events: <none>
</code></pre>
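<p>In manifest form, the selector part of the Service would look something like this (a sketch, with names and ports assumed from the output above):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: m-svc
  namespace: m-ns
spec:
  selector:
    app: my-application
  ports:
  - name: first
    port: 8080
    targetPort: 8080
  - name: second
    port: 8888
    targetPort: 8888
</code></pre>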
<p>Then in your deployment definition:</p>
<pre><code>selector:
  matchLabels:
    app: my-application
</code></pre>
<p>Or in a pod:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  annotations: { ... }
  labels:
    app: my-application
</code></pre>
<p>You should also be able to describe your <code>Endpoints</code>:</p>
<pre><code>$ kubectl describe endpoints m-svc
Name: m-svc
Namespace: default
Labels: app=my-application
Annotations: <none>
Subsets:
Addresses: x.x.x.x
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
first 8080 TCP
second 8888 TCP
Events: <none>
</code></pre>
|
<p>We have Harbor scanning containers before they are deployed. Once they are scanned, we deploy them to the platform (k8s).</p>
<p>Is there any way to scan a container, say, a few weeks after it has been deployed? Without disturbing the deployment, of course.</p>
<p>Thanks</p>
| <p>I think we have to distinguish between a container (the running process) and the image from which a container is created/started.</p>
<p>If this is about finding out which image was used to create a container that is (still) running, so that this image can be scanned for (new) vulnerabilities, here is a way to get the images of all running containers in a pod:</p>
<pre><code>kubectl get pods <pod-name> -o jsonpath='{.status.containerStatuses[*].image}'
</code></pre>
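<p>To sweep all pods in the cluster rather than a single one, the jsonpath approach from the Kubernetes docs also works (it prints every image reference together with a usage count):</p>
<pre><code>kubectl get pods --all-namespaces -o jsonpath="{..image}" | \
  tr -s '[[:space:]]' '\n' | sort | uniq -c
</code></pre>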
|
<p>I have the following ingress resource for one of my apps</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ""
  annotations:
    ingress.kubernetes.io..
spec:
  rules:
  - host: my-app
    http:
      paths:
      - path: /path/to/service
        backend:
          serviceName: my-service
          servicePort: 80
</code></pre>
<p>This works as expected, and I can access my service at <code>http://my-app/path/to/service</code>. However, requests with additional path segments don't seem to get routed correctly. For example:</p>
<pre><code>http://my-app/path/to/service/more/paths
</code></pre>
<p>This brings me back to <code>http://my-app/path/to/service</code></p>
<p>How can I maintain this path structure ?</p>
| <p>I believe you need to use wildcards on your path:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ""
  annotations:
    ingress.kubernetes.io..
spec:
  rules:
  - host: my-app
    http:
      paths:
      - path: /path/to/service/*
        backend:
          serviceName: my-service
          servicePort: 80
</code></pre>
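<p>Assuming your controller honors the wildcard, a quick way to verify routing (<code><ingress-ip></code> is a placeholder for the controller's external address):</p>
<pre><code>curl -H "Host: my-app" http://<ingress-ip>/path/to/service/more/paths
</code></pre>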
<p>More information <a href="https://github.com/kubernetes/ingress-nginx/issues/1120" rel="nofollow noreferrer">here</a>. It seems hard to find docs with wildcard examples. Note that this is specific to nginx; it may not work with other ingress controllers.</p>
|