<p>Hi, I am using Google Kubernetes Engine to deploy my application. I tried to add a ConfigMap:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap
  namespace: default
data:
  database_user: root
  database_password: root
  database_db: db
  database_port: 5432
  database_host: mypostgres
</code></pre>
<p>And then in my application deployment file I mapped my environment variables like the following:</p>
<pre><code>spec:
  containers:
  - env:
    - name: DATABASE_HOST
      valueFrom:
        configMapKeyRef:
          name: configmap
          key: database_host
    - name: DATABASE_NAME
      valueFrom:
        configMapKeyRef:
          name: configmap
          key: database_db
    - name: DATABASE_PASSWORD
      valueFrom:
        configMapKeyRef:
          name: configmap
          key: database_password
    - name: DATABASE_USER
      valueFrom:
        configMapKeyRef:
          name: configmap
          key: database_user
    - name: DATABASE_PORT
      valueFrom:
        configMapKeyRef:
          name: configmap
          key: database_port
</code></pre>
<p>My service is not running and I get a</p>
<blockquote>
<p>CreateContainerConfigError
when I check the status of the pod.</p>
</blockquote>
<p>When I describe my pod I get:</p>
<blockquote>
<p>Error: Couldn't find key database_host</p>
</blockquote>
<p>My question is: why is my deployment not picking up the ConfigMap I defined?</p>
<p>I created the configmap via this command</p>
<blockquote>
<p>kubectl create configmap configmap --from-file=configmap.yaml</p>
</blockquote>
| <p>As mentioned in <code>kubectl create configmap --help</code>:</p>
<blockquote>
<p>--from-env-file='': Specify the path to a file to read lines of key=val pairs to create a configmap (i.e. a Docker .env file).</p>
</blockquote>
<p>So you just need to make a file named <code>conf</code> with contents like:</p>
<pre><code>database_user=root
database_password=root
database_db=db
database_port=5432
database_host=mypostgres
</code></pre>
<p>and run: <code>kubectl create configmap coco-config --from-env-file=conf</code></p>
<p><strong>UPDATE:</strong>
If you put your data values in quotes (" "), the problem will be fixed:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap
  namespace: default
data:
  database_user: "root"
  database_password: "root"
  database_db: "db"
  database_port: "5432"
  database_host: "mypostgres"
</code></pre>
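<p>Note that creating the ConfigMap with <code>--from-file=configmap.yaml</code> stores the whole file under a single key named <code>configmap.yaml</code>, which would explain why <code>database_host</code> could not be found. Whichever way you create it, you can check which keys actually exist before wiring them into the deployment; a quick sanity check, using the names from the example above:</p>
<pre><code>kubectl apply -f configmap.yaml
kubectl describe configmap configmap
kubectl get configmap configmap -o yaml
</code></pre>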
|
<p>I'm running a MySQL deployment on Kubernetes however seems like my allocated space was not enough, initially I added a persistent volume of <code>50GB</code> and now I'd like to expand that to <code>100GB</code>.</p>
<p>I already saw that a persistent volume claim is immutable after creation, but can I somehow just resize the persistent volume and then recreate my claim?</p>
| <p>Yes, as of 1.11, <a href="https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/" rel="noreferrer">persistent volumes can be resized</a> on certain cloud providers. To increase volume size:</p>
<ol>
<li>Edit the PVC (<code>kubectl edit pvc $your_pvc</code>) to specify the new size. The key to edit is <code>spec.resources.requests.storage</code>:</li>
</ol>
<p><a href="https://i.stack.imgur.com/neoTZ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/neoTZ.png" alt="enter image description here"></a></p>
<ol start="2">
<li>Terminate the pod using the volume.</li>
</ol>
<p>Once the pod using the volume is terminated, the filesystem is expanded and the size of the <code>PV</code> is increased. See the above link for details.</p>
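<p>As a non-interactive alternative to <code>kubectl edit</code>, a minimal sketch (assuming the claim is named <code>mysql-pvc</code> and its StorageClass allows expansion):</p>
<pre><code># bump the requested size to 100Gi
kubectl patch pvc mysql-pvc -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'
# after the pod restart, confirm the new capacity
kubectl get pvc mysql-pvc -o jsonpath='{.status.capacity.storage}'
</code></pre>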
|
<p>I am following <a href="https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine" rel="nofollow noreferrer">these docs</a> on how to set up a sidecar proxy to my Cloud SQL database. It refers to a <a href="https://github.com/GoogleCloudPlatform/kubernetes-engine-samples/blob/master/cloudsql/postgres_deployment.yaml" rel="nofollow noreferrer">manifest on github</a> that (as I find it all over the place in github repos etc.) seems to work for 'everyone', but I run into trouble. The proxy container cannot mount /secrets/cloudsql, it seems, as it cannot successfully start. When I run <code>kubectl logs [mypod] cloudsql-proxy</code>:</p>
<pre><code>invalid json file "/secrets/cloudsql/mysecret.json": open /secrets/cloudsql/mysecret.json: no such file or directory
</code></pre>
<p>So the secret seems to be the problem. </p>
<p>Relevant part of the manifest:</p>
<pre><code>- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.11
  command: ["/cloud_sql_proxy",
            "-instances=pqbq-224713:europe-west4:osm=tcp:5432",
            "-credential_file=/secrets/cloudsql/mysecret.json"]
  securityContext:
    runAsUser: 2
    allowPrivilegeEscalation: false
  volumeMounts:
  - name: cloudsql-instance-credentials
    mountPath: /secrets/cloudsql
    readOnly: true
volumes:
- name: cloudsql-instance-credential
  secret:
    secretName: mysecret
</code></pre>
<p>To test/debug the secret I mount the volume into another container that does start, but then the path and file /secrets/cloudsql/mysecret.json do not exist either. However, when I mount the secret into an already EXISTING folder, I find in this folder not the mysecret.json file (as I expected...) but (in my case) the two secrets it contains, so I find: <code>/existingfolder/password</code> and <code>/existingfolder/username</code> (apparently this is how it works!? When I cat these secrets they give the proper strings, so they seem fine). </p>
<p>So it looks like the path cannot be made by the system; is this a permission issue? I tried simply mounting it in the proxy container at the root ('/'), so no folder, but that gives an error saying it is not allowed to do so. Since the image <code>gcr.io/cloudsql-docker/gce-proxy:1.11</code> is from Google and I cannot get it running, I cannot see what folders it has. </p>
<p>My questions:</p>
<ol>
<li>Is the mountPath created from the manifest or should it be already
in the container? </li>
<li>How can I get this working?</li>
</ol>
| <p>I solved it. I was using the same secret on the cloudsql-proxy as the ones used on the app (env), but it needs to be a key you generate from a service account and then make a secret out of that. Then it works. <a href="https://shinesolutions.com/2018/10/25/deploying-a-full-stack-application-to-google-kubernetes-engine/" rel="nofollow noreferrer">This tutorial</a> helped me through.</p>
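<p>For reference, a sketch of that flow (the service account, project and secret names below are placeholders, not from the original setup): generate a key for the service account, then store it as a secret whose key matches the filename referenced by <code>-credential_file</code>:</p>
<pre><code># create a JSON key for the Cloud SQL client service account
gcloud iam service-accounts keys create key.json \
  --iam-account=my-proxy-sa@my-project.iam.gserviceaccount.com

# store it as the secret the proxy mounts, under the expected filename
kubectl create secret generic cloudsql-instance-credentials \
  --from-file=credentials.json=key.json
</code></pre>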
|
<p>I am trying to set up a node.js app on GKE with a Cloud SQL Postgres database and a sidecar proxy. I am following along with the docs but do not get it working. The proxy does not seem to be able to start (the app container does start). I have no idea why the proxy container cannot start and also have no idea how to debug this (e.g. how do I get an error message!?).</p>
<p>mysecret.yaml:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  username: [base64_username]
  password: [base64_password]
</code></pre>
<p>Output of <code>kubectl get secrets</code>:</p>
<pre><code>NAME TYPE DATA AGE
default-token-tbgsv kubernetes.io/service-account-token 3 5d
mysecret Opaque 2 7h
</code></pre>
<p>app-deployment.yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: app
        image: gcr.io/myproject/firstapp:v2
        ports:
        - containerPort: 8080
        env:
        - name: POSTGRES_DB_HOST
          value: 127.0.0.1:5432
        - name: POSTGRES_DB_USER
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: username
        - name: POSTGRES_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: password
      - name: cloudsql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.11
        command: ["/cloud_sql_proxy",
                  "-instances=myproject:europe-west4:databasename=tcp:5432",
                  "-credential_file=/secrets/cloudsql/mysecret.json"]
        securityContext:
          runAsUser: 2
          allowPrivilegeEscalation: false
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: mysecret
</code></pre>
<p>output of <code>kubectl create -f ./kubernetes/app-deployment.json</code>:</p>
<pre><code>deployment.apps/myapp created
</code></pre>
<p>output of <code>kubectl get deployments</code>:</p>
<pre><code>NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
myapp 1 1 1 0 5s
</code></pre>
<p>output of <code>kubectl get pods</code>:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
myapp-5bc965f688-5rxwp 1/2 CrashLoopBackOff 1 10s
</code></pre>
<p>output of <code>kubectl describe pod/myapp-5bc955f688-5rxwp -n default</code>:</p>
<pre><code>Name: myapp-5bc955f688-5rxwp
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: gke-standard-cluster-1-default-pool-1ec52705-186n/10.164.0.4
Start Time: Sat, 15 Dec 2018 21:46:03 +0100
Labels: app=myapp
pod-template-hash=1675219244
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container app; cpu request for container cloudsql-proxy
Status: Running
IP: 10.44.1.9
Controlled By: ReplicaSet/myapp-5bc965f688
Containers:
app:
Container ID: docker://d3ba7ff9c581534a4d55a5baef2d020413643e0c2361555eac6beba91b38b120
Image: gcr.io/myproject/firstapp:v2
Image ID: docker-pullable://gcr.io/myproject/firstapp@sha256:80168b43e3d0cce6d3beda6c3d1c679cdc42e88b0b918e225e7679252a59a73b
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Sat, 15 Dec 2018 21:46:04 +0100
Ready: True
Restart Count: 0
Requests:
cpu: 100m
Environment:
POSTGRES_DB_HOST: 127.0.0.1:5432
POSTGRES_DB_USER: <set to the key 'username' in secret 'mysecret'> Optional: false
POSTGRES_DB_PASSWORD: <set to the key 'password' in secret 'mysecret'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-tbgsv (ro)
cloudsql-proxy:
Container ID: docker://96e2ed0de8fca21ecd51462993b7083bec2a31f6000bc2136c85842daf17435d
Image: gcr.io/cloudsql-docker/gce-proxy:1.11
Image ID: docker-pullable://gcr.io/cloudsql-docker/gce-proxy@sha256:5c690349ad8041e8b21eaa63cb078cf13188568e0bfac3b5a914da3483079e2b
Port: <none>
Host Port: <none>
Command:
/cloud_sql_proxy
-instances=myproject:europe-west4:databasename=tcp:5432
-credential_file=/secrets/cloudsql/mysecret.json
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Sat, 15 Dec 2018 22:43:37 +0100
Finished: Sat, 15 Dec 2018 22:43:37 +0100
Ready: False
Restart Count: 16
Requests:
cpu: 100m
Environment: <none>
Mounts:
/secrets/cloudsql from cloudsql-instance-credentials (ro)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-tbgsv (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
cloudsql-instance-credentials:
Type: Secret (a volume populated by a Secret)
SecretName: mysecret
Optional: false
default-token-tbgsv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-tbgsv
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 59m default-scheduler Successfully assigned default/myapp-5bc955f688-5rxwp to gke-standard-cluster-1-default-pool-1ec52705-186n
Normal Pulled 59m kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Container image "gcr.io/myproject/firstapp:v2" already present on machine
Normal Created 59m kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Created container
Normal Started 59m kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Started container
Normal Started 59m (x4 over 59m) kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Started container
Normal Pulled 58m (x5 over 59m) kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Container image "gcr.io/cloudsql-docker/gce-proxy:1.11" already present on machine
Normal Created 58m (x5 over 59m) kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Created container
Warning BackOff 4m46s (x252 over 59m) kubelet, gke-standard-cluster-1-default-pool-1ec52705-186n Back-off restarting failed container
</code></pre>
<p>EDIT: something seems wrong with my secret since when I do <code>kubectl logs 5bc955f688-5rxwp cloudsql-proxy</code> I get:</p>
<pre><code>2018/12/16 22:26:28 invalid json file "/secrets/cloudsql/mysecret.json": open /secrets/cloudsql/mysecret.json: no such file or directory
</code></pre>
<p>I created the secret by doing:</p>
<pre><code>kubectl create -f ./kubernetes/mysecret.yaml
</code></pre>
<p>I presume the secret is turned into JSON... When I change mysecret.json into mysecret.yaml in app-deployment.yaml I still get a similar error... </p>
| <p>I was missing the correct key (credentials.json). It needs to be a key you generate from a service account; then you turn it into a secret. See also <a href="https://stackoverflow.com/questions/53815189/kubernetes-can-not-mount-a-volume-to-a-folder/53819064#53819064">this issue</a>. </p>
|
<p>I use tmuxinator to generate a tmux window that watches the output of some kubectl commands like:</p>
<pre><code>watch -n 5 kubectl get pods/rc/svc/pv/pvc
</code></pre>
<p>But sometimes the output of kubectl gets too wide, e.g. the selector column after rolling updates, and I would like not to show it in my setup. How do I do this with kubectl alone?</p>
<p>awk or cut can do the job too, but I could not figure out a way of doing this without losing the table formatting. </p>
| <p>Rather than using a second tool/binary like awk or column, you can use the flag <code>-o=custom-columns</code> in this way:
<code>kubectl get pods --all-namespaces -o=custom-columns=NAME:.metadata.name,Namespace:.metadata.namespace</code></p>
<p>This is also an easier alternative to go-templates or jsonpath for outputting custom columns!</p>
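<p>For the <code>watch</code> use case from the question, a possible sketch that keeps only the columns you care about (the column names and field paths here are just examples to adapt):</p>
<pre><code>watch -n 5 kubectl get pods -o custom-columns=NAME:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName
</code></pre>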
|
<p>I am running my elixir app on GKE</p>
<p>here is my deployment configuration:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myapp
  namespace: production
spec:
  replicas: 1
  revisionHistoryLimit: 1
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: myapp
        tier: backend
    spec:
      securityContext:
        runAsUser: 0
        runAsNonRoot: false
      containers:
      - name: myapp
        image: myimage
        resources:
          limits:
            cpu: 3000m
            memory: 2000Mi
          requests:
            cpu: 2500m
            memory: 1000Mi
        ports:
        - containerPort: 80
        args:
        - foreground
</code></pre>
<p>As you can see in the image, the pod reached its memory limit and crashed:
<a href="https://i.stack.imgur.com/nbiwR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nbiwR.png" alt="pod"></a></p>
<p>these are my last logs:</p>
<pre><code>erl_child_setup closed
Crash dump is being written to: erl_crash.dump...done
Shutting down..
Node is not running!
</code></pre>
<p>And then my app is frozen; I get a 502 when trying to request the app.</p>
<p>In order to recover, I restart the pod (kubectl delete pod), and then it runs again.</p>
<p>My question is: why doesn't the pod restart automatically when it reaches the memory limit?</p>
| <p>You'll need to add probes that will check if your application is healthy.</p>
<p>Since you mentioned a <code>502</code>, I'm assuming this is a Phoenix application and you can add a health-check endpoint:</p>
<pre><code>livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 3
  periodSeconds: 3
</code></pre>
<p>When this request stops returning a <code>200</code>, Kubernetes (the kubelet) will restart your container.</p>
|
<p>Team,</p>
<p>We need to roll out some drivers on worker nodes of a K8s cluster and our flow is as below:</p>
<ol>
<li>cordon node [no more scheduling]</li>
<li>wait for jobs to complete</li>
<li>destroy</li>
</ol>
<p>Is there a way I can automate this using K8s options themselves, instead of writing some bash script to do those checks every time, because we don't know when the pods will complete? So, can we configure the master/API server to check a cordoned node and destroy it if it has no jobs running?</p>
| <p>You can write your own application either using the <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer">Go Client</a>, <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">Python Client</a>, or <a href="https://github.com/kubernetes-client/java" rel="nofollow noreferrer">Java Client</a> and basically do this:</p>
<pre><code>$ kubectl apply -f yourjob.yaml
$ kubectl cordon <nodename>
$ kubectl wait --for=condition=complete job/myjob
$ kubectl drain <nodename>
# Terminate your node if drain returns successfully
</code></pre>
<p>If this is a frequent pattern, you could also probably leverage a custom controller (<a href="https://coreos.com/operators/" rel="nofollow noreferrer">operator</a>) with a custom resource definition (<a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/" rel="nofollow noreferrer">CRD</a>) to do that. You will have to embed the code of your application that talks to the API server.</p>
|
<p>I know that a PVC can be used as a volume in k8s. I know how to create them and how to use them, but I couldn't understand why there are two of them, PV and PVC. </p>
<p>Can someone give me an architectural reason behind the PV/PVC distinction? What kind of problem does it try to solve (or what history is behind this)?</p>
| <p>Despite their names, they serve two different purposes: an abstraction for storage (PV) and a request for such storage (PVC). Together, they enable a clean separation of concerns (using a figure from our <a href="http://shop.oreilly.com/product/0636920064947.do" rel="noreferrer">Kubernetes Cookbook</a> here to illustrate this):</p>
<p><a href="https://i.stack.imgur.com/tfniF.png" rel="noreferrer"><img src="https://i.stack.imgur.com/tfniF.png" alt="enter image description here"></a></p>
<p>The storage admin focuses on provisioning PVs (ideally <a href="http://shop.oreilly.com/product/0636920064947.do" rel="noreferrer">dynamically</a> through defining storage classes) and the developer uses a PVC to acquire a PV and use it in a pod.</p>
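<p>A minimal sketch of the developer side of that contract, i.e. a claim that just asks for 1Gi of storage and leaves the matching PV (or dynamic provisioning) to the cluster; the names and sizes here are illustrative:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
</code></pre>
<p>A pod then references <code>my-claim</code> in its <code>volumes</code> section without needing to know anything about the underlying PV.</p>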
|
<p>The <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#taint" rel="noreferrer">docs</a> are great about explaining how to set a taint on a node, or remove one. And I can use <code>kubectl describe node</code> to get a verbose description of one node, including its taints. But what if I've forgotten the name of the taint I created, or which nodes I set it on? Can I list all of my nodes, with any taints that exist on them?</p>
|
<pre class="lang-bash prettyprint-override"><code>kubectl get nodes -o json | jq '.items[].spec'
</code></pre>
<p>which will give the complete spec with node name, or:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl get nodes -o json | jq '.items[].spec.taints'
</code></pre>
<p>will produce the list of taints for each node</p>
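<p>If you prefer to stay within <code>kubectl</code> (no <code>jq</code>), a variant using custom columns should also work; the taints column is rendered as a raw list:</p>
<pre><code>kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
</code></pre>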
|
<p>According to the <a href="https://istio.io/docs/concepts/traffic-management/#virtual-services" rel="nofollow noreferrer">Istio documentation</a>, VirtualServices should be able to route requests to "a completely different service than was requested". I would like to use this feature give services different aliases in different applications.</p>
<p>I'm starting with a VirtualService definition like this:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-vs
spec:
  hosts:
  - my-alias
  http:
  - route:
    - destination:
        host: my-service
</code></pre>
<p>The intention is that a client pod in the mesh should be able to send requests to <a href="http://my-alias" rel="nofollow noreferrer">http://my-alias</a> and have them routed to my-service. In the future I'll expand this with match rules to make the alias behave differently for different clients, but even the simple version isn't working.</p>
<p>With no other setup, the client fails to resolve my-alias via DNS. I can solve this by adding a selectorless k8s service named my-alias so its DNS resolves, but then the VirtualService doesn't seem to do the redirect. If I add an external host like google.com to the VirtualService, then it does successfully redirect any requests to google.com over to my-service. Using the full hostname (my-alias.default.svc.cluster.local) doesn't help.</p>
<p>So it seems like the VirtualService is not allowing me to redirect traffic bound for another service in the mesh. Is this expected, and is there a way I can work around it?</p>
| <p>The problem ended up being that I was using unnamed ports for my services, so traffic never made it into the mesh. According to <a href="https://istio.io/docs/setup/kubernetes/spec-requirements/" rel="nofollow noreferrer">https://istio.io/docs/setup/kubernetes/spec-requirements/</a>, the HTTP port must be named <code>http</code>.</p>
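<p>For reference, a sketch of what a compliant Service could look like (the names and ports are illustrative, not from the original setup):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-service
  ports:
  - name: http        # the port must be named http (or http-<suffix>) so Istio treats the traffic as HTTP
    port: 80
    targetPort: 8080
</code></pre>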
|
<p>I have been trying to deploy JupyterHub on my Kubernetes cluster (on 3 VirtualBox nodes) and I am willing to use PAM authentication, but unfortunately there is nothing about it in the documentation.
I would appreciate it if somebody could explain the steps briefly, or at least give me some hint to follow.</p>
<p>I'll give more information if needed.</p>
<p>Thanks, </p>
| <p>Basically, the <a href="https://jupyterhub.readthedocs.io/en/stable/api/auth.html#pamauthenticator" rel="nofollow noreferrer">PAM authenticator</a> would be configured the same way that you would configure it on any Linux machine, except that in this case you would be doing it in the containers running in your JupyterHub on your Kubernetes cluster.</p>
<p>You could build your custom container based on the JupyterHub base image and then add users as you build the container:</p>
<p>Dockerfile:</p>
<pre><code>FROM jupyterhub/jupyterhub
RUN adduser -q --gecos "" --disabled-password <username1>
RUN adduser -q --gecos "" --disabled-password <username2>
...
</code></pre>
<p>Build the container:</p>
<pre><code>$ docker build -t myjupyterhub .
</code></pre>
<p>Alternatively, you could create an <code>entrypoint.sh</code> script that creates the users. Something like this:</p>
<pre><code>#!/bin/bash
adduser -q --gecos "" --disabled-password <username1>
adduser -q --gecos "" --disabled-password <username2>
...
<start-jupyterhub>
</code></pre>
<p>Make it executable:</p>
<pre><code>$ chmod +x entrypoint.sh
</code></pre>
<p>Then on your Dockerfile, something like this:</p>
<pre><code>FROM jupyterhub/jupyterhub
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
...
ENTRYPOINT [ "/usr/local/bin/entrypoint.sh" ]
</code></pre>
<p>You could also play around with <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">ConfigMaps</a> and see if you can use them to add the users, but that means that you will have to understand more about how PAM is configured and what the <code>adduser -q --gecos "" --disabled-password <username1></code> command actually does, for example.</p>
|
<p>I'm trying to run <code>kubectl -f pod.yaml</code> but getting this error. Any hint?</p>
<pre><code>error: error validating "/pod.yaml": error validating data: [ValidationError(Pod): unknown field "imagePullSecrets" in io.k8s.api.core.v1.Pod, ValidationError(Pod): unknown field "nodeSelector" in io.k8s.api.core.v1.Pod, ValidationError(Pod): unknown field "tasks" in io.k8s.api.core.v1.Pod]; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<p>pod.yaml:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: gpu-pod-10.0.1
namespace: e6a5089f-8e9e-4647-abe3-b8d775079565
spec:
containers:
- name: main
image: bded587f4604
imagePullSecrets: ["testo", "awsecr-cred"]
nodeSelector:
kubernetes.io/hostname: 11-4730
tasks:
- name: traind
command: et estimate -e v/lat/exent_sps/enet/default_sql.spec.txt -r /out
completions: 1
inputs:
datasets:
- name: poa
version: 2018-
mountPath: /in/0
</code></pre>
| <p>You have an indentation error on your <code>pod.yaml</code> definition with <code>imagePullSecrets</code> and you need to specify the <code>- name:</code> for your <code>imagePullSecrets</code>. Should be something like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: gpu-test-test-pod-10.0.1.11-e8b74730
  namespace: test-e6a5089f-8e9e-4647-abe3-b8d775079565
spec:
  containers:
  - name: main
    image: test.io/tets/maglev-test-bded587f4604
  imagePullSecrets:
  - name: testawsecr-cred
  ...
</code></pre>
<p>Note that <code>imagePullSecrets:</code> is plural and an <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/#podspec-v1-core" rel="nofollow noreferrer">array</a> so you can specify multiple credentials to multiple registries.</p>
<p>If you are using Docker you can also specify multiple credentials in <code>~/.docker/config.json</code>.</p>
<p>If you have the same credentials in <code>imagePullSecrets:</code> and configs in <code>~/.docker/config.json</code>, the credentials are merged. </p>
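<p>In case it helps, the pull secrets referenced by <code>imagePullSecrets</code> are typically created like this (the registry, user, password and e-mail below are placeholders):</p>
<pre><code>kubectl create secret docker-registry testawsecr-cred \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>
</code></pre>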
|
<p>I am learning Docker and I did a simple exercise:</p>
<ul>
<li>prepared 2 machines </li>
<li>set up Leader and Worker on them </li>
<li>deployed 2 instances of simple web-app with stack command</li>
</ul>
<p>When I send an HTTP request to BOTH machines I get a correct reply.</p>
<p>That is strange to me. I thought only the Leader node should handle requests because there is some load balancing in Swarm. I thought if some node fails, Swarm should automatically redirect requests to another one, and the Leader node is where that happens. But it looks like Swarm works in a different way.</p>
<p>What is the idea behind Swarm clustering?
Is Kubernetes different?</p>
| <p>In Docker swarm the leader handles decisions on how to schedule the containers in your nodes and how to have them setup so that traffic is forwarded to them. However, the traffic doesn't go to the leader per se, but rather to the <a href="https://docs.docker.com/engine/swarm/ingress/" rel="nofollow noreferrer">Docker swarm ingress/network mesh</a></p>
<p>You do this through the command line with:</p>
<pre><code>$ docker service create \
--name <SERVICE-NAME> \
--publish published=8080,target=80 \
<IMAGE>
</code></pre>
<p>Then all your nodes will receive traffic on the published port and will get forwarded to the containers.</p>
<p><a href="https://i.stack.imgur.com/rIOyQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rIOyQ.png" alt="swarm"></a></p>
<p>In the case above, from an external load balancer, you could forward traffic to either port <code>80</code> (the container's exposed port) or port <code>8080</code> (the published port).</p>
<p>Kubernetes is very similar but not quite the same. <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Services</a> are exposed externally through a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">LoadBalancer</a> or <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort</a>. However, you can't get directly to the pod IP addresses from the outside because those are not seen on the outside, as opposed to <code>192.168.99.100:80</code> in the Docker swarm example above. Also, the traffic in Kubernetes doesn't go through the master (except when you are making a call to the kube-apiserver), but directly to the nodes.</p>
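<p>As a rough Kubernetes counterpart to the swarm example above (names and ports are purely illustrative), a NodePort Service makes every node listen on the same allocated port and forward to the matching pods:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80          # service port inside the cluster
    targetPort: 80    # container port
    nodePort: 30080   # port opened on every node (must be in the default 30000-32767 range)
</code></pre>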
|
<h2>Env</h2>
<p>I've set up a 2-node kubernetes cluster in custom environment (it's not Google Cloud, not AWS, not Azure) but it's backed by Amazon EC2 instances. So I have 2 <code>c1xlarge</code> (4 CPU, 8GB RAM, CentOS 7.4 v18.01) machines in US West region.</p>
<h2>Problem</h2>
<p>When I ssh to the kubernetes master machine a day after the setup, I see this:</p>
<pre><code>[administrator@d4191051 ~]$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>
<p>Could someone please review my setup and suggest what I could be doing wrong? I have been reading documentation and setups from other people for 3 weeks now but could not manage to have this cluster up and running in a stable way.</p>
<h2>Setup</h2>
<p>It does work though right after the setup described below (if not specified, the commands were run on both master and worker nodes):</p>
<pre><code>ssh [email protected] # master
ssh [email protected] # worker
[administrator@d4191051 ~]$ sudo vi /etc/hosts # master
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.40.50.61 56fa67ff
10.40.50.60 d4191051
[administrator@56fa67ff ~]$ sudo vi /etc/hosts # worker
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.40.50.60 d4191051
10.40.50.61 56fa67ff
# disable SELinux
cat /etc/sysconfig/selinux
setenforce 0
sudo sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
sudo su root
# disable swap memory
swapoff -a
vi /etc/fstab # comment out the swap line
yum update -y
# install docker on CentOS 7
yum install yum-utils device-mapper-persistent-data lvm2 -y
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce-18.06.1.ce-3.el7.x86_64 -y
# configure Kubernetes repository
vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
# install kubernetes
yum install kubelet kubeadm kubectl -y
reboot
ssh [email protected] # master
ssh [email protected] # worker
sudo su root
# start docker service
systemctl start docker && systemctl enable docker
systemctl start firewalld
# open Kubernetes ports in firewall on master
firewall-cmd --permanent --add-port=6443/tcp # Kubernetes API Server
firewall-cmd --permanent --add-port=2379-2380/tcp # etcd server client API
firewall-cmd --permanent --add-port=10250/tcp # Kubelet API
firewall-cmd --permanent --add-port=10251/tcp # kube-scheduler
firewall-cmd --permanent --add-port=10252/tcp # kube-controller-manager
firewall-cmd --permanent --add-port=10255/tcp # Read-Only Kubelet API
# open Kubernetes ports in firewall on worker
firewall-cmd --permanent --add-port=10250/tcp # Kubelet API
firewall-cmd --permanent --add-port=10255/tcp # Read-Only Kubelet API
firewall-cmd --permanent --add-port=30000-32767/tcp # NodePort Services
firewall-cmd --permanent --add-port=6783/tcp # Allows the node to join the overlay network that allows service discovery among nodes on a Docker Cloud account
#firewall-cmd --permanent --add-port=6783/udp # Allows the node to join the overlay network that allows service discovery among nodes on a Docker Cloud account
firewall-cmd --reload
Ctrl-D
# enable the br_netfilter kernel module
cat /proc/sys/net/bridge/bridge-nf-call-iptables
sudo modprobe br_netfilter
echo '1' | sudo tee /proc/sys/net/bridge/bridge-nf-call-iptables
sudo sysctl net.bridge.bridge-nf-call-iptables=1
# initialize Kubernetes cluster on master
[administrator@d4191051 ~]$ sudo kubeadm init --apiserver-advertise-address=10.40.50.60 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [d4191051 localhost] and IPs [10.40.50.60 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [d4191051 localhost] and IPs [10.40.50.60 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [d4191051 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.40.50.60]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.502676 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "d4191051" as an annotation
[mark-control-plane] Marking the node d4191051 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node d4191051 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: hx910e.xh7kl0zbcjqsktdv
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 10.40.50.60:6443 --token hx910e.xh7kl0zbcjqsktdv --discovery-token-ca-cert-hash sha256:39ee4baaf600d1872ef2482cfa2a895e21dacee92a831e3e3f0af2f0278db2d3
# configure the cluster for a non-root user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
echo "export KUBECONFIG=$HOME/admin.conf" | tee -a ~/.bashrc
sudo su root
# start kubernetes service
systemctl start kubelet && systemctl enable kubelet
# use the cgroupfs driver
docker info | grep -i cgroup
sed -i 's/cgroup-driver=systemd/cgroup-driver=cgroupfs/g' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# add --cgroup-driver=cgroupfs to KUBELET_CGROUP_ARGS=
vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload
systemctl restart kubelet
CtrlD
# deploy a pod network to the cluster (this article suggests to use flannel network https://chrislovecnm.com/kubernetes/cni/choosing-a-cni-provider/ )
[administrator@d4191051 ~]$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
# check the cluster nodes (wait until it's ready)
[administrator@d4191051 ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
d4191051 Ready master 3m56s v1.13.0
# join the Kubernetes pod network on worker
[administrator@56fa67ff ~]$ sudo kubeadm join 10.40.50.60:6443 --token hx910e.xh7kl0zbcjqsktdv --discovery-token-ca-cert-hash sha256:39ee4baaf600d1872ef2482cfa2a895e21dacee92a831e3e3f0af2f0278db2d3
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "10.40.50.60:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://10.40.50.60:6443"
[discovery] Requesting info from "https://10.40.50.60:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server
"10.40.50.60:6443"
[discovery] Successfully established connection with API Server "10.40.50.60:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "56fa67ff" as an annotation
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the master to see this node join the cluster.
# monitor kubernetes on master
[administrator@d4191051 ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
56fa67ff Ready <none> 30s v1.13.0
d4191051 Ready master 6m42s v1.13.0
[administrator@cfe9680b ~]$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-86c58d9df4-7d5xz 0/1 ContainerCreating 0 7m4s
kube-system coredns-86c58d9df4-nm2nw 0/1 ContainerCreating 0 7m4s
kube-system etcd-cfe9680b 1/1 Running 0 6m17s
kube-system kube-apiserver-cfe9680b 1/1 Running 0 6m2s
kube-system kube-controller-manager-cfe9680b 1/1 Running 0 6m20s
kube-system kube-flannel-ds-amd64-2p77k 1/1 Running 1 2m48s
kube-system kube-flannel-ds-amd64-vvvbx 1/1 Running 0 4m14s
kube-system kube-proxy-67sdh 1/1 Running 0 2m48s
kube-system kube-proxy-ptdpv 1/1 Running 0 7m4s
kube-system kube-scheduler-cfe9680b 1/1 Running 0 6m25s
</code></pre>
| <p>To access your cluster as a non-root user, you are doing the following steps (I am assuming you run them as the non-root user):</p>
<pre><code>mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
</code></pre>
<p>You're running the following command, which is not correct:</p>
<pre><code>echo "export KUBECONFIG=$HOME/admin.conf" | tee -a ~/.bashrc
</code></pre>
<p>And it should be:</p>
<pre><code>echo "export KUBECONFIG=/etc/kubernetes/admin.conf" | tee -a ~/.bashrc
</code></pre>
<p>Now check your config file in the <code>.kube</code> folder; it should look like this:</p>
<pre><code>[centos@ip-10-0-1-91 ~]$ ls -al $HOME/.kube
drwxr-xr-x. 3 centos centos 23 Dec 17 11:42 cache
-rw-------. 1 centos centos 5573 Dec 17 11:42 config
drwxrwxr-x. 3 centos centos 4096 Dec 17 11:42 http-cache
</code></pre>
<p>The owner should be your <code>non-root</code> user. If the owner is the root user, then you should run the first three commands as the non-root user and it will work.</p>
<p>Hope this helps.</p>
|
<p>I wanted to know what kubernetes does to check liveness and readiness of a pod and container by default.</p>
<p>I could find the document which mentions how I can add my custom probe and change the probe parameters like initial delay etc. However, I could not find the default probe method used by k8s.</p>
| <p>By default, Kubernetes starts to send traffic to a pod when all the containers inside the pod start, and restarts containers when they crash. While this can be <code>good enough</code> when you are starting out, you can make your deployment more robust by creating custom health checks.</p>
<blockquote>
<p>By default, Kubernetes just checks that the containers inside the pod are up and starts sending traffic. There is no default readiness or liveness check provided by Kubernetes.</p>
</blockquote>
<p><strong>Readiness Probe</strong></p>
<p>Let’s imagine that your app takes a minute to warm up and start. Your service won’t work until it is up and running, even though the process has started. You will also have issues if you want to scale up this deployment to have multiple copies. A new copy shouldn’t receive traffic until it is fully ready, but by default Kubernetes starts sending it traffic as soon as the process inside the container starts. By using a <code>readiness probe</code>, Kubernetes waits until the app is fully started before it allows the service to send traffic to the new copy.</p>
<p><strong>Liveness Probe</strong></p>
<p>Let’s imagine another scenario where your app has a nasty case of deadlock, causing it to hang indefinitely and stop serving requests. Because the process continues to run, by default Kubernetes thinks that everything is fine and continues to send requests to the broken pod. By using a liveness probe, Kubernetes detects that the app is no longer serving requests and restarts the offending pod.</p>
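<p>A minimal sketch of both probes on a container; the paths, ports and timings here are just examples to adapt to your app:</p>
<pre><code>containers:
- name: myapp
  image: myimage
  readinessProbe:          # gates traffic until the app is warmed up
    httpGet:
      path: /ready
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 5
  livenessProbe:           # restarts the container if it stops responding
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 15
    periodSeconds: 10
</code></pre>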
|
<p>I'm looking for a Prometheus metric that would allow me to monitor the time pods spend in the <code>terminating</code> state before vanishing into the void. </p>
<p>I've tried playing around with <code>kube_pod_container_status_terminated</code>, but it only seems to register pods once they finish the termination process and doesn't help me understand how long it takes to terminate a pod.<br>
I've also looked at <code>kube_pod_status_phase</code> which I found out about in this channel a while ago but it also seems to lack this insight.</p>
<p>I'm currently collecting metrics on my k8s workload using cAdvisor, kube-state-metrics and the prometheus node-exporter, but would happily consider additional collectors if they contain the desired data.<br>
A non-prometheus solution would also be great.<br>
Any ideas? Thanks!</p>
| <p>Kubernetes itself, Heapster and metrics-server don't provide such metrics, but you can get metrics close to what you've mentioned by installing <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube-state-metrics</a>. It has several pod metrics that reflect pods state:</p>
<pre><code>kube_pod_status_phase
kube_pod_container_status_terminated
kube_pod_container_status_terminated_reason
kube_pod_container_status_last_terminated_reason
</code></pre>
<p>You can find the full list of pods metrics, provided by <code>kube-state-metrics</code> in <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/Documentation/pod-metrics.md" rel="nofollow noreferrer">Documentation</a>.</p>
<p>There is also <a href="https://hub.docker.com/r/bitnami/kube-state-metrics/" rel="nofollow noreferrer">Bitnami Helm chart</a> that could simplify the installation of <code>kube-state-metrics</code>.</p>
|
<p>This is my first deployment YAML file. I'm testing k8s with minikube as an external cluster, and I would like to expose port 80 of the minikube cluster to port 8080 of the container (web service). This is my YAML:</p>
<pre><code>apiVersion: v1
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
items:
  ############ Services ###############
  - apiVersion: v1
    kind: Service
    metadata:
      name: kuard-80
      labels:
        component: webserver
        app: k8s-test
    spec:
      ports:
      - port: 80
        targetPort: 8080
        protocol: TCP
      loadBalancerIP: 192.168.99.100   # Minikube IP from "minikube ip"
      selector:
        component: webserver
      sessionAffinity: None
      type: LoadBalancer
  ############ Deployments ############
  - apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: kuard
      labels:
        component: webserver
        app: k8s-test
    spec:
      replicas: 1 # tells deployment to run 1 pod matching the template
      selector:
        matchLabels:
          component: webserver
      strategy:
        rollingUpdate:
          maxSurge: 25%
          maxUnavailable: 25%
        type: RollingUpdate
      template:
        metadata:
          labels:
            component: webserver
        spec:
          volumes:
          - name: "kuard-data"
            hostPath:
              path: "/var/lib/kuard"
          containers:
          - image: gcr.io/kuar-demo/kuard-amd64:1
            name: kuard
            volumeMounts:
            - mountPath: "/data"
              name: "kuard-data"
            livenessProbe:
              httpGet:
                path: /healthy
                port: 8080
              initialDelaySeconds: 5
              timeoutSeconds: 1
              periodSeconds: 10
              failureThreshold: 3
            ports:
            - containerPort: 8080
              protocol: TCP
          restartPolicy: Always
</code></pre>
<p>I expect port 80 to answer me on <a href="http://192.168.99.100" rel="nofollow noreferrer">http://192.168.99.100</a>. Where is the error? Here are the results of some commands (services and endpoints):</p>
<p>$ kubectl get service</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kuard-80 LoadBalancer 10.107.163.175 <pending> 80:30058/TCP 3m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 34d
</code></pre>
<p>$ kubectl get endpoints</p>
<pre><code>NAME ENDPOINTS AGE
kuard-80 172.17.0.7:8080 10m
kubernetes 192.168.99.100:8443 34d
</code></pre>
<p>Thanks for any help you can give me and excuse me if the question is stupid...</p>
| <p>Your service is of type <code>LoadBalancer</code>, which is only supported on cloud providers, hence your external IP is stuck in the pending state.</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kuard-80 LoadBalancer 10.107.163.175 <pending> 80:30058/TCP 3m
</code></pre>
<p>You can expose your service using <code>NodePort</code> in minikube. Following will be the yaml file for that:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: kuard-80
  labels:
    component: webserver
    app: k8s-test
spec:
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
  selector:
    component: webserver
  type: NodePort
</code></pre>
<p>Now, when you do <code>kubectl describe service kuard-80</code> you will be able to see a port of type <code>NodePort</code> whose value will be in between 30000-32767. You will be able to access your application using:</p>
<pre><code>http://<vm_ip>:<node_port>
</code></pre>
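<p>On minikube you can also let it print the reachable URL for you:</p>
<pre><code>minikube service kuard-80 --url
</code></pre>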
<p>Hope this helps</p>
|
<p>By definition, <code>kube_pod_container_status_waiting_reason</code> is supposed to capture reasons for a pod in Waiting status.</p>
<p>I have several pods in my kubernetes cluster which are in CrashLoopBackOff, but I don't see that reason captured by <code>kube_pod_container_status_waiting_reason</code>.
It only captures two reasons - ErrImagePull and ContainerCreating. </p>
<pre><code>~$ k get pods -o wide --show-all --all-namespaces | grep Crash
cattle-system cattle-cluster-agent-6f744c67cc-jlkjh 0/1 CrashLoopBackOff 2885 10d 10.233.121.247 k8s-4
cattle-system cattle-node-agent-6klkh 0/1 CrashLoopBackOff 2886 171d 10.171.201.127 k8s-2
cattle-system cattle-node-agent-j6r94 0/1 CrashLoopBackOff 2887 171d 10.171.201.110 k8s-3
cattle-system cattle-node-agent-nkfcq 0/1 CrashLoopBackOff 17775 171d 10.171.201.131 k8s-1
cattle-system cattle-node-agent-np76b 0/1 CrashLoopBackOff 2887 171d 10.171.201.89 k8s-4
cattle-system cattle-node-agent-pwn5v 0/1 CrashLoopBackOff 2859 171d 10.171.202.72 k8s-5
</code></pre>
<p>Running <code>sum by (reason) (kube_pod_container_status_waiting_reason)</code> in prometheus yields results:</p>
<pre><code>Element Value
{reason="ContainerCreating"} 0
{reason="ErrImagePull"} 0
</code></pre>
<p>I am running <code>quay.io/coreos/kube-state-metrics:v1.2.0</code> image of kube-state-metrics.</p>
<p>What am I missing? Why is the CrashLoopBackOff reason not showing up in the query?
I would like to set up an alert which finds pods in the waiting status along with the reason. So I am thinking of merging <code>kube_pod_container_status_waiting</code> to find the pods in the waiting status with <code>kube_pod_container_status_waiting_reason</code> to find the exact reason.</p>
<p>Please assist. Thank you!</p>
| <p>You are running into <a href="https://github.com/kubernetes/kube-state-metrics/issues/468" rel="nofollow noreferrer">this</a>. Basically, it looks like you are using <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube-state-metrics</a> <code>1.2.0</code> or earlier. You see that <code>ImagePullBackOff</code> and <code>CrashLoopBackOff</code> were added in <code>1.3.0</code>.</p>
<p>So update your image to:</p>
<pre><code>k8s.gcr.io/kube-state-metrics:v1.3.0
quay.io/coreos/kube-state-metrics:v1.3.0
</code></pre>
<p>or</p>
<pre><code>k8s.gcr.io/kube-state-metrics:v1.4.0
quay.io/coreos/kube-state-metrics:v1.4.0
</code></pre>
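<p>Once you are on a version that exposes the reason, a possible starting point for the alert you describe could look like this; treat it as a sketch to adapt to your labels and thresholds:</p>
<pre><code>sum by (namespace, pod, container, reason) (kube_pod_container_status_waiting_reason{reason="CrashLoopBackOff"}) > 0
</code></pre>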
|
<p>My understanding is that, since a pod is defined as a group of containers which share resources such as storage and network, it could be thought of as a namespace on a worker node. That is to say, do different pods represent different namespaces on a worker node machine?</p>
<p>Or is a pod actually a process which is first started (or run or executed) by the deployment and which then starts the containers inside it?
Can I see it through the ps command? (I did try it; there are only Docker containers running, so I am ruling out the pod being a process.)</p>
| <p>If we start from the basics</p>
<p><strong>What is a namespace (in a generic manner)?</strong></p>
<blockquote>
<p>A namespace is a declarative region that provides a scope to the identifiers (the names of types, functions, variables, etc) inside it. Namespaces are used to organize code into logical groups and to prevent name collisions that can occur especially when your code base includes multiple libraries.</p>
</blockquote>
<p><strong>What is a Pod (in K8s)?</strong></p>
<blockquote>
<p>A pod is a group of one or more containers (such as Docker containers), with shared storage/network, and a specification for how to run the containers. A pod’s contents are always co-located and co-scheduled, and run in a shared context. A pod models an application-specific “logical host” - it contains one or more application containers which are relatively tightly coupled — in a pre-container world, being executed on the same physical or virtual machine would mean being executed on the same logical host.</p>
<p>While Kubernetes supports more container runtimes than just Docker, Docker is the most commonly known runtime, and it helps to describe pods in Docker terms.</p>
<p>The shared context of a pod is a set of <strong>Linux namespaces</strong>, cgroups, and potentially other facets of isolation - the same things that isolate a Docker container. Within a pod’s context, the individual applications may have further sub-isolations applied.
<a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/#what-is-a-pod" rel="noreferrer">Some deep dive into Pods</a></p>
</blockquote>
<p><strong>What is a Namespace (in k8s terms)?</strong></p>
<blockquote>
<p>Namespaces are intended for use in environments with many users spread across multiple teams, or projects.</p>
<p>Namespaces provide a scope for names. Names of resources need to be unique within a namespace, but not across namespaces.</p>
<p>Namespaces are a way to divide cluster resources between multiple users.</p>
</blockquote>
<p>So I think it suffices to say:</p>
<p><strong>Yes, Pods have a namespace</strong>:</p>
<blockquote>
<p>Pods kind of represent a namespace but on a container level (where they share the same context of networks, volumes/storage only among a set of containers)</p>
<p>But namespaces (in terms of K8s) are a bigger level of isolation -- on a cluster level which shared by all the containers (services, deployments, dns-names, IPs, config-maps, secrets, roles, etc).</p>
</blockquote>
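<p>One way to see that shared pod-level context in practice is a pod with two containers that talk over <code>localhost</code>; this is a purely illustrative sketch, not something from the original answer:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: shared-context-demo
spec:
  containers:
  - name: web
    image: nginx
  - name: probe
    image: busybox
    # reaches the nginx container over the shared network namespace, no Service needed
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null && echo reached nginx over localhost; sleep 10; done"]
</code></pre>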
<p>Also you should see this <a href="https://stackoverflow.com/q/33472741/5833562">link</a></p>
<p>Hope this clears a bit of fog on the issue.</p>
|
<p>I have a PHP application running as a Docker container based on Alpine Linux inside Kubernetes. Everything went well until I tried removing the container with the test database and replacing it with <a href="https://azure.microsoft.com/cs-cz/services/postgresql/" rel="nofollow noreferrer">Azure PostgreSQL</a>. This led to a significant latency increase, from under 250ms to above 1500ms.</p>
<p>According to profiler most time is spent in <a href="http://php.net/manual/en/pdo.construct.php" rel="nofollow noreferrer">PDO constructor</a> which is establishing connection to database. <strong>The SQL queries themselves, after connection was established, then run in about 20ms.</strong></p>
<ul>
<li><strong>I tried using IP instead of address and it was still slow.</strong></li>
<li>I tried connecting from container using psql and it was slow (see full command below) </li>
<li>I tried DNS resolution using <a href="https://pkgs.alpinelinux.org/package/edge/main/x86/bind-tools" rel="nofollow noreferrer">bind-tools</a> and it was fast. </li>
<li>Database runs in same region as Kubernetes nodes, tried even same resource group, different network settings and nothing helped.</li>
<li>I tried requiring/disabling SSL mode on both client and server</li>
<li>I tried repeatedly running 'select 1' inside an already established connection and it was fast (average 1.2ms, median 0.9ms) (see full query below)</li>
</ul>
<p>What can cause such a latency?
How can I further debug/investigate this issue?</p>
<hr>
<p>psql command used to try connection:</p>
<pre><code>psql "sslmode=disable host=host dbname=postgres [email protected] password=password" -c "select 1"
</code></pre>
<hr>
<p>Query speed</p>
<pre><code>\timing
SELECT;
\watch 1
</code></pre>
| <p>As far as I can tell it is caused by Azure specific authentication on top of PostgreSQL. Unfortunately Azure support was not able to help from their side.</p>
<p>Using connection pool (<a href="https://pgbouncer.github.io/" rel="nofollow noreferrer">PgBouncer</a>) solves this problem. It is another piece of infrastructure we have to maintain (docker file, config/secret management, etc.), which we hoped to outsource to cloud provider.</p>
|
<p>How do I resolve the error "no module named pandas" when one node (in Airflow's DAG) is successful in using it (pandas) and the other is not?</p>
<p>I am unable to deduce why I am getting the error "no module named pandas".</p>
<p>I have checked via <code>pip3 freeze</code> and yes, the desired pandas version does show up.</p>
<p>I have deployed this using docker on a kubernetes cluster.</p>
| <p><a href="https://github.com/apache/incubator-airflow/blob/v1-10-stable/setup.py#L292" rel="nofollow noreferrer">Pandas is generally required</a>, and is sometimes used in some hooks to return dataframes. Well, it's possible that Airflow was installed with <code>pip</code> rather than <code>pip3</code>, so pandas was added as a Python 2 module and not a Python 3 module (though using <code>pip</code> should have installed Pandas, judging by the <a href="https://github.com/apache/incubator-airflow/blob/v1-10-stable/setup.py#L292" rel="nofollow noreferrer"><code>setup.py</code></a>).</p>
<p>Which Operator in your DAG is giving this error?
Do you have any PythonVirtualEnvironmentOperators or BashOperators running <code>python</code> from the command line (and thus possibly not sharing the same environment that you're checking has <code>pandas</code>)?</p>
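<p>Since this runs in containers on Kubernetes, one way to confirm which interpreter/environment the failing task actually uses is to check inside the running pod; the pod name below is a placeholder:</p>
<pre><code># compare the Python 2 and Python 3 environments inside the worker container
kubectl exec -it <airflow-worker-pod> -- python -c "import sys, pandas; print(sys.executable, pandas.__version__)"
kubectl exec -it <airflow-worker-pod> -- python3 -c "import pandas; print(pandas.__version__)"
</code></pre>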
|
<p>I got the following service defined: </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: customerservice
spec:
  type: NodePort
  selector:
    app: customerapp
  ports:
  - protocol: TCP
    port: 31004
    nodePort: 31004
    targetPort: 8080
</code></pre>
<p>Current situation: I am able to hit the pod via the service IP.
Now my goal is to reach the <code>customerservice</code> via the name of the service, which does not work right now. So I would simply type <code>http://customerservice:31004</code> instead of <code>http://<IP>:31004</code>.</p>
| <p>DNS resolution of services is ONLY available within the cluster, provided by CoreDNS/KubeDNS.</p>
<p>Should you wish to have access to this locally on your machine, you'd need to use another tool. One such tool is <code>kubefwd</code>:</p>
<p><a href="https://github.com/txn2/kubefwd" rel="nofollow noreferrer">https://github.com/txn2/kubefwd</a></p>
<p>A slightly simpler solution is to use port-forward, which gives you local access to a single service.</p>
<p><code>kubectl port-forward --namespace=whatever svc/service-name port</code></p>
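<p>For example, using the service from your question (the local port is arbitrary; forwarding to a service needs a reasonably recent kubectl):</p>
<pre><code># forward local port 31004 to the service's port 31004 (which targets the pod's 8080)
kubectl port-forward --namespace=default svc/customerservice 31004:31004

# in another terminal
curl http://localhost:31004
</code></pre>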
<p>EDIT:// I've made the assumption that you want to use the service DNS locally, as I'm assuming by saying:</p>
<blockquote>
<p>I would simply type <a href="http://customerservice:31004" rel="nofollow noreferrer">http://customerservice:31004</a></p>
</blockquote>
<p>is in the context of your web browser.</p>
|
<p>I would like to programmatically create GKE clusters (and resize them etc.). To do so, I could use the gcloud commands, but I found this java library that seems to imply that one can create/resize/delete clusters all from within java:
<a href="https://developers.google.com/api-client-library/java/apis/container/v1" rel="nofollow noreferrer">https://developers.google.com/api-client-library/java/apis/container/v1</a> library
(Note: This is a DIFFERENT library from the Java libraries for Kubernetes, which is well documented. The above link is for creating the INITIAL cluster, not starting up / shutting down pods etc.)</p>
<p>However, I couldn't find any examples/sample code on how to do some basic commands, eg</p>
<p>a) get list of clusters and see if a cluster of a particular name is running
b) start up cluster of a particular name in a certain region with a certain number of nodes of a certain instance type
c) wait until the cluster has fully started up from (b)
d) etc.</p>
<p>Any one have any examples of using the java library to accomplish this?</p>
<p>Also, is there a "generic" java library for any Kubernetes cluster management (not just the Google GKE one)? I couldn't find any. Again, there are libraries for pod management, but I couldn't find any for generic Kubernetes <em>cluster</em> management (ie create cluster etc.)</p>
| <p>You could consider using the <a href="https://www.terraform.io/docs/providers/google/r/container_cluster.html" rel="nofollow noreferrer">Terraform GKE provider</a> to programmatically create and manage GKE clusters.
It is idempotent and tracks state. I'd consider it to be more stable than any standalone library implementation. Besides, this is a typical use case for Terraform.</p>
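<p>A minimal sketch of what that looks like; the project, cluster name, zone and machine type below are placeholders, and the field names match the 2018-era Google provider:</p>
<pre><code>cat <<'EOF' > main.tf
provider "google" {
  project = "my-project-id"
  region  = "us-central1"
}

resource "google_container_cluster" "primary" {
  name               = "my-gke-cluster"
  zone               = "us-central1-a"
  initial_node_count = 3

  node_config {
    machine_type = "n1-standard-1"
  }
}
EOF

terraform init
terraform plan
terraform apply    # later changes are applied with another `terraform apply`
</code></pre>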
|
<p>Currently I'm using Kubernetes version 1.11.+. Previously I was always using the following command for my <em>cloud build</em> scripts:</p>
<pre><code>- name: 'gcr.io/cloud-builders/kubectl'
id: 'deploy'
args:
- 'apply'
- '-f'
- 'k8s'
- '--recursive'
env:
- 'CLOUDSDK_COMPUTE_ZONE=${_REGION}'
- 'CLOUDSDK_CONTAINER_CLUSTER=${_CLUSTER_NAME}'
</code></pre>
<p>And the commands were working as expected; at that time I was using k8s version 1.10.+. However, recently I got the following error:</p>
<blockquote>
<ul>
<li>spec.clusterIP: Invalid value: "": field is immutable</li>
<li>metadata.resourceVersion: Invalid value: "": must be specified for an update</li>
</ul>
</blockquote>
<p>So I'm wondering if this is an expected behavior for Service resources?</p>
<p>Here's my YAML config for my service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: {name}
namespace: {namespace}
annotations:
beta.cloud.google.com/backend-config: '{"default": "{backend-config-name}"}'
spec:
ports:
- port: {port-num}
targetPort: {port-num}
selector:
app: {label}
environment: {env}
type: NodePort
</code></pre>
| <p>This is due to <a href="https://github.com/kubernetes/kubernetes/issues/71042" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/71042</a></p>
<p><a href="https://github.com/kubernetes/kubernetes/pull/66602" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/pull/66602</a> should be picked to 1.11</p>
|
<p>Should I use URL's at the root of my application like so:</p>
<pre><code>/ready
/live
</code></pre>
<p>Should they both be grouped together like so:</p>
<pre><code>/status/ready
/status/live
</code></pre>
<p>Should I use <a href="https://www.rfc-editor.org/rfc/rfc5785" rel="nofollow noreferrer">RFC5785</a> and put them under the <code>.well-known</code> sub-directory like so:</p>
<pre><code>/.well-known/status/ready
/.well-known/status/live
</code></pre>
<p>If I do this, my understanding is that I have to register the <code>status</code> assignment with the official <a href="https://www.iana.org/assignments/well-known-uris/well-known-uris.xhtml" rel="nofollow noreferrer">IANA</a> registry.</p>
<p>Or is there some other scheme? I'm looking for a common convention that people use.</p>
| <p>The Kubernetes docs use <code>/healthz</code>, which I'd say is advisable to follow; but you really can use whatever you want.</p>
<p>I believe <code>healthz</code> is used to keep it in line with <code>zpages</code>, which are described by OpenCensus:</p>
<p><a href="https://opencensus.io/zpages/" rel="nofollow noreferrer">https://opencensus.io/zpages/</a></p>
|
<p>I have setup the kubernetes with one master and two Workers, but I am facing one issue.</p>
<p>I created the Apache pod; it was deployed on worker1 automatically by the scheduler and works fine. When I stop the worker1 machine, the pod should ideally be recreated on worker2. The problem is that it takes around 7 minutes for it to come online on worker2.</p>
<p>Is there any way to fail the pod over without any downtime?</p>
| <p>There will be some downtime unless you run multiple replicas (Apache replicas) with a Kubernetes Service forwarding to them; that is generally the recommended architecture for HTTP/TCP services.</p>
<p>However, if you need faster response you could tweak:</p>
<ul>
<li><code>--node-status-update-frequency</code> on the kubelet. (Default 10 seconds)</li>
<li><code>--kubelet-timeout</code> on the kube-apiserver, which defaults to a low 5 seconds.</li>
<li><code>–-node-monitor-period</code> on the kube-controller-manager. Defaults to 5 seconds.</li>
<li><code>-–node-monitor-grace-period</code> on the kube-controller-manager. Defaults to 40 seconds.</li>
<li><code>-–pod-eviction-timeout</code> on the kube-controller-manager. Defaults to 5 minutes.</li>
</ul>
<p>You can try something like this (a sketch of where these flags are set on a kubeadm-style cluster follows the list):</p>
<ul>
<li>kubelet: <code>--node-status-update-frequency=4s</code> (from 10s)</li>
<li>kube-controller-manager: <code>--node-monitor-period=2s</code> (from 5s)</li>
<li>kube-controller-manager: <code>--node-monitor-grace-period=16s</code> (from 40s)</li>
<li>kube-controller-manager: <code>--pod-eviction-timeout=30s</code> (from 5m)</li>
</ul>
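<p>If it helps, on a kubeadm-provisioned cluster these settings roughly land in the following places (paths assume kubeadm defaults; adjust for your own setup and distro):</p>
<pre><code># kubelet, on every node: set the flag via KUBELET_EXTRA_ARGS
echo 'KUBELET_EXTRA_ARGS=--node-status-update-frequency=4s' | sudo tee /etc/default/kubelet
sudo systemctl daemon-reload && sudo systemctl restart kubelet

# kube-controller-manager, on the master: edit the static pod manifest and add
#   --node-monitor-period=2s
#   --node-monitor-grace-period=16s
#   --pod-eviction-timeout=30s
# the kubelet restarts the controller-manager pod automatically once the file is saved
sudo vi /etc/kubernetes/manifests/kube-controller-manager.yaml
</code></pre>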
|
<p>I'm currently exploring running an Istio / Kubernetes cluster on AWS using EKS. I would like to be able to assign a different IAM role to each service running in the cluster to limit the AWS privileges of each service.</p>
<p>In non-Istio Kubernetes clusters this facility is provided by projects such as <a href="https://github.com/jtblin/kube2iam" rel="nofollow noreferrer">kube2iam</a> but this doesn't seem ideal in the Istio world as <code>kube2iam</code> relies on <code>iptables</code> rules and Istio is already using <code>iptables</code> rules to divert all outbound traffic to the Envoy sidecar.</p>
<p>The Istio <a href="https://istio.io/docs/concepts/security/" rel="nofollow noreferrer">security documentation</a> says that identity model caters for different underlying implementations and on AWS that implementation is IAM:</p>
<blockquote>
<p>In the Istio identity model, Istio uses the first-class service identity to determine the identity of a service. This gives great flexibility and granularity to represent a human user, an individual service, or a group of services. On platforms that do not have such identity available, Istio can use other identities that can group service instances, such as service names.</p>
<p>Istio service identities on different platforms:</p>
<p>Kubernetes: Kubernetes service account<br />
GKE/GCE: may use GCP service account<br />
GCP: GCP service account<br />
AWS: AWS IAM user/role account</p>
</blockquote>
<p>But I haven't come across any additional documentation about how to assign IAM roles to Istio <a href="https://istio.io/docs/concepts/security/#servicerole" rel="nofollow noreferrer">ServiceRoles</a>.</p>
<p>Has anyone found a solution to this?</p>
<p>UPDATE: See <a href="https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html" rel="nofollow noreferrer">IRSA</a></p>
| <p>I'm also struggling with this and have found little help. I did have success with this person's suggestion:
<a href="https://groups.google.com/forum/m/#!topic/istio-users/3-fp2JPb2dQ" rel="nofollow noreferrer">https://groups.google.com/forum/m/#!topic/istio-users/3-fp2JPb2dQ</a></p>
<p>I was having no luck getting kube2iam working until I added that serviceentry (see below or follow link)</p>
<p>Basically you add this</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: apipa
spec:
hosts:
- 169.254.169.254
ports:
- number: 80
name: http
protocol: HTTP
resolution: DNS
location: MESH_EXTERNAL
</code></pre>
<p>From looking at the istio-proxy sidecar before applying the ServiceEntry, you could see lots of 404 errors in the log, with paths that all looked like AWS API calls. After adding the ServiceEntry those turned into 200s. </p>
<p>UPDATE....
Later I found out that this is an expected requirement when using Istio for any communication out of the mesh. See <a href="https://istio.io/docs/concepts/traffic-management/#service-entries" rel="nofollow noreferrer">https://istio.io/docs/concepts/traffic-management/#service-entries</a> </p>
|
<p>I've been trying to use the below to expose my application to a public IP. This is being done on Azure. The public IP is generated but when I browse to it I get nothing. </p>
<p>This is a Django app which runs the container on Port 8000. The service runs at Port 80 at the moment but even if I configure the service to run at port 8000 it still doesn't work.</p>
<p>Is there something wrong with the way my service is defined?</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: web
labels:
app: hmweb
spec:
ports:
- port: 80
selector:
app: hmweb
tier: frontend
type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: hmweb-deployment
labels:
app: hmweb
spec:
replicas: 1
selector:
matchLabels:
app: hmweb
template:
metadata:
labels:
app: hmweb
spec:
containers:
- name: hmweb
image: nw_webimage
envFrom:
- configMapRef:
name: new-config
command: ["/bin/sh","-c"]
args: ["gunicorn saleor.wsgi -w 2 -b 0.0.0.0:8000"]
ports:
- containerPort: 8000
imagePullSecrets:
- name: key
</code></pre>
<p>Output of kubectl describe service web (name of service:)</p>
<pre><code>Name: web
Namespace: default
Labels: app=hmweb
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"hmweb"},"name":"web","namespace":"default"},"spec":{"ports":[{"port":...
Selector: app=hmweb
Type: LoadBalancer
IP: 10.0.86.131
LoadBalancer Ingress: 13.69.127.16
Port: <unset> 80/TCP
TargetPort: 8000/TCP
NodePort: <unset> 31827/TCP
Endpoints: 10.244.0.112:8000
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 8m service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 7m service-controller Ensured load balancer
</code></pre>
| <p>The reason is that your Service has two selectors, <code>app: hmweb</code> and <code>tier: frontend</code>, while your Deployment's pods only carry the single label <code>app: hmweb</code>. When the Service is created it cannot find any pods that have both labels, so it does not connect to any pods. Also, since your container runs on port <code>8000</code>, you must define <code>targetPort</code> with that container port; otherwise it defaults to the same value as <code>port</code>, i.e. the <code>port: 80</code> you defined in your service.</p>
<p>The correct yaml for your deployment is:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: web
labels:
app: hmweb
spec:
ports:
- port: 80
targetPort: 8000
protocol: TCP
selector:
app: hmweb
type: LoadBalancer
</code></pre>
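<p>After applying that, you can confirm the Service now selects the pod and that traffic gets through (the IP below is the LoadBalancer Ingress from your describe output):</p>
<pre><code># should now list your pod IP on port 8000
kubectl get endpoints web

# hit the external load balancer on port 80
curl -v http://13.69.127.16/
</code></pre>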
<p>Hope this helps.</p>
|
<p>I have a Kubernetes HA environment with three masters. As a test, I shut down two masters (killed the apiserver/kcm/scheduler processes), and the single remaining master still worked fine. I could use kubectl to create a deployment successfully, and pods were scheduled to different nodes and started. So can anyone explain why an odd number of masters is advised? Thanks.</p>
| <p>Because if you have an even number of servers, it's a lot easier to end up in a situation where the network breaks and you have exactly 50% on each side. With an odd number, you can't (easily) have a situation where more than one partition in the network thinks it has majority control. For example, with 3 members you can lose 1 and the remaining 2 still form a majority; with 4 members you can still only afford to lose 1, because a 2-2 split leaves neither side with a majority.</p>
|
<p>What is the max size allowed for an environment variable (pod->container->Env) in Kubernetes, assuming base Ubuntu containers? I am unable to find the relevant documentation. The question might seem stupid, but I do need the info to make my design robust. </p>
| <p>So at bare minimum there is some 1,048,576 byte limitation imposed:</p>
<blockquote>
<p>The ConfigMap "too-big" is invalid: []: Too long: must have at most 1048576 characters</p>
</blockquote>
<p>which I generated as:</p>
<pre class="lang-sh prettyprint-override"><code>cat > too-big.yml<<FOO
apiVersion: v1
kind: ConfigMap
metadata:
name: too-big
data:
kaboom.txt: |
$(python -c 'print("x" * 1024 * 1024)')
FOO
</code></pre>
<p>And when I try that same stunt with a Pod, I'm met with a very similar outcome:</p>
<pre><code>containers:
- image: ubuntu:18.10
env:
- name: TOO_BIG
value: |
$(python -c the same print)
</code></pre>
<blockquote>
<p>standard_init_linux.go:178: exec user process caused "argument list too long"</p>
</blockquote>
<p>So I would guess it's somewhere in between those two numbers: 0 and 1048576</p>
<p>That said, as the <a href="https://stackoverflow.com/questions/1078031/what-is-the-maximum-size-of-an-environment-variable-value">practically duplicate question</a> answered, you are very, very likely solving the wrong problem. The very fact that you have to come to a community site to ask such a question means you are bringing risk to your project that it will work one way on Linux, another way on docker, another way on kubernetes, and a different way on macOS.</p>
|
<p>I need to be able to assign custom environment variables to each replica of a pod. One variable should be some random uuid, another unique number. How is it possible to achieve? I'd prefer continue using "Deployment"s with replicas. If this is not feasible out of the box, how can it be achieved by customizing replication controller/controller manager? Are there hooks available to achieve this?</p>
| <p>You can use the downward API to inject the metadata.uid of the pod as an envvar, which is unique per pod</p>
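<p>A minimal sketch of what that looks like (the image name is a placeholder); the pod name is also unique per replica, so it is exposed here as well. If you need a strictly sequential number per replica, a Deployment can't give you that because its replicas are unordered; you'd have to switch to a StatefulSet and derive the ordinal from the pod name:</p>
<pre><code>cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myorg/myapp:latest
        env:
        - name: POD_UID              # random uuid, unique per pod
          valueFrom:
            fieldRef:
              fieldPath: metadata.uid
        - name: POD_NAME             # also unique per pod
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
EOF
</code></pre>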
|
<p>I have a script that deploys my application to my kubernetes cluster. However, if my current kubectl context is pointing at the wrong cluster, I can easily end up deploying my application to a cluster that I did not intend to deploy it to. What is a good way to check (from inside a script) that I'm deploying to the right cluster?</p>
<p>I don't really want to hardcode a specific kubectl context name, since different developers on my team have different conventions for how to name their kubectl contexts.</p>
<p>Instead, I'd like something more like <code>if $(kubectl get cluster-name) != "expected-cluster-name" then error</code>.</p>
| <pre><code>#!/bin/bash
if [ "$(kubectl config current-context)" != "your-cluster-name" ]
then
    echo "Wrong cluster context, aborting!" >&2
    exit 1
fi
echo "Do some kubectl command"
</code></pre>
<p>The script above gets the current context and matches it against the expected <code>your-cluster-name</code>. On a mismatch it prints an error and exits; otherwise it runs the desired kubectl command.</p>
|
<p>We have a Google Cloud project with several VM instances and also Kubernetes cluster.</p>
<p>I am able to easily access Kubernetes services with <code>kubefwd</code> and I can <code>ping</code> them and also <code>curl</code> them. The problem is that <code>kubefwd</code> works only for Kubernetes, but not for other VM instances. </p>
<p>Is there a way to mount the network locally, so I could <code>ping</code> and <code>curl</code> any instance without it having public IP and with DNS the same as inside the cluster?</p>
| <p>I would highly recommend rolling a vpn server like openvpn. You can also run this inside of the Kubernetes Cluster.</p>
<p>I have a <code>make install</code> ready repo for ya to check out at <a href="https://github.com/mateothegreat/k8-byexamples-openvpn" rel="nofollow noreferrer">https://github.com/mateothegreat/k8-byexamples-openvpn</a>.</p>
<p>Basically openvpn is running inside of a container (inside of a pod) and you can set the routes that you want the client(s) to be able to see.</p>
<p>I would not rely on <code>kubefwd</code> as it isn't production grade and will give you issues with persistent connections.</p>
<p>Hope this helps ya out.. if you still have questions/concerns please reach out.</p>
|
<p>I have a Kubernetes v1.13 cluster with Calico + flannel as CNI. All Nodes have a publicly routable ip address and are running Ubuntu 16.04.</p>
<p>Some Nodes are located in a company network, being both located in the LAN and DMZ, and therefore having access to internal services while still being publicly accessible. Others are hosted VMs at a cloud provider.</p>
<p><a href="https://i.stack.imgur.com/WDA7L.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WDA7L.png" alt="Cluster"></a></p>
<p>Consider the simplified example above. I want a Kubernetes Pod to access <code>Internal Server C</code> (which is just a regular server and not part of the cluster). I could enforce the Pod to be scheduled on the internal <code>Node B</code> only, but as there is only a low latency and bandwidth required for the connection, and there is way more resources on <code>Node A</code>, I would prefer to use <code>Node B</code> just as some kind of <em>gateway</em>. (Consider several <code>Node B</code>s, so there is actually no SPOF).</p>
<p>My current approach is to use a DaemonSet with a Node Selector targeting all <em>internal</em> (<em>B</em>) Nodes, defining an HAProxy Pod. Those HAproxy instances can be reached as a Kubernetes Service and forward requests to the internal destination services.</p>
<p>Do you see a better or more straightforward way to realize the connection from a Pod located at any Node to a target that can only be reached by a subset of Nodes?</p>
| <p>Based on what you say here:</p>
<blockquote>
<p>I could enforce the Pod to be scheduled on the internal Node B only, but as there is only a low latency and bandwidth required for the connection, and there is way more resources on Node A, I would prefer to use Node B just as some kind of gateway.</p>
</blockquote>
<p>I think what you're looking for is an <a href="http://www.cnblogs.com/pinganzi/p/7389854.html" rel="nofollow noreferrer">Ambassador</a> pattern. Basically you would create this kind of containers located in your <code>B</code> zone, and your traffic would go to this containers/pods using a <code>ClusterIP</code> service since it's within the cluster.</p>
<p>Then, these containers would run a <strong>proxy</strong> inside them (similar to what you have now in your DaemonSet) that routes the traffic transparently to the regular server you're targeting. </p>
<p>Other links that may be useful could be <a href="https://learn.microsoft.com/en-us/azure/architecture/patterns/ambassador" rel="nofollow noreferrer">this from MS</a> or <a href="https://www.slideshare.net/luebken/container-patterns" rel="nofollow noreferrer">this slideshows (p.42)</a>.</p>
<p>If this presents a big advantage against what you have already running I'm not sure, but I do prefer to work with just pods and minimize other components if it's possible.</p>
|
<p>Please help me sort out access to my simple application.
I created a YAML manifest for the application:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: myapp-test
spec:
replicas: 2
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: gcr.io/kubernetes-e2e-test-images/echoserver:2.1
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: myapp-service
spec:
selector:
app: myapp
ports:
- name: http
protocol: TCP
port: 80
targetPort: 8080
type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nginx-ingress
spec:
rules:
- host: myapp.com
http:
paths:
- path: /
backend:
serviceName: myapp-service
servicePort: 80
- path: /hello
backend:
serviceName: myapp-service
servicePort: 80
</code></pre>
<p>Then I created k8s cluster via kops, like this, all services k8s have risen, I can enter the master:</p>
<pre><code>kops create cluster \
  --node-count=2 \
  --node-size=t2.micro \
  --master-size=t2.micro \
  --master-count=1 \
  --zones=us-east-1a \
  --name=${KOPS_CLUSTER_NAME}
</code></pre>
<p>In the end, I can't reach the application on port 80; it says the connection is refused!
Can someone tell me what the problem is? The YAML above works fully, but only in a minikube environment.</p>
| <p>Indeed you have created an Ingress resource, but I presume you have not first deployed the NGINX Ingress Controller for your self-managed cluster on AWS. How to do this in general is explained <a href="https://kubernetes.github.io/ingress-nginx/deploy/#aws" rel="nofollow noreferrer">here</a>.</p>
<p><strong>In the case</strong> of a Kubernetes cluster bootstrapped with <strong>Kops</strong>, things are more complex, and it requires you to modify an existing cluster to use a dedicated kops add-on, <code>kube-ingress-aws-controller</code>, as explained on their github project page <a href="https://github.com/kubernetes/kops/tree/9da7daf7b6ecb6495b77db21f5f35636529634b1/addons/kube-ingress-aws-controller" rel="nofollow noreferrer">here</a>.</p>
<p>In its current form your app can be reached only via the Node/AWS instance external IP, on a port assigned from the default NodePort range (30000-32767). You can check the currently assigned port via <code>kubectl get svc myapp-service</code>, but this requires opening it first on the firewall (the default inbound rules deny all traffic apart from SSH). Based on your deploy/service manifest files:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
myapp-service NodePort 100.64.187.80 <none> 80:32076/TCP 37m
</code></pre>
<p>with port 32076 open in inbound rules of Security Group assigned to my instance I can now reach app on NodePort:</p>
<pre><code>curl <node_external_ip>:32076
Hostname: myapp-test-f87bcbd44-8nxpn
Pod Information:
-no pod information available-
Server values:
</code></pre>
|
<h2>Context</h2>
<p>We currently have 3 stable clusters on kubernetes(v1.8.7). These clusters were created by an external team which is no longer available and we have limited documentation. We are trying to upgrade to a higher stable version(v1.13.0). We're aware that we need to upgrade 1 version at a time so 1.8 -> 1.9 -> 1.10 & so on.</p>
<h3>Solved Questions</h3>
<ol>
<li>Any pointers on how to upgrade from 1.8 to 1.9 ?</li>
<li><p>We tried to install kubeadm v1.8.7 & run <code>kubeadm upgrade plan</code>, but it fails with output -</p>
<p>[preflight] Running pre-flight checks
couldn't create a Kubernetes client from file "/etc/kubernetes/admin.conf": failed to load admin kubeconfig [open /etc/kubernetes/admin.conf: no such file or directory]<br>
we can not find the file admin.conf. Any suggestions on how we can regenerate this or what information would it need ?</p></li>
</ol>
<h3>New Question</h3>
<p>Since we now have the admin.conf file, we installed kubectl,kubeadm and kubelet v 1.9.0 -<br>
<code>apt-get install kubelet=1.9.0-00 kubeadm=1.9.0-00 kubectl=1.9.0-00</code>. </p>
<p>When I run <code>kubeadm upgrade plan v1.9.0</code><br>
I get </p>
<pre><code>root@k8s-master-dev-0:/home/azureuser# kubeadm upgrade plan v1.9.0
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/health] FATAL: [preflight] Some fatal errors occurred:
[ERROR APIServerHealth]: the API Server is unhealthy; /healthz didn't return "ok"
[ERROR MasterNodesReady]: couldn't list masters in cluster: Get https://<k8s-master-dev-0 ip>:6443/api/v1/nodes?labelSelector=node-role.kubernetes.io%2Fmaster%3D: dial tcp <k8s-master-dev-0 ip>:6443: getsockopt: connection refused
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
root@k8s-master-dev-0:/home/azureuser# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
heapster-75f8df9884-nxn2z 2/2 Running 0 42d
kube-addon-manager-k8s-master-dev-0 1/1 Running 2 1d
kube-addon-manager-k8s-master-dev-1 1/1 Running 4 123d
kube-addon-manager-k8s-master-dev-2 1/1 Running 2 169d
kube-apiserver-k8s-master-dev-0 1/1 Running 100 1d
kube-apiserver-k8s-master-dev-1 1/1 Running 4 123d
kube-apiserver-k8s-master-dev-2 1/1 Running 2 169d
kube-controller-manager-k8s-master-dev-0 1/1 Running 3 1d
kube-controller-manager-k8s-master-dev-1 1/1 Running 4 123d
kube-controller-manager-k8s-master-dev-2 1/1 Running 4 169d
kube-dns-v20-5d9fdc7448-smf9s 3/3 Running 0 42d
kube-dns-v20-5d9fdc7448-vtjh4 3/3 Running 0 42d
kube-proxy-cklcx 1/1 Running 1 123d
kube-proxy-dldnd 1/1 Running 4 169d
kube-proxy-gg89s 1/1 Running 0 169d
kube-proxy-mrkqf 1/1 Running 4 149d
kube-proxy-s95mm 1/1 Running 10 169d
kube-proxy-zxnb7 1/1 Running 2 169d
kube-scheduler-k8s-master-dev-0 1/1 Running 2 1d
kube-scheduler-k8s-master-dev-1 1/1 Running 6 123d
kube-scheduler-k8s-master-dev-2 1/1 Running 4 169d
kubernetes-dashboard-8555bd85db-4txtm 1/1 Running 0 42d
tiller-deploy-6677dc8d46-5n5cp 1/1 Running 0 42d
</code></pre>
| <p>Let's go step by step and first generate the admin.conf file in your cluster.
You can generate the admin.conf file using the following command:</p>
<pre><code>kubeadm alpha phase kubeconfig admin --cert-dir /etc/kubernetes/pki --kubeconfig-dir /etc/kubernetes/
</code></pre>
<p>Now, you can check out my answer below on how to upgrade a kubernetes cluster with kubeadm (the answer is for 1.10.0 to 1.10.11, but it also applies to 1.8 to 1.9; you just need to change the version of the packages you download):</p>
<p><a href="https://stackoverflow.com/questions/53771883/how-to-upgrade-kubernetes-from-v1-10-0-to-v1-10-11/53773310#53773310">how to upgrade kubernetes from v1.10.0 to v1.10.11</a></p>
<p>Hope this helps.</p>
|
<p>How can we authenticate to a mongodb database created by helm stable/mongo chart (from another pod in the same cluster)?</p>
<ul>
<li><p>The "one pod url" <code>mongodb://user:password@mongodb:27017/dbname</code> does not work because we have to authenticate to the admin pod</p></li>
<li><p>According to mongo documentation, we should use something like :
<code>mongodb://user:password@mongodb-1,mongodb-2,mongodb-3:27017/dbname</code>
but the chart only creates one service ?!</p></li>
</ul>
<p>I tried also to add <code>?authSource=admin&replicaSet=rs0</code> at the url but authentication still fails..</p>
| <p>I managed to connect with the following url (only as root) :
<code>mongodb://root:<root_password>@mongodb.mongodb:27017/<db_name>?authSource=admin&replicaSet=rs0</code>
with the <code>--authenticationDatabase admin</code> from the <code>NOTES.txt</code> converted into the <code>authSource=admin</code> url parameter.</p>
|
<p>I want to create boiler plate for any k8s object. </p>
<p>For instance, the <code>deployment</code> object boilerplate can be generated by using <code>kubectl</code>: </p>
<p><code>kubectl run --dry-run -o yaml ...</code> </p>
<p>This will generate the yaml configuration file of the deployment object. I can redirect this to a file and modify the fields I need.</p>
<p>But how about objects other than deployment? What about CronJob? Are there any ways to generate boilerplate config file for CronJob object (or any other k8s object at that matter)?</p>
| <p>While <code>kubectl create object-type -o yaml</code> will give you the very basics, it doesn't normally cover much of the spec.</p>
<p>Instead, I prefer to fetch existing objects and modify:</p>
<p><code>kubectl get configmap configMapName -o yaml > configmap.yaml</code></p>
<p>Strip away everything you don't need, including generated fields; and you're good to go. This step probably requires a solid understanding of what to expect in each YAML.</p>
<p>EDIT://</p>
<p>I just realised there's <code>--export</code> when using this approach that strips generated fields for you :)</p>
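<p>For a CronJob specifically, the same approach works once you have any instance to copy, and depending on your kubectl version you may also be able to generate a skeleton directly (name, image and schedule below are placeholders):</p>
<pre><code># dump an existing CronJob as a starting point, stripping generated fields
kubectl get cronjob my-cron -o yaml --export > cronjob.yaml

# on older kubectl, `run --schedule` emits a CronJob skeleton without creating it
kubectl run my-cron --schedule="*/5 * * * *" --restart=OnFailure \
  --image=busybox --dry-run -o yaml > cronjob.yaml
</code></pre>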
|
<p>I'm trying to create a user in a Kubernetes cluster.</p>
<p>I spinned up 2 droplets on DigitalOcean using a Terraform script of mine.</p>
<p>Then I logged in the master node droplet using <code>ssh</code>:</p>
<pre><code>doctl compute ssh droplet1
</code></pre>
<p>Following this, I created a new cluster and a namespace in it:</p>
<pre><code>kubectl create namespace thalasoft
</code></pre>
<p>I created a user role in the <code>role-deployment-manager.yml</code> file:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
namespace: thalasoft
name: deployment-manager
rules:
- apiGroups: ["", "extensions", "apps"]
resources: ["deployments", "replicasets", "pods"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
</code></pre>
<p>and executed the command:</p>
<pre><code>kubectl create -f role-deployment-manager.yml
</code></pre>
<p>I created a role grant in the <code>rolebinding-deployment-manager.yml</code> file:</p>
<pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: deployment-manager-binding
namespace: thalasoft
subjects:
- kind: User
name: stephane
apiGroup: ""
roleRef:
kind: Role
name: deployment-manager
apiGroup: ""
</code></pre>
<p>and executed the command:</p>
<pre><code>kubectl create -f rolebinding-deployment-manager.yml
</code></pre>
<p>Here is my terminal output:</p>
<pre><code>Last login: Wed Dec 19 10:48:48 2018 from 90.191.151.182
root@droplet1:~# kubectl create namespace thalasoft
namespace/thalasoft created
root@droplet1:~# vi role-deployment-manager.yml
root@droplet1:~# kubectl create -f role-deployment-manager.yml
role.rbac.authorization.k8s.io/deployment-manager created
root@droplet1:~# vi rolebinding-deployment-manager.yml
root@droplet1:~# kubectl create -f rolebinding-deployment-manager.yml
rolebinding.rbac.authorization.k8s.io/deployment-manager-binding created
root@droplet1:~#
</code></pre>
<p>Now I'd like to first create a user in the cluster, and then configure the client <code>kubectl</code> with this user so as to operate from my laptop and avoid logging in via <code>ssh</code> to the droplet.</p>
<p>I know I can configure a user in the <code>kubectl</code> client:</p>
<pre><code># Create a context, that is, a user against a namespace of a cluster, in the client configuration
kubectl config set-context digital-ocean-context --cluster=digital-ocean-cluster --namespace=digital-ocean-namespace --user=stephane
# Configure the client with a user credentials
cd;
kubectl config set-credentials stephane --client-certificate=.ssh/id_rsa.pub --client-key=.ssh/id_rsa
</code></pre>
<p>But this is only some client side configuration as I understand.</p>
<p>UPDATE: I could add a user credentials with a certificate signed by the Kubernetes CA, running the following commands on the droplet hosting the Kubernetes master node:</p>
<pre><code># Create a private key
openssl genrsa -out .ssh/thalasoft.key 4096
# Create a certificate signing request
openssl req -new -key .ssh/thalasoft.key -out .ssh/thalasoft.csr -subj "/CN=stephane/O=thalasoft"
# Sign the certificate
export CA_LOCATION=/etc/kubernetes/pki/
openssl x509 -req -in .ssh/thalasoft.csr -CA $CA_LOCATION/ca.crt -CAkey $CA_LOCATION/ca.key -CAcreateserial -out .ssh/thalasoft.crt -days 1024
# Configure a cluster in the client
kubectl config set-cluster digital-ocean-cluster --server=https://${MASTER_IP}:6443 --insecure-skip-tls-verify=true
# Configure a user in the client
# Copy the key and the certificate to the client
scp -o "StrictHostKeyChecking no" [email protected]:.ssh/thalasoft.* .
# Configure the client with a user credentials
kubectl config set-credentials stephane --client-certificate=.ssh/thalasoft.crt --client-key=.ssh/thalasoft.key
# Create a context, that is, a user against a namespace of a cluster, in the client configuration
kubectl config set-context digital-ocean-context --cluster=digital-ocean-cluster --namespace=digital-ocean-namespace --user=stephane
</code></pre>
| <blockquote>
<p>But this is only some client side configuration as I understand.</p>
<p>What command I should use to create the user ?</p>
</blockquote>
<p>Kubernetes doesn't provide user management. This is handled through x509 certificates that can be signed by your cluster CA.</p>
<p>First, you'll need to create a Key:</p>
<pre><code>openssl genrsa -out my-user.key 4096
</code></pre>
<p>Second, you'll need to create a signing request:</p>
<pre><code>openssl req -new -key my-user.key -out my-user.csr -subj "/CN=my-user/O=my-organisation"
</code></pre>
<p>Third, sign the certificate request:</p>
<pre><code>openssl x509 -req -in my-user.csr -CA CA_LOCATION/ca.crt -CAkey CA_LOCATION/ca.key -CAcreateserial -out my-user.crt -days 500
</code></pre>
<p><code>ca.crt</code> and <code>ca.key</code> is the same cert/key provided by <code>kubeadm</code> or within your master configuration.</p>
<p>You can then give this signed certificate to your user, along with their key, and then can configure access with:</p>
<pre><code>kubectl config set-credentials my-user --client-certificate=my-user.crt --client-key=my-user.key
kubectl config set-context my-k8s-cluster --cluster=cluster-name --namespace=whatever --user=my-user
</code></pre>
<p>Bitnami provide a great resource that explains all of this:</p>
<p><a href="https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/#use-case-1-create-user-with-limited-namespace-access" rel="nofollow noreferrer">https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/#use-case-1-create-user-with-limited-namespace-access</a></p>
|
<p>I have a docker compose file with the following entries</p>
<hr>
<pre><code>version: '2.1'
services:
mysql:
container_name: mysql
image: mysql:latest
volumes:
- ./mysqldata:/var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: 'password'
ports:
- '3306:3306'
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:3306"]
interval: 30s
timeout: 10s
retries: 5
test1:
container_name: test1
image: test1:latest
ports:
- '4884:4884'
- '8443'
depends_on:
mysql:
condition: service_healthy
links:
- mysql
</code></pre>
<p>The Test-1 container is dependent on mysql and it needs to be up and running.</p>
<p>In Docker this can be controlled using the healthcheck and depends_on attributes.
The health-check equivalent in Kubernetes is a readinessProbe, which I have already created, but how do we control the container startup order within the pods?</p>
<p>Any directions on this is greatly appreciated.</p>
<p>My Kubernetes file:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: deployment
spec:
replicas: 1
template:
metadata:
labels:
app: deployment
spec:
containers:
- name: mysqldb
image: "dockerregistry:mysqldatabase"
imagePullPolicy: Always
ports:
- containerPort: 3306
readinessProbe:
tcpSocket:
port: 3306
initialDelaySeconds: 15
periodSeconds: 10
- name: test1
image: "dockerregistry::test1"
imagePullPolicy: Always
ports:
- containerPort: 3000
</code></pre>
| <p>That's the beauty of Docker Compose and Docker Swarm... Their simplicity.</p>
<p>We came across this same Kubernetes shortcoming when deploying the ELK stack.
We solved it by using an initContainer (a kind of side-car), which is just another container in the same pod that runs first; when it completes, Kubernetes automatically starts the [main] containers. We made it a simple shell script that loops until Elasticsearch is up and running, then exits, and Kibana's container starts.</p>
<p>Below is an example of a side-car that waits until Grafana is ready.</p>
<p>Add this 'initContainer' block just above your other containers in the Pod:</p>
<pre><code>spec:
initContainers:
- name: wait-for-grafana
image: darthcabs/tiny-tools:1
args:
- /bin/bash
- -c
- >
set -x;
while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' http://grafana:3000/login)" != "200" ]]; do
echo '.'
sleep 15;
done
containers:
.
.
(your other containers)
.
.
</code></pre>
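<p>For the compose file in the question the same idea would look roughly like this, assuming MySQL is split out into its own Deployment and exposed as a Service named <code>mysql</code> (an initContainer cannot wait for a container in the same pod, because init containers run before any app container starts):</p>
<pre><code>cat <<EOF | kubectl apply -f -
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: test1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test1
    spec:
      initContainers:
      - name: wait-for-mysql
        image: busybox:1.29
        command: ['sh', '-c', 'until nc -z mysql 3306; do echo waiting for mysql; sleep 2; done']
      containers:
      - name: test1
        image: dockerregistry:test1
        ports:
        - containerPort: 3000
EOF
</code></pre>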
|
<p>I use this software:</p>
<blockquote>
<p>Kubernetes 1.11.5<br>
Haproxy: last<br>
Nginx: 1.15.7</p>
</blockquote>
<p>I created default/tls-secret from my purchased certificate issued by the Comodo CA.</p>
<p>And get this error:</p>
<pre><code> Error code: SSL_ERROR_RX_RECORD_TOO_LONG
</code></pre>
<p>Here are my configs. HAProxy ingress:</p>
<pre><code>---
apiVersion: v1
kind: Namespace
metadata:
name: ingress-controller
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: ingress-controller
namespace: ingress-controller
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: ingress-controller
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: ingress-controller
namespace: ingress-controller
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
- create
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-controller
subjects:
- kind: ServiceAccount
name: ingress-controller
namespace: ingress-controller
- apiGroup: rbac.authorization.k8s.io
kind: User
name: ingress-controller
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: ingress-controller
namespace: ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-controller
subjects:
- kind: ServiceAccount
name: ingress-controller
namespace: ingress-controller
- apiGroup: rbac.authorization.k8s.io
kind: User
name: ingress-controller
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
run: ingress-default-backend
name: ingress-default-backend
namespace: ingress-controller
spec:
selector:
matchLabels:
run: ingress-default-backend
template:
metadata:
labels:
run: ingress-default-backend
spec:
containers:
- name: ingress-default-backend
image: gcr.io/google_containers/defaultbackend:1.0
ports:
- containerPort: 8080
resources:
limits:
cpu: 10m
memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
name: ingress-default-backend
namespace: ingress-controller
spec:
ports:
- port: 8080
selector:
run: ingress-default-backend
---
apiVersion: v1
kind: ConfigMap
metadata:
name: haproxy-ingress
namespace: ingress-controller
data:
ssl-options: force-tlsv12
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
labels:
run: haproxy-ingress
name: haproxy-ingress
namespace: ingress-controller
spec:
updateStrategy:
type: RollingUpdate
selector:
matchLabels:
run: haproxy-ingress
template:
metadata:
labels:
run: haproxy-ingress
spec:
hostNetwork: true
nodeSelector:
role: edge-router
serviceAccountName: ingress-controller
containers:
- name: haproxy-ingress
image: quay.io/jcmoraisjr/haproxy-ingress
args:
- --default-backend-service=$(POD_NAMESPACE)/ingress-default-backend
- --default-ssl-certificate=default/tls-secret
- --configmap=$(POD_NAMESPACE)/haproxy-ingress
- --sort-backends
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
- name: stat
containerPort: 1936
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
</code></pre>
<p>This is my app and ingress config for it</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: meteo
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: meteo
template:
metadata:
labels:
app: meteo
spec:
containers:
- name: meteo
image: devprofi/meteo:v39
ports:
- containerPort: 443
imagePullSecrets:
- name: meteo-secret
---
apiVersion: v1
kind: Service
metadata:
name: meteo-svc
namespace: default
spec:
type: NodePort
ports:
# - port: 80
# targetPort: 80
# protocol: TCP
# name: http
- port: 443
targetPort: 443
protocol: TCP
name: https
selector:
app: meteo
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/ssl-passthrough: "true"
kubernetes.io/ingress.class: "haproxy"
ingress.kubernetes.io/secure-backends: "true"
ingress.kubernetes.io/backend-protocol: "HTTPS"
name: meteo-ingress
namespace: default
spec:
tls:
- hosts:
- meteotravel.ru
    secretName: cafe-secret # this is another copy of the secret made from my purchased cert and key
rules:
- host: meteotravel.ru
http:
paths:
- path: /
backend:
serviceName: meteo-svc
servicePort: 443
</code></pre>
<p>I tried this command and got an error:</p>
<pre><code>openssl s_client -connect meteotravel.ru:443
-----END CERTIFICATE-----
subject=/OU=Domain Control Validated/OU=PositiveSSL/CN=meteotravel.ru
issuer=/C=GB/ST=Greater Manchester/L=Salford/O=COMODO CA Limited/CN=COMODO RSA Domain Validation Secure Server CA
---
No client certificate CA names sent
Peer signing digest: SHA512
Server Temp Key: ECDH, P-256, 256 bits
---
SSL handshake has read 6055 bytes and written 312 bytes
Verification error: self signed certificate in certificate chain
---
New, TLSv1.2, Cipher is ECDHE-RSA-AES128-GCM-SHA256
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES128-GCM-SHA256
Session-ID: BDD996AF8814404E3E385A6FBE49F56CA2668C54FD157FD3FB28F38DB64F771E
Session-ID-ctx:
Master-Key: F8EB4A4DA674F286E44C71605DF1D7DE4A6FE58D249162B086CE17E899FAC88CFA213018F89B8A9939CB842639D2B68A
PSK identity: None
PSK identity hint: None
SRP username: None
TLS session ticket lifetime hint: 300 (seconds)
TLS session ticket:
0000 - 5e 9a 46 20 a8 60 30 88-fa 2e c5 37 b7 29 0b 4e ^.F .`0....7.).N
0010 - 41 67 2b b6 e7 8e 2e 12-8b 55 0c ad 59 80 f7 d5 Ag+......U..Y...
0020 - d1 07 8e fc 92 a1 2e 01-59 cf 00 2d d5 39 11 10 ........Y..-.9..
0030 - bf f3 89 af 2d 7a 02 59-49 54 3a bf e4 8b 97 f3 ....-z.YIT:.....
0040 - 55 da 4b 6f 9b 86 c4 85-eb e4 f9 a1 e3 74 76 be U.Ko.........tv.
0050 - 65 57 76 ec e3 76 c9 c8-5a 47 c6 c2 ee eb bd ec eWv..v..ZG......
0060 - 61 88 7c 35 8c a6 c0 b3-25 b5 79 06 99 df 66 75 a.|5....%.y...fu
0070 - 8e 9d 3a 17 61 40 7c 1c-09 e3 07 aa 49 b9 c3 cf ..:.a@|.....I...
0080 - d7 ff 7d 1b cc 3f b9 3f-c7 bd ad 4d f9 4f eb 6c ..}..?.?...M.O.l
0090 - 6f 42 2e c8 30 75 a9 07-d4 9e f0 12 6b 9c ca ac oB..0u......k...
Start Time: 1544706461
Timeout : 7200 (sec)
Verify return code: 19 (self signed certificate in certificate chain)
Extended master secret: no
---
HTTP/1.0 408 Request Time-out
Cache-Control: no-cache
Connection: close
Content-Type: text/html
<html><body><h1>408 Request Time-out</h1>
Your browser didn't send a complete request in time.
</body></html>
closed
</code></pre>
<p>Also I tried this command </p>
<pre><code> curl -vL https://meteotravel.ru >/dev/null
* Rebuilt URL to: https://meteotravel.ru/
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0* Trying 212.26.248.233...
* TCP_NODELAY set
* Connected to meteotravel.ru (212.26.248.233) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
} [5 bytes data]
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
} [222 bytes data]
* error:1408F10B:SSL routines:ssl3_get_record:wrong version number
* stopped the pause stream!
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
* Closing connection 0
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number
</code></pre>
<p>My nginx works fine separately; here is the config:</p>
<pre><code>server {
access_log /var/log/nginx/default.access.log;
error_log /var/log/nginx/default.error.log warn;
listen 443 ssl default;
#listen 443 ssl http2 default reuseport;
# Redirect HTTP to HTTPS
if ($scheme = http) {
return 301 https://$host$request_uri;
}
ssl_certificate /etc/nginx/ssl/meteotravel.ru/mt.crt;
ssl_certificate_key /etc/nginx/ssl/meteotravel.ru/pk;
server_name meteotravel.ru;
root /var/www/html;
# Add index.php to the list if you are using PHP
index index.html index.htm index.nginx-debian.html;
# server_name _;
location /fop2{
# First attempt to serve request as file, then
# as directory, then fall back to displaying a 404.
# Add index.php to the list if you are using PHP
try_files $uri $uri/ =404;
}
location /{
try_files $uri $uri/ =404;
}
}
</code></pre>
<p>Where is the error??</p>
<p>This is my haproxy config at server</p>
<p>Also I tried set up proxy_protocol for nginx: listen 443 ssl proxy_protocol;
and the <code>proxy-protocol: [v1|v2|v2-ssl|v2-ssl-cn]</code> option for haproxy-ingress,
each of them separately,
and got errors in the log of the nginx backend:</p>
<pre><code>����kjih9876�����2�.�*�&���=5" while reading PROXY protocol, client: 10.244.5.0, server: 0.0.0.0:443
2018/12/13 19:55:24 [error] 7#7: *2271 broken header: "�ۆ�v+\��w �?���3�Alm9i�� �L$&��h��0�,�(�$��
����kjih9876�����2�.�*�&���=5" while reading PROXY protocol, client: 10.244.6.0, server: 0.0.0.0:443
</code></pre>
| <p>This issue is caused by the annotation
<code>ingress.kubernetes.io/secure-backends: "true"</code>.
It isn't needed here, because with ssl-passthrough HAProxy already forwards the encrypted data as a plain TCP stream; with that annotation the traffic is effectively encrypted a second time and nginx can't decrypt it correctly.</p>
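<p>So either delete that line from the Ingress manifest and re-apply it, or strip the annotation from the live object (the trailing dash removes an annotation):</p>
<pre><code>kubectl annotate ingress meteo-ingress -n default ingress.kubernetes.io/secure-backends-
</code></pre>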
|
<p>My solution:</p>
<pre><code>├── main.tf
├── modules
│ ├── cluster1
│ │ ├── cluster1.tf
│ │ ├── main.tf
│ │ ├── output.tf
│ │ └── variables.tf
│ ├── cluster2
│ │ ├── cluster.tf
│ │ ├── main.tf
│ │ ├── output.tf
│ │ └── variables.tf
│ └── trafficmanager
│ ├── main.tf
│ ├── output.tf
│ ├── trafficmanager.tf
│ └── variables.tf
├── README.md
└── variables.tf
</code></pre>
<p>In order for me to create Azure k8s clusters, each cluster requires a service principal id and secret. I would be very interested to see some examples of how to pass environment variables containing the service principal and secret to each cluster.</p>
| <p>Terraform will read environment variables in the form of TF_VAR_name to find the value for a variable. For example, the TF_VAR_access_key variable can be set to set the access_key variable.</p>
<h2>Example</h2>
<pre><code>export TF_VAR_region=us-west-1 # normal string
export TF_VAR_alist='[1,2,3]' # array
export TF_VAR_amap='{ foo = "bar", baz = "qux" }' # map
</code></pre>
<p>Pass module to terraform module</p>
<pre><code>variable "region" {}
variable "alist" {}
variable "map" {}
module "test" {
source = "./module/testmodule" # module location
region = "${var.region}"
list = "${var.alist}"
map = "${var.map}"
}
</code></pre>
<p>More information and examples in <a href="https://www.terraform.io/docs/configuration/environment-variables.html#tf_var_name" rel="nofollow noreferrer">this link</a>.</p>
|
<p>Deployed K8s service with type as LoadBalancer. K8s cluster running on an EC2 instance. The service is stuck at "pending state". </p>
<p>Does the LoadBalancer ('ELB') service type require any particular AWS configuration parameters?</p>
| <p>Yes. Typically you need the option <code>--cloud-provider=aws</code> on:</p>
<ul>
<li>All kubelets</li>
<li>kube-apiserver</li>
<li>kube-controller-manager</li>
</ul>
<p>Also, you have to make sure that all your K8s instances (master/nodes) have an AWS <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html" rel="nofollow noreferrer">instance role</a> that allows them to create/remove ELBs and routes (All access to EC2 should do).</p>
<p>Then you need to make sure all your nodes are tagged:</p>
<ul>
<li>Key: KubernetesCluster, Value: 'your cluster name'</li>
<li>Key: k8s.io/role/node, Value: 1 (For nodes only)</li>
<li>Key: kubernetes.io/cluster/kubernetes, Value: owned</li>
</ul>
<p>Make sure your subnet is also tagged (an example AWS CLI command follows after these lists):</p>
<ul>
<li>Key: KubernetesCluster, Value: 'your cluster name'</li>
</ul>
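<p>For reference, the tags can be added with the AWS CLI like this (instance/subnet IDs and the cluster name are placeholders):</p>
<pre><code># tag a node instance
aws ec2 create-tags --resources i-0123456789abcdef0 --tags \
  Key=KubernetesCluster,Value=my-cluster \
  Key=k8s.io/role/node,Value=1 \
  Key=kubernetes.io/cluster/my-cluster,Value=owned

# tag the subnet
aws ec2 create-tags --resources subnet-0123456789abcdef0 --tags \
  Key=KubernetesCluster,Value=my-cluster
</code></pre>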
<p>Also, your Kubernetes node definition, you should have something like this:</p>
<pre><code>ProviderID: aws:///<availability-zone>/<instance-id>
</code></pre>
<p>Generally, all of the above is not needed if you are using the <a href="https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/" rel="nofollow noreferrer">Kubernetes Cloud Controller Manager</a> which is in beta as of K8s <code>1.13.0</code></p>
|
<p>So I have a mariadb subchart. The mariadb chart fills a config map from different init files with:</p>
<pre><code>{{ (.Files.Glob "files/docker-entrypoint-initdb.d/*[sh|sql|sql.gz]").AsConfig | indent 2 }}
</code></pre>
<p>So is there anyway I can inject the init files ?</p>
<p>Is it possible to overwrite the context of <code>.Files.Glob</code> so it accesses my parent directory? Or is there another recommended way to create the initial SQL files?</p>
<p>the maridb subchart is implented like this in the <code>requirements.yaml</code>:</p>
<pre><code>dependencies:
- name: mariadb
version: 5.x.x
repository: https://kubernetes-charts.storage.googleapis.com/
condition: mariadb.enabled
</code></pre>
| <p>Since your mariadb is a subchart managed by a third party, <code>.Files.Glob</code> refers only to files inside the mariadb chart directory.</p>
<p>If you want to place any startup scripts inside the subchart, you have to unarchive it.</p>
<p>Let's say you have a <strong>custom-init-scripts</strong> directory with all the init scripts in your parent chart.</p>
<pre><code>$ ls custom-init-scripts/
init.sh insert.sql
# download mariadb chart package in charts directory
$ helm dependency update
# unarchive and delete package
$ tar -xvf charts/mariadb-5.*.tgz -C charts && rm charts/mariadb-5.*.tgz
# copy init scripts to mariadb subchart
$ cp -a custom-init-scripts/. charts/mariadb/files/docker-entrypoint-initdb.d/
</code></pre>
<p>Now your init files are present in mariadb subchart</p>
<pre><code>helm install --debug --dry-run --set mariadb.enabled=true .
...
---
# Source: mychart/charts/mariadb/templates/initialization-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: elevated-dragonfly-mariadb-master-init-scripts
labels:
app: mariadb
component: "master"
chart: mariadb-5.2.5
release: "elevated-dragonfly"
heritage: "Tiller"
binaryData:
data:
init.sh: "echo \"hi\"\r\n"
insert.sql: INSERT INT Users (FirstName, LastName) VALUES ('A', 'B');
</code></pre>
|
<p>We have a Kubernetes cluster.</p>
<p>Now we want to expand that with GPU nodes (so that would be the only nodes in the Kubernetes cluster that have GPUs).</p>
<p>We'd like to prevent Kubernetes from scheduling pods on those nodes unless they require GPUs. </p>
<p>Not all of our pipelines can use GPUs. The absolute majority are still CPU-heavy only. </p>
<p>The servers with GPUs could be very expensive (for example, Nvidia DGX could be as much as $150/k per server).</p>
<p>If we just add DGX nodes to the Kubernetes cluster, then Kubernetes would schedule non-GPU workloads there too, which would be a waste of resources (e.g. other jobs that are scheduled later and do need GPUs may find the non-GPU resources there, such as CPU and memory, already exhausted, so they would have to wait for non-GPU jobs/containers to finish).</p>
<p>Is there a way to customize GPU resource scheduling in Kubernetes so that it would only schedule pods on those expensive nodes if they require GPUs? If they don't, they may have to wait for availability of other non-GPU resources like CPU and memory on non-GPU servers... </p>
<p>Thanks.</p>
| <p>You can use labels and label selectors for this.
<a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="nofollow noreferrer">Kubernetes docs</a></p>
<p>Update: example </p>
<pre><code># assumes GPU nodes are labelled first, e.g.:
#   kubectl label nodes <gpu-node-name> resources=gpu
apiVersion: v1
kind: Pod
metadata:
  name: cpu-only-workload
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: resources
            operator: NotIn
            values:
            - gpu
  containers:
  - name: app
    image: your-app-image
</code></pre>
|
<p>I have quite a few failures when starting kube-apiserver in 1.10.11 K8s version.
Its health check comes back with poststarthook/rbac/bootstrap-roles failed. Very annoyingly, for security reasons, the reason is "reason withheld".
How do I know what this check is? Am I missing some permissions / bindings? I'm upgrading from 1.9.6; the release notes didn't clearly mention that anything like this is required.</p>
| <p>All the details can be accessed with a super user credential or on the unsecured port (if you are running with that enabled) at <code>/healthz/<name-of-health-check></code></p>
<p>The RBAC check in particular reports unhealthy until the initial startup is completed and default roles are verified to exist. Typically, no user action is required to turn the check healthy, it simply reports that the apiserver should not be added to a load balancer yet, and reports healthy after a couple seconds, once startup completes. Persistent failure usually means problems communicating with etcd (I'd expect the /healthz/etcd check to be failing in that case as well). That behavior has been present since RBAC was introduced, and is not new in 1.10</p>
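<p>For example, with an admin kubeconfig you can hit the individual check (or the verbose overview of all checks) directly:</p>
<pre><code># just this check
kubectl get --raw='/healthz/poststarthook/rbac/bootstrap-roles'

# status of every individual check
kubectl get --raw='/healthz?verbose'
</code></pre>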
|
<p>I am debugging certain behavior from my application pods; i am launching on K8s cluster. In order to do that I am increasing logging by increasing verbosity of deployment by adding <code>--v=N</code> flag to <code>Kubectl create deployment</code> command.</p>
<p>my question is : how can i configure increased verbosity globally so all pods start reporting increased verbosity; including pods in kube-system name space.</p>
<p>i would prefer if it can be done without re-starting k8s cluster; but if there is no other way I can re-start.</p>
<p>thanks
Ankit </p>
| <p>For your applications, there is nothing global as that is not something that has global meaning. You would have to add the appropriate config file settings, env vars, or cli options for whatever you are using.</p>
<p>For kubernetes itself, you can turn up the logging on the kubelet command line, but the defaults are already pretty verbose so I’m not sure you really want to do that unless you’re developing changes for kubernetes.</p>
|
<p>I have a kubeadm deployed master (v1.10.12) and I'm trying to add a new node to the cluster:</p>
<p>on the master I do:</p>
<pre><code>sudo kubeadm token create
sudo kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
2txs62.83q81hpici7a0u5q 23h 2018-12-20T23:37:46Z authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
</code></pre>
<p>and then on the new node, I run:</p>
<pre><code>sudo yum install -y kubeadm-1.10.12-0
sudo yum install -y kubelet-1.10.12-0
sudo kubeadm reset
sudo kubeadm join --token 2txs62.83q81hpici7a0u5q W.X.Y.Z:6443 --discovery-token-unsafe-skip-ca-verification
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "W.X.Y.Z:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://W.X.Y.Z:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "W.X.Y.Z:6443"
[discovery] Successfully established connection with API Server "W.X.Y.Z:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
unable to fetch the kubeadm-config ConfigMap: failed to get config map: configmaps "kubeadm-config" is forbidden: User "system:bootstrap:2txs62" cannot get configmaps in the namespace "kube-system"
</code></pre>
<p>on the master:</p>
<pre><code>kubectl -n kube-system get cm kubeadm-config -oyaml
apiVersion: v1
data:
MasterConfiguration: |
api:
advertiseAddress: W.X.Y.Z
bindPort: 6443
controlPlaneEndpoint: ""
auditPolicy:
logDir: /var/log/kubernetes/audit
logMaxAge: 2
path: ""
authorizationModes:
- Node
- RBAC
certificatesDir: /etc/kubernetes/pki
cloudProvider: ""
criSocket: /var/run/dockershim.sock
etcd:
caFile: ""
certFile: ""
dataDir: /var/lib/etcd
endpoints: null
image: ""
keyFile: ""
imageRepository: gcr.io/google_containers
kubeProxy:
config:
bindAddress: 0.0.0.0
clientConnection:
acceptContentTypes: ""
burst: 10
contentType: application/vnd.kubernetes.protobuf
kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
qps: 5
clusterCIDR: ""
configSyncPeriod: 15m0s
conntrack:
max: null
maxPerCore: 32768
min: 131072
tcpCloseWaitTimeout: 1h0m0s
tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
featureGates:
"": false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
masqueradeAll: false
masqueradeBit: 14
minSyncPeriod: 0s
syncPeriod: 30s
ipvs:
minSyncPeriod: 0s
scheduler: ""
syncPeriod: 30s
metricsBindAddress: 127.0.0.1:10249
mode: ""
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
resourceContainer: /kube-proxy
udpIdleTimeout: 250ms
kubeletConfiguration: {}
kubernetesVersion: v1.10.12
networking:
dnsDomain: cluster.local
podSubnet: ""
serviceSubnet: 10.96.0.0/12
nodeName: kube-master.novalocal
privilegedPods: false
token: ""
tokenGroups:
- system:bootstrappers:kubeadm:default-node-token
tokenTTL: 24h0m0s
tokenUsages:
- signing
- authentication
unifiedControlPlaneImage: ""
kind: ConfigMap
metadata:
creationTimestamp: 2018-03-28T06:37:58Z
name: kubeadm-config
namespace: kube-system
resourceVersion: "105798137"
selfLink: /api/v1/namespaces/kube-system/configmaps/kubeadm-config
uid: 8dc493f2-3252-11e8-a270-fa163e21c438
</code></pre>
<p>Help!?
Cheers.</p>
| <p>Sounds like you have a version mismatch and running into something like <a href="https://github.com/kubernetes/kubeadm/issues/907" rel="nofollow noreferrer">this</a>.</p>
<p>You can manually try to create a <code>Role</code> in the <code>kube-system</code> namespace with the name <code>kubeadm:kubeadm-config</code>. For example:</p>
<pre><code>cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: kube-system
name: kubeadm:kubeadm-config
rules:
- apiGroups:
- ""
resourceNames:
- kubeadm-config
resources:
- configmaps
verbs:
- get
EOF
</code></pre>
<p>and then create a matching <code>RoleBinding</code>:</p>
<pre><code>cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
namespace: kube-system
name: kubeadm:kubeadm-config
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubeadm:kubeadm-config
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:nodes
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:bootstrappers:kubeadm:default-node-token
EOF
</code></pre>
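<p>Once both objects are applied you can verify, before re-running the join, that the bootstrap token's group is now allowed to read the ConfigMap (a quick sanity check reusing the token from your output — it should print <code>yes</code>):</p>

<pre><code>kubectl auth can-i get configmaps/kubeadm-config -n kube-system \
  --as=system:bootstrap:2txs62 \
  --as-group=system:bootstrappers:kubeadm:default-node-token
</code></pre>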
|
<p>I got a fair bit of CRD understanding by reading hello world type examples. I don't see a need for CRDs for my use cases. Could you list the business use cases solved by CRDs?</p>
<p>Update: by business use cases I meant requirements in an organization that are implemented using CRDs. </p>
<p>Example: every application deployed in "prod" namespace should have minimum pod replica count of 2. Define a CRD to store the minimum replica count and use that CRD (instance) in Admission controller to enforce the requirement? I'm not sure this approach is valid. I would like to know such requirements. </p>
| <p>This is a broad question. They can be used for lots of different things. For example listing all the CRDs on my cluster shows all these:</p>
<pre><code>$ kubectl get crd
NAME CREATED AT
adapters.config.istio.io 2018-09-21T04:33:18Z
alertmanagers.monitoring.coreos.com 2018-11-10T00:30:22Z
apikeys.config.istio.io 2018-09-21T04:33:17Z
attributemanifests.config.istio.io 2018-09-21T04:33:17Z
authorizations.config.istio.io 2018-09-21T04:33:17Z
bgpconfigurations.crd.projectcalico.org 2018-07-23T22:22:59Z
bgppeers.crd.projectcalico.org 2018-07-23T22:22:59Z
bypasses.config.istio.io 2018-09-21T04:33:17Z
certificates.certmanager.k8s.io 2018-09-21T04:33:51Z
checknothings.config.istio.io 2018-09-21T04:33:17Z
circonuses.config.istio.io 2018-09-21T04:33:17Z
clusterinformations.crd.projectcalico.org 2018-07-23T22:22:59Z
clusterissuers.certmanager.k8s.io 2018-09-21T04:33:51Z
deniers.config.istio.io 2018-09-21T04:33:17Z
destinationrules.networking.istio.io 2018-09-21T04:33:17Z
edges.config.istio.io 2018-09-21T04:33:17Z
envoyfilters.networking.istio.io 2018-09-21T04:33:17Z
etcdbackups.etcd.database.coreos.com 2018-07-31T00:07:01Z
etcdclusters.etcd.database.coreos.com 2018-07-31T00:07:01Z
etcdrestores.etcd.database.coreos.com 2018-07-31T00:07:01Z
felixconfigurations.crd.projectcalico.org 2018-07-23T22:22:59Z
fluentds.config.istio.io 2018-09-21T04:33:17Z
gateways.networking.istio.io 2018-09-21T04:33:17Z
globalnetworkpolicies.crd.projectcalico.org 2018-07-23T22:22:59Z
globalnetworksets.crd.projectcalico.org 2018-07-23T22:22:59Z
handlers.config.istio.io 2018-09-21T04:33:18Z
hostendpoints.crd.projectcalico.org 2018-07-23T22:22:59Z
httpapispecbindings.config.istio.io 2018-09-21T04:33:17Z
httpapispecs.config.istio.io 2018-09-21T04:33:17Z
instances.config.istio.io 2018-09-21T04:33:18Z
ippools.crd.projectcalico.org 2018-07-23T22:22:59Z
issuers.certmanager.k8s.io 2018-09-21T04:33:51Z
kongconsumers.configuration.konghq.com 2018-09-26T06:06:44Z
kongcredentials.configuration.konghq.com 2018-09-26T06:06:44Z
kongingresses.configuration.konghq.com 2018-09-26T06:06:44Z
kongplugins.configuration.konghq.com 2018-09-26T06:06:44Z
kubernetesenvs.config.istio.io 2018-09-21T04:33:17Z
kuberneteses.config.istio.io 2018-09-21T04:33:17Z
listcheckers.config.istio.io 2018-09-21T04:33:17Z
listentries.config.istio.io 2018-09-21T04:33:17Z
logentries.config.istio.io 2018-09-21T04:33:17Z
memquotas.config.istio.io 2018-09-21T04:33:17Z
meshpolicies.authentication.istio.io 2018-09-21T04:33:17Z
metrics.config.istio.io 2018-09-21T04:33:17Z
networkpolicies.crd.projectcalico.org 2018-07-23T22:22:59Z
noops.config.istio.io 2018-09-21T04:33:17Z
opas.config.istio.io 2018-09-21T04:33:17Z
policies.authentication.istio.io 2018-09-21T04:33:17Z
prometheuses.config.istio.io 2018-09-21T04:33:17Z
prometheuses.monitoring.coreos.com 2018-11-10T00:30:22Z
prometheusrules.monitoring.coreos.com 2018-11-10T00:30:22Z
quotas.config.istio.io 2018-09-21T04:33:17Z
quotaspecbindings.config.istio.io 2018-09-21T04:33:17Z
quotaspecs.config.istio.io 2018-09-21T04:33:17Z
rbacconfigs.rbac.istio.io 2018-09-21T04:33:17Z
rbacs.config.istio.io 2018-09-21T04:33:17Z
redisquotas.config.istio.io 2018-09-21T04:33:17Z
reportnothings.config.istio.io 2018-09-21T04:33:17Z
rules.config.istio.io 2018-09-21T04:33:17Z
servicecontrolreports.config.istio.io 2018-09-21T04:33:17Z
servicecontrols.config.istio.io 2018-09-21T04:33:17Z
serviceentries.networking.istio.io 2018-09-21T04:33:17Z
servicemonitors.monitoring.coreos.com 2018-11-10T00:30:22Z
servicerolebindings.rbac.istio.io 2018-09-21T04:33:18Z
serviceroles.rbac.istio.io 2018-09-21T04:33:18Z
signalfxs.config.istio.io 2018-09-21T04:33:17Z
solarwindses.config.istio.io 2018-09-21T04:33:17Z
stackdrivers.config.istio.io 2018-09-21T04:33:17Z
statsds.config.istio.io 2018-09-21T04:33:17Z
stdios.config.istio.io 2018-09-21T04:33:17Z
templates.config.istio.io 2018-09-21T04:33:18Z
tracespans.config.istio.io 2018-09-21T04:33:17Z
vaultservices.vault.security.coreos.com 2018-07-31T00:07:49Z
virtualservices.networking.istio.io 2018-09-21T04:33:17Z
</code></pre>
<p>Every one of them is a resource type that you can use in a definition via the <code>Kind</code> field. You see all kinds of stuff, for example:</p>
<pre><code>bgpconfigurations.crd.projectcalico.org => Calico BGP
prometheuses.config.istio.io => Prometheus config
fluentds.config.istio.io => Fluentd config
...
</code></pre>
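<p>Defining your own follows the same pattern. A minimal sketch (the group, kind and names below are made up for illustration — something like the minimum-replica policy from your question):</p>

<pre><code>apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # name must be <plural>.<group>
  name: replicapolicies.policies.example.com
spec:
  group: policies.example.com
  version: v1alpha1
  scope: Namespaced
  names:
    plural: replicapolicies
    singular: replicapolicy
    kind: ReplicaPolicy
</code></pre>

<p>A controller or admission webhook could then read <code>ReplicaPolicy</code> objects and enforce the rule, which is the general pattern the operators behind the CRDs listed above follow.</p>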
|
<p>Getting this error.</p>
<p><code>Error: failed to prepare subPath for volumeMount "solr-collection-config" of container "upload-config-container" </code></p>
<p>Using kubernetes <strong>1.10.11</strong></p>
<pre><code> - name: upload-config-container
image: solr:7.4.0-alpine
imagePullPolicy: Always
resources:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "256Mi"
cpu: "200m"
volumeMounts:
- name: solr-collection-config
mountPath: /tell/carbon/conf
subPath: conf
</code></pre>
<p><code>solr-collection-config</code> is a volume that represents a ConfigMap</p>
<pre><code> volumes:
- name: solr-collection-config
configMap:
name: solr-collection-resources
items:
- key: stopwords_en.txt
path: "conf/lang/stopwords_en.txt"
- key: _rest_managed.json
path: "conf/_rest_managed.json"
- key: currency.xml
path: "conf/currency.xml"
- key: protwords.txt
path: "conf/protwords.txt"
- key: schema.xml
path: "conf/schema.xml"
- key: solrconfig.xml
path: "conf/solrconfig.xml"
- key: stopwords.txt
path: "conf/stopwords.txt"
- key: synonyms.txt
path: "conf/synonyms.txt"
restartPolicy: Never
</code></pre>
<p>Help is kindly appreciated. Thank you</p>
| <p>What happens if you do not use <code>subPath</code>?</p>
<p>All keys from <strong>configMap</strong> will be mounted in directory <code>/tell/carbon/conf</code>. That means, every key will be a separate file under this directory.</p>
<p>Now, what does this <code>subPath</code> do? From your example,</p>
<pre><code>volumeMounts:
- name: solr-collection-config
mountPath: /tell/carbon/conf
subPath: conf
</code></pre>
<p>This means the key <code>conf</code> from the <strong>configMap</strong> would be mounted as a file named <code>conf</code> under the <code>/tell/carbon</code> directory.</p>
<p>But you do not have this key, so you are getting this error.</p>
<blockquote>
<p>Error: failed to prepare subPath for volumeMount "solr-collection-config" of container "upload-config-container"</p>
</blockquote>
<p>Now, you can do something like this:</p>
<pre><code>volumeMounts:
- name: solr-collection-config
mountPath: /tell/carbon/conf
subPath: stopwords_en.txt
</code></pre>
<p>This means the value of <code>stopwords_en.txt</code> from your <strong>configMap</strong> will be mounted as the file <code>conf</code> under <code>/tell/carbon</code>.</p>
<p>Finally, this <code>subPath</code> is actually a path within the volume your data is coming from. In your case, <code>subPath</code> should be one of the keys from your <strong>configMap</strong>.</p>
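<p>If what you actually want is the whole <code>conf</code> directory inside the container, one option (a sketch — note the <code>conf/</code> prefix is dropped from the item paths because the <code>mountPath</code> already ends in <code>/conf</code>) is to skip <code>subPath</code> entirely:</p>

<pre><code>volumeMounts:
- name: solr-collection-config
  mountPath: /tell/carbon/conf
  readOnly: true
volumes:
- name: solr-collection-config
  configMap:
    name: solr-collection-resources
    items:
    - key: schema.xml
      path: "schema.xml"
    - key: stopwords_en.txt
      path: "lang/stopwords_en.txt"
    # ...remaining keys follow the same pattern
</code></pre>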
|
<p>I have updated my AKS Azure Kubernetes cluster to version 1.11.5, in this cluster a MongoDB Statefulset is running:</p>
<p>The statefulset is created with this file:</p>
<pre><code>---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: default-view
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: view
subjects:
- kind: ServiceAccount
name: default
namespace: default
---
apiVersion: v1
kind: Service
metadata:
name: mongo
labels:
name: mongo
spec:
ports:
- port: 27017
targetPort: 27017
clusterIP: None
selector:
role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: mongo
spec:
serviceName: "mongo"
replicas: 2
template:
metadata:
labels:
role: mongo
environment: test
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mongo
image: mongo
command:
- mongod
- "--replSet"
- rs0
- "--bind_ip"
- 0.0.0.0
- "--smallfiles"
- "--noprealloc"
ports:
- containerPort: 27017
volumeMounts:
- name: mongo-persistent-storage
mountPath: /data/db
- name: mongo-sidecar
image: cvallance/mongo-k8s-sidecar
env:
- name: MONGO_SIDECAR_POD_LABELS
value: "role=mongo,environment=test"
volumeClaimTemplates:
- metadata:
name: mongo-persistent-storage
annotations:
volume.beta.kubernetes.io/storage-class: "managed-premium"
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 32Gi
</code></pre>
<p>after the mentioned update of the cluster to the new k8s version I get this error:</p>
<pre><code>mongo-0 1/2 CrashLoopBackOff 6 9m
mongo-1 2/2 Running 0 1h
</code></pre>
<p>the detailed log from the pod is the following:</p>
<pre><code>2018-12-18T14:28:44.281+0000 W STORAGE [initandlisten] Detected configuration for non-active storage engine mmapv1 when current storage engine is wiredTiger
2018-12-18T14:28:44.281+0000 I CONTROL [initandlisten]
2018-12-18T14:28:44.281+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-12-18T14:28:44.281+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2018-12-18T14:28:44.281+0000 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2018-12-18T14:28:44.281+0000 I CONTROL [initandlisten]
2018-12-18T14:28:44.281+0000 I CONTROL [initandlisten]
2018-12-18T14:28:44.281+0000 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2018-12-18T14:28:44.281+0000 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2018-12-18T14:28:44.281+0000 I CONTROL [initandlisten]
2018-12-18T14:28:44.477+0000 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
2018-12-18T14:28:44.478+0000 I REPL [initandlisten] Rollback ID is 7
2018-12-18T14:28:44.479+0000 I REPL [initandlisten] Recovering from stable timestamp: Timestamp(1545077719, 1) (top of oplog: { ts: Timestamp(1545077349, 1), t: 5 }, appliedThrough: { ts: Timestamp(1545077719, 1), t: 6 }, TruncateAfter: Timestamp(0, 0))
2018-12-18T14:28:44.480+0000 I REPL [initandlisten] Starting recovery oplog application at the stable timestamp: Timestamp(1545077719, 1)
2018-12-18T14:28:44.480+0000 F REPL [initandlisten] Applied op { : Timestamp(1545077719, 1) } not found. Top of oplog is { : Timestamp(1545077349, 1) }.
2018-12-18T14:28:44.480+0000 F - [initandlisten] Fatal Assertion 40313 at src/mongo/db/repl/replication_recovery.cpp 361
2018-12-18T14:28:44.480+0000 F - [initandlisten]
***aborting after fassert() failure
</code></pre>
<p>it seems the two instances went out of sync and are not able to recover. Can someone help?</p>
| <p>I have worked around this issue:</p>
<ol>
<li>adding a MongoDB container to the cluster to dump and restore the MongoDB data</li>
<li>dumping the current database</li>
<li>deleting the MongoDB instance</li>
<li>recreating a new MongoDB instance</li>
<li>restoring the data to the new instance</li>
</ol>
<p>Yes, unfortunately this comes with downtime.</p>
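<p>For reference, the dump/restore itself can be done from a temporary client pod (a sketch — the host names assume the Service/StatefulSet names from the manifest above, and the client pod must stay running, or the dump copied out with <code>kubectl cp</code>, while the StatefulSet is recreated):</p>

<pre><code># start an interactive mongo client pod
kubectl run mongo-client --rm -it --restart=Never --image=mongo -- bash

# inside the client pod: dump from a healthy member...
mongodump --host mongo-1.mongo:27017 --out /tmp/dump

# ...and, once the new StatefulSet is up, restore into it
mongorestore --host mongo-0.mongo:27017 /tmp/dump
</code></pre>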
|
<p>I've deployed a series of deployments and services to a Kubernetes cluster with a load balancer. When I try to access my app this does not work as my application is exposed on port 80 but the URL is always redirected to port 443 (HTTPS). I suspect this is to do with the fact that the cluster IP is on port 443.</p>
<p>Any ideas on how I can fix this?</p>
<pre><code>db NodePort 10.245.175.203 <none> 5432:30029/TCP 25m
kubernetes ClusterIP 10.245.0.1 <none> 443/TCP 8m
redis NodePort 10.245.197.157 <none> 6379:31277/TCP 25m
web LoadBalancer 10.245.126.122 123.12.123.123 80:31430/TCP 25m
</code></pre>
| <p>This is likely due to your application itself redirecting to port <code>443</code>. What type of application is it?</p>
<p>This service exposed on port <code>443</code> has nothing to do with your application:</p>
<pre><code>kubernetes ClusterIP 10.245.0.1 <none> 443/TCP 8m
</code></pre>
<p>It's basically an internal service that allows you to access the kube-apiserver within your cluster.</p>
<p>You could try just setting up the <code>LoadBalancer</code> to listen on port <code>443</code> directly. The catch is that plain port <code>80</code> traffic wouldn't work. If you want the port <code>80</code> redirects to work I suggest you use an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer"><code>Ingress</code></a> controller like <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">nginx</a>. Something like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: your-ingress
annotations:
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
tls:
- hosts:
- yourhostname.com
secretName: tls-secret
rules:
- host: yourhostname.com
http:
paths:
- path: /
backend:
serviceName: web
servicePort: 443
</code></pre>
<p>You will also have to create a TLS secret holding your cert and key:</p>
<pre><code>$ kubectl create secret tls tls-secret --key /tmp/tls.key --cert /tmp/tls.crt
</code></pre>
|
<p>I am trying to follow instructions on
<a href="https://kubernetes.io/docs/setup/minikube/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/minikube/</a> </p>
<p>But I am getting an error on the command:</p>
<pre><code>curl $(minikube service hello-minikube --url)
</code></pre>
<p>details below:</p>
<p><a href="https://i.stack.imgur.com/0fs4i.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0fs4i.png" alt="image"></a></p>
<p>As you might have guessed I am a beginner, will appreciate if someone can guide me on what I am doing wrong.</p>
| <p>Basically:</p>
<pre><code>curl $(minikube service hello-minikube --url)
</code></pre>
<p>is a bash command construct: when used in a bash shell, it executes <code>minikube service hello-minikube --url</code> and passes the output to <code>curl</code>.</p>
<p>Since you are using a Windows Command Prompt, you can run this first:</p>
<pre><code>minikube service hello-minikube --url
</code></pre>
<p>Copy the output and then run:</p>
<pre><code>curl <output>
</code></pre>
|
<p>all</p>
<ol>
<li><p>I know k8s' NodePort and ClusterIP service types well.</p></li>

<li><p>But I am very confused about the Ingress way: how does a request reach a pod in k8s through Ingress? </p></li>
</ol>
<p>Suppose the K8s master IP is <strong>1.2.3.4</strong>. After the Ingress is set up, it can connect to a backend service (e.g. <strong>myservice</strong>) on a port (e.g. <strong>9000</strong>).</p>
<p>Now, how can I visit this <strong>myservice:9000</strong> from outside, i.e. through <strong>1.2.3.4</strong>? There is no entry port on the <strong>1.2.3.4</strong> machine.</p>
<p>And many docs always say to visit this via 'foo.com' configured in the ingress YAML file. But that is really confusing, because <strong>xxx.com</strong> definitely needs DNS; there is no magic that lets any <strong>xxx.com</strong> you invent become a real website that maps to your machine!</p>
| <p>The key part of the picture is the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-controllers" rel="nofollow noreferrer">Ingress Controller</a>. It's an instance of a proxy (<a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-controllers" rel="nofollow noreferrer">could be nginx or haproxy or another ingress type</a>) and runs inside the cluster. It acts as an entrypoint and lets you add more sophisticated routing rules. It reads <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#the-ingress-resource" rel="nofollow noreferrer">Ingress Resources</a> that are deployed with apps and which define the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#ingress-rules" rel="nofollow noreferrer">routing rules</a>. This allows each app to say what the Ingress Controller needs to do for routing to it.</p>
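<p>For example, exposing the hypothetical <strong>myservice</strong> on port <strong>9000</strong> from the question is just a matter of deploying an Ingress resource alongside it — a minimal sketch (the host name is only needed if you route by DNS):</p>

<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myservice-ingress
spec:
  rules:
  - host: myservice.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myservice
          servicePort: 9000
</code></pre>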
<p>Because the controller runs inside the cluster, it needs to be exposed to the outside world. You can do this by NodePort but if you're using a cloud provider then it's more common to use LoadBalancer. This gives you an external IP and port that reaches the Ingress controller and you can point DNS entries at that. If you do point DNS at it then you have the option to use routing rules base on DNS (such as using different subdomains for different apps).</p>
<p>The article <a href="https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0" rel="nofollow noreferrer">'Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?'</a> has some good explanations and diagrams - here's the diagram for Ingress:</p>
<p><a href="https://i.stack.imgur.com/bfEDw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bfEDw.png" alt="enter image description here"></a></p>
|
<p>I am trying to troubleshoot my service by looking at the istio-proxy access log (it logs every access). However, I can't find any documentation that explains the meaning of each entry in the log.</p>
<p>For example</p>
<blockquote>
<p>[2018-12-20T11:09:42.302Z] "GET / HTTP/1.1" 200 - 0 614 0 0 "10.32.96.32" "curl/7.54.0" "17b8f245-af00-4379-9f8f-a4dcd2f38c01" "foo.com" "127.0.0.1:8080"</p>
</blockquote>
<p>What does log above mean?</p>
<h1>Updated</h1>
<p>I've tried <a href="https://stackoverflow.com/a/53869876/476917">Vadim's answer</a>, but I couldn't find the log format data. Here's the <a href="https://gist.github.com/bangau1/4644de67daf8b03570ed0db477903001" rel="nofollow noreferrer">output json file</a>. Is there anything that I miss?
I am using istio-1.0.0</p>
| <p>Istio proxy access log's configuration is defined as part of <code>envoy.http_connection_manager</code> or <code>envoy.tcp_proxy</code> filters. To see it's configuration, run:</p>
<pre><code>istioctl proxy-config listeners <your pod> -n <your namespace> -o json
</code></pre>
<p>Search for <code>access_log</code> of <code>envoy.http_connection_manager</code> for HTTP and <code>access_log</code> of <code>envoy.tcp_proxy</code> for TCP.</p>
<p>You will see something like this:</p>
<pre><code> "filters": [
{
"name": "envoy.http_connection_manager",
"config": {
"access_log": [
{
"config": {
"format": "[%START_TIME%] \"%REQ(:METHOD)% %REQ(X-ENVOY-ORIGINAL-PATH?:PATH)% %PROTOCOL%\" %RESPONSE_CODE% %RESPONSE_FLAGS% %BYTES_RECEIVED% %BYTES_SENT% %DURATION% %RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)% \"%REQ(X-FORWARDED-FOR)%\" \"%REQ(USER-AGENT)%\" \"%REQ(X-REQUEST-ID)%\" \"%REQ(:AUTHORITY)%\" \"%UPSTREAM_HOST%\" %UPSTREAM_CLUSTER% %UPSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_REMOTE_ADDRESS% %REQUESTED_SERVER_NAME%\n",
"path": "/dev/stdout"
</code></pre>
<p>Check the log attributes definitions <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/access_log#command-operators" rel="nofollow noreferrer">here</a> </p>
<p>If <code>access_log</code>'s format is not specified in the output above, <a href="https://www.envoyproxy.io/docs/envoy/latest/configuration/access_log#default-format-string" rel="nofollow noreferrer">the default format</a> is used.</p>
|
<p>We want to deploy an application that utilizes memory cache using docker and kubernetes with horizontal pod auto-scale, but we have no idea if the containerized application inside the pods would use the same cache since it won't be guaranteed that the pods would be in the same node when scaled by the auto-scaler.</p>
<p>I've tried searching for information regarding cache memory on kubernetes clusters, and all I found is a statement in a <a href="https://medium.com/google-cloud/kubernetes-101-pods-nodes-containers-and-clusters-c1509e409e16" rel="nofollow noreferrer">Medium article</a> that states </p>
<blockquote>
<p>the CPU and RAM resources of all nodes are effectively pooled and managed by the cluster</p>
</blockquote>
<p>and a sentence in a <a href="https://www.mirantis.com/blog/multi-container-pods-and-container-communication-in-kubernetes/" rel="nofollow noreferrer">Mirantis blog</a></p>
<blockquote>
<p>Containers in a Pod share the same IPC namespace, which means they can also communicate with each other using standard inter-process communications such as SystemV semaphores or POSIX shared memory.</p>
</blockquote>
<p>But I can't find anything regarding pods in different nodes having access to the same cache. And these are all on 3rd party sites and not in the official kubernetes site.</p>
<p>I'm expecting the cache to be shared between all pods in all nodes, but I just want confirmation regarding the matter.</p>
| <p>No, separate pods do not generally share anything even if running on the same physical node. There are ways around this if you are very very careful and fancy but the idea is for pods to be independent anyway. Within a single pod it's easier, you can use normal shmem, but this is pretty rare since there isn't much reason to do that usually.</p>
|
<p>I have defined value in a json file.</p>
<pre><code>cat templates/encrypt.json
{"encrypt": "cg8StVXbQJ0gPvMd9o7yrg=="}
</code></pre>
<p>The value has to be passed to the yaml file as below</p>
<pre><code>-config-file={{ tpl (.Files.Get "encrypt.json") . | b64enc }} \
</code></pre>
<p>Below is the helm chart snippet</p>
<pre><code>exec /bin/consul agent \
-node="${NODE}" \
-advertise="${POD_IP}" \
-bind=0.0.0.0 \
-client=0.0.0.0 \
{{- if .Values.client.grpc }}
-hcl="ports { grpc = 8502 }" \
{{- end }}
-config-dir=/consul/config \
{{- range .Values.client.extraVolumes }}
{{- if .load }}
-config-dir=/consul/userconfig/{{ .name }} \
{{- end }}
{{- end }}
-datacenter={{ .Values.global.datacenter }} \
-data-dir=/consul/data \
-config-file={{ tpl (.Files.Get "encrypt.json") . | b64enc }} \
{{- if (.Values.client.join) and (gt (len .Values.client.join) 0) }}
</code></pre>
<p>When I run my health charts, I get the below error.</p>
<pre><code>Error: unable to decode "": Object 'Kind' is missing in '{"encrypt":"cg8StVXbQJ0gPvMd9o7yrg=="}'
</code></pre>
| <p>What you're injecting with <code>{{ tpl (.Files.Get "encrypt.json") . | b64enc }}</code> is the content of the json i.e. <code>{"encrypt": "cg8StVXbQJ0gPvMd9o7yrg=="}</code>. But I don't think that is what that parameter expects. It seems to expect a filename for a file that should be available in the Pod, which can be done by mounting the configmap. That is <a href="https://github.com/helm/charts/blob/735b3cc5079c599738e5a4cb0aaab4183018177d/stable/consul/templates/consul-statefulset.yaml#L115" rel="nofollow noreferrer">how the consul helm chart in the official kubernetes charts handles it</a>:</p>
<pre><code> {{- if .Values.Gossip.Encrypt }}
if [ -e /etc/consul/secrets/gossip-key ]; then
echo "{\"encrypt\": \"$(base64 /etc/consul/secrets/gossip-key)\"}" > /etc/consul/encrypt.json
GOSSIP_KEY="-config-file /etc/consul/encrypt.json"
fi
{{- end }}
</code></pre>
<p>It <a href="https://github.com/helm/charts/blob/d18382c23dbe20d668a9af908604056866895e99/stable/consul/values.yaml#L55" rel="nofollow noreferrer">lets the user set a gossip key in the values file</a> and <a href="https://github.com/helm/charts/blob/f49709ff52dc25f7772f6a9baefbf784e734a359/stable/consul/templates/gossip-secret.yaml#L13" rel="nofollow noreferrer">sets that in a secret</a> which is <a href="https://github.com/helm/charts/blob/f49709ff52dc25f7772f6a9baefbf784e734a359/stable/consul/templates/consul-statefulset.yaml#L93" rel="nofollow noreferrer">mounted into</a> the Pods <a href="https://github.com/helm/charts/blob/f49709ff52dc25f7772f6a9baefbf784e734a359/stable/consul/templates/consul-statefulset.yaml#L167" rel="nofollow noreferrer">as a volume</a>. I'd suggest following the approach of that chart if you can.</p>
<p>I guess what you are doing is building on top of <a href="https://github.com/hashicorp/consul-helm/blob/7c9eb5c90b6efd66bfe8a8fb70f92f7bb33a7e36/templates/client-daemonset.yaml#L75" rel="nofollow noreferrer">the consul helm chart that Hashicorp provides</a> as the code you include is similar to that. So presumably you can't use the one from the kubernetes repo but you should be able to follow the approach taken by that chart for this config file.</p>
|
<p>Say that I have 5 apis that i want to deploy in a Kubernetes cluster, my question is simply what is the best practice to store the yaml files related to Kubernetes. </p>
<p>In projects I've seen online, Kubernetes yaml files are just added to the the api project itself. I wonder if it makes sense to decouple all files related to Kubernetes in an entirely separate "project", and which is managed by VCS as a completely separated entity from the api projects themselves. </p>
<p>This question arises since I'm currently reading a book about Kubernetes, on the topic namespaces, and considered it might be a good idea to have separate namespaces per environment (DEV / UAT / PROD), and it may make sense to have these files in a centralized "Kubernetes" project (unless it might be better to have a separate cluster per environment (?)). </p>
| <p>Whether to put the yaml in the same repo as the app is a question that projects answer in different ways. You might want to put them together if you find that you often change both at the same time or you just find it clearer to see everything in one place. You might separate if you mostly work on the yaml separately or if you find it less clutttered or want different visibility for it (e.g. different teams to look at it). If things get more sophisticated then you'll actually want to generate the yaml from templates and inject environment-specific configuration into it at deploy time (whether those environments are namespaces or clusters) - see <a href="https://stackoverflow.com/questions/47168381/best-practices-for-storing-kubernetes-configuration-in-source-control">Best practices for storing kubernetes configuration in source control</a> for more discussion on this.</p>
|
<p>I'm looking for the solution of how to get logs from a pod in Kubernetes cluster using Go. I've looked at "https://github.com/kubernetes/client-go" and "https://godoc.org/sigs.k8s.io/controller-runtime/pkg/client", but couldn't understand how to use them for this purpose. I have no issues getting information of a pod or any other object in K8S except for logs.</p>
<p>For example, I'm using Get() from "https://godoc.org/sigs.k8s.io/controller-runtime/pkg/client#example-Client--Get" to get K8S job info:</p>
<pre><code>found := &batchv1.Job{}
err = r.client.Get(context.TODO(), types.NamespacedName{Name: job.Name, Namespace: job.Namespace}, found)
</code></pre>
<p>Please share how you get a pod's logs nowadays.
Any suggestions would be appreciated!</p>
<p>Update:
The solution provided in <a href="https://stackoverflow.com/questions/32983228/kubernetes-go-client-api-for-log-of-a-particular-pod">Kubernetes go client api for log of a particular pod</a> is out of date. It has some tips, but it is not up to date with current libraries.</p>
| <p>Here is what we came up with eventually using client-go library:</p>
<pre><code>import (
    "bytes"
    "io"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

func getPodLogs(pod corev1.Pod) string {
podLogOpts := corev1.PodLogOptions{}
config, err := rest.InClusterConfig()
if err != nil {
return "error in getting config"
}
// creates the clientset
clientset, err := kubernetes.NewForConfig(config)
if err != nil {
return "error in getting access to K8S"
}
req := clientset.CoreV1().Pods(pod.Namespace).GetLogs(pod.Name, &podLogOpts)
podLogs, err := req.Stream()
if err != nil {
return "error in opening stream"
}
defer podLogs.Close()
buf := new(bytes.Buffer)
_, err = io.Copy(buf, podLogs)
if err != nil {
return "error in copy information from podLogs to buf"
}
str := buf.String()
return str
}
</code></pre>
<p>I hope it will help someone. Please share your thoughts or solutions of how you get logs from pods in Kubernetes.</p>
|
<p>I have a cronjob that runs and does things regularly. I want to send a slack message with the technosophos/slack-notify container when that cronjob fails.</p>
<p>Is it possible to have a container run when a pod fails?</p>
| <p>There is nothing built in for this that I am aware of. You could use a webhook to get notified when a pod changes and look for the relevant state in there. But you would have to build the plumbing yourself or look for an existing third-party tool.</p>
|
<p>I have two API's A and B that I control and both have readiness and liveness health checks. A has a dependency on B.</p>
<pre><code>A
/foo - This endpoint makes a call to /bar in B
/status/live
/status/ready
B
/bar
/status/live
/status/ready
</code></pre>
<p>Should the readiness health check for A make a call to the readiness health check for API B because of the dependency?</p>
| <p>Service A is ready if it can serve business requests. So if being able to reach B is part of what it <em>needs</em> to do (which it seems it is) then it should check B.</p>
<p>An advantage of having A check for B is you can then <a href="https://blog.sebastian-daschner.com/entries/zero-downtime-updates-kubernetes" rel="noreferrer">fail fast on a bad rolling upgrade</a>. Say your A gets misconfigured so that the upgrade features a wrong connection detail for B - maybe B's service name is injected as an environment variable and the new version has a typo. If your A instances check to Bs on startup then you can more easily ensure that the upgrade fails and that no traffic goes to the new misconfigured Pods. For more on this see <a href="https://medium.com/spire-labs/utilizing-kubernetes-liveness-and-readiness-probes-to-automatically-recover-from-failure-2fe0314f2b2e" rel="noreferrer">https://medium.com/spire-labs/utilizing-kubernetes-liveness-and-readiness-probes-to-automatically-recover-from-failure-2fe0314f2b2e</a></p>
<p>It would typically be enough for A to check B's liveness endpoint or any minimal availability endpoint rather than B's readiness endpoint. This is because kubernetes will be <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-setting-up-health-checks-with-readiness-and-liveness-probes" rel="noreferrer">checking B's readiness probe for you anyway</a> so any B instance that A can reach will be a ready one. Calling B's liveness endpoint rather than readiness can make a difference if B's <a href="https://medium.com/metrosystemsro/kubernetes-readiness-liveliness-probes-best-practices-86c3cd9f0b4a" rel="noreferrer">readiness endpoint performs more checks than the liveness one</a>. Keep in mind that kubernetes will be calling these probes regularly - <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#configure-probes" rel="noreferrer">readiness as well as liveness - they both have a period</a>. The difference is whether the Pod is withdrawn from serving traffic (if readiness fails) or restarted (if liveness fails). You're not trying to do <a href="https://dzone.com/articles/monitoring-microservices-with-health-checks" rel="noreferrer">end-to-end transaction checks</a>, you want these checks to contain minimal logic and not use up too much load.</p>
<p>It is preferable if the code within A's implementation of readiness does the check rather than doing the check at the k8s level (in the Pod spec itself). It is second-best to do it at the k8s level as ideally you want to know that the code running in the container really does connect.</p>
<p>Another way to check dependent services are available <a href="https://medium.com/@xcoulon/initializing-containers-in-order-with-kubernetes-18173b9cc222" rel="noreferrer">is with a check in an initContainer</a>. Using initContainers avoids seeing multiple restarts during startup (by ensuring correct ordering) but doing the checks to dependencies through probes can go deeper (if implemented in the app's code) and the probes will continue to run periodically after startup. So it can be advantageous to use both. </p>
<p>Be careful of checking other services from readiness too liberally as it can lead to cascading unavailability. For example, if a backend briefly goes down and a frontend is probing to it then the frontend will also become unavailable and so won't be able to display a good error message. You might want to start with simple probes and carefully add complexity as you go.</p>
|
<p>I have a subchart in <code>charts/</code> directory. I would like to disable it for some deployments.</p>
<p>Is it possible somehow? Currently i see the only way to add condition to all templates like below:</p>
<p>deployment.yaml</p>
<pre><code>{{- if .Values.isDev }}
deployment code
{{- end }}
</code></pre>
<p>service.yaml</p>
<pre><code>{{- if .Values.isDev }}
service code
{{- end }}
</code></pre>
| <p>Also, for current version of Helm (2.12 at this time), it is also possible to write a <code>requirements.yaml</code> in which one can specify not only remote charts for Helm to download, but also Charts inside the <code>charts</code> folder. In this <code>requirements.yaml</code> one can specify a <code>condition</code> field for each dependency. This field is the path for a parent's Value.</p>
<p>So, for instance, given this <code>requirements.yaml</code>: </p>
<pre><code>dependencies:
- name: one-dep
version: 0.1.0
condition: one-dep.enabled
- name: another-dep
version: 0.1.0
condition: another-dep.enabled
</code></pre>
<p>Your <code>values.yaml</code> could have:</p>
<pre><code>one-dep:
enabled: true
another-dep:
enabled: false
</code></pre>
<p>This will result in Helm only including <code>one-dep</code> chart.
It's worth noting that if the path specified in the <code>condition</code> does not exist, it defaults to <code>true</code>.</p>
<p><a href="https://docs.helm.sh/developing_charts/#tags-and-condition-fields-in-requirements-yaml" rel="noreferrer">Here's the link to the doc</a></p>
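<p>A particular install can then switch subcharts on or off from the command line without editing <code>values.yaml</code> — for example (assuming Helm 2 syntax and that the chart lives in the current directory):</p>

<pre><code>helm install --name my-release . --set another-dep.enabled=true
helm upgrade my-release . --set one-dep.enabled=false
</code></pre>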
|
<p>I am trying to create and mount a volume but getting stuck.</p>
<p>This part creates the storage:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvclaim2
spec:
accessModes:
- ReadWriteOnce
storageClassName: managed-premium
resources:
requests:
storage: 5Gi
</code></pre>
<p>The following is a continuation of my deployment section:</p>
<pre><code>volumeMounts:
- name: config
mountPath: /config
readOnly: true
args:
- --configfile=/config/traefik.toml
volumes:
- name: config
persistentVolumeClaim:
claimName: pvclaim2
configMap:
name: traefik-config
</code></pre>
<p>I keep getting the below error message:</p>
<blockquote>
<p>The Deployment "traefik-ingress-controller" is invalid:
spec.template.spec.containers[0].volumeMounts[0].name: Not found:
"config"</p>
</blockquote>
<p>Any help is appreciated.</p>
<p>UPDATE:</p>
<pre><code>Output from describe pv:
Conditions:
Type Status
PodScheduled False
Volumes:
certs:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: pvclaim101
ReadOnly: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: traefik-conf
Optional: false
traefik-ingress-controller-token-6npxp:
Type: Secret (a volume populated by a Secret)
SecretName: traefik-ingress-controller-token-6npxp
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 1m (x25 over 2m) default-scheduler persistentvolumeclaim "pvclaim101" not found
</code></pre>
| <p>Looks like you have an indentation issue: it's finding the volumeMount but not the volume. Something like this should work:</p>

<pre><code>containers:
  - image: your-image
    name: your-containers
    volumeMounts:
    - name: config
      mountPath: /config
      readOnly: true
    args:
    - --configfile=/config/traefik.toml
volumes:
  # a volume may only have a single source, so the ConfigMap and the
  # PersistentVolumeClaim have to be two separate volumes
  - name: config
    configMap:
      name: traefik-config
  - name: certs
    persistentVolumeClaim:
      claimName: pvclaim2
</code></pre>
|
<p>I am trying to delete a pod in my Kubernetes cluster, then check its status to see how long it takes for the pod to go down and come back up. I could not find any helpful example for the second part, which is getting a specific pod's status using go-client. Any help is appreciated. </p>
| <p>You can use Get function to get specific pod information (below examples are getting whole Status struct):</p>
<pre><code>pod, _ := clientset.CoreV1().Pods("kubernetes").Get(pod.Name, metav1.GetOptions{})
fmt.Println(pod.Status)
</code></pre>
<p>Also, you can use List function to get all pods in the particular namespace and then range them:</p>
<pre><code>pods, _ := clientset.CoreV1().Pods("kubernetes").List(metav1.ListOptions{FieldSelector: "metadata.name=kubernetes"})
for _, pod := range pods.Items {
fmt.Println(pod.Name, pod.Status)
}
</code></pre>
<p>Hope this helps!</p>
|
<p>I am facing the issue when I delete the <code>default-token</code> secret of service account.</p>
<p>Kubernetes automatically mounts the <code>deafult-token</code> into the pod spec so that you can access its API.</p>
<pre><code>volumes:
- name: default-token-hh98h
secret:
defaultMode: 420
secretName: default-token-xxx
</code></pre>
<p>Now when I delete the secret <code>default-token-xxx</code>, kubernetes automatically creates new <code>default-token-yyy</code>, but that token doesn't reflect in the pod spec. If I restart the pods then the new token takes effect.</p>
<p>Is there any way to refresh the <code>default-token</code> secret without deleting the pod?</p>
| <p>It is not possible to update the <code>secretName</code> field of <code>pod.spec.volumes</code> on a running pod, because that update is forbidden.
Even if you try to change it, you will get an error:</p>
<pre><code>* spec: Forbidden: pod updates may not change fields other than `spec.containers[*].image`, `spec.initContainers[*].image`, `spec.activeDeadlineSeconds` or `spec.tolerations` (only additions to existing tolerations)
</code></pre>
<p>And you are trying to update <code>spec.volumes</code>, which is forbidden.</p>
|
<p>On our current <code>Rancher environment</code>, we dynamically configure an <code>Nginx</code> configuration based on calls to the Rancher metadata, using labels on the containers to determine if that container is included in the <code>Nginx</code> routing.<br>
We use <code>confd</code> with a <code>Rancher</code> backend to accomplish this metadata check and to dynamically change/reload a new <code>Nginx</code> configuration.</p>
<p>We have started working on migrating to <code>Kubernetes</code> (AWS EKS). Is there an equivalent to this <code>confd/Rancher</code> available for <code>Kubernetes</code> ?</p>
<p>Due to some technical reasons and time scoping reasons, we can't replace this <code>nginx</code> with an <code>ingress</code> equivalent at this time, so are looking into using annotations or labels on services/pods to keep a dynamic configuration capability.</p>
| <p>To add a little further detail on what we eventually found thanks to Scott Anderson's answer.</p>
<p>Using the nginx custom template technique, we were able to dynamically configure the nginx configuration by using annotations in the Ingress resources and referencing them in the nginx custom template.</p>
<p>With an Ingress resource metadata defined as:</p>
<pre><code>metadata:
name: foo
annotations:
kubernetes.io/ingress.class: "nginx-legacy"
mycompany/bar: "path_to_service"
</code></pre>
<p>Within the custom Nginx template (location block), to see if the annotation is present:</p>
<pre><code>{{if index $ing.Annotations "mycompany/bar"}}
</code></pre>
<p>To get a value from an annotation:</p>
<pre><code>{{$bar:= index $ing.Annotations "mycompany/bar"}}
</code></pre>
|
<p>Say that I have 5 apis that i want to deploy in a Kubernetes cluster, my question is simply what is the best practice to store the yaml files related to Kubernetes. </p>
<p>In projects I've seen online, Kubernetes yaml files are just added to the the api project itself. I wonder if it makes sense to decouple all files related to Kubernetes in an entirely separate "project", and which is managed by VCS as a completely separated entity from the api projects themselves. </p>
<p>This question arises since I'm currently reading a book about Kubernetes, on the topic namespaces, and considered it might be a good idea to have separate namespaces per environment (DEV / UAT / PROD), and it may make sense to have these files in a centralized "Kubernetes" project (unless it might be better to have a separate cluster per environment (?)). </p>
| <p>From Production k8s experience for CI/CD:</p>
<ul>
<li>One cluster per environment such as dev , stage , prod ( optionally per data centre )</li>
<li><p>One namespace per project</p></li>
<li><p>One git deployment repo per project</p></li>
<li>One branch in git deployment repo per environment</li>
<li>Use configmaps for configuration aspects</li>
<li>Use secret management solution to store and use secrets</li>
</ul>
|
<p>I am trying to expose a stateful mongo replica set running on my cluster for outside access.</p>
<p>I have three replicas and I have created a <code>LoadBalancer</code> service for each replica with the same <code>LoadBalancerIP</code> while incrementing the port sequentially from <code>10255</code> to <code>10257</code>.</p>
<pre><code> apiVersion: v1
kind: Service
metadata:
name: mongo-service-0
namespace: datastore
labels:
app: mongodb-replicaset
spec:
loadBalancerIP: staticip
type: LoadBalancer
externalTrafficPolicy: Local
selector:
statefulset.kubernetes.io/pod-name: mongo-mongodb-replicaset-0
ports:
- protocol: TCP
port: 10255
targetPort: 27017
</code></pre>
<p>The issue is that only one service, <code>mongo-service-0</code>, is deployed successfully with the static IP; the others time out after a while.</p>
<p>What I am trying to figure out is if I can use a single static IP address as LoadBalancerIP across multiple services with different ports.</p>
| <p>Since you are using different ports for each of your Mongo replicas you had to create different Kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Services</a> for each one of the replicas. You can only tie a Kubernetes service with a single load balancer and each load balancer will have its own unique IP (or IPs) address, so you won't be able to share that IP address across a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">LoadBalancer</a> type of Service.</p>
<p>The workaround is to use a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort</a> service and basically manage your load balancer independently and point it to the NodePort for each one of your replicas.</p>
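<p>With the NodePort route, each replica would get a Service like the sketch below, and your own load balancer (holding the single static IP) would forward each external port to the corresponding <code>nodePort</code> on the nodes (the <code>nodePort</code> value here is an arbitrary choice from the 30000-32767 range):</p>

<pre><code>apiVersion: v1
kind: Service
metadata:
  name: mongo-service-0
  namespace: datastore
spec:
  type: NodePort
  selector:
    statefulset.kubernetes.io/pod-name: mongo-mongodb-replicaset-0
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017
    nodePort: 30255
</code></pre>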
<p>Another workaround is to use the same port on your Mongo replicas and use the same Kubernetes LoadBalancer service. Is there any reason why you are not using that?</p>
|
<p>I've had great difficulties routing traffic to k8s API and services.</p>
<p>First I've created a cluster(k8s.buycheese.com) with KOPS in private
topology, within a VPC, so that, master and nodes are only accessible from a bastion using SSH.</p>
<p>I own a domain in namecheap (buycheese.com) and I've created a hosted zone(k8s.buycheese.com) in route53.
After KOPS has installed the cluster, it added a couple of record sets to the hosted zone like <code>api.k8s.buycheese.com</code>.</p>
<p>I've added the hosted zone's namespaces to my domain in namecheap, so that I can access the Kubernetes cluster(kubectl). That works correctly!</p>
<p>Next, I've installed an ingress nginx controller. Then I've created 2 ingresses:</p>
<ul>
<li>One to expose the Kubernetes dashboard</li>
<li>Another one to expose a nodeJS application</li>
</ul>
<p>I then tested my nodeJS Application using the ingress nginx ELB's URL and I can confirm that works! So I know that my pods are running correctly and the ELB works fine!</p>
<p>But obviously, I want my applications to be accessed through the domain I own...</p>
<p>So basically:</p>
<p>I need a new subdomain <code>dashboard.buycheese.com</code> to get to the Kubernetes dashboard.</p>
<p>And I need <code>buycheese.com</code> and <code>www.buycheese.com</code> domains to redirect to my nodeJS app.</p>
<p>Well, to do that, I've created a new hosted zone named buycheese.com in route53, and added 4 new namespaces to my domain buycheese.com in namecheap.</p>
<p>Then I've created 2 aliases(A) within that same hosted zone:</p>
<p><code>dashboard.buycheese.com</code> with Alias Target: ingress nginx's ELB
<code>www.buycheese.com</code> with Alias Target: ingress nginx's ELB</p>
<p>Then within my 2 ingress files</p>
<pre><code># Dashboard
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
name: kubernetes-dashboard-oidc
namespace: kube-system
spec:
rules:
- host: dashboard.buycheese.com
http:
paths:
- path: /
backend:
serviceName: kubernetes-dashboard-oidc
servicePort: 80
# NodeJS App
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
name: app
  namespace: default
spec:
rules:
- host: buycheese.com
http:
paths:
- path: /
backend:
serviceName: app-service
servicePort: 3000
</code></pre>
<p>To sum up I have 2 hosted zones</p>
<p>1) <code>k8s.buycheese.com</code>
2) <code>buycheese.com</code></p>
<p>2 Alias within hosted zone buycheese.com:</p>
<p>1) <code>www.buycheese.com</code>
2) <code>dashboard.buycheese.com</code></p>
<p>2 Ingresses to expose the dashboard and my app</p>
<p>That configuration does not work at all! The below URLs are not reachable!</p>
<ul>
<li><code>dashboard.buycheese.com</code></li>
<li><code>www.buycheese.com</code></li>
<li><code>buycheese.com</code></li>
</ul>
<p>Only the ELB's URL works!</p>
<p>So first I would like to know whether my set up is correct(obviously no, but why ?)</p>
<p>What's the right way to make all of those URLs exposing my services and applications?</p>
<p>Thanks for your help!</p>
| <p>The only thing that I think may be happening here is that <code>Alias Target:</code> is not forwarding to the ELB.</p>
<p>You can try using <a href="https://documentation.unbounce.com/hc/en-us/articles/203687394-Setting-Up-Your-CNAME-with-NameCheap" rel="nofollow noreferrer">CNAME records instead</a>. You can always test using <code>dig</code> from the command line:</p>
<pre><code>$ dig buycheese.com
$ dig www.buycheese.com
$ dig dashboard.buycheese.com
</code></pre>
|
<p>Every time I deploy a new build in Kubernetes, I am getting a different EXTERNAL-IP, which in the below case is afea383cbf72c11e8924c0a19b12bce4-xxxxx.us-east-1.elb.amazonaws.com</p>
<pre><code>$ kubectl get services -o wide -l appname=${APP_FULLNAME_SYSTEST},stage=${APP_SYSTEST_ENV}
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
test-systest-lb-https LoadBalancer 123.45.xxx.21 afea383cbf72c11e8924c0a19b12bce4-xxxxx.us-east-1.elb.amazonaws.com 443:30316/TCP 9d appname=test-systest,stage=systest
</code></pre>
<p>How can I have a static external IP (ELB) so that I can link it to Route 53? Do I have to include something in my Kubernetes deployment YAML file? </p>
<p>Additional details: I am using below loadbalancer </p>
<pre><code>spec:
type: LoadBalancer
ports:
- name: http
port: 443
targetPort: 8080
protocol: TCP
selector:
appname: %APP_FULL_NAME%
stage: %APP_ENV%
</code></pre>
| <p>If you are just doing new builds of a single Deployment then you should check what your pipeline is doing to the Service. You want to do a <code>kubectl apply</code> and a rolling update on the Deployment (provided the strategy is set on the Deployment) without modifying the Service (so not a <code>delete</code> and a <code>create</code>). If you do <code>kubectl get services</code> you should see its age (your output shows 9d so that's all good) and <code>kubectl describe service <service_name></code> will show any events on it.</p>
<p>I'm guessing you just want an external DNS entry you can point to, like 'afea383cbf72c11e8924c0a19b12bce4-xxxxx.us-east-1.elb.amazonaws.com', and <a href="https://stackoverflow.com/questions/38063891/how-to-get-permanent-ip-address-of-a-kubernetes-load-balancer-service-on-aws">not a truly static IP</a>. If you do want a true static IP you won't get it like this, but you <a href="https://aws.amazon.com/blogs/opensource/network-load-balancer-support-in-kubernetes-1-9/" rel="nofollow noreferrer">can now try NLB</a>.</p>
<p>If you mean you want multiple Deployments (different microservices) to share a single IP then you could install an ingress controller and expose that with an ELB. Then when you deploy new apps you use an Ingress resource for each to tell the controller to expose them externally. So you can then put all your apps on the same external IP but routed under different paths or subdomains. The <a href="https://medium.com/kokster/how-to-setup-nginx-ingress-controller-on-aws-clusters-7bd244278509" rel="nofollow noreferrer">nginx ingress controller is a good option</a>.</p>
|
<p>I am new to Kubernetes. I am trying to follow <a href="https://kubernetes.io/docs/setup/minikube/#quickstart" rel="noreferrer">this tutorial</a> that instructs me on how to use minikube to setup a local service. I was able to get things running with the <code>$ kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.10 --port=8080</code> service from the tutorial. Huzzah!</p>
<p>Now I want to run a server with a <em>locally tagged-and-built</em> Docker image. According to <a href="https://stackoverflow.com/questions/42564058/how-to-use-local-docker-images-with-minikube">this post</a> all I need to do is tell my computer to use the minikube docker daemon, build my image, and set the <code>imagePullPolicy</code> to never. </p>
<p>How and where do I set the <code>imagePullPolicy</code> with <code>minikube</code>? I've googled around and while there's plenty of results, my "babe in the woods" status with K8 leads to information overload. (i.e. the simpler your answer the better)</p>
| <p>You have to edit your <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer">Deployment</a> (<code>kubectl run</code> creates a deployment). The spec would look something like this:</p>
<pre><code>spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 2
selector:
matchLabels:
run: hello-minikube
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
run: hello-minikube
spec:
containers:
- image: k8s.gcr.io/echoserver:1.10 <-- change to the right image
        imagePullPolicy: IfNotPresent <-- change to Never so the locally built image is used
name: hello-minikube
ports:
- containerPort: 8080
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
</code></pre>
<p>Edit with:</p>
<pre><code>$ kubectl edit deployment hello-minikube
</code></pre>
|
<p>As I will have no wifi tomorrow this is more some kind of theory crafting. I need to prepare the ingress files in "offline mode". </p>
<p>I want to route from <code>ApplicationA</code> to <code>ApplicationB</code>. These routes are hopefully able to carry url parameter. Both applications are using <code>spring boot</code> and <code>REST</code>. The cluster is (currently) set up by <code>minikube</code>.</p>
<p>So e.g. I got this url in <code>ServiceA</code>: <code>http://url.com/customerapi/getCustomerById?id=5</code>. This url should hit a method which is defined in <code>ApplicationB</code>. <code>ApplicationB</code> is reachable using <code>customerservice</code> and port 31001.</p>
<p>Is it as simple as the ingress below? Thats pretty much straight forward. Best regards.</p>
<p>I would define an <code>kubernetes ingress</code> like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: serviceA
spec:
rules:
- http:
paths:
- path: /customerapi
backend:
serviceName: customerservice
servicePort: 31001
</code></pre>
| <p>If I understand you correctly, you want to route traffic coming from web into two backends based on the url.</p>
<p>You can set your Ingress the following way:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: cafe-ingress-nginx
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- host: url.com
http:
paths:
- path: /test1
backend:
serviceName: test1-svc
servicePort: 80
- path: /test2
backend:
serviceName: test2-svc
servicePort: 80
</code></pre>
<p>This will route all from <code>url.com/test1</code> to backend <code>test1-svc</code> and all from <code>url.com/test2</code> to backend <code>test2-svc</code>.</p>
<p>If you need to use the parameter inside the <code>Url</code>, I think the following will work:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test
annotations:
ingress.kubernetes.io/query-routing: default/query-routing
spec:
backend:
serviceName: default-backend
servicePort: 80
rules:
- host: url.com
---
kind: ConfigMap
apiVersion: v1
metadata:
name: query-routing
data:
mapping: |-
[{
"field": "getCustomerById",
"value": "1",
"path": "customerapi/",
"service": "customerservice",
"port": "31001"
}]
</code></pre>
<p>But please test it on your example, as there are not enough details in your question.</p>
<p>There is a way of catching the parameter from <code>Header</code> using <code>nginx.ingress.kubernetes.io/server-snippet</code> Annotations. This particular one is being used by Shopify and usage is explained <a href="https://github.com/Shopify/ingress/blob/master/docs/user-guide/nginx-configuration/annotations.md#server-snippet" rel="nofollow noreferrer">here</a>. For more annotations please check Kubernetes <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/" rel="nofollow noreferrer">NGINX Ingress Controller</a>.</p>
|
<p>I currently have an Ingress configured on GKE (k8s 1.2) to forward requests towards my application's pods. I have a request which can take a long time (<strong>30</strong> seconds) and timeout from my application (504). I observe that when doing so the response that i receive is not my own 504 but a 502 from what looks like the Google Loadbalancer after <strong>60</strong> seconds. </p>
<p>I have played around with different status codes and durations, exactly after 30 seconds i start receiving this weird behaviour regardless of statuscode emitted.</p>
<p>Anybody have a clue how i can fix this? Is there a way to reconfigure this behaviour? </p>
| <p>Beginning with 1.11.3-gke.18, it is possible to configure timeout settings in kubernetes directly. </p>
<p>First add a backendConfig:</p>
<pre><code>apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
name: my-bsc-backendconfig
spec:
timeoutSec: 40
</code></pre>
<p>Then add an annotation in Service to use this backendConfig:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-bsc-service
labels:
purpose: bsc-config-demo
annotations:
beta.cloud.google.com/backend-config: '{"ports": {"80":"my-bsc-backendconfig"}}'
spec:
type: NodePort
selector:
purpose: bsc-config-demo
ports:
- port: 80
protocol: TCP
targetPort: 8080
</code></pre>
<p>And voilà, your ingress load balancer now has a timeout of 40 seconds instead of the default 30 seconds.</p>
<p>See <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/configure-backend-service#creating_a_backendconfig" rel="noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/configure-backend-service#creating_a_backendconfig</a></p>
|
<p>Running a Spring Boot application inside a OpenShift Pod. To execute the readiness and liveness probe, I created an appropriate YAML file. However the Pod fails and responds that he was not able to pass the readiness check (after approximately 5 minutes). </p>
<p>My goal is to execute the readiness probe every 20 minutes. But I assume that it is failing because it adds up the initalDelaySeconds together with the periodSeconds. So I guess that the first check after the pod has been started will be executed after 22 minutes.</p>
<p>Following the related configuration of the readiness probe.</p>
<pre><code>readinessProbe:
failureThreshold: 3
httpGet:
path: /actuator/health
port: 8080
scheme: HTTP
initialDelaySeconds: 120
periodSeconds: 1200
successThreshold: 1
timeoutSeconds: 60
</code></pre>
<p>Is my assumption right? How to avoid it (Maybe increase the timeout regarding the kubelet)?</p>
| <p>Your configuration is correct and the <code>initialDelaySeconds</code> and <code>periodSeconds</code> do not sum up. So, the first readinessProbe HTTP call will happen exactly 2 minutes after you start your Pod.</p>
<p>I would look for an issue in your app itself. The first thing that comes to my mind is the health path: is it really <code>/actuator/health</code>? That is the default for Spring Boot 2 Actuator, while plain <code>/health</code> was the default in Spring Boot 1.x, so make sure the path matches your Spring Boot version and management configuration.</p>
<p>If that doesn't help, then the best would be to debug it: <code>exec</code> into your container and use <code>curl</code> to check if your health endpoint works correctly (it should return HTTP Code 200).</p>
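<p>For example, something like this (a sketch — replace the pod name with yours; on OpenShift <code>oc exec</code> works the same way):</p>
<pre><code>kubectl exec -it <your-pod-name> -- curl -i http://localhost:8080/actuator/health
</code></pre>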
|
<p>Kubernetes has a mechanism for supporting versioning of CRDs. See <a href="https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning/" rel="noreferrer">https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning/</a>. What is not clear to me is how you actually support an evolution of CRD v1 to CRD v2 when you cannot always convert from v1 <-> v2. Suppose we introduce a new field in v2 that cannot be populated by a webhook conversion; then perhaps all we can do is leave the field null? Furthermore, when you request API version N you always get back an object as version N even if it was not written as version N, so how can your controller know how to treat the object?</p>
| <p>As you can read in <a href="https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definition-versioning/#writing-reading-and-updating-versioned-customresourcedefinition-objects" rel="nofollow noreferrer">Writing, reading, and updating versioned CustomResourceDefinition objects</a></p>
<blockquote>
<p>If you update an existing object, it is rewritten at the version that is currently the storage version. This is the only way that objects can change from one version to another.</p>
</blockquote>
<p>Kubernetes returns the object to you at the version you requested, but the persisted object is neither changed on disk, nor converted in any way (other than changing the apiVersion string) while serving the request.</p>
<p>You read your object at version <code>v1beta1</code>, then you read the object again at version <code>v1</code>. Both returned objects are identical except for the apiVersion field; see <a href="https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definition-versioning/#upgrade-existing-objects-to-a-new-stored-version" rel="nofollow noreferrer">Upgrade existing objects to a new stored version</a>.</p>
<p>The API server also supports webhook conversions that call an external service in case a conversion is required.
The webhook handles the ConversionReview requests sent by the API servers, and sends back conversion results wrapped in a ConversionResponse. You can read about webhooks <a href="https://book.kubebuilder.io/beyond_basics/what_is_a_webhook.html" rel="nofollow noreferrer">here</a>.</p>
<p>Webhook conversion was introduced in <code>Kubernetes v1.13</code> as an alpha feature.
When the webhook server is deployed into the Kubernetes cluster as a service, it has to be exposed via a service on port 443.
When deprecating versions and dropping support, devise a storage upgrade procedure.</p>
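<p>For illustration, a minimal sketch of a versioned CRD wired up for webhook conversion (all names below are placeholders, not taken from any real setup; on v1.13 this also needs the <code>CustomResourceWebhookConversion</code> feature gate enabled):</p>
<pre><code>apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v2
    served: true
    storage: true        # new and updated objects are persisted as v2
  - name: v1
    served: true
    storage: false
  conversion:
    strategy: Webhook    # the default is None, which only rewrites apiVersion
    webhookClientConfig:
      service:
        namespace: default
        name: widget-conversion-webhook
        path: /convert
      caBundle: <base64-encoded CA certificate>
</code></pre>
<p>A new v2-only field that the webhook cannot derive from v1 data simply stays at its default (e.g. null) until something writes it at v2.</p>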
|
<p>I have AKS with one Kubernetes cluster having 2 nodes. Each node runs about 6-7 pods with 2 containers per pod: one container is my Docker image and the other is created by Istio for its service mesh. But after about 10 hours the nodes become 'NotReady' and the node describe shows me 2 errors:</p>
<ol>
<li>container runtime is down, PLEG is not healthy: pleg was lastseen active 1h32m35.942907195s ago; threshold is 3m0s.</li>
<li>rpc error: code = DeadlineExceeded desc = context deadline exceeded, Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?</li>
</ol>
<p>When I restart the node, it works fine, but the node goes back to 'NotReady' after a while. I started facing this issue after adding Istio, but could not find any documents relating the two. My next step is to try and upgrade Kubernetes.</p>
<p>The node describe log:</p>
<pre><code>Name: aks-agentpool-22124581-0
Roles: agent
Labels: agentpool=agentpool
beta.kubernetes.io/arch=amd64
beta.kubernetes.io/instance-type=Standard_B2s
beta.kubernetes.io/os=linux
failure-domain.beta.kubernetes.io/region=eastus
failure-domain.beta.kubernetes.io/zone=1
kubernetes.azure.com/cluster=MC_XXXXXXXXX
kubernetes.io/hostname=aks-XXXXXXXXX
kubernetes.io/role=agent
node-role.kubernetes.io/agent=
storageprofile=managed
storagetier=Premium_LRS
Annotations: aks.microsoft.com/remediated=3
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Thu, 25 Oct 2018 14:46:53 +0000
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Thu, 25 Oct 2018 14:49:06 +0000 Thu, 25 Oct 2018 14:49:06 +0000 RouteCreated RouteController created a route
OutOfDisk False Wed, 19 Dec 2018 19:28:55 +0000 Wed, 19 Dec 2018 19:27:24 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Wed, 19 Dec 2018 19:28:55 +0000 Wed, 19 Dec 2018 19:27:24 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 19 Dec 2018 19:28:55 +0000 Wed, 19 Dec 2018 19:27:24 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 19 Dec 2018 19:28:55 +0000 Thu, 25 Oct 2018 14:46:53 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Wed, 19 Dec 2018 19:28:55 +0000 Wed, 19 Dec 2018 19:27:24 +0000 KubeletNotReady container runtime is down,PLEG is not healthy: pleg was lastseen active 1h32m35.942907195s ago; threshold is 3m0s
Addresses:
Hostname: aks-XXXXXXXXX
Capacity:
cpu: 2
ephemeral-storage: 30428648Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 4040536Ki
pods: 110
Allocatable:
cpu: 1940m
ephemeral-storage: 28043041951
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 3099480Ki
pods: 110
System Info:
Machine ID: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
System UUID: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Boot ID: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Kernel Version: 4.15.0-1035-azure
OS Image: Ubuntu 16.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://Unknown
Kubelet Version: v1.11.3
Kube-Proxy Version: v1.11.3
PodCIDR: 10.244.0.0/24
ProviderID: azure:///subscriptions/9XXXXXXXXXXX/resourceGroups/MC_XXXXXXXXXXXXXXXXXXXXXXXXXXXX/providers/Microsoft.Compute/virtualMachines/aks-XXXXXXXXXXXX
Non-terminated Pods: (42 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
default emailgistics-graph-monitor-6477568564-q98p2 10m (0%) 0 (0%) 0 (0%) 0 (0%)
default emailgistics-message-handler-7df4566b6f-mh255 10m (0%) 0 (0%) 0 (0%) 0 (0%)
default emailgistics-reports-aggregator-5fd96b94cb-b5vbn 10m (0%) 0 (0%) 0 (0%) 0 (0%)
default emailgistics-rules-844b77f46-5lrkw 10m (0%) 0 (0%) 0 (0%) 0 (0%)
default emailgistics-scheduler-754884b566-mwgvp 10m (0%) 0 (0%) 0 (0%) 0 (0%)
default emailgistics-subscription-token-manager-7974558985-f2t49 10m (0%) 0 (0%) 0 (0%) 0 (0%)
default mollified-kiwi-cert-manager-665c5d9c8c-2ld59 0 (0%) 0 (0%) 0 (0%) 0 (0%)
istio-system grafana-59b787b9b-dzdtc 10m (0%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-citadel-5d8956cc6-x55vk 10m (0%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-egressgateway-f48fc7fbb-szpwp 10m (0%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-galley-6975b6bd45-g7lsc 10m (0%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-ingressgateway-c6c4bcdbf-bbgcw 10m (0%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-pilot-d9b5b9b7c-ln75n 510m (26%) 0 (0%) 2Gi (67%) 0 (0%)
istio-system istio-policy-6b465cd4bf-92l57 20m (1%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-policy-6b465cd4bf-b2z85 20m (1%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-policy-6b465cd4bf-j59r4 20m (1%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-policy-6b465cd4bf-s9pdm 20m (1%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-sidecar-injector-575597f5cf-npkcz 10m (0%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-telemetry-6944cd768-9794j 20m (1%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-telemetry-6944cd768-g7gh5 20m (1%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-telemetry-6944cd768-gd88n 20m (1%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-telemetry-6944cd768-px8qb 20m (1%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-telemetry-6944cd768-xzslh 20m (1%) 0 (0%) 0 (0%) 0 (0%)
istio-system istio-tracing-7596597bd7-hjtq2 10m (0%) 0 (0%) 0 (0%) 0 (0%)
istio-system prometheus-76db5fddd5-d6dxs 10m (0%) 0 (0%) 0 (0%) 0 (0%)
istio-system servicegraph-758f96bf5b-c9sqk 10m (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system addon-http-application-routing-default-http-backend-5ccb95zgfm8 10m (0%) 10m (0%) 20Mi (0%) 20Mi (0%)
kube-system addon-http-application-routing-external-dns-59d8698886-h8xds 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system addon-http-application-routing-nginx-ingress-controller-ff49qc7 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system heapster-5d6f9b846c-m4kfp 130m (6%) 130m (6%) 230Mi (7%) 230Mi (7%)
kube-system kube-dns-v20-7c7d7d4c66-qqkfm 120m (6%) 0 (0%) 140Mi (4%) 220Mi (7%)
kube-system kube-dns-v20-7c7d7d4c66-wrxjm 120m (6%) 0 (0%) 140Mi (4%) 220Mi (7%)
kube-system kube-proxy-2tb68 100m (5%) 0 (0%) 0 (0%) 0 (0%)
kube-system kube-svc-redirect-d6gqm 10m (0%) 0 (0%) 34Mi (1%) 0 (0%)
kube-system kubernetes-dashboard-68f468887f-l9x46 100m (5%) 100m (5%) 50Mi (1%) 300Mi (9%)
kube-system metrics-server-5cbc77f79f-x55cs 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system omsagent-mhrqm 50m (2%) 150m (7%) 150Mi (4%) 300Mi (9%)
kube-system omsagent-rs-d688cdf68-pjpmj 50m (2%) 150m (7%) 100Mi (3%) 500Mi (16%)
kube-system tiller-deploy-7f4974b9c8-flkjm 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system tunnelfront-7f766dd857-kgqps 10m (0%) 0 (0%) 64Mi (2%) 0 (0%)
kube-systems-dev nginx-ingress-dev-controller-7f78f6c8f9-csct4 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-systems-dev nginx-ingress-dev-default-backend-95fbc75b7-lq9tw 0 (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 1540m (79%) 540m (27%)
memory 2976Mi (98%) 1790Mi (59%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ContainerGCFailed 48m (x43 over 19h) kubelet, aks-agentpool-22124581-0 rpc error: code = DeadlineExceeded desc = context deadline exceeded
Warning ImageGCFailed 29m (x57 over 18h) kubelet, aks-agentpool-22124581-0 failed to get image stats: rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
Warning ContainerGCFailed 2m (x237 over 18h) kubelet, aks-agentpool-22124581-0 rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
</code></pre>
<p>General deployment file:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
name: emailgistics-pod
spec:
minReadySeconds: 10
replicas: 1
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
annotations:
sidecar.istio.io/status: '{"version":"ebf16d3ea0236e4b5cb4d3fc0f01da62e2e6265d005e58f8f6bd43a4fb672fdd","initContainers":["istio-init"],"containers":["istio-proxy"],"volumes":["istio-envoy","istio-certs"],"imagePullSecrets":null}'
creationTimestamp: null
labels:
app: emailgistics-pod
spec:
containers:
- image: xxxxxxxxxxxxxxxxxxxxx/emailgistics_pod:xxxxxx
imagePullPolicy: Always
name: emailgistics-pod
ports:
- containerPort: 80
resources: {}
- args:
- proxy
- sidecar
- --configPath
- /etc/istio/proxy
- --binaryPath
- /usr/local/bin/envoy
- --serviceCluster
- emailgistics-pod
- --drainDuration
- 45s
- --parentShutdownDuration
- 1m0s
- --discoveryAddress
- istio-pilot.istio-system:15005
- --discoveryRefreshDelay
- 1s
- --zipkinAddress
- zipkin.istio-system:9411
- --connectTimeout
- 10s
- --proxyAdminPort
- "15000"
- --controlPlaneAuthPolicy
- MUTUAL_TLS
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: INSTANCE_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: ISTIO_META_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: ISTIO_META_INTERCEPTION_MODE
value: REDIRECT
- name: ISTIO_METAJSON_LABELS
value: |
{"app":"emailgistics-pod"}
image: docker.io/istio/proxyv2:1.0.4
imagePullPolicy: IfNotPresent
name: istio-proxy
ports:
- containerPort: 15090
name: http-envoy-prom
protocol: TCP
resources:
requests:
cpu: 10m
securityContext:
readOnlyRootFilesystem: true
runAsUser: 1337
volumeMounts:
- mountPath: /etc/istio/proxy
name: istio-envoy
- mountPath: /etc/certs/
name: istio-certs
readOnly: true
imagePullSecrets:
- name: ga.secretname
initContainers:
- args:
- -p
- "15001"
- -u
- "1337"
- -m
- REDIRECT
- -i
- '*'
- -x
- ""
- -b
- "80"
- -d
- ""
image: docker.io/istio/proxy_init:1.0.4
imagePullPolicy: IfNotPresent
name: istio-init
resources: {}
securityContext:
capabilities:
add:
- NET_ADMIN
privileged: true
volumes:
- emptyDir:
medium: Memory
name: istio-envoy
- name: istio-certs
secret:
optional: true
secretName: istio.default
status: {}
---
</code></pre>
| <p>Currently this is a known bug and no real fix has been created to normalize node behavior.
Inspect the URLs below:</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/45419" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/45419</a></p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/61117" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/61117</a></p>
<p><a href="https://github.com/Azure/AKS/issues/102" rel="noreferrer">https://github.com/Azure/AKS/issues/102</a></p>
<p>Hopefully we will have a solution soon.</p>
|
<p>I have a following use case:</p>
<ol>
<li><p>Our customers frequently release new services on their K8s clusters.
These new services are reachable from the outside world through load balancing, and we use Ingress to dynamically configure this load balancing once a service is deployed. This makes it really easy for the development teams of our customers because they don’t have to wait until somebody configures the load balancing manually. They can just create their own Ingress resource next to their service deployment and the service will be reachable.</p></li>
<li><p>A customer asked if we can also enable each of its services to have its own subdomain automatically. So once a new application is deployed, it should be available as a subdomain of the cluster domain (e.g. <a href="https://helloworld.cyvh5.k8s.ginger.aws.gigantic.io" rel="nofollow noreferrer">https://helloworld.cyvh5.k8s.ginger.aws.gigantic.io</a>) as well as at their own subdomain (e.g. helloworld.awesome-customer.com).</p></li>
</ol>
<p>I have found <a href="https://stackoverflow.com/questions/43263606/dynamic-wildcard-subdomain-ingress-for-kubernetes">this resource</a> as a starting point.</p>
<p>My questions are:</p>
<ol>
<li><p>Can I achieve the <em>customer subdomain dynamic binding</em> in some other (better) way?</p></li>
<li><p>What are the possible limitations / pitfalls for the suggested solution?</p></li>
</ol>
<p>Thanks!</p>
| <p>Yeah, for (1) Ingress sounds great.</p>
<p>For 2 it sounds to me like you just need wildcard DNS pointing at the ingress controller. The wildcard DNS entry should say that *.domain.com should point to the ingress controller's external IP. Then host-based Ingress rules/resources can be deployed and traffic can be routed to the appropropriate Service based on the host specified in the request. So it doesn't matter what is in the wildcard part of the DNS of a request insofar as 'a.b.domain.com' will go to the ingress controller and it will then depend on what rules are in the Ingress resources as to where it ends up. </p>
<p>This won't be 'automatic' in the sense that the customer will have to deploy an Ingress rule or two if they want the service exposed on two hosts. But if the customer is happy with deploying Ingress resources then they should be happy with this too. </p>
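<p>For illustration, a single Ingress like the one below (the hostnames come from your question; the service name and port are assumptions) would expose the same service on both the customer subdomain and the cluster subdomain:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: helloworld
spec:
  rules:
  - host: helloworld.awesome-customer.com
    http:
      paths:
      - backend:
          serviceName: helloworld
          servicePort: 80
  - host: helloworld.cyvh5.k8s.ginger.aws.gigantic.io
    http:
      paths:
      - backend:
          serviceName: helloworld
          servicePort: 80
</code></pre>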
<p>I don't think you need anything more dynamic because in 'helloworld.awesome-customer.com' it seems 'helloworld' is the service so that fills out your host so there's no need for a wildcard in the Ingress rule itself. What would be more dynamic and more like the example you point to is if they were asking for 'v1.helloworld.awesome-customer.com' and 'v2.helloworld.awesome-customer.com' and for both to be covered by one Ingress entry containing a wildcard (rather than two entries, one per version). But it seems they are not asking for that. </p>
<p>This is how I see the customer domain part anyway. I am not exactly sure what you mean about the cluster domain part - for that I'd need to better understand how that is accessed. Presumably it is again wildcard DNS pointing at something doing routing but I'm not as sure about what is doing the routing there. If the point is that you want to achieve this then it could just be that it's another wildcard DNS entry pointed at the same ingress controller with additional Ingress resources deployed. </p>
|
<p>There is something that I'm missing about the load balancer service.<br>
If I run a load balancer and my load gets spread over say 3 pods, do you have any guarantee that in case of multiple nodes, the same "type" of pods will be spread evenly over the nodes within the cluster? </p>
<p>If I understand it right, Kubernetes will try to spread different kind of pods over the nodes in order to achieve the most optimal use of resources.<br>
But does this guarantee that pods exposing the same application will be spread evenly too? </p>
<p>The replication controller will take care that the a certain amount of pods is always running, but what happens in case of a node failure; let's say 1 node's network interface goes down, and 3 pods of the same type were scheduled on that node. In that case the rc will take care that they are up again on a different node, but how do you know that there won't be a temporary outage of that api? I imagine that when using a load balancer, this can be prevented? </p>
| <p>Kubernetes tries to distribute the load across all of its nodes, but it could still end up scheduling all the pods on the same node.
As you say, in the event that the node fails, that would leave all your pods inaccessible.</p>
<p>As a developer, if you need to have pods distributed among different nodes, you have tools for it:</p>
<ul>
<li><p>Daemon Set: Assures you that all (or some) nodes run a copy of the pod. <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/</a></p></li>
<li><p>Node Selector: You can define with labels those nodes where you want the pods to be deployed. <a href="https://docs.openshift.com/container-platform/3.6/admin_guide/scheduling/node_selector.html" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/3.6/admin_guide/scheduling/node_selector.html</a></p></li>
</ul>
<p>There are other ways to achieve your objective like using affinity and anti-affinity or inter-pod affinity and anti-affinity.
Here you have all the solutions that Kubernetes offers in the documentation: <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/assign-pod-node/</a></p>
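<p>For example, a minimal sketch of pod anti-affinity in a Deployment's pod template (assuming your pods are labelled <code>app: my-app</code>) that asks the scheduler to prefer spreading replicas over different nodes:</p>
<pre><code>spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: my-app
              topologyKey: kubernetes.io/hostname
</code></pre>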
|
<p>I have been playing with Digital Ocean's new managed Kubernetes service. I have created a new cluster using Digital Ocean's dashboard and, seemingly, successfully deployed my yaml file (attached). </p>
<p>running in context <code>kubectl get services</code></p>
<p><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
api-svc NodePort XX.XXX.XXX.XXX <none> 8080:30000/TCP 2h
kubernetes ClusterIP XX.XXX.X.X <none> 443/TCP 2h</code></p>
<p>My question is, <strong>how do I go exposing my service without a load balancer?</strong></p>
<p>I have been able to do this locally using minikube. To get the cluster IP I run <code>minikube ip</code> and use port number <code>30000</code>, as specified in my nodePort config, to reach the <code>api-svc</code> service. </p>
<p>From what I understand, Digital Ocean's managed service abstracts the master node away. So where would I find the public IP address to access my cluster?</p>
<p>Thank you in advance!</p>
<p><strong>my yaml file for reference</strong></p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: regcred
data:
.dockerconfigjson: <my base 64 key>
type: kubernetes.io/dockerconfigjson
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: api-deployment
labels:
app: api-deployment
spec:
replicas: 1
strategy: {}
template:
metadata:
labels:
app: api
spec:
containers:
- name: api
image: <my-dockerhub-user>/api:latest
ports:
- containerPort: 8080
imagePullSecrets:
- name: regcred
---
apiVersion: v1
kind: Service
metadata:
name: api-svc
spec:
type: NodePort
ports:
- port: 8080
targetPort: 8080
nodePort: 30000
protocol: TCP
selector:
app: api
type: NodePort
</code></pre>
| <p>You can hit any of your worker nodes' IPs, for example <a href="http://worker-node-ip:30000/" rel="nofollow noreferrer">http://worker-node-ip:30000/</a>. You can get the worker node IPs from the DigitalOcean dashboard or with the doctl CLI.</p>
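<p>For example (the first command is plain kubectl; the second assumes you have the doctl CLI configured):</p>
<pre><code>kubectl get nodes -o wide # the EXTERNAL-IP column shows each worker node's public IP
doctl compute droplet list # or look the droplets up with doctl
</code></pre>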
|
<p>Having trouble figuring out what is wrong. I have a remote kubernetes cluster up and have copied the config locally. I know it is correct because I have gotten other commands to work for me.</p>
<p>The one I can't get to work is a deployment patch. My code:</p>
<pre class="lang-golang prettyprint-override"><code>const namespace = "default"
var clientset *kubernetes.Clientset
func init() {
kubeconfig := "/Users/$USER/go/k8s-api/config"
config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
if err != nil {
log.Fatal(err)
}
// create the clientset
clientset, err = kubernetes.NewForConfig(config)
if err != nil {
panic(err.Error())
}
}
func main() {
deploymentsClient := clientset.ExtensionsV1beta1().Deployments("default")
patch := []byte(`[{"spec":{"template":{"spec":{"containers":[{"name":"my-deploy-test","image":"$ORG/$REPO:my-deploy0.0.1"}]}}}}]`)
res, err := deploymentsClient.Patch("my-deploy", types.JSONPatchType, patch)
if err != nil {
panic(err)
}
fmt.Println(res)
}
</code></pre>
<p>All I get back is:
<code>panic: the server rejected our request due to an error in our request</code></p>
<p>Any help appreciated, thanks!</p>
| <p>You have mixed up <a href="https://godoc.org/k8s.io/apimachinery/pkg/types#PatchType" rel="nofollow noreferrer"><code>JSONPatchType with MergePatchType</code></a>; <code>JSONPatchType</code> wants the input to be <a href="https://www.rfc-editor.org/rfc/rfc6902#section-4.3" rel="nofollow noreferrer">RFC 6902</a> formatted "commands", and in that case the input can be a JSON array, because there can be multiple commands applied in order to the input document.</p>
<p>However, your payload looks much closer to you wanting <code>MergePatchType</code>, in which case the input should <strong>not</strong> be a JSON array because the source document is not an array of <code>"spec"</code> objects.</p>
<p>Thus, I'd bet just dropping the leading <code>[</code> and trailing <code>]</code>, changing the argument to be <code>types.MergePatchType</code> will get you much further along</p>
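<p>For comparison, the same merge patch expressed with <code>kubectl</code> (a sketch — the image reference is the placeholder from your question):</p>
<pre><code>kubectl patch deployment my-deploy --type=merge \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"my-deploy-test","image":"$ORG/$REPO:my-deploy0.0.1"}]}}}}'
</code></pre>
<p>Note that a JSON merge patch replaces the whole <code>containers</code> list, which is fine here since there is only one container; a strategic merge patch would merge by container name instead.</p>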
|
<p>GKE uses Calico for networking by default. Is there an option to use some other CNI plugin ?</p>
| <p>No, GKE does not offer such an option; you have to use the provided Calico.</p>
|
<p>I am trying to create a simple redis high availability setup with 1 master, 1 slave and 2 sentinels.</p>
<p>The setup works perfectly when failing over from <code>redis-master</code> to <code>redis-slave</code>.
When <code>redis-master</code> recovers, it correctly registers itself as a slave to the new <code>redis-slave</code> master.</p>
<p>However, when <code>redis-slave</code> as a master goes down, <code>redis-master</code> cannot return as master. The log of <code>redis-master</code> goes into a loop showing:</p>
<pre><code>1:S 12 Dec 11:12:35.073 * MASTER <-> SLAVE sync started
1:S 12 Dec 11:12:35.073 * Non blocking connect for SYNC fired the event.
1:S 12 Dec 11:12:35.074 * Master replied to PING, replication can continue...
1:S 12 Dec 11:12:35.075 * Trying a partial resynchronization (request 684581a36d134a6d50f1cea32820004a5ccf3b2d:285273).
1:S 12 Dec 11:12:35.076 * Master is currently unable to PSYNC but should be in the future: -NOMASTERLINK Can't SYNC while not connected with my master
1:S 12 Dec 11:12:36.081 * Connecting to MASTER 10.102.1.92:6379
1:S 12 Dec 11:12:36.081 * MASTER <-> SLAVE sync started
1:S 12 Dec 11:12:36.082 * Non blocking connect for SYNC fired the event.
1:S 12 Dec 11:12:36.082 * Master replied to PING, replication can continue...
1:S 12 Dec 11:12:36.083 * Trying a partial resynchronization (request 684581a36d134a6d50f1cea32820004a5ccf3b2d:285273).
1:S 12 Dec 11:12:36.084 * Master is currently unable to PSYNC but should be in the future: -NOMASTERLINK Can't SYNC while not connected with my master
1:S 12 Dec 11:12:37.087 * Connecting to MASTER 10.102.1.92:6379
1:S 12 Dec 11:12:37.088 * MASTER <-> SLAVE sync started
...
</code></pre>
<p>Per <a href="https://redis.io/topics/replication" rel="nofollow noreferrer">Replication doc</a>, it states that:</p>
<blockquote>
<p>Since Redis 4.0, when an instance is promoted to master after a
failover, it will be still able to perform a partial resynchronization
with the slaves of the old master.</p>
</blockquote>
<p>But the log seems to show otherwise. More detail version of log showing both the first <code>redis-master</code> to <code>redis-slave</code> failover and subsequent <code>redis-slave</code> to <code>redis-master</code> log is available <a href="https://www.dropbox.com/s/ir8ta1xtk4ijb6h/redis-failover.txt?dl=0" rel="nofollow noreferrer">here</a>.</p>
<p>Any idea what's going on? What do I have to do to allow the <code>redis-master</code> to return to master role? Configuration detail is provided below:</p>
<h3>SERVICES</h3>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
redis-master ClusterIP 10.102.1.92 <none> 6379/TCP 11m
redis-slave ClusterIP 10.107.0.73 <none> 6379/TCP 11m
redis-sentinel ClusterIP 10.110.128.95 <none> 26379/TCP 11m
</code></pre>
<h3>redis-master config</h3>
<pre><code>requirepass test1234
masterauth test1234
dir /data
tcp-keepalive 60
maxmemory-policy noeviction
appendonly no
bind 0.0.0.0
save 900 1
save 300 10
save 60 10000
slave-announce-ip redis-master.fp8-cache
slave-announce-port 6379
</code></pre>
<h3>redis-slave config</h3>
<pre><code>requirepass test1234
slaveof redis-master.fp8-cache 6379
masterauth test1234
dir /data
tcp-keepalive 60
maxmemory-policy noeviction
appendonly no
bind 0.0.0.0
save 900 1
save 300 10
save 60 10000
slave-announce-ip redis-slave.fp8-cache
slave-announce-port 6379
</code></pre>
| <p>It turns out that the problem is related to the use of hostnames instead of IPs:</p>
<pre><code>slaveof redis-master.fp8-cache 6379
...
slave-announce-ip redis-slave.fp8-cache
</code></pre>
<p>So, when the master came back as a slave, Sentinel showed that there were now 2 slaves: one with an IP address and another with a hostname. I am not sure exactly how these 2 slave entries (that point to the same Redis server) cause the problem above, but now that I have changed the config to use IP addresses instead of hostnames, the Redis HA setup is working flawlessly.</p>
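<p>In other words, the relevant lines ended up looking roughly like this (using the service ClusterIPs from the question as an example; your actual IPs will differ):</p>
<pre><code># redis-slave config (excerpt)
# IP of the redis-master service instead of redis-master.fp8-cache:
slaveof 10.102.1.92 6379
# IP of the redis-slave service instead of redis-slave.fp8-cache:
slave-announce-ip 10.107.0.73
slave-announce-port 6379
</code></pre>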
|
<p>In Kelsey Hightower's Kubernetes Up and Running, he gives two commands :</p>
<p><code>kubectl get daemonSets --namespace=kube-system kube-proxy</code></p>
<p>and</p>
<p><code>kubectl get deployments --namespace=kube-system kube-dns</code></p>
<p>Why does one use daemonSets and the other deployments?
And what's the difference?</p>
| <p><strong>Kubernetes deployments</strong> manage stateless services running on your cluster (as opposed to, for example, StatefulSets, which manage stateful services). Their purpose is to keep a set of identical pods running and upgrade them in a controlled way. For example, you define how many replicas (<code>pods</code>) of your app you want to run in the deployment definition and Kubernetes will make that many replicas of your application spread over nodes. If you say 5 replicas over 3 nodes, then some nodes will have more than one replica of your app running.</p>
<p><strong>DaemonSets</strong> manage groups of replicated Pods. However, DaemonSets attempt to adhere to a one-Pod-per-node model, either across the entire cluster or a subset of nodes. A Daemonset will not run more than one replica per node. Another advantage of using a Daemonset is that, if you add a node to the cluster, then the Daemonset will automatically spawn a pod on that node, which a deployment will not do.</p>
<p><code>DaemonSets</code> are useful for deploying ongoing background tasks that you need to run on all or certain nodes, and which do not require user intervention. Examples of such tasks include storage daemons like <code>ceph</code>, log collection daemons like <code>fluentd</code>, and node monitoring daemons like <code>collectd</code></p>
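<p>For illustration, a minimal DaemonSet sketch for such a node-level agent (names and image are placeholders):</p>
<pre><code>apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd # placeholder image
</code></pre>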
<p>Let's take the example you mentioned in your question: why is <code>kube-dns</code> a deployment and <code>kube-proxy</code> a daemonset?</p>
<p>The reason behind that is that <code>kube-proxy</code> is needed on every node in the cluster to run IP tables, so that every node can access every pod no matter on which node it resides. Hence, when we make <code>kube-proxy</code> a <code>daemonset</code> and another node is added to the cluster at a later time, kube-proxy is automatically spawned on that node.</p>
<p><code>Kube-dns</code>'s responsibility is to discover a service IP using its name, and only one replica of <code>kube-dns</code> is enough to resolve a service name to its IP. Hence we make <code>kube-dns</code> a <code>deployment</code>, because we don't need <code>kube-dns</code> on every node.</p>
|
<p>I posted this on serverfault, too, but will hopefully get more views/feedback here:</p>
<p>Trying to get the Dashboard UI working in a <code>kubeadm</code> cluster using <code>kubectl proxy</code> for remote access. Getting </p>
<pre><code>Error: 'dial tcp 192.168.2.3:8443: connect: connection refused'
Trying to reach: 'https://192.168.2.3:8443/'
</code></pre>
<p>when accessing <code>http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/</code> via remote browser.</p>
<p>Looking at API logs, I see that I'm getting the following errors:</p>
<pre><code>I1215 20:18:46.601151 1 log.go:172] http: TLS handshake error from 10.21.72.28:50268: remote error: tls: unknown certificate authority
I1215 20:19:15.444580 1 log.go:172] http: TLS handshake error from 10.21.72.28:50271: remote error: tls: unknown certificate authority
I1215 20:19:31.850501 1 log.go:172] http: TLS handshake error from 10.21.72.28:50275: remote error: tls: unknown certificate authority
I1215 20:55:55.574729 1 log.go:172] http: TLS handshake error from 10.21.72.28:50860: remote error: tls: unknown certificate authority
E1215 21:19:47.246642 1 watch.go:233] unable to encode watch object *v1.WatchEvent: write tcp 134.84.53.162:6443->134.84.53.163:38894: write: connection timed out (&streaming.encoder{writer:(*metrics.fancyResponseWriterDelegator)(0xc42d6fecb0), encoder:(*versioning.codec)(0xc429276990), buf:(*bytes.Buffer)(0xc42cae68c0)})
</code></pre>
<p>I presume this is related to not being able to get the Dashboard working, and if so am wondering what the issue with the API server is. Everything else in the cluster appears to be working.</p>
<p>NB, I have admin.conf running locally and am able to access the cluster via kubectl with no issue.</p>
<p>Also, of note is that this had been working when I first got the cluster up. However, I was having networking issues, and had to apply this in order to get CoreDNS to work <a href="https://github.com/kubernetes/kubernetes/issues/63900#issuecomment-389567090" rel="nofollow noreferrer">Coredns service do not work,but endpoint is ok the other SVCs are normal only except dns</a>, so I am wondering if this maybe broke the proxy service?</p>
<p><strong>* EDIT *</strong></p>
<p>Here is output for the dashboard pod:</p>
<pre><code>[gms@thalia0 ~]$ kubectl describe pod kubernetes-dashboard-77fd78f978-tjzxt --namespace=kube-system
Name: kubernetes-dashboard-77fd78f978-tjzxt
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: thalia2.hostdoman/hostip<redacted>
Start Time: Sat, 15 Dec 2018 15:17:57 -0600
Labels: k8s-app=kubernetes-dashboard
pod-template-hash=77fd78f978
Annotations: cni.projectcalico.org/podIP: 192.168.2.3/32
Status: Running
IP: 192.168.2.3
Controlled By: ReplicaSet/kubernetes-dashboard-77fd78f978
Containers:
kubernetes-dashboard:
Container ID: docker://ed5ff580fb7d7b649d2bd1734e5fd80f97c80dec5c8e3b2808d33b8f92e7b472
Image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
Image ID: docker-pullable://k8s.gcr.io/kubernetes-dashboard-amd64@sha256:1d2e1229a918f4bc38b5a3f9f5f11302b3e71f8397b492afac7f273a0008776a
Port: 8443/TCP
Host Port: 0/TCP
Args:
--auto-generate-certificates
State: Running
Started: Sat, 15 Dec 2018 15:18:04 -0600
Ready: True
Restart Count: 0
Liveness: http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/certs from kubernetes-dashboard-certs (rw)
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-mrd9k (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kubernetes-dashboard-certs:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-certs
Optional: false
tmp-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
kubernetes-dashboard-token-mrd9k:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-token-mrd9k
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
</code></pre>
<p>I checked the service:</p>
<pre><code>[gms@thalia0 ~]$ kubectl -n kube-system get service kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard ClusterIP 10.103.93.93 <none> 443/TCP 4d23h
</code></pre>
<p>And also of note, if I <code>curl http://localhost:8001/api</code> from the master node, I do get a valid response.</p>
<p>So, in summary, I'm not sure which if any of these errors are the source of not being able to access the dashboard.</p>
<p>I just upgraded my cluster to 1.13.1, in hopes that this issue would be resolved, but alas, no.</p>
| <p>I upgraded all nodes in the cluster to version 1.13.1 and voilà, the dashboard now works, and so far I have not had to apply the CoreDNS fix noted above.</p>
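<p>For reference, a typical kubeadm upgrade to that version looks roughly like this (a sketch, assuming Ubuntu with the official apt packages — adjust package names and versions to your setup):</p>
<pre><code># on the master
apt-get update && apt-get install -y kubeadm=1.13.1-00
kubeadm upgrade plan
kubeadm upgrade apply v1.13.1

# on every node (master included)
apt-get install -y kubelet=1.13.1-00 kubectl=1.13.1-00
systemctl restart kubelet
</code></pre>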
|
<p>Many devops engineers use MySQL connections over SSH to access production databases for various reasons and queries.</p>
<p>After successfully deploying a MySQL container to a DigitalOcean Kubernetes cluster,
I'm able to get a shell into the pod via:</p>
<pre><code>kubectl --kubeconfig="kubeconfig.yaml" exec -it vega-mysql-5df9b745f9-c6859 -c vega-mysql -- /bin/bash
</code></pre>
<p>My question is: how can I remotely connect applications like Navicat, Sequel Pro or MySQL Workbench to this pod?</p>
| <p>Nitpick: Even though you can use it to start an interactive shell, <code>kubectl exec</code> is not the same as SSH. For that reason, regular MySQL clients that support SSH-tunneled connections, don't (and probably never will) support connecting to a MySQL server tunneled through <code>kubectl exec</code>.</p>
<p>Alternative solution: Use <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="noreferrer"><code>kubectl port-forward</code></a> to forward the Pod's MySQL server port 3306 to your local machine:</p>
<pre><code>kubectl port-forward vega-mysql-5df9b745f9-c6859 3306:3306
</code></pre>
<p>This will instruct kubectl to act as a TCP proxy from a local port on your machine into the Pod. Then, connect to <code>127.0.0.1:3306</code> with any MySQL client of your choice:</p>
<pre><code>mysql -u youruser -p -h 127.0.0.1
</code></pre>
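<p>As a side note, you can also forward through the Service instead of a specific Pod, which keeps working when the Pod gets recreated (the Service name here is an assumption — use whatever yours is called):</p>
<pre><code>kubectl port-forward svc/vega-mysql 3306:3306
</code></pre>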
|
<p>I posted this on serverfault, too, but will hopefully get more views/feedback here:</p>
<p>Trying to get the Dashboard UI working in a <code>kubeadm</code> cluster using <code>kubectl proxy</code> for remote access. Getting </p>
<pre><code>Error: 'dial tcp 192.168.2.3:8443: connect: connection refused'
Trying to reach: 'https://192.168.2.3:8443/'
</code></pre>
<p>when accessing <code>http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/</code> via remote browser.</p>
<p>Looking at API logs, I see that I'm getting the following errors:</p>
<pre><code>I1215 20:18:46.601151 1 log.go:172] http: TLS handshake error from 10.21.72.28:50268: remote error: tls: unknown certificate authority
I1215 20:19:15.444580 1 log.go:172] http: TLS handshake error from 10.21.72.28:50271: remote error: tls: unknown certificate authority
I1215 20:19:31.850501 1 log.go:172] http: TLS handshake error from 10.21.72.28:50275: remote error: tls: unknown certificate authority
I1215 20:55:55.574729 1 log.go:172] http: TLS handshake error from 10.21.72.28:50860: remote error: tls: unknown certificate authority
E1215 21:19:47.246642 1 watch.go:233] unable to encode watch object *v1.WatchEvent: write tcp 134.84.53.162:6443->134.84.53.163:38894: write: connection timed out (&streaming.encoder{writer:(*metrics.fancyResponseWriterDelegator)(0xc42d6fecb0), encoder:(*versioning.codec)(0xc429276990), buf:(*bytes.Buffer)(0xc42cae68c0)})
</code></pre>
<p>I presume this is related to not being able to get the Dashboard working, and if so am wondering what the issue with the API server is. Everything else in the cluster appears to be working.</p>
<p>NB, I have admin.conf running locally and am able to access the cluster via kubectl with no issue.</p>
<p>Also, of note is that this had been working when I first got the cluster up. However, I was having networking issues, and had to apply this in order to get CoreDNS to work <a href="https://github.com/kubernetes/kubernetes/issues/63900#issuecomment-389567090" rel="nofollow noreferrer">Coredns service do not work,but endpoint is ok the other SVCs are normal only except dns</a>, so I am wondering if this maybe broke the proxy service?</p>
<p><strong>* EDIT *</strong></p>
<p>Here is output for the dashboard pod:</p>
<pre><code>[gms@thalia0 ~]$ kubectl describe pod kubernetes-dashboard-77fd78f978-tjzxt --namespace=kube-system
Name: kubernetes-dashboard-77fd78f978-tjzxt
Namespace: kube-system
Priority: 0
PriorityClassName: <none>
Node: thalia2.hostdoman/hostip<redacted>
Start Time: Sat, 15 Dec 2018 15:17:57 -0600
Labels: k8s-app=kubernetes-dashboard
pod-template-hash=77fd78f978
Annotations: cni.projectcalico.org/podIP: 192.168.2.3/32
Status: Running
IP: 192.168.2.3
Controlled By: ReplicaSet/kubernetes-dashboard-77fd78f978
Containers:
kubernetes-dashboard:
Container ID: docker://ed5ff580fb7d7b649d2bd1734e5fd80f97c80dec5c8e3b2808d33b8f92e7b472
Image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
Image ID: docker-pullable://k8s.gcr.io/kubernetes-dashboard-amd64@sha256:1d2e1229a918f4bc38b5a3f9f5f11302b3e71f8397b492afac7f273a0008776a
Port: 8443/TCP
Host Port: 0/TCP
Args:
--auto-generate-certificates
State: Running
Started: Sat, 15 Dec 2018 15:18:04 -0600
Ready: True
Restart Count: 0
Liveness: http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/certs from kubernetes-dashboard-certs (rw)
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-mrd9k (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kubernetes-dashboard-certs:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-certs
Optional: false
tmp-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
kubernetes-dashboard-token-mrd9k:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-token-mrd9k
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
</code></pre>
<p>I checked the service:</p>
<pre><code>[gms@thalia0 ~]$ kubectl -n kube-system get service kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard ClusterIP 10.103.93.93 <none> 443/TCP 4d23h
</code></pre>
<p>And also of note, if I <code>curl http://localhost:8001/api</code> from the master node, I do get a valid response.</p>
<p>So, in summary, I'm not sure which if any of these errors are the source of not being able to access the dashboard.</p>
<p>I just upgraded my cluster to 1.13.1, in hopes that this issue would be resolved, but alas, no.</p>
| <p>When you do <code>kubectl proxy</code>, the default port 8001 is only reachable from localhost. If you SSH to the machine where Kubernetes is installed, you must map this port to your laptop or whatever device you use to SSH.</p>
<p>You can SSH to the master node and map port 8001 to your local box with:</p>
<pre><code>ssh -L 8001:localhost:8001 hostname@master_node_IP
</code></pre>
|
<p>Am trying to better understand RBAC in kubernetes. Came across this unexpected situation where authorization test using <code>kubectl auth can-i</code> and actual results are different. In short, newly created user should not be able to get pods as per this test, however this user can actually get pods.</p>
<p>Version:</p>
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:53:20Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:43:26Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>kubectl config for user in questions:</p>
<pre><code>$ kubectl config view --minify
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/master/ca.pem
server: https://192.168.1.111:6443
name: jdoe
contexts:
- context:
cluster: jdoe
user: jdoe
name: jdoe
current-context: jdoe
kind: Config
preferences: {}
users:
- name: jdoe
user:
client-certificate: /home/master/jdoe.pem
client-key: /home/master/jdoe-key.pem
</code></pre>
<p>The test against authorization layer says jdoe cannot get pods.</p>
<pre><code>$ kubectl auth can-i get pods --as jdoe
no
</code></pre>
<p>However, jdoe can get pods:</p>
<pre><code>$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx nginx-ingress-controller-87554c57b-ttgwp 1/1 Running 0 5h
kube-system coredns-5f7d467445-ngnvf 1/1 Running 0 1h
kube-system coredns-5f7d467445-wwf5s 1/1 Running 0 5h
kube-system weave-net-25kq2 2/2 Running 0 5h
kube-system weave-net-5njbh 2/2 Running 0 4h
</code></pre>
<p>Got similar results from auth layer after switching back to admin context:</p>
<pre><code>$ kubectl config use-context kubernetes
Switched to context "kubernetes".
$ kubectl config view --minify
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: REDACTED
server: https://192.168.1.111:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: admin
name: kubernetes
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
user:
client-certificate: /home/master/admin.pem
client-key: /home/master/admin-key.pem
</code></pre>
<p>From here too, user jdoe is not supposed to get pods.</p>
<pre><code>$ kubectl auth can-i get pods --as jdoe
no
</code></pre>
<p>Output of <code>kubectl config view</code></p>
<pre><code>$ kubectl config view
apiVersion: v1
clusters:
- cluster:
certificate-authority: /home/master/ca.pem
server: https://192.168.1.111:6443
name: jdoe
- cluster:
certificate-authority-data: REDACTED
server: https://192.168.1.111:6443
name: kubernetes
- cluster:
certificate-authority: /home/master/ca.pem
server: https://192.168.1.111:6443
name: master
contexts:
- context:
cluster: jdoe
user: jdoe
name: jdoe
- context:
cluster: kubernetes
user: admin
name: kubernetes
- context:
cluster: master
user: master
name: master
current-context: kubernetes
kind: Config
preferences: {}
users:
- name: admin
user:
client-certificate: /home/master/admin.pem
client-key: /home/master/admin-key.pem
- name: jdoe
user:
client-certificate: /home/master/jdoe.pem
client-key: /home/master/jdoe-key.pem
- name: master
user:
client-certificate: /home/master/master.pem
client-key: /home/master/master-key.pem
</code></pre>
| <p><code>kubectl get pods</code> with no specific pod name actually does a list. See <a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/#determine-the-request-verb" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/authorization/#determine-the-request-verb</a> for details about what verb corresponds to a given request. </p>
<p>What does <code>can-i list pods</code> return?</p>
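<p>That is, compare:</p>
<pre><code>kubectl auth can-i get pods --as jdoe
kubectl auth can-i list pods --as jdoe
</code></pre>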
|
<p>I'm getting the following error when trying to delete a StatefulSets on my local minikube cluster</p>
<blockquote>
<p>error: no kind "GetOptions" is registered for version "apps/v1"</p>
</blockquote>
<p>I can set the replicas to 0, but that still keeps the StatefulSet alive. </p>
<p>I'm running following version</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Any help would be appreciated!</p>
| <p>It seems your kubectl version and Kubernetes version aren't in sync: your kubectl client (1.8) doesn't know the newer StatefulSet API version used by the 1.10 server.
You need to upgrade your kubectl version.</p>
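<p>For example, on Linux you can fetch a client binary that matches the server (a sketch — adjust the version and OS/arch to yours):</p>
<pre><code>curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.10.0/bin/linux/amd64/kubectl
chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl
kubectl version
</code></pre>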
|
<p>I'm using kubernetes on-prem</p>
<p>While building GitLab on Kubernetes I ran into a problem.
I think it's related to the service account or role binding,
but I couldn't find the correct way to fix it.</p>
<p>I found these posts</p>
<p><a href="https://stackoverflow.com/questions/47973570/kubernetes-log-user-systemserviceaccountdefaultdefault-cannot-get-services">Kubernetes log, User "system:serviceaccount:default:default" cannot get services in the namespace</a></p>
<p><a href="https://github.com/kubernetes/kops/issues/3551" rel="noreferrer">https://github.com/kubernetes/kops/issues/3551</a></p>
<h1>my error logs</h1>
<pre><code>==> /var/log/gitlab/prometheus/current <==
2018-12-24_03:06:08.88786 level=error ts=2018-12-24T03:06:08.887812767Z caller=main.go:240 component=k8s_client_runtime err="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:372: Failed to list *v1.Node: nodes is forbidden: User \"system:serviceaccount:default:default\" cannot list resource \"nodes\" in API group \"\" at the cluster scope"
2018-12-24_03:06:08.89075 level=error ts=2018-12-24T03:06:08.890719525Z caller=main.go:240 component=k8s_client_runtime err="github.com/prometheus/prometheus/discovery/kubernetes/kubernetes.go:320: Failed to list *v1.Pod: pods is forbidden: User \"system:serviceaccount:default:default\" cannot list resource \"pods\" in API group \"\" at the cluster scope"
</code></pre>
| <p>The issue is that your default service account doesn't have permission to list nodes or pods at the cluster scope. The minimum cluster role and cluster role binding to resolve that are:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: prom-admin
rules:
# Just an example, feel free to change it
- apiGroups: [""]
resources: ["pods", "nodes"]
verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: prom-rbac
subjects:
- kind: ServiceAccount
    name: default
    namespace: default   # namespace of the service account (required for ServiceAccount subjects)
roleRef:
kind: ClusterRole
name: prom-admin
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>The above cluster role gives the default service account permission to get, list and watch pods and nodes in any namespace.</p>
<p>You can change the cluster role to grant more permissions to the service account; if you want to give the default service account access to every resource, use <code>resources: ["*"]</code> in <code>prom-admin</code>.</p>
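<p>After applying the above, you can quickly verify the permissions by impersonating the service account:</p>
<pre><code>kubectl auth can-i list nodes --as=system:serviceaccount:default:default
kubectl auth can-i list pods --as=system:serviceaccount:default:default
</code></pre>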
<p>Hope this helps.</p>
|
<p>I'm trying to solve gRPC load balance problem with linkerd, but requests will be evenly distributed only when all services are deployed on same node. If I deploy servers on different node, all request will be directed to one of them.</p>
<p><a href="https://imgur.com/a/F2GcbuY" rel="nofollow noreferrer">https://imgur.com/a/F2GcbuY</a></p>
<p>Both gRPC service and client are .Net application. Kubernetes version is v1.12.3. Linkerd version is stable-2.1.0.</p>
<p>Here is the configuration of my gRPC service:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: demogrpc-deploy
name: demogrpc
spec:
replicas: 3
selector:
matchLabels:
app: demogrpc
template:
metadata:
labels:
app: demogrpc
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: kubernetes.io/hostname
labelSelector:
matchLabels:
app: demogrpc
tolerations:
- key: node-role.kubernetes.io/master
operator: Exists
effect: NoSchedule
containers:
- image: 192.168.99.25:30000/demogrpchost:1.0.9
imagePullPolicy: Always
name: demogrpc
env:
- name: GRPC_HOST
value: "127.0.0.1"
- name: SERVICE_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: GRPC_PORT
value: "8000"
ports:
- containerPort: 8000
name: grpc
imagePullSecrets:
- name: kubernetes-registry
---
apiVersion: v1
kind: Service
metadata:
labels:
app: demogrpc
name: demogrpc
spec:
clusterIP: None
ports:
- port: 8000
targetPort: 8000
selector:
app: demogrpc
</code></pre>
<p>How do I make load balance work when services are deployed on different nodes?</p>
<hr>
<p>Update:</p>
<p>I injected linkerd into my client and it started to distribute requests now, but one of the services is still ignored by the load balancer somehow.</p>
<hr>
<p>Update:</p>
<p>I scaled the services up to 5 and the clients to 3, and something interesting happened. All services receive requests now, but for each client, its requests are distributed to only 4 of the services.</p>
<p><a href="https://imgur.com/dXdjTsR" rel="nofollow noreferrer">https://imgur.com/dXdjTsR</a></p>
| <p>To configure the <strong>Linkerd</strong> service mesh this way, each node will have a Linkerd. There will be no direct <code>service-to-service</code> communication across the nodes; only <strong>Linkerd</strong> is allowed to talk to another Linkerd, and Linkerd will communicate with the services that run on the same node.</p>
<p>In your case, your Kubernetes service will talk to <strong>Linkerd</strong>, then Linkerd will distribute its requests to all the other Linkerds. This is how requests are distributed across nodes. Then the upstream Linkerd will distribute requests among the services on its node.</p>
<p><a href="https://i.stack.imgur.com/F6dPi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F6dPi.png" alt="enter image description here"></a></p>
<p>To get more, please check this: <a href="https://medium.com/@aerokite/linkerd-as-a-service-mesh-for-your-application-in-kubernetes-cluster-f97adc9153cb" rel="nofollow noreferrer">linkerd-as-a-service-mesh</a></p>
|
<p>I have a kubernetes setup with 1 master and 1 slave, hosted on DigitalOcean Droplets.
For exposing my services I want to use Ingresses.</p>
<p>As I have a bare metal install, I have to configure my own ingress controller.
<strong>How do I get it to listen to port 443 or 80 instead of the 30000-32767 range?</strong></p>
<p>For setting up the ingress controller I used this guide: <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/</a></p>
<p>My controller service looks like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ingress-nginx
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
type: NodePort
ports:
- name: http
port: 80
targetPort: 80
protocol: TCP
- name: https
port: 443
targetPort: 443
protocol: TCP
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
</code></pre>
<p>And now obviously, because the NodePort range is 30000-32767, this controller doesn't get mapped to port 80 or 443:</p>
<pre><code>➜ kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx ingress-nginx NodePort 10.103.166.230 <none> 80:30907/TCP,443:30653/TCP 21m
</code></pre>
| <p>I agree with @Matthew L Daniel: if you don't plan to use an external load balancer, the best option is to share the host's network interface with the <code>ingress-nginx</code> Pod by enabling the <code>hostNetwork</code> option in the Pod spec:</p>
<pre><code>template:
spec:
hostNetwork: true
</code></pre>
<p>This way the NGINX Ingress controller binds ports 80 and 443 directly on the Kubernetes nodes, without mapping NodePort-range ports (30000-32767) onto the underlying Service. Find more information <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network" rel="noreferrer">here</a>.</p>
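<p>For reference, a hedged sketch of applying that change to the controller Deployment from the standard deploy guide (the deployment name <code>nginx-ingress-controller</code> in namespace <code>ingress-nginx</code> is assumed from those manifests; adjust if yours differ):</p>
<pre><code># Attach the controller pods to the host network and keep cluster DNS working
kubectl -n ingress-nginx patch deployment nginx-ingress-controller \
  --patch '{"spec":{"template":{"spec":{"hostNetwork":true,"dnsPolicy":"ClusterFirstWithHostNet"}}}}'

# The controller now listens on ports 80/443 of the node it runs on,
# so the NodePort Service is no longer needed for plain HTTP/HTTPS access.
</code></pre>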
|
<p>On an Ubuntu VM (running on Windows) I would like to install Minikube. My PC is running behind a corporate proxy. Using Proxifier I manage to access the Internet and run Docker on Ubuntu. Unfortunately, it looks like Minikube can't reach the internet...</p>
<pre><code>minikube start
Starting local Kubernetes v1.9.4 cluster...
Starting VM...
Downloading Minikube ISO
</code></pre>
<p>The ISO can't be downloaded; the download runs into a TLS handshake timeout...</p>
| <p>You mentioned you could establish a proxy using Proxifier.
So you could try something like this to force Minikube to use the Proxifier proxy as well:</p>
<pre><code>export NO_PROXY="$NO_PROXY,192.168.0.0/16" # set the no_proxy env variable in the current shell.
minikube start --docker-env "HTTPS_PROXY=http://proxy:808" --docker-env "HTTP_PROXY=http://proxy:808" --docker-env "NO_PROXY=localhost,127.0.0.1,192.168.0.0/16" # start minikube and pass the same no_proxy setting to docker
</code></pre>
<p><code>192.168.0.0/16</code> is specified as the range of IP addresses that shouldn't go through the proxy, since this is (I think) the range that Minikube generates its IP addresses in (and I'm certain that I don't have anything else in that range).</p>
<p>More details is available on Minikube’s Github issue <a href="https://github.com/kubernetes/minikube/issues/2706" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/2706</a></p>
|
<p>Here is my MySQL </p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: abc-def-my-mysql
namespace: abc-sk-test
labels:
project: abc
ca: my
spec:
replicas: 1
template:
metadata:
labels:
app: abc-def-my-mysql
project: abc
ca: my
spec:
containers:
- name: mysql
image: mysql:5.6
args: ["--default-authentication-plugin=mysql_native_password", "--ignore-db-dir=lost+found"]
ports:
- containerPort: 3306
env:
- name: MYSQL_ROOT_PASSWORD
value: "root"
- name: MYSQL_DATABASE
value: "my_abc"
- name: MYSQL_USER
value: "test_user"
- name: MYSQL_PASSWORD
value: "12345"
volumeMounts:
- mountPath: /var/lib/mysql
name: abc-def-my-mysql-storage
volumes:
- name: abc-def-my-mysql-storage
persistentVolumeClaim:
claimName: abc-def-my-mysql-pvc
</code></pre>
<p>I would like to add another user to MySQL so real users can connect to it instead of using "test_user". How can I add another user? Is it just like adding any other environment variable to the above config?</p>
| <p>Mount a "create user" script into the container's <code>/docker-entrypoint-initdb.d</code> directory. It will be executed once, at the first pod start.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mysql
spec:
containers:
- name: mysql
image: mysql
.....
env:
- name: MYSQL_ROOT_PASSWORD
value: "root"
.....
volumeMounts:
- name: mysql-initdb
mountPath: /docker-entrypoint-initdb.d
volumes:
- name: mysql-initdb
configMap:
name: initdb
---
apiVersion: v1
kind: ConfigMap
metadata:
name: initdb
data:
initdb.sql: |-
CREATE USER 'first_user'@'%' IDENTIFIED BY '111' ;
CREATE USER 'second_user'@'%' IDENTIFIED BY '222' ;
</code></pre>
<p>Test:</p>
<pre><code>kubectl exec -it <PODNAME> -- mysql -uroot -p -e 'SELECT user, host FROM mysql.user;'
+-------------+------+
| user | host |
+-------------+------+
| first_user | % |
| second_user | % |
| root | % |
+-------------+------+
</code></pre>
<p>See <a href="https://hub.docker.com/_/mysql/" rel="noreferrer">Initializing a fresh instance</a> Mysql Docker Hub image:</p>
<blockquote>
<p>When a container is started for the first time, a new database with
the specified name will be created and initialized with the provided
configuration variables. Furthermore, it will execute files with
extensions <code>.sh,</code> <code>.sql</code> and <code>.sql.gz</code> that are found in
<code>/docker-entrypoint-initdb.d</code>. Files will be executed in alphabetical
order. </p>
<p>You can easily populate your <code>mysql</code> services by mounting a SQL
dump into that directory and provide custom images with contributed
data. SQL files will be imported by default to the database specified
by the MYSQL_DATABASE variable.</p>
</blockquote>
|
<p>I have deployed Airflow via Docker on a Kubernetes cluster and now I need to increase the persistent volume's storage capacity. While editing the YAML file via the UI, I get this error:</p>
<pre><code>PersistentVolumeClaim "data-pallet-airflow-worker-0" is invalid: spec: Forbidden: field is immutable after creation
</code></pre>
| <p>As mentioned by @aerokite, this question seems to have already been answered in this <a href="https://stackoverflow.com/questions/45594836/increasing-size-of-persistent-disks-on-kubernetes?answertab=oldest#tab-top">community post</a>. </p>
<blockquote>
<p>Posted as a community wiki.</p>
</blockquote>
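<p>For reference, a minimal sketch of the usual resize flow from that post, assuming the underlying StorageClass and volume plugin support expansion (the storage class name, the target size and the pod name below are assumptions; adjust them to your cluster):</p>
<pre><code># 1. Make sure the StorageClass allows expansion
kubectl patch storageclass standard \
  -p '{"allowVolumeExpansion": true}'

# 2. Request the larger size directly on the PVC
kubectl patch pvc data-pallet-airflow-worker-0 \
  -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'

# 3. Depending on the volume plugin, restart the pod using the volume
#    so the filesystem gets resized
kubectl delete pod pallet-airflow-worker-0
</code></pre>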
|
<p>In a pod, can we have a single volume for two different containers?</p>
| <p>If you have two containers and you want to share data between them, you can do it like below:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: production
spec:
containers:
- name: container1
image: image1
volumeMounts:
- name: storage
mountPath: /vol/data
- name: container2
image: image2
volumeMounts:
- name: storage
mountPath: /store/data
volumes:
- name: storage
emptyDir: {}
</code></pre>
<p>Here,</p>
<p>an <code>emptyDir</code> volume is used to share data between the two containers. Both containers mount the same volume.</p>
<p>So, if you want to share the same data, you can mount the same volume in both containers.</p>
<p>But if you want to use a single volume and do not want to share data between the two containers, you can use <code>subPath</code>:</p>
<pre><code>spec:
containers:
- name: container1
image: image1
volumeMounts:
- name: storage
mountPath: /vol/data
subPath: vol
- name: container2
image: image2
volumeMounts:
- name: storage
mountPath: /store/data
subPath: store
volumes:
- name: storage
emptyDir: {}
</code></pre>
<p>Here,</p>
<p><code>subPath</code> specifies a sub-path inside the referenced volume to mount instead of its root. That means two separate directories from your volume will be mounted in the two containers.</p>
<p>In this example, the <code>vol</code> directory of the volume will be mounted in the <code>container1</code> container (at <code>/vol/data</code>) and the <code>store</code> directory will be mounted in <code>container2</code> (at <code>/store/data</code>).</p>
<p>Now your data will not conflict and will not be shared between the containers.</p>
|
<p>If I move a relevant config file and run <code>kubectl proxy</code> it will allow me to access the Kubernetes dashboard through this URL:</p>
<pre><code>http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
</code></pre>
<p>However if I try to access the node directly, without <code>kubectl proxy</code>, I will get a 403 Forbidden.</p>
<pre><code>http://dev-master:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login
</code></pre>
<p>Our kubernetes clusters are hidden inside a private network that users need to VPN in to; furthermore only some of us can talk to the master node of each of our clusters after authenticating to the VPN. As such, running <code>kubectl proxy</code> is a redundant step, and choosing the appropriate config file for each cluster is an additional pain, especially when we want to compare the state of different clusters.</p>
<p>What needs to be changed to allow "anonymous" HTTP access to the dashboard of these already-secured kubernetes master nodes?</p>
| <p>You would want to set up a Service (either NodePort or LoadBalancer) for the dashboard pod(s) to expose it to the outside world (well, outside from the PoV of the cluster, which is still an internal network for you).</p>
|
<p>I configured automated SSL certificate management a few months ago as described here: <a href="http://docs.cert-manager.io/en/latest/tutorials/acme/dns-validation.html" rel="nofollow noreferrer">http://docs.cert-manager.io/en/latest/tutorials/acme/dns-validation.html</a>
for the domains <code><myhost>.com</code> and <code>dev.<myhost>.com</code>.
So I have two namespaces: <code>prod</code> for <code><myhost>.com</code> and
<code>dev</code> for <code>dev.<myhost>.com</code>. In each namespace I have an ingress controller
and a <code>Certificate</code> resource that stores the certificate in a secret.
This works fine, and the <code>ClusterIssuer</code> automatically renews the certificates.</p>
<p>But a few days ago I tried to add a new domain, <code>test.<myhost>.com</code>, in the <code>test</code> namespace with exactly the same ingress and certificate configuration
as in the <code>prod</code> or <code>dev</code> namespaces (except for the host name and namespace):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
kubernetes.io/tls-acme: 'true'
name: app-ingress
namespace: test
spec:
tls:
- hosts:
- test.<myhost>.com
secretName: letsencrypt-tls
rules:
- host: test.<myhost>.com
http:
paths:
- backend:
serviceName: web
servicePort: 80
path: /
---
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: cert-letsencrypt
namespace: test
spec:
secretName: letsencrypt-tls
issuerRef:
name: letsencrypt-prod-dns
kind: ClusterIssuer
commonName: 'test.<myhost>.com'
dnsNames:
- test.<myhost>.com
acme:
config:
- dns01:
provider: dns
domains:
- test.<myhost>.com
</code></pre>
<p>and this configuration doesn't work: the certificate can't be found in the secret, and the ingress is using the "app-ingress-fake-certificate".</p>
<p>The <code>cert-manager</code> pod shows a lot of errors like these:</p>
<pre><code>pkg/client/informers/externalversions/factory.go:72: Failed to list *v1alpha1.Challenge: challenges.certmanager.k8s.io is forbidden: User "system:serviceaccount:kube-system:cert-manager" cannot list challenges.certmanager.k8s.io at the cluster scope
pkg/client/informers/externalversions/factory.go:72: Failed to list *v1alpha1.Order: orders.certmanager.k8s.io is forbidden: User "system:serviceaccount:kube-system:cert-manager" cannot list orders.certmanager.k8s.io at the cluster scope
</code></pre>
<p>and the <code>Certificate</code> is not even trying to obtain a certificate (<code>kubectl describe -ntest cert-letsencrypt</code>):</p>
<pre><code>API Version: certmanager.k8s.io/v1alpha1
Kind: Certificate
Metadata: ...
Spec:
Acme:
Config:
Dns 01:
Provider: dns
Domains:
test.<myhost>.com
Common Name: test.<myhost>.com
Dns Names:
test.<myhost>.com
Issuer Ref:
Kind: ClusterIssuer
Name: letsencrypt-prod-dns
Secret Name: letsencrypt-tls
Events: <none>
</code></pre>
<p>It should have some status, like the certificates in the other namespaces do.</p>
<p>I can't understand why this configuration worked before but doesn't work now.</p>
<p>I'm not sure it's related, but I updated Kubernetes using kops a few weeks ago; the current version is:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"archive", BuildDate:"2018-10-12T16:56:06Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.6", GitCommit:"a21fdbd78dde8f5447f5f6c331f7eb6f80bd684e", GitTreeState:"clean", BuildDate:"2018-07-26T10:04:08Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| <p>The cause of this issue was the Kubernetes upgrade from <code>1.9</code> to <code>1.10</code>. To fix it you need to upgrade cert-manager to version <code>0.5.x</code>.</p>
<p>It may not be possible to upgrade from <code>0.4.x</code> to <code>0.5.x</code> using <code>helm</code> because of the bug <a href="https://github.com/jetstack/cert-manager/issues/1134" rel="nofollow noreferrer">https://github.com/jetstack/cert-manager/issues/1134</a>.
In that case you need to back up all issuer and certificate configurations, delete cert-manager <code>0.4.x</code>, install <code>0.5.x</code>, and then re-apply the issuer and certificate configurations saved in the first step.</p>
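<p>A minimal sketch of that backup-and-reinstall flow, assuming cert-manager was installed with Helm 2 from the <code>stable</code> chart under the release name <code>cert-manager</code> (the release name and chart version are assumptions; adjust to your installation):</p>
<pre><code># 1. Back up all cert-manager resources before touching the install
kubectl get clusterissuers,issuers,certificates --all-namespaces -o yaml \
  > cert-manager-backup.yaml

# 2. Remove the old 0.4.x release (release name assumed to be "cert-manager")
helm delete --purge cert-manager

# 3. Install the 0.5.x chart (version shown is an example; check the latest 0.5.x)
helm install --name cert-manager --namespace kube-system \
  --version v0.5.2 stable/cert-manager

# 4. Re-apply the issuers and certificates from the backup
kubectl apply -f cert-manager-backup.yaml
</code></pre>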
|
<p>I couldn't find any information on whether the connection created between a cluster's pod and localhost is encrypted when running the "kubectl port-forward" command.</p>
<p>It seems like it uses the "<a href="https://linux.die.net/man/1/socat" rel="noreferrer">socat</a>" library, which supports encryption, but I'm not sure whether Kubernetes actually uses that.</p>
| <p>kubectl port-forward uses socat to make an encrypted TLS tunnel with port-forwarding capabilities.
The tunnel goes from you to the kube-apiserver and then on to the pod, so it may actually be two tunnels, with the kube-apiserver acting as a pseudo router.</p>
<p>An example of where I've found it useful: I was doing a quick PoC of a Jenkins Pipeline hosted on Azure Kubernetes Service, and early in my Kubernetes studies I didn't know how to set up an Ingress. I could reach the server via port 80 unencrypted, but I knew my traffic could be snooped on, so I just used kubectl port-forward to log in temporarily and securely while debugging my PoC. It's also really helpful with a RabbitMQ cluster hosted on Kubernetes: you can open the management web page through kubectl port-forward (see the sketch below) and make sure it's clustering the way you wanted it to.</p>
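<p>A typical invocation of that kind, as a hedged sketch (the service name, namespace and port below are assumptions for a standard RabbitMQ management setup):</p>
<pre><code># Forward local port 15672 to the RabbitMQ management port of the service
kubectl port-forward svc/rabbitmq -n messaging 15672:15672

# The management UI is now reachable only from this machine at
#   http://localhost:15672
# and the traffic travels over the TLS connection to the kube-apiserver.
</code></pre>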
|
<p>By default, CoreDNS is installed in Kubernetes 1.13.
Can you please tell me how to curl a service in the cluster by its name?</p>
<pre><code>[root@master ~]# kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.233.0.1 <none> 443/TCP 24h
[root@master ~]# kubectl get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-system coredns ClusterIP 10.233.0.3 <none> 53/UDP,53/TCP,9153/TCP 21h
tools nexus-svc NodePort 10.233.17.152 <none> 8081:31991/TCP,5000:31111/TCP,8083:31081/TCP,8082:31085/TCP 14h
[root@master ~]# kubectl describe services nexus-svc --namespace=tools
Name: nexus-svc
Namespace: tools
Labels: tools=nexus
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"tools":"nexus"},"name":"nexus-svc","namespace":"tools"},"spec"...
Selector: tools=nexus
Type: NodePort
IP: 10.233.17.152
Port: http 8081/TCP
.....
</code></pre>
<p>So I get the correct answer.</p>
<pre><code>[root@master ~]# curl http://10.233.17.152:8081
<!DOCTYPE html>
<html lang="en">
<head>
<title>Nexus Repository Manager</title>
....
</code></pre>
<p>And so no.</p>
<pre><code>[root@master ~]# curl http://nexus-svc.tools.svc.cluster.local
curl: (6) Could not resolve host: nexus-svc.tools.svc.cluster.local; Unknown error
[root@master ~]# curl http://nexus-svc.tools.svc.cluster.local:8081
curl: (6) Could not resolve host: nexus-svc.tools.svc.cluster.local; Unknown error
</code></pre>
<p>Thanks.</p>
| <p><code>coredns</code> or <code>kubedns</code> resolve a service name to its <code>clusterIP</code> (normal service) or to the corresponding Pod IPs (headless service) inside the Kubernetes cluster, not outside it. You are trying to curl the service name on the node, not inside a pod, and hence it is not able to resolve the service name to its clusterIP.</p>
<p>You can go inside a pod and try the following:</p>
<pre><code>kubectl exec -it <pod_name> bash
nslookup nexus-svc.tools.svc.cluster.local
</code></pre>
<p>It will return the cluster IP, which means <code>coredns</code> is working fine. If your pod has the curl utility, you can also curl the service by name (but only from inside the cluster).</p>
<p>If you want to access the service from outside the cluster, this service is already exposed as a <code>NodePort</code>, so you can access it using:</p>
<pre><code> curl http://<node_ip>:31991
</code></pre>
<p>Hope this helps.</p>
|
<p>If I am running Spark on EC2 (or in Kubernetes), can I use S3/EMRFS in place of HDFS? Is this production-ready, and does it use parallelism to read/process data from S3?</p>
<p>Thanks in advance</p>
| <p>No, EMRFS is for EMR only; it is the easy way to make S3 look like part of HDFS. From EC2 you can still connect to S3, but that is less convenient than with EMR, since S3 is not tightly coupled to EC2. Yes, parallelism is applied when reading from S3, but without MapReduce-style data locality, i.e. the worker and the data node are not co-located.</p>
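<p>A minimal sketch of reading S3 from Spark on EC2 or Kubernetes through the <code>s3a</code> connector (the bucket name, hadoop-aws version and inline credentials are assumptions for illustration; in practice prefer instance profiles or Kubernetes secrets over inline keys):</p>
<pre><code># Submit a job with the S3A filesystem on the classpath and credentials configured
spark-submit \
  --packages org.apache.hadoop:hadoop-aws:2.7.7 \
  --conf spark.hadoop.fs.s3a.access.key=$AWS_ACCESS_KEY_ID \
  --conf spark.hadoop.fs.s3a.secret.key=$AWS_SECRET_ACCESS_KEY \
  my_job.py

# Inside the job, paths simply use the s3a:// scheme, e.g.
#   spark.read.parquet("s3a://my-bucket/events/")
# Spark still parallelizes the read, one task per file split; it just
# cannot schedule tasks "next to" the data the way HDFS allows.
</code></pre>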
|