prompt | response
---|---
<p>I just saw some of my pods get evicted by Kubernetes. What will happen to them? Will they just hang around like that, or do I have to delete them manually?</p>
| <p>A quick workaround I use is to delete all evicted pods manually after an incident. You can use this command:</p>
<pre><code>kubectl get pods --all-namespaces -o json | jq '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) | "kubectl delete pods \(.metadata.name) -n \(.metadata.namespace)"' | xargs -n 1 bash -c
</code></pre>
|
<p>URL: /apis/apps/v1/namespaces/diyclientapps/deployments</p>
<blockquote>
<p>
"{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"deployments.apps
is forbidden: User \"system:serviceaccount:default:default\" cannot
create deployments.apps in the namespace
\"diyclientapps\"","reason":"Forbidden","details":{"group":"apps","kind":"deployments"},"code":403}</p>
</blockquote>
<p>I'm getting the above error when trying to create a deployment via the Kubernetes REST API.</p>
<p>Why? I don't understand the error message...</p>
<p>This occurs on a custom Kubernetes cluster... The above worked correctly on a local Minikube instance.</p>
<p>I can successfully create a deployment via: <code>kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.4 --port=8080</code></p>
| <p>This is due to the RBAC functionality.</p>
<p>If you do not care about that at all (for example you're the only Kubernetes administrator):</p>
<p><strong>WARNING: This allows any Kubernetes user to have admin access.</strong></p>
<pre><code>kubectl create clusterrolebinding serviceaccounts-cluster-admin \
--clusterrole=cluster-admin \
--group=system:serviceaccounts
</code></pre>
<p><a href="https://kubernetes.io/docs/admin/authorization/rbac/" rel="nofollow noreferrer">https://kubernetes.io/docs/admin/authorization/rbac/</a></p>
|
<p>I am trying to create an S3 bucket using </p>
<p><code>aws s3api create-bucket --bucket kubernetes-aws-wthamira-io</code></p>
<p>It gives this error: </p>
<pre class="lang-sh prettyprint-override"><code>An error occurred (IllegalLocationConstraintException) when calling
the CreateBucket operation: The unspecified location constraint is
incompatible for the region specific endpoint this request was sent
to.
</code></pre>
<p>I set the region using <code>aws configure</code> to <code>eu-west-1</code> </p>
<pre><code>Default region name [eu-west-1]:
</code></pre>
<p>but it gives the same error. How do I solve this?</p>
<p>I use the macOS terminal to connect to AWS.</p>
| <p>Try this:</p>
<pre><code>aws s3api create-bucket --bucket kubernetes-aws-wthamira-io --create-bucket-configuration LocationConstraint=eu-west-1
</code></pre>
<p>Regions outside of <code>us-east-1</code> require the appropriate <code>LocationConstraint</code> to be specified in order to create the bucket in the desired region.</p>
<p><a href="https://docs.aws.amazon.com/cli/latest/reference/s3api/create-bucket.html" rel="noreferrer">https://docs.aws.amazon.com/cli/latest/reference/s3api/create-bucket.html</a></p>
|
<p>I've created a <code>persistentVolumeClaim</code> on my custom Kubernetes cluster, however it seems to be stuck in pending...</p>
<p>Do I need to install/configure some additional something? OR is this functionality only available on GCP / AWS?</p>
<p><strong>pvc.yaml</strong></p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
spec:
  storageClassName: standard
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
metadata:
  name: testingchris
</code></pre>
<p>describe pvc:</p>
<pre><code>Name:          testingchris
Namespace:     diyclientapps
StorageClass:  standard
Status:        Pending
Volume:
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"testingchris","namespace":"diyclientapps"},"spec":{"accessModes"...
Finalizers:    []
Capacity:
Access Modes:
Events:
  Type     Reason              Age               From                         Message
  ----     ------              ---               ----                         -------
  Warning  ProvisioningFailed  8s (x3 over 36s)  persistentvolume-controller  storageclass.storage.k8s.io "standard" not found
</code></pre>
| <p>A PVC is just a claim: a declaration of one's requirements for persistent storage.</p>
<p>For a PVC to bind, a PV matching the PVC's requirements must show up, and that can happen in two ways: manual provisioning (e.g. adding a PV via kubectl) or <a href="https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/" rel="noreferrer">Dynamic Volume Provisioning</a>.</p>
<p>What you are experiencing is that your current setup did not auto-provision a PV for your PVC: the <code>ProvisioningFailed</code> event shows there is no StorageClass named <code>standard</code> in your cluster. A sketch of the manual route is below.</p>
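<p>As a minimal sketch of that manual route (the PV name and hostPath are illustrative, and hostPath is only suitable for single-node test clusters), you could create a PV whose <code>storageClassName</code> matches the claim:</p>
<pre><code>kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: testingchris-pv        # illustrative name
spec:
  storageClassName: standard   # must match the PVC's storageClassName
  capacity:
    storage: 3Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /tmp/testingchris    # illustrative path, for single-node/test use only
EOF
</code></pre>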
|
<p>I was trying to deploy a very basic Express app, a small server listening on 8080, on an EC2 server (Ubuntu 16.04), <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">following this tutorial</a>. On that server, a Kubernetes cluster was created through kops 1.8.0.
After that, I created a Dockerfile like the following:</p>
<pre><code>FROM node:carbon
ENV NPM_CONFIG_PREFIX=/home/node/.npm-global
ENV PATH=$PATH:/home/node/.npm-global/bin
# Create app directory
WORKDIR /usr/src/app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm@5+)
COPY package*.json ./
RUN npm install
# Bundle app source
COPY . .
EXPOSE 8080
CMD [ "node", "server.js" ]
# At the end, set the user to use when running this image
USER node
</code></pre>
<p>After that, I built the image with <code>docker build -t ccastelli/stupid_server:test1</code>, specified my credentials with <code>docker login -u ccastelli</code>, copied the image ID from <code>docker images</code>, tagged it with <code>docker tag c549618dcd86 org/test:first_try</code> and pushed it with <code>docker push org/test</code> to a private repository on <a href="http://cloud.docker.com" rel="nofollow noreferrer">cloud.docker.com</a>.</p>
<p>After that I created a cluster secret with <code>kubectl create secret docker-registry ccastelli-regcred --docker-server=docker.com --docker-username=ccastelli --docker-password='pass' [email protected]</code></p>
<p>After that I created a deployment file:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: stupid-server-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: stupid-server
    spec:
      containers:
      - name: stupid-server
        image: org/test:first_try
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
      imagePullSecrets:
      - name: ccastelli-regcred
</code></pre>
<p>I see from <code>kubectl get pods</code> that the pod transitioned from <code>ErrImagePull</code> to <code>ImagePullBackOff</code> and it's not ready. Anyway, the Docker container was working on the client instance but not in the cluster. At this point, I'm a bit lost. What am I doing wrong?
Thanks</p>
<p>Edit: error message:</p>
<blockquote>
<p>Failed to pull image "org/test:first_try": rpc error: code =
Unknown desc = Error response from daemon: repository pycomio/test not
found: does not exist or no pull access</p>
</blockquote>
| <p>Your <code>--docker-server</code> should be <code>index.docker.io</code>:</p>
<pre><code>DOCKER_REGISTRY_SERVER=https://index.docker.io/v1/
DOCKER_USER=Type your dockerhub username, same as when you `docker login`
DOCKER_EMAIL=Type your dockerhub email, same as when you `docker login`
DOCKER_PASSWORD=Type your dockerhub pw, same as when you `docker login`
kubectl create secret docker-registry myregistrykey \
--docker-server=$DOCKER_REGISTRY_SERVER \
--docker-username=$DOCKER_USER \
--docker-password=$DOCKER_PASSWORD \
--docker-email=$DOCKER_EMAIL
</code></pre>
|
<pre><code>$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://192.168.1.32:6443
  name: kubernetes
</code></pre>
<p>How can I set the IP address / Port of the actual <strong>cluster</strong>? Why is this default set?</p>
<p>I'm not trying to change the 'client' IP/port.</p>
<p>I've tried: <code>sudo kubeadm init --pod-network-cidr="192.168.0.0/16" --apiserver-advertise-address 192.168.99.100 --apiserver-bind-port 8443</code> which just seems to hang on:</p>
<pre><code>[init] Using Kubernetes version: v1.9.3
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [diy-virtual-machine kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.99.100]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
</code></pre>
| <blockquote>
<p>--apiserver-advertise-address <strong>string</strong> The IP address the API Server will advertise it's listening on. Specify '0.0.0.0' to use the address of the default network interface.</p>
</blockquote>
<p>By providing a specific IP and port with these flags, kubeadm will use the designated IP and port respectively.
For example:</p>
<pre><code>--apiserver-advertise-address "192.168.99.100"
--apiserver-bind-port 8443
</code></pre>
<p>Reference:</p>
<p><a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-config/#cmd-config-from-flags" rel="nofollow noreferrer">kubeadm-config flags</a></p>
|
<p>When a Kubernetes pod goes into <code>CrashLoopBackOff</code> state and you have fixed the underlying issue, how do you force it to be rescheduled?</p>
| <p>To apply a new configuration, a new pod has to be created (the old one will be removed).</p>
<ul>
<li><p>If your pod was created automatically by a <code>Deployment</code> or <code>DaemonSet</code> resource, this happens automatically each time you update the resource's YAML.
It is not going to happen if your resource has <code>spec.updateStrategy.type=OnDelete</code>.</p>
</li>
<li><p>If the problem was caused by an error inside the Docker image that you have since fixed, you should update the pods manually; you can use the <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/" rel="noreferrer">rolling-update</a> feature for this purpose. If the new image has the same tag, you can just remove the broken pod (see below).</p>
</li>
<li><p>In case of node failure, the pod will be recreated on a new node after some time; the old pod will be removed after full recovery of the broken node. It is worth noting that this will not happen if your pod was created by a <code>DaemonSet</code> or <code>StatefulSet</code>.</p>
</li>
</ul>
<p>In any case, you can manually remove the crashed pod:</p>
<pre><code>kubectl delete pod <pod_name>
</code></pre>
<p>Or all pods with <code>CrashLoopBackOff</code> state:</p>
<pre><code>kubectl delete pod `kubectl get pods | awk '$3 == "CrashLoopBackOff" {print $1}'`
</code></pre>
<p>If you have a completely dead node, you can add the <code>--grace-period=0 --force</code> options to remove just the information about this pod from Kubernetes.</p>
|
<p>I have a Python controller which uses <code>scrapy-splash</code> lib that sends <code>SplashRequest</code> to a Splash service.</p>
<p>Locally, I run both the controller and the Splash service in two different Docker containers.</p>
<p><code>yield SplashRequest(url=response.url, callback=parse, splash_url=<URL> endpoint='execute', args=<SPLASH_ARGS>)</code></p>
<p>When I send the request locally with <code>splash_url="http://127.0.0.1:8050</code>, everything works fine.</p>
<p>Now, I wanted to have a Kubernetes deployment with Splash and to process the splash request on the cloud. I have created Splash Deployment and a Service with <code>type=LoadBalancer</code> on Google Cloud Kubernetes.</p>
<p>And sending the splash request to the <code>External Ip</code> of the splash service.</p>
<p>But splash doesn't receive any request... and in the python script I get</p>
<blockquote>
<p>twisted.python.failure.Failure twisted.internet.error.TCPTimedOutError: TCP connection timed out: 60: Operation timed out.</p>
</blockquote>
<p>It worked in the past while using the <code>Internal endpoint</code> of the pod, but I started to get a <code>Missing schema</code> exception because I didn't use <code>http://</code> in the URL.</p>
<ul>
<li>splash docker image scrapinghub/splash:3.2</li>
<li>Kubernetes version 1.7, (tried also on 1.9)</li>
</ul>
<p>splash-deployment.yaml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: my-app
  name: splash
  namespace: ns-app
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        app: splash
    spec:
      containers:
      - image: scrapinghub/splash:3.2
        name: splash
        ports:
        - containerPort: 8050
        resources: {}
      restartPolicy: Always
status: {}
</code></pre>
<p>splash-service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  labels:
    app: app
  name: splash
  namespace: ns-app
spec:
  type: LoadBalancer
  ports:
  - name: "8050"
    port: 8050
    targetPort: 8050
    protocol: TCP
  selector:
    app: app
status:
  loadBalancer: {}
</code></pre>
<p><strong>UPDATE</strong>
I noticed that locally, when I go to <code>http://localhost:8050/</code>, I see the Splash UI, while going to the Kubernetes external IP I get</p>
<blockquote>
<p>refused to connect</p>
</blockquote>
<p>How do I solve this?
Thank you</p>
| <p>The problem is that the <code>splash-service.yaml</code> selector is wrong: it should match the labels of the pods created by the Deployment (<code>app: splash</code>).</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  labels:
    app: app
  name: splash
  namespace: ns-app
spec:
  type: LoadBalancer
  ports:
  - name: "8050"
    port: 8050
    targetPort: 8050
    protocol: TCP
  selector:
    app: splash
status:
  loadBalancer: {}
</code></pre>
|
<p>Every time I make a new deployment of my app, my nodes start reporting <code>nodeHasDiskPressure</code>. After around 10 minutes or so the node goes back to a normal state. I found this SO answer regarding setting thresholds: <a href="https://stackoverflow.com/questions/42576661/diskpressure-crashing-the-node">DiskPressure crashing the node</a></p>
<p>... but I am not sure how to actually set these thresholds on Google Kubernetes Engine.</p>
| <p>The kubelet options you mentioned can be added to your cluster's instance template.</p>
<p>Make a copy of the instance template that is used by your cluster's instance group. After clicking "Copy", and before saving, you can make changes to the instance template: add those flags under Instance-template --> Custom metadata --> kube-env.</p>
<p>The flags are added in this way:</p>
<pre><code>KUBELET_TEST_ARGS: --image-gc-high-threshold=[your value]
KUBELET_TEST_ARGS: --low-diskspace-threshold-mb=[your value]
KUBELET_TEST_ARGS: --image-gc-low-threshold=[your value]
</code></pre>
<p>Once you have set your values, save the instance template, then edit your cluster's instance group to change the instance template from the default to your custom one. When done, hit "Rolling restart/replace" on the instance group's main page in the dashboard. This will restart the instances of your cluster with the new values.</p>
|
<p>I have a Python controller which uses <code>scrapy-splash</code> lib that sends <code>SplashRequest</code> to a Splash service.</p>
<p>Locally, I run both the controller and the Splash service in two different Docker containers.</p>
<p><code>yield SplashRequest(url=response.url, callback=parse, splash_url=<URL> endpoint='execute', args=<SPLASH_ARGS>)</code></p>
<p>When I send the request locally with <code>splash_url="http://127.0.0.1:8050</code>, everything works fine.</p>
<p>Now, I wanted to have a Kubernetes deployment with Splash and to process the splash request on the cloud. I have created Splash Deployment and a Service with <code>type=LoadBalancer</code> on Google Cloud Kubernetes.</p>
<p>And sending the splash request to the <code>External Ip</code> of the splash service.</p>
<p>But splash doesn't receive any request... and in the python script I get</p>
<blockquote>
<p>twisted.python.failure.Failure twisted.internet.error.TCPTimedOutError: TCP connection timed out: 60: Operation timed out.</p>
</blockquote>
<p>It worked in the past while using the <code>Internal endpoint</code> of the pod, but I started to get a <code>Missing schema</code> exception because I didn't use <code>http://</code> in the URL.</p>
<ul>
<li>splash docker image scrapinghub/splash:3.2</li>
<li>Kubernetes version 1.7, (tried also on 1.9)</li>
</ul>
<p>splash-deployment.yaml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: my-app
  name: splash
  namespace: ns-app
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      labels:
        app: splash
    spec:
      containers:
      - image: scrapinghub/splash:3.2
        name: splash
        ports:
        - containerPort: 8050
        resources: {}
      restartPolicy: Always
status: {}
</code></pre>
<p>splash-service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  labels:
    app: app
  name: splash
  namespace: ns-app
spec:
  type: LoadBalancer
  ports:
  - name: "8050"
    port: 8050
    targetPort: 8050
    protocol: TCP
  selector:
    app: app
status:
  loadBalancer: {}
</code></pre>
<p><strong>UPDATE</strong>
I noticed that locally, when I go to <code>http://localhost:8050/</code>, I see the Splash UI, while going to the Kubernetes external IP I get</p>
<blockquote>
<p>refused to connect</p>
</blockquote>
<p>How do I solve this?
Thank you</p>
| <p><strong>UPDATE</strong>: I see that you have since found the issue yourself, my bad.</p>
<p>I believe Ami Hollander is right that it is an issue with the label selector, but I would like to explain why.</p>
<p>Consider that each time you create a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">service</a> with a selector, an Endpoints resource is created as well; it is populated with the addresses of all pods whose labels match the selector. You can also manually add any IP or domain to point to external resources.</p>
<p>Kubernetes services can be exposed on externalIPs that route to one or more cluster nodes. Traffic that ingresses into the cluster with the external IP (as destination IP), on the service port, will be routed to one of the service endpoints.</p>
<p>Therefore, as was pointed out, your selector was not matching any pod, so the Endpoints resource likely contains no backends and there is no way to route the request. You can double-check this by running:</p>
<pre><code>$ kubectl get endpoints
$ kubectl describe endpoints endpointname
</code></pre>
<p>It can be misleading because on the other hand if you run</p>
<pre><code>$ kubectl get services
</code></pre>
<p>you will notice that the service has been correctly created showing a private and a public IP that will be simply a dead end.</p>
<ul>
<li>You were able to see it correctly because everything was working, but the request was not routed in the right way.</li>
</ul>
|
<p>I have a Mongo database in GCE (config see below).</p>
<p>When I deploy it to a <strong>1.7.12-gke.1</strong> cluster everything works fine, which means the sidecar resolves the pods and links them.</p>
<p>Now, deploying the same configuration to <strong>1.8.7-gke.1</strong> results in missing permissions to list pods, see below.</p>
<p>I don't understand what has changed. I assume I need to assign specific permissions to the service account, is that right?</p>
<p>What am I missing?</p>
<p><strong>Error log</strong></p>
<pre><code>message: 'pods is forbidden: User "system:serviceaccount:default:default" cannot list pods at the cluster scope: Unknown user "system:serviceaccount:default:default"',
mongo-sidecar | Feb 28, 2018, 11:04:19 AM | status: 'Failure',
mongo-sidecar | Feb 28, 2018, 11:04:19 AM | metadata: {},
mongo-sidecar | Feb 28, 2018, 11:04:19 AM | apiVersion: 'v1',
mongo-sidecar | Feb 28, 2018, 11:04:19 AM | { kind: 'Status',
mongo-sidecar | Feb 28, 2018, 11:04:19 AM | message:
mongo-sidecar | Feb 28, 2018, 11:04:19 AM | Error in workloop { [Error: [object Object]]
mongo-sidecar | Feb 28, 2018, 11:04:14 AM | statusCode: 403 }
mongo-sidecar | Feb 28, 2018, 11:04:14 AM | code: 403 },
mongo-sidecar | Feb 28, 2018, 11:04:14 AM | details: { kind: 'pods' },
mongo-sidecar | Feb 28, 2018, 11:04:14 AM | reason: 'Forbidden',
</code></pre>
<p><strong>Config</strong>:</p>
<pre><code>---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: "mongo"
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: mongo
        image: mongo:3.4.9
        command:
        - mongod
        - "--replSet"
        - rs0
        - "--smallfiles"
        - "--noprealloc"
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      - name: mongo-sidecar
        image: cvallance/mongo-k8s-sidecar
        env:
        - name: MONGO_SIDECAR_POD_LABELS
          value: "role=mongo,environment=test"
  volumeClaimTemplates:
  - metadata:
      name: mongo-persistent-storage
      annotations:
        volume.beta.kubernetes.io/storage-class: "fast"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
</code></pre>
| <p>According to the original solution: <a href="https://github.com/cvallance/mongo-k8s-sidecar/issues/75" rel="noreferrer">https://github.com/cvallance/mongo-k8s-sidecar/issues/75</a></p>
<p>You have to create a role binding which will grant the default service account <code>view</code> permissions:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: default-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
</code></pre>
|
<p>Is it possible to do the traffic routing using the route rules as described in the image below? The traffic routing would need to happen right after the traffic enters the cluster.</p>
<p><a href="https://i.stack.imgur.com/5Y9C3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5Y9C3.png" alt="enter image description here"></a></p>
<p>We have been fiddling around with them but always hit a wall as soon as we had two route rules and used a uri regex in one of the rules.</p>
<p>Our idea is to configure the istio ingress like so</p>
<pre><code># Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway
  annotations:
    kubernetes.io/ingress.class: "istio"
spec:
  backend:
    serviceName: frontend
    servicePort: 8080
</code></pre>
<p>And then configure the route rules that split the traffic depending on the uri header. Here is an example of how we would configure the products route rule:</p>
<pre><code>apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: products-rule
spec:
  destination:
    name: frontend
  precedence: 2
  match:
    request:
      headers:
        uri:
          regex: "^/(?:.*?)/?products(?:/.*)?$"
  route:
  - labels:
      fe-app: products
</code></pre>
<p>Is this setup possible with istio?</p>
| <p>UPDATE: It seems Kubernetes Ingress does not support regular expressions, see <a href="https://github.com/kubernetes/ingress-nginx/issues/1360" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/1360</a>. <strong>So the answer below would not work.</strong></p>
<p>I would create three different services: <code>products</code>, <code>settings</code> and <code>customers</code> and use Kubernetes Ingress <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#simple-fanout" rel="nofollow noreferrer">fanout</a>. I would use your regex expressions for the paths. I would try it first with a plain Kubernetes Ingress, without <code>kubernetes.io/ingress.class: "istio"</code> annotation. If it would work, I would add the Istio annotation and verify that it works as Istio Ingress as well.</p>
|
<p>I am trying below command</p>
<pre><code>kubectl exec -it sss-pod-four echo "hi" >> /mnt/sss/testnew.txt
</code></pre>
<p>But it throws an error:</p>
<pre><code>-bash: /mnt/sss/testnew.txt: No such file or directory
</code></pre>
<p>What is the best way to achieve this</p>
| <p>Found a similar question <a href="https://stackoverflow.com/questions/43499313/executing-multiple-commands-or-from-a-shell-script-in-a-kubernetes-pod">here</a>, and the command below works now:</p>
<pre><code> kubectl exec -it sss-pod-four -- bash -c "echo hi > /mnt/sss/testnew.txt"
</code></pre>
|
<p>I have a yaml file which I can use to create pods. I am using the dashboard so I can simply select yaml file and it will create pods. Pod will start the container and container will run the docker image. So now lets say I have done some changes in the docker image and want to deploy it again. For this, I will delete the already running pod and will upload the yaml file.</p>
<p>Instead of deleting and uploading the YAML file again, is there any keyword available which will delete the already running pod/deployment and recreate it?</p>
<p>Thanks</p>
| <p>If you are using this for development you might get away with </p>
<pre><code>containers:
- image: my/app:dev
imagePullPolicy: Always
</code></pre>
<p>With this, whenever your pod is recreated, you will get a fresh image version.</p>
<p>That said, you need to use something like a Deployment to have a pod restarted automatically, and then you can just <code>kubectl delete pod my-pod-xxxxx-yyy</code> to wipe the old one and in a few seconds get a fresh, current one.</p>
<p>For prod, please don't do that. Just use tagged images and apply the changed image to your Deployment with <code>kubectl apply -f my.yaml</code>, or preferably something like Helm (but that is a more complicated topic for starters).</p>
|
<p>When I run a simple command in my local shell with the gcloud SDK:</p>
<pre><code>$ kubectl get pod
</code></pre>
<p>I get such error:</p>
<blockquote>
<p>Error from server (Forbidden): pods is forbidden: User "client" cannot list pods at the cluster scope: Unknown user "client"</p>
</blockquote>
<p>The same command runs fine on GCP cloud shell, and the output of</p>
<pre><code>$ gcloud auth list
</code></pre>
<p>is as expected:</p>
<blockquote>
<p>Credentialed Accounts<br>
<code>ACTIVE ACCOUNT</code><br>
<code>* [email protected]</code></p>
</blockquote>
<p>I also tried to create a clusterrolebinding, but got a similar error.</p>
| <p>This happens when you disable Legacy Authorisation in the cluster settings, because the client certificate that you are using is a legacy authentication method. So it looks like what is happening is the client authentication succeeds but the authorisation fails, as expected. ("Unknown user" in the error message, confusingly, seems to mean the user is unknown to the authorisation system, not to the authentication system.)</p>
<p>You can either disable the use of the client certificate with</p>
<pre><code>gcloud config unset container/use_client_certificate
</code></pre>
<p>and then regenerate your kubectl config with</p>
<pre><code>gcloud container clusters get-credentials my-cluster
</code></pre>
<p>Or you can simply re-enable Legacy Authorisation in the cluster settings in the Google Cloud Console, or using the command:</p>
<pre><code>gcloud container clusters update [CLUSTER_NAME] --enable-legacy-authorization
</code></pre>
|
<p>I'm running a local kubernetes bundled with docker on Mac OS.</p>
<p>How can I expose a service, so that I can access the service via a browser on my Mac?</p>
<p>I've created:</p>
<p>a) deployment including apache httpd.</p>
<p>b) service via yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: apaches
spec:
  selector:
    app: web
  type: NodePort
  ports:
  - protocol: TCP
    port: 80
  externalIPs:
  - 192.168.1.10 # Network IP of my Mac
</code></pre>
<p>My service looks like:</p>
<pre><code>$ kubectl get service apaches
NAME      TYPE       CLUSTER-IP       EXTERNAL-IP    PORT(S)        AGE
apaches   NodePort   10.102.106.158   192.168.1.10   80:31137/TCP   14m
</code></pre>
<p>I can locally access the service in my kubernetes cluster by <code>wget $CLUSTER-IP</code></p>
<p>I tried to call <a href="http://192.168.1.10/" rel="noreferrer">http://192.168.1.10/</a> on my Mac, but it doesn't work. </p>
<p>This <a href="https://stackoverflow.com/questions/46618683/kubernetes-minikube-cannot-expose-service-on-public-ip-range">question</a> deals to a similar issue. But the solution does not help, because I do not know which IP I can use.</p>
<p><strong>Update</strong></p>
<p>Thanks to Michael Hausenblas I worked out a solution using <a href="https://kubernetes.io/docs/concepts/services-networking/ingress" rel="noreferrer">Ingress</a>.
Nevertheless there are still some open questions:</p>
<ul>
<li>What is the meaning of a service's externalIP? Why do I need an externalIP when I do not access the service directly from outside?</li>
<li>What is the meaning of the service port 31137?</li>
<li>The Kubernetes docs describe a method to publish a service in minikube via NodePort. Is this also possible with Kubernetes bundled with Docker?</li>
</ul>
| <p>There are several solutions to expose services in kubernetes:
<a href="http://alesnosek.com/blog/2017/02/14/accessing-kubernetes-pods-from-outside-of-the-cluster/" rel="noreferrer">http://alesnosek.com/blog/2017/02/14/accessing-kubernetes-pods-from-outside-of-the-cluster/</a></p>
<p>Here are my solutions according to alesnosek for a local kubernetes bundled with docker:</p>
<p><strong>1. hostNetwork</strong></p>
<pre><code>hostNetwork: true
</code></pre>
<p>Dirty (the host network should not be shared for security reasons) => I did not check this solution.</p>
<p><strong>2. hostPort</strong></p>
<pre><code>hostPort: 8086
</code></pre>
<p>Does not apply to services => I did not check this solution.</p>
<p><strong>3. NodePort</strong></p>
<p>Expose the service by defining a nodePort:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: apaches
spec:
  type: NodePort
  ports:
  - port: 80
    nodePort: 30000
  selector:
    app: apache
</code></pre>
<p><strong>4. LoadBalancer</strong></p>
<p><strong>EDIT</strong>:
@MathObsessed posted the <a href="https://stackoverflow.com/a/60111530/1909531">solution in his answer</a>.</p>
<p><strong>5. Ingress</strong></p>
<p><strong>a. Install <a href="https://github.com/jnewland/local-dev-with-docker-for-mac-kubernetes" rel="noreferrer">Ingress Controller</a></strong></p>
<pre><code>git clone https://github.com/jnewland/local-dev-with-docker-for-mac-kubernetes.git
kubectl apply -f nginx-ingress/namespaces/nginx-ingress.yaml -Rf nginx-ingress
</code></pre>
<p><strong>b. Configure Ingress</strong></p>
<p><code>kubectl apply -f apache-ing.yaml</code></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: apache-ingress
spec:
  rules:
  - host: localhost
    http:
      paths:
      - path: /
        backend:
          serviceName: apaches
          servicePort: 80
</code></pre>
<p>Now I can access my apache deployed with kubernetes by calling <a href="http://localhost/" rel="noreferrer">http://localhost/</a></p>
<p><strong>Remarks for using <a href="https://github.com/jnewland/local-dev-with-docker-for-mac-kubernetes" rel="noreferrer">local-dev-with-docker-for-mac-kubernetes</a></strong></p>
<ul>
<li>The repo simplifies the <a href="https://github.com/kubernetes/ingress-nginx/blob/master/deploy/README.md" rel="noreferrer">deployment of the official ingress-nginx controller</a>.</li>
<li>For production use I would follow the <a href="https://github.com/kubernetes/ingress-nginx/blob/master/deploy/README.md" rel="noreferrer">official guide</a>.</li>
<li>The repo ships with a <a href="https://github.com/jnewland/local-dev-with-docker-for-mac-kubernetes/tree/master/httpbin" rel="noreferrer">tiny, full-featured ingress example</a>. Very useful for quickly getting a working example application.</li>
</ul>
<p><strong>Further documentation</strong></p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/services-networking/ingress" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress</a></li>
</ul>
|
<p>I create a one-replica zookeeper + kafka cluster with the official kafka chart from the official incubator repo:</p>
<pre><code>helm install --name mykafka -f kafka.yaml incubator/kafka
</code></pre>
<p>This gives me two pods:</p>
<pre><code>kubectl get pods
NAME READY STATUS
mykafka-kafka-0 1/1 Running
mykafka-zookeeper-0 1/1 Running
</code></pre>
<p>And four services (in addition to the default kubernetes service)</p>
<pre><code>kubectl get service
NAME                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)
kubernetes                   ClusterIP   10.96.0.1       <none>        443/TCP
mykafka-kafka                ClusterIP   10.108.143.59   <none>        9092/TCP
mykafka-kafka-headless       ClusterIP   None            <none>        9092/TCP
mykafka-zookeeper            ClusterIP   10.109.43.48    <none>        2181/TCP
mykafka-zookeeper-headless   ClusterIP   None            <none>        2888/TCP,3888/TCP
</code></pre>
<p>If I shell into the zookeeper pod:</p>
<pre><code>> kubectl exec -it mykafka-zookeeper-0 -- /bin/bash
</code></pre>
<p>I use the <code>curl</code> tool to test TCP connectivity. I expect a communications error as the server isn't using HTTP, but if curl can't even connect and I have to ctrl-C out, then the TCP connection isn't working.</p>
<p>I can access the local pod through <code>curl localhost:2181</code>:</p>
<pre><code>root@mykafka-zookeeper-0:/# curl localhost:2181
curl: (52) Empty reply from server
</code></pre>
<p>I can access other pod through <code>curl mykafka-kafka:9092</code>:</p>
<pre><code>root@mykafka-zookeeper-0:/# curl mykafka-kafka:9092
curl: (56) Recv failure: Connection reset by peer
</code></pre>
<p>But I can't access <code>mykafka-zookeeper:2181</code>. That name resolves to the cluster IP, but the attempt to TCP connect hangs until I ctrl-C:</p>
<pre><code>root@mykafka-zookeeper-0:/# curl -v mykafka-zookeeper:2181
* Rebuilt URL to: mykafka-zookeeper:2181/
* Trying 10.109.43.48...
^C
</code></pre>
<p>Similarly, I can shell into the kafka pod:</p>
<pre><code>> kubectl exec -it mykafka-kafka-0 -- /bin/bash
</code></pre>
<p>Connecting to the Zookeeper pod by the service name works fine:</p>
<pre><code>root@mykafka-kafka-0:/# curl mykafka-zookeeper:2181
curl: (52) Empty reply from server
</code></pre>
<p>Connecting to localhost kafka works fine:</p>
<pre><code>root@mykafka-kafka-0:/# curl localhost:9092
curl: (56) Recv failure: Connection reset by peer
</code></pre>
<p>But connecting to the Kafka pod by the service name doesn't work and I must ctrl-C the curl attempt:</p>
<pre><code>curl -v mykafka-kafka:9092
* Rebuilt URL to: mykafka-kafka:9092/
* Hostname was NOT found in DNS cache
* Trying 10.108.143.59...
^C
</code></pre>
<p>Can anyone explain why I can only connect to a Kubernetes service from outside the service and not from within the service?</p>
| <p>I believe what you're experiencing can be resolved by looking at how your kubelet is set up to run. There is a setting you can toggle when starting up the kubelet called <code>--hairpin-mode</code>. By default this setting is <code>promiscuous-bridge</code>, where a pod can't connect to its own service, but you can change it to <code>hairpin-veth</code>, which allows a pod to connect to its own service.</p>
<p>There are a few issues on the topic, but this seems to be referenced the most:
<a href="https://github.com/kubernetes/kubernetes/issues/45790" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/45790</a></p>
|
<p>I'm having some trouble with my AWS Kubernetes instance.</p>
<p>I'm trying to get my django instances to connect to the RDS service via the DB endpoint.</p>
<pre><code>DATABASES = {
    'default': {
        'ENGINE': 'django.contrib.gis.db.backends.postgis',
        'NAME': os.environ['NAME'],
        'USER': os.environ['USER'],
        'PASSWORD': os.environ['PASSWORD'],
        'HOST': os.environ['HOST'],
        'PORT': os.environ['PORT']
    }
}
</code></pre>
<p>The host string would resemble <code>service.key.region.rds.amazonaws.com</code> and is passed to the container via an env entry in the deploy.yml:</p>
<pre><code>containers:
- name: service
  env:
  - name: HOST
    value: service.key.region.rds.amazonaws.com
</code></pre>
<p>This setup works locally in Kubernetes but not when I put it in the cluster I have on AWS. It returns the following error instead:</p>
<pre><code>django.db.utils.OperationalError: could not translate host name
</code></pre>
<p>Any suggestions or am I missing something in how AWS likes handling things?</p>
| <p>Assuming your AWS deployment is in the same VPC as your RDS instance, you will need to change your host to use the private IP.</p>
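<p>A quick way to check what the pods can actually resolve (the hostname is taken from the deploy.yml above; the pod is a throwaway):</p>
<pre><code>kubectl run dns-test --rm -it --image=busybox --restart=Never -- \
  nslookup service.key.region.rds.amazonaws.com
</code></pre>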
|
<p>I am building a Kubernetes Cluster on Azure (AKS). I have deployed it into a custom VNet using <a href="https://github.com/Azure/AKS/issues/27#issuecomment-370627500" rel="nofollow noreferrer">this</a> document. By default, the VNet that gets created when AKS is provisioned is 10.0.0.0/8. All of our infrastructures are in 10.27.X.X space hence the need for the custom VNet.</p>
<p>As per the document the Custom VNet is created in a separate Resource Group, in our case Azure.Prod. In the same group, we have established the Virtual Network Gateway for the VPN back to our Data Centre.</p>
<p>Here is the details (obfuscated) of our config:</p>
<ul>
<li>Resource group Azure.Prod </li>
<li>Resource group MC_Azure.Prod (created by AKS)</li>
<li>Virtual network 10.150.0.0/16 in Azure.Prod</li>
<li>Subnet 10.150.1.0/24</li>
<li>Virtual machine 10.150.1.4 in MC_Azure.Prod</li>
<li>Pod network 10.244.0.0/24</li>
<li>Data centre network 10.27.16.x/24</li>
</ul>
<p>One of the containers needs to make a SQL Connection back to the Data Centre but it is failing. I am able to ping 10.150.1.4 from a machine in the data centre so have proved connectivity from DC to Azure.</p>
<p>I have added the following routes in the route table that was created by AKS, followed <a href="https://stackoverflow.com/questions/46277845/k8s-pods-unable-to-reach-external-vm-via-internal-ip/46291889">this</a> article.</p>
<ul>
<li>10.27.16.0/24 > Virtual Network Gateway</li>
</ul>
<p>On the machine in the Data Centre, I have created the following route</p>
<ul>
<li>10.244.0.0/24 > 10.27.16.3 (which is the GW on the DC NW, the device also terminates the VPN)</li>
</ul>
<p>Any help appreciated!</p>
| <p>Right, I finally got to the bottom of this: it looks like the routes back to the data centre, and also to the pods, need to be replicated on the GatewaySubnet as well.</p>
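<p>For reference, a sketch of adding such a route with the Azure CLI (the resource group, route table name and prefix below are assumptions based on the values in the question):</p>
<pre><code>az network route-table route create \
  --resource-group MC_Azure.Prod \
  --route-table-name <aks-route-table> \
  --name to-datacentre \
  --address-prefix 10.27.16.0/24 \
  --next-hop-type VirtualNetworkGateway
</code></pre>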
|
<p><strong>Versions:</strong> Kubernetes <code>v1.9.2</code> running in GCE (kube-up.sh)</p>
<p>Container-Optimized OS <code>10032.88.0</code></p>
<hr>
<p><strong>Symptom:</strong> Our COS nodes show the following in their <code>google-ip-forwarding-daemon.service</code> logs:</p>
<pre><code>Mar 07 13:43:28 kube-minion-0fqw google-ip-forwarding[935]: INFO Changing eth0 IPs from [u'104.154.20.xx', u'104.154.72.xxx', u'104.154.243.xxx', u'104.155.131.xxx', u'104.155.181.xx', u'104.197.17.xx', u'104.197.104.xxx', u'104.197.166.xxx', u'104.197.171.xxx', u'104.197.255.xxx', u'104.198.28.188', u'104.198.52.xxx', u'104.198.131.xxx', u'104.198.162.xxx', u'104.198.172.xxx', u'130.211.120.8', u'130.211.238.xxx', u'146.148.61.xx'] to [u'104.154.20.xx', u'104.154.72.xx', u'104.155.131.xxx', u'104.155.181.xx', u'104.197.17.xx', u'104.197.104.xxx', u'104.197.166.xxx', u'104.197.171.xxx', u'104.197.255.xxx', u'104.198.28.188', u'104.198.52.xxx', u'104.198.131.xxx', u'104.198.162.xxx', u'104.198.172.xxx', u'130.211.120.8', u'130.211.238.xxx', u'146.148.61.xx'] by adding None and removing [u'104.154.243.135'].
Mar 07 13:43:45 kube-minion-0fqw google-ip-forwarding[935]: INFO Changing eth0 IPs from [u'104.154.20.xx', u'104.154.72.xxx', u'104.155.131.xxx', u'104.155.181.xx', u'104.197.17.xx', u'104.197.104.xxx', u'104.197.166.xxx', u'104.197.171.xxx', u'104.197.255.xxx', u'104.198.28.188', u'104.198.52.xxx', u'104.198.131.xxx', u'104.198.162.xxx', u'104.198.172.xxx', u'130.211.120.8', u'130.211.238.xxx', u'146.148.61.xx'] to [u'104.154.20.xx', u'104.154.72.xx', u'104.155.131.xxx', u'104.155.181.xx', u'104.197.17.xx', u'104.197.104.xxx', u'104.197.166.xxx', u'104.197.171.xxx', u'104.197.255.xxx', u'104.198.52.xxx', u'104.198.131.xxx', u'104.198.162.xxx', u'104.198.172.xxx', u'130.211.120.8', u'130.211.238.xxx', u'146.148.61.xx'] by adding None and removing [u'104.198.28.188'].
Mar 07 13:44:01 kube-minion-0fqw google-ip-forwarding[935]: INFO Changing eth0 IPs from [u'104.154.20.xx', u'104.154.72.xxx', u'104.155.131.xxx', u'104.155.181.xx', u'104.197.17.xx', u'104.197.104.xxx', u'104.197.166.xxx', u'104.197.171.xxx', u'104.197.255.xxx', u'104.198.52.xxx', u'104.198.131.xxx', u'104.198.162.xxx', u'104.198.172.xxx', u'130.211.120.8', u'130.211.238.xxx', u'146.148.61.xx'] to [u'104.154.20.xx', u'104.154.72.xx', u'104.155.131.xxx', u'104.155.181.xx', u'104.197.17.xx', u'104.197.104.xxx', u'104.197.166.xxx', u'104.197.255.xxx', u'104.198.52.xxx', u'104.198.131.xxx', u'104.198.162.xxx', u'104.198.172.xxx', u'130.211.120.8', u'130.211.238.xxx', u'146.148.61.xx'] by adding None and removing [u'104.197.171.xxx'].
Mar 07 13:44:17 kube-minion-0fqw google-ip-forwarding[935]: INFO Changing eth0 IPs from [u'104.154.20.xx', u'104.154.72.xxx', u'104.155.131.xxx', u'104.155.181.xx', u'104.197.17.xx', u'104.197.104.xxx', u'104.197.166.xxx', u'104.197.255.xxx', u'104.198.52.xxx', u'104.198.131.xxx', u'104.198.162.xxx', u'104.198.172.xxx', u'130.211.120.8', u'130.211.238.xxx', u'146.148.61.xx'] to [u'104.154.20.xx', u'104.154.72.xx', u'104.155.131.xxx', u'104.155.181.xx', u'104.197.17.xx', u'104.197.104.xxx', u'104.197.166.xxx', u'104.197.255.xxx', u'104.198.52.xxx', u'104.198.131.xxx', u'104.198.162.xxx', u'104.198.172.xxx', u'130.211.238.xxx', u'146.148.61.xx'] by adding None and removing [u'130.211.120.8'].
Mar 07 13:44:37 kube-minion-0fqw google-ip-forwarding[935]: INFO Changing eth0 IPs from [u'104.154.20.xx', u'104.154.72.xxx', u'104.155.131.xxx', u'104.155.181.xx', u'104.197.17.xx', u'104.197.104.xxx', u'104.197.166.xxx', u'104.197.255.xxx', u'104.198.52.xxx', u'104.198.131.xxx', u'104.198.162.xxx', u'104.198.172.xxx', u'130.211.238.xxx', u'146.148.61.xx'] to [u'104.154.20.xx', u'104.154.72.xx', u'104.155.131.xxx', u'104.155.181.xx', u'104.197.17.xx', u'104.197.104.xxx', u'104.197.166.xxx', u'104.197.255.xxx', u'104.198.131.xxx', u'104.198.162.xxx', u'104.198.172.xxx', u'130.211.238.xxx', u'146.148.61.xx'] by adding None and removing [u'104.198.52.xxx'].
Mar 07 13:44:53 kube-minion-0fqw google-ip-forwarding[935]: INFO Changing eth0 IPs from [u'104.154.20.xx', u'104.154.72.xxx', u'104.155.131.xxx', u'104.155.181.xx', u'104.197.17.xx', u'104.197.104.xxx', u'104.197.166.xxx', u'104.197.255.xxx', u'104.198.131.xxx', u'104.198.162.xxx', u'104.198.172.xxx', u'130.211.238.xxx', u'146.148.61.xx'] to [u'104.154.20.xx', u'104.154.72.xx', u'104.155.131.xxx', u'104.155.181.xx', u'104.197.104.xxx', u'104.197.166.xxx', u'104.197.255.xxx', u'104.198.131.xxx', u'104.198.162.xxx', u'104.198.172.xxx', u'130.211.238.xxx', u'146.148.61.xx'] by adding None and removing [u'104.197.17.xx'].
Mar 07 13:45:10 kube-minion-0fqw google-ip-forwarding[935]: INFO Changing eth0 IPs from [u'104.154.20.xx', u'104.154.72.xxx', u'104.155.131.xxx', u'104.155.181.xx', u'104.197.104.xxx', u'104.197.166.xxx', u'104.197.255.xxx', u'104.198.131.xxx', u'104.198.162.xxx', u'104.198.172.xxx', u'130.211.238.xxx', u'146.148.61.xx'] to [u'104.154.20.xx', u'104.154.72.xx', u'104.155.131.xxx', u'104.197.104.xxx', u'104.197.166.xxx', u'104.197.255.xxx', u'104.198.131.xxx', u'104.198.162.xxx', u'104.198.172.xxx', u'130.211.238.xxx', u'146.148.61.xx'] by adding None and removing [u'104.155.181.xx'].
Mar 07 13:45:26 kube-minion-0fqw google-ip-forwarding[935]: INFO Changing eth0 IPs from [u'104.154.20.xx', u'104.154.72.xxx', u'104.155.131.xxx', u'104.197.104.xxx', u'104.197.166.xxx', u'104.197.255.xxx', u'104.198.131.xxx', u'104.198.162.xxx', u'104.198.172.xxx', u'130.211.238.xxx', u'146.148.61.xx'] to [u'104.154.20.xx', u'104.154.72.xx', u'104.155.131.xxx', u'104.197.104.xxx', u'104.197.166.xxx', u'104.198.131.xxx', u'104.198.162.xxx', u'104.198.172.xxx', u'130.211.238.xxx', u'146.148.61.xx'] by adding None and removing [u'104.197.255.xxx'].
Mar 07 13:45:43 kube-minion-0fqw google-ip-forwarding[935]: INFO Changing eth0 IPs from [u'104.154.20.xx', u'104.154.72.xxx', u'104.155.131.xxx', u'104.197.104.xxx', u'104.197.166.xxx', u'104.198.131.xxx', u'104.198.162.xxx', u'104.198.172.xxx', u'130.211.238.xxx', u'146.148.61.xx'] to [u'104.154.20.xx', u'104.154.72.xx', u'104.155.131.xxx', u'104.197.104.xxx', u'104.198.131.xxx', u'104.198.162.xxx', u'104.198.172.xxx', u'130.211.238.xxx', u'146.148.61.xx'] by adding None and removing [u'104.197.166.xxx'].
Mar 07 13:45:59 kube-minion-0fqw google-ip-forwarding[935]: INFO Changing eth0 IPs from [u'104.154.20.xx', u'104.154.72.xxx', u'104.155.131.xxx', u'104.197.104.xxx', u'104.198.131.xxx', u'104.198.162.xxx', u'104.198.172.xxx', u'130.211.238.xxx', u'146.148.61.xx'] to [u'104.154.20.xx', u'104.154.72.xx', u'104.155.131.xxx', u'104.197.104.xxx', u'104.198.131.xxx', u'104.198.162.xxx', u'104.198.172.xxx', u'130.211.238.xxx'] by adding None and removing [u'146.148.61.xx'].
Mar 07 13:46:19 kube-minion-0fqw google-ip-forwarding[935]: INFO Changing eth0 IPs from [u'104.154.20.xx', u'104.154.72.xxx', u'104.155.131.xxx', u'104.197.104.xxx', u'104.198.131.xxx', u'104.198.162.xxx', u'104.198.172.xxx', u'130.211.238.xxx'] to [u'104.154.20.xx', u'104.154.72.xx', u'104.155.131.xxx', u'104.198.131.xxx', u'104.198.162.xxx', u'104.198.172.xxx', u'130.211.238.xxx'] by adding None and removing [u'104.197.104.xxx'].
Mar 07 13:46:41 kube-minion-0fqw google-ip-forwarding[935]: INFO Changing eth0 IPs from [u'104.154.20.xx', u'104.154.72.xxx', u'104.155.131.xxx', u'104.198.131.xxx', u'104.198.162.xxx', u'104.198.172.xxx', u'130.211.238.xxx'] to [u'104.154.20.xx', u'104.154.72.xx', u'104.155.131.xxx', u'104.198.131.xxx', u'104.198.162.xxx', u'130.211.238.xxx'] by adding None and removing [u'104.198.172.xxx'].
Mar 07 13:46:58 kube-minion-0fqw google-ip-forwarding[935]: INFO Changing eth0 IPs from [u'104.154.20.xx', u'104.154.72.xxx', u'104.155.131.xxx', u'104.198.131.xxx', u'104.198.162.xxx', u'130.211.238.xxx'] to [u'104.154.20.xx', u'104.154.72.xx', u'104.155.131.xxx', u'104.198.131.xxx', u'130.211.238.xxx'] by adding None and removing [u'104.198.162.xxx'].
Mar 07 13:47:14 kube-minion-0fqw google-ip-forwarding[935]: INFO Changing eth0 IPs from [u'104.154.20.xx', u'104.154.72.xxx', u'104.155.131.xxx', u'104.198.131.xxx', u'130.211.238.xxx'] to [u'104.154.20.xx', u'104.154.72.xx', u'104.198.131.xxx', u'130.211.238.xxx'] by adding None and removing [u'104.155.131.xxx'].
Mar 07 13:47:34 kube-minion-0fqw google-ip-forwarding[935]: INFO Changing eth0 IPs from [u'104.154.20.xx', u'104.154.72.xxx', u'104.198.131.xxx', u'130.211.238.xxx'] to [u'104.154.20.xx', u'104.154.72.xx', u'130.211.238.xxx'] by adding None and removing [u'104.198.131.xxx'].
Mar 07 13:47:53 kube-minion-0fqw google-ip-forwarding[935]: INFO Changing eth0 IPs from [u'104.154.20.xx', u'104.154.72.xxx', u'130.211.238.xxx'] to [u'104.154.20.xx', u'130.211.238.xxx'] by adding None and removing [u'104.154.72.xxx'].
Mar 07 13:48:10 kube-minion-0fqw google-ip-forwarding[935]: INFO Changing eth0 IPs from [u'104.154.20.xx', u'130.211.238.xxx'] to [u'130.211.238.xxx'] by adding None and removing [u'104.154.20.xx'].
Mar 07 13:48:26 kube-minion-0fqw google-ip-forwarding[935]: INFO Changing eth0 IPs from [u'130.211.238.xxx'] to None by adding None and removing [u'130.211.238.xxx'].
</code></pre>
<p>When this happens, running PODs typically continue running yet sometimes become Unknown and are rescheduled. Any newly scheduled pods are stuck in ContainerCreating with these errors:</p>
<p><code>Failed create pod sandbox.</code> & <code>Error response from daemon: network none not found</code></p>
<p>We haven't found a fix to this symptom other than rebooting the instance.</p>
| <p>There are too many versions between Kubernetes 1.2 and 1.9.2, why are you trying to create a very old version and then upgrade it? I followed the same Kubernetes doc and got a cluster running 1.9.3 without issues. You should try the same and verify if the issue you described persist.</p>
<p>Additionally, the image <a href="https://cloud.google.com/container-optimized-os/docs/release-notes#cos-stable-63-10032-88-0" rel="nofollow noreferrer">cos-stable-63-10032-88-0</a> is not the latest release. You should also try running with <a href="https://cloud.google.com/container-optimized-os/docs/release-notes#cos-stable-64-10176-62-0" rel="nofollow noreferrer">cos-stable-64-10176-62-0</a> instead.</p>
|
<p>I am using the Minikube environment and I have defined the max memory using</p>
<pre><code>$] minikube config set memory 2048
</code></pre>
<p>Now I want to get this memory value using a Kubernetes API call in Go.
I have tried the following:</p>
<pre><code>import (
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
)

availableMem := kubernetes.Interface.StorageV1beta1().RESTClient().Get().Name("config")
</code></pre>
<p>But the output is not in a readable format.</p>
| <p>This code will fetch the allocatable memory of the first node in the cluster:</p>
<pre><code>nodeList, err := f.KubeClient.CoreV1().Nodes().List(metav1.ListOptions{})
if err == nil {
    if len(nodeList.Items) > 0 {
        node := &nodeList.Items[0]
        memQuantity := node.Status.Allocatable[v1.ResourceMemory]
        totalMemAvail = int(memQuantity.Value() >> 20)
    } else {
        t.Fatal("Unable to read node list")
        return
    }
} else {
    t.Fatalf("Error while reading node list data: %v", err)
}
</code></pre>
|
<p>I am trying to execute a Python program as a background process inside a container with <code>kubectl</code> as below (<code>kubectl</code> issued on local machine):</p>
<p><code>kubectl exec -it <container_id> -- bash -c "cd some-dir && (python xxx.py --arg1 abc &)"</code></p>
<p>When I log in to the container and check <code>ps -ef</code> I do not see this process running. Also, there is no output from <code>kubectl</code> command itself.</p>
<ul>
<li>Is the <code>kubectl</code> command issued correctly?</li>
<li>Is there a better way to achieve the same?</li>
<li>How can I see the output/logs printed off the background process being run?</li>
<li>If I need to stop this background process after some duration, what is the best way to do this?</li>
</ul>
| <p>The <a href="https://en.wikipedia.org/wiki/Nohup#Overcoming_hanging" rel="noreferrer">nohup</a> Wikipedia page can help; you need to redirect all three IO streams (stdout, stdin and stderr) - an example with <code>yes</code>:</p>
<pre><code>kubectl exec pod -- bash -c "yes > /dev/null 2> /dev/null &"
</code></pre>
<p><code>nohup</code> is not required in the above case because I did not allocate a pseudo terminal (no <code>-t</code> flag) and the shell was not interactive (no <code>-i</code> flag) so no <code>HUP</code> signal is sent to the <code>yes</code> process on session termination. See <a href="https://unix.stackexchange.com/questions/84737/in-which-cases-is-sighup-not-sent-to-a-job-when-you-log-out#answer-85296">this</a> answer for more details.</p>
<p>Redirecting <code>/dev/null</code> to stdin is not required in the above case since stdin already refers to <code>/dev/null</code> (you can see this by running <code>ls -l /proc/YES_PID/fd</code> in another shell).</p>
<p>To see the output you can instead redirect stdout to a file.</p>
<p>To stop the process you'd need to identity the PID of the process you want to stop (<a href="https://linux.die.net/man/1/pgrep" rel="noreferrer">pgrep</a> could be useful for this purpose) and send a fatal signal to it (<code>kill PID</code> for example).</p>
<p>If you want to stop the process after a fixed duration, <a href="https://linux.die.net/man/1/timeout" rel="noreferrer">timeout</a> might be a better option.</p>
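<p>Putting the pieces together for the original command (the log path and the one-hour limit are illustrative), one way to run the script in the background with its output kept in a file and an upper bound on its runtime:</p>
<pre><code>kubectl exec <container_id> -- bash -c \
  "cd some-dir && timeout 3600 python xxx.py --arg1 abc > /tmp/xxx.log 2>&1 &"
</code></pre>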
|
<p>I'm trying to set up a frontend for my two web applications by using an ingress controller in a local Kubernetes cluster. I followed all the steps outlined in [1] and the detailed instructions in [2]. But so far, no luck. The error I got is the following:</p>
<pre><code>Warning CreatingLoadBalancerFailed Error creating load balancer (will retry): failed to ensure load balancer for service default/frontend: error creating loadbalancer a58617b3f260011e8ad84fa163e0c90a: error creating loadbalancer {a58617b3f260011e8ad84fa163e0c90a
Kubernetes external service a58617b3f260011e8ad84fa163e0c90a 7b4db6f7-3fc1-4c07-a84d-5c15b46e3ac2 <nil> }: Expected HTTP response code [201 202] when accessing [POST https://host.xyz.com:9696/v2.0/lbaas/loadbalancers],
but got 409 instead
{"NeutronError": {"message": "Quota exceeded for resources: ['loadbalancer'].", "type": "OverQuota", "detail": ""}}
</code></pre>
<p>and my service stays in a pending state.</p>
<p>So far I have no idea where to look to identify the problem and would appreciate any advice.</p>
<p>The manifest YAML file is almost identical to [2]; it only lists the https interface. But here it is for completeness:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - secretName: ing-secret
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-lb
    containers:
    - args:
      - /nginx-ingress-controller
      - "--default-backend-service=$(POD_NAMESPACE)/default-http-backend"
      - "--default-ssl-certificate=$(POD_NAMESPACE)/ing-secret"
      env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      - name: POD_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
      image: "gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.5"
  rules:
  - host: host.xxx.com
    https:
      paths:
      - path: /app1
        backend:
          serviceName: app1
          servicePort: 8881
      - path: /app2
        backend:
          serviceName: app2
          servicePort: 8882
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  ports:
  - name: https
    port: 443
    targetPort: https
  selector:
    k8s-app: nginx-ingress-lb
</code></pre>
<p>[1] <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/</a></p>
<p>[2] <a href="https://hackernoon.com/setting-up-nginx-ingress-on-kubernetes-2b733d8d2f45" rel="nofollow noreferrer">https://hackernoon.com/setting-up-nginx-ingress-on-kubernetes-2b733d8d2f45</a></p>
| <p>It looks like OpenStack lbaas is being used by kubernetes to provision the load balancer service underlying the ingress resource (see <code>https://host.xyz.com:9696/v2.0/lbaas/loadbalancers</code> in the error log)</p>
<p>AFAIK this error indicates a resource limit has been reached on the number of load balancers provisioned. I would raise the issue with your cluster administrator.</p>
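<p>If you have the OpenStack CLI and credentials for the tenant, one way to confirm this (a sketch; the exact commands available depend on your OpenStack and client versions) is to compare the existing load balancers against the quota:</p>
<pre><code>openstack quota show $OS_PROJECT_NAME    # look for the load balancer related entries
neutron lbaas-loadbalancer-list          # load balancers already counting against the quota
</code></pre>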
|
<p>I am trying to get the mount point details of the host from a Kubernetes pod. It is a privileged container.
Even if I mount the root file system, I am not able to check the mount details of a particular type, say <code>s3fs</code>, maybe because it belongs to a different namespace.
What is the best way to share the mount namespace?</p>
| <p>I did some research and found that Kubernetes provides an option called <code>MountPropagation</code> which helps to achieve my requirement.</p>
<p>I tested this feature in my local setup and it did give me the result I wanted.</p>
<p>A few links that I found useful:</p>
<p><a href="https://medium.com/kokster/kubernetes-mount-propagation-5306c36a4a2d" rel="nofollow noreferrer">https://medium.com/kokster/kubernetes-mount-propagation-5306c36a4a2d</a>
<a href="https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation</a></p>
|
<p>I installed a kubernetes cluster using <em>kubeadm</em> following this <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="noreferrer">guide</a>. After some period of time, I decided to reinstall K8s but ran into trouble removing all related files, and could not find any docs on the official site about how to remove a cluster installed via kubeadm.
Has anyone run into the same problem and knows the proper way of removing all files and dependencies? Thank you in advance.</p>
<p>For more information, I removed <em>kubeadm</em>, <em>kubectl</em> and <em>kubelet</em> using <code>apt-get purge/remove</code> but when I started installing the cluster again I got next errors:</p>
<pre><code>[preflight] Some fatal errors occurred:
Port 6443 is in use
Port 10251 is in use
Port 10252 is in use
/etc/kubernetes/manifests is not empty
/var/lib/kubelet is not empty
Port 2379 is in use
/var/lib/etcd is not empty
</code></pre>
| <p>In my "Ubuntu 16.04", I use next steps to completely remove and clean Kubernetes (installed with "apt-get"): </p>
<pre><code>kubeadm reset
sudo apt-get purge kubeadm kubectl kubelet kubernetes-cni kube*
sudo apt-get autoremove
sudo rm -rf ~/.kube
</code></pre>
<p>And <strong>restart</strong> the computer. </p>
|
<p>I'm currently exploring more and more of Google Cloud Platform and try to evaluate how I can change my default stack. I like the idea of using all the built-in tooling instead of having to go to various Saas products. One of the more open questions has been how to make use of Stackdriver Monitoring. I understand that the built-in monitoring uses Heapster with the Google Cloud Sink. However, how do I expose my own custom metrics to Stackdriver for use? I don't mean the logs-based metrics, but the real thing?</p>
<p>I know I could always install the agent myself onto the cluster, but it appears to me that as part of Google Container Engine, it is already running, so how could I push metrics to it?</p>
| <p>To answer my own question, the best strategy for now seems to be using <a href="https://github.com/GoogleCloudPlatform/k8s-stackdriver/tree/master/prometheus-to-sd" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/k8s-stackdriver/tree/master/prometheus-to-sd</a></p>
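<p>For completeness, the usual pattern is to run prometheus-to-sd as a sidecar next to the container that exposes Prometheus metrics. The sketch below follows the project's README; the image tag, metrics port, source name and exact flags are assumptions that should be verified against the repository:</p>
<pre><code>containers:
- name: my-app
  image: my-app:latest                 # assumed: your app exposing Prometheus metrics on :8080
  ports:
  - containerPort: 8080
- name: prometheus-to-sd
  image: gcr.io/google-containers/prometheus-to-sd:v0.2.4   # check the repo for a current tag
  command:
  - /monitor
  - --source=my-app:http://localhost:8080
  - --stackdriver-prefix=custom.googleapis.com
</code></pre>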
|
<p>I am trying to get the mount point details of the host from a kubernetes pod. It is a privileged container.
Even if I mount the root file system, I am not able to check the mount details of a particular type, say <code>s3fs</code>, maybe because it belongs to a different namespace.
What is the best way to share the mount point namespace?</p>
| <p>If you really just want details of the host's mount points rather than access to them you can run your Pod with <code>hostPID: true</code> and then inspect the mounts of a process in the <code>proc</code> filesystem that you know is running in the host's mount namespace (for example PID 1) like so: <code>cat /proc/1/mounts</code></p>
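<p>A minimal pod sketch for this approach (the image is an assumption):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: host-mount-reader
spec:
  hostPID: true                  # share the host's PID namespace
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    # PID 1 lives in the host's mount namespace, so this prints the host's mounts
    command: ["cat", "/proc/1/mounts"]
</code></pre>
<p><code>kubectl logs host-mount-reader</code> then shows the host's mount table.</p>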
|
<p>I am trying to get the POST and GET requests working (via Postman) but this is what I'm getting,</p>
<p><strong>GET Request</strong></p>
<pre><code>curl -X GET http://localhost:8080/api/v1/namespaces/default/pods/mypod/exec
</code></pre>
<p><strong>GET Response</strong></p>
<pre><code>{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "the server could not find the requested resource",
"reason": "NotFound",
"details": {
},
"code": 404
}
</code></pre>
<p><strong>POST Request</strong></p>
<pre><code>curl -X POST 'http://localhost:8080/api/v1/namespaces/default/pods/mypod/exec?command=ls&container=my-container&stderr=true&stdout=true'
</code></pre>
<p><strong>POST Response</strong></p>
<pre><code>{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "Upgrade request required",
"reason": "BadRequest",
"code": 400
}
</code></pre>
<p>Any idea on how to get these requests working? What parameters need to be changed?</p>
<p>Thanks</p>
| <p>I think you are trying to exec into a pod. A websocket connection is required if you want to exec into a pod.</p>
<p>For a websocket connection, an http(s) call is made first, followed by an upgrade request to websocket using the <em>HTTP Upgrade</em> header.</p>
<p>curl does not support upgrade from http to websocket. Hence the error.</p>
<p><em><a href="https://kubernetes.io/docs/tasks/tools/install-kubectl/" rel="nofollow noreferrer">kubectl</a> exec</em> would be handy for this use case.</p>
<p>You can also try other cli tools which support websockets, like</p>
<ul>
<li><a href="https://github.com/websockets/wscat" rel="nofollow noreferrer">wscat</a></li>
</ul>
|
<p>When i run the kubectl version command , I get the following error message.</p>
<pre><code>kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.0", GitCommit:"925c127ec6b946659ad0fd596fa959be43f0cc05", GitTreeState:"clean", BuildDate:"2017-12-15T21:07:38Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: dial tcp 192.168.99.100:8443: i/o timeout
</code></pre>
<p>How do I resolve this?</p>
| <p>You can get relevant information about the client-server status by using the following command.</p>
<pre><code>kubectl config view
</code></pre>
<p>Now you can update or set k8s context accordingly with the following command.</p>
<pre><code>kubectl config use-context CONTEXT-CHOSEN-FROM-PREVIOUS-COMMAND-OUTPUT
</code></pre>
<p>You can perform further actions on the kubeconfig file; the following command will provide you with all the necessary information.</p>
<pre><code>kubectl config --help
</code></pre>
|
<p>We're trying to run a kubernetes cluster with three namespaces:</p>
<ul>
<li><code>public</code> contains services accessible to anyone</li>
<li><code>internal</code> contains services that should only be accessed by members of staff</li>
<li><code>engineering</code> contains services that should only be visible to developers</li>
</ul>
<p>The <code>internal</code> and <code>engineering</code> namespaces are protected with mutual authentication, each using a different certificate authority.</p>
<p>We're using Traefik as to manage ingresses to these services however, following the <a href="https://docs.traefik.io/user-guide/kubernetes/" rel="noreferrer">kubernetes guide</a> to set up Traefik, grants each instance of Traefik permissions to see ingresses across all namespaces. This means you could use a service in the <code>internal</code> namespace via the Traefik instance running in the <code>public</code> namespace, bypassing the mututal authentication.</p>
<p>We're working around this by setting hosts on the ingresses, but this means ingresses must be defined seperately for each environment (eg, <code>host: engineering.example.com</code> is different to <code>host: engineering.staging.example.com</code>). We'd prefer to leave the host out of the ingress configuration.</p>
<p>In theory, using RBAC we, should be able to restrict what Traefik is allowed to see to just resources in it's own ingresses as suggested in <a href="https://kubernetes.io/docs/admin/authorization/rbac/#rolebinding-and-clusterrolebinding" rel="noreferrer">this guide on RBAC</a>.</p>
<p>My understanding is that it still needs ClusterRole permissions such as this:</p>
<pre><code>---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
</code></pre>
<p>But using a Role binding instead of a ClusterRole binding will then restrict those permissions to just the ones in the given service accounts namespace. So if the service account is in the engineering namespace:</p>
<pre><code>---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
namespace: engineering
</code></pre>
<p>Then the role binding would be:</p>
<pre><code>---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
namespace: engineering
roleRef:
kind: ClusterRole
apiGroup: rbac.authorization.k8s.io
name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
name: traefik-ingress-controller
</code></pre>
<p>We then tie the service account to the Traefik deployment with:</p>
<pre><code>---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: traefik-ingress-controller
namespace: engineering
spec:
replicas: 1
template:
spec:
serviceAccountName: traefik-ingress-controller
...
</code></pre>
<p>We also set the namespaces in the config as per the kubernetes <a href="https://docs.traefik.io/configuration/backends/kubernetes/" rel="noreferrer">configuration guide</a></p>
<pre><code>[kubernetes]
namespaces = ["engineering"]
</code></pre>
<p>However, when Traefik starts we get the error:</p>
<p><code>E0313 11:15:57.971237 1 reflector.go:199] github.com/containous/traefik/vendor/k8s.io/client-go/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: endpoints is forbidden: User "system:serviceaccount:engineering:traefik-ingress-controller" cannot list endpoints at the cluster scope: Unknown user "system:serviceaccount:engineering:traefik-ingress-controller"</code></p>
<p>The <code>Unknown user</code> is confusing as this is obviously binding a <code>ServiceAccount</code> not a user. Additionally, we can see the <code>ServiceAccount</code> has been created via kubectl.</p>
<p>I'm at a bit of a dead end here. </p>
<p>How do I make Traefik only pick up Ingresses in it's own namespace?</p>
| <p>This error may occur when Traefik believes that no namespaces were configured; that is, the TOML configuration you outlined</p>
<pre><code>[kubernetes]
namespaces = ["engineering"]
</code></pre>
<p>is not becoming effective.</p>
<p>I can think of two reasons:</p>
<ol>
<li>In addition to the TOML configuration file, you are also passing a <code>--kubernetes</code> command-line argument to Traefik (through an <code>args</code> entry in the Deployment manifest). This would disable the <code>namespaces</code> option.</li>
<li>The file is not properly mounted into the Deployment, causing the default <code>namespaces</code> value (the empty list) to be effective. To tell whether that's truly the case, we need to see your full ConfigMap and the relevant volume sections of your Deployment manifests. (A minimal example of such a mount is sketched after this list.)</li>
</ol>
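<p>As an illustration of the second point, here is a hedged sketch of how the TOML file is usually delivered via a ConfigMap and mounted into the Deployment (names, mount path and the exact Traefik flag spelling are assumptions to adapt to your setup):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-config
  namespace: engineering
data:
  traefik.toml: |
    [kubernetes]
    namespaces = ["engineering"]
---
# relevant fragment of the Deployment's pod spec
spec:
  template:
    spec:
      containers:
      - name: traefik-ingress-controller
        # point Traefik at the mounted file; verify the flag name for your Traefik version
        args:
        - --configfile=/config/traefik.toml
        volumeMounts:
        - name: config
          mountPath: /config
      volumes:
      - name: config
        configMap:
          name: traefik-config
</code></pre>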
|
<p>Is it possible to have multiple handlers in a container probe ?
Something like </p>
<pre><code>livenessProbe: {
httpGet: {
path: "/ping",
port: 9099
},
exec: {
command: [
"verify-correctness.sh",
]
}
}
</code></pre>
<hr>
<p>Update:</p>
<p>At Kube 1.6x <code>kubectl apply</code> for a config like this returns </p>
<pre><code>spec.template.spec.containers[0].livenessProbe.httpGet: Forbidden: may not specify more than 1 handler type
</code></pre>
<p>So maybe not supported ? </p>
<hr>
<p>Update 2:</p>
<p>After <a href="https://stackoverflow.com/a/49261781/5771861">Ara Pulido</a>'s answer I combined the httpGet into the command like this:</p>
<pre><code> "livenessProbe": {
"exec": {
"command": [
"sh",
"-c",
"reply=$(curl -s -o /dev/null -w %{http_code} http://127.0.0.1:9099/ping); if [ \"$reply\" -lt 200 -o \"$reply\" -ge 400 ]; then exit 1; fi; verify-correctness.sh;"
]
}
}
</code></pre>
| <p>It is not supported.</p>
<p>There is <a href="https://github.com/kubernetes/kubernetes/issues/37218" rel="nofollow noreferrer">an open issue</a> about this, which contains several workarounds people use.</p>
|
<p>I'm using <a href="https://github.com/rancher/rke" rel="noreferrer">rke</a> to generate a Kubernetes cluster in a private cloud. It produces a <code>kube_config_cluster.yml</code> file. Is there a way to add this config to my <code>$HOME/.kube/config</code> file?</p>
<p>Without having the .kube/config set, when using <code>kubectl</code>, I have to pass the argument:</p>
<pre><code>kubectl --kubeconfig kube_config_cluster.yml <command>
</code></pre>
<p>Or set the KUBECONFIG environment variable.</p>
<pre><code>export KUBECONFIG=kube_config_cluster.yml
</code></pre>
| <p><code>kubectl config merge</code> command is <a href="https://github.com/kubernetes/kubernetes/issues/46381" rel="noreferrer">not yet available</a>. But you can achieve a config merge by running:</p>
<p><strong>Command format:</strong></p>
<pre><code>KUBECONFIG=config1:config2 kubectl config view --flatten
</code></pre>
<p><strong>Example:</strong></p>
<p>Merge a config to <code>~/.kube/config</code> and write back to <code>~/.kube/config-new.yaml</code>.</p>
<h3>Do not pipe directly to the config file! Otherwise, it will delete all your old content!</h3>
<pre><code>KUBECONFIG=~/.kube/config:/path/to/another/config.yml kubectl config view --flatten > ~/.kube/config-new.yaml
</code></pre>
<p><code>cp ~/.kube/config-new.yaml ~/.kube/config</code></p>
|
<p>Should traffic from clients (the outside world) to a service inside k8s come in through master nodes or worker nodes? And why?</p>
<p>From what I have seen so far, docs always show LB pools consisting of master nodes instead of worker nodes. Is there a reason for this?</p>
<p>In a big cluster, would it be more beneficial to send all traffic to a few designated worker nodes?</p>
<p>For example:
let's say my k8s cluster has 2 master nodes, 4 worker nodes, and an external load balancer. Most examples out there load balance incoming traffic to the 2 master nodes instead of the 4 worker nodes. Why is this? Is there a reason in terms of efficiency/performance? </p>
<p>Please advise. Thank you.</p>
| <p>What do you mean by the traffic going through worker nodes or the master node? You expose the service for your pods to the outside world via NodePort or LoadBalancer. So whoever hits the LoadBalancer, or reaches a node on the corresponding port, is redirected to the corresponding service. </p>
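<p>For example, a minimal NodePort Service sketch (names, labels and ports are assumptions) that makes the pods reachable on every node - worker or master - at the allocated node port:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: NodePort
  selector:
    app: my-app          # assumed pod label
  ports:
  - port: 80             # service port inside the cluster
    targetPort: 8080     # assumed container port
    nodePort: 30080      # optional; reachable as <any-node-ip>:30080
</code></pre>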
|
<p>Is it possible to use <code>gcloud container cluster create</code> to create a node pool for GKE using <strong>custom</strong> machine types (<a href="https://cloud.google.com/compute/docs/instances/creating-instance-with-custom-machine-type" rel="noreferrer">https://cloud.google.com/compute/docs/instances/creating-instance-with-custom-machine-type</a>)?</p>
<p>Instead of n1-standard-1/etc, I would like to create an instance with 4 vCPU and 8 GB memory (for example).</p>
<p>I know this is possible in the UI, but I want to wrap this <code>gcloud</code> command in a script.</p>
| <p>Seems like you are trying to use <em>custom machine types</em> rather than <a href="https://cloud.google.com/compute/docs/machine-types#standard_machine_types" rel="noreferrer">standard machine types</a> and want to use <a href="https://cloud.google.com/sdk/gcloud/reference/container/clusters/create" rel="noreferrer">gcloud command</a> for it like <code>gcloud container cluster create</code>.</p>
<p>This is actually supported by a beta gcloud command and you can create a cluster with custom machines by specifying the machine type as below</p>
<blockquote>
<p>--machine-type “custom-{cpus}-{MiB-ram}”</p>
</blockquote>
<p>For the example you have provided 4 vCPU and 8 GB memory, the command would be something like</p>
<p><code>gcloud beta container --project [project name] clusters create [cluster name] --zone [zone name] --username [username] --cluster-version "1.8.7-gke.1" --machine-type "custom-4-8192" ......</code></p>
<p>Hope this helps.</p>
|
<p>I'm trying to do a POC where I can use kubernetes with virtual switch connection type as Internal only. I managed to start minikube and cluster.</p>
<pre><code>PS C:\WINDOWS\system32> minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at <**Any random Ip**>
</code></pre>
<p>but when I run <code>Minikube Dashboard</code> command I'm getting following error.</p>
<pre><code>PS C:\WINDOWS\system32> minikube dashboard
Could not find finalized endpoint being pointed to by kubernetes-dashboard: Error validating service: Error getting serv
ice kubernetes-dashboard: Get https://<**Same Ip as above**>:8443/api/v1/namespaces/kube-system/services/kubernetes-dashboard:
Service Unavailable
</code></pre>
<p><strong>Following are the details</strong>:</p>
<pre><code>Driver : HyperV
OS : Windows 10
Dynamic Memory allocation : Disabled
Dynamic MAC allocation : Disabled
NATSwitch connection type : Internal Only
Minikube version : minikube-v0.25.1
Kubectl version : 1.9.0
</code></pre>
<p>(<strong>With External it works perfectly fine</strong>; I need help with <strong>Internal</strong>, please refer to the screenshots)</p>
<p>[<img src="https://i.stack.imgur.com/KbXAH.jpg" alt="Wi-fi property settings[1]">
<a href="https://i.stack.imgur.com/Ae16d.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ae16d.jpg" alt="HyperV virtual switch settings"></a></p>
| <p>I found the solution for this particular issue and I would like to share with others.</p>
<p>The <strong>root cause</strong> of this issue was proxy settings: our network administrator had set a proxy policy, and that was preventing access to the dashboard, which was being exposed at <code>https://<**Same Ip as above**>:8443/api/v1/namespaces/kube-system/services/kubernetes-dashboard</code></p>
<p>The <strong>solution</strong> is to check your proxy configuration and update it accordingly.</p>
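<p>As a hedged example of what that can look like on the machine running kubectl (PowerShell; the VM address below is a placeholder for whatever <code>minikube status</code> reports):</p>
<pre><code># exclude the minikube VM address from the proxy
$env:NO_PROXY = "$env:NO_PROXY,<minikube-vm-ip>"
# only if your network actually requires a proxy for other traffic
$env:HTTP_PROXY = "http://your-proxy:8080"
$env:HTTPS_PROXY = "http://your-proxy:8080"
</code></pre>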
|
<p>I would like to access my Kubernetes bare-metal cluster with an exposed Nginx Ingress Controller for TLS termination. To be able to automate certificate renewal, I would like to use the Kubernetes addon <a href="https://github.com/jetstack/cert-manager" rel="nofollow noreferrer">cert-manager</a>, which is kube-lego's successor.</p>
<p>What I have done so far:</p>
<ul>
<li><p>Set up a Kubernetes (v1.9.3) cluster on bare-metal (1 master, 1 minion, both running Ubuntu 16.04.4 LTS) with kubeadm and flannel as pod network following this <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">guide</a>.</p></li>
<li><p>Installed <a href="https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress" rel="nofollow noreferrer">nginx-ingress</a> (chart version 0.9.5) with Kubernetes package manager <a href="https://github.com/kubernetes/helm" rel="nofollow noreferrer">helm</a><br/>
<code>helm install --name nginx-ingress --namespace kube-system stable/nginx-ingress --set controller.hostNetwork=true,rbac.create=true,controller.service.type=ClusterIP</code></p></li>
<li><p>Installed <a href="https://github.com/kubernetes/charts/tree/master/stable/cert-manager" rel="nofollow noreferrer">cert-manager</a> (chart version 0.2.2) with helm<br/>
<code>helm install --name cert-manager --namespace kube-system stable/cert-manager --set rbac.create=true</code></p></li>
</ul>
<p>The Ingress Controller is exposed successfully and works as expected when I test with an Ingress resource. For proper Let's Encrypt certificate management and automatic renewal with cert-manager I do first of all need an Issuer resource. I created it from this <em>acme-staging-issuer.yaml</em>:<br/></p>
<pre><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
name: letsencrypt-staging
namespace: default
spec:
acme:
server: https://acme-staging.api.letsencrypt.org/directory
email: [email protected]
privateKeySecretRef:
name: letsencrypt-staging
http01: {}
</code></pre>
<p><em>kubectl create -f acme-staging-issuer.yaml</em> runs successfully but <em>kubectl describe issuer/letsencrypt-staging</em> gives me:</p>
<pre><code>Status:
Acme:
Uri:
Conditions:
Last Transition Time: 2018-03-05T21:29:41Z
Message: Failed to verify ACME account: Get https://acme-staging.api.letsencrypt.org/directory: tls: oversized record received with length 20291
Reason: ErrRegisterACMEAccount
Status: False
Type: Ready
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ErrVerifyACMEAccount 1s (x11 over 7s) cert-manager-controller Failed to verify ACME account: Get https://acme-staging.api.letsencrypt.org/directory: tls: oversized record received with length 20291
Warning ErrInitIssuer 1s (x11 over 7s) cert-manager-controller Error initializing issuer: Get https://acme-staging.api.letsencrypt.org/directory: tls: oversized record received with length 20291
</code></pre>
<p>Without a ready Issuer, I can not proceed to generate cert-manager Certificates or utilse the ingress-shim (for automatic renewal).</p>
<p>What am I missing in my setup? Is it sufficient to expose the ingress controller using <em>hostNetwork=true</em> or is there a better way to expose the its ports 80 and 443 on a bare-metal cluster? How can I resolve <em>tls: oversized record received</em> error when creating a cert-manager Issuer resource?</p>
| <p>The <em>tls: oversized record received</em> error was caused by a misconfigured <code>/etc/resolv.conf</code> of the Kubernetes minion. It could be resolved by editing it like this:</p>
<pre><code>$ sudo vi /etc/resolvconf/resolv.conf.d/base
</code></pre>
<p>Add nameserver list:</p>
<pre><code>nameserver 8.8.8.8
nameserver 8.8.4.4
</code></pre>
<p>Update resolvconf:</p>
<pre><code>$ sudo resolvconf -u
</code></pre>
|
<p>Kubernetes <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset" rel="nofollow noreferrer">StatefulSets</a> create internal DNS entries with stable network IDs. The docs describe this here:</p>
<blockquote>
<p>Each Pod in a StatefulSet derives its hostname from the name of the
StatefulSet and the ordinal of the Pod. The pattern for the
constructed hostname is $(statefulset name)-$(ordinal). The example
above will create three Pods named web-0,web-1,web-2. A StatefulSet
can use a Headless Service to control the domain of its Pods. The
domain managed by this Service takes the form: $(service
name).$(namespace).svc.cluster.local, where “cluster.local” is the
cluster domain. As each Pod is created, it gets a matching DNS
subdomain, taking the form: $(podname).$(governing service domain),
where the governing service is defined by the serviceName field on the
StatefulSet.</p>
</blockquote>
<p>I am experimenting with headless services, and this works great for communication between individual services i.e <code>web-0.web.default.svc.cluster.local</code> can connect and communicate with <code>web-1.web.default.svc.cluster.local</code> just fine.</p>
<p><strong>Is there any way that I can configure this to work outside of the cluster network as well, where "cluster.local" is replaced with something like "clustera.com"?</strong></p>
<p>I would like to give another kubernetes cluster, lets call it clusterb.com, access to the individual services of the original cluster (clustera.com); I'm hoping it would look something like clusterb simply hitting endpoints like <code>web-1.web.default.svc.clustera.com</code> and <code>web-0.web.default.svc.clustera.com</code>.</p>
<p>Is this possible? I would like access to the individual services, not a load balanced endpoint.</p>
| <p>I would suggest you to test the following solutions and check if they can help you to achieve your goal in your particular scenario:</p>
<ul>
<li><p>The first one is for sure the easiest, and I believe you didn't implement it for some reason that you did not report in the question.</p>
<p>I am talking about <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">Headless services</a> without selectors, i.e. CNAME records for ExternalName-type services. </p>
<blockquote>
<p><a href="https://akomljen.com/kubernetes-tips-part-1/" rel="nofollow noreferrer">ExternalName</a>: Maps the service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value. No proxying of any kind is set up. This requires version 1.7 or higher of kube-dns</p>
</blockquote>
<p>Therefore, if you need to point to a service of another cluster, you will need to register a domain name pointing to the respective IP of <code>clusterb</code> (a minimal ExternalName sketch follows this list).</p></li>
<li><p>The second solution, which I have never tested but believe can apply to your case, is to make use of a <a href="https://kubernetes.io/docs/concepts/cluster-administration/federation/" rel="nofollow noreferrer">Federated Cluster</a>; according to the documentation, one reason to use it is:</p>
<blockquote>
<p>Cross cluster discovery: Federation provides the ability to auto-configure DNS servers and load balancers with backends from all clusters. For example, you can ensure that a global VIP or DNS record can be used to access backends from multiple clusters. </p>
</blockquote></li>
</ul>
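<p>As a minimal sketch of the first option, created in the consuming cluster (the external domain name is hypothetical and must already resolve to the other cluster's published endpoint):</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: web-0-clustera
  namespace: default
spec:
  type: ExternalName
  # hypothetical externally registered name for the pod in the other cluster
  externalName: web-0.web.clustera.example.com
</code></pre>
<p>Pods in the consuming cluster can then use <code>web-0-clustera.default.svc.cluster.local</code>, and kube-dns returns a CNAME to the external name.</p>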
|
<p>I am actually working on a development environment using Gradle, Docker, Minikube and Helm.</p>
<p>I am using a bunch of bash scripts to get things done.</p>
<p>What I achieved so far is:</p>
<ol>
<li>Gradle builds the jar using a plugin for versioning.</li>
<li>Gradle builds a docker image with the same version calculated on the jar job.</li>
</ol>
<p>Now I need to be able to propagate the version calculated by Gradle to Helm so it can pick the right docker image.</p>
<p>The approach I already have in mind is to define an environment variable so it can be used by Helm.</p>
<p>The problem is that I would need to redefine it afterward.</p>
<p>Is there any better way of doing that? </p>
| <p>Most Helm charts contain at least the following in their <code>values.yaml</code> file, which sets a default docker image tag, and also allows the user installing/upgrading the chart to specify a different image without having to modify the chart itself.</p>
<pre><code># values.yaml
image:
repository: <docker-repo-url-here>
tag: <docker-image-tag-here>
</code></pre>
<p>And in the deployment yaml, fetch the values from the values.yaml</p>
<pre><code># deployment.yaml
kind: Deployment
spec:
template:
spec:
containers:
- name: container-name
image: "{{ .Values.image.repository }}:{{ .Values.image.tag}}"
</code></pre>
<p>From there, you can do a simple <code>helm upgrade <release-name> <chart-path> --set image.tag=<new-image-tag></code> when you want to use a new image.</p>
|
<p>Rancher is designed (as best as I can tell) to own and run a kubernetes cluster. Rancher does provide a configuration so that kubectl can interact w/ the kubernetes cluster. Rancher seems like a nice tool. But as far as I can tell, there is no way to connect to an existing kubernetes cluster. Is there any way to do this?</p>
| <p>If you are looking for a service that can connect to an existing k8s cluster(s) then try Containership. You can use Kubectl and/or the Containership UI to manage you workloads, config maps, etc on multiple clusters.</p>
<p>Hope this helps!</p>
|
<p>I am running servers that are registering with Consul, external to my Kubernetes 1.8.x cluster. Consul is running inside my Kube cluster (configured by Helm), and is peered with an external Consul cluster. Kube-dns is configured to use the internal Consul pods as "stubDomains" with the following ConfigMap:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: kube-dns
namespace: kube-system
data:
stubDomains: |
{
"consul": [
"10.244.0.xxx:8600",
"10.244.1.xxx:8600",
"10.244.2.xxx:8600"
]
}
</code></pre>
<p>When everything is working as expected, kube-dns resolves the external consul domain names. The problem is when a Consul pod crashes and restarts with a new IP address.</p>
<p>Is there a way to recover from Consul pod crashes without having to manually change the IP addresses listed in the kube-dns ConfigMap?</p>
| <p>I ended up modifying the "consul-ui" service (the one with an IP address) to expose the Consul DNS port. I copied the following from the "consul" service (the one without a cluster IP) to "consul-ui" service, in the ["spec"]["port"] section:</p>
<pre><code> {
"name": "consuldns-tcp",
"protocol": "TCP",
"port": 8600,
"targetPort": 8600,
"nodePort": 30766
},
{
"name": "consuldns-udp",
"protocol": "UDP",
"port": 8600,
"targetPort": 8600,
"nodePort": 32559
}
</code></pre>
<p>Then I used the service IP address instead of the Pod IP addresses in the kube-dns ConfigMap.</p>
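<p>With that in place, the stubDomains entry only needs the stable ClusterIP of the service (the address below is a placeholder for whatever <code>kubectl get svc consul-ui</code> reports):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
data:
  stubDomains: |
    {
      "consul": ["<consul-ui-cluster-ip>:8600"]
    }
</code></pre>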
|
<p>Beginner's question, sorry for asking. I searched on Google for how to add flags to my already running kubelet; I need to add the flags
--network-plugin=cni --network-plugin-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin</p>
<p>When I navigate to the /etc/systemd/system/kubelet.service.d/10-kubeadm.conf file, these flags are already added to the kubelet network args environment variable. </p>
<p>Is kubelet already configured with these flags, or do I need to add those flags in another file? </p>
<p>PS: The reason I need to add those flags is as a possible solution (which I am still debugging) for my kubernetes problem: <a href="https://stackoverflow.com/questions/49113253/kubernetes-pods-stuck-at-containercreating">kubernetes pods stuck at containercreating</a></p>
| <p>The kubelet by default comes with the flags you mentioned. If you have access to the nodes running this cluster, you can check the running kubelet process to check whether it started using the flags mentioned above using </p>
<pre><code>ps -ef | grep kubelet
</code></pre>
<p>You would have an output similar to this</p>
<pre><code>root 11841 1 3 Mar07 ? 05:46:46 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cadvisor-port=0 --cgroup-driver=cgroupfs --cloud-provider=openstack --cloud-config=/etc/kubernetes/cloud-config --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki
</code></pre>
<p>If you still want to change config , edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf file and restart kubelet for the changes to take into effect </p>
<pre><code>systemctl daemon-reload
systemctl restart kubelet
</code></pre>
|
<p><strong>Is there a simple way to change the schedule of a kubernetes cronjob</strong> like <code>kubectl change cronjob my-cronjob "10 10 * * *"</code>? Or any other way without needing to do <code>kubectl apply -f deployment.yml</code>? The latter can be extremely cumbersome in a complex CI/CD setting because manually editing the deployment yaml is often not desired, especially not if the file is created from a template in the build process.</p>
<p><strong>Alternatively, is there a way to start a cronjob manually?</strong> For instance, a job is scheduled to start in 22 hours, but I want to trigger it manually once now without changing the cron schedule for good (for testing or an initial run)?</p>
| <p>You can update only a selected field of a resource by patching it:</p>
<pre><code>patch -h
Update field(s) of a resource using strategic merge patch, a JSON merge patch, or a JSON patch.
JSON and YAML formats are accepted.
Please refer to the models in
https://htmlpreview.github.io/?https://github.com/kubernetes/kubernetes/blob/HEAD/docs/api-reference/v1/definitions.html
to find if a field is mutable.
</code></pre>
<p>As provided in a comment, for reference:</p>
<pre><code>kubectl patch cronjob my-cronjob -p '{"spec":{"schedule": "42 11 * * *"}}'
</code></pre>
<p>Also, in current kubectl versions, to launch a one-time execution of a declared cronjob, you can manually create a job that adheres to the cronjob spec with:</p>
<pre><code>kubectl create job --from=cronjob/mycron
</code></pre>
|
<p>I am trying to deploy my Docker images using Kubernetes orchestration tools. While reading about Kubernetes, I have seen documentation and many YouTube video tutorials on working with Kubernetes. In them I only found the creation of pods and services, and the creation of the corresponding .yml files. I have some doubts, which I am adding in the section below:</p>
<ol>
<li>When I am using Kubernetes, how I can create clusters and nodes ?</li>
<li>Can I deploy my current docker-compose build image directly using pods only? Why I need to create services yml file?</li>
</ol>
<p>I am new to the containerization, Docker and Kubernetes world.</p>
| <ol>
<li><p>My favorite way to create clusters is <a href="https://github.com/kubernetes-incubator/kubespray#readme" rel="nofollow noreferrer">kubespray</a> because I find <a href="https://github.com/ansible/ansible#readme" rel="nofollow noreferrer">ansible</a> very easy to read and troubleshoot, unlike more monolithic "run this binary" mechanisms for creating clusters. The kubespray repo has a <a href="https://www.vagrantup.com" rel="nofollow noreferrer">vagrant</a> configuration file, so you can even try out a full cluster on your local machine, to see what it will do "for real"</p>
<p>But with the popularity of kubernetes, I'd bet if you ask 5 people you'll get 10 answers to that question, so ultimately pick the one you find easiest to reason about, because almost without fail you will need to <em>debug</em> those mechanisms when something inevitably goes wrong</p>
</li>
<li><p>The short version, as Hitesh said, is "yes," but the long version is that one will need to be careful because local docker containers and kubernetes clusters are trying to solve different problems, and (as a general rule) one could not easily swap one in place of the other.</p>
<p>As for the second part of your question, a <code>Service</code> in kubernetes is designed to decouple the current provider of some networked functionality from the long-lived "promise" that such functionality will exist and work. That's because in kubernetes, the Pods (and Nodes, for that matter) are disposable and subject to termination at almost any time. It would be severely problematic if the consumer of a networked service needed to constantly update its IP address/ports/etc to account for the coming-and-going of Pods. This is actually the exact same problem that AWS's Elastic Load Balancers are trying to solve, and kubernetes will cheerfully provision an ELB to represent a <code>Service</code> if you indicate that is what you would like (and similar behavior for other cloud providers). A minimal Service sketch is shown after this list.</p>
</li>
</ol>
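<p>A minimal Service sketch illustrating that decoupling (labels and ports are assumptions):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app          # any Pod with this label serves traffic for this Service, wherever it runs
  ports:
  - port: 80
    targetPort: 8080     # assumed container port
  type: LoadBalancer     # on a cloud provider this provisions an external load balancer
</code></pre>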
<p>If you are not yet comfortable with containers and docker as concepts, then I would strongly recommend starting with those topics, and moving on to understanding how kubernetes interacts with those two things after you have a solid foundation. Else, a lot of the terminology -- and even the problems kubernetes is trying to solve -- may continue to seem opaque</p>
|
<p>(Using Istio 0.5.1, kubectl 1.9.1/1.9.0 for client/server, minikube 0.25.0)</p>
<p>I'm trying to get Istio EgressRules to work with Kubernetes Services, but having some trouble.</p>
<p>I tried to set up EgressRules 3 ways:</p>
<ol>
<li>An ExternalName service which points to another domain (like
www.google.com)</li>
<li>A Service with no selector, but an associated
Endpoint object (for services that have an IP address but no DNS
name)</li>
<li>(for comparison) No Kubernetes service, just an EgressRule</li>
</ol>
<p>I figured I could use the FQDN of the kubernetes service as the HTTP-based EgressRule destination service (like <code>ext-service.default.svc.cluster.local</code>), and this is what I attempted for both an ExternalName service as well as a Service with no selectors but an associated Endpoints object.</p>
<p>For the former, I created the following <code>yaml</code> file:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: ext-service
spec:
type: ExternalName
externalName: www.google.com
---
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
name: ext-egress-rule
spec:
destination:
service: ext-service.default.svc.cluster.local
ports:
- port: 443
protocol: https
</code></pre>
<p>For the latter, I created this <code>yaml</code> file (I just pinged google and grabbed the IP address):</p>
<pre><code>kind: Endpoints
apiVersion: v1
metadata:
name: ext-service
subsets:
- addresses:
- ip: 216.58.198.78
ports:
- port: 443
---
kind: Service
apiVersion: v1
metadata:
name: ext-service
spec:
ports:
- protocol: TCP
port: 443
targetPort: 443
---
apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
name: ext-service-egress-rule
spec:
destination:
service: ext-service.default.svc.cluster.local
ports:
- port: 443
protocol: https
</code></pre>
<p>In both cases, in the application code, I access:</p>
<pre><code>http://ext-service.default.svc.cluster.local:443
</code></pre>
<p>My assumption is that the traffic will flow like:</p>
<pre><code>[[ app -> envoy proxy -> (tls origination) -> kubernetes service ]] -> external service
</code></pre>
<p>where <code>[[ ... ]]</code> is the boundary of the service mesh (and also the Kubernetes cluster)</p>
<p>Results:</p>
<ul>
<li>The <code>ExternalName</code> Service <em>almost</em> worked as expected, but it brought me to Google's 404 page (and sometimes the response just seemed empty, not sure how to replicate one or the other specifically)</li>
<li><p>The Service with the Endpoint object did not work, instead printing this message (when making the request via Golang, but I don't think that matters):</p>
<blockquote>
<p>Get <a href="http://ext-service.default.svc.cluster.local:443" rel="nofollow noreferrer">http://ext-service.default.svc.cluster.local:443</a>: EOF</p>
</blockquote>
<p>This also sometimes gives an empty response.</p></li>
</ul>
<p>I'd like to use Kubernetes services (even though it's for external traffic) for a few reasons:</p>
<ol>
<li>You can't use an IP address for the EgressRule's destination service. From <a href="https://istio.io/docs/concepts/traffic-management/rules-configuration.html#egress-rules" rel="nofollow noreferrer">Egress Rules configuration</a>: "The destination of an egress rule ... can be either a fully qualified or wildcard domain name".</li>
<li>For external services that don't have a domain name (some on-prem legacy/monolith service without a DNS name), I'd like the application to be able to access them not by IP address but by a kube-dns (or Istio-related similar) name.</li>
<li>(related to previous) I like the additional layer of abstraction that a Kubernetes service provides, so I can change the underlying destination without changing the EgressRule (unless I'm mistaken and this isn't the right way to architect this). Is the EgressRule meant to replace Kubernetes services for external traffic entirely and without creating additional Kubernetes services?</li>
</ol>
<p>Using <code>https://</code> in the app code isn't an option because then the request would have to disable TLS verification since the kube-dns name doesn't match any on the certificate. It also wouldn't be observable.</p>
<p>If I use the following EgressRule (without any Kubernetes Services), accessing Google via <code>http://www.google.com:443</code> works fine, getting the exact html representation that I expect:</p>
<pre><code>apiVersion: config.istio.io/v1alpha2
kind: EgressRule
metadata:
name: google-egress-rule
spec:
destination:
service: www.google.com
ports:
- port: 443
protocol: https
</code></pre>
<p>I saw there's a TCP EgressRule, but I would rather not have to specify rules for each block of IPs. From <a href="https://istio.io/docs/tasks/traffic-management/egress-tcp.html" rel="nofollow noreferrer">TCP Egress</a>: "In TCP egress rules as opposed to HTTP-based egress rules, the destinations are specified by IPs or by blocks of IPs in CIDR notation.".</p>
<p>Also, I would still like the HTTP-based observability that comes from L7 instead of L4, so I'd prefer an HTTP-based egress. (With TCP Egresses, "The HTTPS traffic originated by the application will be treated by Istio as opaque TCP").</p>
<p>Any help getting a Kubernetes service as "destination service" of an EgressRule (or help understanding why this isn't necessary if that's the case) is appreciated. Thanks!</p>
| <p>The solution is:</p>
<ol>
<li>Define a Kubernetes ExternalName service to point to www.google.com</li>
<li>Do not define any EgressRules</li>
<li>Create a RouteRule to set the Host header.</li>
</ol>
<p>In your case, define an ExternalName service with the port and the protocol:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: ext-service
spec:
type: ExternalName
externalName: www.google.com
ports:
- port: 80
# important to set protocol name
name: http
---
</code></pre>
<p>Define an HTTP Rewrite Route Rule to set the Host header:</p>
<pre><code>apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
name: externalgoogle-rewrite-rule
#namespace: default
spec:
destination:
name: ext-service
rewrite:
authority: www.google.com
---
</code></pre>
<p>Then access it with <code>curl</code>, for example: <code>curl ext-service</code></p>
<p>Without the Route Rule, the request will arrive at google.com with the Host header being <code>ext-service</code>. The web server does not know where to forward such a request, since google.com does not have such a virtual host. This is what you experienced:</p>
<blockquote>
<p>it brought me to Google's 404 page</p>
</blockquote>
|
<p>This Google Kubernetes Engine tutorial does not seem to work.</p>
<blockquote>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app</a></p>
</blockquote>
<pre><code>$ gcloud beta container clusters create hello-cluster --num-nodes=3
WARNING: You invoked `gcloud beta`, but with current configuration Kubernetes Engine v1 API will be used instead of v1beta1 API.
`gcloud beta` will switch to use Kubernetes Engine v1beta1 API by default by the end of March 2018.
If you want to keep using `gcloud beta` to talk to v1 API temporarily, please set `container/use_v1_api` property to true.
But we will drop the support for this property at the beginning of May 2018, please migrate if necessary.
ERROR: (gcloud.beta.container.clusters.create) ResponseError: code=400, message=v1 API cannot be used to access GKE regional clusters. See http:/goo.gl/Vykvt2 for more information.
</code></pre>
<p>It seems this command requests <code>GKE regional clusters</code>, but I have no idea how to stop it.</p>
| <p>It worked well after adding the <code>--zone=</code> option.</p>
<pre><code> gcloud container clusters create hello-cluster --num-nodes=3 --zone=asia-northeast1-a
</code></pre>
<p>You can find a proper zone name with the following command:</p>
<pre><code>gcloud compute zones list
</code></pre>
<p><code>NAME</code> and <code>REGION</code> are slightly different. Please remember to use <code>NAME</code> for the <code>--zone=</code> option.</p>
<p>You can find it in this <code>Available regions & zones</code> document also.</p>
<blockquote>
<p><a href="https://cloud.google.com/compute/docs/regions-zones/#available" rel="noreferrer">https://cloud.google.com/compute/docs/regions-zones/#available</a></p>
</blockquote>
<p>Hope it helps.</p>
|
<p>Ok, so currently I've got kubernetes master up and running on AWS EC2 instance, and a single worker running on my laptop:</p>
<pre><code>$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 34d v1.9.2
worker Ready <none> 20d v1.9.2
</code></pre>
<p>I have created a Deployment using the following configuration:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: hostnames
labels:
app: hostnames-deployment
spec:
selector:
matchLabels:
app: hostnames
replicas: 1
template:
metadata:
labels:
app: hostnames
spec:
containers:
- name: hostnames
image: k8s.gcr.io/serve_hostname
ports:
- containerPort: 9376
protocol: TCP
</code></pre>
<p>The deployment is running:</p>
<pre><code>$ kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
hostnames 1 1 1 1 1m
</code></pre>
<p>A single pod has been created on the worker node:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
hostnames-86b6bcdfbc-v8s8l 1/1 Running 0 2m
</code></pre>
<p>From the worker node, I can curl the pod and get the information:</p>
<pre><code>$ curl 10.244.8.5:9376
hostnames-86b6bcdfbc-v8s8l
</code></pre>
<p>I have created a service using the following configuration:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: hostnames-service
spec:
selector:
app: hostnames
ports:
- port: 80
targetPort: 9376
</code></pre>
<p>The service is up and running:</p>
<pre><code>$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hostnames-service ClusterIP 10.97.21.18 <none> 80/TCP 1m
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 34d
</code></pre>
<p>As I understand, the service should expose the pod cluster-wide and I should be able to use the service IP to get the information pod is serving from any node on the cluster.</p>
<p>If I curl the service from the worker node it works just as expected:</p>
<pre><code>$ curl 10.97.21.18:80
hostnames-86b6bcdfbc-v8s8l
</code></pre>
<p>But if I try to curl the service from the master node located on the AWS EC2 instance, the request hangs and gets timed out eventually:</p>
<pre><code>$ curl -v 10.97.21.18:80
* Rebuilt URL to: 10.97.21.18:80/
* Trying 10.97.21.18...
* connect to 10.97.21.18 port 80 failed: Connection timed out
* Failed to connect to 10.97.21.18 port 80: Connection timed out
* Closing connection 0
curl: (7) Failed to connect to 10.97.21.18 port 80: Connection timed out
</code></pre>
<p><strong>Why can't the request from the master node reach the pod on the worker node by using the Cluster-IP service?</strong></p>
<p>I have read quite a bit of articles regarding kubernetes networking and the official kubernetes services documentation and couldn't find a solution.</p>
| <p>It depends on which mode you are using; the details differ, but conceptually it is the same.</p>
<p>You are trying to connect to two different types of addresses - the pod IP address, which is accessible from the node, and the virtual IP address, which is accessible from pods in the Kubernetes cluster.</p>
<p>The IP address of the service is <strong>not</strong> an IP address on some pod or any other object; it is a virtual address which is mapped to pod IP addresses based on rules you define in the service, and it is managed by the <code>kube-proxy</code> daemon, which is part of Kubernetes. </p>
<p>That address is specifically intended for communication inside the cluster, to make it possible to access the pods behind a service without caring about how many replicas of the pod you have and where they are actually running, because the service IP is static, unlike a pod's IP. </p>
<p>So, the service IP address is intended to be available from other pods, not from nodes. </p>
<p>You can read about how Service Virtual IPs work in the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies" rel="nofollow noreferrer">official documentation</a>.</p>
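<p>One quick way to confirm this behaviour is to call the service IP from a throwaway pod instead of from the node (the busybox image is an assumption; the IP is the service ClusterIP from your output):</p>
<pre><code>kubectl run -it --rm test --image=busybox --restart=Never -- wget -qO- http://10.97.21.18:80
</code></pre>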
|
<p>I am trying to submit a Spark Job on Kubernetes natively using Apache Spark 2.3.
When I use a Docker image on Docker Hub (for Spark 2.2), it works:</p>
<pre><code>bin/spark-submit \
--master k8s://http://localhost:8080 \
--deploy-mode cluster \
--name spark-pi \
--class org.apache.spark.examples.SparkPi \
--conf spark.executor.instances=5 \
--conf spark.kubernetes.container.image=kubespark/spark-driver:v2.2.0-kubernetes-0.5.0 \
local:///home/fedora/spark-2.3.0-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.3.0.jar
</code></pre>
<p>However, when I try to build a local Docker image,</p>
<pre><code>sudo docker build -t spark:2.3 -f kubernetes/dockerfiles/spark/Dockerfile .
</code></pre>
<p>and submit the job as:</p>
<pre><code>bin/spark-submit \
--master k8s://http://localhost:8080 \
--deploy-mode cluster \
--name spark-pi \
--class org.apache.spark.examples.SparkPi \
--conf spark.executor.instances=5 \
--conf spark.kubernetes.container.image=spark:2.3 \
local:///home/fedora/spark-2.3.0-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.3.0.jar
</code></pre>
<p>I get the following error; <strong>that is "repository docker.io/spark not found: does not exist or no pull access, reason=ErrImagePull, additionalProperties={})"</strong></p>
<pre><code>status: [ContainerStatus(containerID=null, image=spark:2.3, imageID=, lastState=ContainerState(running=null, terminated=null, waiting=null, additionalProperties={}), name=spark-kubernetes-driver, ready=false, restartCount=0, state=ContainerState(running=null, terminated=null, waiting=ContainerStateWaiting(message=rpc error: code = 2 desc = repository docker.io/spark not found: does not exist or no pull access, reason=ErrImagePull, additionalProperties={}), additionalProperties={}), additionalProperties={})]
2018-03-15 11:09:54 INFO LoggingPodStatusWatcherImpl:54 - State changed, new state:
pod name: spark-pi-3a1a6e8ce615395fa7df81eac06d58ed-driver
namespace: default
labels: spark-app-selector -> spark-8d9fdaba274a4eb69e28e2a242fe86ca, spark-role -> driver
pod uid: 5271602b-2841-11e8-a78e-fa163ed09d5f
creation time: 2018-03-15T11:09:25Z
service account name: default
volumes: default-token-v4vhk
node name: mlaas-p4k3djw4nsca-minion-1
start time: 2018-03-15T11:09:25Z
container images: spark:2.3
phase: Pending
status: [ContainerStatus(containerID=null, image=spark:2.3, imageID=, lastState=ContainerState(running=null, terminated=null, waiting=null, additionalProperties={}), name=spark-kubernetes-driver, ready=false, restartCount=0, state=ContainerState(running=null, terminated=null, waiting=ContainerStateWaiting(message=Back-off pulling image "spark:2.3", reason=ImagePullBackOff, additionalProperties={}), additionalProperties={}), additionalProperties={})]
</code></pre>
<p>Also, I tried to run a local Docker registry as described in:
<a href="https://docs.docker.com/registry/deploying/#run-a-local-registry" rel="nofollow noreferrer">https://docs.docker.com/registry/deploying/#run-a-local-registry</a></p>
<pre><code>docker run -d -p 5000:5000 --restart=always --name registry registry:2
sudo docker tag spark:2.3 localhost:5000/spark:2.3
sudo docker push localhost:5000/spark:2.3
</code></pre>
<p>I can do this successfully:
docker pull localhost:5000/spark:2.3</p>
<p>However, when I submit the Spark job:</p>
<pre><code>bin/spark-submit \
--master k8s://http://localhost:8080 \
--deploy-mode cluster \
--name spark-pi \
--class org.apache.spark.examples.SparkPi \
--conf spark.executor.instances=5 \
--conf spark.kubernetes.container.image=localhost:5000/spark:2.3 \
local:///home/fedora/spark-2.3.0-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.3.0.jar
</code></pre>
<p>I again got ErrImagePull:</p>
<pre><code>status: [ContainerStatus(containerID=null, image=localhost:5000/spark:2.3, imageID=, lastState=ContainerState(running=null, terminated=null, waiting=null, additionalProperties={}), name=spark-kubernetes-driver, ready=false, restartCount=0, state=ContainerState(running=null, terminated=null, waiting=ContainerStateWaiting(message=rpc error: code = 2 desc = Error while pulling image: Get http://localhost:5000/v1/repositories/spark/images: dial tcp [::1]:5000: getsockopt: connection refused, reason=ErrImagePull, additionalProperties={}), additionalProperties={}), additionalProperties={})]
</code></pre>
<p>Is there a way in Spark 2.3 to use local Docker images when submitting jobs natively to Kubernetes?</p>
<p>Thank you in advance.</p>
| <p>I guess you are using something like <a href="https://kubernetes.io/docs/getting-started-guides/minikube/" rel="nofollow noreferrer">minikube</a> to set up a local Kubernetes cluster, and in most cases it uses a virtual machine to spawn the cluster.
So, when Kubernetes tries to pull an image from a <code>localhost</code> address, it connects to the virtual machine's local address, not to your computer's address. Moreover, your local registry is bound only to localhost and is not accessible from virtual machines.</p>
<p>The idea of the fix is to make your local docker registry accessible to your Kubernetes cluster and to allow pulling images from a local insecure registry.</p>
<p>So, first of all, bind your docker registry on your PC to all interfaces:</p>
<p><code>docker run -d -p 0.0.0.0:5000:5000 --restart=always --name registry registry:2</code></p>
<p>Then, check the local IP address of your PC. It will be something like 172.X.X.X or 10.X.X.X. The way to check it depends on your OS, so just google it if you don't know how to get it.</p>
<p>Afterwards, start your minikube with an additional option:</p>
<p><code>minikube start --insecure-registry="<your-local-ip-address>:5000"</code>, where a 'your-local-ip-address' is your local IP address.</p>
<p>Now you can try to run a Spark job with the new registry address, and K8s will be able to download your image:</p>
<p><code>spark.kubernetes.container.image=<your-local-ip-address>:5000/spark:2.3</code></p>
|
<p>I have a raspberry pi cluster (one master , 3 nodes) </p>
<p>My basic image is : raspbian stretch lite</p>
<p>I already set up a basic kubernetes cluster where the master can see all its nodes (kubectl get nodes) and they're all running.
I used the weave network plugin for the network communication.</p>
<p>When everything was set up I tried to run an nginx pod (first with some replicas, but now just 1 pod) on my cluster as follows:
kubectl run my-nginx --image=nginx</p>
<p>But somehow the pod gets stuck in the status "Container creating"; when I run docker images I can't see the nginx image being pulled. And normally an nginx image is not that large, so it should have been pulled already by now (15 minutes).
kubectl describe pods gives the error that the pod sandbox failed to create and kubernetes will re-create it.</p>
<p>I searched everything about this issue and tried the solutions on Stack Overflow (reboot to restart the cluster, checked describe pods, tried a new network plugin with flannel), but I can't see what the actual problem is.
I did the exact same thing in VirtualBox (just Ubuntu, not ARM) and everything worked.</p>
<p>First I thought it was a permission issue because I run everything as a normal user, but in the VM I did the same thing and nothing changed.
Then I checked kubectl get pods --all-namespaces to verify that the pods for the weave network and kube-dns are running, and nothing was wrong there either.</p>
<p>Is this a firewall issue on the Raspberry Pi?
Is the weave network plugin not compatible with ARM devices (even though the kubernetes website says it is)?
I am guessing there is an API network problem and that's why I can't get my pod running on a node.</p>
<p>[EDIT]
Log files</p>
<p>kubectl describe podName </p>
<pre><code>>
> Name: my-nginx-9d5677d94-g44l6 Namespace: default Node: kubenode1/10.1.88.22 Start Time: Tue, 06 Mar 2018 08:24:13
> +0000 Labels: pod-template-hash=581233850
> run=my-nginx Annotations: <none> Status: Pending IP: Controlled By: ReplicaSet/my-nginx-9d5677d94 Containers:
> my-nginx:
> Container ID:
> Image: nginx
> Image ID:
> Port: 80/TCP
> State: Waiting
> Reason: ContainerCreating
> Ready: False
> Restart Count: 0
> Environment: <none>
> Mounts:
> /var/run/secrets/kubernetes.io/serviceaccount from default-token-phdv5 (ro) Conditions: Type Status
> Initialized True Ready False PodScheduled True
> Volumes: default-token-phdv5:
> Type: Secret (a volume populated by a Secret)
> SecretName: default-token-phdv5
> Optional: false QoS Class: BestEffort Node-Selectors: <none> Tolerations: node.kubernetes.io/not-ready:NoExecute for
> 300s
> node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From
> Message ---- ------ ---- ----
> ------- Normal Scheduled 5m default-scheduler Successfully assigned my-nginx-9d5677d94-g44l6 to kubenode1 Normal
> SuccessfulMountVolume 5m kubelet, kubenode1 MountVolume.SetUp
> succeeded for volume "default-token-phdv5" Warning
> FailedCreatePodSandBox 1m kubelet, kubenode1 Failed create pod
> sandbox. Normal SandboxChanged 1m kubelet, kubenode1
> Pod sandbox changed, it will be killed and re-created.
</code></pre>
<p>kubectl logs podName</p>
<pre><code>Error from server (BadRequest): container "my-nginx" in pod "my-nginx-9d5677d94-g44l6" is waiting to start: ContainerCreating
</code></pre>
<p>journalctl -u kubelet gives this error </p>
<pre><code>Mar 12 13:42:45 kubeMaster kubelet[16379]: W0312 13:42:45.824314 16379 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d
Mar 12 13:42:45 kubeMaster kubelet[16379]: E0312 13:42:45.824816 16379 kubelet.go:2104] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
</code></pre>
<p>The problem seems to be with my network plugin. In my /etc/systemd/system/kubelet.service.d/10-kubeadm.conf the flags for the network plugin are present: environment= kubelet_network_args --cni-bin-dir=/etc/cni/net.d
--network-plugin=cni</p>
| <p>Thank you all for responding to my question.
I have solved my problem now. For anyone who comes to this question in the future, the solution was as follows.</p>
<p>I cloned my Raspberry Pi images because I wanted a basicConfig.img for when I needed to add a new node to my cluster or when one goes down.</p>
<p>Weave network (the plugin I used) got confused because on every node and the master the OS had the same machine-id. When I deleted the machine-id and created a new one (and rebooted the nodes), my error was fixed.
The commands to do this were: </p>
<pre><code>sudo rm /etc/machine-id
sudo rm /var/lib/dbus/machine-id
sudo dbus-uuidgen --ensure=/etc/machine-id
</code></pre>
<p>Once again my patience was tested, because my Kubernetes setup was normal and my Raspberry Pi OS was normal. I found this with the help of someone in the Kubernetes community, which again shows how important and great our IT community is. To the people of the future who come to this question: I hope this solution fixes your error and reduces the amount of time you spend searching for such a small thing.</p>
|
<p>I know a lot of people already had similar question, i read a few of them, but found nothing what actualy helped me so far.</p>
<p>I have a gitlab with private repo enabled, I also use Google Kubernetes Engine. I have a few Docker container in my private repo, and I want to deploy one of them to the Kubernetes Engine.</p>
<p>I have created a secret with <code>kubectl create secret generic db-user-pass --from-file=./username.txt --from-file=./password.txt</code>
I also tried <code>kubectl create secret docker-registry name --docker-server=registry.xy.z --docker-username=google --docker-password=xyz [email protected]</code>
Then I created my Deployment file:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: backend-test
labels:
app: 13371337
spec:
replicas: 1
template:
metadata:
labels:
app: 13371337
spec:
containers:
- name: backend
image: registry.xy.z/group/project/backend:latest
imagePullPolicy: Always
ports:
- containerPort: 8080
imagePullSecrets:
- name: db-user-pass or name
</code></pre>
<p>Any ideas how to get it running?</p>
| <p>Using <code>kubectl create secret docker-registry name</code> is the <a href="https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod" rel="noreferrer">right way</a> to provide credentials for a private Docker registry.</p>
<p>The <code>imagePullSecrets</code> option looks good too, as long as you specify the name of your docker-registry secret there.</p>
<p>So, from the Kubernetes side everything looks good.</p>
<p>Check the events of the pod created by the Deployment: find your pod with <code>kubectl get pods</code> and call <code>kubectl describe pod $name_of_your_pod</code>; you will see the actual reason why it cannot pull the image.</p>
<p>Also, if your registry is insecure or uses a self-signed certificate, follow <a href="https://docs.docker.com/registry/insecure/#deploy-a-plain-http-registry" rel="noreferrer">that guide</a> to allow the Docker daemon to pull images from it; that is a frequent reason for image pull failures.</p>
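<p>For illustration, a minimal end-to-end sketch using the registry host from the question (the secret name <code>gitlab-registry</code> and the credentials are placeholders). Note that only a <code>docker-registry</code>-type secret works as an image pull secret; the generic <code>db-user-pass</code> secret will not:</p>
<pre><code># create the pull secret (use a GitLab deploy token or a user with registry access)
kubectl create secret docker-registry gitlab-registry \
  --docker-server=registry.xy.z \
  --docker-username=<gitlab-user-or-deploy-token> \
  --docker-password=<password-or-token> \
  --docker-email=you@example.com

# then reference it in the Deployment's pod spec:
#     imagePullSecrets:
#     - name: gitlab-registry
</code></pre>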
|
<p>I created a service which works perfectly in kubernetes.</p>
<p>Then I moved the connection string to a Kubernetes secret; this is my yaml config:</p>
<pre><code> env:
- name: AZURE_CONNECTION
valueFrom:
secretKeyRef:
name: azure
key: connection-string
</code></pre>
<p>But since then the scheduling of the pod fails:</p>
<blockquote>
<p>Error: failed to start container "myservice-api-host": Error response
from daemon: oci runtime error: container_linux.go:247: starting
container process caused "process_linux.go:295: setting oom score for
ready process caused \"write /proc/22658/oom_score_adj: invalid
argument\""</p>
</blockquote>
| <p>The reason was that the secret was base64-encoded from a UTF-16 string. </p>
<p>After encoding the secret in <strong>UTF-8</strong> and then Base64, everything works fine.</p>
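<p>For reference, a small sketch of the encoding step (the secret and key names are the ones from the question; the connection string value is a placeholder). <code>echo -n</code> avoids a trailing newline, and letting kubectl do the encoding sidesteps the problem entirely:</p>
<pre><code># encode the value as UTF-8 text, then base64
echo -n 'your-connection-string' | base64

# or let kubectl create the secret and handle the encoding
kubectl create secret generic azure \
  --from-literal=connection-string='your-connection-string'
</code></pre>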
|
<p>I have a database that's running using local storage in Kubernetes. Whenever I start up the Pod with the database I would like to run a Job that can look at our backups and backfill any data that we have that isn't on the local disk.</p>
<p>I was looking at the PostStart lifecycle hook, but that just lets me run a command from the main container, which would be a very hacky way to submit this Job. Are there any better approaches to this other than writing my own controller?</p>
| <p>You can use an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">init container</a> that mounts in the same volumes as the main container, populates it with any missing data and then exits so the main container can start.</p>
<p>If you need the job container to be running at the same time as your main container, you can instead just put the container described above as a second container in the Pod.</p>
<p>If you need an actual kubernetes Job to be created then, as you say, I think the only options would be to create a custom controller or to run an apiserver client such as <code>kubectl</code> as a sidecar container, but you could use the ServiceAccount token that's automatically mounted into your Pod to authenticate with the apiserver and then just apply the necessary RBAC rules to the ServiceAccount to create a Job. You'd have to use some sort of shared data volume to mount the same data into both the Pods spawned by the Job and the main Pod in order to share the data (there are a few other options that are possible also).</p>
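<p>A minimal sketch of the first option, an init container sharing the data volume with the database container (all names, images and paths here are made up for illustration):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-db
spec:
  initContainers:
  - name: backfill                 # runs to completion before the DB container starts
    image: my-backfill-tool:latest
    command: ["/restore-from-backups.sh"]
    volumeMounts:
    - name: data
      mountPath: /var/lib/db
  containers:
  - name: database
    image: my-database:latest
    volumeMounts:
    - name: data
      mountPath: /var/lib/db
  volumes:
  - name: data
    hostPath:
      path: /mnt/db-data           # the local storage mentioned in the question
</code></pre>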
|
<p>I got an email from Google saying that the Kubernetes project recently disclosed new security vulnerabilities.</p>
<p>I was advised to upgrade the nodes as soon as the patch becomes available, which will be with the new version releases by March 16.</p>
<p>How soon should I do it, or how long can I wait? I need at least a week to plan the upgrade.</p>
| <p><a href="https://cve.mitre.org/cgi-bin/cvename.cgi?name=2017-1002101" rel="nofollow noreferrer">CVE-2017-1002101</a> affects all volume types, so to prevent the vulnerability being exploited on your cluster you'd need to deny the use of all volume types using <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/" rel="nofollow noreferrer">PodSecurityPolicy</a>. Refer to the <code>Mitigations prior to upgrading</code> section of the GitHub issue <a href="https://github.com/kubernetes/kubernetes/issues/60813" rel="nofollow noreferrer">here</a>.</p>
<p>There isn't an amount of time you can wait, it's just more likely to be exploited the longer you wait before upgrading. </p>
|
<p>I have an Intel NUC (i5) and a Raspberry Pi Model B. I tried to create a Kubernetes cluster with the Intel NUC as the master node and the Raspberry Pi as the worker node. When I try this setup, the worker node keeps crashing. Here's the output. This happens only with this setup; if I create a cluster with two Raspberry Pis (one master and one worker node), it works fine. </p>
<p>What am I doing wrong? </p>
<pre><code>sudo kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-ubuntu 1/1 Running 0 13h
kube-system kube-apiserver-ubuntu 1/1 Running 0 13h
kube-system kube-controller-manager-ubuntu 1/1 Running 0 13h
kube-system kube-dns-6f4fd4bdf-fqmmt 3/3 Running 0 13h
kube-system kube-proxy-46ddk 0/1 CrashLoopBackOff 5 3m
kube-system kube-proxy-j48fc 1/1 Running 0 13h
kube-system kube-scheduler-ubuntu 1/1 Running 0 13h
kube-system kubernetes-dashboard-5bd6f767c7-nh6hz 1/1 Running 0 13h
kube-system weave-net-2bnzq 2/2 Running 0 13h
kube-system weave-net-7hr54 1/2 CrashLoopBackOff 3 3m
</code></pre>
<p>I examined the logs for kube-proxy and found the following entry:
<code>Logs from kube-proxy
standard_init_linux.go:178: exec user process caused "exec format error"</code>
This seems to stem from the image that was picked up being the ARM arch as opposed to the x86 arch. Here's the yaml file: </p>
<pre><code>{
"kind": "Pod",
"apiVersion": "v1",
"metadata": {
"name": "kube-proxy-5xc9c",
"generateName": "kube-proxy-",
"namespace": "kube-system",
"selfLink": "/api/v1/namespaces/kube-system/pods/kube-proxy-5xc9c",
"uid": "a227b43b-27ef-11e8-8cf2-b827eb03776e",
"resourceVersion": "22798",
"creationTimestamp": "2018-03-15T01:24:40Z",
"labels": {
"controller-revision-hash": "3203044440",
"k8s-app": "kube-proxy",
"pod-template-generation": "1"
},
"ownerReferences": [
{
"apiVersion": "extensions/v1beta1",
"kind": "DaemonSet",
"name": "kube-proxy",
"uid": "361aca09-27c9-11e8-a102-b827eb03776e",
"controller": true,
"blockOwnerDeletion": true
}
]
},
"spec": {
"volumes": [
{
"name": "kube-proxy",
"configMap": {
"name": "kube-proxy",
"defaultMode": 420
}
},
{
"name": "xtables-lock",
"hostPath": {
"path": "/run/xtables.lock",
"type": "FileOrCreate"
}
},
{
"name": "lib-modules",
"hostPath": {
"path": "/lib/modules",
"type": ""
}
},
{
"name": "kube-proxy-token-kzt5h",
"secret": {
"secretName": "kube-proxy-token-kzt5h",
"defaultMode": 420
}
}
],
"containers": [
{
"name": "kube-proxy",
"image": "gcr.io/google_containers/kube-proxy-arm:v1.9.4",
"command": [
"/usr/local/bin/kube-proxy",
"--config=/var/lib/kube-proxy/config.conf"
],
"resources": {},
"volumeMounts": [
{
"name": "kube-proxy",
"mountPath": "/var/lib/kube-proxy"
},
{
"name": "xtables-lock",
"mountPath": "/run/xtables.lock"
},
{
"name": "lib-modules",
"readOnly": true,
"mountPath": "/lib/modules"
},
{
"name": "kube-proxy-token-kzt5h",
"readOnly": true,
"mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
}
],
"terminationMessagePath": "/dev/termination-log",
"terminationMessagePolicy": "File",
"imagePullPolicy": "IfNotPresent",
"securityContext": {
"privileged": true
}
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"serviceAccountName": "kube-proxy",
"serviceAccount": "kube-proxy",
"nodeName": "udubuntu",
"hostNetwork": true,
"securityContext": {},
"schedulerName": "default-scheduler",
"tolerations": [
{
"key": "node-role.kubernetes.io/master",
"effect": "NoSchedule"
},
{
"key": "node.cloudprovider.kubernetes.io/uninitialized",
"value": "true",
"effect": "NoSchedule"
},
{
"key": "node.kubernetes.io/not-ready",
"operator": "Exists",
"effect": "NoExecute"
},
{
"key": "node.kubernetes.io/unreachable",
"operator": "Exists",
"effect": "NoExecute"
},
{
"key": "node.kubernetes.io/disk-pressure",
"operator": "Exists",
"effect": "NoSchedule"
},
{
"key": "node.kubernetes.io/memory-pressure",
"operator": "Exists",
"effect": "NoSchedule"
}
]
},
"status": {
"phase": "Running",
"conditions": [
{
"type": "Initialized",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2018-03-15T01:24:45Z"
},
{
"type": "Ready",
"status": "False",
"lastProbeTime": null,
"lastTransitionTime": "2018-03-15T01:35:41Z",
"reason": "ContainersNotReady",
"message": "containers with unready status: [kube-proxy]"
},
{
"type": "PodScheduled",
"status": "True",
"lastProbeTime": null,
"lastTransitionTime": "2018-03-15T01:24:46Z"
}
],
"hostIP": "192.168.178.24",
"podIP": "192.168.178.24",
"startTime": "2018-03-15T01:24:45Z",
"containerStatuses": [
{
"name": "kube-proxy",
"state": {
"waiting": {
"reason": "CrashLoopBackOff",
"message": "Back-off 5m0s restarting failed container=kube-proxy pod=kube-proxy-5xc9c_kube-system(a227b43b-27ef-11e8-8cf2-b827eb03776e)"
}
},
"lastState": {
"terminated": {
"exitCode": 1,
"reason": "Error",
"startedAt": "2018-03-15T01:40:51Z",
"finishedAt": "2018-03-15T01:40:51Z",
"containerID": "docker://866dd8e7175bd71557b9dcfc84716a0f3abd634d5d78c94441f971b8bf24cd0d"
}
},
"ready": false,
"restartCount": 8,
"image": "gcr.io/google_containers/kube-proxy-arm:v1.9.4",
"imageID": "docker-pullable://gcr.io/google_containers/kube-proxy-arm@sha256:c6fa0de67fb6dbbb0009b2e6562860d1f6da96574d23617726e862f35f9344e7",
"containerID": "docker://866dd8e7175bd71557b9dcfc84716a0f3abd634d5d78c94441f971b8bf24cd0d"
}
],
"qosClass": "BestEffort"
}
}
</code></pre>
| <p>Yes, it's possible, and I've just done that for one of my customers.</p>
<p>Basically, the issue is that the kube-proxy DaemonSet that is deployed automatically is compiled for x64, because you wanted the master to be x64 while the nodes are ARM.</p>
<p>When you add ARM nodes to the cluster, the DaemonSet tries to deploy the x64 image on them, and fails.</p>
<p>You'll need to edit the default DaemonSet after installation to select only x64 nodes, and deploy another DaemonSet for ARM nodes.
This gist will walk you through:
<a href="https://gist.github.com/squidpickles/dda268d9a444c600418da5e1641239af" rel="noreferrer">Multiplatform (amd64 and arm) Kubernetes cluster setup</a></p>
<p>Hope this helps,
Ofir.</p>
|
<p>I'm currently working on a Kubernetes cluster. The cluster is working properly.
I need to establish communication between services without using a proxy.
For example, I have the services below:</p>
<ol>
<li>worker</li>
<li>app1</li>
<li>app2</li>
<li>app3</li>
</ol>
<p>The worker needs to log in to the app containers directly via SSH and run some commands.
In a docker-compose file it was easy, by using links and then ssh app1, ssh app2.
How do I do that in Kubernetes?</p>
| <p>You'll want to create a <a href="https://kubernetes.io/docs/concepts/services-networking/service#headless-services" rel="nofollow noreferrer">headless Service</a> (<code>spec.clusterIP: None</code>) selecting your app Pods. This will create a DNS entry (something like <code>my-svc.my-namespace.svc.cluster.local</code>) that will resolve to the set of IPs of the Pods selected by your Service. You can then loop through the returned list of Pod IPs and ssh into each.</p>
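<p>For illustration, a minimal headless Service for one of the apps (names, labels and the SSH port are assumptions based on the question):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: app1
spec:
  clusterIP: None      # headless: DNS returns the Pod IPs directly, no proxying
  selector:
    app: app1          # must match the labels on your app1 Pods
  ports:
  - port: 22           # the SSH port the app containers expose
</code></pre>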
<p>More details can be found <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service#services" rel="nofollow noreferrer">here</a>.</p>
|
<p>I have a Docker image with an option for a property file, like this: </p>
<pre><code>CMD java -jar /opt/test/test-service.war
--spring.config.location=file:/conf/application.properties
</code></pre>
<p>I use the <code>-v</code> volume mount in my <code>docker run</code> command as follows.</p>
<pre><code>-v /usr/xyz/props/application.properties:/conf/application.properties
</code></pre>
<p>I am not sure how to achieve the same thing in Kubernetes.<br>
I use Minikube to run Kubernetes on my local Mac.</p>
| <p>That should be an <strong><a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="noreferrer">host path volume</a></strong>, illustrated with this example pod.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: test-pd
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: test-container
volumeMounts:
- mountPath: /test-pd
name: test-volume
volumes:
- name: test-volume
hostPath:
# directory location on host
path: /data
# this field is optional
type: Directory
</code></pre>
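<p>Adapted to the paths from the question, roughly (a sketch; note that with Minikube the <code>hostPath</code> refers to the Minikube VM's filesystem, not your Mac, so the file must first exist inside the VM, e.g. via <code>minikube mount /usr/xyz/props:/usr/xyz/props</code>):</p>
<pre><code> volumeMounts:
 - mountPath: /conf
   name: props-volume
volumes:
- name: props-volume
  hostPath:
    # directory inside the Minikube VM containing application.properties
    path: /usr/xyz/props
</code></pre>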
|
<p>Is there a way to delete previous deployments with cli?</p>
<p>When I successfully run <code>oc import-image $APPLICATION</code> a new deployment appears with a RC and after a few runs I hit the RC max and will have to manually delete the previous deployments.</p>
<p>Is there a script similar to <code>oc delete rc $APPLICATION-$(($DEPLOYMENT_NUMBER - 1))</code>?</p>
| <p>Over time you can end up with a lot of replication controllers, and thus you need to clean them out. You can use <code>oc delete rc <rc-num></code> or, even better, <code>oc adm prune</code> to do this. Check out <code>oc adm options</code> and <code>oc adm prune --help</code> for options.</p>
<p><code>
$ oc adm prune deployments --keep-complete=5 --namespace=myproject --confirm
</code></p>
|
<p>I understand that in Kubernetes you don't want to "tie" a pod to a host, but in certain cases you might need to. </p>
<p>In my particular case I have a DB that lives on blockstorage which is mounted to a specific host. </p>
<p>What I am trying to accomplish with Kubernetes is the equivalent of a bind-mount in Docker. I want to specify the directory on the host that I need mounted in the pod, similar to this:</p>
<pre><code>/mnt/BTC_2:/root/.bitcoin:rw
</code></pre>
<p>How do I specify the location of where I want my persistent storage to be on the node/host? Would this be a <code>hostPath</code> volume like the following:</p>
<pre><code> volumeMounts:
- mountPath: /root/.bitcoin
name: test-volume
volumes:
- name: test-volume
hostPath:
# directory location on host
path: /mnt/BTC_2
</code></pre>
| <blockquote>
<p>I want to specify the directory on the host that I need mounted in the pod</p>
</blockquote>
<p>That should be <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">documented here</a></p>
<blockquote>
<p>A hostPath volume mounts a file or directory from the host node’s filesystem into your pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.</p>
</blockquote>
<p>Warning:</p>
<blockquote>
<p>The files or directories created on the underlying hosts are only writable by root. You either need to run your process as root in a privileged container or modify the file permissions on the host to be able to write to a hostPath volume</p>
</blockquote>
<pre><code> volumes:
- name: test-volume
hostPath:
# directory location on host
path: /data
# this field is optional
type: Directory
</code></pre>
|
<p>I'm trying to run a SonarQube pod in Openshift but it seems to be mounting the persistent volume with root as the owner.</p>
<p>How can we change this to a non-root user?
I created my persistent volume with 'hostPath'.</p>
<p>You can find some more information below:</p>
<pre><code>Caused by: java.nio.file.AccessDeniedException: /opt/sonarqube/data/es
at java.nio.file.Files.createDirectory(Files.java:674)
at java.nio.file.Files.createAndCheckIsDirectory(Files.java:781)
at java.nio.file.Files.createDirectories(Files.java:767)
at org.elasticsearch.env.NodeEnvironment.(NodeEnvironment.java:169)
at org.elasticsearch.node.Node.(Node.java:165)
... 6 common frames omitted
</code></pre>
<p>Here is the sonarqube directory screenshot</p>
<p><a href="https://i.stack.imgur.com/5Ox7i.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5Ox7i.png" alt="enter image description here"></a></p>
| <p>You can set a specific <code>securityContext</code> to</p>
<ul>
<li>change the group of mounted filesystems</li>
<li>change the user a pod is run as</li>
<li>pass SELinux options.</li>
</ul>
<p><a href="https://docs.openshift.org/latest/install_config/persistent_storage/pod_security_context.html" rel="nofollow noreferrer">https://docs.openshift.org/latest/install_config/persistent_storage/pod_security_context.html</a> offers some more background.</p>
<p>This setting is done in your DeploymentConfig. The key <code>securityContext</code> should already be present. With the following, the directory should be group-writable:</p>
<pre><code>securityContext:
fsGroup: <GROUPID OF SONAR>
</code></pre>
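<p>For example, a sketch of how that could look in the pod template of the DeploymentConfig (the GID 1000 is just an assumption; use the group your SonarQube process actually runs as):</p>
<pre><code>spec:
  template:
    spec:
      securityContext:
        fsGroup: 1000   # group ownership applied to mounted volumes such as /opt/sonarqube/data
</code></pre>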
|
<p>I am trying to mount an NFS share (outside of the k8s cluster) in my container via DNS lookup; my config is as below: </p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: service-a
spec:
containers:
- name: service-a
image: dockerregistry:5000/centOSservice-a
command: ["/bin/bash"]
args: ["/etc/init.d/jboss","start"]
volumeMounts:
- name: service-a-vol
mountPath: /myservice/por/data
volumes:
- name: service-a-vol
nfs:
server: nfs.service.domain
path: "/myservice/data"
restartPolicy: OnFailure
</code></pre>
<p>nslookup of <code>nfs.service.domain</code> works fine from my container. This is achieved via a <code>StubDomain</code>. However, when creating the container it fails to resolve the NFS server. Error:</p>
<pre><code>Warning FailedMount <invalid> kubelet, worker-node-1 MountVolume.SetUp failed for volume "service-a-vol" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/44aabfb8-2767-11e8-bcf9-fa163ece9426/volumes/kubernetes.io~nfs/service-a-vol --scope -- mount -t nfs nfs.service.domain:/myservice/data /var/lib/kubelet/pods/44aabfb8-2767-11e8-bcf9-fa163ece9426/volumes/kubernetes.io~nfs/service-a-vol
Output: Running scope as unit run-27293.scope.
mount.nfs: Failed to resolve server nfs.service.domain: Name or service not known
mount.nfs: Operation already in progress
</code></pre>
<p>If I modify <code>server: nfs.service.domain</code> to <code>server: 10.10.1.11</code> it works fine! So to summarise:</p>
<ol>
<li>DNS resolution of the service works fine</li>
<li>Mounting via DNS resolution does not</li>
<li>Mounting via specific IP address works</li>
<li>I have tried <code>Headless Service</code> instead of StubDomain but the same issue exists</li>
</ol>
<p>Any help much appreciated</p>
<p><em>Update 1</em>: If I add an entry <code>10.10.1.11 nfs.service.domain</code> to the /etc/hosts files of the worker/master nodes, then my configuration above with <code>server: nfs.service.domain</code> works. This is obviously not a desirable workaround...</p>
| <p>As pointed out by @Giorgio Cerruti and as referenced in <a href="https://github.com/kubernetes/kubernetes/issues/44528/" rel="nofollow noreferrer">this github ticket</a> among others, this is currently not possible, because the node itself needs to be able to resolve the DNS entry and it does not use kube-dns. Two possible solutions are:</p>
<ol>
<li>Update <code>/etc/hosts</code> of each kubernetes node to resolve the NFS endpoint (as per update above). This is a primitive solution.</li>
<li><p>A more robust fix that would work for this NFS service and any other remote service in the same domain as NFS is to add the remote DNS server to the Kubernetes nodes' <code>resolv.conf</code>:</p>
<pre><code>someolddomain.org service.domain xx.xxx.xx
nameserver 10.10.0.12
nameserver 192.168.20.22
nameserver 8.8.4.4
</code></pre></li>
</ol>
|
<p>I am running Kubernetes (Minikube) on my local Mac.</p>
<p>I am trying to set up a deployment with a Docker image and getting the below error. But the hello-world deployment with the Docker image "gcr.io/google-samples/node-hello:1.0" works as expected.</p>
<p>I am able to pull the image from a console on my local machine. Am I missing any setting here?</p>
<blockquote>
<p>"Failed to pull image
"docker.XYZ.com/dpace/dev/docker-service": rpc error:
code = Unknown desc = Error response from daemon: Get
https:/docker.XYZ.com/v2/: dial tcp: lookup
docker.XYZ.com on 10.0.2.3:53: read udp
10.0.2.15:59292->10.0.2.3:53: i/o timeout"</p>
</blockquote>
<p>I am able to pull the image using <code>docker pull docker.XYZ.com/dpace/dev/docker-service</code> in my local machine without any auth issue. It doesn't need auth for pulling images.</p>
<p>I tried logging into Minikube VM and Docker images returns the following.</p>
<blockquote>
<p>$ docker images REPOSITORY TAG<br />
IMAGE ID CREATED SIZE
k8s.gcr.io/kubernetes-dashboard-amd64 v1.8.1<br />
e94d2f21bc0c 3 months ago 121MB
gcr.io/google-containers/kube-addon-manager v6.5<br />
d166ffa9201a 4 months ago 79.5MB
gcr.io/k8s-minikube/storage-provisioner v1.8.0<br />
4689081edb10 4 months ago 80.8MB
gcr.io/k8s-minikube/storage-provisioner v1.8.1<br />
4689081edb10 4 months ago 80.8MB
k8s.gcr.io/k8s-dns-sidecar-amd64 1.14.5<br />
fed89e8b4248 5 months ago 41.8MB
k8s.gcr.io/k8s-dns-kube-dns-amd64 1.14.5<br />
512cd7425a73 5 months ago 49.4MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64 1.14.5<br />
459944ce8cc4 5 months ago 41.4MB k8s.gcr.io/echoserver<br />
1.4 a90209bb39e3 21 months ago 140MB gcr.io/google_containers/pause-amd64 3.0<br />
99e59f495ffa 22 months ago 747kB k8s.gcr.io/pause-amd64<br />
3.0 99e59f495ffa 22 months ago 747kB gcr.io/google-samples/node-hello 1.0<br />
4c7ea8709739 23 months ago 644MB</p>
</blockquote>
<p>Though the images are there, when I try to pull the existing image, it fails with the below error.</p>
<blockquote>
<p>$ docker pull gcr.io/google-samples/node-hello:1.0 Error response from
daemon: Get <a href="https://gcr.io/v2/" rel="nofollow noreferrer">https://gcr.io/v2/</a>: dial tcp: lookup gcr.io on
10.0.2.3:53: read udp 10.0.2.15:44023->10.0.2.3:53: i/o timeout</p>
</blockquote>
<p>When I try "docker login docker.XYZ.com", it prompts me to enter the credential. It throws the below error after entering the password. Same error while trying to pull the image also.</p>
<blockquote>
<p>"Error response from daemon: Get <a href="https://docker.XYZ.com/v2/" rel="nofollow noreferrer">https://docker.XYZ.com/v2/</a>: dial tcp:
lookup docker.XYZ.com on 10.0.2.3:53: read udp
10.0.2.15:41849->10.0.2.3:53: i/o timeout"</p>
<p>The command "curl google.com" also not working. "Could not resolve
host: google.com"</p>
</blockquote>
<p>Is there any setting to be done inside the Minikube VM? I use VirtualBox.</p>
| <p>It looks like DNS in your Minikube is broken; that's why you cannot pull anything.</p>
<p>Here is an <a href="https://github.com/kubernetes/minikube/issues/2302" rel="nofollow noreferrer">issue on GitHub</a> with a similar problem.</p>
<p>Try to update your Minikube and your hypervisor (in most cases VirtualBox) to the latest version (check <a href="https://github.com/kubernetes/minikube/releases" rel="nofollow noreferrer">here</a>) and recreate the cluster; it should help.</p>
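<p>For example (a sketch; the exact flags depend on your Minikube version and hypervisor):</p>
<pre><code>minikube delete
minikube start --vm-driver=virtualbox
</code></pre>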
|
<p>I was testing some commands and I ran</p>
<pre><code>$ kubectl delete nodes --all
</code></pre>
<p>and it <strike>deletes</strike> de-registers all the nodes including the masters. Now I can't connect to the cluster (Well, Obviously as the master is deleted).</p>
<p>Is there a way to prevent this as anyone could accidentally do this?</p>
<p>Extra Info: I am using KOps for deployment.</p>
<p>P.S. It does not delete the EC2 instances and the nodes come up on doing a EC2 instance reboot on all the instances.</p>
| <p>By default, you are using something like a superuser who can do anything he wants with the cluster.</p>
<p>To limit access to the cluster for other users you can use <a href="https://kubernetes.io/docs/admin/authorization/rbac" rel="nofollow noreferrer">RBAC</a> authorization. With RBAC rules you can manage access and limits per resource and action.</p>
<p>In a few words, to do that you need to:</p>
<ol>
<li><p>Create a new cluster with Kops using <code>--authorization RBAC</code>, or modify an existing one by adding the 'rbac' option to the 'authorization' section of the cluster's configuration:</p>
<pre><code>authorization:
  rbac: {}
</code></pre>
<li><p>Now we can follow <a href="https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/" rel="nofollow noreferrer">this</a> instruction from Bitnami to create a user. For example, let's create a user which has access only to the <code>office</code> namespace and only for a few actions. So, we need to create the namespace first:</p>
<p><code>kubectl create namespace office</code></p></li>
<li><p>Create a key and a certificate signing request for the new user:</p>
<p><code>openssl genrsa -out employee.key 2048</code><br>
<code>openssl req -new -key employee.key -out employee.csr -subj "/CN=employee/O=bitnami"</code></p></li>
<li><p>Now, using your CA key (it is available in the S3 bucket under PKI), we need to sign the new certificate:</p>
<p><code>openssl x509 -req -in employee.csr -CA CA_LOCATION/ca.crt -CAkey CA_LOCATION/ca.key -CAcreateserial -out employee.crt -days 500</code></p></li>
<li><p>Creating credentials:</p>
<p><code>kubectl config set-credentials employee --client-certificate=/home/employee/.certs/employee.crt --client-key=/home/employee/.certs/employee.key</code></p></li>
<li><p>Set the right context:</p>
<p><code>kubectl config set-context employee-context --cluster=YOUR_CLUSTER_NAME --namespace=office --user=employee</code></p></li>
<li><p>Now we have a user without access to anything. Let's create a new role with limited access; here is an example of a Role which grants access only to deployments, replicasets and pods, to create, delete and modify them and nothing more. Create a file <code>role-deployment-manager.yaml</code> with the Role configuration:</p></li>
</ol>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: office
  name: deployment-manager
rules:
- apiGroups: ["", "extensions", "apps"]
  resources: ["deployments", "replicasets", "pods"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
</code></pre>
<ol start="8">
<li>Create a new file <code>rolebinding-deployment-manager.yaml</code> with a RoleBinding, which will attach your Role to the user:</li>
</ol>
<pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: deployment-manager-binding
  namespace: office
subjects:
- kind: User
  name: employee
  apiGroup: ""
roleRef:
  kind: Role
  name: deployment-manager
  apiGroup: ""
</code></pre>
<ol start="9">
<li>Now apply those configurations:</li>
</ol>
<pre><code>kubectl create -f role-deployment-manager.yaml
kubectl create -f rolebinding-deployment-manager.yaml
</code></pre>
<p>So, now you have a user with limited access and he cannot destroy your cluster.</p>
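<p>As a quick sanity check of the limited access (using the names from the steps above):</p>
<pre><code>kubectl --context=employee-context get pods                  # allowed, office namespace
kubectl --context=employee-context get pods -n kube-system   # should be forbidden
</code></pre>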
|
<p>I have set up a cron job on GCP Kubernetes. It runs once per day at 10:00 am.
The job runs as expected; however, I don't really understand what the charts say on the GCP K8s console.</p>
<p><a href="https://i.stack.imgur.com/cwXb4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cwXb4.png" alt="enter image description here"></a></p>
<p>As shown in the charts, there are around 1.5 CPU and 8G RAM in use at this point, when the cron job is not running. I expect the current usage to be zero as it is <strong>not</strong> running. </p>
<p>Can anyone see what is wrong? Or am I reading the charts wrong?</p>
<p>Note I do retain 7 jobs in the history. Each job ran about 15sec and completed successfully.</p>
<pre><code>State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 08 Mar 2018 04:00:56 +0000
Finished: Thu, 08 Mar 2018 04:01:09 +0000
</code></pre>
<p><a href="https://i.stack.imgur.com/HhLA8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HhLA8.png" alt="enter image description here"></a></p>
<p>20180320 EDIT:
I found that the graphs from my other cron jobs all look the same. Is it something I set up wrong?</p>
<p><a href="https://i.stack.imgur.com/sMTkQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sMTkQ.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/lJN0q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lJN0q.png" alt="enter image description here"></a></p>
| <p>I reproduced your situation in the following way:</p>
<ol>
<li>Created a CronJob with the name "Application"</li>
<li>Added an application using a Deployment with the name "Application"</li>
</ol>
<p>And now, on the CronJob details graphs I see information not only about the cronjob itself, but also about the application.</p>
<p>So I think the dashboard uses the <strong>name of the object</strong> to fetch the data.</p>
<p>It looks like you have some other deployment, replicaset, etc. which is always running and has the same name as your cronjob, and on the graphs you see a mix of data from it and from your cronjob.</p>
|
<p>What is the correct way to expose a service <strong>behind</strong> an ingress? Should I use NodePort? ClusterIP? Something else? Most tutorials just ignore this point, and some show the service using a NodePort and then add something like "would be enough for this demo", which isn't very reassuring.</p>
| <p>Services behind an ingress may simply be ClusterIP. The only reason you'd need NodePort is if you had some external LoadBalancer or something where you needed a static port on which you can access the service inside the cluster. As ingresses route traffic within the cluster, you can just use ClusterIP.</p>
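<p>For illustration, a minimal ClusterIP Service that an Ingress could route to (names and ports are placeholders):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: ClusterIP      # the default; no NodePort needed behind an ingress
  selector:
    app: my-app
  ports:
  - port: 80           # port the Ingress backend points at
    targetPort: 8080   # port your container listens on
</code></pre>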
|
<p>I have been playing around with Hyperledger to make it run on Kubernetes, and I was successful in doing so. The only thing I was not happy with was the solution/workaround for the container that is spun up when chaincode is instantiated by the peer. </p>
<p>Kubernetes is simply not aware of this container, as it was started not by Kubernetes but by the peer. And to make the peer and chaincode talk to each other, I had to update the Docker daemon running on the Kubernetes node with the DNS server IP address of the kube-dns service.</p>
<p>Is it possible to instantiate a chaincode in a way where Kubernetes is aware of the chaincode container,
and where the chaincode container is able to talk to the peer seamlessly, rather than by updating the Docker daemon process of the node within the Kubernetes cluster?</p>
| <p>I have been investigating the same issue you are having. One alternative to using the Docker daemon on your Kubernetes node is spinning up a new container in your Pod using the DinD (Docker in Docker) technique. In this way you can instantiate the chaincode container in a natural way (you will be able to use kube-dns, for example) as it will share the same network space as the Kubernetes Pod. I couldn't find any tutorial on the internet showing an implementation of this idea, but if you find one (or do it yourself) please share it on this thread.</p>
<p>Thank you</p>
<p>Reference:
<a href="https://medium.com/kokster/simpler-setup-for-hyperledger-fabric-on-kubernetes-using-docker-in-docker-8346f70fbe80" rel="nofollow noreferrer">https://medium.com/kokster/simpler-setup-for-hyperledger-fabric-on-kubernetes-using-docker-in-docker-8346f70fbe80</a></p>
|
<p>I am familiar with Docker in Docker (dind), but using it along with the <code>microsoft/azure-cli</code> image throws <code>docker command not found</code>.</p>
<p>Here is my setup for the <code>gitlab-ci.yml</code> file. I have created a <code>Service Principal</code> which is used to authenticate to the Azure cloud and the respective resource group. </p>
<pre><code>image: docker:latest
variables:
PASSWORD: *********
TENANT_ID: *****-************-*************
APP_ID: *********-*****-*****
CLIENT_ID: ****************
ACR_ID: *******************
stages:
- build
- deploy
services:
- docker:dind
before_script:
- docker info
build_staging_image:
stage: build
image: microsoft/azure-cli
script:
- az login --service-principal --username $APP_ID --password $PASSWORD --tenant $TENANT_ID
- docker build -t azure-vote:latest ./azure-vote
- docker tag azure-vote votingtestapp.azurecr.io/azure-vote:latest
- docker push votingtestapp.azurecr.io/azure-vote:latest
deploy:develop:
stage: deploy
script:
- az login --service-principal --username $APP_ID --password $PASSWORD --tenant $TENANT_ID
- az acr login --name votingTestApp
- az role assignment create --assignee $CLIENT_ID --role Reader --scope $ACR_ID
- kubectl apply -f azure-vote-all-in-one-redis.yaml
only:
- develop
</code></pre>
<p>Is there any way to fix this error? I am just trying to create a CI/CD pipeline. </p>
| <p>The problem is that the <code>microsoft/azure-cli</code> Docker image does not have Docker installed, and the Docker socket is not mounted into the container, so the <code>docker</code> command will fail.</p>
<p>You are using <code>microsoft/azure-cli</code> just to log in to the registry. But note that you can also log in using <code>docker login</code>. Check <a href="https://learn.microsoft.com/en-us/azure/container-registry/container-registry-get-started-docker-cli#log-in-to-a-registry" rel="noreferrer">Log in to a registry</a>.</p>
<p>Therefore, to solve the issue, use a <code>dind</code>-capable image and log in to the Azure registry using:</p>
<pre><code>docker login myregistry.azurecr.io -u xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx -p myPassword
</code></pre>
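<p>For illustration, a sketch of how the build job from the question could look with a Docker-capable image (variable names are reused from the question; whether the service principal's appId/password work for <code>docker login</code> against ACR depends on the roles granted to it):</p>
<pre><code>build_staging_image:
  stage: build
  image: docker:latest          # has the docker CLI, unlike microsoft/azure-cli
  services:
    - docker:dind
  script:
    - docker login votingtestapp.azurecr.io -u $APP_ID -p $PASSWORD
    - docker build -t votingtestapp.azurecr.io/azure-vote:latest ./azure-vote
    - docker push votingtestapp.azurecr.io/azure-vote:latest
</code></pre>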
|
<p>I'm using two VMs with Atomic Host (1 Master, 1 Node; Centos Image). I want to use NFS shares from another VM (Ubuntu Server 16.04) as persistent volumes for my pods. I can mount them manually and in Kubernetes (Version 1.5.2) the persistent volumes are successfully created and bound to my PVCs. Also they are mounted in my pods. <strong>But when I try to write or even read from the corresponding folder inside the pod, I get the error <code>Permission denied</code>.</strong> From my research I think, the problem lies within the folders permission/owner/group on my NFS Host.</p>
<p>My exports file on the Ubuntu VM (<code>/etc/exports</code>) has 10 shares with the following pattern (The two IPs are the IPs of my Atomic Host Master and Node):</p>
<pre><code>/home/user/pv/pv01 192.168.99.101(rw,insecure,async,no_subtree_check,no_root_squash) 192.168.99.102(rw,insecure,async,no_subtree_check,no_root_squash)
</code></pre>
<p>In the image for my pods I create a new user named <code>guestbook</code>, so that the container doesn't use a privileged user, as this insecure. I read many post like <a href="https://www.linuxquestions.org/questions/linux-server-73/write-permission-for-users-on-nfs-folder-852800/" rel="nofollow noreferrer">this one</a>, that state, you have to set the permissions to world-writable or using the same UID and GID for the shared folders. So in my Dockerfile I create the <code>guestbook</code> user with the UID <code>1003</code> and a group with the same name and GID <code>1003</code>:</p>
<pre><code>RUN groupadd -r guestbook -g 1003 && useradd -u 1003 -r -g 1003 guestbook
</code></pre>
<p>On my NFS Host I also have a user named <code>guestbook</code> with UID <code>1003</code> as a member of the group <code>nfs</code> with GID <code>1003</code>. The permissions of the shared folders (with <code>ls -l</code>) are as following:</p>
<pre><code>drwxrwxrwx 2 guestbook nfs 4096 Feb 19 11:23 pv01
</code></pre>
<p>(world writable, owner guestbook, group nfs). In my Pod I can see the permissions of the mounted folder <code>/data</code> (again with <code>ls -l</code>) as:</p>
<pre><code>drwxrwxrwx. 2 guestbook guestbook 4096 Feb 9 13:37 data
</code></pre>
<p>The persistent Volumes are created with an YAML file with the pattern:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv01
annotations:
pv.beta.kubernetes.io/gid: "1003"
spec:
capacity:
storage: 200Mi
accessModes:
- ReadWriteOnce
- ReadWriteMany
persistentVolumeReclaimPolicy: Recycle
nfs:
path: /home/user/pv/pv01
server: 192.168.99.104
</code></pre>
<p>The Pod is created with this YAML file:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: get-started
spec:
replicas: 3
template:
metadata:
labels:
app: get-started
spec:
containers:
- name: get-started
image: docker.io/cebberg/get-started:custom5
ports:
- containerPort: 2525
env:
- name: GET_HOSTS_FROM
value: dns
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: redis
key: database-password
volumeMounts:
- name: log-storage
mountPath: "/data/"
imagePullPolicy: Always
securityContext:
privileged: false
volumes:
- name: log-storage
persistentVolumeClaim:
claimName: get-started
restartPolicy: Always
dnsPolicy: ClusterFirst
</code></pre>
<p>And the PVC with YAML file:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: get-started
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 100Mi
</code></pre>
<p>I tried different configuration for the owner/group of the folders. If I use my normal user (which is the same on all systems) as owner and group, I can mount manually and read and write in the folder. But I don't want to use my normal user, but use another user (and especially not a privileged user).</p>
<p><strong>What permissions do I have to set, so that the user I create in my Pod can write to the NFS volume?</strong></p>
| <p>I found the solution to my problem:
by accident I found log entries that appear every time I try to access the NFS volumes from my pods. They say that SELinux has blocked access to the folder because of a different security context.</p>
<p>To resolve the issue, I simply had to turn on the corresponding SELinux boolean <code>virt_use_nfs</code> with the command</p>
<pre><code>setsebool virt_use_nfs on
</code></pre>
<p>This has to be done on all nodes to make it work correctly.</p>
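<p>One small addition (standard <code>setsebool</code> behaviour, not from the original troubleshooting): without <code>-P</code> the boolean resets on reboot, so the persistent form is:</p>
<pre><code>setsebool -P virt_use_nfs on
</code></pre>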
<p>EDIT:
I remembered that I now use <code>sec=sys</code> as a mount option in <code>/etc/exports</code>. This provides access control based on the UID and GID of the user creating a file (which seems to be the default). If you use <code>sec=none</code> you also have to turn on the SELinux boolean <code>nfsd_anon_write</code>, so that the user <code>nfsnobody</code> has permission to create files.</p>
|
<p>I'm trying to deploy an app to a Kubernetes cluster through Jenkins using the <code>Kubernetes Continuous Deploy</code> plugin. I copied the config <code>.yml</code> file onto the Jenkins machine and gave the path in the build step, and I'm getting an error:</p>
<blockquote>
<p>"No matching configuration files found" </p>
</blockquote>
<p>screenshots of the plugin and console out are in links.</p>
<p><a href="https://i.stack.imgur.com/7igWg.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7igWg.jpg" alt="Plugin"></a></p>
<p><a href="https://i.stack.imgur.com/wTcUu.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wTcUu.jpg" alt="Console output"></a></p>
| <p>I saw the same error on my Jenkins server and fixed it by correcting the path.</p>
<p>Can you put your .yml files into your source control? The Kubernetes Continuous Deploy plugin checks your workspace.</p>
<p>Otherwise the plugin will look for your path and not find the file. I attached images so you can see my working configuration.</p>
<p><a href="https://i.stack.imgur.com/En8jA.png" rel="nofollow noreferrer">config path</a>
<a href="https://i.stack.imgur.com/y17vf.png" rel="nofollow noreferrer">project structure</a></p>
|
<p>I use io.fabric8.kubernetes-client, version 3.1.8, to do a RollingUpdate of Kubernetes resources. It is fine for a Deployment, but I get an exception for a StatefulSet. It is also fine if I use 'kubectl apply -f ***.yaml' for the StatefulSet.</p>
<p>Code to RollingUpdate Deployment:</p>
<pre><code>public void createOrReplaceResourceByYaml(String namespace, KubernetesResource resource) {
KubernetesClient client = k8sRestClient.newKubeClient();
Deployment deployment = (Deployment) resource;
logger.info(String.format("Create/Replace Deployment [%s] in namespace [%s].", ((Deployment) resource).getMetadata().getName(), namespace));
NonNamespaceOperation<Deployment, DeploymentList, DoneableDeployment, ScalableResource<Deployment, DoneableDeployment>> deployments = client.extensions().deployments().inNamespace(namespace);
Deployment result = deployments.createOrReplace(deployment);
logger.info(String.format("Created/Replaced Deployment [%s].", result.getMetadata().getName()));
}
</code></pre>
<p>Code to RollingUpdate StatefulSet</p>
<pre><code>public void createOrReplaceResourceByYaml(String namespace, KubernetesResource resource) {
KubernetesClient client = k8sRestClient.newKubeClient();
StatefulSet statefulSet = (StatefulSet) resource;
logger.info(String.format("Create/Replace StatefulSet [%s] in namespace [%s].", statefulSet.getMetadata().getName(), namespace));
NonNamespaceOperation<StatefulSet, StatefulSetList, DoneableStatefulSet, RollableScalableResource<StatefulSet, DoneableStatefulSet>> statefulSets = client.apps().statefulSets().inNamespace(namespace);
StatefulSet result = statefulSets.createOrReplace(statefulSet);
logger.info(String.format("Created/Replaced StatefulSet [%s].", result.getMetadata().getName()));
}
</code></pre>
<p>Exception when do RollingUpdate of StatefulSet</p>
<blockquote>
<p>Failure executing: PUT at: <a href="https://kubernetes.default.svc/apis/apps/v1beta1/namespaces/itsma1/statefulsets/pro-rabbitmq" rel="noreferrer">https://kubernetes.default.svc/apis/apps/v1beta1/namespaces/itsma1/statefulsets/pro-rabbitmq</a>. Message: StatefulSet.apps "pro-rabbitmq" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden.. Received status: Status(apiVersion=v1, code=422, details=StatusDetails(causes=[StatusCause(field=spec, message=Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden., reason=FieldValueForbidden, additionalProperties={})], group=apps, kind=StatefulSet, name=pro-rabbitmq, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=StatefulSet.apps "pro-rabbitmq" is invalid: spec: Forbidden: updates to statefulset spec for fields other than 'replicas', 'template', and 'updateStrategy' are forbidden., metadata=ListMeta(resourceVersion=null, selfLink=null, additionalProperties={}), reason=Invalid, status=Failure, additionalProperties={}).</p>
</blockquote>
<p><strong>I am curious why the error happened and how to fix it.</strong></p>
| <p>In a StatefulSet, unlike a Deployment, you can update only a limited number of fields: <code>replicas</code>, <code>template</code>, and <code>updateStrategy</code>.</p>
<p>Your issue happens because Fabric8 is trying to update fields which cannot be updated.</p>
<p>The only thing you can do is carefully prepare a new <code>StatefulSet</code> object which has the same name as the old one but changes only the fields you are allowed to update. </p>
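<p>A rough sketch of that approach with the fabric8 client (not tested; method names may differ slightly between client versions): fetch the live object and copy over only the mutable fields before replacing it.</p>
<pre><code>StatefulSet existing = client.apps().statefulSets()
        .inNamespace(namespace)
        .withName(statefulSet.getMetadata().getName())
        .get();

if (existing == null) {
    // nothing to update yet, just create it
    client.apps().statefulSets().inNamespace(namespace).create(statefulSet);
} else {
    // only replicas, template and updateStrategy may change
    existing.getSpec().setReplicas(statefulSet.getSpec().getReplicas());
    existing.getSpec().setTemplate(statefulSet.getSpec().getTemplate());
    existing.getSpec().setUpdateStrategy(statefulSet.getSpec().getUpdateStrategy());
    client.apps().statefulSets()
          .inNamespace(namespace)
          .withName(existing.getMetadata().getName())
          .replace(existing);
}
</code></pre>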
<p>An alternative is to delete the old <code>StatefulSet</code> before uploading a new one with the same name.</p>
<p>Also, try to use Kubernetes 1.9 or above if you are not already, because <code>StatefulSet</code> is officially stable only in 1.9 and above.</p>
<p>BTW, here is a <a href="https://github.com/fabric8io/kubernetes-client/issues/931" rel="noreferrer">bug</a> in Fabric8's GitHub which can affect your code.</p>
|
<p><em>helmfile</em> was released recently and we would like to adopt it.
<a href="https://github.com/roboll/helmfile" rel="nofollow noreferrer">https://github.com/roboll/helmfile</a></p>
<p><strong>my simple helmfile:</strong></p>
<pre><code>vim charts.yaml
...
releases:
# Published chart example
- name: prometheus_no_rbac # name of this release
namespace: prometheus # target namespace
chart: stable/prometheus # the chart being installed to create this release, referenced by `repository/chart` syntax
#values: [ vault.yaml ] # value files (--values)
set: # values (--set)
- name: rbac.create
value: false
...
wq!
</code></pre>
<p><strong>When I run:</strong></p>
<pre><code>./helmfile -f charts.yaml
NAME:
helmfile -
USAGE:
helmfile [global options] command [command options] [arguments...]
VERSION:
v0.8
COMMANDS:
repos sync repositories from state file (helm repo add && helm repo update)
charts sync charts from state file (helm repo upgrade --install)
diff diff charts from state file against env (helm diff)
sync sync all resources from state file (repos && charts)
delete delete charts from state file (helm delete)
GLOBAL OPTIONS:
--file FILE, -f FILE load config from FILE (default: "charts.yaml")
--quiet, -q silence output
--kube-context value Set kubectl context. Uses current context by default
--help, -h show help
--version, -v print the version
</code></pre>
<p>I just wanted to rewrite this piece of <strong>working code</strong>:</p>
<pre><code>helm install stable/prometheus --name prom --set rbac.create=false --namespace=prometheus
</code></pre>
| <p>A working example of <strong>helmfile</strong> usage:</p>
<pre><code>cat helmfile.yaml
context: <my_context> # not mandatory I guess
releases:
# Published chart example
- name: promnorbacxubuntu # name of this release
namespace: prometheus # target namespace
chart: stable/prometheus # the chart being installed to create this release, referenced by `repository/chart` syntax
set: # values (--set)
- name: rbac.create
value: false
</code></pre>
<p><strong>Usage</strong>:</p>
<p><code>./helmfile -f helmfile.yaml sync</code></p>
<p>The problem was that they have released a new version <strong>v0.10</strong>
<a href="https://github.com/roboll/helmfile/releases/tag/v0.10" rel="nofollow noreferrer">https://github.com/roboll/helmfile/releases/tag/v0.10</a></p>
<p><strong>Github Issue</strong>: <a href="https://github.com/roboll/helmfile/issues/55#issuecomment-373714894" rel="nofollow noreferrer">https://github.com/roboll/helmfile/issues/55#issuecomment-373714894</a></p>
<p>I have tested it in following envs.:</p>
<ul>
<li>Ubuntu 16.04</li>
<li>Centos 7.3</li>
<li>windows 10 via cygwin with minikube + Virtualbox</li>
</ul>
<p>Enjoy!</p>
|
<p>I am really new to Kubernetes. I created a Kubernetes cluster with this guide <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">using kubeadm</a>. The cluster consists of one master node and two worker nodes. Since I want to access the Kubernetes web UI via the master apiserver (from a browser on my laptop), I modified <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code> following these guides: <a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/" rel="nofollow noreferrer">K8s WebUI</a>, <a href="https://github.com/kubernetes/dashboard/wiki/Access-control#basic" rel="nofollow noreferrer">Access control</a>. What I did is add the following args in <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code>:</p>
<pre><code>- --authentication-mode=basic
- --basic-auth-file=/etc/kubernetes/auth.csv
- hostPath:
path: /etc/kubernetes/auth.csv
name: kubernetes-dashboard
- mountPath: /etc/kubernetes/auth.csv
name: kubernetes-dashboard
readOnly: true
</code></pre>
<p>I have the password and user name in the <code>auth.csv</code> file. However, after I modified the <code>.yaml</code> file, my kube-apiserver process crashed. I checked by running <code>ps -aux|grep kube</code> to see which processes were running. The result: <code>kube-scheduler, kube-controller-manager, /usr/bin/kubelet</code> were all running, but the <code>kube-apiserver</code> process was not found. <strong>I was wondering what a graceful way is to restart Kubernetes and bring my cluster back to the state it was in immediately before changing the <code>.yaml</code>.</strong></p>
<p><strong>In addition, I would appreciate it if someone could show me the exact steps for adding username/password authentication so that I can access the Kubernetes Dashboard UI via a browser on my laptop, or any other way that lets me view the K8s Web UI on my Mac laptop</strong>. I found a <a href="https://stackoverflow.com/questions/41566408/restart-kubernetes-api-server-with-different-options">similar question</a>, but I still cannot make it work.</p>
<p>The environment:</p>
<ul>
<li>Three ubuntu 16 server: one master two nodes</li>
<li>Kubernetes version 1.9</li>
<li>I can SSH to the three machines and have root privilege.</li>
</ul>
<p><strong>update:</strong> the <code>kube-apiserver.yaml</code> file attached.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
creationTimestamp: null
labels:
component: kube-apiserver
tier: control-plane
name: kube-apiserver
namespace: kube-system
spec:
containers:
- command:
- kube-apiserver
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
- --insecure-port=0
- --advertise-address=172.16.28.125
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --secure-port=6443
- --enable-bootstrap-token-auth=true
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --requestheader-group-headers=X-Remote-Group
- --requestheader-allowed-names=front-proxy-client
- --service-account-key-file=/etc/kubernetes/pki/sa.pub
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --allow-privileged=true
- --requestheader-username-headers=X-Remote-User
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --service-cluster-ip-range=10.96.0.0/12
- --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
- --authorization-mode=Node,RBAC
- --etcd-servers=http://127.0.0.1:2379
- --authentication-mode=basic
- --basic-auth-file=/etc/kubernetes/auth.csv
image: gcr.io/google_containers/kube-apiserver-amd64:v1.9.4
livenessProbe:
failureThreshold: 8
httpGet:
host: 172.16.28.125
path: /healthz
port: 6443
scheme: HTTPS
initialDelaySeconds: 15
timeoutSeconds: 15
name: kube-apiserver
resources:
requests:
cpu: 250m
volumeMounts:
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/pki
name: ca-certs-etc-pki
readOnly: true
- mountPath: /etc/kubernetes/auth.csv
name: kubernetes-dashboard
readOnly: true
hostNetwork: true
volumes:
- hostPath:
path: /etc/kubernetes/pki
type: DirectoryOrCreate
name: k8s-certs
- hostPath:
path: /etc/ssl/certs
type: DirectoryOrCreate
name: ca-certs
- hostPath:
path: /etc/pki
type: DirectoryOrCreate
name: ca-certs-etc-pki
- hostPath:
path: /etc/kubernetes/auth.csv
name: kubernetes-dashboard
status: {}
</code></pre>
| <p>Based on official <a href="https://kubernetes.io/docs/reference/generated/kube-apiserver/" rel="nofollow noreferrer">documentation</a>, <code>--authentication-mode=basic</code> is not a valid option for Kubernetes 1.9 api server.</p>
<p>Try to remove it; hopefully that will help.</p>
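<p>In other words, a sketch of the relevant args in the manifest (the hostPath volume and volumeMount from your file can stay as they are); in 1.9 enabling basic auth only needs the file flag:</p>
<pre><code>    - --basic-auth-file=/etc/kubernetes/auth.csv
    # remove the line "- --authentication-mode=basic"; kube-apiserver has no such flag
</code></pre>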
<p>As for exposing your dashboard for external access, the better way is to use <code>kubectl proxy</code> for that, but if you want to access the Dashboard directly, the only more or less secure way is to use an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a>.</p>
<p>I recommend using the Helm package manager to manage all installations in your cluster; it is much easier and more useful than writing all the configs manually.</p>
<p>So, to get a dashboard behind an Ingress on bare-metal, you need:</p>
<ol>
<li>Make sure your cluster works again :)</li>
<li>Install Helm to your PC using <a href="https://docs.helm.sh/using_helm/#installing-helm" rel="nofollow noreferrer">official documentation</a>.</li>
<li>Initialize Helm by calling <code>helm init</code>. It will install its server part (Tiller) into your Kubernetes cluster. You can check all the details about initialization in the documentation, but usually it just works.</li>
<li>Now you need to install an Ingress controller. In two words, Ingress in Kubernetes is a special service, like a proxy, which enables you to get a static ingress point for applications inside a cluster. We will use the Ingress chart based on <a href="https://github.com/kubernetes/charts/tree/master/stable/nginx-ingress" rel="nofollow noreferrer">Nginx</a>. You can check the available options in its repo. To install it with a basic configuration, call:</li>
</ol>
<p><code>helm install stable/nginx-ingress --set=controller.service.type=NodePort</code></p>
<ol start="5">
<li>So, now we have an Ingress controller and it's time to install the Dashboard using its <a href="https://github.com/kubernetes/charts/tree/master/stable/kubernetes-dashboard" rel="nofollow noreferrer">chart</a>. I highly recommend using an HTTPS connection instead of HTTP, but for now we will use HTTP to make it faster to deploy. You can read about how to enable HTTPS connections on Ingress <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#tls" rel="nofollow noreferrer">here</a> (you will need to add a secret containing your TLS key and cert and set the TLS configuration in the chart). So, let's install the dashboard:</li>
</ol>
<pre><code>helm install stable/kubernetes-dashboard \
  --set=ingress.enabled=True,ingress.hosts=my-dashboard.local
</code></pre>
<ol start="6">
<li>Now check which node your Ingress pod is running on with <code>kubectl describe pod $pod-with-ingress</code> and add the IP address of that node to your <code>hosts</code> file with the FQDN <code>my-dashboard.local</code>.</li>
</ol>
<p>Finally, the dashboard should be available in your browser at the <code>http://my-dashboard.local</code> address.</p>
<p>P.S. I highly recommend also setting up RBAC on your cluster to manage the privileges of each user and application in it, including the dashboard.</p>
|
<p>I'm using the CoreOS Tectonic sandbox. My deployment.yaml file contains a container which should detect the Docker daemon running on the host via Kubernetes.
The container uses the Docker daemon to identify Docker events. For some reason the Docker daemon is not being detected. </p>
<pre><code>deployment.yaml
containers:
- name: idn-docker
image: sample/id-docker:latest
- name: docker-socket
mountpath: /var/run/docker.sock
</code></pre>
<p>Can someone please help me identify what the problem is?</p>
| <p>To run Docker in Docker, you have 2 options: DooD (Docker outside of Docker) and DinD (Docker in Docker). I think you need the first, because you need access to events on the host machine.</p>
<p>Here is a good <a href="https://applatix.com/case-docker-docker-kubernetes-part-2/" rel="nofollow noreferrer">article</a> about both schemes.</p>
<p>Example of pod's configuration:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: idn-docker
spec:
containers:
- name: idn-docker
image: sample/id-docker:latest
volumeMounts:
- mountPath: /var/run
name: docker-sock
volumes:
- name: docker-sock
hostPath:
path: /var/run
</code></pre>
<p>You can use the <code>containers</code> section from this example in your deployment, because the structure of the <code>template</code> section is the same Pod template as a separate pod configuration.</p>
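<p>For illustration, here is a rough sketch of how that could look as a Deployment (based on the names from your snippet; treat it as an assumption-based example, not a tested manifest):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: idn-docker
spec:
  replicas: 1
  selector:
    matchLabels:
      app: idn-docker
  template:
    metadata:
      labels:
        app: idn-docker
    spec:
      containers:
      - name: idn-docker
        image: sample/id-docker:latest
        volumeMounts:
        - mountPath: /var/run     # the Docker socket lives at /var/run/docker.sock
          name: docker-sock
      volumes:
      - name: docker-sock
        hostPath:
          path: /var/run
</code></pre>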
<p>But, please keep in mind, that solution will have some limitations:</p>
<blockquote>
<p><em>Pod Networking</em> - Cannot access the container using Pod IP.</p>
<p><em>Pod Lifecycle</em> - On Pod termination, this container will keep running especially if the container was started with -d flag.</p>
<p><em>Pod Cleanup</em> - Graph storage will not be clean up after pod terminates.</p>
<p><em>Scheduling and Resource Utilization</em> - Cpu and Memory requested by Pod, will only be for the Pod and not the container spawned from the Pod. Also, limits on CPU and memory settings for the Pod will not be inherited by the spawned container.</p>
</blockquote>
|
<p>From a quick read of the Kubernetes docs, I noticed that the kube-proxy behaves as a Level-4 proxy, and perhaps works well for TCP/IP traffic (s.a. typically HTTP traffic). </p>
<p>However, there are other protocols like SIP (that could be over TCP or UDP), RTP (that is over UDP), and core telecom network signaling protocols like DIAMETER (over TCP or SCTP) or likewise M3UA (over SCTP). Is there a way to handle such traffic in application running in a Kubernetes minion ?</p>
<p>In my reading, I have come across the notion of Ingress API of Kuberntes, but I understood that it is a way to extend the capabilities of the proxy. Is that correct ? </p>
<p>Also, it is true that currently there is no known implementation (open-source or closed-source) of Ingress API, that can allow a Kubernetes cluster to handle the above listed type of traffic ?</p>
<p>Finally, other than usage of the Ingress API, is there no way to deal with the above listed traffic, even if it has performance limitations ? </p>
| <blockquote>
<p>Also, it is true that currently there is no known implementation (open-source or closed-source) of Ingress API, that can allow a Kubernetes cluster to handle the above listed type of traffic ?</p>
</blockquote>
<p>Probably, as this <a href="https://www.ibm.com/support/knowledgecenter/en/SS4U29/ha.html" rel="noreferrer">IBM study on IBM Voice Gateway "Setting up high availability"</a> illustrates:</p>
<p><a href="https://i.stack.imgur.com/2LWln.png" rel="noreferrer"><img src="https://i.stack.imgur.com/2LWln.png" alt="https://www.ibm.com/support/knowledgecenter/SS4U29/images/ha.png"></a></p>
<p>(here with <a href="https://en.wikipedia.org/wiki/Session_Initiation_Protocol" rel="noreferrer">SIPs (Session Initiation Protocol)</a>, <a href="https://www.opensips.org/About/About" rel="noreferrer">like OpenSIPS</a>)</p>
<blockquote>
<h2>Kubernetes deployments</h2>
<p>In Kubernetes terminology, a single voice gateway instance equates to a single pod, which contains both a SIP Orchestrator container and a Media Relay container.<br>
The voice gateway pods are installed into a Kubernetes cluster that is fronted by an external SIP load balancer.<br>
Through Kubernetes, a voice gateway pod can be scheduled to run on a cluster of VMs. The framework also monitors pods and can be configured to automatically restart a voice gateway pod if a failure is detected.</p>
<p>Note: <strong>Because auto-scaling and auto-discovery of new pods by a SIP load balancer in Kubernetes are not currently supported, an external SIP load balancer is required</strong>. </p>
</blockquote>
<p>And, to illustrate Kubernetes limitations:</p>
<blockquote>
<p>Running IBM Voice Gateway in a Kubernetes environment requires special considerations beyond the deployment of a typical HTTP-based application because of the protocols that the voice gateway uses. </p>
<p><strong>The voice gateway relies on the SIP protocol for call signaling and the RTP protocol for media, which both require affinity to a specific voice gateway instance. To avoid breaking session affinity, the Kubernetes ingress router must be bypassed for these protocols</strong>.</p>
<p><strong>To work around the limitations of the ingress router, the voice gateway containers must be configured in host network mode.</strong><br>
In host network mode, when a port is opened in either of the voice gateway containers, those identical ports are also opened and mapped on the base virtual machine or node.<br>
This configuration also eliminates the need to define media port ranges in the kubectl configuration file, which is not currently supported by Kubernetes. Deploying only one pod per node in host network mode ensures that the SIP and media ports are opened on the host VM and are visible to the SIP load balancer.</p>
</blockquote>
<hr>
<p>That network configuration put in place for Kubernetes is best illustrated in <a href="https://stackoverflow.com/a/48104579/6309">this answer</a>, which describes the elements involved in pod/node-communication:</p>
<p><a href="https://i.stack.imgur.com/APpSf.png" rel="noreferrer"><img src="https://i.stack.imgur.com/APpSf.png" alt="https://i.stack.imgur.com/APpSf.png"></a></p>
|
<p>How can I isolate multicast traffic in one namespace in kubernetes. Ingress policy didn't solve this, I can capture multicast traffic from different namespace.</p>
| <p>Usually, you can use <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/" rel="nofollow noreferrer">Network Policy</a> to manage it. </p>
<p>It looks like the only network provider that currently supports both Network Policy and multicast is Weave.</p>
<p>But, based on its <a href="https://github.com/weaveworks/weave/blob/master/site/kubernetes/kube-addon.md#network-policy" rel="nofollow noreferrer">documentation</a>, it is not possible to manage multicast rules:</p>
<blockquote>
<p>As of version 1.9 of Weave Net, the Network Policy Controller allows all multicast traffic. Since a single multicast address may be used by multiple pods, we cannot implement rules to isolate them individually. You can turn this behaviour off (block all multicast traffic) by adding --allow-mcast=false as an argument to weave-npc in the YAML configuration.</p>
</blockquote>
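<p>As a sketch only (the exact manifest layout depends on your Weave Net version), the flag from the quote above would go into the <code>weave-npc</code> container arguments in the <code>weave-net</code> DaemonSet, roughly like this:</p>
<pre><code>      containers:
      # ... other weave-net containers ...
      - name: weave-npc
        image: weaveworks/weave-npc:2.3.0   # the version tag here is an assumption
        args:
        - --allow-mcast=false               # block all multicast traffic
</code></pre>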
|
<p>I have created a kubernetes cluster where I have a master node and two worker nodes. I initialised master node using below command </p>
<pre><code>sudo kubeadm init --token-ttl=0 --apiserver-advertise-address=192.168.0.27
</code></pre>
<p><code>192.168.0.27</code> is the ip address of master node. Then I used the generated token to start my worker nodes. Now the problem is that my network is in DHCP and the ip address changes sometime due to which it starts showing below error:</p>
<pre><code>Unable to connect to the server: dial tcp 192.168.0.27:6443: getsockopt: no route to host
</code></pre>
<p>It shows above error because at the time of initializing the master node, I have used the ip address and after the ip address changes, its not able to access it.</p>
<p>Is it possible to configure master and other nodes in some way so that they can work regardless of any ip address change.</p>
<p>Thanks</p>
| <p>As @Suresh Vishnoi mentioned, it is not possible to set a DNS name in the current stable versions of Kubernetes because of the <a href="https://github.com/kubernetes/kubernetes/blob/d7cadf5d180277cfed7fd57d1e1a125c538bd751/cmd/kubeadm/app/util/endpoint.go#L41" rel="nofollow noreferrer">implementation</a>. </p>
<p>But a merge request with that feature - a new key for a DNS name instead of an IP address - has already been <a href="https://github.com/kubernetes/kubernetes/commit/d7cadf5d180277cfed7fd57d1e1a125c538bd751" rel="nofollow noreferrer">merged</a> into the Kubernetes master branch and is available from version <code>v1.10.0-beta.4</code>.</p>
<p>In your case, it is not possible to use a DNS name for discovery, but you can set up your DHCP server to associate a fixed IP address from the DHCP pool with the MAC address of your master. That lets you keep using all the features of DHCP while the address of your master always stays the same. </p>
<p>You can configure the standard Linux <code>dhcpd</code> DHCP server like this (replace the MAC address and IP with the ones you need):</p>
<pre><code>host KubeMaster {
  hardware ethernet 00:1F:6A:21:71:3F;
  fixed-address 10.0.0.101;
}
</code></pre>
<p>If you using any router or different OS for your DHCP server, then please check their documentation.</p>
|
<p>I am trying to deploy a helm chart which uses <code>PersistentVolumeClaim</code> and <code>StorageClass</code> to dynamically provision the required sotrage. This works as expected, but I can't find any configuration which allows a workflow like</p>
<pre><code>helm delete xxx
# Make some changes and repackage chart
helm install --replace xxx
</code></pre>
<p>I don't want to run the release constantly, and I want to reuse the storage in deployments in the future.</p>
<p>Setting the storage class to <code>reclaimPolicy: Retain</code> keeps the disks, but helm will delete the PVC and orphan them. Annotating the PVC's so that helm does not delete them fixes this problem, but then running install causes the error</p>
<pre><code>Error: release xxx failed: persistentvolumeclaims "xxx-xxx-storage" already exists
</code></pre>
<p>I think I have misunderstood something fundamental to managing releases in helm. Perhaps the volumes should not be created in the chart at all.</p>
| <p>A <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims" rel="nofollow noreferrer">PersistentVolumeClaim</a> just creates a mapping between your actual <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer">PersistentVolume</a> and your pod.</p>
<p>Using the <code>"helm.sh/resource-policy": keep</code> annotation for a PV is not the best idea, because of this remark in the <a href="https://github.com/kubernetes/helm/blob/master/docs/charts_tips_and_tricks.md#tell-tiller-not-to-delete-a-resource" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>The annotation "helm.sh/resource-policy": keep instructs Tiller to skip this resource during a helm delete operation. However, this resource becomes orphaned. Helm will no longer manage it in any way. This can lead to problems if using helm install --replace on a release that has already been deleted, but has kept resources.</p>
</blockquote>
<p>If you create a PV manually and then delete your release, Helm will remove the PVC, the PV will be marked as "Available", and the next deployment will reuse it. Actually, you don't need to keep your PVC in the cluster to keep your data. But to make it always use the same PV, you need to use <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="nofollow noreferrer">labels and selectors</a>.</p>
<p>For keep and reuse volumes you can:</p>
<ol>
<li>Create a PersistentVolume with a label, for example <code>for_app=my-app</code>, and set the "Retain" policy for that volume like this:</li>
</ol>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: myappvolume
namespace: my-app
labels:
for_app: my-app
spec:
persistentVolumeReclaimPolicy: Retain
capacity:
storage: 5Gi
  accessModes:
    - ReadWriteOnce
  # a volume source is required here; hostPath is only an assumption -
  # replace it with your actual storage backend (NFS, iSCSI, cloud disk, etc.)
  hostPath:
    path: /mnt/data/my-app
</code></pre>
<ol start="2">
<li>Modify your PersistentVolumeClaim configuration in Helm. You need to add a selector so that only PersistentVolumes with the label <code>for_app=my-app</code> are used.</li>
</ol>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: myappvolumeclaim
namespace: my-app
spec:
selector:
matchLabels:
for_app: my-app
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
</code></pre>
<p>So, now your application will use the same volume each time when it started.</p>
<p>But please keep in mind that you may need to use selectors for other apps in the same namespace to prevent them from using your PV.</p>
|
<p>Moving from VMs to Kubernetes.</p>
<p>We are running our services on multiple VMs. Services are running on multiple VMs and have VIP in front of them. Clients will be accessing VIP and VIP will be routing traffic to services. Here, we use SSL cert for VIP and VIP to VM also using HTTPS. </p>
<p>Here the service will be deployed into VM with a JKS file. This JKS file will have a cert for exposing HTTPS and also to communicate with SSL enabled database.</p>
<p>How to achieve the same thing in Kubernetes cluster? Need HTTPS for VIP and services and also for communication to SSL enabled database from service.</p>
| <p>Depending on the platform where you run Kubernetes (on-premises, AWS, GKE, GCE, etc.) you have several ways to do it, but I will describe a solution which works on all platforms - an Ingress with HTTPS termination on it.</p>
<p>So, in Kubernetes you can provide access to your application inside a cluster using an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> object. It can provide load balancing, HTTPS termination, routing by path etc. In most cases, you can use an Ingress controller based on <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">Nginx</a>. It also provides TCP load balancing and SSL passthrough if you need them.</p>
<p>For providing routing from users to your services, you need:</p>
<ol>
<li>Deploy your application as a combination of <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/" rel="nofollow noreferrer">Pods</a> and <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> for them.</li>
<li>Deploy <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress controlle</a>r, which will manage your Ingress objects.</li>
<li>Create a <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">secret</a> for your certificate.</li>
<li>Create an <code>Ingress</code> object which will point to your service, with TLS settings that ask the Ingress to use your <code>secret</code> with your certificate, like this:</li>
</ol>
<pre><code>spec:
  tls:
  - hosts:
    - foo.bar.com
    secretName: foo-secret
</code></pre>
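<p>The <code>foo-secret</code> referenced above can be created from an existing certificate and key, for example (the file names are placeholders):</p>
<pre><code>kubectl create secret tls foo-secret --cert=tls.crt --key=tls.key
</code></pre>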
<ol start="5">
<li>Now, when you call the <code>foo.bar.com</code> address, the Ingress uses FQDN-based routing and provides an HTTPS connection between your client and the pods in the cluster through a <code>service</code> object, which knows where exactly your pod is. You can read how it works <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress" rel="nofollow noreferrer">here</a> and <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">here</a>.</li>
</ol>
<p>As for encrypted communication between your services inside a cluster - you can use the same scheme with <code>secrets</code> to provide SSL keys to all your services and set the Services up to use the HTTPS endpoint of an application instead of HTTP. Technically it is the same as using an <a href="https://docs.nginx.com/nginx/admin-guide/security-controls/securing-http-traffic-upstream/" rel="nofollow noreferrer">https upstream</a> in installations without Kubernetes, but all the Nginx configuration will be generated automatically based on your <code>Service</code> and <code>Ingress</code> objects.</p>
|
<p>I have one kubernetes cluster with 4 nodes and one master. I am trying to run 5 nginx pod in all nodes. Currently sometimes the scheduler runs all the pods in one machine and sometimes in different machine.</p>
<p>What happens if my node goes down and all my pods were running in same node? We need to avoid this.</p>
<p>How can I force the scheduler to run pods on the nodes in a round-robin fashion, so that if any node goes down, at least one node still has an NGINX pod in running mode?</p>
<p>Is this possible or not? If possible, how can we achieve this scenario?</p>
| <h1>Use podAntiAffinity</h1>
<p><strong>Reference: <a href="https://www.manning.com/books/kubernetes-in-action" rel="noreferrer">Kubernetes in Action Chapter 16. Advanced scheduling</a></strong></p>
<p>podAntiAffinity with <strong>requiredDuringSchedulingIgnoredDuringExecution</strong> can be used to prevent pods with the same label from being scheduled to the same hostname. If you prefer a more relaxed constraint, use <strong>preferredDuringSchedulingIgnoredDuringExecution</strong>.</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx
spec:
replicas: 5
template:
metadata:
labels:
app: nginx
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution: <---- hard requirement not to schedule "nginx" pod if already one scheduled.
- topologyKey: kubernetes.io/hostname <---- Anti affinity scope is host
labelSelector:
matchLabels:
app: nginx
      containers:
      - name: nginx
        image: nginx:latest
</code></pre>
<h1>Kubelet <a href="https://kubernetes.io/docs/reference/generated/kubelet/" rel="noreferrer">--max-pods</a></h1>
<p>You can specify the max number of pods for a node in kubelet configuration so that in the scenario of node(s) down, it will prevent K8S from saturating another nodes with pods from the failed node.</p>
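<p>As a rough example (the value is arbitrary), the limit can be passed to the kubelet as a flag, or via a kubelet configuration file if your setup uses one:</p>
<pre><code># as a kubelet startup flag
--max-pods=30

# or in a KubeletConfiguration file (if your kubelet is started with --config)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
maxPods: 30
</code></pre>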
|
<p>We presently have a setup where applications within our mesos/marathon cluster want to reach out to services which may or may not reside in our mesos/marathon cluster. Ingress for external traffic into the cluster is accomplished via an Amazon ELB sitting in front of a cluster of Traefik instances, which then chooses the appropriate set of container instances to load-balance to via the incoming HTTP Host header compared against essentially a many-to-one association of configured host headers against a particular container instance. Internal-to-internal traffic is actually handled by this same route as well, as the DNS record that is associated with a given service is mapped to that same ELB both internal to and external to our mesos/marathon cluster. We also give the ability to have multiple DNS records pointing against the same container set.</p>
<p>This setup works, but causes seemingly unnecessary network traffic and load against our ELBs as well as our Traefik cluster, as if the applications in the containers or another component were able to self-determine that the services they wished to call out to were within the specific mesos/marathon cluster they were in, and make an appropriate call to either something internal to the cluster fronting the set of containers, or directly to the specific container itself.</p>
<p>From what I understand of Kubernetes, Kubernetes provides the concept of services, which essentially can act as the front for a set of pods based on configuration for which pods the service should match over. However, I'm not entirely sure of the mechanism by which we can have applications in a Kubernetes cluster know transparently to direct network traffic to the service IPs. I <em>think</em> that some of this can be helped by having Envoy proxy traffic meant for, e.g., <code><application-name>.<cluster-name>.company.com</code> to the service name, but if we have a CNAME that maps to that previous DNS entry (say, <code><application-name>.company.com</code>), I'm not entirely sure how we can avoid exiting the cluster.</p>
<p>Is there a good way to solve for both cases? We are trying to avoid having our applications' logic have to understand that it's sitting in a particular cluster and would prefer a component outside of the applications to perform the routing appropriately.</p>
<p>If I am fundamentally misunderstanding a particular component, I would gladly appreciate correction!</p>
| <p>When you are using service-to-service communication inside a cluster, you are using the <code>Service</code> abstraction, which is something like a static point that routes traffic to the right pods. </p>
<p>A Service endpoint is available only from inside the cluster by its IP or internal DNS name, provided by the internal Kubernetes DNS server. So, for communication inside a cluster, you can use DNS names like <code><servicename>.<namespace>.svc.cluster.local</code>.</p>
<p>But, what is more important, Service has a <strong>static IP address</strong>.</p>
<p>So now you can add that static IP as a <code>hosts</code> record to the pods inside the cluster to make sure that they communicate with each other inside the cluster.</p>
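<p>You can look up that static IP for a given service (service name and namespace are placeholders) with:</p>
<pre><code>kubectl get svc my-first-internal-service -n default -o jsonpath='{.spec.clusterIP}'
</code></pre>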
<p>For that, you can use <a href="https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/" rel="nofollow noreferrer">HostAlias</a> feature. Here is an example of configuration:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: hostaliases-pod
spec:
restartPolicy: Never
hostAliases:
- ip: "10.0.1.23"
hostnames:
- "my.first.internal.service.example.com"
- ip: "10.1.2.3"
hostnames:
- "my.second.internal.service.example.com"
containers:
- name: cat-hosts
image: busybox
command:
- cat
args:
- "/etc/hosts"
</code></pre>
<p>So, if you use your internal Service IP in combination with the service's public FQDN, all traffic from your pod will stay 100% inside the cluster, because the application will use the internal IP address.</p>
<p>Also, you can use an <a href="http://blog.kubernetes.io/2017/04/configuring-private-dns-zones-upstream-nameservers-kubernetes.html" rel="nofollow noreferrer">upstream DNS</a> server which contains the same aliases; the idea stays the same.
With an upstream DNS server for a separate zone, resolving will work like this:
<a href="https://i.stack.imgur.com/UGNKZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UGNKZ.png" alt="Resolving with Upstream DNS for zone"></a></p>
<p>With newer versions of Kubernetes, which use CoreDNS to provide the DNS service and have more features, it will be a bit simpler.</p>
|
<p>So now that k8s is integrated directly with spark in 2.3 my spark submit from the console executes correctly on a kuberenetes master without any spark master pods running, spark handles all the k8s details:</p>
<pre><code>spark-submit \
--deploy-mode cluster \
--class com.app.myApp \
--master k8s://https://myCluster.com \
--conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
--conf spark.app.name=myApp \
--conf spark.executor.instances=10 \
--conf spark.kubernetes.container.image=myImage \
local:///myJar.jar
</code></pre>
<p>What I am trying to do is do a spark-submit via AWS lambda to my k8s cluster. Previously I used the command via the spark master REST API directly (without kubernetes):</p>
<pre><code>request = requests.Request(
'POST',
"http://<master-ip>:6066/v1/submissions/create",
data=json.dumps(parameters))
prepared = request.prepare()
session = requests.Session()
response = session.send(prepared)
</code></pre>
<p>And it worked. Now I want to integrate Kubernetes and do it similarly where I submit an API request to my kubernetes cluster from python and have spark handle all the k8s details, ideally something like:</p>
<pre><code>request = requests.Request(
'POST',
"k8s://https://myK8scluster.com:443",
data=json.dumps(parameters))
</code></pre>
<p>Is it possible in the Spark 2.3/Kubernetes integration?</p>
| <p>I'm afraid that is impossible for Spark 2.3 if you are using native Kubernetes support.</p>
<p>Based on the description in the <a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html" rel="nofollow noreferrer">deployment instructions</a>, the submission process contains several steps:</p>
<ol>
<li>Spark creates a Spark driver running within a Kubernetes pod.</li>
<li>The driver creates executors which are also running within Kubernetes pods and connects to them, and executes application code.</li>
<li>When the application completes, the executor pods terminate and are cleaned up, but the driver pod persists logs and remains in “completed” state in the Kubernetes API until it’s eventually garbage collected or manually cleaned up.</li>
</ol>
<p>So, in fact, you have nowhere to submit a job to until you start the submission process, which will launch the first Spark pod (the driver) for you. And once the application completes, everything is terminated.</p>
<p>Because running a fat container on AWS Lambda is not the best solution, and also because there is no straightforward way to run arbitrary commands in the Lambda container itself (it is possible, but only with a hack; here is a <a href="https://github.com/jacov/lambda-bash" rel="nofollow noreferrer">blueprint</a> about executing Bash inside an AWS Lambda), the simplest way is to write a small custom service which runs on a machine outside of AWS Lambda and provides a REST interface between your application and the <code>spark-submit</code> utility. I don't see any other way to do it without pain.</p>
|
<p>I'm using docker on a bare metal server. I'm pretty happy with docker-compose to configure and setup applications.</p>
<p>Still some features are missing, like configuration management and monitoring maybe there are other solutions to solve this issues but I'm a bit overwhelmed by the feature set of Kubernetes and can't judge if it would help me here.</p>
<p>I'm also open for recommendations to solve the requirements separately:</p>
<ul>
<li>Configuration / Secret management</li>
<li>Monitoring of my docker hostes applications (e.g. having some kind of dashboard)</li>
<li>Remot container control (SSH is okay with only one Server)</li>
<li>Being ready to scale my environment (based on multiple different Dockerized applications) to more than one server in future - already thinking about networking/service discovery issues with a pure docker-compose setup</li>
</ul>
<p>I'm sure Kubernetes covers some of these features, but I have the feeling that it's too much focused on Cloud platforms where Machines are created on the fly (since I only have at most few bare metal Servers)</p>
<p>I hope the questions scope is not too broad, else please use the comment section and help me to narrow down the question.</p>
<p>Thanks.</p>
| <p>I think Kubernetes absolutely matches your requirements and it is what you need.</p>
<p>Let's start one by one.</p>
<blockquote>
<p>I have the feeling that it's too much focused on Cloud platforms where Machines are created on the fly (since I only have at most few bare metal Servers)</p>
</blockquote>
<p>No, it is not focused on clouds. Kubernetes can be installed on almost any bare-metal platform (including ARM) and there are many <a href="https://kubernetes.io/docs/setup/pick-right-solution/#bare-metal" rel="nofollow noreferrer">tools and instructions</a> which can help you to do it. Also, it is easy to deploy it on your local PC using <a href="https://github.com/kubernetes/minikube" rel="nofollow noreferrer">Minikube</a>, which will prepare a local cluster for you within VMs or right in your OS (Linux only).</p>
<blockquote>
<p>Configuration / Secret management</p>
</blockquote>
<p>Kubernetes has powerful configuration and secret management based on special objects which can be attached to your containers. You can read more about configuration management in <a href="http://blog.kubernetes.io/2016/04/configuration-management-with-containers.html" rel="nofollow noreferrer">this</a> article.</p>
<p>Moreover, tools like <a href="https://helm.sh" rel="nofollow noreferrer">Helm</a> can provide you with more automation and a range of preconfigured applications, which you can install using a single command. And you can prepare your own <a href="https://github.com/kubernetes/helm/blob/master/docs/charts.md" rel="nofollow noreferrer">charts</a> for it.</p>
<blockquote>
<p>Monitoring of my docker hostes applications (e.g. having some kind of dashboard)</p>
</blockquote>
<p>Kubernetes has its own <a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/" rel="nofollow noreferrer">dashboard</a> where you can get many kinds of information: current applications status, configuration, statistics and many more. Also, Kubernetes has great integration with <a href="https://github.com/kubernetes/heapster" rel="nofollow noreferrer">Heapster</a> which can be used with <a href="https://grafana.com" rel="nofollow noreferrer">Grafana</a> for powerful visualization of almost anything.</p>
<blockquote>
<p>Remot container control (SSH is okay with only one Server)</p>
</blockquote>
<p>The Kubernetes <a href="https://kubernetes.io/docs/reference/kubectl/overview/" rel="nofollow noreferrer">control tool</a> <code>kubectl</code> can get logs and connect to containers in the cluster without any problems. As an example, to connect to a container "myapp" you just need to call <code>kubectl exec -it myapp sh</code>, and you will get an <code>sh</code> session in the container. Also, you can reach any application inside your cluster using the <code>kubectl proxy</code> or <code>kubectl port-forward</code> commands, which forward the port you need to your PC.</p>
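<p>For example (pod name and ports are placeholders):</p>
<pre><code># open a shell inside a container
kubectl exec -it myapp sh

# forward local port 8080 to port 80 of the pod
kubectl port-forward myapp 8080:80
</code></pre>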
<blockquote>
<p>Being ready to scale my environment (based on multiple different Dockerized applications) to more than one server in future - already thinking about networking/service discovery issues with a pure docker-compose setup</p>
</blockquote>
<p>Kubernetes can be scaled up to thousands of nodes, or it can have only one - it is your choice. Independent of the cluster size, you will get production-grade networking, service discovery and load balancing. </p>
<p>So, don't be afraid, just <a href="https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/" rel="nofollow noreferrer">try to use it locally</a> with Minikube. It will make many operational tasks simpler, not more complex.</p>
|
<p>I running Kubernetes cluster on premises, initialized using KubeAdm. I configured <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/#flannel" rel="noreferrer">flannel</a> networking plugin.</p>
<p>When I exposing service as a NodePort, I'm not able to receive external IP. What do I miss?</p>
<p><a href="https://i.stack.imgur.com/TRjmN.png" rel="noreferrer"><img src="https://i.stack.imgur.com/TRjmN.png" alt="enter image description here"></a></p>
<p>My deployment yaml looks as the following:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: testapp
labels:
run: testapp
spec:
type: NodePort
ports:
- port: 8080
targetPort: 80
protocol: TCP
name: http
- port: 443
protocol: TCP
name: https
selector:
run: testapp
---------
apiVersion: apps/v1
kind: Deployment
metadata:
name: testapp
spec:
selector:
matchLabels:
run: testapp
replicas: 2
template:
metadata:
name: testapp
labels:
run: testapp
spec:
containers:
- image: [omitted]
name: testapp
ports:
- containerPort: 80
livenessProbe:
httpGet:
path: /api/health
port: 80
</code></pre>
<p><strong>Environment details:</strong></p>
<p>Kubernetes version:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T12:22:21Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Running Ubuntu Server 16.04 on vSphere VM (on-prem).</p>
| <p>You will not get an external IP when exposing a service as a NodePort. Exposing a Service on a NodePort means that your service will be available externally via the Node IP of any node in the cluster at a random port between 30000-32767 (default behaviour). </p>
<p>In your case, the port on which your service is exposed is port 31727. </p>
<p>Each of the nodes in the cluster proxies that port (the same port number on every Node) into the pod where your service is launched.</p>
<p>The easiest way to see this is using</p>
<pre><code>kubectl describe service <service-name>
</code></pre>
<p>Check the NodePort details in the output above. </p>
<p>Then get the IP of any of the nodes in the cluster using</p>
<pre><code>kubectl get nodes -o wide
</code></pre>
<p>You can now access your service externally using <code><Node-IP>:<Node-Port></code></p>
<p>Additionally, if you want a fixed Node port, you can specify that in the yaml. </p>
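<p>For example, a hedged sketch based on your service definition - adding an explicit <code>nodePort</code> (the value just has to fall into the 30000-32767 range):</p>
<pre><code>spec:
  type: NodePort
  ports:
  - name: http
    port: 8080
    targetPort: 80
    nodePort: 31727
    protocol: TCP
</code></pre>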
<p>PS: Just make sure you add a security rule on your nodes to allow traffic on the particular port.</p>
|
<p>I have the following service... </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mongo
labels:
app: mongo
spec:
ports:
- name: mongo
port: 27017
clusterIP: None
selector:
app: mongo
</code></pre>
<p>And the following stateful set... </p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongo
spec:
serviceName: mongo
selector:
matchLabels:
app: mongo
replicas: 3
template:
metadata:
labels:
app: mongo
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mongo
image: mongo
command:
- mongod
- "--replSet"
- rs0
- "--smallfiles"
- "--noprealloc"
ports:
- name: mongo
containerPort: 27017
- name: mongo-sidecar
image: cvallance/mongo-k8s-sidecar
env:
- name: "VERSION"
value: "2"
- name: MONGO_SIDECAR_POD_LABELS
value: "role=mongo"
</code></pre>
<p>However, my code can't make a connection, using the url <code>mongo:27017</code>. I've tried connecting to <code>mongo</code>, <code>mongo-0.mongo:27017</code>, loads of others. If I exec into a container and run <code>$ nslookup mongo</code> I get... </p>
<pre><code>Name: mongo
Address 1: 10.1.0.80 mongo-0.mongo.default.svc.cluster.local
Address 2: 10.1.0.81 mongo-1.mongo.default.svc.cluster.local
Address 3: 10.1.0.82 mongo-2.mongo.default.svc.cluster.local
</code></pre>
<p>Hitting <code>$ curl mongo:27017</code> or <code>$ telnet mongo 27017</code> gives me a connection refused error. </p>
| <p>Add <code>bind_ip</code> to command:</p>
<pre><code> command:
- mongod
- "--replSet"
- rs0
- "--smallfiles"
- "--noprealloc"
- "--bind_ip"
- "0.0.0.0"
</code></pre>
<p>This <a href="https://docs.mongodb.com/manual/reference/program/mongod/#core-options" rel="nofollow noreferrer">option</a> tells the mongodb daemon to listen on all IPv4 addresses, rather than the default of <code>localhost</code>.</p>
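<p>To verify the fix from inside the cluster, you could start a throwaway client pod (the image and host name are assumptions based on your manifests) and connect to one of the members:</p>
<pre><code>kubectl run mongo-client --rm -it --image=mongo --restart=Never -- \
  mongo --host mongo-0.mongo.default.svc.cluster.local --port 27017
</code></pre>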
|
<p>As far as my understanding goes, the Kubernetes engine is meant for deploying applications that can be load balanced, for example, having an application which unhashes a string. If pod-a is on high load, it would be offloaded to pod-b. Correct me if I am wrong here, since if this is false, my following question will not make sense.</p>
<hr>
<p>After exploring it for few hours I can't seem to figure out how to deploy a C++ application to the Kubernetes cluster. How would I do so?</p>
<p>What I tried:</p>
<p>I tried to follow the guide: <em><a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/deploy-interactive/" rel="noreferrer">Interactive Tutorial - Deploying an App</a></em>, however, I couldn't understand how I would get my C++ app as an image that could be deployed.</p>
<p>What the C++ application is:</p>
<p>At the moment it proxies TCP traffic to another HOST designated by clients' HOSTNAME. It is pretty much a reverse proxy, however, this is NOT an HTTP application.</p>
| <p>Is Kubernetes the right choice?</p>
<p>-</p>
<p><a href="https://www.infoworld.com/article/3173266/containers/4-reasons-you-should-use-kubernetes.html" rel="noreferrer">Kubernetes is really useful</a> to loadbalance workloads, to provide high availability in case of failure to speed up test processes, and to increase safety during production rollout through different strategies and increase security through segregation.</p>
<p><strong>However, not all kinds of workloads can take advantage of all the features introduced by Kubernetes</strong>.</p>
<ul>
<li>For example, if your application is built in such a way it needs a stable amount of RAM and CPU, the code as well is really stable and you need merely one replica, then maybe Kubernetes and containers are not the best choice (even if you can perfectly use them), and you should rather implement everything on a big monolithic server/virtual machine.</li>
</ul>
<p>But if you need to deploy it on a different cloud provider, and it should run merely some hours every day, maybe then it can make use as well of those features. <strong>If you are willing to add a layer, make sure that you need the features it introduces, otherwise it would be merely an overhead</strong>.</p>
<p>Note that <strong>Kubernetes is not capable of splitting your workload on its own</strong>. Therefore, regarding what you mean by "<em>If pod-a is on high load, it would be offloaded to pod-b</em>" - yes, it is likely possible, but you have to instruct it to do so.</p>
<p>Kubernetes takes care of running your POD, making sure it has been scheduled on nodes where enough memory and CPU are available <a href="https://kubernetes.io/docs/tasks/administer-cluster/memory-default-namespace/" rel="noreferrer">according to your specification</a>; you can set up <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="noreferrer">autoscaling procedures</a> as well to support high workload periods or even to <a href="https://kubernetes.io/docs/tasks/administer-cluster/cluster-management/#cluster-autoscaling" rel="noreferrer">scale the cluster</a> itself. Your application should be designed in such a way as to support a divide-and-conquer pattern; otherwise you will likely end up with three nodes, one pod running on one node, two idle, and an overhead that you could have avoided.</p>
<ul>
<li>If your C++ application POD unhashes strings and a single request could consume all the resources of a node, <strong>Kubernetes will not "split" the initial workload and will not spawn more PODs for you, scheduling them across the cluster!</strong> Of course you can achieve something similar, but it will not come for free and you will likely need to modify your C++ code.</li>
</ul>
<p>For sure you can take advantage of Kubernetes, running your application on it is pretty easy, but maybe you will have to modify something in the architecture to fully make advantage of those features.</p>
<hr>
<h2>Deploy the C++ application</h2>
<p>The process to deploy your application in Kubernetes is pretty standard. Develop it locally, create a Docker image with all the libraries and components you need, test it locally, push it to the registry, and create the deployment in Kubernetes.</p>
<p>Let's say that you have all the resources needed to run your application and your executable file in a local folder. Create the Docker file.</p>
<p><a href="https://www.howtoforge.com/tutorial/how-to-create-docker-images-with-dockerfile/" rel="noreferrer">Example</a>, modify to implement your application, I have reported it as an example to show syntax:</p>
<pre><code># Download base image, Ubuntu 16.04 (Xenial Xerus)
FROM ubuntu:16.04
# Update software repository
RUN apt-get update
# Install nginx, php-fpm and supervisord from the Ubuntu repository
RUN apt-get install -y nginx php7.0-fpm supervisor
# Define the environment variable
ENV nginx_vhost /etc/nginx/sites-available/default
[...]
# Enable php-fpm on the nginx virtualhost configuration
COPY default ${nginx_vhost}
[...]
RUN chown -R www-data:www-data /var/www/html
# Volume configuration
VOLUME ["/etc/nginx/sites-enabled", "/etc/nginx/certs", "/etc/nginx/conf.d", "/var/log/nginx", "/var/www/html"]
# Configure services and port
COPY start.sh /start.sh
CMD ["./start.sh"]
EXPOSE 80 443
</code></pre>
<p>Built it running:</p>
<pre><code>export PROJECT_ID="$(gcloud config get-value project -q)"
docker build -t gcr.io/${PROJECT_ID}/hello-app:v1 .
gcloud docker -- push gcr.io/${PROJECT_ID}/hello-app:v1
kubectl run hello --image=gcr.io/${PROJECT_ID}/hello-app:v1 --port [port number if needed]
</code></pre>
<p><a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app" rel="noreferrer">More information is here</a>.</p>
|
<p>We are attempting to run Elasticsearch on top of a kubernetes / flannel / coreos cluster.</p>
<p>As flannel <a href="https://github.com/coreos/flannel/issues/52" rel="nofollow">does not support multicast</a>, we cannot use Zen multicast discovery to allow the nodes to find each other, form a cluster and communicate.</p>
<p>Short of hard-coding the IP addresses of all the kubernetes nodes into the ES-config-file, is there another method we can utilise to assist in discovery? Possibly using etcd2 or some other kubernetes-compatible discovery service?</p>
| <p>Version 6.2.0 supports Kubernetes auto discovery.</p>
<p>Update your <code>elasticsearch.yml</code> as follows:</p>
<pre><code>discovery.zen.ping.unicast.hosts: "kubernetes service name"
</code></pre>
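<p>A common pattern (shown here only as a sketch - the names, labels and port are assumptions) is to point that setting at a headless Kubernetes Service, so DNS resolves to the individual Elasticsearch pod IPs:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-discovery
spec:
  clusterIP: None          # headless: DNS returns the pod IPs directly
  selector:
    app: elasticsearch
  ports:
  - name: transport
    port: 9300
</code></pre>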
|
<p>guys,</p>
<p>For various projects, I'm creating single Docker environments. Each Docker container consists of Debian, Nginx, Node.js, etc. and is going to be used by developers as well as in production via Google Cloud's Kubernetes. Since the Node.js/module versions should be the same everywhere, I would like to restrict the access to certain <strong>npm</strong> commands (somehow). Often developers work with different Node.js and project modules and that caused a lot of trouble in the past. With the Docker containers, I can provide environments with everything you need for a project. To finish this step, I would like to restrict npm command execution and only allow arguments like <strong>install</strong>, <strong>test</strong>, etc. </p>
<p>Please drop me a comment if you know how to resolve this :)</p>
<p>Cheers</p>
| <p>It is almost impossible to limit your developers to running only certain commands in the container if they have access to the Dockerfiles and can somehow change the build flow. </p>
<p>But, because containers provide isolation and you can build a custom container for each application based on your base image, it is not a big problem if the version of some package for one application gets changed somehow, for example in a build step, because it will not affect other apps. They just have different containers. </p>
<p>So, you will not have the compatibility problems you get when you use one server with many applications sharing the same environment.</p>
<p>The only thing you need to do is make sure that nobody changes the container which you use as a base image.</p>
|
<p>I am using the following version:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T11:52:23Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T11:40:06Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Here, I am trying to authenticate a user that makes use of x509 certs using the below custom script that I created by looking into a few online forums and the kubernetes docs. </p>
<pre><code>#!/bin/bash
cluster=test-operations-k8
namespace=demo
username=jack
openssl genrsa -out $username.pem 2048
openssl req -new -key $username.pem -out $username.csr -subj "/CN=$username"
cat <<EOF | kubectl create -n $namespace -f -
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
name: user-request-$username
spec:
groups:
- system:authenticated
request: $(cat $username.csr | base64 | tr -d '\n')
usages:
- digital signature
- key encipherment
- server auth
EOF
kubectl certificate approve user-request-$username
kubectl get csr user-request-$username -o jsonpath='{.status.certificate}' | base64 -d > $username.crt
kubectl --kubeconfig ~/.kube/config-$username config set-cluster $cluster --insecure-skip-tls-verify=true --server=https://$cluster.eastus.cloudapp.azure.com
kubectl --kubeconfig ~/.kube/config-$username config set-credentials $username --client-certificate=$username.crt --client-key=$username.pem --embed-certs=true
kubectl --kubeconfig ~/.kube/config-$username config set-context $cluster --cluster=$cluster --user=$username
kubectl --kubeconfig ~/.kube/config-$username config use-context $cluster
echo "Config file for $username has been created successfully !"
</code></pre>
<p>But while getting resources I get the below error:</p>
<pre><code>error: You must be logged in to the server (Unauthorized)
</code></pre>
<p>Can someone please advise what needs to be done to fix this issue ?</p>
<p>Also please note the appropriate roles and rolebindings have also been created which I have not listed out here.</p>
| <p>Make sure the CA used to sign the CSRs (the <code>--cluster-signing-cert-file</code> file given to kube-controller-manager) is in the <code>--client-ca-file</code> bundle given to kube-apiserver (which is what authenticates client certs presented to the apiserver)</p>
<p>Also ensure the certificate requested is a client certificate (has <code>client auth</code> in the <code>usages</code> field)</p>
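<p>In the script above that means changing the <code>usages</code> list of the CSR object, for example:</p>
<pre><code>  usages:
  - digital signature
  - key encipherment
  - client auth
</code></pre>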
|
<p>I'm moving to <code>kubernetes</code> using <code>traefik</code> as my Ingress Controller.</p>
<p>I have a single backend that should respond to 3000+ websites. Depending on the host, I need to add a custom header to the request before proxy passing it to the backend.</p>
<p>I can use the <code>ingress.kubernetes.io/custom-request-headers</code> annotation to add a custom header to the request but it's an annotation for the whole Ingress, so I would need to create 3000+ Ingresses, one for each website.</p>
<p>Is there another way to do this? Creating 3000+ Ingresses is the same thing as creating one Ingress with 3000+ rules?</p>
| <p>Yes, you need to create one Ingress object per host if you want different headers per host.</p>
<p>You can do it by Traefik:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: traeffic-custom-request-header
annotations:
ingress.kubernetes.io/custom-request-headers: "mycustomheader: myheadervalue"
spec:
rules:
- host: custom.configuration.com
http:
paths:
- backend:
serviceName: http-svc
servicePort: 80
path: /
</code></pre>
<p>Also, the same thing you can do by <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">Nginx</a> Ingress Controller.</p>
<p>It has the support for <code>configuration snipper</code>. <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/customization/configuration-snippets/ingress.yaml" rel="nofollow noreferrer">Here</a> is an example of using it to set a custom header per Ingress object:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nginx-configuration-snippet
annotations:
nginx.ingress.kubernetes.io/configuration-snippet: |
more_set_headers "Request-Id: $request_id";
spec:
rules:
- host: custom.configuration.com
http:
paths:
- backend:
serviceName: http-svc
servicePort: 80
path: /
</code></pre>
<p>BTW, you can use several different ingress controllers in your cluster, so you do not need to migrate everything to only one type of Ingress.</p>
|
<p><strong>Objective</strong></p>
<p>I want to access the redis database in kubernetes, from a function inside ibm functions using javascript.</p>
<p><strong>Question</strong></p>
<p>How do I get the right URI, when redis is running on a Pod in Kubernetes?</p>
<p><strong>Situation</strong></p>
<p>I used this sample to setup the redis database in kubernetes <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/#creating-a-pod-to-run-a-redis-server" rel="nofollow noreferrer">This is the link to the sample in Kubernetes</a>
I run Kuberentes inside IBM Cloud.</p>
<p><strong>Findings</strong></p>
<p>I was not able to find an answer to my question in the <a href="https://redis.io/documentation" rel="nofollow noreferrer">redis documentation</a>. </p>
<p>As far as I understand, by default no password is configured.
Is this assumption right?</p>
<pre><code>redis://[USER]:[PASSWORD]@[CLUSTER-PUBLIC-IP]:[PORT]
</code></pre>
<p>Thanks for the help ... I know this is maybe a too simple question, but currently I do not see the tree in the woods ;-)</p>
| <blockquote>
<p>As far as I understand by default no password configured.</p>
</blockquote>
<p>Yes, there is no default password in that image with Redis, you are right.</p>
<p>If you follow the instructions you mentioned, you will use <code>kubectl</code> port forwarding, which forwards the port of your Redis pod in the cluster to your local machine when you call <code>kubectl port-forward redis-master 6379:6379</code>.</p>
<p>So in that case, Redis will be available on <code>redis://localhost:6379</code> on your PC.</p>
<p>If you want to make it available directly from outside of the cluster, you need to create a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="noreferrer">Service with NodePort</a>, a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer" rel="noreferrer">Service with LoadBalancer</a> (if you are in a Cloud) or simply a <a href="https://kubernetes.io/docs/concepts/services-networking/service" rel="noreferrer">Service</a> with an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="noreferrer">Ingress</a>.</p>
<p>Inside a cluster, you can create Service with Cluster IP (which is actually simply Service, because it always has Cluster IP) for your Redis pod and will be available on:</p>
<p><code>redis://[USER]:[PASSWORD]@[SERVICE-IP]:[PORT]</code></p>
<p>Here is a good official documentation about <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/" rel="noreferrer">connecting applications with service</a>.</p>
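<p>A minimal sketch of such a Service for the Redis pod from that guide (the selector labels are an assumption - they must match the labels on your Redis pod):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: redis-master
spec:
  selector:
    app: redis          # must match your Redis pod labels
  ports:
  - port: 6379
    targetPort: 6379
</code></pre>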
|
<p>I would like to add the docker command <code>--user $(id -u):$(id -g)</code> to my <code>k8s</code>deployment definition. What is the equivalent for that in k8s? </p>
<p>args or command?</p>
<p>How the container gets started normally: </p>
<pre><code>docker run -d -p 5901:5901 -p 6901:6901 --user $(id -u):$(id -g) khwhahn/daedalus:0.1
k8s deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose --file docker-compose.yaml convert
kompose.version: 1.10.0 (8bb0907)
creationTimestamp: null
labels:
io.kompose.service: daedalus
name: daedalus
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
creationTimestamp: null
labels:
io.kompose.service: daedalus
spec:
containers:
- env:
- name: DISPLAY
image: khwhahn/daedalus:0.1
imagePullPolicy: Always
ports:
- containerPort: 5901
name: vnc
protocol: TCP
- containerPort: 6901
name: http
protocol: TCP
livenessProbe:
httpGet:
path: /
port: 6901
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
readinessProbe:
httpGet:
path: /
port: 6901
scheme: HTTP
initialDelaySeconds: 10
timeoutSeconds: 1
periodSeconds: 10
successThreshold: 1
failureThreshold: 3
name: daedalus
resources: {}
volumeMounts:
- mountPath: /tmp/.X11-unix
name: daedalus-claim0
- mountPath: /home/daedalus/daedalus/tls
name: cardano-tls
restartPolicy: Always
volumes:
- name: daedalus-claim0
persistentVolumeClaim:
claimName: daedalus-claim0
- name: cardano-tls
persistentVolumeClaim:
claimName: cardano-tls
status: {}
</code></pre>
<p>Thanks</p>
| <p>That was requested initially in <a href="https://github.com/kubernetes/kubernetes/issues/22179" rel="nofollow noreferrer">kubernetes issue 22179</a>.</p>
<p>Implemented partially in:</p>
<ul>
<li><a href="https://github.com/kubernetes/kubernetes/pull/52077" rel="nofollow noreferrer">PR 52077: "API Changes for RunAsGroup"</a>,</li>
<li><a href="https://github.com/kubernetes/community/pull/756" rel="nofollow noreferrer">PR 756: "Allow specifying the primary group id of the container "</a></li>
</ul>
<blockquote>
<p><code>PodSecurityContext</code> allows Kubernetes users to specify RunAsUser which can be overriden by RunAsUser in SecurityContext on a per Container basis.</p>
<p>Introduce a new API field in SecurityContext and PodSecurityContext called <code>RunAsGroup</code>.</p>
</blockquote>
<p>See "<a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">Configure a Security Context for a Pod or Container</a>".</p>
|
<p>I want to start two maria-db pod with same persistent storage and any point of time I should be able to access both the instance and data should be in sync between them. </p>
<p>I am trying to start two mariadb instance using same volume persistent storage in kubernetes. I am able to start both the instance. I am performing the below steps.</p>
<ol>
<li>Creating a persistent volume</li>
<li>Creating a persistent volume claim</li>
<li>Using the same claim name starting mariadb-instance-1.</li>
<li>Starting mariadb-instance-2 using same storage claim name.</li>
<li>Creating two services for both the instance to access from outside.</li>
</ol>
<p>I am able to access instance-1 but when I am trying to access instance-2 its giving me error. MySQL Error: Can’t connect to local MySQL server through socket ‘/var/run/mysqld/mysqld.sock’.</p>
<p>Please find the attached dockerfiles.</p>
<p>Any help will be appreciated.</p>
<p>Please find the below git repo for db and storage yaml file which I used to create the deployment.</p>
<p><a href="https://github.com/chandan493/db-as-docker" rel="nofollow noreferrer">https://github.com/chandan493/db-as-docker</a></p>
| <p>You can not run two MariaDB engines on the same storage, and if I understood you right this is what you expected. Even if you'd mount an RWX volume on two pods, if you point /var/lib/mysql of the containers in two separate MariaDB pods at the same place, it will result in a conflict between the database engines. For MariaDB clustering look up <code>MariaDB Galera</code> - an almost-fully-synchronous replication for MariaDB. But you'll need three db engines running for it to make sense.</p>
|
<p>We got OOMKilled event on our K8s pods. We want in case of such event to run Native memory analysis command BEFORE the pod is evicted. Is it possible to add such a hook?</p>
<p>Being more specific: we run with <code>-XX:NativeMemoryTracking=summary</code> JVM flag. We want to run <code>jcmd <pid> VM.native_memory summary.diff</code> just BEFORE pod eviction to see what causes OOM.</p>
| <p>Looks like it is almost impossible to handle.</p>
<p>Based on an <a href="https://github.com/kubernetes/kubernetes/issues/40157" rel="noreferrer">answer on Github</a> about a graceful stop on OOM Kill:</p>
<blockquote>
<p>It is not possible to change OOM behavior currently. Kubernetes (or runtime) could provide your container a signal whenever your container is close to its memory limit. This will be on a best effort basis though because memory spikes might not be handled on time.</p>
</blockquote>
<p>Here is from <a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/#node-oom-behavior" rel="noreferrer">official documentation</a>:</p>
<blockquote>
<p>If the node experiences a system OOM (out of memory) event prior to the kubelet is able to reclaim memory, the node depends on the oom_killer to respond.
The kubelet sets a oom_score_adj value for each container based on the quality of service for the Pod.</p>
</blockquote>
<p>So, as you can see, you do not have much chance to handle it somehow.
Here is a large <a href="https://lwn.net/Articles/590960/" rel="noreferrer">article</a> about the handling of OOM; I will quote just a small part here, about the memory controller's out-of-memory handling:</p>
<blockquote>
<p>Unfortunately, there may not be much else that this process can do to respond to an OOM situation. If it has locked its text into memory with mlock() or mlockall(), or it is already resident in memory, it is now aware that the memory controller is out of memory. It can't do much of anything else, though, because most operations of interest require the allocation of more memory. </p>
</blockquote>
<p>The only thing I can offer is getting data from <a href="https://github.com/google/cadvisor" rel="noreferrer">cAdvisor</a> (where you can see an OOM Killer event) or from the Kubernetes API and running your command when you see from the metrics that you are very close to running out of memory. I am not sure that you will have time to do anything after you get the OOM Killer event, though.</p>
|
<p>I want to do ssh between two pods in kubernetes. </p>
<p>Can anyone tell me how to do that? </p>
| <p>Oversimplifying the answer, you can not.</p>
<p>That is, under "normal" circumstances... Your containers in pod launch single process, that is your application, be it nodejs, php, java or whatever, so they do not have a running SSH server inside their namespaces. Unless you explicitly run it by ie. running a "fat" container that launches a supervisor process (like ie. by using something like <code>phusion/baseimage</code> container) which by most in container world is considered an anti-pattern, or by running ssh in sidecar container, which will allow you to access that ssh server (but it will have it's own FS and potentially process tree, unless shared PID namespace is used).</p>
<p>As suggested in another answer, you could use service accounts to grant your software the rights to call the Kubernetes API and hence use things like <code>kubectl exec</code>. Whether that is the right call for you depends on what you really want to achieve in the end.</p>
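<p>For reference, if all you really need is an interactive shell in another pod's container, the usual approach (the names below are placeholders) is:</p>

<pre><code># from a workstation, or from a pod whose serviceaccount is allowed to use pods/exec
kubectl exec -it <pod-name> -c <container-name> -- /bin/sh
</code></pre>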
|
<p>I have a problem, maybe an easy one, but I couldn't handle it: how do I remove all k8s containers and images from my local machine?</p>
<pre><code>gcr.io/google_containers/k8s-dns-sidecar-amd64 Up 36 minutes k8s_sidecar_kube-dns-6fc954457d-mwgvb_kube-system_fd5ebaed-c63c-11e7-b3c8-28d24484a79b_116
gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64 Up 36 minutes k8s_dnsmasq_kube-dns-6fc954457d-mwgvb_kube-system_fd5ebaed-c63c-11e7-b3c8-28d24484a79b_116
gcr.io/google_containers/k8s-dns-kube-dns-amd64 Up 36 minutes k8s_kubedns_kube-dns-6fc954457d-mwgvb_kube-system_fd5ebaed-c63c-11e7-b3c8-28d24484a79b_116
gcr.io/google_containers/kubernetes-dashboard-amd64 Up 36 minutes k8s_kubernetes-dashboard_kubernetes-dashboard-xpg2v_kube-system_fd403866-c63c-11e7-b3c8-28d24484a79b_222
gcr.io/google-containers/kube-addon-manager Up 36 minutes k8s_kube-addon-manager_kube-addon-manager-lenovo-e540_kube-system_9831e93c3188555873fdb49f43198eef_186
gcr.io/google_containers/pause-amd64:3.0 Up 36 minutes k8s_POD_kube-dns-6fc954457d-mwgvb_kube-system_fd5ebaed-c63c-11e7-b3c8-28d24484a79b_116
gcr.io/google_containers/pause-amd64:3.0 Up 36 minutes k8s_POD_kube-addon-manager-lenovo-e540_kube-system_9831e93c3188555873fdb49f43198eef_186
gcr.io/google_containers/pause-amd64:3.0 Up 36 minutes k8s_POD_kubernetes-dashboard-xpg2v_kube-system_fd403866-c63c-11e7-b3c8-28d24484a79b_186
</code></pre>
<p>It's impossible to stop them (they always restart), and also to remove them with <code>rm</code> and <code>rmi</code>. I also tried to kill <code>kubelet</code>.</p>
<pre><code>$ ps ax | grep kubelet
17234 pts/18 S+ 0:00 grep --color=auto kubelet
$ kill -KILL 17234
bash: kill: (17234) - No such process
systemctl stop kubelet
Failed to stop kubelet.service: Unit kubelet.service not loaded.
</code></pre>
<p>I also tried to force-remove these containers:
<code>$ docker rm -f $(docker ps -a -q --filter "name=k8s")</code></p>
<p>but they are recreated after that...</p>
<p>Checking the available pods gives this...</p>
<pre><code>$ kubectl get po -n=kube-system
Unable to connect to the server: dial tcp 192.168.99.100:8443: getsockopt: network is unreachable
</code></pre>
<p>I was looking for it in documentations, stack etc. but with no effect.</p>
<p>Here's bug on github, but no one could help: <a href="https://github.com/kubernetes/kubernetes/issues/61173" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/61173</a></p>
<p>Thanks in advance!</p>
<p>Best regards,
Marcin</p>
| <p>I was working with this guide half a year ago: <a href="https://kubernetes.io/docs/getting-started-guides/minikube/" rel="nofollow noreferrer">https://kubernetes.io/docs/getting-started-guides/minikube/</a>, so I guess it is using VirtualBox. Maybe you know how I can check this?</p>
<p>@Remario thanks for the tip, now when I execute
<code>$ systemctl disable localkube.service</code> and
<code>$ systemctl stop localkube.service</code> </p>
<p><strong>I'm able to delete the k8s containers and they don't restart immediately, so the problem is partially resolved. Great!</strong></p>
<p>I got this error when trying to execute <code>docker rmi</code>:
<code>Error response from daemon: No such image: gcr.io/...</code></p>
<p>But the images were still in the <code>docker images</code> list. <strong>So I ran
<code>$ docker system prune -a</code> and all <code>gcr.io</code> images were deleted.</strong></p>
<p>Thanks for your time.</p>
<p>Best regards,
Marcin</p>
|
<p>So I have the following <a href="https://blog.openshift.com/wp-content/uploads/refarch-ocp-on-vmw-1.png" rel="nofollow noreferrer">Openshift/Origin architecture</a> installed following the official <a href="https://docs.openshift.org/3.6/install_config/install/advanced_install.html" rel="nofollow noreferrer">Openshift/Origin documentation</a>.
We also want to use the aggregated logging setup that comes out of the box, which is why it was set up by strictly following the <a href="https://docs.openshift.com/container-platform/3.6/install_config/aggregate_logging.html" rel="nofollow noreferrer">Openshift aggregated logging</a> documentation. </p>
<p>So far the feedback has been excellent, but I have another challenge for which I will need some help.
Developers want to specify on their own the log level that will be forwarded to Elasticsearch. Currently the log level is set only through FluentD.</p>
<p>Is there a way to set the log level via a Deployment variable and have it pass through Fluentd unchanged to Elasticsearch?</p>
<p>The goal is to give people a way to set on their own the log level that will be forwarded to Elasticsearch.</p>
| <p>I'm afraid there is no way to do it with the standard tools without adding a custom FluentD.</p>
<p>First of all, the FluentD in your cluster <a href="https://docs.openshift.com/container-platform/3.6/install_config/aggregate_logging.html#aggregated-fluentd" rel="nofollow noreferrer">reads container logs</a> provided by Docker through the json-file logging <a href="https://docs.docker.com/config/containers/logging/json-file/" rel="nofollow noreferrer">driver</a>:</p>
<blockquote>
<p>By default, Fluentd reads from /var/log/messages and /var/log/containers/*.log for system logs and container logs, respectively.</p>
</blockquote>
<p>Even by using systemd logging you will get the same result - the logging level is determined by Docker. Kubernetes <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#logging-at-the-node-level" rel="nofollow noreferrer">also uses</a> that driver.</p>
<p>For the Docker json-file driver you can set <code>log-tags</code>, which, theoretically, can help you filter logs. But it is impossible to set those options for a container at runtime through Kubernetes, so there is no way to do it.</p>
<p>The only way I see to do it is to use a sidecar container <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#sidecar-container-with-a-logging-agent" rel="nofollow noreferrer">with a custom logging agent</a>. It will look like this:</p>
<p><a href="https://i.stack.imgur.com/pKCSZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pKCSZ.png" alt="logging agent sidecar"></a> </p>
<p>Using a sidecar, you can run FluentD with a custom configuration inside it and parse your application's log with any modifications you like, including using environment variables to set the log level. </p>
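<p>A minimal sketch of such a pod could look like the following; the image tag, paths and ConfigMap name are assumptions, and the actual level filtering would live in the custom <code>fluent.conf</code> you mount in:</p>

<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar     # hypothetical name
spec:
  containers:
  - name: app
    image: my-app-image               # your application image
    env:
    - name: LOG_LEVEL                 # level chosen by the developers
      value: DEBUG
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app         # the app writes its log files here
  - name: fluentd-sidecar
    image: fluent/fluentd:v1.1        # tag is an assumption
    volumeMounts:
    - name: app-logs
      mountPath: /var/log/app
    - name: fluentd-config
      mountPath: /fluentd/etc         # custom fluent.conf that filters by LOG_LEVEL
  volumes:
  - name: app-logs
    emptyDir: {}
  - name: fluentd-config
    configMap:
      name: fluentd-sidecar-config    # hypothetical ConfigMap holding fluent.conf
</code></pre>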
|
<p>So I'm basically looking for anyone that can point me in the right direction for setting up Kubernetes to perform a common computation on many work items where each work item is a separate file.</p>
<p>I have been reading the documentation <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">here</a>, and it seems to suggest that it is possible, the examples are shown with queues of words and simply printing the words, however, I am having trouble with persistent volumes.</p>
<p>What I need to end up with is a deployment that will take a large file containing data points and split it into several files. <strong>I then want to have a Job object execute several pods, one on each file, performing the computation before passing the files back to the deployment for post-processing.</strong></p>
<p>I am having trouble finding out how to go about transferring the files, from what I read it seems that a PersistentVolume cannot be bound to more than one pod at once. <strong>So how do I go about passing a file to a single pod in a Job?</strong></p>
<p>Any suggestions or general direction would be greatly appreciated.</p>
| <blockquote>
<p>PersistentVolume cannot be bound to more than one pod at once.</p>
</blockquote>
<p>Whether a PV is shared among Nodes/Pods (or not) is determined by the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">accessMode</a>; it is not the case that all PVs are universally bound to just one Node/Pod.</p>
<p>As the chart on that page shows, there are many PV technologies that tolerate <code>ReadWriteMany</code>, the most famous of them being NFS.</p>
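<p>As a hedged sketch, an NFS-backed <code>ReadWriteMany</code> volume that both the splitter deployment and the Job pods could mount might look like this; the server address, paths and names are placeholders:</p>

<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-work-items             # hypothetical name
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany                     # many pods may mount it simultaneously
  nfs:
    server: 10.0.0.10                 # placeholder for your NFS server
    path: /exports/work-items
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-work-items-claim
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: ""                # bind to the pre-created PV, not a dynamic one
  resources:
    requests:
      storage: 10Gi
</code></pre>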
|
<p>Say if I have a <code>rabbitmq</code> service as follows:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-rabbitmq
spec:
ports:
- port: 6379
selector:
app: my-rabbitmq
</code></pre>
<p>And I have another deployment:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: A-worker
spec:
replicas: 1
containers:
- name: a-worker
image: worker-image
ports:
- containerPort: 80
env:
- name: rabbitmq_url
value: XXXXXXXXXXXXX
</code></pre>
<p>Is there any way to set the service IP as an environment variable in my second deployment by some kind of selector? In other words, what should go in the <code>value: XXXXXXXXXX</code> in the second deployment yaml? (Note: I know I can get the service IP with <code>kubectl get services</code>, but I'd like to know how to set this by the service name or label.) Any advice is welcome!</p>
| <p>Kubernetes injects environment variables for a service's host, port, and protocol, among others, into pod containers (see <a href="https://kubernetes.io/docs/concepts/containers/container-environment-variables/" rel="nofollow noreferrer">this doc</a>). </p>
<p><code>kubectl exec <pod> printenv</code> is one way to check which env variables are set. </p>
<p>If the service is created after the pod the env var may not be present so killing (restarting) the pod is one way to make sure the new environment variables are populated.</p>
<p>The convention is typically uppercase <SERVICE_NAME>_SERVICE_HOST, with dashes in the service name converted to underscores.
You can set it explicitly in a pod spec using the following syntax. </p>
<blockquote>
<pre><code> - name: rabbitmq_url
            value: $(MY_RABBITMQ_SERVICE_HOST)
</code></pre>
</blockquote>
<p>Bear in mind that the variable is already injected by k8s and this is just aliasing it. You may want to update the reference in your application layer/script to use the k8s-injected environment variable for the service directly.</p>
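<p>If your application expects a full URL rather than just the host, you can combine the injected host and port variables; the <code>amqp://</code> scheme below is an assumption about what your worker expects:</p>

<pre><code>env:
  - name: rabbitmq_url
    value: amqp://$(MY_RABBITMQ_SERVICE_HOST):$(MY_RABBITMQ_SERVICE_PORT)
</code></pre>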
|
<p>I'm currently looking for the simplest possible JSON log messages that would simply write a severity and a message to Stackdriver Logging from a container that runs in Kubernetes Engine and uses the managed Fluentd daemon.</p>
<p>Basically I'm writing single-line JSON entries as follows.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-js lang-js prettyprint-override"><code>{"severity": "DEBUG", "message": "I'm a debug entry"}
{"severity": "ERROR", "message": "I'm an error entry"}</code></pre>
</div>
</div>
</p>
<p>These end up in Stackdriver Logging with the following results.</p>
<ul>
<li>Severity is always INFO</li>
<li>There's JSON payload in the log entry, and the only content is the message, i.e. severity does not go there.</li>
</ul>
<p>My conclusion is that Fluentd recognizes the log row as JSON, but what I don't understand is why the severity is not set on the log entries correctly. Am I, for example, missing some mandatory fields that need to be in place?</p>
| <p>From the information you provided, I guess fluentd is passing your whole JSON as the jsonPayload of a <a href="https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry" rel="noreferrer">logEntry</a> and providing the logName, resource type and the rest of the required information from the environment variables. </p>
<p>In the end what Stackdriver is receiving must look something like this:</p>
<pre><code>{
"logName": "projects/[YOUR PROJECT ID]/logs/[KUBERNETES LOG]",
"entries": [
{
"jsonPayload": {
"message": "I'm an ERROR entry",
"severity": "ERROR"
},
"resource": {
"labels": {
"project_id": "[YOUR PROJECT ID]",
"instance_id": "[WHATEVER]",
"zone": "[YOUR ZONE]"
},
"type": "gce_instance"
}
}
]
}
</code></pre>
<p>So you are actually getting the content of the JSON payload on Stackdriver, but the severity is defined <strong>outside</strong> the JSON payload; if you want to set it inside the payload, you'll have to use <code>"severity": enum([NUMERICAL VALUE])</code>.</p>
<p><a href="https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry#logseverity" rel="noreferrer">The numerical values of each log level are:</a> </p>
<blockquote>
<p>Enums<br>
DEFAULT (0) The log entry has no assigned severity level.<br>
DEBUG (100) Debug or trace information.<br>
INFO (200) Routine information, such as ongoing status or performance.<br>
NOTICE (300) Normal but significant events, such as start up, shut down, or a configuration change.<br>
WARNING (400) Warning events might cause problems.<br>
ERROR (500) Error events are likely to cause problems.<br>
CRITICAL (600) Critical events cause more severe problems or outages.<br>
ALERT (700) A person must take an action immediately.<br>
EMERGENCY (800) One or more systems are unusable. </p>
</blockquote>
<p>So, including the field <code>"severity": enum(500)</code> should log the entry as an ERROR instead of falling back to the default INFO.</p>
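<p>Following that suggestion, the original single-line entries would become something like the lines below; whether your Fluentd configuration maps the numeric value onto the entry's severity field is an assumption you should verify:</p>

<pre><code>{"severity": 100, "message": "I'm a debug entry"}
{"severity": 500, "message": "I'm an error entry"}
</code></pre>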
|
<p>We're moving all of our infrastructure to Google Kubernetes Engine (GKE) - we currently have 50+ AWS machines with lots of APIs, Services, Webapps, Database servers and more.</p>
<p>As we have already dockerized everything, it's time to start moving everything to GKE.</p>
<p>I have a question that may sound too basic, but I've been searching the Internet for a week and did not find any reasonable post about this.</p>
<p>Straight to the point, which of the following approaches is better and why:</p>
<ol>
<li><p>Having multiple node pools with multiple machine types and always specify in which pool each deployment should be done; or</p></li>
<li><p>Having a single pool with lots of machines and let Kubernetes scheduler do the job without worrying about where my deployments will be done; or</p></li>
<li><p>Having BIG machines (in multiple zones to improve clusters' availability and resilience) and let Kubernetes deploy everything there.</p></li>
</ol>
| <h2>A list of considerations to be taken merely as hints; I do not pretend to describe best practice.</h2>
<ul>
<li><p>Each node you add brings with it some <strong>overhead</strong>, but you gain flexibility and availability, making node failures and maintenance less impactful on production. </p></li>
<li><p>Nodes that are too small would cause a big waste of resources, since sometimes it will not be possible to schedule a pod even though the total amount of free RAM or CPU across the nodes would be enough; you can see this issue as similar to memory <strong>fragmentation</strong>.</p></li>
<li><p>I guess that the sizes of your pods and their memory and CPU requests are not all similar, but I do not see this as a big issue in principle or as a reason to go for 1). I do not see why a big pod should run only on big machines and a small one should be scheduled on small nodes. <strong>I would rather use 1) if you need a different memory-GB/CPU-cores ratio to support different workloads</strong> (see the sketch after this list). </p></li>
</ul>
<p>I would advise you to run some tests in the initial phase to understand the size of the biggest pod and the average size of the workload in order to properly choose the machine types. Consider that having one pod that exactly fits one node, and is pinned to it, is not the right way to proceed (virtual machines exist for this kind of scenario), since fragmentation of resources would easily make it impossible to schedule such a large pod.</p>
<ul>
<li><p>Consider that their size will likely increase in the future, and that <a href="https://stackoverflow.com/questions/45037213/how-to-vertically-scale-google-cloud-instance-without-stopping-running-app">scaling vertically</a> is not always immediate, since you need to switch off machines and terminate pods. I would <strong>oversize a bit</strong> to take this issue into account, and because scaling horizontally is way easier. </p></li>
<li><p>Talking about the machine type, you can decide to go for a machine 5x the size of the biggest pod you have (or 3x? or 10x?). <strong>Oversize a bit</strong> the number of nodes in the cluster as well, to account for overheads and fragmentation and in order to still have free resources.</p>
<ol>
<li><blockquote>
<p>Remember that you have a hard limit of 100 pods per node and 5000 nodes.</p>
</blockquote></li>
<li><blockquote>
<p><a href="https://cloud.google.com/compute/docs/networks-and-firewalls#egress_throughput_caps" rel="nofollow noreferrer">Remember</a> that in GCP the network egress throughput cap is dependent on the number of vCPUs that a virtual machine instance has. Each vCPU has a 2 Gbps egress cap for peak performance. However each additional vCPU increases the network cap, up to a theoretical maximum of 16 Gbps for each virtual machine.</p>
</blockquote></li>
<li><blockquote>
<p>Regarding the prices of the virtual machines, notice that there is no difference in price between buying two machines of size x or one of size 2x. Avoid customising the size of machines because it is rarely convenient; if you feel like your workload needs more CPU or memory, go for a HighMem or HighCpu machine type. </p>
</blockquote></li>
</ol></li>
</ul>
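<p>If you do go for approach 1), a hedged sketch of creating a dedicated pool and pinning workloads to it could look like the following; the cluster, pool and machine-type names are placeholders:</p>

<pre><code># create a dedicated pool for memory-hungry workloads
gcloud container node-pools create high-mem-pool \
  --cluster=my-cluster --machine-type=n1-highmem-8 --num-nodes=3
</code></pre>

<p>and in the pod template of the deployments that should land there:</p>

<pre><code>nodeSelector:
  cloud.google.com/gke-nodepool: high-mem-pool
</code></pre>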
<p>P.S. Since you are going to build a pretty big cluster, check the <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/" rel="nofollow noreferrer">size of the DNS</a>.</p>
<p>I will add any further considerations that come to my mind; consider updating your question in the future with a description of the path you chose and the issues you faced.</p>
|
<p>Say if I have a <code>rabbitmq</code> service as follows:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-rabbitmq
spec:
ports:
- port: 6379
selector:
app: my-rabbitmq
</code></pre>
<p>And I have another deployment:</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: A-worker
spec:
replicas: 1
containers:
- name: a-worker
image: worker-image
ports:
- containerPort: 80
env:
- name: rabbitmq_url
value: XXXXXXXXXXXXX
</code></pre>
<p>Is there any way to set the service IP as an environment variable in my second deployment by some kind of selector? In other words, what should go in the <code>value: XXXXXXXXXX</code> in the second deployment yaml? (Note: I know I can get the service IP with <code>kubectl get services</code>, but I'd like to know how to set this by the service name or label.) Any advice is welcome!</p>
| <p>Reading between the lines (and I hope this helps):</p>
<p>K8s automatically creates service environment variables for you inside each pod. See <a href="https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables</a> for details.</p>
<p>The other route is to enable kube dns, in which case one can contact a service IP simply by using the service name.</p>
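<p>With kube-dns enabled, the hard-coded value can simply be the service's DNS name, for example (assuming both objects live in the <code>default</code> namespace):</p>

<pre><code>env:
  - name: rabbitmq_url
    value: my-rabbitmq.default.svc.cluster.local   # or just "my-rabbitmq" from within the same namespace
</code></pre>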
|
<p>I have a project with two microservices and a gateway generated through JHipster. I deployed the project to the AWS cloud using <a href="https://aws.amazon.com/quickstart/architecture/heptio-kubernetes/" rel="nofollow noreferrer">Kubernetes by Heptio on AWS</a>. The Kafka container failed to start after several attempts. When I logged into the container through <code>kubectl</code>, I found out that Kafka is unable to resolve the Zookeeper hostname. The Kafka and Zookeeper configuration files were <a href="https://www.jhipster.tech/kubernetes/" rel="nofollow noreferrer">generated</a> by JHipster. </p>
<p>Exception:</p>
<pre><code>pjadda$ kubectl attach -it chargecodes-kafka-5799d8f99b-wnqhc -n duppoc
Unable to use a TTY - container kafka did not allocate one
If you don't see a command prompt, try pressing enter.
waiting for kafka to be ready
waiting for kafka to be ready
[2018-03-21 00:22:29,263] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2018-03-21 00:22:29,265] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.I0Itec.zkclient.exception.ZkException: Unable to connect to chargecodes-zookeeper.duppoc.svc.cluster.local:2181
at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:72)
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1228)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:157)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:131)
at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:115)
at kafka.utils.ZkUtils$.withMetrics(ZkUtils.scala:92)
at kafka.server.KafkaServer.initZk(KafkaServer.scala:350)
at kafka.server.KafkaServer.startup(KafkaServer.scala:194)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:38)
at kafka.Kafka$.main(Kafka.scala:92)
at kafka.Kafka.main(Kafka.scala)
Caused by: java.net.UnknownHostException: chargecodes-zookeeper.duppoc.svc.cluster.local: System error
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
at java.net.InetAddress.getAllByName(InetAddress.java:1192)
at java.net.InetAddress.getAllByName(InetAddress.java:1126)
at org.apache.zookeeper.client.StaticHostProvider.<init>(StaticHostProvider.java:61)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:445)
at org.apache.zookeeper.ZooKeeper.<init>(ZooKeeper.java:380)
at org.I0Itec.zkclient.ZkConnection.connect(ZkConnection.java:70)
... 10 more
[2018-03-21 00:22:29,269] INFO shutting down (kafka.server.KafkaServer)
[2018-03-21 00:22:29,280] INFO shut down completed (kafka.server.KafkaServer)
[2018-03-21 00:22:29,281] FATAL Exiting Kafka. (kafka.server.KafkaServerStartable)
[2018-03-21 00:22:29,283] INFO shutting down (kafka.server.KafkaServer)
</code></pre>
<p>Kubernetes Pods:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
chargecodes-56fdf7cd49-fsrhb 0/1 CrashLoopBackOff 7 14m
chargecodes-kafka-5799d8f99b-wnqhc 0/1 CrashLoopBackOff 7 14m
chargecodes-mysql-5d8f4c9484-hxlrl 1/1 Running 0 14m
chargecodes-zookeeper-85d59f6759-lf4mh 1/1 Running 0 14m
jhipster-registry-0 1/1 Running 0 14m
jhipster-registry-1 1/1 Running 0 14m
</code></pre>
<p>chargecodes-kafka.yml</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: chargecodes-kafka
namespace: duppoc
spec:
replicas: 1
template:
metadata:
labels:
app: chargecodes-kafka
spec:
containers:
- name: kafka
image: wurstmeister/kafka:1.0.0
env:
- name: KAFKA_ADVERTISED_HOST_NAME
value: chargecodes-kafka.duppoc.svc.cluster.local
- name: KAFKA_ADVERTISED_PORT
value: '9092'
- name: KAFKA_ZOOKEEPER_CONNECT
value: chargecodes-zookeeper.duppoc.svc.cluster.local:2181
- name: KAFKA_CREATE_TOPICS
value: 'topic-jhipster:1:1'
ports:
- containerPort: 9092
---
apiVersion: v1
kind: Service
metadata:
name: chargecodes-kafka
namespace: duppoc
spec:
selector:
app: chargecodes-kafka
ports:
- port: 9092
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: chargecodes-zookeeper
namespace: duppoc
spec:
replicas: 1
template:
metadata:
labels:
app: chargecodes-zookeeper
spec:
containers:
- name: zookeeper
image: wurstmeister/zookeeper:3.4.6
ports:
- containerPort: 2181
---
apiVersion: v1
kind: Service
metadata:
name: chargecodes-zookeeper
namespace: duppoc
spec:
selector:
app: chargecodes-zookeeper
ports:
- port: 2181
</code></pre>
| <p>I deleted and recreated the CloudFormation stack in AWS. I think the issue was due to the fact that I had deleted the <code>kube-system</code> namespace by mistake. I think resetting the <code>kube-system</code> namespace would also solve the problem, but I had to make some changes to the cluster anyway, so I deleted it.</p>
|