prompt | response
---|---
<p>I have one <strong>kubernetes cluster</strong> on <strong>gcp</strong>, running my <strong>express</strong> and <strong>node.js</strong> application, performing <strong><code>CRUD</code></strong> operations with <strong><code>MongoDB</code></strong>.</p>
<p>I created one secret containing the <code>username</code> and <code>password</code>
for connecting to <code>mongoDB</code>, and specified that secret as <code>environment</code> variables in my <code>kubernetes</code> <strong><code>yml</code></strong> file.
Now my question is "<strong>How do I access that username and password
in the Node.js application to connect to mongoDB</strong>"?</p>
<p>I tried <code>process.env.SECRET_USERNAME</code> and <code>process.env.SECRET_PASSWORD</code>
in the <code>Node.JS</code> application, but both come back as <strong><code>undefined</code></strong>.</p>
<p>Any ideas will be appreciated.</p>
<p><strong>Secret.yaml</strong></p>
<pre><code>apiVersion: v1
data:
  password: pppppppppppp==
  username: uuuuuuuuuuuu==
kind: Secret
metadata:
  creationTimestamp: 2018-07-11T11:43:25Z
  name: test-mongodb-secret
  namespace: default
  resourceVersion: "00999"
  selfLink: /api-path-to/secrets/test-mongodb-secret
  uid: 0900909-9090saiaa00-9dasd0aisa-as0a0s-
type: Opaque
</code></pre>
<p><strong>kubernetes.yaml</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "4"
  creationTimestamp: 2018-07-11T11:09:45Z
  generation: 5
  labels:
    name: test
  name: test
  namespace: default
  resourceVersion: "90909"
  selfLink: /api-path-to/default/deployments/test
  uid: htff50d-8gfhfa-11egfg-9gf1-42010gffgh0002a
spec:
  replicas: 1
  selector:
    matchLabels:
      name: test
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: test
    spec:
      containers:
      - env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              key: username
              name: test-mongodb-secret
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              key: password
              name: test-mongodb-secret
        image: gcr-image/env-test_node:latest
        imagePullPolicy: Always
        name: env-test-node
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2018-07-11T11:10:18Z
    lastUpdateTime: 2018-07-11T11:10:18Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 5
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
</code></pre>
| <p>Your <code>kubernetes.yaml</code> file specifies which environment variables your secret values are exposed through, so they are accessible by apps in that namespace.</p>
<p>Using the <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="noreferrer">kubectl secrets CLI</a> you can upload your secret.</p>
<pre><code>kubectl create secret generic -n node-app test-mongodb-secret --from-literal=username=a-username --from-literal=password=a-secret-password
</code></pre>
<p>(the namespace arg <code>-n node-app</code> is optional; otherwise it will upload to the <code>default</code> namespace)</p>
<p>After running this command, you can check your kube dashboard to see that the secret has been saved.</p>
<p>Then from your node app, access the environment variable <code>process.env.SECRET_PASSWORD</code>.</p>
<p>Perhaps in your case the secrets were created in the wrong namespace, hence the <code>undefined</code> values in your application.</p>
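<p>A quick way to confirm both the namespace and that the variables actually reach the container (a sketch; the pod name below is a placeholder) is:</p>
<pre><code># check which namespace the secret actually lives in
kubectl get secrets --all-namespaces | grep test-mongodb-secret

# check the environment inside a running pod of the deployment
kubectl exec -it <test-pod-name> -- env | grep SECRET_

# decode a stored value locally
kubectl get secret test-mongodb-secret -o jsonpath='{.data.username}' | base64 --decode
</code></pre>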
<p><strong>EDIT 1</strong></p>
<p>Your indentation for <code>containers.env</code> seems to be wrong; compare with this example:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
  - name: mycontainer
    image: redis
    env:
      - name: SECRET_USERNAME
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: username
      - name: SECRET_PASSWORD
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: password
  restartPolicy: Never
</code></pre>
|
<p>I am really having trouble debugging this and could use some help. I am successfully starting a kubernetes service and deployment using a working docker image. </p>
<p>My service file:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: auth-svc
  labels:
    app: auth_v1
spec:
  type: NodePort
  ports:
  - port: 3000
    nodePort: 30000
    protocol: TCP
  selector:
    app: auth_v1
</code></pre>
<p>Deploy File: </p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-deploy
  labels:
    app: auth_v1
spec:
  revisionHistoryLimit: 5
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  replicas: 3
  selector:
    matchLabels:
      app: auth_v1
  template:
    metadata:
      labels:
        app: auth_v1
    spec:
      containers:
      - name: auth-pod
        image: index.docker.io/XXX/auth
        command: [ "yarn", "start-staging" ]
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
      imagePullSecrets:
      - name: myregistrykey
</code></pre>
<p><code>kubectl get pods</code> shows that the pods are up and running. I have tested jumping into the pod/container with a shell and tried running my application, and it works. When I run <code>kubectl describe auth-deploy</code> I am seeing a container listed as auth-pod. However, I am not seeing any containers when I run <code>docker ps</code> or <code>docker ps -a</code>. Also, the logs for my pods show nothing. Is there something I am doing wrong?</p>
<p>For reference, here is my Dockerfile: </p>
<pre><code>FROM node:8.11.2-alpine AS build
LABEL maintainer="[email protected]"
# Copy Root Dir & Set Working Dir
COPY . /src
WORKDIR /src
# Build & Start Our App
RUN apk update
RUN apk add --update alpine-sdk
RUN apk add --update python
RUN yarn install
RUN yarn build-staging
# Build Production Image Using Node Container
FROM node:8.11.2-alpine AS production
# Copy Build to Image
COPY --from=build /src/.next /src/.next/
COPY --from=build /src/production-server /src/production-server/
COPY --from=build /src/static /src/static/
COPY --from=build /src/package.json /src
WORKDIR /src
# Install Essential Packages & Start App
RUN apk update
RUN apk add --update alpine-sdk
RUN apk add --update python
RUN yarn install
# Expose Ports Needed
EXPOSE 3000
VOLUME [ "/src/log" ]
# Start App
CMD [ "yarn", "start-staging" ]
</code></pre>
| <p>Is it possible that you are running <code>docker ps</code> on the K8s-master instead of where the pods are located?</p>
<p>You can find out where your pods are running by running the command below:</p>
<pre><code>$ kubectl describe pod auth-deploy
</code></pre>
<p>It should return something similar to below (in my case it's a percona workload):</p>
<pre><code>$ kubectl describe pod percona
Name: percona-b98f87dbd-svq64
Namespace: default
Node: ip-xxx-xx-x-xxx.us-west-2.compute.internal/xxx.xx.x.xxx
</code></pre>
<p>Get the IP, SSH into the node, and run <code>docker ps</code> locally on the node where your container is located.</p>
<pre><code>$ docker ps | grep percona
010f3d529c55 percona "docker-entrypoint.s…" 7 minutes ago Up 7 minutes k8s_percona_percona-b98f87dbd-svq64_default_4aa2fe83-861a-11e8-9d5f-061181005f56_0
616d70e010bc k8s.gcr.io/pause-amd64:3.1 "/pause" 8 minutes ago Up 7 minutes k8s_POD_percona-b98f87dbd-svq64_default_4aa2fe83-861a-11e8-9d5f-061181005f56_0
</code></pre>
<p>Another possibility is that you might be using a different container runtime, such as rkt, containerd, or lxd, instead of docker.</p>
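<p>One quick way to check where a pod landed and which container runtime its node uses, relying only on standard kubectl output:</p>
<pre><code># the NODE column shows where each pod runs
kubectl get pods -o wide

# the CONTAINER-RUNTIME column shows docker://, containerd://, etc. per node
kubectl get nodes -o wide
</code></pre>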
|
<p>I am new to Kubernetes and wanted to understand how I can expose a service running in Kubernetes to the outside world. I have exposed it using a NodePort on the cluster.
So, for example: A service exposes port 31234 on the host and I can get to the service from another server through <a href="https://kubeserverIP:31234" rel="noreferrer">https://kubeserverIP:31234</a>.</p>
<p>What I want to achieve is to serve this service through nginx (on a different server, out of Kube control) via a url, say, <a href="http://service.example.com" rel="noreferrer">http://service.example.com</a>. I have tried deploying nginx with an upstream pointing to the service, but that is not working and I get a bad gateway error. </p>
<p>Is there something which I am missing here? Or is there a neater way of achieving this?</p>
<p>I have a baremetal installation of Kubernetes cluster and have no access to gce load balancer or other vendor LBs.</p>
<p>Thanks</p>
| <p>Thanks for pointing in the right direction. </p>
<p>Essential steps broadly were:</p>
<ol>
<li><p>Create an app and its service definition.</p></li>
<li><p>Create a namespace for ingress.</p></li>
<li>Create a default backend deployment and service for redirecting all requests not defined in Ingress rules. Create these in the ingress namespace.</li>
<li>Create the nginx ingress controller deployment. </li>
<li>Create RBAC rules.</li>
<li>Finally create the ingress rule for the applications with the paths and the ports.</li>
</ol>
<p>Found a very useful guide which explains things in detail:
<a href="https://akomljen.com/kubernetes-nginx-ingress-controller/" rel="noreferrer">https://akomljen.com/kubernetes-nginx-ingress-controller/</a></p>
|
<p>I'm trying to deploy a function with the <a href="https://github.com/openfaas" rel="nofollow noreferrer">OpenFaas project</a> on a kubernetes cluster running on 2 Raspberry Pi 3B+.
Unfortunately, the pod that should handle the function goes into the <code>ImageInspectError</code> state...
I tried to run the function, which is contained in a Docker image, directly with Docker, and everything works fine.</p>
<p>I opened an <a href="https://github.com/openfaas/faas-cli/issues/433" rel="nofollow noreferrer">issue</a> on the OpenFaas github and the maintainer told me to ask the Kubernetes community directly to get some clues.</p>
<p>My first question is: What does ImageInspectError mean and where does it come from?</p>
<p>And here is all the information I have:</p>
<h2>Expected Behaviour</h2>
<p>Pod should run.</p>
<h2>Current Behaviour</h2>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-masternode 1/1 Running 1 1d
kube-system kube-apiserver-masternode 1/1 Running 1 1d
kube-system kube-controller-manager-masternode 1/1 Running 1 1d
kube-system kube-dns-7f9b64f644-x42sr 3/3 Running 3 1d
kube-system kube-proxy-wrp6f 1/1 Running 1 1d
kube-system kube-proxy-x6pvq 1/1 Running 1 1d
kube-system kube-scheduler-masternode 1/1 Running 1 1d
kube-system weave-net-4995q 2/2 Running 3 1d
kube-system weave-net-5g7pd 2/2 Running 3 1d
openfaas-fn figlet-7f556fcd87-wrtf4 1/1 Running 0 4h
openfaas-fn testfaceraspi-7f6fcb5897-rs4cq 0/1 ImageInspectError 0 2h
openfaas alertmanager-66b98dd4d4-kcsq4 1/1 Running 1 1d
openfaas faas-netesd-5b5d6d5648-mqftl 1/1 Running 1 1d
openfaas gateway-846f8b5686-724q8 1/1 Running 2 1d
openfaas nats-86955fb749-7vsbm 1/1 Running 1 1d
openfaas prometheus-6ffc57bb8f-fpk6r 1/1 Running 1 1d
openfaas queue-worker-567bcf4d47-ngsgv 1/1 Running 2 1d
</code></pre>
<p>The <code>testfaceraspi</code> doesn't run.</p>
<p>Logs from the pod : </p>
<pre><code>$ kubectl logs testfaceraspi-7f6fcb5897-rs4cq -n openfaas-fn
Error from server (BadRequest): container "testfaceraspi" in pod "testfaceraspi-7f6fcb5897-rs4cq" is waiting to start: ImageInspectError
</code></pre>
<p>Pod describe : </p>
<pre><code>$ kubectl describe pod -n openfaas-fn testfaceraspi-7f6fcb5897-rs4cq
Name: testfaceraspi-7f6fcb5897-rs4cq
Namespace: openfaas-fn
Node: workernode/10.192.79.198
Start Time: Thu, 12 Jul 2018 11:39:05 +0200
Labels: faas_function=testfaceraspi
pod-template-hash=3929761453
Annotations: prometheus.io.scrape=false
Status: Pending
IP: 10.40.0.16
Controlled By: ReplicaSet/testfaceraspi-7f6fcb5897
Containers:
testfaceraspi:
Container ID:
Image: gallouche/testfaceraspi
Image ID:
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: ImageInspectError
Ready: False
Restart Count: 0
Liveness: exec [cat /tmp/.lock] delay=3s timeout=1s period=10s #success=1 #failure=3
Readiness: exec [cat /tmp/.lock] delay=3s timeout=1s period=10s #success=1 #failure=3
Environment:
fprocess: python3 index.py
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-5qhnn (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-5qhnn:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-5qhnn
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning DNSConfigForming 2m (x1019 over 3h) kubelet, workernode Search Line limits were exceeded, some search paths have been omitted, the applied search line is: openfaas-fn.svc.cluster.local svc.cluster.local cluster.local heig-vd.ch einet.ad.eivd.ch web.ad.eivd.ch
</code></pre>
<p>And the event logs : </p>
<pre><code>$ kubectl get events --sort-by=.metadata.creationTimestamp -n openfaas-fn
LAST SEEN FIRST SEEN COUNT NAME KIND SUBOBJECT TYPE REASON SOURCE MESSAGE
14m 1h 347 testfaceraspi-7f6fcb5897-rs4cq.1540db41e89d4c52 Pod Warning DNSConfigForming kubelet, workernode Search Line limits were exceeded, some search paths have been omitted, the applied search line is: openfaas-fn.svc.cluster.local svc.cluster.local cluster.local heig-vd.ch einet.ad.eivd.ch web.ad.eivd.ch
4m 1h 75 figlet-7f556fcd87-wrtf4.1540db421002b49e Pod Warning DNSConfigForming kubelet, workernode Search Line limits were exceeded, some search paths have been omitted, the applied search line is: openfaas-fn.svc.cluster.local svc.cluster.local cluster.local heig-vd.ch einet.ad.eivd.ch web.ad.eivd.ch
10m 10m 1 testfaceraspi-7f6fcb5897-d6z78.1540df9ed8b91865 Pod Normal Scheduled default-scheduler Successfully assigned testfaceraspi-7f6fcb5897-d6z78 to workernode
10m 10m 1 testfaceraspi-7f6fcb5897.1540df9ed6eee11f ReplicaSet Normal SuccessfulCreate replicaset-controller Created pod: testfaceraspi-7f6fcb5897-d6z78
10m 10m 1 testfaceraspi-7f6fcb5897-d6z78.1540df9eef3ef504 Pod Normal SuccessfulMountVolume kubelet, workernode MountVolume.SetUp succeeded for volume "default-token-5qhnn"
4m 10m 27 testfaceraspi-7f6fcb5897-d6z78.1540df9eef5445c0 Pod Warning DNSConfigForming kubelet, workernode Search Line limits were exceeded, some search paths have been omitted, the applied search line is: openfaas-fn.svc.cluster.local svc.cluster.local cluster.local heig-vd.ch einet.ad.eivd.ch web.ad.eivd.ch
8m 9m 8 testfaceraspi-7f6fcb5897-d6z78.1540df9f670d0dad Pod spec.containers{testfaceraspi} Warning InspectFailed kubelet, workernode Failed to inspect image "gallouche/testfaceraspi": rpc error: code = Unknown desc = Error response from daemon: readlink /var/lib/docker/overlay2/l: invalid argument
9m 9m 7 testfaceraspi-7f6fcb5897-d6z78.1540df9f670fcf3e Pod spec.containers{testfaceraspi} Warning Failed kubelet, workernode Error: ImageInspectError
</code></pre>
<h2>Steps to Reproduce (for bugs)</h2>
<ol>
<li>Deploy OpenFaas on a 2 node k8s cluster</li>
<li>Create function with <code>faas new testfaceraspi --lang python3-armhf</code></li>
<li><p>Add the following code in the <code>handler.py</code> : </p>
<pre><code>import json

def handle(req):
    jsonl = json.loads(req)
    return ("Found " + str(jsonl["nbFaces"]) + " faces in OpenFaas Function on raspi !")
</code></pre></li>
<li><p>Change gateway and image in the <code>.yml</code>:</p>
<pre><code>provider:
  name: faas
  gateway: http://127.0.0.1:31112

functions:
  testfaceraspi:
    lang: python3-armhf
    handler: ./testfaceraspi
    image: gallouche/testfaceraspi
</code></pre></li>
<li><p>Run <code>faas build -f testfacepi.yml</code></p></li>
<li>Login in DockerHub with <code>docker login</code></li>
<li>Run <code>faas push -f testfacepi.yml</code></li>
<li>Run <code>faas deploy -f testfacepi.yml</code></li>
</ol>
<h2>Your Environment</h2>
<ul>
<li><p>FaaS-CLI version ( Full output from: <code>faas-cli version</code> ):</p>
<pre><code>Commit: 3995a8197f1df1ecdf524844477cffa04e4690ea
Version: 0.6.11
</code></pre></li>
<li><p>Docker version ( Full output from: <code>docker version</code> ):</p>
<pre><code>Client:
Version: 18.04.0-ce
API version: 1.37
Go version: go1.9.4
Git commit: 3d479c0
Built: Tue Apr 10 18:25:24 2018
OS/Arch: linux/arm
Experimental: false
Orchestrator: swarm
Server:
Engine:
Version: 18.04.0-ce
API version: 1.37 (minimum version 1.12)
Go version: go1.9.4
Git commit: 3d479c0
Built: Tue Apr 10 18:21:25 2018
OS/Arch: linux/arm
Experimental: false
</code></pre></li>
<li><p>Operating System and version (e.g. Linux, Windows, MacOS):</p>
<pre><code>Distributor ID: Raspbian
Description: Raspbian GNU/Linux 9.4 (stretch)
Release: 9.4
Codename: stretch
</code></pre></li>
</ul>
<p>Thanks in advance, and tell me if you need any more information.</p>
<p>Gallouche</p>
| <p>I've seen this error when the Docker version wasn't supported by Kubernetes. As of Kubernetes version 1.11, the supported Docker versions are 1.11.2 to 1.13.1 and 17.03.x.</p>
<p>I couldn't test the solution with OpenFaaS.</p>
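<p>If you want to verify this, a couple of quick checks (run on the worker node and against the cluster, respectively):</p>
<pre><code># Docker engine version on the node
docker version --format '{{.Server.Version}}'

# container runtime version as reported by Kubernetes, per node
kubectl get nodes -o wide
</code></pre>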
|
<p>I have deployed an application into Kubernetes using the following command.</p>
<pre><code>kubectl apply -f deployment.yaml -n <NAMESPACE>
</code></pre>
<p>I have my deployment content in the deployment yaml file.</p>
<p>This is working fine. Now, I have updated a few things in the deployment.yaml file and hence would like to update the deployment.</p>
<p><strong>Option 1:- Delete and deploy again</strong></p>
<pre><code>kubectl delete -f deployment.yaml -n <NAMESPACE>
kubectl apply -f deployment.yaml -n <NAMESPACE>
</code></pre>
<p><strong>Option 2:- Use set to update changes</strong></p>
<pre><code>kubectl set image deployment/nginx-deployment nginx=nginx:1.91
</code></pre>
<p>I don't want to use this approach as I am keeping my deployment.yaml file in GitHub.</p>
<p><strong>Option 3:- Using edit command</strong></p>
<pre><code>kubectl edit deployment/nginx-deployment
</code></pre>
<p>I don't want to use the above 3 options.</p>
<p>Is there any way to update the deployment using the file itself?</p>
<p>Like,</p>
<pre><code>kubectl update deployment.yaml -n NAMESPACE
</code></pre>
<p>This way, I will make sure that I will always have the latest deployment file in my GitHub repo.</p>
| <p>As @Daisy Shipton has said, what you want to do could be simplified with a simple command: <code>kubectl apply -f deployment.yaml</code>. </p>
<p><strong>I will also add that I don't think it's correct to utilize the <em>Option 2</em> to update the image utilized by the <code>Pod</code> with an imperative command!</strong> If the source of truth is the <code>Deployment</code> file present on your GitHub, you should simply update that file, by modifying the image that is used by your Pod's container there! </p>
<ul>
<li>Next time you want to update your <code>Deployment</code> object, if you forget to modify the .yaml file first, you will be setting the Pods back to the previous Nginx image.</li>
</ul>
<p>So there should certainly be some restriction on using imperative commands to update the specification of any Kubernetes object!</p>
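<p>The declarative workflow then stays entirely file-based, roughly like this (the deployment name is taken from the question's own example):</p>
<pre><code># edit deployment.yaml in your repo (e.g. bump the image tag), commit it, then:
kubectl apply -f deployment.yaml -n <NAMESPACE>

# watch the rollout, and roll back if something goes wrong
kubectl rollout status deployment/nginx-deployment -n <NAMESPACE>
kubectl rollout undo deployment/nginx-deployment -n <NAMESPACE>
</code></pre>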
|
<p>I am able to install kubernetes using the kubeadm method successfully. My environment is behind a proxy. I applied the proxy settings to the system and to docker, and I am able to pull images from Docker Hub without any issues. But at the last step, where we have to install the pod network (like weave or flannel), it is not able to connect via the proxy. It gives a timeout error. I am just checking whether there is anything like the curl -x http:// option for kubectl apply -f. Until I perform this step it says the master is NotReady.</p>
| <p>When you do work with a proxy for internet access, do not forget to configure the <code>NO_PROXY</code> environment variable, in addition to <code>HTTP(S)_PROXY</code>.</p>
<p>See <a href="https://docs.openshift.com/container-platform/3.4/install_config/http_proxies.html#configuring-no-proxy" rel="nofollow noreferrer">this example</a>:</p>
<blockquote>
<p>NO_PROXY accepts a comma-separated list of hosts, IP addresses, or IP ranges in <a href="https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing" rel="nofollow noreferrer">CIDR</a> format:</p>
<p>For master hosts</p>
<ul>
<li>Node host name</li>
<li>Master IP or host name</li>
</ul>
<p>For node hosts</p>
<ul>
<li>Master IP or host name</li>
</ul>
<p>For the Docker service</p>
<ul>
<li>Registry service IP and host name</li>
</ul>
</blockquote>
<p>See also for instance <a href="https://github.com/weaveworks/scope/issues/2246#issuecomment-281712035" rel="nofollow noreferrer">weaveworks/scope issue 2246</a>.</p>
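<p>As a sketch, the environment for the shell session that runs <code>kubectl apply</code> might look like this (the proxy host, cluster CIDRs, and manifest name are placeholders to adjust to your network):</p>
<pre><code>export HTTP_PROXY=http://proxy.example.com:3128
export HTTPS_PROXY=http://proxy.example.com:3128
export NO_PROXY=localhost,127.0.0.1,<master-ip>,<node-ips>,10.96.0.0/12,10.244.0.0/16

# then apply the pod network manifest (weave, flannel, ...)
kubectl apply -f <pod-network-manifest.yaml>
</code></pre>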
|
<p>I have created a k8s cluster using kubeadm and have a couple of questions about the kube-controller-manager and kuber-apiserver components.</p>
<ul>
<li><p>When created using kubeadm, those components are started as pods, not systemd daemons. If I kill any of those pods, they are restarted, but who is restarting them? I haven't seen any replicacontroller nor deployment in charge of doing that.</p></li>
<li><p>What is the "right" way of updating their configuration? Imagine I want to change the authorization-mode of the api server. In the master node we can find a <code>/etc/kubernetes/manifests</code> folder with a <code>kube-apiserver.yaml</code> file. Are we supposed to change this file and just kill the pod so that it restarts with the new config? </p></li>
</ul>
| <p>The feature you've described is called Static Pods. Here is a part of <a href="https://kubernetes.io/docs/tasks/administer-cluster/static-pod/" rel="nofollow noreferrer">documentation</a> that describes their behaviour.</p>
<blockquote>
<p>Static pods are managed directly by kubelet daemon on a specific node,
without the API server observing it. It does not have an associated
replication controller, and kubelet daemon itself watches it and
restarts it when it crashes. There is no health check. Static pods are
always bound to one kubelet daemon and always run on the same node
with it.</p>
<p>Kubelet automatically tries to create a mirror pod on the Kubernetes
API server for each static pod. This means that the pods are visible
on the API server but cannot be controlled from there.</p>
<p>The configuration files are just standard pod definitions in json or
yaml format in a specific directory. Use kubelet
<code>--pod-manifest-path=<the directory></code> to start <code>kubelet</code> daemon, which periodically scans the directory and creates/deletes static pods as
yaml/json files appear/disappear there. Note that kubelet will ignore
files starting with dots when scanning the specified directory. </p>
<p>When kubelet starts, it automatically starts all pods defined in
directory specified in <code>--pod-manifest-path=</code> or <code>--manifest-url=</code>
arguments, i.e. our static-web.</p>
</blockquote>
<p>Usually, those manifests are stored in the directory <code>/etc/kubernetes/manifests</code>.<br>
If you make any changes to any of those manifests, that resource will be adjusted just as if you had run the <code>kubectl apply -f something.yaml</code> command.</p>
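<p>In practice, for the API server flag change you describe, that looks roughly like this on the master node (paths assume a kubeadm setup):</p>
<pre><code># edit the static pod manifest, e.g. change --authorization-mode
sudo vi /etc/kubernetes/manifests/kube-apiserver.yaml

# kubelet notices the file change and recreates the mirror pod by itself
kubectl -n kube-system get pods -w
</code></pre>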
|
<p>I've recently set up a Kubernetes cluster and I am brand new to all of this, so it's quite a bit to take in. Currently I am trying to set up an Ingress for WordPress deployments. I am able to access through NodePort, but I know NodePort is not recommended, so I am trying to set up the Ingress. I am not exactly sure how to do it and I can't find many guides. I followed this to set up the NGINX LB <a href="https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example" rel="nofollow noreferrer">https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example</a> and I used this to set up the WP Deployment <a href="https://docs.docker.com/ee/ucp/admin/configure/use-nfs-volumes/#inspect-the-deployment" rel="nofollow noreferrer">https://docs.docker.com/ee/ucp/admin/configure/use-nfs-volumes/#inspect-the-deployment</a></p>
<p>I would like to be able to have multiple WP deployments and have an Ingress that resolves to the correct one, but I really can't find much information on it. Any help is greatly appreciated! </p>
| <p>You can configure your ingress to forward traffic to a different service depending on path.</p>
<p>An example of such a configuration is this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar
        backend:
          serviceName: s2
          servicePort: 80
</code></pre>
<p>Read the kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">documentation on ingress</a> for more info.</p>
<p>PS: In order for this to work you need an ingress controller like the one in the links in your question.</p>
|
<p>I am using Minikube to set up a k8s environment for later production, but I can't figure out how to pull a private container image from Docker Hub in my service definition.</p>
<p>I added </p>
<pre><code>imagePullSecrets:
- name: dockerregistrykey
</code></pre>
<p>to the service definition <code>spec.template.spec</code> and the <code>dockerregistrykey</code> to the secrets.</p>
<p>But I am still getting this error</p>
<pre><code>Failed to pull image "keyshake/transaction_service:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for keyshake/transaction_service, repository does not exist or may require 'docker login'
</code></pre>
<p>Thanks in advance.</p>
| <p>You need to declare the use of the pull secret in the pod spec, as a sibling of <code>containers</code>:</p>
<pre><code>...
spec:
  containers:
  - name: my_container
    image: your_image
  imagePullSecrets:
  - name: dockerregistrykey
</code></pre>
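<p>If the secret itself does not exist yet in that namespace, it can be created like this (all values are placeholders):</p>
<pre><code>kubectl create secret docker-registry dockerregistrykey \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<user> \
  --docker-password=<password> \
  --docker-email=<email>

# confirm it exists and has the expected type
kubectl get secret dockerregistrykey -o yaml
</code></pre>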
|
<p>I have a 3rd party docker image that I want to use (<a href="https://github.com/coreos/dex/releases/tag/v2.10.0" rel="nofollow noreferrer">https://github.com/coreos/dex/releases/tag/v2.10.0</a>). I need to inject some customisation into the pod (CSS stylesheet and PNG images). </p>
<p>I haven't found a suitable way to do this yet. Configmap binaryData is not available before v1.10 (or 9, can't remember off the top of my head). I could create a new image and <code>COPY</code> the PNG files into the image, but I don't want the overhead of maintaining this new image - far safer to just use the provided image. </p>
<p>Is there an easy way of injecting these 2/3 files I need into the pod I create?</p>
| <p>One way would be to mount 1 or more volumes into the desired locations within the pod, seemingly <code>/web/static</code>. This however would overwrite the entire directory, so you would need to supply all the files, not just those you wish to overwrite.</p>
<p>Example:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - image: dex:2.10.0
    name: dex
    volumeMounts:
    - mountPath: /web/static # the mount location within the container
      name: dex-volume
  volumes:
  - name: dex-volume
    hostPath:
      path: /destination/on/K8s/node # path on host machine
</code></pre>
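<p>Alternatively, if you only need to override a couple of individual files rather than replace the whole directory, mounting them with <code>subPath</code> avoids hiding the rest of <code>/web/static</code>. A sketch (the file and ConfigMap names are hypothetical, and binary files in a ConfigMap still need <code>binaryData</code> on 1.10+):</p>
<pre><code>kubectl create configmap dex-custom-css --from-file=custom.css
# then mount only that key at /web/static/custom.css via volumeMounts with subPath: custom.css
</code></pre>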
<p>There are a number of storage types for different cloud providers, so take a look at <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/</a> and see if there's something a little more specific to your environment rather than storing on the node's disk.</p>
<p>For what it's worth, creating your own image would probably be the simplest solution.</p>
|
<p>We want to use an app engine flexible process to update our ElasticSearch index, which is on Google Kubernetes Engine. We need to connect to ElasticSearch via a http(s) address. What's the recommended way to do this? We don't want to expose the cluster to the external networks since we don't have authentication in front of it.</p>
<p>I've seen this <a href="https://stackoverflow.com/questions/34747449/connect-from-appengine-to-a-kubernetes-service-from-google-containers">SO post</a> but both k8s and AE have changed a lot in the 2 years since the question/answer.</p>
<p>Thanks for your help!</p>
| <p>The post you linked to was about App Engine Standard. App Engine Flex is built on top of the same Google Cloud networking that is used by Google Compute Engine virtual machines and Google Kubernetes Engine clusters. As long as you put the App Engine flex application into the same VPC as the Google Kubernetes Engine cluster you should be able to communicate between them using internal networking. </p>
<p>On the other hand, to expose a Kubernetes service to anything running outside of the cluster, you will need to modify the Elasticsearch service, because by default Kubernetes services are only reachable from inside the cluster (due to the way that the service IPs are allocated and reached via IPTables magic). You need to "expose" the service, but rather than exposing it to the internet via an external load balancer, you expose it to the VPC using an internal load balancer. See <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing</a>.</p>
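<p>As a sketch, exposing the Elasticsearch service on an internal (VPC-only) load balancer in GKE looks roughly like this (the service name is a placeholder):</p>
<pre><code>kubectl annotate service elasticsearch cloud.google.com/load-balancer-type=Internal
kubectl patch service elasticsearch -p '{"spec": {"type": "LoadBalancer"}}'

# EXTERNAL-IP will be a private VPC address, reachable by the App Engine flex VMs
kubectl get service elasticsearch
</code></pre>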
|
<p>I wish to set up a pod that can access services in different projects in Rancher v2 Kubernetes. (Rancher sets up a network policy so that pods can only access pods in namespaces from the same project.)</p>
<p>That pod is a custom ingress proxy that might be used in front of all projects in the cluster.</p>
| <p>You can use "Flannel" network plugin instead of "Canal" while setting up the cluster. Work is <s>in progress</s> finished to provide enable/disable of the project network policies for Canal.</p>
<p><strong>Edit 1 (09/03/2018):</strong></p>
<p><a href="https://github.com/rancher/rancher/issues/14462" rel="nofollow noreferrer">https://github.com/rancher/rancher/issues/14462</a> has been resolved.</p>
|
<p>When installing the <code>autoscaler</code> on AWS as described in:</p>
<p><a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/aws" rel="nofollow noreferrer">https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/aws</a></p>
<p>Got error:</p>
<pre><code>cluster-autoscaler-5f69cdcd84-4kpqw 0/1 RunContainerError 0 3s
</code></pre>
<p>See detail:</p>
<pre><code>$ kubectl describe po cluster-autoscaler-5b454d874c-4f85w -n kube-system
...
Last State: Terminated
Reason: ContainerCannotRun
Message: oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:359: container init caused \"rootfs_linux.go:54: mounting \\\"/etc/ssl/certs/ca-certificates.crt\\\" to rootfs \\\"/var/lib/docker/overlay/f45f8b9b739167c3b6bb5
275c7ca6285508b52ecf940b3759e3ca99b87fadd53/merged\\\" at \\\"/var/lib/docker/overlay/f45f8b9b739167c3b6bb5275c7ca6285508b52ecf940b3759e3ca99b87fadd53/merged/etc/ssl/certs/ca-certificates.crt\\\" caused \\\"not a directory\\\"\""
: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 55s default-scheduler Successfully assigned cluster-autoscaler-5b454d874c-4f85w to ip-100.200.0.1.ap-northeast-1.compute.internal
Normal SuccessfulMountVolume 55s kubelet, ip-100.200.0.1.ap-northeast-1.compute.internal MountVolume.SetUp succeeded for volume "ssl-certs"
Normal SuccessfulMountVolume 55s kubelet, ip-100.200.0.1.ap-northeast-1.compute.internal MountVolume.SetUp succeeded for volume "default-token-2wmct"
Warning Failed 53s kubelet, ip-100.200.0.1.ap-northeast-1.compute.internal Error: failed to start container "cluster-autoscaler": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "process_linux
.go:359: container init caused \"rootfs_linux.go:54: mounting \\\"/etc/ssl/certs/ca-certificates.crt\\\" to rootfs \\\"/var/lib/docker/overlay/3796432b43abb86f70886e31d3bc555bd6beb54a2854d1e09ee6cdc74cab3af3/merged\\\" at \\\"/var/lib/docker/overlay/3796432b43abb86f70886e
31d3bc555bd6beb54a2854d1e09ee6cdc74cab3af3/merged/etc/ssl/certs/ca-certificates.crt\\\" caused \\\"not a directory\\\"\""
: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Warning Failed 51s kubelet, ip-100.200.0.1.ap-northeast-1.compute.internal Error: failed to start container "cluster-autoscaler": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:359: contain
er init caused \"rootfs_linux.go:54: mounting \\\"/etc/ssl/certs/ca-certificates.crt\\\" to rootfs \\\"/var/lib/docker/overlay/2c1fac03d81e1e77df060a70035adf2442840705198e5c887825bc3b1eb80f8f/merged\\\" at \\\"/var/lib/docker/overlay/2c1fac03d81e1e77df060a70035adf24428407
05198e5c887825bc3b1eb80f8f/merged/etc/ssl/certs/ca-certificates.crt\\\" caused \\\"not a directory\\\"\""
: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Warning Failed 33s kubelet, ip-100.200.0.1.ap-northeast-1.compute.internal Error: failed to start container "cluster-autoscaler": Error response from daemon: oci runtime error: container_linux.go:247: starting container process caused "process_linux.go:359: contain
er init caused \"rootfs_linux.go:54: mounting \\\"/etc/ssl/certs/ca-certificates.crt\\\" to rootfs \\\"/var/lib/docker/overlay/f45f8b9b739167c3b6bb5275c7ca6285508b52ecf940b3759e3ca99b87fadd53/merged\\\" at \\\"/var/lib/docker/overlay/f45f8b9b739167c3b6bb5275c7ca6285508b52
ecf940b3759e3ca99b87fadd53/merged/etc/ssl/certs/ca-certificates.crt\\\" caused \\\"not a directory\\\"\""
: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
Warning BackOff 22s (x2 over 47s) kubelet, ip-100.200.0.1.ap-northeast-1.compute.internal Back-off restarting failed container
Normal Pulling 8s (x4 over 55s) kubelet, ip-100.200.0.1.ap-northeast-1.compute.internal pulling image "k8s.gcr.io/cluster-autoscaler:v0.6.0"
Normal Created 7s (x4 over 53s) kubelet, ip-100.200.0.1.ap-northeast-1.compute.internal Created container
Warning FailedSync 7s (x6 over 53s) kubelet, ip-100.200.0.1.ap-northeast-1.compute.internal Error syncing pod
Normal Pulled 7s (x4 over 53s) kubelet, ip-100.200.0.1.ap-northeast-1.compute.internal Successfully pulled image "k8s.gcr.io/cluster-autoscaler:v0.6.0"
</code></pre>
<p>Is it an issue with the image <code>k8s.gcr.io/cluster-autoscaler:v0.6.0</code>?</p>
| <p>On AWS EKS (Elastic Kubernetes Service), the sslCertPath required by the cluster-autoscaler seems to be indeed <code>/etc/ssl/certs/ca-bundle.crt</code></p>
<p>Example:</p>
<pre><code>helm install stable/cluster-autoscaler \
  --set "autoscalingGroups[0].name=myasgname-worker-nodes-3-NodeGroup-HHTVNI2VF9DF,autoscalingGroups[0].maxSize=10,autoscalingGroups[0].minSize=2" \
  --name cluster-autoscaler \
  --namespace kube-system \
  --set rbac.create=true \
  --set sslCertPath=/etc/ssl/certs/ca-bundle.crt
</code></pre>
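<p>A quick way to confirm which CA bundle path actually exists on your worker nodes before setting <code>sslCertPath</code>:</p>
<pre><code># run on a worker node; typically only one of these exists, depending on the distro
ls -l /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-bundle.crt
</code></pre>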
|
<p>I'm using Azure for my Continuous Deployment. My secret name is "<strong>cisecret</strong>", created using</p>
<pre><code>kubectl create secret docker-registry cisecret --docker-username=XXXXX --docker-password=XXXXXXX --docker-email=[email protected] --docker-server=XXXXXXXXXX.azurecr.io
</code></pre>
<p>In my Visual Studio Online Release Task
<strong>kubectl run</strong>
Under <strong>Secrets</strong> section
<br/>
Type of secret: dockerRegistry<br/>
Container Registry type: Azure Container Registry<br/>
Secret name: <strong>cisecret</strong></p>
<p>My release succeeds, but when I proxy into kubernetes I see:</p>
<blockquote>
<p>Failed to pull image xxxxxxx unauthorized: authentication required.</p>
</blockquote>
| <p>Could this possibly be due to your container image name? I had an issue where I wasn't prepending the ACR domain in front of the image name in my Kubernetes YAML, which meant I wasn't pointing at the container registry/image, and therefore my secret (which was working) appeared to be broken.</p>
<p>Can you post your YAML? Maybe there is something simple amiss since it seems you are on the right track from the secrets perspective.</p>
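<p>A couple of quick checks along those lines (the deployment name is a placeholder):</p>
<pre><code># the image should be fully qualified with the ACR login server,
# e.g. XXXXXXXXXX.azurecr.io/myapp:tag, not just myapp:tag
kubectl get deployment <your-deployment> -o jsonpath='{.spec.template.spec.containers[*].image}'

# confirm the pull secret exists in the same namespace as the deployment
kubectl get secret cisecret -o yaml
</code></pre>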
|
<p><a href="https://marketplace.visualstudio.com/items?itemName=ballerina.ballerina" rel="nofollow noreferrer">Ballerina extension</a> was installed successfully in visual code.
Also I configured <code>ballerina.home</code> to point to the installed package </p>
<pre><code>ballerina.home = "/Library/Ballerina/ballerina-0.975.1"
</code></pre>
<p>VS Code is linting correctly. However, when I introduced <code>@kubernetes:*</code> annotations:</p>
<pre><code>import ballerina/http;
import ballerina/log;

@kubernetes:Deployment {
    enableLiveness: true,
    image: "ballerina/ballerina-platform",
    name: "ballerina-abdennour-demo"
}
@kubernetes:Service {
    serviceType: "NodePort",
    name: "ballerina-abdennour-demo"
}
service<http:Service> hello bind { port: 9090 } {
    sayHello (endpoint caller, http:Request request) {
        http:Response res = new;
        res.setPayload("Hello World from Ballerina Service");
        caller ->respond(res) but { error e => log:printError("Error sending response", err = e)};
    }
}
</code></pre>
<p>VS Code reports an error:</p>
<pre><code>undefined package "kubernetes"
undefined annotation "Deployment"
</code></pre>
<p>Nevertheless, I have minikube up and running, and I don't know if I need another extension so that VS Code can detect running clusters.</p>
<p>Or is it a package that is missing and should be installed inside the Ballerina SDK/platform?</p>
<h2>UPDATE</h2>
<p>I am running <code>ballerina build file.bal</code>, and I can see these errors:</p>
<p><a href="https://i.stack.imgur.com/atVV8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/atVV8.png" alt="enter image description here"></a></p>
<p>Any thoughts ?</p>
| <p>Solved! Just add the <code>import</code> instruction at the beginning of the file</p>
<pre><code>import ballerinax/kubernetes;
</code></pre>
<p>Note, it is <code>ballerinax/kubernetes</code> and not <code>ballerina/kubernetes</code> (add <code>x</code>)</p>
|
<p>I'm new to Kubernetes, and would like to clarify the following.</p>
<p>Assume we have a containerized Java program (using Docker) running in k8s. I need to stop getting requests to a pod when the heap size consumed by the JVM reaches a limit. For that, can I set the readiness and liveness probes to certain values so that I won't get further requests to that pod? If so, how can I set those values?</p>
<p>Thank you</p>
| <p>Yes, this is possible. A K8s livenessProbe (or readinessProbe) can be a command.</p>
<p>For instance:</p>
<pre><code>livenessProbe:
  exec:
    command:
    - /bin/sh
    - -c
    - /home/test/check.sh
</code></pre>
<p>If you then write a script that checks the heap used by the JVM, you will have the probe that you want.</p>
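<p>A rough sketch of what <code>/home/test/check.sh</code> could look like, assuming <code>jstat</code> is available in the container and the JVM runs as PID 1 (it exits non-zero when old-gen usage passes roughly 90%, which makes the probe fail):</p>
<pre><code>#!/bin/sh
# OC (old-gen capacity) is column 7 and OU (old-gen used) is column 8 of `jstat -gc`
USED=$(jstat -gc 1 | awk 'NR==2 {printf "%d", ($8/$7)*100}')
[ "$USED" -lt 90 ]
</code></pre>
<p>Note that to stop traffic without restarting the container, this check fits better under a <code>readinessProbe</code>; under a <code>livenessProbe</code> the kubelet will restart the pod instead.</p>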
|
<p>I am setting up GPU monitoring on a cluster using a <code>DaemonSet</code> and NVIDIA DCGM. Obviously it only makes sense to monitor nodes that have a GPU.</p>
<p>I'm trying to use <code>nodeSelector</code> for this purpose, but <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector" rel="nofollow noreferrer">the documentation states that</a>:</p>
<blockquote>
<p>For the pod to be eligible to run on a node, <strong>the node must have each of the indicated key-value pairs as labels</strong> (it can have additional labels as well). The most common usage is one key-value pair.</p>
</blockquote>
<p>I intended to check if the label <code>beta.kubernetes.io/instance-type</code> was any of those: </p>
<pre><code>[p3.2xlarge, p3.8xlarge, p3.16xlarge, p2.xlarge, p2.8xlarge, p2.16xlarge, g3.4xlarge, g3.8xlarge, g3.16xlarge]
</code></pre>
<p>But I don't see how to make an <code>or</code> relationship when using <code>nodeSelector</code>?</p>
| <p><a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer">Node Affinity</a> was the solution:</p>
<pre><code>spec:
  template:
    metadata:
      labels:
        app: dcgm-exporter
      annotations:
        prometheus.io/scrape: 'true'
        description: |
          This `DaemonSet` provides GPU metrics in Prometheus format.
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/instance-type
                operator: In
                values:
                - p2.xlarge
                - p2.8xlarge
                - p2.16xlarge
                - p3.2xlarge
                - p3.8xlarge
                - p3.16xlarge
                - g3.4xlarge
                - g3.8xlarge
                - g3.16xlarge
</code></pre>
|
<p>I've been tinkering with Kubernetes on and off for the past few years and I am not sure if this has always been the case (maybe this behavior changed recently) but I cannot seem to get Services to publish on the ports I intend - they always publish on a high random port (>30000).</p>
<p>For instance, I'm going through <a href="https://hackernoon.com/setting-up-nginx-ingress-on-kubernetes-2b733d8d2f45" rel="nofollow noreferrer">this walkthrough on Ingress</a> and I create the following Deployment and Service objects per the instructions:</p>
<pre><code>---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: hello-world-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - image: "gokul93/hello-world:latest"
        imagePullPolicy: Always
        name: hello-world-container
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
spec:
  ports:
  - port: 9376
    protocol: TCP
    targetPort: 8080
  selector:
    app: hello-world
  type: NodePort
</code></pre>
<p>According to this, I should have a Service that's listening on port 8080, but instead it's a high, random port:</p>
<pre><code>~$ kubectl describe svc hello-world-svc
Name: hello-world-svc
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=hello-world
Type: NodePort
IP: 10.109.24.16
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31669/TCP
Endpoints: 10.40.0.4:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>I also verified that none of my nodes are listening on 8080, but they are listening on 31669.</p>
<p>This isn't super ideal - especially considering that the Ingress portion will need to know what <code>servicePort</code> is being used (the walkthrough references this at 8080).</p>
<p>By the way, when I create the Ingress controller, this behavior is the same - rather than listening on 80 and 443 like a good load balancer, it's listening on high random ports.</p>
<p>Am I missing something? Am I doing it all wrong?</p>
| <p>Matt,</p>
<p>The reason a random port is being allocated is that you are creating a service of type NodePort. </p>
<p>K8s documentation explains NodePort <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">here</a></p>
<p>Based on your config, the service is exposed on port 9376 (and the backend port is 8080). So hello-world-svc should be available at: 10.109.24.16:9376. Essentially this service can be reached by one of the following means:</p>
<p>Service ip/port :- 10.109.24.16:9376</p>
<p>Node ip/port :- [your compute node ip]:31669 <-- this is created because your service is of type NodePort</p>
<p>You can also query the pod directly to test that the pod is in-fact exposing a service.</p>
<p>Pod ip/port: 10.40.0.4:8080 </p>
<p>Since your eventual goal is to use an ingress controller for external reachability to your service, "type: ClusterIP" might suffice for your needs.</p>
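<p>For verification, each of the endpoints listed above can be curl'ed, keeping in mind that the ClusterIP and pod IP only resolve from inside the cluster (IPs and ports are the ones from this answer):</p>
<pre><code># from inside the cluster (e.g. from another pod or a node)
curl http://10.109.24.16:9376      # service ClusterIP : service port
curl http://10.40.0.4:8080         # pod IP : container port

# from outside the cluster
curl http://<any-node-ip>:31669    # the allocated NodePort
</code></pre>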
|
<p>I installed <code>minikube</code> with the below command:</p>
<pre><code>curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/
</code></pre>
<p>Then, I start <code>minikube</code> cluster using</p>
<pre><code>minikube start --vm-driver=none
</code></pre>
<p>When I try to access the dashboard I see the error</p>
<p><strong>minikube dashboard</strong> </p>
<blockquote>
<p>Could not find finalized endpoint being pointed to by kubernetes-dashboard: Error validating service: Error getting service kubernetes-dashboard: Get <a href="https://10.0.2.15:8443/api/v1/namespaces/kube-system/services/kubernetes-dashboard" rel="nofollow noreferrer">https://10.0.2.15:8443/api/v1/namespaces/kube-system/services/kubernetes-dashboard</a>: net/http: TLS handshake timeout</p>
</blockquote>
<p>I set the proxy using</p>
<pre><code>set NO_PROXY=localhost,127.0.0.1,10.0.2.15
</code></pre>
<p>Still same error. </p>
<p>Any help would be appreciated.</p>
| <p>I had the same issue. I was behind a corporate proxy, and adding the <code>minikube ip</code> to the <code>no_proxy</code> env variable on the host machine solved the issue.</p>
<pre><code>export no_proxy=$no_proxy,$(minikube ip)
</code></pre>
|
<p>We have an application that runs on an Ubuntu VM. This application connects to Azure Redis, Azure Postgres and Azure CosmosDB(mongoDB) services.</p>
<p>I am currently working on moving this application to Azure AKS and intend to access all the above services from the cluster. The services will continue to be external and will not reside inside the cluster.</p>
<p>I am trying to understand how the network/firewall of both the services and aks should be configured so that pods inside the cluster can access the above services or any Azure service in general.</p>
<p>I tried the following:</p>
<ul>
<li>Created a configMap containing the connection params(public ip/address, username/pwd, port, etc) of all the services and used this configMap in the deployment resource. </li>
<li>Hardcoded the connection params of all the services as env vars inside the container image</li>
<li>In the firewall/inbound rules of the services, I added the AKS API ip, individual node ips</li>
</ul>
<p>None of the above worked. Did I miss anything? What else should be configured?</p>
<p>I tested the setup locally on minikube with all the services running on my local machine and it worked fine.</p>
| <blockquote>
<p>I am currently working on moving this application to Azure AKS and
intend to access all the above services from the cluster.</p>
</blockquote>
<p>I assume that you would like all services to be able to access each other and that all the services are in the AKS cluster? If so, I advise you to configure the internal load balancer in the AKS cluster.</p>
<blockquote>
<p>Internal load balancing makes a Kubernetes service accessible to
applications running in the same virtual network as the Kubernetes
cluster.</p>
</blockquote>
<p>You can give it a try by following this document: <a href="https://learn.microsoft.com/en-us/azure/aks/internal-lb" rel="nofollow noreferrer">Use an internal load balancer with Azure Kubernetes Service (AKS)</a>. Good luck!</p>
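<p>For the external Azure services, one way to check basic reachability from inside the cluster (hostnames and ports below are placeholders for your own services):</p>
<pre><code>kubectl run -it --rm nettest --image=busybox --restart=Never -- \
  nc -zv -w 5 <your-server>.postgres.database.azure.com 5432

kubectl run -it --rm nettest --image=busybox --restart=Never -- \
  nc -zv -w 5 <your-cache>.redis.cache.windows.net 6380
# (if your busybox build lacks -z/-v, `telnet <host> <port>` inside the pod works as well)
</code></pre>
<p>Also note that if the Azure services' firewalls are restricted by IP, traffic from AKS usually arrives from the cluster's outbound public IP rather than from the individual node private IPs, so that egress IP is typically what needs to be whitelisted.</p>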
|
<p>I have a problem when I set the kubelet parameter <code>cluster-dns</code>.</p>
<p>My OS is CentOS Linux release 7.0.1406 (Core)<br>
Kernel:<code>Linux master 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux</code></p>
<p>kubelet config file:</p>
<pre><code>KUBELET_HOSTNAME="--hostname-override=master"
#KUBELET_API_SERVER="--api-servers=http://master:8080
KUBECONFIG="--kubeconfig=/root/.kube/config-demo"
KUBELET_DNS="–-cluster-dns=10.254.0.10"
KUBELET_DOMAIN="--cluster-domain=cluster.local"
# Add your own!
KUBELET_ARGS="--cgroup-driver=systemd --fail-swap-on=false --pod_infra_container_image=177.1.1.35/library/pause:latest"
</code></pre>
<p>config file:</p>
<pre><code>KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=4"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://master:8080"
</code></pre>
<p>kubelet.service file:</p>
<pre><code>[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBELET_API_SERVER \
$KUBELET_DNS \
$KUBELET_DOMAIN \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_HOSTNAME \
$KUBE_ALLOW_PRIV \
$KUBELET_ARGS \
$KUBECONFIG
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
</code></pre>
<p>When I start the kubelet service I can see that the "--cluster-dns=10.254.0.10" parameter is correctly set:</p>
<pre><code>root 29705 1 1 13:24 ? 00:00:16 /usr/bin/kubelet --logtostderr=true --v=4 –-cluster-dns=10.254.0.10 --cluster-domain=cluster.local --hostname-override=master --allow-privileged=false --cgroup-driver=systemd --fail-swap-on=false --pod_infra_container_image=177.1.1.35/library/pause:latest --kubeconfig=/root/.kube/config-demo
</code></pre>
<p>But when I use <code>systemctl status kubelet</code> to check the service, the cluster-dns parameter has only one "-", like:</p>
<pre><code>systemctl status kubelet -l
● kubelet.service - Kubernetes Kubelet Server
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2018-07-13 13:24:07 CST; 5s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 29705 (kubelet)
Memory: 30.6M
CGroup: /system.slice/kubelet.service
└─29705 /usr/bin/kubelet --logtostderr=true --v=4 -cluster-dns=10.254.0.10 --cluster-domain=cluster.local --hostname-override=master --allow-privileged=false --cgroup-driver=systemd --fail-swap-on=false --pod_infra_container_image=177.1.1.35/library/pause:latest --kubeconfig=/root/.kube/config-demo
</code></pre>
<p>The logs say there is nothing set in the cluster-dns flag:</p>
<pre><code>Jul 13 13:24:07 master kubelet: I0713 13:24:07.680625 29705 flags.go:27] FLAG: --cluster-dns="[]"
Jul 13 13:24:07 master kubelet: I0713 13:24:07.680636 29705 flags.go:27] FLAG: --cluster-domain="cluster.local"
</code></pre>
<p>The Pods with errors:</p>
<pre><code>pod: "java-deploy-69c84746b9-b2d7j_default(ce02d183-864f-11e8-9bdb-525400c4f6bf)". kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to "Default" policy.
</code></pre>
<p>My kube-dns config file:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
---
#apiVersion: v1
#kind: ServiceAccount
#metadata:
#  name: kube-dns
#  namespace: kube-system
#  labels:
#    kubernetes.io/cluster-service: "true"
#    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      volumes:
      - name: kube-dns-config
        configMap:
          name: kube-dns
          optional: true
      containers:
      - name: kubedns
        image: 177.1.1.35/library/kube-dns:1.14.8
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --kube-master-url=http://177.1.1.40:8080
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        volumeMounts:
        - name: kube-dns-config
          mountPath: /kube-dns-config
      - name: dnsmasq
        image: 177.1.1.35/library/dnsmasq:1.14.8
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --no-negcache
        - --log-facility=-
        - --server=/cluster.local/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - name: kube-dns-config
          mountPath: /etc/k8s/dns/dnsmasq-nanny
      - name: sidecar
        image: 177.1.1.35/library/sidecar:1.14.8
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
      #serviceAccountName: kube-dns
</code></pre>
| <p>Recheck your kubelet config:</p>
<pre><code>KUBELET_DNS="–-cluster-dns=10.254.0.10"
</code></pre>
<p>It seems to me that the first dash is <strong>longer</strong> than the second.</p>
<p>Maybe a copy&paste introduced that strange character (an en dash "–" instead of a regular hyphen "-").
Retype it and retry.</p>
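<p>One way to spot the offending character without guessing (it will show up as a multi-byte sequence instead of a plain <code>-</code>):</p>
<pre><code>grep -n 'cluster-dns' /etc/kubernetes/kubelet | cat -A
# then retype the line by hand:
#   KUBELET_DNS="--cluster-dns=10.254.0.10"
</code></pre>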
|
<p>If I run <code>jx create cluster aws</code>, it creates the cluster on AWS without any issues, but if I want to specify some options like this:</p>
<pre><code>jx create cluster aws --zones us-east-2b --nodes=2 --node-size=t2.micro --master-size=t2.micro
</code></pre>
<p>Then it fails constantly, whatever I try to change, giving these kinds of errors for almost all options:</p>
<pre><code>Error: unknown flag: - -node-size
</code></pre>
<p>and the same for other options. Options were taken from here: <a href="https://jenkins-x.io/commands/jx_create_cluster_aws/" rel="nofollow noreferrer">https://jenkins-x.io/commands/jx_create_cluster_aws/</a></p>
<p>Setting up the cluster with kops with the same options doesn't have any issues.</p>
| <p>I asked about this in a comment, but the actual <em>answer</em> appears to be that you are on a version of <code>jx</code> that doesn't match the documentation. Because this is my experience with a freshly downloaded binary:</p>
<pre><code>$ ./jx create cluster aws --verbose=true --zones=us-west-2a,us-west-2b,us-west-2c --cluster-name=sample --node-size=5 --master-size=m5.large
kops not found
kubectl not found
helm not found
? Missing required dependencies, deselect to avoid auto installing: [Use arrows to move, type to filter]
❯ ◉ kops
◉ kubectl
◉ helm
? nodes [? for help] (3)
^C
$ ./jx --version
1.3.90
</code></pre>
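<p>So the practical fix is most likely just to bring the local binary in line with the documented flags, roughly (hedged: <code>jx upgrade cli</code> exists in recent releases; otherwise re-download the binary as you did originally):</p>
<pre><code>jx --version
jx upgrade cli
</code></pre>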
|
<p>I am trying to use <a href="https://github.com/kubernetes-client/python#example" rel="nofollow noreferrer">Kubernetes python SDK</a>.<br>
I tried to run the following code: </p>
<pre><code>from kubernetes import client, config
# Configs can be set in Configuration class directly or using helper utility
config.load_kube_config()
v1 = client.CoreV1Api()
print("Listing pods with their IPs:")
ret = v1.list_pod_for_all_namespaces(watch=False)
</code></pre>
<p>It failed with lots of errors.<br>
When I run the same code with python from a shell, I get the same issue.<br>
When I run the same code with <code>sudo python</code> from a shell, it works. </p>
<p>I am trying to run the PyCharm interpreter as root.<br>
Following the <a href="https://intellij-support.jetbrains.com/hc/en-us/community/posts/206587695-How-to-run-debug-programs-with-super-user-privileges" rel="nofollow noreferrer">instructions</a> from JetBrains, I created a shell script named <code>pythonCustomInt.sh</code> that contains: </p>
<pre><code>sudo python
</code></pre>
<p>I went to PyCharm settings > Project Interpreter and changed the <code>Base interpreter</code> to <code>/<path>/pythonCutomInt.sh</code> but it writes an error: </p>
<blockquote>
<p>Environment location directory is not empty</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/5Z9MS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5Z9MS.png" alt="enter image description here"></a>
I am not sure where I need to put the script.<br>
Any idea? </p>
| <p>I ran <code>sudo -s</code> and then from the pycharm folder (<code>pycharm-community-2018.1.4/bin</code>) I ran <code>sh ./pycharm.sh</code> and it worked. </p>
|
<p>With Kubernetes 1.10.* we can use <code>binaryData:</code> with <code>ConfigMap</code> and I am trying to combine it with <code>Helm</code>:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: some_config_map
data:
text_data: |-
{{ .Files.Get "truststores/simple_text_file.txt" }}
binaryData:
trustore.jks: |-
{{ .Files.Get "truststores/trustore.jks" | b64enc }}
</code></pre>
<p>I am not sure about the last line - regardless of syntax:</p>
<pre><code> {{ "truststores/trustore.jks" | b64enc }}
{{ "truststores/trustore.jks" }}
</code></pre>
<p>the <code>trustore.jks</code> is empty when I deploy it.</p>
<p>So how can I use <code>binaryData:</code> ?</p>
| <p>Your syntax looks fine and everything should work properly. Files in the field <code>binaryData</code> must be encoded with base64, so, <code>{{ .Files.Get "truststores/trustore.jks" | b64enc }}</code> is correct.</p>
<p>Try to apply the configuration with the <code>--debug</code> flag and investigate what went wrong; possibly there is no such file, or there are some problems with the encoding.</p>
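<p>A minimal way to do that, assuming your chart lives in <code>./mychart</code> (adjust the path), is to render the templates locally and inspect what actually ends up in <code>binaryData</code>:</p>
<pre><code>helm install ./mychart --dry-run --debug
# or render the templates without contacting the cluster at all
helm template ./mychart
</code></pre>
<p>If the rendered value is empty, the path passed to <code>.Files.Get</code> is most likely wrong relative to the chart root, or the file is excluded by <code>.helmignore</code>.</p>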
|
<p>Currently, my <strong>kubernetes</strong> cluster (sitting on <strong>AWS</strong>) has a simple setup with a <code>Deployment</code>, which has a web container listens on <strong>80</strong> and <strong>443</strong>. </p>
<p>The cluster also has a <code>NodePort</code> service, which exposes the traffic to public on ports <strong>30080</strong> and <strong>30443</strong>. I am trying to figure out a way to accept public traffic on <strong>80</strong> and <strong>443</strong>. </p>
<p>What could be a good way to do that?</p>
<p><strong>My thoughts about possible solutions</strong>:</p>
<ol>
<li><p>Manually configure a <strong>ELB</strong> on AWS, which could map <strong>80</strong> and <strong>443</strong> traffic to <strong>30080</strong> and <strong>30443</strong> ports. </p></li>
<li><p>Use <code>LoadBalancer</code> supported by <strong>kubernetes</strong>, but I have some issues of integrating my cluster with <strong>AWS</strong> <a href="https://stackoverflow.com/questions/51254530/aws-load-balancer-is-not-registered-with-instances">described here</a>. I'd rather try other possible solutions than being stuck on this one. </p></li>
<li><p>I've seen many posts talking about <code>Ingress</code> service, but various posts have totally different setup.
I wonder what could be a good use case for <code>Ingress</code> in my setup.
Is <code>Ingress</code> supposed to replace my <code>NodePort</code> service in my case?</p></li>
</ol>
<p>Thanks</p>
| <p>Since the <em>NodePort</em> service is bound to the specified ports on all cluster nodes, you just need to deliver traffic to any of these nodes, or to all of them. Sometimes it could lead to additional delay, but from the connectivity perspective, it should work fine.</p>
<p>You can configure <em>Load Balancer</em> manually, then add all cluster nodes to its pool and configure health checks for them to exclude a node from the pool when a particular node fails.</p>
<p><em>Ingress</em> actually works in a similar way. All traffic that comes to a specific port of any node is forwarded to the <em>Ingress pod</em>.
<em>Ingress controller</em> looks for created <em>Ingress objects</em> and configures the <em>Ingress pod</em> according to the specifications in these objects.
Actually, <em>Ingress controller</em> and <em>Ingress pod</em> in my example are the same thing. </p>
<p>Ingress can provide additional logic for managing the traffic on the HTTP level, like path based routing, adjusting the request before sending it to the service, serving like SSL endpoint, etc.<br>
But anyway, you should deliver external traffic to the nodes somehow. At this point, we are returning to the <em>Load Balancer</em> configuration. </p>
<p>In some cases, when your cluster is deployed on the cloud that provides <em>Load Balancer</em> service, Ingress controller takes care about creating cloud <em>Load Balancer</em> also.</p>
<p>Did you use <a href="https://kubernetes.io/docs/setup/custom-cloud/kops/" rel="nofollow noreferrer">kops</a> to deploy your Kubernetes cluster on AWS? </p>
<p>Usually, kops creates a cluster that integrates with AWS without any problems, so you can use the LoadBalancer type of Service. Doing everything manually, you can make a small configuration mistake that would be hard to find and correct.</p>
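<p>For reference, a <code>LoadBalancer</code> Service is only a small change compared to your existing <code>NodePort</code> one. A rough sketch, assuming your Deployment's pods carry the label <code>app: web</code> (adjust the selector and ports to your setup):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: web-lb
spec:
  type: LoadBalancer
  selector:
    app: web          # assumption: must match your pod labels
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
</code></pre>
<p>With working cloud integration, Kubernetes provisions an ELB for this Service and wires it to automatically allocated node ports, so you no longer manage 30080/30443 by hand.</p>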
<p>Please check out the very good article: </p>
<ul>
<li><a href="https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0" rel="nofollow noreferrer">Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?</a></li>
</ul>
<p>How to create Ingress on AWS:</p>
<ul>
<li><a href="https://medium.com/kokster/how-to-setup-nginx-ingress-controller-on-aws-clusters-7bd244278509" rel="nofollow noreferrer">How to setup NGINX ingress controller on AWS clusters</a> </li>
<li><a href="http://kubernetes-on-aws.readthedocs.io/en/latest/user-guide/ingress.html" rel="nofollow noreferrer">Ingress</a></li>
<li><a href="https://github.com/kubernetes/kops/tree/master/addons/kube-ingress-aws-controller" rel="nofollow noreferrer">Creating ingress with kube-ingress-aws-controller and skipper</a></li>
</ul>
|
<p>Running e2e-test on the local cluster kubernetes, with command:</p>
<pre><code>go run hack/e2e.go -- --provider=local --test --check-version-skew=false --test_args="--host=https://192.168.1.5:6443 --ginkgo.focus=\[Feature:Performance\]"
</code></pre>
<p>Showing the errors:</p>
<pre><code>[Feature:Performance] should allow starting 30 pods per node using { ReplicationController} with 0 secrets, 0 configmaps and 0 daemons [BeforeEach]
• Failure in Spec Setup (BeforeEach) [6.331 seconds]
[sig-scalability] Density
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scalability/framework.go:22
[Feature:Performance] should allow starting 30 pods per node using { ReplicationController} with 0 secrets, 0 configmaps and 0 daemons [BeforeEach]
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scalability/density.go:554
Expected error:
<*errors.errorString | 0xc421733010>: {
s: "Namespace e2e-tests-containers-ssgmn is active",
}
Namespace e2e-tests-containers-ssgmn is active
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scalability/density.go:466
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSJul 14 00:02:24.065: INFO: Running AfterSuite actions on all node
Jul 14 00:02:24.065: INFO: Running AfterSuite actions on node 1
Summarizing 2 Failures:
[Fail] [sig-scalability] Load capacity [BeforeEach] [Feature:Performance] should be able to handle 30 pods per node { ReplicationController} with 0 secrets, 0 configmaps and 0 daemons
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scalability/load.go:156
[Fail] [sig-scalability] Density [BeforeEach] [Feature:Performance] should allow starting 30 pods per node using { ReplicationController} with 0 secrets, 0 configmaps and 0 daemons
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scalability/density.go:466
Ran 2 of 998 Specs in 12.682 seconds
FAIL! -- 0 Passed | 2 Failed | 0 Pending | 996 Skipped --- FAIL: TestE2E (12.71s)
</code></pre>
<p>It seems the local Kubernetes cluster has a limit on pods per node. How can I fix this? The local cluster configuration is:</p>
<pre><code>leeivan@master01:~/gowork/src/k8s.io/kubernetes$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 Ready master 10d v1.11.0
node01 Ready <none> 10d v1.11.0
node02 Ready <none> 10d v1.11.0
node03 Ready <none> 10d v1.11.0
node04 Ready <none> 10d v1.11.0
node05 Ready <none> 10d v1.11.0
leeivan@master01:~/gowork/src/k8s.io/kubernetes$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:17:28Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T20:08:34Z", GoVersion:"go1.10.2", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| <p>According to kubelet <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/#options" rel="nofollow noreferrer">documentation</a>: </p>
<pre><code>--max-pods int32
Number of Pods that can run on this Kubelet. (default 110)
</code></pre>
<p>So, 110 should be enough to pass the tests. But it is possible that test measures the real capacity of your nodes in terms of <a href="https://github.com/kubernetes/kubernetes/blob/72440a10e912de196cd4a80c759e17eccf5adb69/test/e2e/scalability/density.go#L479" rel="nofollow noreferrer">Allocatable.CPU and Allocatable.Memory</a></p>
<p>Also, before the test run, all the namespaces should be <a href="https://github.com/kubernetes/kubernetes/blob/72440a10e912de196cd4a80c759e17eccf5adb69/test/e2e/scalability/density.go#L482" rel="nofollow noreferrer">deleted</a>: </p>
<pre><code>// Terminating a namespace (deleting the remaining objects from it - which
// generally means events) can affect the current run. Thus we wait for all
// terminating namespace to be finally deleted before starting this test.
</code></pre>
<p>Looks like one of your namespaces was still active, so the test failed.</p>
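<p>Before re-running the tests, you can check for and remove any leftover test namespaces, for example:</p>
<pre><code># list leftover e2e namespaces
kubectl get namespaces | grep e2e-tests

# delete the one reported in the failure, then repeat for any others still listed
kubectl delete namespace e2e-tests-containers-ssgmn
</code></pre>
<p>Once <code>kubectl get namespaces</code> no longer shows any terminating <code>e2e-tests-*</code> entries, the Density and Load specs should get past this check.</p>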
|
<p>I have a <code>kubernetes</code> cluster working perfectly fine. It has 5 worker nodes. I am using below command to get the status of pods.</p>
<pre><code>kubectl get pod -o wide --namespace=machines
</code></pre>
<p>which shows below results</p>
<pre><code>NAME READY STATUS RESTARTS AGE IP NODE
deployment-26hfn 0/4 ContainerCreating 0 5m <none> machine003
deployment-782mk 0/4 Pending 0 5m <none> machine001
deployment-7kcc7 0/4 Pending 0 5m <none> machine002
deployment-8fzqs 0/4 ContainerCreating 0 5m <none> machine004
deployment-zfzts 0/4 ContainerCreating 0 5m <none> machine005
</code></pre>
<p>As you can see, the above result is not in order from <code>machine001</code> to <code>machine 005</code>. Is it possible to print the output like below:</p>
<pre><code>NAME READY STATUS RESTARTS AGE IP NODE
deployment-26hfn 0/4 Pending 0 5m <none> machine001
deployment-782mk 0/4 Pending 0 5m <none> machine002
deployment-7kcc7 0/4 ContainerCreating 0 5m <none> machine003
deployment-8fzqs 0/4 ContainerCreating 0 5m <none> machine004
deployment-zfzts 0/4 ContainerCreating 0 5m <none> machine005
</code></pre>
| <p>You can pipe the kubectl command output to sort:</p>
<pre><code>kubectl get pods -o wide --namespace=machines | sort -k7
</code></pre>
<p>or to omit the first line</p>
<pre><code>kubectl get pods -o wide --namespace=machines | sed -n '1!p' | sort -k7
</code></pre>
<p>Also, you should be able to do this by <code>--sort-by</code> option in kubectl:</p>
<pre><code>kubectl get pods --all-namespaces -o wide --sort-by=.spec.nodeName
</code></pre>
|
<p>For my project, I need to send UDP packets to a Kubernetes Pod from an outside cluster. How can I do this?</p>
<p>I am using Kubeadm for creating the cluster. I tried to use nodePort but it seems that my requirement cannot be fulfilled with Nodeport.</p>
| <p>Actually, <code>NodePort</code> can be used to expose ports within TCP and UDP protocols. What was the problem in your case?</p>
<p>You can consider using <code>Nginx Ingress Controller</code> and creating <code>ReplicationController</code> to implement <code>Nginx ingress</code> in order to expose your Pods across UDP port as described <a href="https://github.com/kubernetes/contrib/tree/master/ingress/controllers/nginx/examples/udp" rel="nofollow noreferrer">Here</a> or you can check this <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="nofollow noreferrer">Link</a>.</p>
<p>Create a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">ConfigMap</a> and specify the external port mapping in the form <code><namespace/service name>:<service port></code> for the service you want to access from outside the Kubernetes cluster.</p>
<p>Finally, Nginx ingress can be exposed, i.e., using <a href="https://kubernetes.io/docs/concepts/services-networking/service/#external-ips" rel="nofollow noreferrer">Kubernetes ExternalIP</a>.</p>
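<p>As a rough sketch of what such a ConfigMap looks like for the NGINX ingress controller (the namespace, service name and port below are placeholders - adjust them to your deployment):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: udp-services
  namespace: ingress-nginx
data:
  "5005": "default/my-udp-service:5005"
</code></pre>
<p>The controller also has to be started with <code>--udp-services-configmap=ingress-nginx/udp-services</code>, and the same UDP port must be exposed on the Service that fronts the controller.</p>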
|
<p>I have created ingress for some services on minikube (1.8.0):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: gateway-ingress
namespace: kube-system
annotations:
ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- backend:
serviceName: api-service
servicePort: 80
path: /api
paths:
- backend:
serviceName: kubernetes-dashboard
servicePort: 80
path: /ui
</code></pre>
<p>When I access MINIKUBE_IP/ui, the dashboard's static files do not load. Below are the errors:</p>
<pre><code>192.168.99.100/:1 GET https://192.168.99.100/ui/static/vendor.4f4b705f.css net::ERR_ABORTED
192.168.99.100/:5 GET https://192.168.99.100/ui/static/app.8a6b8127.js net::ERR_ABORTED
VM1524:1 GET https://192.168.99.100/ui/api/v1/thirdpartyresource 404 ()
...
</code></pre>
<p>Please help me to fix this error, thanks.</p>
| <p>I had the same issue.
You can solve it by defining new paths in the Ingress resource. </p>
<pre><code> rules:
- http:
paths:
- path: /ui
backend:
serviceName: kubernetes-dashboard
servicePort: 80
- path: /*
backend:
serviceName: kubernetes-dashboard
servicePort: 80
</code></pre>
<p>The "/*" will allow you to access the static files. </p>
<p>Other resources:</p>
<ul>
<li><a href="https://github.com/kubernetes/ingress-nginx/issues/333" rel="noreferrer">https://github.com/kubernetes/ingress-nginx/issues/333</a></li>
<li><a href="https://github.com/kubernetes/contrib/issues/2238" rel="noreferrer">https://github.com/kubernetes/contrib/issues/2238</a></li>
</ul>
|
<p>I have a Google Cloud Platform project where I use Kubernetes to deploy my apps, but I have noticed on my billing that Stackdriver Logging costs too much for me and I don't really need logging right now.</p>
<p>So, does anyone know how can I disable the Stackdriver Logging API in my clusters?</p>
| <p>You can disable logging in several ways:</p>
<ul>
<li>Disabling it as described <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/logging#disabling_logging" rel="nofollow noreferrer">here</a>.</li>
<li>Doing this <a href="https://stackoverflow.com/a/42447388/3058302">old workaround</a>.</li>
</ul>
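<p>For the first option, the GKE-level switch comes down to a single gcloud command (the cluster name is a placeholder):</p>
<pre><code>gcloud container clusters update CLUSTER_NAME --logging-service=none
</code></pre>
<p>You can re-enable it later with <code>--logging-service=logging.googleapis.com</code>.</p>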
|
<p>I have a K8s cluster created with kubeadm that consists of a master node and two workers.</p>
<p>I am following this documentation article regarding the etcd backup: <a href="https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#backing-up-an-etcd-cluster" rel="noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/configure-upgrade-etcd/#backing-up-an-etcd-cluster</a></p>
<p>I have to use etcdctl to backup the etcd db so I sh into the etcd pod running on the master node to do it from there: <code>kubectl exec -it -n kube-system etcd-ip-x-x-x-x sh</code></p>
<p>NOTE: The master node hosts the etcd database in this path <code>/var/lib/etcd</code> which is mounted on the pod as a VolumeMount in <code>/var/lib/etcd</code>.</p>
<p>Following the doc I run: <code>ETCDCTL_API=3 etcdctl --endpoints 127.0.0.1:2379 snapshot save snapshotdb</code> and it returns the following error:</p>
<pre><code>Error: rpc error: code = 13 desc = transport: write tcp 127.0.0.1:44464->127.0.0.1:2379: write: connection reset by peer
</code></pre>
<p>What is the problem here?</p>
| <p>I managed to make it work adding the certificates info to the command:</p>
<pre><code>ETCDCTL_API=3 etcdctl --endpoints https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key /etc/kubernetes/pki/etcd/healthcheck-client.key snapshot save ./snapshot.db
</code></pre>
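<p>As a quick sanity check of the resulting file, the same etcdctl binary can report on the snapshot:</p>
<pre><code>ETCDCTL_API=3 etcdctl --write-out=table snapshot status ./snapshot.db
</code></pre>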
|
<p>I am trying to find a way to determine the status of the "kubectl port-forward" command.
There is a way to determine the readiness of a pod or a node, e.g. with "kubectl get pods", etc.</p>
<p>Is there a way to determine if the kubectl port-forward command has completed and ready to work?</p>
<p>Thank you.</p>
| <p>I have the same understanding as you @VKR.</p>
<p>The way that I chose to solve this was to have a loop with a curl every second to check the status of the forwarded port. It works, but I had hoped for a prebaked solution to this.</p>
<pre><code># poll the forwarded port until it responds or we give up after ~100 seconds
timer=0
until curl -s --max-time 1 localhost:6379 >/dev/null 2>&1 || [ "$timer" -ge 100 ]; do
  timer=$((timer + 1))
  sleep 1
done
# for non-HTTP backends a plain TCP check such as `nc -z localhost 6379` may be a more reliable probe
</code></pre>
<p>Thank you @Nicola and @David, I will keep those in mind when I get past development testing. </p>
|
<p>How can I enable feature gates for my cluster in Rancher 2.0? I am in need of enabling the <code>--feature-gates MountPropagation=true</code>. This will enable me to use storage solutions like StorageOS, CephFS, etc</p>
<p>There are 2 use cases here : </p>
<ol>
<li>If the Rancher is setup already and running?</li>
<li>If I am setting up the cluster from scratch?</li>
</ol>
| <p>Hello, and I hope this helps someone. After much googling and help from the awesome people at Rancher, I got the solution for this.
Here is what you can do to set the feature-gates flags for the Kubernetes engine (RKE).</p>
<p>step 1: Open Rancher2.0 UI</p>
<p>step 2: View cluster in API</p>
<p><a href="https://i.stack.imgur.com/5Atxp.png" rel="noreferrer"><img src="https://i.stack.imgur.com/5Atxp.png" alt="enter image description here"></a></p>
<p>step 3: Click edit and modify the <code>rancherKubernetesEngineConfig</code> input box </p>
<p><a href="https://i.stack.imgur.com/hXv6N.png" rel="noreferrer"><img src="https://i.stack.imgur.com/hXv6N.png" alt="enter image description here"></a></p>
<ul>
<li>Find the services key.</li>
<li><p>Then add extra args for kubelet in below format</p>
<pre><code>"services": {
"etcd": { "type": "/v3/schemas/etcdService" },
"kubeApi": {
"podSecurityPolicy": false,
"type": "/v3/schemas/kubeAPIService",
"extraArgs": { "feature-gates": "PersistentLocalVolumes=true, VolumeScheduling=true,MountPropagation=true" }
},
"kubeController": { "type": "/v3/schemas/kubeControllerService" },
"kubelet": {
"failSwapOn": false,
"type": "/v3/schemas/kubeletService",
"extraArgs": { "feature-gates": "PersistentLocalVolumes=true, VolumeScheduling=true,MountPropagation=true" }
           }
         }
</code></pre></li>
</ul>
<p>step 4: Click show request .. you get a curl command and json request.</p>
<p>step 5: Verify the request body data which will be shown. </p>
<p>step 6: Make sure the keys which are not applicable are set to null, e.g. <code>amazonElasticContainerServiceConfig</code>, <code>azureKubernetesServiceConfig</code> and <code>googleKubernetesEngineConfig</code> all needed to be null for me.</p>
<p>step 7: Click send request</p>
<p>You should get a response with status code 201. And your cluster will start updating. You can verify that your cluster RKE has updated by viewing the Cluster in API again.</p>
|
<p>Using GKE</p>
<p>Trying to deploy a yml that has a docker image that contains <code>gcr.io/myproject-101/pyftp</code>. I have not had a problem with pulling this image at all until just recently. And from the looks of it it just started happening after my trial ran out. I've tried recreating my cluster and directly getting the path from the container registry.</p>
<pre><code>Back-off pulling image "gcr.io/myproject-101/pyftp" BackOff Jul 14, 2018, 3:49:29 AM Jul 14, 2018, 3:59:27 AM 41
Failed to pull image "gcr.io/myproject-101/pyftp": rpc error: code = Unknown desc = error pulling image configuration: unknown blob Failed Jul 14, 2018, 3:49:28 AM Jul 14, 2018, 3:51:02 AM 4
</code></pre>
<p>I keep reading that it has to do with coming from private repos...but nothing has changed. They're all within my one project. Pulls have worked before. The only thing I can think of that I have done recently is starting to make use of "cookies", since I had to hook my GitHub Desktop application into the private repo. To do this I was linked to a page that provided me commands to run to push to that repo. Does that have something to do with it? </p>
| <p>Using default settings, builds are required to be tagged as <code>latest</code>, because pulling an image without an explicit tag implies <code>:latest</code>. With the default auto-build settings for Container Builder, the image is tagged with the commit's hash rather than the <code>latest</code> tag.</p>
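<p>To confirm which tags actually exist and, if needed, push a <code>latest</code> tag yourself, something along these lines should work (the commit hash below is a placeholder):</p>
<pre><code># list the tags present for the image
gcloud container images list-tags gcr.io/myproject-101/pyftp

# either reference an existing tag/digest in the Deployment, or retag and push :latest
docker tag gcr.io/myproject-101/pyftp:<commit-hash> gcr.io/myproject-101/pyftp:latest
docker push gcr.io/myproject-101/pyftp:latest
</code></pre>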
|
<p>I am running a few kubernetes pods in my a cluster (10 node). Each pod contains only one container which hosts one working process. I have specified the CPU "limits" and "requests" for the container . The following is a description of one pod that is running on a node (crypt12). </p>
<pre><code>Name: alexnet-worker-6-9954df99c-p7tx5
Namespace: default
Node: crypt12/172.16.28.136
Start Time: Sun, 15 Jul 2018 22:26:57 -0400
Labels: job=worker
name=alexnet
pod-template-hash=551089557
task=6
Annotations: <none>
Status: Running
IP: 10.38.0.1
Controlled By: ReplicaSet/alexnet-worker-6-9954df99c
Containers:
alexnet-v1-container:
Container ID: docker://214e30e87ed4a7240e13e764200a260a883ea4550a1b5d09d29ed827e7b57074
Image: alexnet-tf150-py3:v1
Image ID: docker://sha256:4f18b4c45a07d639643d7aa61b06bfee1235637a50df30661466688ab2fd4e6d
Port: 5000/TCP
Host Port: 0/TCP
Command:
/usr/bin/python3
cifar10_distributed.py
Args:
--data_dir=xxxx
State: Running
Started: Sun, 15 Jul 2018 22:26:59 -0400
Ready: True
Restart Count: 0
Limits:
cpu: 800m
memory: 6G
Requests:
cpu: 800m
memory: 6G
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hfnlp (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-hfnlp:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hfnlp
Optional: false
QoS Class: Guaranteed
Node-Selectors: kubernetes.io/hostname=crypt12
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
</code></pre>
<p>The following is the output when I run "kubectl describle node crypt12"</p>
<pre><code>Name: crypt12
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/hostname=crypt12
Annotations: kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
node.alpha.kubernetes.io/ttl=0
volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp: Wed, 11 Jul 2018 23:07:41 -0400
Taints: <none>
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
OutOfDisk False Mon, 16 Jul 2018 16:25:43 -0400 Wed, 11 Jul 2018 22:57:22 -0400 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Mon, 16 Jul 2018 16:25:43 -0400 Wed, 11 Jul 2018 22:57:22 -0400 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Mon, 16 Jul 2018 16:25:43 -0400 Wed, 11 Jul 2018 22:57:22 -0400 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Mon, 16 Jul 2018 16:25:43 -0400 Wed, 11 Jul 2018 22:57:22 -0400 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Mon, 16 Jul 2018 16:25:43 -0400 Wed, 11 Jul 2018 22:57:42 -0400 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 172.16.28.136
Hostname: crypt12
Capacity:
cpu: 8
ephemeral-storage: 144937600Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 8161308Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 133574491939
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 8058908Ki
pods: 110
System Info:
Machine ID: f0444e00ba2ed20e5314e6bc5b0f0f60
System UUID: 37353035-3836-5355-4530-32394E44414D
Boot ID: cf2a9daf-c959-4c7e-be61-5e44a44670c4
Kernel Version: 4.4.0-87-generic
OS Image: Ubuntu 16.04.3 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://1.13.1
Kubelet Version: v1.11.0
Kube-Proxy Version: v1.11.0
Non-terminated Pods: (3 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits
--------- ---- ------------ ---------- --------------- -------------
default alexnet-worker-6-9954df99c-p7tx5 800m (10%) 800m (10%) 6G (72%) 6G (72%)
kube-system kube-proxy-7kdkd 0 (0%) 0 (0%) 0 (0%) 0 (0%)
kube-system weave-net-dpclj 20m (0%) 0 (0%) 0 (0%) 0 (0%)
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 820m (10%) 800m (10%)
memory 6G (72%) 6G (72%)
Events: <none>
</code></pre>
<p>As shown in the node description ("Non-terminated Pods" section), the CPU limit is 10%. However, when I run the "ps" or "top" command on the node (crypt12), the CPU utilization of the working process exceeds 10% (it is about 20%). Why did this happen? Could anyone shed some light on this?
<a href="https://i.stack.imgur.com/LkJfX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LkJfX.png" alt="enter image description here"></a></p>
<p>UPDATE:
I found a GitHub issue discussion with the answer to my question: the CPU percentage from "kubectl describe node" is "CPU limits / # of cores". Since I set the CPU limit to 0.8 and the node has 8 cores, 10% is the result of 0.8/8. </p>
| <p>I found a GitHub issue discussion with the answer to my question: the CPU percentage from "kubectl describe node" is "CPU limits / # of cores". Since the CPU limit is set to 0.8 and the node has 8 cores, 10% is the result of 0.8/8.<br>
Here is the link: <a href="https://github.com/kubernetes/kubernetes/issues/24925" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/24925</a></p>
|
<p>Is there a way to populate the serviceaccount secrets content to an environment variable?</p>
<p>Example: when a pod is started, it contains a <code>/var/run/secrets/kubernetes.io/secrets/serviceaccount/</code> folder that contains <code>token</code>, <code>ca.crt</code>... and other that is the result to map the <code>serviceaccount</code> sercret to a folder.</p>
<p>Is there anyway to map <code>serviceaccountsecret.token</code> to an environment variable?</p>
<p><strong>EDIT</strong></p>
<p>I'm deploying kubernetes/openshift objects using fabric8 maven plugin. Nevertheless, I was looking for a way of setting this information up on PodSpec.</p>
<p>So, currently openshift/kubernetes stores the service account information in secrets, which are then automatically mounted into the filesystem (<code>/var/run...</code>).</p>
<p>I'm looking for a way to map this "unknown" service account secret to environment variable (I mean, I don't know which is the name of this secret, when I'm creating PodSpec).</p>
<pre><code>$ oc get secrets
NAME TYPE DATA AGE
builder-dockercfg-hplx4 kubernetes.io/dockercfg 1 43m
builder-token-bkd8h kubernetes.io/service-account-token 4 43m
builder-token-gpckp kubernetes.io/service-account-token 4 43m
default-dockercfg-q2vpx kubernetes.io/dockercfg 1 43m
default-token-hpr7l kubernetes.io/service-account-token 4 43m
default-token-r5225 kubernetes.io/service-account-token 4 43m
deployer-dockercfg-6h7nw kubernetes.io/dockercfg 1 43m
deployer-token-svmvf kubernetes.io/service-account-token 4 43m
deployer-token-tmg9x kubernetes.io/service-account-token 4 43m
vault-cert kubernetes.io/tls 2 42m
</code></pre>
<p>As you can see, openshift/kubernetes creates secrets for each service account:</p>
<pre><code>$ oc get sa
NAME SECRETS AGE
builder 2 44m
default 2 44m
deployer 2 44m
</code></pre>
<p>Each secret has a form like:</p>
<pre><code>$ oc describe secret default-token-hpr7l
Name: default-token-hpr7l
Namespace: ra-sec
Labels: <none>
Annotations: kubernetes.io/created-by=openshift.io/create-dockercfg-secrets
kubernetes.io/service-account.name=default
kubernetes.io/service-account.uid=82ae89d7-898a-11e8-8d35-f28ae3e0478e
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1070 bytes
namespace: 6 bytes
service-ca.crt: 2186 bytes
token: eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJyYS1zZWMiLCJrdWJlcm5ldGVzLmlvL3Nl...
</code></pre>
<p>Each secret is mapped to filesystem automatically. Nevertheless, I'd like to write into PodSpec:</p>
<pre><code>env:
- name: KUBERNETES_TOKEN
valueFrom:
secretKeyRef:
name: <unknown service account secret name>
key: token
</code></pre>
<p>I hope I've explianed a bit better.</p>
| <p>You can create a secret annotated with <code>kubernetes.io/service-account.name</code> annotation.</p>
<p>This annotation provides related service account information to current secret.</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: vault-auth-secret
annotations:
kubernetes.io/service-account.name: vault-auth
type: kubernetes.io/service-account-token
</code></pre>
<p>By this way, you are able to create a named secret with desired data.</p>
<pre><code>- name: KUBERNETES_TOKEN
valueFrom:
secretKeyRef:
name: vault-auth-secret
key: token
</code></pre>
|
<p>I’m trying to get <code>traefik</code> running in GKE, following the user guide (<a href="https://docs.traefik.io/user-guide/kubernetes/" rel="nofollow noreferrer">https://docs.traefik.io/user-guide/kubernetes/</a>). </p>
<p>Instead of seeing the dashboard, I get a <code>404</code>. I guess there’s a problem with the RBAC setup somewhere but I can’t figure it out.</p>
<p>Any help would be greatly appreciated.</p>
<p>The ingress controller log shows a constant flow of (one each second):</p>
<blockquote>
<p>E0714 12:19:56.665790 1 reflector.go:205]
github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86:
Failed to list *v1.Service: services is forbidden: User
"system:serviceaccount:kube-system:traefik-ingress-controller" cannot
list services at the cluster scope: Unknown user
"system:serviceaccount:kube-system:traefik-ingress-controller"</p>
</blockquote>
<p>and the traefik pod itself constantly spews:</p>
<blockquote>
<p>E0714 12:17:45.108356 1 reflector.go:205]
github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86:
Failed to list *v1beta1.Ingress: ingresses.extensions is forbidden:
User "system:serviceaccount:default:default" cannot list
ingresses.extensions in the namespace "kube-system": Unknown user
"system:serviceaccount:default:default"</p>
<p>E0714 12:17:45.708160 1 reflector.go:205]
github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86:
Failed to list *v1.Service: services is forbidden: User
"system:serviceaccount:default:default" cannot list services in the
namespace "default": Unknown user
"system:serviceaccount:default:default"</p>
<p>E0714 12:17:45.714057 1 reflector.go:205]
github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86:
Failed to list *v1.Endpoints: endpoints is forbidden: User
"system:serviceaccount:default:default" cannot list endpoints in the
namespace "kube-system": Unknown user
"system:serviceaccount:default:default"</p>
<p>E0714 12:17:45.714829 1 reflector.go:205]
github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86:
Failed to list *v1beta1.Ingress: ingresses.extensions is forbidden:
User "system:serviceaccount:default:default" cannot list
ingresses.extensions in the namespace "default": Unknown user
"system:serviceaccount:default:default"</p>
<p>E0714 12:17:45.715653 1 reflector.go:205]
github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86:
Failed to list *v1.Endpoints: endpoints is forbidden: User
"system:serviceaccount:default:default" cannot list endpoints in the
namespace "default": Unknown user
"system:serviceaccount:default:default"</p>
<p>E0714 12:17:45.716659 1 reflector.go:205]
github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86:
Failed to list *v1.Service: services is forbidden: User
"system:serviceaccount:default:default" cannot list services in the
namespace "kube-system": Unknown user
"system:serviceaccount:default:default"</p>
</blockquote>
<p>I created the clusterrole using:</p>
<pre><code>---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: traefik-ingress-controller
rules:
- apiGroups: [""]
resources: ["servies", "endpoints", "secrets"]
verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
resources: ["ingresses"]
verbs: ["get", "list", "watch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: traefik-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
name: traefik-ingress-controller
namespace: kube-system
</code></pre>
<p>and then deployed traefik as deployment:</p>
<pre><code>---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
namespace: kube-system
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: traefik-ingress-controller
namespace: kube-system
labels:
k8s-app: traefik-ingress-lb
spec:
replicas: 1
selector:
matchLabels:
k8s-app: traefik-ingress-lb
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- image: traefik
name: traefik-ingress-lb
ports:
- name: http
containerPort: 80
- name: admin
containerPort: 8080
args:
- --api
- --kubernetes
- --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
name: traefik-ingress-service
namespace: kube-system
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- protocol: TCP
port: 80
name: web
- protocol: TCP
port: 8080
name: admin
type: LoadBalancer
</code></pre>
<p>when using helm to install traefik I used the following values file:</p>
<pre><code>dashboard:
enabled: true
domain: traefik.example.com
kubernetes:
namespaces:
- default
- kube-system
</code></pre>
<p>and finally, for the UI I used the following yaml:</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: traefik-web-ui
namespace: kube-system
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- name: web
port: 80
targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: traefik-web-ui
namespace: kube-system
spec:
rules:
- host: traefik.example.com
http:
paths:
- path: /
backend:
serviceName: traefik-web-ui
servicePort: web
</code></pre>
<p>thanks for looking!</p>
<p>(edit: corrected typo in title)</p>
| <p>Since the namespace "kube-system" is handled by the Master node, you will not be able to deploy anything on that specific namespace. The Master node within GKE is a managed service and is not accessible to users at this time.</p>
<p>If you would like to have this functionality, then the only suggestion I can provide at this time is to create your own <a href="https://kubernetes.io/docs/setup/scratch/" rel="nofollow noreferrer">custom cluster from scratch</a>. This will allow you to have access to the Master Node and you would have the option to customize your cluster to your liking.</p>
<p>Edit: I was able to find instructions from <a href="https://github.com/Zenika/traefik-gke-demo" rel="nofollow noreferrer">github</a> on how to use Traefik as a GKE loadbalancer. I would suggest testing this first before running it in your production cluster.</p>
|
<p>I want to use Kafka to "divide the work". I want to publish instances of work to a topic, and run a cloud of identical consumers to process them. As each consumer finishes its work, it will pluck the next work from the topic. Each work should only be processed once by one consumer. Processing work is expensive, so I will need many consumers running on many machines to keep up. I want the number of consumers to grow and shrink as needed (I plan to use Kubernetes for this).</p>
<p>I found a pattern where a unique partition is created for each consumer. This "divides the work", but the number of partitions is set when the topic is created. Furthermore, the topic must be created on the command line e.g.</p>
<pre><code>bin/kafka-topics.sh --zookeeper localhost:2181 --partitions 3 --topic divide-topic --create --replication-factor 1
</code></pre>
<p>...</p>
<pre><code>for n in range(0,3):
consumer = KafkaConsumer(
bootstrap_servers=['localhost:9092'])
partition = TopicPartition('divide-topic',n)
consumer.assign([partition])
...
</code></pre>
<p>I could create a unique topic for each consumer, and write my own code to assign work to those topic. That seems gross, and I still have to create topics via the command line.</p>
<p>A work queue with a dynamic number of parallel consumers is a common architecture. I can't be the first to need this. What is the right way to do it with Kafka?</p>
| <p>The pattern you found is accurate. Note that topics can also be created using the <a href="http://kafka.apache.org/11/javadoc/org/apache/kafka/clients/admin/AdminClient.html#createTopics-java.util.Collection-" rel="nofollow noreferrer">Kafka Admin API</a> and <a href="http://kafka.apache.org/documentation/#basic_ops_modify_topic" rel="nofollow noreferrer">partitions can also be added</a> once a topic has been created (with some gotchas).</p>
<p>In Kafka, the way to divide work and allow scaling is to use <a href="http://kafka.apache.org/documentation/#kafka_mq" rel="nofollow noreferrer">partitions</a>. This is because in a consumer group, each partition is consumed by a single consumer at any time. </p>
<p>For example, you can have a topic with 50 partitions and a consumer group subscribed to this topic:</p>
<ul>
<li><p>When the throughput is low, you can have only a few consumers in the group and they should be able to handle the traffic.</p></li>
<li><p>When the throughput increases, you can add consumers, up to the number of partitions (50 in this example), to pick up some of the work.</p></li>
</ul>
<p>In this scenario, 50 consumers is the limit in terms of scaling. Consumers expose a number of metrics (like lag) allowing you to decide if you have enough of them at any time.</p>
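<p>The key detail is that all workers join the same consumer group; Kafka then spreads the partitions across whichever members are currently alive and rebalances when one joins or leaves. You can see this behaviour quickly with the console consumer shipped with Kafka (the group name is arbitrary):</p>
<pre><code>bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
    --topic divide-topic --group work-divider
</code></pre>
<p>Start several of these and each partition is consumed by exactly one of them; kill one and its partitions are reassigned to the survivors. In your Python code this corresponds to passing a <code>group_id</code> to <code>KafkaConsumer</code> and subscribing to the topic, instead of assigning partitions manually.</p>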
|
<p>I have an application that is internal and exposed only to other application on the cluster by a service with cluster IP. Other services are accessing this application via it's DNS (<code>serviceName-namespace.svc.cluster.local</code>). This application handles sensitive data, so although all the communication is inside the cluster I would like to use TLS to secure the communications to this application.</p>
<p>My question is - how can I enable TLS on a service? Is there something already exist or should I handle it on the application code? Also, is there already a CA I can use on the cluster that can sign certificates for <code>.svc.cluster.local</code>?</p>
<p>To clarify, I know I can use ingress for this purpose. The only problem is keeping this service internal only - so only services inside the cluster will be able to access it.</p>
<p>Thanks,
Omer</p>
| <p>I just found that Kubernetes API can be used to generate a certificate that will be trusted by all the pods running on the cluster. This option might be simpler than the alternatives. You can find the documentation <a href="https://kubernetes.io/docs/tasks/tls/managing-tls-in-a-cluster/" rel="noreferrer">here</a>, including full flow of generating a certificate and using it.</p>
|
<p>We have a kubernetes cluster with Debezium running as a source task from a Postgresql and writing to kafka. Debezium, postgres and kafka are all running in separate pods.
When the postgres pod is deleted and kubernetes re-creates the pod, debezium pod fails to re-connect.
Logs from debezium pod:</p>
<pre><code> 2018-07-17 08:31:38,311 ERROR || WorkerSourceTask{id=inventory-connector-0} Task is being killed and will not recover until manually restarted [org.apache.kafka.connect.runtime.WorkerTask]
2018-07-17 08:31:38,311 INFO || [Producer clientId=producer-4] Closing the Kafka producer with timeoutMillis = 30000 ms. [org.apache.kafka.clients.producer.KafkaProducer]
</code></pre>
<p>Debezium continues to try to flush outstanding messages at intervals, but gives the following exception:</p>
<pre><code> 2018-07-17 08:32:38,167 ERROR || WorkerSourceTask{id=inventory-connector-0} Exception thrown while calling task.commit() [org.apache.kafka.connect.runtime.WorkerSourceTask]
org.apache.kafka.connect.errors.ConnectException: org.postgresql.util.PSQLException: Database connection failed when writing to copy
at io.debezium.connector.postgresql.RecordsStreamProducer.commit(RecordsStreamProducer.java:151)
at io.debezium.connector.postgresql.PostgresConnectorTask.commit(PostgresConnectorTask.java:138)
at org.apache.kafka.connect.runtime.WorkerSourceTask.commitSourceTask(WorkerSourceTask.java:437)
at org.apache.kafka.connect.runtime.WorkerSourceTask.commitOffsets(WorkerSourceTask.java:378)
at org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter.commit(SourceTaskOffsetCommitter.java:108)
at org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter.access$000(SourceTaskOffsetCommitter.java:45)
at org.apache.kafka.connect.runtime.SourceTaskOffsetCommitter$1.run(SourceTaskOffsetCommitter.java:82)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.postgresql.util.PSQLException: Database connection failed when writing to copy
at org.postgresql.core.v3.QueryExecutorImpl.flushCopy(QueryExecutorImpl.java:942)
at org.postgresql.core.v3.CopyDualImpl.flushCopy(CopyDualImpl.java:23)
at org.postgresql.core.v3.replication.V3PGReplicationStream.updateStatusInternal(V3PGReplicationStream.java:176)
at org.postgresql.core.v3.replication.V3PGReplicationStream.forceUpdateStatus(V3PGReplicationStream.java:99)
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$1.doFlushLsn(PostgresReplicationConnection.java:246)
at io.debezium.connector.postgresql.connection.PostgresReplicationConnection$1.flushLsn(PostgresReplicationConnection.java:239)
at io.debezium.connector.postgresql.RecordsStreamProducer.commit(RecordsStreamProducer.java:146)
... 13 more
Caused by: java.net.SocketException: Broken pipe (Write failed)
at java.net.SocketOutputStream.socketWrite0(Native Method)
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
at org.postgresql.core.PGStream.flush(PGStream.java:553)
at org.postgresql.core.v3.QueryExecutorImpl.flushCopy(QueryExecutorImpl.java:939)
... 19 more
</code></pre>
<p>Is there a way to have debezium re-establish its connection to postgres when it becomes available?
Or am I missing some config?</p>
<ul>
<li>Debezium version 0.8 </li>
<li>kubernetes version 1.10.3 </li>
<li>postgres version 9.6</li>
</ul>
| <p>Looks like this is a common issue and has open feature requests in both debezium and kafka</p>
<p><a href="https://issues.jboss.org/browse/DBZ-248" rel="nofollow noreferrer">https://issues.jboss.org/browse/DBZ-248</a></p>
<p><a href="https://issues.apache.org/jira/browse/KAFKA-5352" rel="nofollow noreferrer">https://issues.apache.org/jira/browse/KAFKA-5352</a></p>
<p>While these are open, it looks like this is expected behaviour</p>
<p>As a workaround I've add this liveness probe to the deployment</p>
<pre><code> livenessProbe:
exec:
command:
- sh
- -ec
- ipaddress=$(ip addr | grep 'state UP' -A2 | tail -n1 | awk '{print $2}' | cut -f1 -d'/'); reply=$(curl -s $ipaddress:8083/connectors/inventory-connector/status | grep -o RUNNING | wc -l); if [ $reply -lt 2 ]; then exit 1; fi;
initialDelaySeconds: 30
periodSeconds: 5
</code></pre>
<p>First clause gets the container IP address:</p>
<pre><code> ipaddress=$(ip addr | grep 'state UP' -A2 | tail -n1 | awk '{print $2}' | cut -f1 -d'/');
</code></pre>
<p>Second clause makes the request and counts instances of 'RUNNING' in the response json:</p>
<pre><code> reply=$(curl -s $ipaddress:8083/connectors/inventory-connector/status | grep -o RUNNING | wc -l);
</code></pre>
<p>Third clause returns exit code 1 if 'RUNNING' appears less than twice</p>
<pre><code> if [ $reply -lt 2 ]; then exit 1; fi
</code></pre>
<p>It seems to be working on initial tests - i.e. restarting the postgres DB triggers a restart of the debezium container. I guess a script something like this (although perhaps 'robustified') could be included in the image to facilitate the probe.</p>
|
<p>I install kube1.10.3 in two virtualbox(centos 7.4) in my win10 machine. I use git clone to get prometheus yaml files. </p>
<pre><code>git clone https://github.com/kubernetes/kubernetes
</code></pre>
<p>Then I enter kubernetes/cluster/addons/prometheus and follow this order to create the pods:</p>
<pre><code>alertmanager-configmap.yaml
alertmanager-pvc.yaml
alertmanager-deployment.yaml
alertmanager-service.yaml
kube-state-metrics-rbac.yaml
kube-state-metrics-deployment.yaml
kube-state-metrics-service.yaml
node-exporter-ds.yml
node-exporter-service.yaml
prometheus-configmap.yaml
prometheus-rbac.yaml
prometheus-statefulset.yaml
prometheus-service.yaml
</code></pre>
<p>But Prometheus and alertmanage are in pending state:</p>
<pre><code>kube-system alertmanager-6bd9584b85-j4h5m 0/2 Pending 0 9m
kube-system calico-etcd-pnwtr 1/1 Running 0 16m
kube-system calico-kube-controllers-5d74847676-mjq4j 1/1 Running 0 16m
kube-system calico-node-59xfk 2/2 Running 1 16m
kube-system calico-node-rqsh5 2/2 Running 1 16m
kube-system coredns-7997f8864c-ckhsq 1/1 Running 0 16m
kube-system coredns-7997f8864c-jjtvq 1/1 Running 0 16m
kube-system etcd-master16g 1/1 Running 0 15m
kube-system heapster-589b7db6c9-mpmks 1/1 Running 0 16m
kube-system kube-apiserver-master16g 1/1 Running 0 15m
kube-system kube-controller-manager-master16g 1/1 Running 0 15m
kube-system kube-proxy-hqq49 1/1 Running 0 16m
kube-system kube-proxy-l8hmh 1/1 Running 0 16m
kube-system kube-scheduler-master16g 1/1 Running 0 16m
kube-system kube-state-metrics-8595f97c4-g6x5x 2/2 Running 0 8m
kube-system kubernetes-dashboard-7d5dcdb6d9-944xl 1/1 Running 0 16m
kube-system monitoring-grafana-7b767fb8dd-mg6dd 1/1 Running 0 16m
kube-system monitoring-influxdb-54bd58b4c9-z9tgd 1/1 Running 0 16m
kube-system node-exporter-f6pmw 1/1 Running 0 8m
kube-system node-exporter-zsd9b 1/1 Running 0 8m
kube-system prometheus-0 0/2 Pending 0 7m
</code></pre>
<p>I checked prometheus pod by command shown below:</p>
<pre><code>[root@master16g prometheus]# kubectl describe pod prometheus-0 -n kube-system
Name: prometheus-0
Namespace: kube-system
Node: <none>
Labels: controller-revision-hash=prometheus-8fc558cb5
k8s-app=prometheus
statefulset.kubernetes.io/pod-name=prometheus-0
Annotations: scheduler.alpha.kubernetes.io/critical-pod=
Status: Pending
IP:
Controlled By: StatefulSet/prometheus
Init Containers:
init-chown-data:
Image: busybox:latest
Port: <none>
Host Port: <none>
Command:
chown
-R
65534:65534
/data
Environment: <none>
Mounts:
/data from prometheus-data (rw)
/var/run/secrets/kubernetes.io/serviceaccount from prometheus-token-f6v42 (ro)
Containers:
prometheus-server-configmap-reload:
Image: jimmidyson/configmap-reload:v0.1
Port: <none>
Host Port: <none>
Args:
--volume-dir=/etc/config
--webhook-url=http://localhost:9090/-/reload
Limits:
cpu: 10m
memory: 10Mi
Requests:
cpu: 10m
memory: 10Mi
Environment: <none>
Mounts:
/etc/config from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from prometheus-token-f6v42 (ro)
prometheus-server:
Image: prom/prometheus:v2.2.1
Port: 9090/TCP
Host Port: 0/TCP
Args:
--config.file=/etc/config/prometheus.yml
--storage.tsdb.path=/data
--web.console.libraries=/etc/prometheus/console_libraries
--web.console.templates=/etc/prometheus/consoles
--web.enable-lifecycle
Limits:
cpu: 200m
memory: 1000Mi
Requests:
cpu: 200m
memory: 1000Mi
Liveness: http-get http://:9090/-/healthy delay=30s timeout=30s period=10s #success=1 #failure=3
Readiness: http-get http://:9090/-/ready delay=30s timeout=30s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/data from prometheus-data (rw)
/etc/config from config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from prometheus-token-f6v42 (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
prometheus-data:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: prometheus-data-prometheus-0
ReadOnly: false
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: prometheus-config
Optional: false
prometheus-token-f6v42:
Type: Secret (a volume populated by a Secret)
SecretName: prometheus-token-f6v42
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 42s (x22 over 5m) default-scheduler pod has unbound PersistentVolumeClaims (repeated 2 times)
</code></pre>
<p>In the last line, it shows warning message: pod has unbound PersistentVolumeClaims (repeated 2 times)</p>
<p>The Prometheus logs says:</p>
<pre><code>[root@master16g prometheus]# kubectl logs prometheus-0 -n kube-system
Error from server (BadRequest): a container name must be specified for pod prometheus-0, choose one of: [prometheus-server-configmap-reload prometheus-server] or one of the init containers: [init-chown-data]
</code></pre>
<p>The I describe alertmanager pod and its logs:</p>
<pre><code>[root@master16g prometheus]# kubectl describe pod alertmanager-6bd9584b85-j4h5m -n kube-system
Name: alertmanager-6bd9584b85-j4h5m
Namespace: kube-system
Node: <none>
Labels: k8s-app=alertmanager
pod-template-hash=2685140641
version=v0.14.0
Annotations: scheduler.alpha.kubernetes.io/critical-pod=
Status: Pending
IP:
Controlled By: ReplicaSet/alertmanager-6bd9584b85
Containers:
prometheus-alertmanager:
Image: prom/alertmanager:v0.14.0
Port: 9093/TCP
Host Port: 0/TCP
Args:
--config.file=/etc/config/alertmanager.yml
--storage.path=/data
--web.external-url=/
Limits:
cpu: 10m
memory: 50Mi
Requests:
cpu: 10m
memory: 50Mi
Readiness: http-get http://:9093/%23/status delay=30s timeout=30s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/data from storage-volume (rw)
/etc/config from config-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-snfrt (ro)
prometheus-alertmanager-configmap-reload:
Image: jimmidyson/configmap-reload:v0.1
Port: <none>
Host Port: <none>
Args:
--volume-dir=/etc/config
--webhook-url=http://localhost:9093/-/reload
Limits:
cpu: 10m
memory: 10Mi
Requests:
cpu: 10m
memory: 10Mi
Environment: <none>
Mounts:
/etc/config from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-snfrt (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: alertmanager-config
Optional: false
storage-volume:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: alertmanager
ReadOnly: false
default-token-snfrt:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-snfrt
Optional: false
QoS Class: Guaranteed
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 3m (x26 over 9m) default-scheduler pod has unbound PersistentVolumeClaims (repeated 2 times)
</code></pre>
<p>And its log:</p>
<pre><code>[root@master16g prometheus]# kubectl logs alertmanager-6bd9584b85-j4h5m -n kube-system
Error from server (BadRequest): a container name must be specified for pod alertmanager-6bd9584b85-j4h5m, choose one of: [prometheus-alertmanager prometheus-alertmanager-configmap-reload]
</code></pre>
<p>It has same warning message as Prometheus:</p>
<pre><code>pod has unbound PersistentVolumeClaims (repeated 2 times)
</code></pre>
<p>Then I get pvc by issuing command as follows:</p>
<pre><code>[root@master16g prometheus]# kubectl get pvc --all-namespaces
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
kube-system alertmanager Pending standard 20m
kube-system prometheus-data-prometheus-0 Pending standard 19m
</code></pre>
<p>My question is: how do I make the PersistentVolumeClaims bind? And why does the log say a container name must be specified?</p>
<p>===============================================================</p>
<h1>Second edition</h1>
<p>Since the PVC file references a storage class, I need to define a StorageClass yaml. How do I do that if I want NFS or GlusterFS? That way, I could avoid a cloud vendor like Google or AWS. </p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: alertmanager
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: EnsureExists
spec:
storageClassName: standard
accessModes:
- ReadWriteOnce
resources:
requests:
storage: "2Gi"
</code></pre>
| <p>This log entry:</p>
<pre><code>Error from server (BadRequest): a container name must be specified for pod alertmanager-6bd9584b85-j4h5m, choose one of: [prometheus-alertmanager prometheus-alertmanager-configmap-reload]
</code></pre>
<p>means <code>Pod</code> <code>alertmanager-6bd9584b85-j4h5m</code> consists of two containers:</p>
<ul>
<li>prometheus-alertmanager</li>
<li>prometheus-alertmanager-configmap-reload</li>
</ul>
<p>When you use <code>kubectl logs</code> for <code>Pod</code> which consists of more then one containers you must specify a name of the container to view its logs. Command template:</p>
<pre><code>kubectl -n <namespace> logs <pod_name> <container_name>
</code></pre>
<p>For example, if you want to view logs of the container <code>prometheus-alertmanager</code> which is a part of <code>Pod</code> <code>alertmanager-6bd9584b85-j4h5m</code> in the namespace <code>kube-system</code> you should use this command:</p>
<pre><code>kubectl -n kube-system logs alertmanager-6bd9584b85-j4h5m prometheus-alertmanager
</code></pre>
<p><code>Pending</code> status of the PVCs could mean you have no corresponding PVs.</p>
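<p>For example, a manually created PV with <code>storageClassName: standard</code> would let the alertmanager claim above bind. A rough NFS-backed sketch (the server address and export path are placeholders):</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: alertmanager-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  nfs:
    server: 172.16.28.200        # placeholder NFS server
    path: /exports/alertmanager  # placeholder export path
</code></pre>
<p>For dynamic provisioning with NFS or GlusterFS, you would instead deploy an external provisioner (for example nfs-client-provisioner, or heketi for Gluster) and point a StorageClass named <code>standard</code> at it, so the PVCs from the addon manifests get volumes created automatically.</p>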
|
<p>I have a Kubernetes cluster running several different applications... one of my PHP applications is calling an external service that requires that the caller's IP address is whitelisted with the service. Since this is a Kubernetes cluster and the IP address can change, I could have the IP address that is currently running my application whitelisted, but it may not stay that way. Is there a "best practice" to whitelist an IP from a Kubernetes cluster? </p>
| <p>To achieve this, you need to add the IP addresses of your Kubernetes nodes to the whitelist of your external services. When you call something external from a pod, your request goes through the node interface and carries the node's external IP. In case your nodes have no external IPs and sit behind a router, you need to whitelist the IP address of your router instead. Alternatively, you can configure some kind of proxy, add the proxy's IP to the whitelist, and always go through this proxy to reach your external service.</p>
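<p>To collect the addresses for the whitelist, you can list the node IPs directly (the jsonpath below assumes your nodes actually report an <code>ExternalIP</code> address):</p>
<pre><code>kubectl get nodes -o wide
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'
</code></pre>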
|
<p>I am trying to start up the kubelet service on a worker node (the 3rd worker node)... at the moment, I can't quite tell what the error is here. I do, however, see <code>F0716 16:42:20.047413 556 server.go:155] unknown command: $KUBELET_EXTRA_ARGS</code> in the output given by <code>sudo systemctl status kubelet -l</code>:</p>
<pre><code>[svc.jenkins@node6 ~]$ sudo systemctl status kubelet -l
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: activating (auto-restart) (Result: exit-code) since Mon 2018-07-16 16:42:20 CDT; 4s ago
Docs: http://kubernetes.io/docs/
Process: 556 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
Main PID: 556 (code=exited, status=255)
Jul 16 16:42:20 node6 kubelet[556]: --tls-cert-file string File containing x509 Certificate used for serving HTTPS (with intermediate certs, if any, concatenated after server cert). If --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory passed to --cert-dir. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 16 16:42:20 node6 kubelet[556]: --tls-cipher-suites strings Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. Possible values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_RC4_128_SHA,TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_3DES_EDE_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_128_CBC_SHA256,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_RC4_128_SHA (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 16 16:42:20 node6 kubelet[556]: --tls-min-version string Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12 (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 16 16:42:20 node6 kubelet[556]: --tls-private-key-file string File containing x509 private key matching --tls-cert-file. (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 16 16:42:20 node6 kubelet[556]: -v, --v Level log level for V logs
Jul 16 16:42:20 node6 kubelet[556]: --version version[=true] Print version information and quit
Jul 16 16:42:20 node6 kubelet[556]: --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging
Jul 16 16:42:20 node6 kubelet[556]: --volume-plugin-dir string The full path of the directory in which to search for additional third party volume plugins (default "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/")
Jul 16 16:42:20 node6 kubelet[556]: --volume-stats-agg-period duration Specifies interval for kubelet to calculate and cache the volume disk usage for all pods and volumes. To disable volume calculations, set to 0. (default 1m0s) (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)
Jul 16 16:42:20 node6 kubelet[556]: F0716 16:42:20.047413 556 server.go:155] unknown command: $KUBELET_EXTRA_ARGS
</code></pre>
<p>Here is the configuration for my drop-in located at <code>/etc/systemd/system/kubelet.service.d/10-kubeadm.conf</code> (it is the same on the other nodes that are in a working state):</p>
<pre><code>[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true"
Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin"
Environment="KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local"
Environment="KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt"
Environment="KUBELET_CADVISOR_ARGS=--cadvisor-port=0"
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
Environment="KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/data01/kubelet/pki"
Environment="KUBELET_EXTRA_ARGS=$KUBELET_EXTRA_ARGS --root-dir=/data01/kubelet"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS
</code></pre>
<p>I just need help diagnosing the issue preventing it from starting so that it can be resolved. Thanks in advance :)</p>
<p>EDIT: </p>
<pre><code>[svc.jenkins@node6 ~]$ kubelet --version
Kubernetes v1.10.4
</code></pre>
| <p>Currently, a slightly different approach is used with <code>systemd</code>: all options are put into a separate environment file and the systemd unit refers to that file.</p>
<p>In your case, it would be something like this:</p>
<pre><code>/etc/sysconfig/kubelet
----------------------
KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf
KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true
KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin
KUBELET_DNS_ARGS=--cluster-dns=10.96.0.10 --cluster-domain=cluster.local
KUBELET_AUTHZ_ARGS=--authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt
KUBELET_CADVISOR_ARGS=--cadvisor-port=0
KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs
KUBELET_CERTIFICATE_ARGS=--rotate-certificates=true --cert-dir=/data01/kubelet/pki
KUBELET_EXTRA_ARGS=$KUBELET_EXTRA_ARGS --root-dir=/data01/kubelet
/etc/systemd/system/kubelet.service.d/10-kubeadm.conf
-----------------------------------------------------
...
[Service]
EnvironmentFile=/etc/sysconfig/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_SYSTEM_PODS_ARGS $KUBELET_NETWORK_ARGS $KUBELET_DNS_ARGS $KUBELET_AUTHZ_ARGS $KUBELET_CADVISOR_ARGS $KUBELET_CGROUP_ARGS $KUBELET_CERTIFICATE_ARGS $KUBELET_EXTRA_ARGS
</code></pre>
<p>The variables in systemd config file could look like <code>${VARIABLE}</code> or <code>$VARIABLE</code>. Both cases should work fine.</p>
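<p>After moving the options into the environment file, reload systemd and restart the service to verify the fix:</p>
<pre><code>sudo systemctl daemon-reload
sudo systemctl restart kubelet
sudo systemctl status kubelet -l   # should no longer exit with "unknown command"
</code></pre>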
|
<p>I have installed a Kubernetes cluster using kubeadm. It works fine. But I accidentally restarted the host machine where the master was running. After the restart the kubelet was not running. I had to do </p>
<blockquote>
<p>Kubeadm reset</p>
</blockquote>
<p>And</p>
<blockquote>
<p>Kubeadm init</p>
</blockquote>
<p>What should I do to bring my cluster up automatically after a host machine restart?</p>
| <p>There is no need to do <code>kubeadm reset</code> and <code>kubeadm init</code> in such cases.
To start <code>kubelet</code> service during the current session use: </p>
<pre><code>systemctl start kubelet
</code></pre>
<p>To start service automatically during the boot, you must enable it using:</p>
<pre><code>systemctl enable kubelet
</code></pre>
<p>Keep in mind that if you are running above commands as a non-root user, you will have to use <code>sudo</code>.</p>
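<p>For example, on a recent systemd both steps can be combined and then verified (just an illustration of the commands above):</p>
<pre><code>sudo systemctl enable --now kubelet
systemctl status kubelet
</code></pre>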
|
<p>The Istio ingress gateway exposes the following ports by default:</p>
<pre><code>80:31380/TCP,443:31390/TCP,31400:31400/TCP
</code></pre>
<p>Why does it expose 31400 and map it to 31400? I can't find an explanation of this in the docs or elsewhere.</p>
<p><strong>Background:</strong> I'm following the <a href="https://istio.io/docs/setup/kubernetes/helm-install/#option-1-install-with-helm-via-helm-template" rel="noreferrer">Install with Helm via <code>helm template</code></a> guide using Istio 0.8.0. The deployment manifest is built from <a href="https://github.com/istio/istio/tree/0.8.0/install/kubernetes/helm/istio" rel="noreferrer">https://github.com/istio/istio/tree/0.8.0/install/kubernetes/helm/istio</a>, giving the following ingress gateway service definition:</p>
<pre><code># Source: istio/charts/ingressgateway/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: istio-ingressgateway
namespace: istio-system
labels:
chart: ingressgateway-0.8.0
release: istio
heritage: Tiller
istio: ingressgateway
spec:
type: NodePort
selector:
istio: ingressgateway
ports:
-
name: http
nodePort: 31380
port: 80
-
name: https
nodePort: 31390
port: 443
-
name: tcp
nodePort: 31400
port: 31400
</code></pre>
| <p><a href="https://github.com/istio/istio/commit/a4b6cc55dd2066f6c2bbe8fdf6d39657f44f444e" rel="noreferrer">Commit a4b6cc5</a> mentions:</p>
<blockquote>
<p>Adding the 31400 port back because of testdata dependency</p>
</blockquote>
<p>This is part of <a href="https://github.com/istio/istio/pull/6350" rel="noreferrer"><code>istio/istio</code> PR 6350</a></p>
<blockquote>
<p>These changes add support for multiple ingress/egress gateway configuration in the Helm charts.<br>
The new gateways field is an array that by default has one configuration (as it was before) but allows users to add more configurations to have multiple ingress/egress gateways deployed when installing the charts.</p>
</blockquote>
<p>See <a href="https://github.com/istio/istio/pull/6350/commits/05cba4e6570c1350a6b532355b7b4cc9c857c8e7" rel="noreferrer">commit 05cba4e</a>.</p>
|
<p>I successfully deployed helm chart <a href="https://github.com/coreos/prometheus-operator/tree/master/helm/prometheus-operator" rel="nofollow noreferrer">prometheus operator</a>, <a href="https://github.com/coreos/prometheus-operator/tree/master/helm/kube-prometheus" rel="nofollow noreferrer">kube-prometheus</a> and <a href="https://github.com/kubernetes/charts/tree/master/incubator/kafka]" rel="nofollow noreferrer">kafka</a> (tried both image danielqsj/kafka_exporter <code>v1.0.1</code> and <code>v1.2.0</code>). </p>
<p>Install with default value mostly, rbac are enabled. </p>
<p>I can see 3 <code>up</code> nodes in the Kafka target list in Prometheus, but when I go into Grafana, I can't see any kafka metrics in the <a href="https://grafana.com/dashboards/721" rel="nofollow noreferrer">kafka overview</a> dashboard.</p>
<p>Anything I missed or what I can check to fix this issue?</p>
<p>I can see metrics start with <code>java_</code>, <code>kafka_</code>, but no <code>jvm_</code> and only few <code>jmx_</code> metrics.</p>
<p><a href="https://i.stack.imgur.com/mq9tA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mq9tA.png" alt="enter image description here"></a></p>
<p>I found someone reported similar issue (<a href="https://groups.google.com/forum/#!searchin/prometheus-users/jvm_%7Csort:date/prometheus-users/OtYM7qGMbvA/dZ4vIfWLAgAJ" rel="nofollow noreferrer">https://groups.google.com/forum/#!searchin/prometheus-users/jvm_%7Csort:date/prometheus-users/OtYM7qGMbvA/dZ4vIfWLAgAJ</a>), So I deployed with old version of jmx exporter from 0.6 to 0.9, still no <code>jvm_</code> metrics.</p>
<p>Are there anything I missed?</p>
<h3>env:</h3>
<p>kuberentes: AWS EKS (kubernetes version is 1.10.x)</p>
<p>public grafana dashboard: <a href="https://grafana.com/dashboards/721" rel="nofollow noreferrer">kafka overview</a></p>
| <p>I just realised that the owner of <code>jmx-exporter</code> mentions in the README:</p>
<blockquote>
<p>This exporter is <code>intended to be run as a Java Agent</code>, exposing a HTTP server and serving metrics of the local JVM. It can be also run as an independent HTTP server and scrape remote JMX targets, <code>but this has various disadvantages</code>, such as being harder to configure and being unable to expose process metrics (e.g., memory and CPU usage). Running the exporter as a Java Agent is thus strongly encouraged.</p>
</blockquote>
<p>I didn't really understand what that meant until I saw this comment: </p>
<p><a href="https://github.com/prometheus/jmx_exporter/issues/111#issuecomment-341983150" rel="nofollow noreferrer">https://github.com/prometheus/jmx_exporter/issues/111#issuecomment-341983150</a></p>
<blockquote>
<p>@brian-brazil can you add some sort of tip to the readme that jvm_* metrics are only exposed when using the Java agent? It took me an hour or two of troubleshooting and searching old issues to figure this out, after playing only with the HTTP server version. Thanks!</p>
</blockquote>
<p>So jmx-exporter has to be run as a <code>java agent</code> to get the <code>jvm_</code> metrics. <code>jmx_prometheus_httpserver</code> doesn't support them, but it is the default setting in the kafka helm chart.</p>
<p><a href="https://github.com/kubernetes/charts/blob/master/incubator/kafka/templates/statefulset.yaml#L82" rel="nofollow noreferrer">https://github.com/kubernetes/charts/blob/master/incubator/kafka/templates/statefulset.yaml#L82</a></p>
<pre><code>command:
- sh
- -exc
- |
trap "exit 0" TERM; \
while :; do \
java \
-XX:+UnlockExperimentalVMOptions \
-XX:+UseCGroupMemoryLimitForHeap \
-XX:MaxRAMFraction=1 \
-XshowSettings:vm \
-jar \
jmx_prometheus_httpserver.jar \ # <<< here
{{ .Values.prometheus.jmx.port | quote }} \
/etc/jmx-kafka/jmx-kafka-prometheus.yml & \
wait $! || sleep 3; \
done
</code></pre>
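<p>A possible workaround, if you control the broker's JVM options, is to attach the exporter as a Java agent instead of running the standalone HTTP server. A rough sketch (the jar path, port and config path are assumptions you would have to bake into the image or chart):</p>
<pre><code># assumption: the agent jar has been added to the Kafka image at this path
export KAFKA_OPTS="-javaagent:/opt/jmx/jmx_prometheus_javaagent.jar=5556:/etc/jmx-kafka/jmx-kafka-prometheus.yml"
</code></pre>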
|
<p>I am attempting to get Istio setup on Kubernetes as an ingress controller. The problem is that my two applications seem to be accessible from the Istio ingress controllers node port (E.g., <a href="http://[host]:31380/application1" rel="nofollow noreferrer">http://[host]:31380/application1</a> and
<a href="http://[host]:31380/application2" rel="nofollow noreferrer">http://[host]:31380/application2</a>) but not accessible from 443/80.</p>
<p>I am new to Kubernetes and Istio so I had to use the <a href="https://istio.io/docs/guides/bookinfo/" rel="nofollow noreferrer">https://istio.io/docs/guides/bookinfo/</a> guide as a reference. Following the guide was fairly easy and I was able to access the Bookinfo application using the node port as mentioned. I am unable to access it from 443/80 though. I used the helm chart to install Istio. I also don't see anything under Ingresses within the Kubernetes dashboard.</p>
<p>Here is an example of the gateway/virtual service yaml:</p>
<pre>
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: myapp-gateway
spec:
selector:
istio: ingressgateway # use istio default controller
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: myapp-virtual-service
spec:
hosts:
- "*"
gateways:
- myapp-gateway
http:
- match:
- uri:
prefix: /myapp
route:
- destination:
host: myapp-app-service
port:
number: 7080
- match:
- uri:
prefix: /
route:
- destination:
host: kibana
port:
number: 5601
</pre>
<p>Any ideas on what I have to do to get it to listen on 443? Am I missing a component entirely? </p>
| <p>If routing to your application is required to run on 443/80, your Kubernetes cluster must have an external load balancer deployed. If one is not present, the traffic will be routed to the ingress node port. </p>
<p>Refer to - <a href="https://istio.io/docs/tasks/traffic-management/ingress/#determining-the-ingress-ip-and-ports" rel="nofollow noreferrer">https://istio.io/docs/tasks/traffic-management/ingress/#determining-the-ingress-ip-and-ports</a> (Determining the ingress IP and ports):</p>
<p><em>"If the EXTERNAL-IP value is set, your environment has an external load balancer that you can use for the ingress gateway. If the EXTERNAL-IP value is (or perpetually ), your environment does not provide an external load balancer for the ingress gateway. In this case, you can access the gateway using the service’s node port."</em></p>
<p>Example for my bare-metal instance without an external load balancer:</p>
<pre>[admin@master1 ~]$ kubectl get svc -n istio-system | grep istio-ingress
istio-ingress LoadBalancer 10.114.107.196 <pending> 80:32400/TCP,443:31564/TCP 5d
istio-ingressgateway LoadBalancer 10.99.1.148 <pending> 80:31380/TCP,443:31390/TCP,31400:31400/TCP 5d</pre>
<p>If you are deploying to an online cloud provider such as IBM Bluemix (probably AWS/Azure/etc.), you should already have one configured. If your configuration is on bare-metal, you likely don't have a load balancer configured.</p>
<p>Example for my Bluemix instance with an external load balancer:</p>
<pre>λ kubectl get svc -n istio-system | grep istio-ingress
istio-ingress LoadBalancer 172.21.26.25 123.45.67.195 80:32000/TCP,443:31694/TCP 6h
istio-ingressgateway LoadBalancer 172.21.139.142 123.45.67.196 80:31380/TCP,443:31390/TCP,31400:31400/TCP 6h</pre>
<p>I have not yet gone back to deploy a load balancer to bare metal, so I would like to hear if anyone has. I have briefly looked at MetalLB but have not spent much time on it.</p>
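<p>As a stop-gap on bare metal, you can also publish the gateway on one of your node IPs via <code>spec.externalIPs</code>, so ports 80/443 on that address reach the ingress gateway; a rough sketch (the IP is an assumption):</p>
<pre><code>kubectl -n istio-system patch svc istio-ingressgateway \
  -p '{"spec":{"externalIPs":["<node-ip>"]}}'
</code></pre>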
|
<p>We have a Rancher Kubernetes cluster where sometimes the pods get stuck in <code>terminating</code> status when we try to delete the corresponding deployment, as shown below.</p>
<pre><code>$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
...
storage-manager-deployment 1 0 0 0 1d
...
$ kubectl delete deployments storage-manager-deployment
kubectl delete deployments storage-manager-deployment
deployment.extensions "storage-manager-deployment" deleted
C-c C-c^C
$ kubectl get po
NAME READY STATUS RESTARTS AGE
...
storage-manager-deployment-6d56967cdd-7bgv5 0/1 Terminating 0 23h
...
$ kubectl delete pods storage-manager-deployment-6d56967cdd-7bgv5 --grace-period=0 --force
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "storage-manager-deployment-6d56967cdd-7bgv5" force deleted
C-c C-c^C
</code></pre>
<p>Both the delete commands (for the <code>deployment</code> and the <code>pod</code>) get stuck and need to be stopped manually.</p>
<p>We have tried both</p>
<p><code>kubectl delete pod NAME --grace-period=0 --force</code></p>
<p>and</p>
<p><code>kubectl delete pod NAME --now</code></p>
<p>without any luck.</p>
<p>We have also set <code>fs.may_detach_mounts=1</code>, so it seems that all the similar questions already on StackOverflow don't apply to our problem.</p>
<p>If we check the node on which the incriminated pod runs, it does not appear in the <code>docker ps</code> list.</p>
<p>Any suggestion?</p>
<p>Thanks</p>
| <p>Check the pod spec for an array: 'finalizers'</p>
<pre><code>finalizers:
- cattle-system
</code></pre>
<p>If this exists, remove it, and the pod will terminate.</p>
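<p>If you prefer not to edit the pod interactively, the finalizers array can also be cleared with a patch, for example:</p>
<pre><code>kubectl patch pod storage-manager-deployment-6d56967cdd-7bgv5 \
  -p '{"metadata":{"finalizers":null}}'
</code></pre>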
|
<p>I have 2 VMs that run a kubernetes master and a slave node that I have set up locally. Till now everything was working fine, but suddenly it started giving errors when I try to start the master with the kubeadm init command. I have copied the error below.</p>
<pre><code>shayeeb@ubuntu:~$ sudo kubeadm init
[init] using Kubernetes version: v1.11.1
[preflight] running pre-flight checks
I0718 11:04:57.038464 20370 kernel_validator.go:81] Validating kernel version
I0718 11:04:57.038896 20370 kernel_validator.go:96] Validating kernel config
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-apiserver-amd64:v1.11.1]: exit status 1
[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-controller-manager-amd64:v1.11.1]: exit status 1
[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-scheduler-amd64:v1.11.1]: exit status 1
[ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-proxy-amd64:v1.11.1]: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
</code></pre>
| <p>You can also run the following command rather than writing the yaml:</p>
<pre><code>kubeadm init --kubernetes-version=1.11.0 --apiserver-advertise-address=<public_ip> --apiserver-cert-extra-sans=<private_ip>
</code></pre>
<p>If you are using the flannel network, run the following command:</p>
<pre><code>kubeadm init --kubernetes-version=1.11.0 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=<public_ip> --apiserver-cert-extra-sans=<internal_ip>
</code></pre>
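<p>Whichever flags you use, the preflight errors above say the control-plane images cannot be pulled from <code>k8s.gcr.io</code>, so it is worth pulling them in advance (as the kubeadm output itself suggests) to confirm the node has access to the registry:</p>
<pre><code>sudo kubeadm config images pull
# or test a single image directly
sudo docker pull k8s.gcr.io/kube-apiserver-amd64:v1.11.1
</code></pre>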
|
<p>I have a project with N git repos, each representing a static website (N varies). For every git repo there exists a build definition that creates an nginx docker image on Azure Container Registry. These N build definitions are linked to N release defenitions that deploy each image to k8s (also on Azure). Overall, CI/CD works fine and after the releases have succedded for the first time, I see a list of environments, each representing a website that is now online.</p>
<p><a href="https://i.stack.imgur.com/FW1iK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FW1iK.png" alt="environments VSTS CI/CD release pipelines"></a></p>
<p>What I cannot do though with VSTS CI/CD is to declare how these environments are torn down. In GitLab CI (which I used before), there exists a concept of <a href="https://docs.gitlab.com/ce/ci/environments.html#stopping-an-environment" rel="nofollow noreferrer">stopping an environment</a> and although this is just a stage in <code>.gitlab-ci.yaml</code>, running it literally removes an environemnt from the list of the deployed ones.</p>
<p>Stopping an environment can be useful when deleting autodeployable feature branches (aka Review Apps). In my case, I'd like to do this when an already shared static website needs to be removed.</p>
<p>VSTS does not seem to have a concept of unreleasing something that has already been released and I'm wondering what the best workaround could be. I tried these two options so far:</p>
<ol>
<li><p>Create N new release definition pipelines, which call <code>kubectl delete ...</code> for the corresponding static website. That does not make things clear at all, because an environment called <code>k8s prod (website-42)</code> in one pipeline is not the same one as in another one (otherwise, I could see whether <code>web → cloud</code> or <code>web × cloud</code> was called last):</p>
<p><a href="https://i.stack.imgur.com/ubFGc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ubFGc.png" alt="environment cards (multiple release definitions)"></a></p></li>
<li><p>Define a new environment called <code>production (delete)</code> in the same release defenition and trigger it manually.</p>
<p><a href="https://i.stack.imgur.com/iMnf9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iMnf9.png" alt="environment cards (single release definitions)"></a></p>
<p>In this case 'deploy' sits a bit closer to 'undeploy', but it's hard to figure out what happened last (in the example above, you can kind of guess that re-releasing my k8s resources happened after I deleted them – you need to look at the time on the cards, which is a pretty poor indication).</p></li>
</ol>
<p>What else could work for deleting / undeploying released applications?</p>
| <p>VSTS does not have a "stop environment" feature (automatically deleting what was deployed to an environment) in release management. But you can achieve the same thing with a <a href="https://learn.microsoft.com/en-us/vsts/pipelines/build/yaml?view=vsts" rel="nofollow noreferrer"><strong>VSTS YAML build</strong></a>.</p>
<p>So besides the two workarounds you shared, you can also stop the environment through a VSTS YAML build (similar to the mechanism in GitLab).</p>
<p>For a YAML CI build, you just need to commit a file ending with <code>.vsts-ci.yml</code>. In that file you can specify the tasks that delete the deployed app.</p>
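<p>A rough sketch of what such a file could contain, assuming kubectl is available on the build agent and your manifests live under <code>k8s/</code> (both of these are assumptions to adapt):</p>
<pre><code># .vsts-ci.yml -- minimal sketch of a "tear down" step
steps:
- script: kubectl delete -f k8s/ --ignore-not-found
  displayName: 'Tear down the deployed website'
</code></pre>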
|
<p>I am in the process of learning Kubernetes with a view to setting up a simple cluster with Citus DB and I'm having a little trouble with getting things going, so would be grateful for any help.</p>
<p>I have a docker image containing my base debian image configured for Citus for the project, and I want to set it up at this point with one master, that should mount a GCP master disk with a Postgres DB that I'll then distribute among the other containers, each mounted with a individual separate disk with empty tables (configured with the Citus extension) to hold what gets distributed to each. I'd like to automate this further at some point, but now I'm aiming for just a master container, and eight nodes. My plan is to create a deployment that opens port 5432 and 80 on each node, and I thought that I can create two pods, one to hold the master and one to hold the eight nodes. Ideally I'd want to mount all the disks and then run a post-mount script on the master that will find all the node containers (by IP or hostname??), add them as Citus nodes, then run create_distributed_table to distribute the data. </p>
<p>My confusion at present is about how to label all the individual nodes so they will keep their internal address or hostname and so in the case of one going down it will be replaced and resume with the data on the PD. I've read about ConfigMaps and setting hostname aliases but I'm still unclear about how to proceed. Is this possible, or is this the wrong way to approach this kind of setup?</p>
| <p>You are looking for a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a>. That lets you have a known number of pod replicas; with attached storage (PersistentVolumes); and consistent DNS names. In the pod spec I would launch only a single copy of the server and use the StatefulSet's replica count to control the number of "nodes" (also a Kubernetes term), if the replica is #0 then it's the master.</p>
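<p>A minimal sketch of what that could look like for the workers; the image name, mount path and storage size are assumptions to adapt to your setup:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: citus-worker
spec:
  serviceName: citus-worker      # headless Service that gives each pod a stable DNS name
  replicas: 8
  selector:
    matchLabels:
      app: citus-worker
  template:
    metadata:
      labels:
        app: citus-worker
    spec:
      containers:
      - name: citus
        image: my-citus-image:latest   # assumption: your Citus image
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:          # one PersistentVolume (GCP PD) per replica, reattached on reschedule
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
</code></pre>
<p>With a matching headless Service, the pods keep predictable names like <code>citus-worker-0.citus-worker</code> through <code>citus-worker-7.citus-worker</code>, which your master's post-mount script can use when registering the worker nodes.</p>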
|
<p>I am unable to connect to an exposed IP for a docker container deployed on Google Could Kubernetes. I have been roughly following <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app" rel="noreferrer">this tutorial</a> but using my own application.</p>
<p>The deployment seems to work fine, everything is green and running when visiting the cloud dashboard but when trying to visit the deployed application on the exposed IP, I get a browser error:</p>
<blockquote>
<p>This site can’t be reached
35.231.27.158 refused to connect</p>
</blockquote>
<p>If I ping the IP I do get a response back.</p>
<p><code>kubectl get pods</code> produces the following:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
mtg-dash-7874f6c54d-nktjn 1/1 Running 0 21m
</code></pre>
<p>and <code>kubectl get service</code> shows the following:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.7.240.1 <none> 443/TCP 23m
mtg-dash LoadBalancer 10.7.242.240 35.231.27.158 80:30306/TCP 20m
</code></pre>
<p>and <code>kubectl describe svc</code> show the following:</p>
<pre><code>Name: mtg-dash
Namespace: default
Labels: run=mtg-dash
Annotations: <none>
Selector: run=mtg-dash
Type: LoadBalancer
IP: 10.7.242.240
LoadBalancer Ingress: 35.231.27.158
Port: <unset> 80/TCP
TargetPort: 8080/TCP
NodePort: <unset> 30306/TCP
Endpoints: 10.4.0.6:8080
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 37m service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 37m service-controller Ensured load balancer
</code></pre>
<p>My <code>Dockerfile</code> contains the following:</p>
<pre><code>FROM nginx:alpine
COPY nginx.conf /etc/nginx/nginx.conf
WORKDIR /usr/share/nginx/html
COPY dist/mtg-dash .
</code></pre>
<p>I have a feeling I have missed something obvious. </p>
<p><strong>What more do I need to do to configure this to be accessible on the internet?</strong></p>
<p>Here is a screenshot of running service:</p>
<p><a href="https://i.stack.imgur.com/h8SNP.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/h8SNP.jpg" alt="enter image description here"></a> </p>
| <p>As per the comments the target port should be 80 since that is what the application is set to listen on. Glad I could help. Picked it up from the documentation a month or so ago.</p>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a></p>
|
<p>What happened:
Installed kubectl-cli through brew. At the time, the kubectl version on my PC was 1.11.0 and the server was 1.9.3.
While I was working on my cluster I tried to delete a pod named X; the command got stuck, so I had to stop it (ctrl+c), and pod X got stuck in a Terminating state.
I tried to SSH into the hosting node and manually delete the pod with no success, and also tried to delete the pod with --force and/or --grace-period=0/--now.</p>
<p>Is there a way I can remove the pod?</p>
<p>What you expected to happen:</p>
<p>Pod X was deleted successfully from the cluster</p>
<p>How to reproduce it (as minimally and precisely as possible):</p>
<p>Install kubectl-cli version 1.11.0 against a 1.9.3 server</p>
<p>Environment:</p>
<pre><code>Cloud provider or hardware configuration:
AWS using kops
Kernel (e.g. uname -a):
Darwin h-MacBook-Pro-sl-ido.local 17.6.0 Darwin Kernel Version 17.6.0: Tue May 8 15:22:16 PDT 2018; root:xnu-4570.61.1~1/RELEASE_X86_64 x86_64
</code></pre>
| <p>I provided <a href="https://stackoverflow.com/a/51138459/9485673">the answer</a> to the same problem:</p>
<blockquote>
<p>From <a href="https://github.com/kubernetes/community/blob/master/contributors/design-proposals/release/versioning.md#supported-releases-and-component-skew" rel="nofollow noreferrer">documentation</a></p>
<p>a client should be skewed no more than one minor version from the
master, but may lead the master by up to one minor version. For
example, a v1.3 master should work with v1.1, v1.2, and v1.3 nodes,
and should work with v1.2, v1.3, and v1.4 clients</p>
</blockquote>
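<p>You can quickly check the skew between your client and the server with:</p>
<pre><code>kubectl version --short
</code></pre>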
|
<p>I configured kubernetes cluster with one master and one node, the machines that run master and node aren't in the same network. For networking I installed calico and all the pods are running. For testing the cluster I used <a href="https://kubernetes.io/docs/tasks/debug-application-cluster/get-shell-running-container/" rel="nofollow noreferrer">get shell example</a> and when I run the following command from master machine:</p>
<pre><code>kubectl exec -it shell-demo -- /bin/bash
</code></pre>
<p>I received the error:</p>
<pre><code>Error from server: error dialing backend: dial tcp 10.138.0.2:10250: i/o timeout
</code></pre>
<p>The ip 10.138.0.2 is on eth0 interface on the node machine. </p>
<p>What configuration do I need to make to access the pod from master?</p>
<p><strong>EDIT</strong></p>
<p>kubectl get all --all-namespaces -o wide output:</p>
<pre><code>default shell-demo 1/1 Running 0 10s 192.168.4.2 node-1
kube-system calico-node-7wlqw 2/2 Running 0 49m 10.156.0.2 instance-1
kube-system calico-node-lnk6d 2/2 Running 0 35s 10.132.0.2 node-1
kube-system coredns-78fcdf6894-cxgc2 1/1 Running 0 50m 192.168.0.5 instance-1
kube-system coredns-78fcdf6894-gwwjp 1/1 Running 0 50m 192.168.0.4 instance-1
kube-system etcd-instance-1 1/1 Running 0 49m 10.156.0.2 instance-1
kube-system kube-apiserver-instance-1 1/1 Running 0 49m 10.156.0.2 instance-1
kube-system kube-controller-manager-instance-1 1/1 Running 0 49m 10.156.0.2 instance-1
kube-system kube-proxy-b64b5 1/1 Running 0 50m 10.156.0.2 instance-1
kube-system kube-proxy-xxkn4 1/1 Running 0 35s 10.132.0.2 node-1
kube-system kube-scheduler-instance-1 1/1 Running 0 49m 10.156.0.2 instance-1
</code></pre>
<p>Thanks!</p>
| <p>Before checking the status on the master, please verify the following.</p>
<p>Run the commands below to open the required ports and enable bridge netfilter:</p>
<pre><code>setenforce 0
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=2379-2380/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --permanent --add-port=10251/tcp
firewall-cmd --permanent --add-port=10252/tcp
firewall-cmd --permanent --add-port=10255/tcp
firewall-cmd --reload
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
</code></pre>
<p>Run the above commands on both the master and the worker node.</p>
<p>Then run the command below to check the node status:</p>
<p><strong>kubectl get nodes</strong></p>
|
<p>I'm moving an Apache Mesos application, where my configurations are based in JSON, to Kubernetes, where my configurations are based in YAML. Would the JSON configuration files work as a YAML file since YAML is a superset of JSON, or would I need to write a new YAML file?</p>
| <p>Yes, JSON works as well, it's just more pain than YAML to write it, manually. Also, you may be able to use <a href="https://github.com/micahhausler/container-transform" rel="nofollow noreferrer">micahhausler/container-transform</a> to convert your Marathon specs to Kubernetes specs.</p>
|
<p>I have Kubernetes cluster, hosted on Google cloud platform, that running two deployments: <code>app1</code> and <code>app2</code>.</p>
<p>My cluster have two Node-Pools: <code>pool1</code> and <code>pool2</code>. </p>
<p>Deploying pods, from yaml file, like this: </p>
<pre><code>kubectl apply -f ./app1.yaml
kubectl apply -f ./app2.yaml
</code></pre>
<p>Actually, it deploys both pods to <code>pool1</code>, which is the cluster's "default-pool".</p>
<p>I want to change this behavior to deploy <code>app2</code> to <code>pool2</code> (and <code>app1</code> to <code>pool1</code> - as before, default). Looking to pseudo command like this:</p>
<pre><code>kubectl apply -f ./app1.yaml
kubectl apply -f ./app2.yaml --pool=pool2
</code></pre>
<p>explicitly giving the pool name when publishing a new pod.</p>
<p>How to do it right? </p>
<p>Thanks!</p>
| <p>You need labels that differentiate the nodes in each pool. Then use a nodeSelector or node affinity to bind pods to nodes with those labels. If you have to drive this from the command line, I believe it would be possible via overrides, but it won't look nearly as pretty as your desired line.</p>
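<p>For example, on GKE every node already carries a <code>cloud.google.com/gke-nodepool</code> label, so a <code>nodeSelector</code> in app2.yaml's pod template is usually enough; a sketch:</p>
<pre><code>spec:
  template:
    spec:
      nodeSelector:
        cloud.google.com/gke-nodepool: pool2
</code></pre>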
|
<p>I am running jobs in Kubernetes pods and I need to measure the execution time of each job.</p>
<p>I want to get it through some api.</p>
<p>Does anyone know how can I get it ?</p>
| <p>A Job has a property named <code>status</code> of type <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#jobstatus-v1-batch" rel="nofollow noreferrer">JobStatus</a>.</p>
<p>The properties you are looking for in the <code>JobStatus</code> type are <code>startTime</code> and <code>completionTime</code>, which, as the names suggest, indicate when the job started and completed. The difference between these two values gives you the duration of the job's execution.</p>
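<p>For example, both timestamps can be read with kubectl (or the equivalent call against the Jobs API); the job name below is a placeholder:</p>
<pre><code>kubectl get job my-job -o jsonpath='{.status.startTime} {.status.completionTime}'
</code></pre>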
|
<p>How do I restore a kubernetes cluster using kops?
I have the kubernetes state files in my S3 bucket.</p>
<p>Is there a way to restore kubernetes cluster using kops?</p>
| <p>As you mention, kops stores the state of the cluster in an S3 bucket. If you run <code>kops create cluster</code> with the same state file, it will recreate the cluster as it was before, with the same instance groups and master configuration. This assumes the cluster has been deleted; if not, you'll need to use the <code>kops update cluster</code> command, which should bring the cluster back to your desired state if it has diverged.</p>
<p>However, this doesn't cover the resources and deployments inside the cluster, and to achieve a full recovery, you may want to recover those deployments. In order to achieve this, you'll need to backup the etcd datastore used by Kubernetes. <a href="https://github.com/kubernetes/kops/blob/master/docs/operations/etcd_backup_restore_encryption.md" rel="nofollow noreferrer">This document</a> covers this in more detail.</p>
<p>You may also want to consider using something like <a href="https://github.com/vmware-tanzu/velero" rel="nofollow noreferrer">Velero</a> for backing up the etcd state</p>
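<p>As a rough sketch, assuming your state bucket is <code>s3://my-kops-state</code> (an assumption; use your own bucket name):</p>
<pre><code>export KOPS_STATE_STORE=s3://my-kops-state   # assumption: your state bucket
kops get clusters                            # confirm the cluster spec is still in the state store
kops update cluster <clustername> --yes      # (re)create the cloud resources to match the stored spec
</code></pre>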
|
<p>I've launched a kubernetes cluster using kops. It was working fine, and then I started facing the following problem:</p>
<pre><code>kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>
<p>How do I solve this?
It looks like the kubernetes-apiserver is not running. How do I get this working?</p>
<pre><code>kubectl run nginx --image=nginx:1.10.0
error: failed to discover supported resources: Get http://localhost:8080/apis/apps/v1beta1?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused
</code></pre>
<p>Please suggest</p>
| <p>Kubernetes uses a <code>$KUBECONFIG</code> file for connecting to clusters. It may be that, when provisioning your kops cluster, kops didn't write that file correctly; I can't be sure, as you haven't provided enough info.</p>
<p>Assuming this is the issue, and you only have a single cluster, it can be resolved like so:</p>
<pre><code># Find your cluster name
kops get clusters
# set the clustername as a var
clustername=<clustername>
# export the KUBECONFIG variable, which kubectl uses to find the kubeconfig file
export KUBECONFIG=~/.kube/${clustername}
# download the kubeconfig file locally using kops
kops export kubecfg --name ${clustername}
</code></pre>
<p>You can find more information about the <code>KUBECONFIG</code> file <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="noreferrer">here</a></p>
|
<p>Unable to start minikube in my Mac machine. Information about the kubernetes, minikube versions as well as the error is given in detail below.</p>
<pre><code>kubernetes-cli version
kubernetes-cli 1.11.0
minikube version
minikube version: v0.28.0
minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E0717 16:19:06.522428 87230 start.go:299] Error restarting cluster: running cmd:
sudo kubeadm alpha phase certs all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase kubeconfig all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase controlplane all --config /var/lib/kubeadm.yaml &&
sudo /usr/bin/kubeadm alpha phase etcd local --config /var/lib/kubeadm.yaml
: Process exited with status 1
================================================================================
An error has occurred. Would you like to opt in to sending anonymized crash
information to minikube to help prevent future errors?
To opt out of these messages, run the command:
minikube config set WantReportErrorPrompt false
================================================================================
Please enter your response [Y/n]:
y
</code></pre>
| <p>Minikube should work "out of the box" according to the documentation. Are you using VirtualBox or a native hypervisor? There might be an issue with insufficient resources, so please check that.
By default minikube uses 2 CPUs and 2048 megabytes of RAM, as specified <a href="https://github.com/kubernetes/minikube/blob/232080ae0cbcf9cb9a388eb76cc11cf6884e19c0/pkg/minikube/constants/constants.go#L102" rel="nofollow noreferrer">here</a>. You can also influence the minikube VM size using the --cpus and/or --memory flags.</p>
<p><code>minikube start --cpus 4 --memory 8192</code> </p>
<p>If resources are fine, try deleting the cluster and running minikube start again with verbose mode and post the results:</p>
<pre><code>minikube delete
minikube start -v=2
</code></pre>
<p>From <a href="https://github.com/kubernetes/minikube/blob/master/docs/debugging.md" rel="nofollow noreferrer">Debugging Issues with Minikube</a> </p>
<blockquote>
<p>To debug issues with minikube (not Kubernetes but minikube itself),
you can use the -v flag to see debug level info. The specified values
for v will do the following (the values are all encompassing in that
higher values will give you all lower value outputs as well):</p>
<ul>
<li>--v=0 INFO level logs</li>
<li>--v=1 WARNING level logs</li>
<li>--v=2 ERROR level logs</li>
<li>--v=3 libmachine logging</li>
<li>--v=7 libmachine --debug level logging</li>
</ul>
</blockquote>
|
<p>I am configuring a StatefulSet where I want the number of replicas (<strong>spec.replicas</strong> as shown below) to somehow be passed as a parameter into the application instance. My application needs <strong>spec.replicas</strong> to determine the number of replicas so it knows what rows to load from a MySQL table. I don't want to hard-code the number of replicas in both <strong>spec.replicas</strong> and the application parameter as that will not work when scaling the number of replicas up or down, since the application parameter needs to adjust when scaling. </p>
<p>Here is my StatefulSet config:</p>
<pre>
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
labels:
run: my-app
name: my-app
namespace: my-ns
spec:
replicas: 3
selector:
matchLabels:
run: my-app
serviceName: my-app
podManagementPolicy: Parallel
template:
metadata:
labels:
run: my-app
spec:
containers:
- name: my-app
image: my-app:latest
command:
- /bin/sh
- /bin/start.sh
- dev
- 2000m
- "0"
- "3" **Needs to be replaced with # replicas**
- 127.0.0.1
- "32990"
imagePullPolicy: Always
livenessProbe:
httpGet:
path: /health
port: 8081
initialDelaySeconds: 180
periodSeconds: 10
timeoutSeconds: 3
readinessProbe:
failureThreshold: 10
httpGet:
path: /ready
port: 8081
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 15
successThreshold: 1
timeoutSeconds: 3
ports:
- containerPort: 8080
protocol: TCP
resources:
limits:
memory: 2500Mi
imagePullSecrets:
- name: snapshot-pull
restartPolicy: Always
</pre>
<p>I have read the Kubernetes docs and the <strong>spec.replicas</strong> field is scoped at the pod or container level, never the StatefulSet, at least as far as I have seen.</p>
<p>Thanks in advance.</p>
| <p>Normally you would use the downward api for this kind of thing. <a href="https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/</a></p>
<p>However it is currently not possible for kubernetes to propagate deployment/statefulset spec data into the pod spec with the downward api, nor should it be. If you are responsible for this software, I'd set up some internal functionality so that it can find its peers and determine their count periodically.</p>
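<p>A common pattern, sketched below, is to inject the pod's own name via the downward API (StatefulSet pods are named with a fixed ordinal, e.g. my-app-0, my-app-1, …) and have the application, or a small startup script, query the API server for the current replica count instead of taking it as a fixed argument:</p>
<pre><code>env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name   # e.g. my-app-2 -> ordinal 2
</code></pre>
<p>The current replica count can then be fetched with something like <code>kubectl get statefulset my-app -o jsonpath='{.spec.replicas}'</code> (or the equivalent API call with a suitable RBAC role), so it stays correct when you scale up or down.</p>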
|
<p>I understand with Prometheus we can set up alerting rules which can detect and raise an alert if a pod crashes. </p>
<p>I want to understand how Prometheus itself knows when a pod has crashed or is stuck in a pending state.</p>
<ul>
<li>Does it know this when it is trying to scrape metrics from pod's http endpoint port?</li>
</ul>
<p>OR</p>
<ul>
<li>Does Prometheus get the pod status information from Kubernetes?</li>
</ul>
<p>The reason why I'm asking this is because I want to set up Prometheus to monitor existing pods that I have already deployed. I want to be alerted if a pod keeps crashing or if it is stuck in pending state. And I want to know if Prometheus can detect these alerts without making any modifications to the code inside the existing pods.</p>
| <p>The common way for Prometheus to collect metrics and health information is scraping, most commonly over an HTTP endpoint. Since pods can have multiple containers, it is best to scrape an HTTP endpoint exposed by your running container.</p>
<p>If Prometheus doesn't receive a good response from that endpoint, it marks the target as down (its <code>up</code> metric becomes 0), which you can alert on.</p>
<p>Prometheus evaluates the alerting rules itself, but sending the notifications is normally delegated to the Alertmanager.</p>
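<p>One common approach, sketched below, is to scrape <code>kube-state-metrics</code> (which requires no change to your existing pods) and alert on its pod-status series; the metric names assume a standard kube-state-metrics deployment:</p>
<pre><code>groups:
- name: pod-health
  rules:
  - alert: PodCrashLooping
    expr: rate(kube_pod_container_status_restarts_total[15m]) > 0
    for: 15m
  - alert: PodStuckPending
    expr: kube_pod_status_phase{phase="Pending"} == 1
    for: 15m
</code></pre>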
|
<p>I have a single node kubernetes deployment running on a home server, on which I have several services running. Since it's a small local network, I wanted to block off a portion of the local address range that the rest of my devices use for pod ips, and then route to them directly.</p>
<p>For example, if I have a web server running, instead of exposing port 80 as an external port and port forwarding from my router to the worker node, I would be able to port forward directly to the pod ip.</p>
<p>I haven't had much luck finding information on how to do this though, is it possible? </p>
<p>I'm new to kubernetes so I am sure I am leaving out important information, please let me know so I can update the question.</p>
<hr>
<p>I got this working by using the macvlan CNI plugin from the <a href="https://github.com/containernetworking/cni" rel="nofollow noreferrer">reference plugins</a>. Using kubeadm to set up the cluster, these plugins are already installed and the cluster will be configured to use them. The only thing to do is drop in a cni.conf (in <code>/etc/cni/net.d</code>). Mine looks like this </p>
<pre><code>{
"name": "net",
"type": "macvlan",
"mode": "bridge",
"master": "eno1",
"ipam": {
"type": "host-local",
"ranges": [[{
"subnet": "10.0.0.0/8",
"gateway": "10.0.0.1",
"rangeStart": "10.0.10.2",
"rangeEnd": "10.0.10.254"
}]],
"routes": [
{ "dst": "0.0.0.0/0" }
]
}
}
</code></pre>
<p>Putting this in place is all that is needed for coredns to start up, and any pods you run will have IPs from the range defined in the config. Since this is on the same subnet as the rest of my LAN, I can freely ping these containers, and my router even lets me play with their settings since they have MAC addresses (if you don't want this, use ipvlan instead of macvlan; you'll still be able to ping and port forward and everything, your router just won't be enumerating all the devices since they don't have hardware addresses). </p>
<p>Couple of caveats:</p>
<ol>
<li><p>Services won't work since they're all "fake" (they don't have interfaces; it's all iptables magic that makes them work). There's probably a way to make them work, but it wasn't worth it for my use case.</p></li>
<li><p>For whatever reason the DNS server keeps reverting to 10.96.0.1. I have no idea where it got that address from, but I have been working around it by defining <code>dnsPolicy: None</code> and setting <code>dnsConfig.nameservers[0]</code> to my router's IP. There's probably a better solution for it.</p></li>
<li><p>You should run kubeadm with <code>--service-cidr 10.0.10.0/24 --pod-network-cidr 10.0.10.0/24</code> or it seems like kubelet (or something) doesn't know how to talk to the pods. I actually don't know if <code>--service-cidr</code> matters, but it seems like a good idea.</p></li>
<li><p>Out of the box, your pods won't be able to talk to the master since they are using macvlan devices enslaving its ethernet, and for whatever reason macvlan doesn't let you talk between host and guest devices. As you can imagine this isn't a good thing. The solution is to manually add a macvlan device on the host with the same subnet as your pods.</p></li>
<li><p>It seems like even ports you don't expose from the pod are usable from the LAN devices (which isn't cool), probably since the iptables rules think that anything on the LAN is cluster-internal. I haven't put much time into debugging this.</p></li>
</ol>
<p>This is probably some kind of cardinal sin for people used to using kubernetes in production, but it's kind of cool and useful for a home setup, though it certainly feels like a hack sometimes. </p>
| <p>I believe the answer to your question is to use the <a href="https://github.com/containernetworking/plugins#ipam-ip-address-allocation" rel="nofollow noreferrer">dhcp IPAM</a> plugin to <a href="https://github.com/containernetworking/cni#readme" rel="nofollow noreferrer">CNI</a>, but being mindful about Pod address recycling. I say be mindful because it <em>might not</em> matter, unless you have high frequency Pod termination, but on the other hand I'm not sure where it falls on the Well That's Unfortunate™ spectrum if a Pod IP is recycled in the cluster.</p>
<p>The bad news is that I have not had any experience with these alternative CNI plugins to be able to speak to the sharp edges one will need to be mindful of, so hopefully if someone else has then they can chime in.</p>
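<p>For reference, a minimal sketch of what the dhcp IPAM section could look like in a config such as the one in the question; keep in mind the dhcp plugin also needs its companion daemon (<code>/opt/cni/bin/dhcp daemon</code>) running on each host so it can request leases on the pods' behalf:</p>
<pre><code>{
  "name": "net",
  "type": "macvlan",
  "mode": "bridge",
  "master": "eno1",
  "ipam": {
    "type": "dhcp"
  }
}
</code></pre>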
|
<p>I want to deploy kafka on kubernetes.</p>
<p>Because I will be streaming with high bandwidth from the internet to kafka I want to use the hostport and advertise the hosts "dnsName:hostPort" to zookeeper so that all traffic goes directly to the kafka broker (as opposed to using nodeport and a loadbalancer where traffic hits some random node which redirects it creating unnecessary traffic).</p>
<p>I have setup my kubernetes cluster on amazon. With <code>kubectl describe node ${nodeId}</code> I get the internalIp, externalIp, internal and external Dns name of the node. </p>
<p>I want to pass the externalDns name to the kafka broker so that it can use it as advertise host.</p>
<p>How can I pass that information to the container? Ideally I could do this from the deployment yaml but I'm also open to other solutions.</p>
| <blockquote>
<p>How can I pass that information to the container? Ideally I could do this from the deployment yaml but I'm also open to other solutions.</p>
</blockquote>
<p>The first thing I would try is the downward API (<code>env:</code> with <code>valueFrom: fieldRef:</code>) and see if it will let you reach into the <code>PodSpec</code>'s <code>spec.nodeName</code> field to grab the node name. I deeply appreciate that isn't the <code>ExternalDnsName</code> you asked about, but if <code>fieldRef</code> works, it could be a lot less typing and thus could be a good tradeoff.</p>
<p>But, with "I'm also open to other solutions" in mind: don't forget that -- unless instructed otherwise -- each Pod is able to interact with the kubernetes API, and with the correct RBAC permissions it can request the very information you're seeking. You can do that either as a <code>command:</code> override, to do setup work before launching the kafka broker, or you can do that work in an <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">init container</a>, write the external address into a shared bit of filesystem (with <code>volume: emptyDir: {}</code> or similar), and then any glue code for slurping that value into your kafka broker.</p>
<p>I am 100% certain that the <code>valueFrom: fieldRef:</code> construct mentioned earlier can acquire the <code>metadata.name</code> and <code>metadata.namespace</code> of the Pod, at which point the Pod can ask the kubernetes API for its own <code>PodSpec</code>, extract the <code>nodeName</code> from its spec, then ask the kubernetes API for the Node info, and voilà, you have all the information kubernetes knows about that Node.</p>
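<p>A sketch of both halves: the downward API part is standard, while the node lookup (shown here with kubectl for clarity) needs the RBAC permissions mentioned above and assumes the node exposes an ExternalDNS address, as AWS nodes typically do:</p>
<pre><code>env:
- name: NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
</code></pre>
<pre><code>kubectl get node "$NODE_NAME" \
  -o jsonpath='{.status.addresses[?(@.type=="ExternalDNS")].address}'
</code></pre>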
|
<p>I have a kubernetes cluster and I have started MySQL via kubectl. I am confused about the JDBC URL to use in application.yml. I have tried multiple IP addresses obtained by describing pods, services etc. It keeps erroring out with "<strong>communications link failure</strong>".</p>
<p>Below is my <code>mysql-deployment.yml</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mysql
spec:
#type: NodePort
ports:
- port: 3306
#targetPort: 3306
#nodePort: 31000
selector:
app: mysql
clusterIP: None
---
apiVersion: v1
kind: Secret
metadata:
name: mysql-secret
type: Opaque
data:
MYSQL_ROOT_PASSWORD: cGFzc3dvcmQ= #password
MYSQL_DATABASE: dGVzdA== #test
MYSQL_USER: dGVzdHVzZXI= #testuser
MYSQL_PASSWORD: dGVzdDEyMw== #test123
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-pv-claim
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
spec:
containers:
- image: mysql:5.7
name: mysql
env:
# Use secret in real usage
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: MYSQL_ROOT_PASSWORD
- name: MYSQL_DATABASE
valueFrom:
secretKeyRef:
name: mysql-secret
key: MYSQL_DATABASE
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: mysql-secret
key: MYSQL_USER
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-secret
key: MYSQL_PASSWORD
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-pv-claim
</code></pre>
| <p>Your K8S service should expose port and targetPort 3306 and in your JDBC URL use the name of that service:
<code>jdbc:mysql://mysql/database</code></p>
<p>If your MySQL is a backend service only for apps running in K8S you don't need nodePort in the service manifest.</p>
<p>If you get a <code>SQLException: Connection refused</code> or <code>Connection timed out</code> or a MySQL specific <code>CommunicationsException: Communications link failure</code>, then it means that the DB isn't reachable at all. </p>
<p>This can have one or more of the following causes:</p>
<ul>
<li><p>IP address or hostname in JDBC URL is wrong.</p></li>
<li><p>Hostname in JDBC URL is not recognized by local DNS server.</p></li>
<li><p>Port number is missing or wrong in JDBC URL.</p></li>
<li><p>DB server is down.</p></li>
<li><p>DB server doesn't accept TCP/IP connections.</p></li>
<li><p>DB server has run out of connections.</p></li>
<li><p>Something in between Java and DB is blocking connections, e.g. a firewall or proxy. </p></li>
</ul>
<p>I suggest these steps to better understand the problem: </p>
<ul>
<li><p>Connect to MySQL pod and verify the content of the
<code>/etc/mysql/my.cnf</code> file</p></li>
<li><p>Connect to MySQL from inside the pod to verify it works</p></li>
<li><p>Remove <code>clusterIP: None</code> from the Service manifest (see the sketch below)</p></li>
</ul>
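<p>Putting the opening suggestion and the last point together, a sketch of the adjusted Service plus the matching JDBC URL (the database name <code>test</code> is taken from your secret):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  selector:
    app: mysql
  ports:
  - port: 3306
    targetPort: 3306
</code></pre>
<p>With that in place the URL becomes <code>jdbc:mysql://mysql:3306/test</code>, or <code>jdbc:mysql://mysql.default.svc.cluster.local:3306/test</code> if your application runs in a different namespace than <code>default</code>.</p>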
|
<p>I'm having an issue logging to Stackdriver from my golang api. My configuration:</p>
<ul>
<li>GKE cluster running on three compute engine instances</li>
<li>Logging and monitoring enabled on GKE container cluster</li>
<li>Go service behind ESP</li>
<li>There are three nodes in the kube-system namespace running the <code>fluentd</code> image. Names as fluentd-cloud-logging-xxx-xxx-xxx-default-pool-nnnnnn.</li>
<li>When I run my service locally (generated using go-swagger) text entries are sent to the <code>global</code> stackdriver log. Nothing when I deploy to my k8s cluster though.</li>
</ul>
<p>Code I'm using is like: </p>
<pre><code> api.Logger = func(text string, args ...interface{}) {
ctx := context.Background()
// Sets your Google Cloud Platform project ID.
projectID := "my-project-name"
// Creates a client.
client, err := logging.NewClient(ctx, projectID)
if err != nil {
log.Fatalf("Failed to create client: %v", err)
}
// Sets the name of the log to write to.
logName := "tried.various.different.names.with.no.luck"
// Selects the log to write to.
logger := client.Logger(logName)
// Sets the data to log.
textL := fmt.Sprintf(text, args...)
// Adds an entry to the log buffer.
logger.Log(logging.Entry{Payload: textL, Severity: logging.Critical})
// Closes the client and flushes the buffer to the Stackdriver Logging
// service.
if err := client.Close(); err != nil {
log.Fatalf("Failed to close client: %v", err)
}
fmt.Printf("Logged: %v\n", textL)
}
</code></pre>
<p>I don't need error reporting at the moment as I'm just evaluating - I'd be happy with just sending unstructured text. But it's not clear whether I need to do something like <a href="https://stackoverflow.com/questions/36379572/how-to-setup-error-reporting-in-stackdriver-from-kubernetes-pods/36476771#36476771">Boris' discussion on error reporting/stack traces from Kubernetes pods</a> just to get that?</p>
| <p>Log entries created with the Stackdriver logging client do not seem to be categorized under any of the predefined categories, making them very difficult to find in the Logs Viewer's basic mode.</p>
<p>Try the Logs Viewer's advanced filter interface by converting the query to an <a href="https://cloud.google.com/logging/docs/view/advanced-filters#getting_started_with_advanced_filters" rel="nofollow noreferrer">advanced filter</a> and creating the following filter:</p>
<pre><code>logName:"projects/my-project-name/logs/tried.various.different.names.with.no.luck"
</code></pre>
<p>That worked for me at least.</p>
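<p>If the terminal is more convenient, the same filter should also work with the gcloud CLI (project and log names taken from the question):</p>
<pre><code>gcloud logging read 'logName="projects/my-project-name/logs/tried.various.different.names.with.no.luck"' --limit 10
</code></pre>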
|
<p>The <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/secret" rel="noreferrer">docs states</a> that </p>
<blockquote>
<p>To create a Secret from one or more files, use --from-file. You
specify files in any plaintext format, such as .txt or .env, as long
as the files contain key-value pairs.</p>
</blockquote>
<p><em>.test-secret</em></p>
<pre><code>NAME=martin
GENDER=male
</code></pre>
<p><strong>Testing to create a secret based on my .test-secret file.</strong></p>
<pre><code>kubectl create secret generic person --from-file .test-secret -o yml
$ kubectl get secret person -o yaml
apiVersion: v1
data:
.test-secret: TkFNRT1tYXJ0aW4KR0VOREVSPW1hbGUK
kind: Secret
metadata:
creationTimestamp: 2018-07-19T09:23:05Z
name: person
namespace: default
resourceVersion: "229992"
selfLink: /api/v1/namespaces/default/secrets/person
uid: 579198ab-8b35-11e8-8895-42010a840008
type: Opaque
</code></pre>
<p>Is it possible to read a list of key / values like that? Is it even possible to do so from an <code>.env</code> file? <code>kubectl get pods</code> returns <em>CreateContainerConfigError</em></p>
<p><strong>my-app.yml</strong></p>
<pre><code> 77 - name: NAME
78 valueFrom:
79 secretKeyRef:
80 name: person
81 key: NAME
</code></pre>
| <p>Yes, use the option --from-env-file</p>
<pre><code>kubectl create secret generic person --from-env-file=.test-secret
</code></pre>
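<p>With <code>--from-env-file</code> each key/value pair in the file should become its own entry in the secret's data map (rather than one file-named key), so <code>kubectl get secret person -o yaml</code> ought to look roughly like this (the values are just the base64 of <code>martin</code> and <code>male</code>):</p>
<pre><code>apiVersion: v1
data:
  GENDER: bWFsZQ==
  NAME: bWFydGlu
kind: Secret
metadata:
  name: person
  namespace: default
type: Opaque
</code></pre>
<p>This should also make the <code>secretKeyRef</code> with <code>key: NAME</code> from <em>my-app.yml</em> resolve correctly, clearing the <em>CreateContainerConfigError</em>.</p>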
<p>To consume the secrets from the initial .env file in a pod, you can use the following :</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: some-meta
spec:
containers:
- name: xyz
image: abc
envFrom:
- secretRef:
name: person # <--
</code></pre>
|
<p>I have Kubernetes cluster, hosted on Google cloud platform, that running two deployments: <code>app1</code> and <code>app2</code>.</p>
<p>My cluster have two Node-Pools: <code>pool1</code> and <code>pool2</code>. </p>
<p>Deploying pods, from yaml file, like this: </p>
<pre><code>kubectl apply -f ./app1.yaml
kubectl apply -f ./app2.yaml
</code></pre>
<p>Actually, it deploys both pods to <code>pool1</code>, which is the cluster's "default-pool".</p>
<p>I want to change this behavior so that <code>app2</code> is deployed to <code>pool2</code> (and <code>app1</code> to <code>pool1</code> - as before, the default). I'm looking for a pseudo command like this:</p>
<pre><code>kubectl apply -f ./app1.yaml
kubectl apply -f ./app2.yaml --pool=pool2
</code></pre>
<p>i.e. explicitly give the pool name when publishing a new pod.</p>
<p>How to do it right? </p>
<p>Thanks!</p>
| <p>You can have two .yaml files for two deployments and you can select the node pool as follows.</p>
<pre><code>nodeSelector:
nodeclass: pool1
</code></pre>
<p>Add the above code to your yaml file.</p>
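<p>Note that <code>nodeclass: pool1</code> is a custom label you would have to attach to the nodes yourself. On GKE the nodes of every node pool already carry a <code>cloud.google.com/gke-nodepool</code> label, so a sketch for <em>app2.yaml</em> could simply rely on that (deployment name, labels and image are assumptions):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: app2
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: app2
    spec:
      # schedule app2 only on nodes belonging to pool2
      nodeSelector:
        cloud.google.com/gke-nodepool: pool2
      containers:
      - name: app2
        image: gcr.io/my-project/app2:latest
</code></pre>
<p><code>app1.yaml</code> can stay as it is, or pin itself to <code>pool1</code> the same way.</p>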
|
<p>I'm trying to accomplish a VERY common task for an application: </p>
<p>Assign a certificate and secure it with TLS/HTTPS.</p>
<p>I've spent nearly a day scouring thru documentation and trying multiple different tactics to get this working but nothing is working for me.</p>
<p>Initially I setup nginx-ingress on EKS using Helm by following the docs here: <a href="https://github.com/nginxinc/kubernetes-ingress" rel="noreferrer">https://github.com/nginxinc/kubernetes-ingress</a>. I tried to get the sample app working (cafe) using the following config:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: cafe-ingress
spec:
tls:
- hosts:
- cafe.example.com
secretName: cafe-secret
rules:
- host: cafe.example.com
http:
paths:
- path: /tea
backend:
serviceName: tea-svc
servicePort: 80
- path: /coffee
backend:
serviceName: coffee-svc
servicePort: 80
</code></pre>
<p>The ingress and all supported services/deploys worked fine but there's one major thing missing: the ingress doesn't have an associated address/ELB:</p>
<pre><code>NAME HOSTS ADDRESS PORTS AGE
cafe-ingress cafe.example.com 80, 443 12h
</code></pre>
<p>Service LoadBalancers create ELB resources, i.e.:</p>
<pre><code>testnodeapp LoadBalancer 172.20.4.161 a64b46f3588fe... 80:32107/TCP 13h
</code></pre>
<p>However, the Ingress is not creating an address. How do I get an Ingress controller exposed externally on EKS to handle TLS/HTTPS? </p>
| <p>I've replicated every step necessary to get up and running on EKS with a secure ingress. I hope this helps anybody else that wants to get their application on EKS quickly and securely.</p>
<p>To get up and running on EKS:</p>
<ol>
<li><p>Deploy EKS using the CloudFormation template <a href="https://gist.github.com/kjenney/356cf4bb029ec0bb7f78fad8230530d5" rel="noreferrer">here</a>: Keep in mind that I've restricted access with the CidrIp: 193.22.12.32/32. Change this to suit your needs.</p></li>
<li><p>Install Client Tools. Follow the guide <a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html" rel="noreferrer">here</a>.</p></li>
<li>Configure the client. Follow the guide <a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#eks-configure-kubectl" rel="noreferrer">here</a>.</li>
<li>Enable the worker nodes. Follow the guide <a href="https://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html" rel="noreferrer">here</a>.</li>
</ol>
<p>You can verify that the cluster is up and running and you are pointing to it by running:</p>
<p><code>kubectl get svc</code></p>
<p>Now you launch a test application with the nginx ingress.</p>
<p>NOTE: <strong><em>Everything is placed under the ingress-nginx namespace. Ideally this would be templated to build under different namespaces, but for the purposes of this example it works.</em></strong></p>
<p>Deploy nginx-ingress:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml
</code></pre>
<p>Fetch rbac.yml from <a href="https://gist.github.com/kjenney/e1cd655ec2c646c7a700eac7b6488422" rel="noreferrer">here</a>. Run:</p>
<p><code>kubectl apply -f rbac.yml</code></p>
<p>Have a certificate and key ready for testing. Create the necessary secret like so:</p>
<p><code>kubectl create secret tls cafe-secret --key mycert.key --cert mycert.crt -n ingress-nginx</code></p>
<p>Copy coffee.yml from <a href="https://gist.github.com/kjenney/c0f9acc33bf6b9b38c9e1cc3102efb1b" rel="noreferrer">here</a>. Copy coffee-ingress.yml from <a href="https://gist.github.com/kjenney/cd980c01f7b1008b3ef61c99a6453bc8" rel="noreferrer">here</a>. Update the domain you want to run this under. Run them like so</p>
<pre><code>kubectl apply -f coffee.yaml
kubectl apply -f coffee-ingress.yaml
</code></pre>
<p>Update the CNAME for your domain to point to the ADDRESS for:</p>
<p><code>kubectl get ing -n ingress-nginx -o wide</code></p>
<p>Refresh DNS cache and test the domain. You should get a secure page with request stats. I've replicated this multiple times so if it fails to work for you check the steps, config, and certificate. Also, check the logs on the nginx-ingress-controller* pod. </p>
<p><code>kubectl logs pod/nginx-ingress-controller-*********** -n ingress-nginx</code></p>
<p>That should give you some indication of what's wrong.</p>
|
<p>According to the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture#node_allocatable" rel="nofollow noreferrer">documentation</a>, Kubernetes reserves a significant amount of resources on the nodes in the cluster in order to run itself. Are the numbers in the documentation correct or is Google trying to sell me bigger nodes?</p>
<p>Aside: Taking kube-system pods and other reserved resources into account, am I right in saying it's better resource-wise to rent one machine equiped with 15GB of RAM instead of two with 7.5GB of RAM each?</p>
| <p>Yes, Kubernetes really does reserve a significant amount of resources on the nodes, so it is worth taking that into account before choosing machine sizes. </p>
<p>You can deploy custom machine types in GCP. For the pricing you can use <a href="https://cloud.google.com/products/calculator/" rel="nofollow noreferrer">this</a> calculator by Google.</p>
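<p>You can also check the exact numbers on your own nodes; the gap between <code>Capacity</code> and <code>Allocatable</code> is what is reserved for the system and the Kubernetes components:</p>
<pre><code># replace the node name with one from "kubectl get nodes"
kubectl describe node gke-my-cluster-default-pool-12345678-abcd
# look at the Capacity: and Allocatable: sections of the output
</code></pre>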
|
<p>I want to create and remove a job using Google Cloud Builder. Here's my configuration which builds my Docker image and pushes to GCR.</p>
<pre><code># cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
args: ['build', '-t', 'gcr.io/xyz/abc:latest','-f','Dockerfile.ng-unit','.']
- name: 'gcr.io/cloud-builders/docker'
args: ['push', 'gcr.io/xyz/abc:latest']
</code></pre>
<p>Now I want to create a job; I want to run something like </p>
<p><code>kubectl create -R -f ./kubernetes</code></p>
<p>which creates the jobs defined in the kubernetes folder. </p>
<p>I know cloud builder has <code>- name: 'gcr.io/cloud-builders/kubectl'</code> but I can't figure out how to use it. Plus how can I authenticate it to run kubectl commands? How can I use service_key.json </p>
| <p>I wasn't able to connect and get cluster credentials at first. Here's what I did:</p>
<blockquote>
  <p>Go to IAM and add another Role to [email protected]. I used Project Editor.</p>
</blockquote>
<p>Then I added this step to <code>cloudbuild.yaml</code>:</p>
<pre><code>- name: 'gcr.io/cloud-builders/kubectl'
  args: ['create', '-R', '-f', './dockertests/unit-tests/kubernetes']
</code></pre>
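<p>For reference, a sketch of what the whole <code>cloudbuild.yaml</code> might look like; the kubectl builder fetches cluster credentials itself from the two <code>CLOUDSDK_*</code> environment variables (zone and cluster name below are assumptions):</p>
<pre><code># cloudbuild.yaml
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/xyz/abc:latest', '-f', 'Dockerfile.ng-unit', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'gcr.io/xyz/abc:latest']
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['create', '-R', '-f', './dockertests/unit-tests/kubernetes']
  env:
  - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'     # zone of your GKE cluster
  - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'   # name of your GKE cluster
</code></pre>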
|
<p>How do I know the list of all possible statuses and reasons in Kubernetes?</p>
<p>Right now, I'm working with Kubernetes events. Based on certain unusual events, I will be reacting to them. For example, if a pod is backed off or pending, I will receive such events and get notified via email (custom code). Necessary actions would then be taken for each such event.</p>
<p>I have to know the list of all possible statuses for a pod and a node. That would help me handle uncommon behaviours in my code. If possible, it would be good to also know the list of possible event reasons.</p>
<p>I'm using Fabric8 kubernetes-client as I found some issues with Java Kubernetes-client for handling events.</p>
<p>I searched through Google, but couldn't find useful results.</p>
| <p>If you need a complete list of <code>events</code> in Kubernetes, you should look directly at the <code>Kubernetes</code> project on GitHub.</p>
<p><a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/events/event.go" rel="noreferrer">Here</a> is the link to the <code>event.go</code> file.</p>
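<p>For observing what actually happens in a running cluster (as opposed to the full list of constants), the events API can also be queried directly, for example:</p>
<pre><code># stream every event in the cluster
kubectl get events --all-namespaces --watch

# recent kubectl versions can also filter, e.g. only warnings such as BackOff
kubectl get events --field-selector type=Warning
</code></pre>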
|
<p>Hello StackOverflow users,</p>
<p>I've started working in the Kubernetes space recently and saw that <a href="https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/" rel="noreferrer">Custom Resource Definitions(CRDs) types are not namespaced and are available to to all namespaces.</a></p>
<p>I was wondering why it isn't possible to make a CRD type scoped to a namespace. Any help would be appreciated!</p>
| <p>See <a href="https://github.com/kubernetes/kubernetes/issues/65551#issuecomment-400909534" rel="noreferrer">https://github.com/kubernetes/kubernetes/issues/65551#issuecomment-400909534</a> for a discussion of this issue.</p>
<p>A particular CRD can define a custom resource that is namespaced or cluster-wide, but the type definition (the CRD itself) is cluster-wide and applies uniformly to all namespaces.</p>
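<p>In other words, the <code>scope</code> field of a CRD controls where the custom <em>objects</em> live, not where the definition itself lives. A minimal sketch:</p>
<pre><code>apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # the definition itself is always cluster-wide
  name: widgets.example.com
spec:
  group: example.com
  version: v1
  names:
    kind: Widget
    plural: widgets
  # Namespaced: Widget objects are created inside a namespace
  # Cluster:    Widget objects are cluster-wide
  scope: Namespaced
</code></pre>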
|
<p>So I have an interesting use case. I am running multiple micro-services on my Kubernetes cluster. My applications use NextJS which make internal calls to _next routes.</p>
<p>My issue came from the fact that I needed a way to differentiate between services and their requests to the _next files. So I implemented NextJS's assetPrefix feature which works perfectly in development, appending my prefix in front of _next so the requests look like <code>.../${PREFIX}/_next/...</code>. That way I could set up an ingress and route files base on the prefix to the appropriate service on my cluster. I set up a Kubernetes Ingress controller following this guide: <a href="https://akomljen.com/kubernetes-nginx-ingress-controller/" rel="nofollow noreferrer">https://akomljen.com/kubernetes-nginx-ingress-controller/</a></p>
<p>My ingress config is: </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: dev-ingress
spec:
rules:
- host: baseurl.com
http:
paths:
- path: /auth
backend:
serviceName: auth-svc
servicePort: 80
- path: /static/auth
backend:
serviceName: auth-svc
servicePort: 80
- path: /login
backend:
serviceName: auth-svc
servicePort: 80
- path: /settings
backend:
serviceName: auth-svc
servicePort: 80
- path: /artwork
backend:
serviceName: artwork-svc
servicePort: 80
- path: /static/artwork
backend:
serviceName: artwork-svc
servicePort: 80
</code></pre>
<p>So here is the problem. Now that everything is set up, properly deployed, and the ingress is running following the above guide and using the above rules, my services are trying to make requests to <code>.../_next/...</code> instead of <code>.../${PREFIX}/_next/...</code> so they can't find the proper files and nothing is working. I cannot seem to figure out what is going on. Anyone have any ideas? Thanks in advance!</p>
| <p>You are using built-in NGINX Ingress Controller that, unfortunately, has no such functionality.</p>
<p>My advice is to use <a href="https://github.com/nginxinc/kubernetes-ingress" rel="nofollow noreferrer">NGINX Plus Ingress Controller</a> annotation functionality if you can afford it.</p>
<p>You can find <a href="https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/rewrites" rel="nofollow noreferrer">official example here</a>.</p>
<p>Example:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: cafe-ingress
annotations:
nginx.org/rewrites: "serviceName=tea-svc rewrite=/;serviceName=coffee-svc rewrite=/beans/"
spec:
rules:
- host: cafe.example.com
http:
paths:
- path: /tea/
backend:
serviceName: tea-svc
servicePort: 80
- path: /coffee/
backend:
serviceName: coffee-svc
servicePort: 80
</code></pre>
<p>Below are the examples of how the URI of requests to the <code>tea-svc</code> are rewritten (Note that the <code>/tea</code> requests are redirected to <code>/tea/</code>).</p>
<pre><code>/tea/ -> /
/tea/abc -> /abc
</code></pre>
<p>Below are the examples of how the URI of requests to the <code>coffee-svc</code> are rewritten (Note that the <code>/coffee</code> requests are redirected to <code>/coffee/</code>).</p>
<pre><code>/coffee/ -> /beans/
/coffee/abc -> /beans/abc
</code></pre>
|
<p>I am having an issue configuring GCR with ImagePullSecrets in my deployment.yaml file. It cannot pull the container image due to a permission error:</p>
<pre><code>Failed to pull image "us.gcr.io/optimal-jigsaw-185903/syncope-deb": rpc error: code = Unknown desc = Error response from daemon: denied: Permission denied for "latest" from request "/v2/optimal-jigsaw-185903/syncope-deb/manifests/latest".
</code></pre>
<p>I am sure that I am doing something wrong, but I followed this tutorial (and others like it) with still no luck.</p>
<p><a href="https://ryaneschinger.com/blog/using-google-container-registry-gcr-with-minikube/" rel="noreferrer">https://ryaneschinger.com/blog/using-google-container-registry-gcr-with-minikube/</a></p>
<p>The pod logs are equally useless:</p>
<pre><code>"syncope-deb" in pod "syncope-deployment-64479cdcf5-cng57" is waiting to start: trying and failing to pull image
</code></pre>
<p>My deployment looks like:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
# Unique key of the Deployment instance
name: syncope-deployment
namespace: default
spec:
# 3 Pods should exist at all times.
replicas: 1
# Keep record of 2 revisions for rollback
revisionHistoryLimit: 2
template:
metadata:
labels:
# Apply this label to pods and default
# the Deployment label selector to this value
app: syncope-deb
spec:
imagePullSecrets:
- name: mykey
containers:
- name: syncope-deb
# Run this image
image: us.gcr.io/optimal-jigsaw-185903/syncope-deb
ports:
- containerPort: 9080
</code></pre>
<p>And I have a key in my default namespace called "mykey" that looks like this (secure data edited out):</p>
<pre><code>{"https://gcr.io":{"username":"_json_key","password":"{\n \"type\": \"service_account\",\n \"project_id\": \"optimal-jigsaw-185903\",\n \"private_key_id\": \"EDITED_TO_PROTECT_THE_INNOCENT\",\n \"private_key\": \"-----BEGIN PRIVATE KEY-----\\EDITED_TO_PROTECT_THE_INNOCENT\\n-----END PRIVATE KEY-----\\n\",\n \"client_email\": \"[email protected]\",\n \"client_id\": \"109145305665697734423\",\n \"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\",\n \"token_uri\": \"https://accounts.google.com/o/oauth2/token\",\n \"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\",\n \"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/bobs-service%40optimal-jigsaw-185903.iam.gserviceaccount.com\"\n}","email":"[email protected]","auth":"EDITED_TO_PROTECT_THE_INNOCENT"}}
</code></pre>
<p>I even loaded that user up with the permissions of:</p>
<ul>
<li>Editor Cloud Container </li>
<li>Builder Cloud Container </li>
<li>Builder Editor Service</li>
<li>Account Actor Service </li>
<li>Account Admin Storage </li>
<li>Admin Storage Object</li>
<li>Admin Storage Object Creator </li>
<li>Storage Object Viewer</li>
</ul>
<p>Any help would be appreciated as I am spending a lot of time on seemingly a very simple problem.</p>
| <p>The issue is most likely caused by using a secret of type <code>dockerconfigjson</code> that actually contains <code>dockercfg</code> data. The <code>kubectl</code> behaviour changed at some point, which causes this.</p>
<p>Check whether the secret is marked as <code>dockercfg</code> or <code>dockerconfigjson</code>, and then check whether its content is valid <code>dockerconfigjson</code>.</p>
<p>The JSON you have provided is <code>dockercfg</code> (not the new format).</p>
<p>See <a href="https://github.com/kubernetes/kubernetes/issues/12626#issue-100691532" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/12626#issue-100691532</a> for info about the formats</p>
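<p>If recreating the secret is an option, letting kubectl build it in the current format is usually the least error-prone route. Also note that the image lives on <code>us.gcr.io</code> while the pasted secret is keyed by <code>https://gcr.io</code>, which by itself can cause a denied pull. A sketch (key file path and email are assumptions):</p>
<pre><code>kubectl create secret docker-registry mykey \
  --docker-server=https://us.gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat service-account-key.json)" \
  --docker-email=any@valid.email
</code></pre>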
|
<p>Is this possible inside the kubernetes cluster?</p>
<p>All examples I've found are copying from a local disk to a pod or vice versa, or is the only option to copy from node to node, for example over SSH, SCP or with other utilities?</p>
| <p>It's not possible to do cluster to cluster copying. You'd need to use <code>kubectl cp</code> to copy it locally, then copy the file back:</p>
<pre><code>kubectl cp <pod>:/tmp/test /tmp/test
kubectl cp /tmp/test <pod>:/tmp/test
</code></pre>
<p>If you are trying to share files between pods, and only one pods needs write access, you probably want to mount an ro volume on multiple pods, or use an object store like S3. Copying files to and from pods really shouldn't be something you're doing often, that's an anti-pattern</p>
|
<p>I have a bare-metal kubernetes (<code>v1.11.0</code>) cluster created with kubeadm and working fine without any issues. Network with calico and made it a single node cluster using kubectl taint nodes command. (single node is a requirement).</p>
<p>I need to run mydockerhub/sampleweb static website image on host port 80. Assume the IP address of the ubuntu server running this kubernetes is 192.168.8.10.</p>
<p>How to make my static website available on <code>192.168.8.10:80</code> or a hostname mapped to it on local DNS server? (Example: <code>frontend.sampleweb.local:80</code>). Later I need to run other services on different port mapped to another subdomain. (Example: <code>backend.sampleweb.local:80</code> which routes to a service run on port 8080).</p>
<p>I need to know:</p>
<ol>
<li><p>Can I achieve this without a load balancer?</p>
</li>
<li><p>What resources needed to create? (ingress, deployment, etc)</p>
</li>
<li><p>What additional configurations needed on the cluster? (network policy, etc)</p>
<p>Much appreciated if sample yaml files are provided.</p>
</li>
</ol>
<p>I'm new to the kubernetes world. I got sample kubernetes deployments (like <a href="https://github.com/microservices-demo/microservices-demo" rel="noreferrer">sock-shop</a>) working end-to-end without any issues. I tried NodePort to access the service, but instead of running it on a different port I need to run it on exactly port 80 on the host. I tried many ingress solutions but they didn't work.</p>
<p>Screenshot of my setup:<br />
<a href="https://i.stack.imgur.com/G1yJq.png" rel="noreferrer"><img src="https://i.stack.imgur.com/G1yJq.png" alt="enter image description here" /></a></p>
| <p>I recently used <a href="https://traefik.io/" rel="noreferrer">traefik.io</a> to configure a project with similar requirements to yours.</p>
<p>So I'll show a basic solution with <code>traefik</code> and ingresses.</p>
<p>I dedicated a whole namespace (you can use <code>kube-system</code>), called <code>traefik</code>, and created a kubernetes serviceAccount:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: traefik
---
apiVersion: v1
kind: ServiceAccount
metadata:
namespace: traefik
name: traefik-ingress-controller
</code></pre>
<p>The traefik controller which is invoked by ingress rules requires a ClusterRole and its binding:</p>
<pre><code>---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: traefik-ingress-controller
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: traefik-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
namespace: traefik
name: traefik-ingress-controller
</code></pre>
<p>The traefik controller will be deployed as a DaemonSet (i.e. by definition one for each node in your cluster), and a Kubernetes service is dedicated to the controller:</p>
<pre><code>kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
name: traefik-ingress-controller
namespace: traefik
labels:
k8s-app: traefik-ingress-lb
spec:
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- name: traefik-ingress-lb
image: traefik
ports:
- name: http
containerPort: 80
hostPort: 80
- name: admin
containerPort: 8080
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
args:
- --api
- --kubernetes
- --logLevel=INFO
---
kind: Service
apiVersion: v1
metadata:
namespace: traefik
name: traefik-ingress-service
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- protocol: TCP
port: 80
name: web
- protocol: TCP
port: 8080
name: admin
</code></pre>
<p>The final part requires you to create a service for each microservice in your project; here is an example:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
namespace: traefik
name: my-svc-1
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- port: 80
targetPort: 8080
</code></pre>
<p>and also the ingress (set of rules) that will forward the request to the proper service:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
namespace: traefik
name: ingress-ms-1
annotations:
kubernetes.io/ingress.class: traefik
spec:
rules:
- host: my-address-url
http:
paths:
- backend:
serviceName: my-svc-1
servicePort: 80
</code></pre>
<p>In this ingress I wrote a host URL; this will be the entry point into your cluster, so you need to resolve that name to your master K8S node. If you have more nodes which could be masters, then a load balancer is suggested (in this case the host URL will point to the LB).</p>
<p>Take a look to <a href="https://kubernetes.io/" rel="noreferrer">kubernetes.io</a> documentation to have clear the concepts for kubernetes. Also <a href="https://traefik.io/" rel="noreferrer">traefik.io</a> is useful.</p>
<p>I hope this helps you.</p>
|
<p>I am running a managed kubernetes cluster on Google Cloud Platform with a single node for development.</p>
<p>However when I update Pod images too frequently, the ImagePull step fails due to insufficient disk space in the boot disk.</p>
<p>I noticed that images should be auto GC-ed according to documentation, but I have no idea what is the setting on GKE or how to change it.</p>
<p><a href="https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/#image-collection" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/#image-collection</a></p>
<ol>
<li>Can I manually trigger a unused image clean up, using <code>kubectl</code> or Google Cloud console command?</li>
<li>How do I check/change the above GC setting above such that I wont encounter this issue in the future?</li>
</ol>
| <p>Since Garbage Collector is an automated service, there are no kubectl commands or any other commands within GCP to manually trigger Garbage Collector.</p>
<p>In regards to your second inquiry, Garbage Collector is handled by the Master node. The Master node is not accessible to users since it is a managed service. As such, users cannot configure Garbage Collection within GKE.</p>
<p>The only workaround I can offer is to <a href="https://kubernetes.io/docs/setup/scratch/" rel="nofollow noreferrer">create a custom cluster from scratch</a> within Google Compute Engine. This would provide you access to the Master node of your cluster so you can have the flexibility of configuring the cluster to your liking.</p>
<p>Edit: If you need to delete old images, I would suggest removing the old images using docker commands. I have attached a github article that provides several different commands that you can run on the node level to remove old images <a href="https://gist.github.com/bastman/5b57ddb3c11942094f8d0a97d461b430" rel="nofollow noreferrer">here</a>.</p>
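<p>A sketch of that manual clean-up on a single GKE node (the instance name is an assumption; images still used by running containers are kept):</p>
<pre><code># SSH to the node via gcloud
gcloud compute ssh gke-my-cluster-default-pool-12345678-abcd

# then, on the node, remove unused images
docker image prune -a -f
</code></pre>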
|
<p>We have a number of different REST-based services running in Azure within a Kubernetes (version 1.9.6) cluster. </p>
<p>Two of the services, let's say A and B, need to communicate with each other using REST calls. Typically, something like the following:</p>
<pre><code>Client calls A (original request)
A calls B (request 1)
B calls A (request 2)
A responds to B (request 2)
B responds to A (request 1)
A responds to the original request
</code></pre>
<p>The above being a typical intertwined micro-services architecture. Manually running the docker instances works perfectly on our local test servers.</p>
<p>The moment we run this in Kubernetes on Azure we get intermittent timeouts (60+ seconds) on the micro-services calling each other through Kubernetes' networking services. After a timeout, repeating the request would then often give correct responses in a few micro-seconds.</p>
<p>I am stuck at this point as I have no idea what could be causing this. Could it be the dynamic routing? The virtualised network? Kubernetes configuration? </p>
<p>Any ideas?</p>
| <p>Finally figured this out.</p>
<blockquote>
<p>Azure Load Balancers / Public IP addresses have a default 4 minute idle connection timeout.</p>
</blockquote>
<p>Essentially anything running through a Load Balancer (whether created by an Azure AKS Kubernetes Ingress or otherwise) has to abide by this. While you CAN change the timeout there is no way to eliminate it entirely (longest idle timeout duration possible is 30 minutes).</p>
<p>For that reason it makes sense to implement a connection pooling / monitoring solution that will track the idle time that has elapsed on each of your connections (through the load balancer / public IP) and then disconnect / re-create any connection that gets close to the 4 minute cutoff.</p>
<p>We ended up implementing PGbouncer (<a href="https://github.com/pgbouncer/pgbouncer" rel="nofollow noreferrer">https://github.com/pgbouncer/pgbouncer</a>) as an additional container on our Azure AKS / Kubernetes Cluster via the awesome directions which can be found here: <a href="https://github.com/edoburu/docker-pgbouncer/tree/master/examples/kubernetes/singleuser" rel="nofollow noreferrer">https://github.com/edoburu/docker-pgbouncer/tree/master/examples/kubernetes/singleuser</a></p>
<p>Overall I can see the need for the timeout but MAN was it hard to troubleshoot. Hope this saves you guys some time!</p>
<p>More details can be found on my full post over here: <a href="https://stackoverflow.com/questions/50706483/what-azure-kubernetes-aks-time-out-happens-to-disconnect-connections-in-out">What Azure Kubernetes (AKS) 'Time-out' happens to disconnect connections in/out of a Pod in my Cluster?</a></p>
|
<p>I am setting an IP whitelist in my kubernetes ingress config via the annotation <code>ingress.kubernetes.io/whitelist-source-range</code>. But, I have different whitelists to apply for different environments (<code>dev</code>, <code>staging</code>, <code>production</code>, etc.) Therefore, I have a placeholder variable in my <code>deploy.yaml</code> that I find/replace with the appropriate IP whitelist based on the environment.</p>
<p>At the moment, for dev, I am passing an empty string as the whitelist because it should be able to receive traffic from all IPs, which currently works as intended. But, logs in <code>nginx-ingress-controller</code> complain about the following:</p>
<pre><code>error reading Whitelist annotation in Ingress default/{service}: the
annotation does not contain a valid IP address or network: invalid CIDR
address:
</code></pre>
<p>How do I set a proper <code>whitelist-source-range</code> to allow all IPs but not create noisy error logs like the one above?</p>
| <p>Yes, it will log that reading error, as shown by the <a href="https://github.com/kubernetes/ingress-nginx/blob/master/internal/ingress/annotations/ipwhitelist/main_test.go" rel="nofollow noreferrer">ingress IP whitelist annotation tests</a>.
The annotation expects one or more IP ranges in CIDR notation, so an empty value produces the error as well. To get rid of it while still allowing all IPs, put <code>0.0.0.0/0</code> as the value; that matches every source address and fixes your issue.</p>
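<p>So for the dev environment the placeholder can be replaced with <code>0.0.0.0/0</code> instead of an empty string, e.g.:</p>
<pre><code>metadata:
  annotations:
    # allow every source IP without triggering the invalid-CIDR log message
    ingress.kubernetes.io/whitelist-source-range: "0.0.0.0/0"
</code></pre>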
|
<p>I am encountering an ENOTFOUND error within a multi-container Kubernetes pod. MongoDB is in one Docker container and appears to be fully operational, and a Node.js application is in another container (see its error below).</p>
<pre>/opt/systran/apps-node/enterprise-server/node_modules/mongoose/node_modules/mongodb/lib/replset.js:365
process.nextTick(function() { throw err; })
^
MongoError: failed to connect to server [mongodb:27017] on first connect [MongoError: getaddrinfo ENOTFOUND mongodb mongodb:27017]
at Pool. (/opt/systran/apps-node/enterprise-server/node_modules/mongodb-core/lib/topologies/server.js:336:35)
at emitOne (events.js:116:13)
at Pool.emit (events.js:211:7)
at Connection. (/opt/systran/apps-node/enterprise-server/node_modules/mongodb-core/lib/connection/pool.js:280:12)
at Object.onceWrapper (events.js:317:30)
at emitTwo (events.js:126:13)
at Connection.emit (events.js:214:7)
at Socket. (/opt/systran/apps-node/enterprise-server/node_modules/mongodb-core/lib/connection/connection.js:189:49)
at Object.onceWrapper (events.js:315:30)
at emitOne (events.js:116:13)
at Socket.emit (events.js:211:7)
at emitErrorNT (internal/streams/destroy.js:64:8)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickCallback (internal/process/next_tick.js:180:9)</pre>
<p>In the application container I can do the following, so it seems to know that MongoDB is available on port 27017.</p>
<pre>curl "http://localhost:27017"
It looks like you are trying to access MongoDB over HTTP on the native driver port.</pre>
<p>The application is intended to create databases in MongoDB, and populate collections. This same set of Docker containers work fine with Docker using a docker-compose.yml file. The containers are part of a legacy application (there are other containers in the same pod), and I don't have much control over their content.</p>
<p>I have checked logs for the various containers. Have reviewed all stock pods with "kubectl get pods" and all are working fine. Using "flannel" for CNI, so use the following to initialize Kubernetes.</p>
<pre>kubeadm init --pod-network-cidr=10.244.0.0/16</pre>
| <p>According to the error output, your NodeJS application tries to connect to MongoDB database via <code>mongodb:27017</code>.</p>
<p>As both NodeJS application and MongoDB database are containers of the same pod, NodeJS application should be connected to MongoDB database via <code>localhost:27017</code> instead, because containers in a pod share storage/network resources.</p>
<p>So, you need to change NodeJS application's configuration: set connection to MongoDB <code>localhost:27017</code> instead of <code>mongodb:27017</code>.</p>
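<p>For example, if the connection string comes from the app's configuration or an environment variable, the change is just this (variable and database names are assumptions):</p>
<pre><code># before
MONGO_URI=mongodb://mongodb:27017/mydb
# after
MONGO_URI=mongodb://localhost:27017/mydb
</code></pre>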
|
<p>I have developed a spring boot based REST API service and enabled https on it by using a self signed cert keystore (to test locally), and it works well.</p>
<pre><code>server.ssl.key-store=classpath:certs/keystore.jks
server.ssl.key-store-password=keystore
server.ssl.key-store-type=PKCS12
server.ssl.key-alias=tomcat
</code></pre>
<p>Now, I want to package a docker image and deploy this service in a kubernetes cluster. I know I can expose the service as a NodePort and access it externally. </p>
<p>What I want to know is, I doubt that my self signed cert generated in local machine will work when deployed in kubernetes cluster. I researched and found a couple of solutions using kubernetes ingress, kubernetes secrets, etc. I am confused as to what will be the best way to go about doing this, so that I can access my service running in kubernetes through https. What changes will I need to do to my REST API code?</p>
<p><strong>UPDATED NOTE</strong> : Though I have used a self signed cert for testing purposes, I can obtain a CA signed cert from my company and use it for production. My question is more on the lines of, For a REST API service which already uses a SSL/TLS based connection, what are some of the better ways to deploy and access the cert in kubernetes cluster , eg: package in the application itself, use Secrets, or scrap the application's SSL configuration and use Ingres instead, etc. Hope my question makes sense :)</p>
<p>Thanks for any suggestions.</p>
| <p>Well, it depends on the way you want to expose your service. Basically your options are an ingress, an external load balancer (only available in certain cloud environments) or a Service that is routed to a port (either via NodePort or HostPort). </p>
<p><strong>Attention:</strong> Our K8S Cluster is self hosted so I have no reliable information about external load balancers in K8S and will therefore omit that option.</p>
<p>If you want to expose your service directly behind one of your domains on port 80 (e.g. <a href="https://app.myorg.org" rel="nofollow noreferrer">https://app.myorg.org</a>) you'll want to use ingress. But if you don't need that and you can live with a specific port the NodePort approach should do the trick (e.g. <a href="https://one.ofyourcluster.servers:30000/" rel="nofollow noreferrer">https://one.ofyourcluster.servers:30000/</a>).</p>
<p>Let's assume you want to try the ingress approach: then you need to add the certificates to the ingress definition in K8S instead of the Spring Boot application, or you must additionally specify in the ingress that the service itself is reachable via HTTPS. The way to do it may differ from ingress controller to ingress controller.</p>
<p>For the NodePort/HostPort you just need to enable SSL in your application.</p>
<p>In either case you also need a valid certificate, e.g. issued by <a href="https://letsencrypt.org/" rel="nofollow noreferrer">https://letsencrypt.org/</a>.
Actually, for K8S there are some projects that can fetch a Let's Encrypt certificate automatically if you use ingresses (e.g. <a href="https://github.com/jetstack/cert-manager/" rel="nofollow noreferrer">https://github.com/jetstack/cert-manager/</a>).</p>
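<p>As a sketch of the ingress variant: load the CA-signed certificate into a TLS secret, and the Spring Boot container can then listen on plain HTTP behind it (secret name, host and port are assumptions):</p>
<pre><code>kubectl create secret tls api-tls --cert=company.crt --key=company.key
</code></pre>
<p>and reference that secret from the ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rest-api
spec:
  tls:
  - hosts:
    - api.mycompany.example
    secretName: api-tls
  rules:
  - host: api.mycompany.example
    http:
      paths:
      - path: /
        backend:
          serviceName: rest-api-svc
          servicePort: 8080
</code></pre>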
|
<p>I have a working Cluster with services that all respond behind a helm installed Ingress nGinx running on Azure AKS. <strong><em>This ended up being Azure specific.</em></strong></p>
<blockquote>
<p>My question is:
Why does my connection to the services / pods in this cluster periodically get severed (apparently by some sort of idle timeout), and why does that connection severing appear to also coincide with my Az AKS Browse UI connection getting cut?</p>
</blockquote>
<p>This is an effort to get a final answer on what exactly triggers the time-out that causes the local 'Browse' proxy UI to disconnect from my Cluster (more background on why I am asking to follow).</p>
<p>When working with Azure AKS from the Az CLI you can launch the local Browse UI from the terminal using:</p>
<p><code>az aks browse --resource-group <resource-group> --name <cluster-name></code></p>
<p>This works fine and pops open a browser window that looks something like this (yay):</p>
<p><a href="https://i.stack.imgur.com/pCr3t.png" rel="noreferrer"><img src="https://i.stack.imgur.com/pCr3t.png" alt="Azure AKS Disconnects Connections entering pods"></a></p>
<p>In your terminal you will see something along the lines of:</p>
<ol>
<li>Proxy running on <a href="http://127.0.0.1:8001/" rel="noreferrer">http://127.0.0.1:8001/</a> Press CTRL+C to close the tunnel... </li>
<li>Forwarding from 127.0.0.1:8001 -> 9090 Forwarding from</li>
<li>[::1]:8001 -> 9090 Handling connection for 8001 Handling connection for 8001 Handling connection for 8001</li>
</ol>
<p>If you leave the connection to your Cluster idle for a few minutes (ie. you don't interact with the UI) you should see the following print to indicate that the connection has timed out:</p>
<blockquote>
<p>E0605 13:39:51.940659 5704 portforward.go:178] lost connection to pod</p>
</blockquote>
<p>One thing I still don't understand is whether OTHER activity inside of the Cluster can prolong this timeout but regardless once you see the above you are essentially at the same place I am... which means we can talk about the fact that it looks like all of my other connections OUT from pods in that server have also been closed by whatever timeout process is responsible for cutting ties with the AKS browse UI.</p>
<h2>So what's the issue?</h2>
<p>The reason this is a problem for me is that I have a Service running a Ghost Blog pod which connects to a remote MySQL database using an npm package called 'Knex'. As it happens the newer versions of Knex have a bug (which has yet to be addressed) whereby if a connection between the Knex client and a remote db server is cut and needs to be restored — it doesn't re-connect and just infinitely loads.</p>
<h1>nGinx Error 503 Gateway Time-out</h1>
<p>In my situation that resulted in nGinx Ingress giving me an Error 503 Gateway time-out. This was because Ghost wasn't responding after the Idle timeout cut the Knex connection — since Knex wasn't working properly and doesn't restore the broken connection to the server properly.</p>
<p><strong><em>Fine.</em></strong>
I rolled back Knex and everything works great.</p>
<blockquote>
<p>But why the heck are my pod connections being severed from my Database to begin with? </p>
</blockquote>
<p>Hence this question to hopefully save some future person days of attempting to troubleshoot phantom issues that relate back to Kubernetes (maybe Azure specific, maybe not) cutting connections after a service / pod has been idle for some time.</p>
| <h2>Short Answer:</h2>
<blockquote>
<p>Azure AKS automatically deploys an Azure Load Balancer (with public IP address) when you add a new ingress (nGinx / Traefik... ANY Ingress) — that Load Balancer has its settings configured as a 'Basic' Azure LB which has a 4 minute idle connection timeout.</p>
</blockquote>
<p>That idle timeout is both standard AND required (although you MAY be able to modify it, see here: <a href="https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-tcp-idle-timeout" rel="noreferrer">https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-tcp-idle-timeout</a>). That being said there is no way to ELIMINATE it entirely for any traffic that is heading externally OUT from the Load Balancer IP — the longest duration currently supported is 30 minutes.</p>
<p><strong>There is no native Azure way to get around an idle connection being cut.</strong></p>
<p>So as per the original question, the best way (I feel) to handle this is to leave the timeout at 4 minutes (since it has to exist anyway) and then setup your infrastructure to disconnect your connections in a graceful way (when idle) prior to hitting the Load Balancer timeout.</p>
<h2>Our Solutions</h2>
<p>For our Ghost Blog (which hit a MySQL database) I was able to roll back as mentioned above which made the Ghost process able to handle a DB disconnect / reconnect scenario.</p>
<h1>What about Rails?</h1>
<p>Yep. Same problem.</p>
<p>For a separate Rails based app we also run on AKS which is connecting to a remote Postgres DB (not on Azure) we ended up implementing PGbouncer (<a href="https://github.com/pgbouncer/pgbouncer" rel="noreferrer">https://github.com/pgbouncer/pgbouncer</a>) as an additional container on our Cluster via the awesome directions found here: <a href="https://github.com/edoburu/docker-pgbouncer/tree/master/examples/kubernetes/singleuser" rel="noreferrer">https://github.com/edoburu/docker-pgbouncer/tree/master/examples/kubernetes/singleuser</a></p>
<p>Generally anyone attempting to access a remote database FROM AKS is probably going to need to implement an intermediary connection pooling solution. The pooling service sits in the middle (PGbouncer for us) and keeps track of how long a connection has been idle so that your worker processes don't need to care about that.</p>
<p>If you start to approach the Load Balancer timeout the connection pooling service will throw out the old connection and make a new fresh one (resetting the timer). That way when your client sends data down the pipe it lands on your Database server as anticipated.</p>
<h2>In closing</h2>
<p>This was an INSANELY frustrating bug / case to track down. We burned at least 2 dev-ops days figuring the first solution out but even KNOWING that it was probably the same issue we burned another 2 days this time around.</p>
<p>Even elongating the timer beyond the 4 minute default wouldn't really help since that would just make the problem more ephemeral to troubleshoot. I guess I just hope that anyone who has trouble connecting from Azure AKS / Kubernetes to a remote db is better at googling than I am and can save themselves some pain.</p>
<p><em>Thanks to MSFT Support (Kris you are the best) for the hint on the LB timer and to the dude who put together PGbouncer in a container so I didn't have to reinvent the wheel.</em></p>
|
<p>I have set up a cluster with kubeadm; it is working fine and port 6443 is up. But after rebooting my system, the cluster does not come back up. </p>
<p>What should I do?</p>
<p>please find the logs</p>
<pre><code>node@node1:~$ sudo kubeadm init
[init] using Kubernetes version: v1.11.1
......
node@node1:~$
node@node1:~$ mkdir -p $HOME/.kube
node@node1:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
node@node1:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
node@node1:~$
node@node1:~$
node@node1:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1 NotReady master 4m v1.11.1
node@node1:~$ ps -ef | grep 6443
root 5542 5503 8 13:17 ? 00:00:17 kube-apiserver --authorization-mode=Node,RBAC --advertise-address=172.16.2.171 --allow-privileged=true --client-ca-file=/etc/kubernetes/pki/ca.crt --disable-admission-plugins=PersistentVolumeLabel --enable-admission-plugins=NodeRestriction --enable-bootstrap-token-auth=true --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key --etcd-servers=https://127.0.0.1:2379 --insecure-port=0 --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key --requestheader-allowed-names=front-proxy-client --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6443 --service-account-key-file=/etc/kubernetes/pki/sa.pub --service-cluster-ip-range=10.96.0.0/12 --tls-cert-file=/etc/kubernetes/pki/apiserver.crt --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
node 6792 4426 0 13:20 pts/1 00:00:00 grep --color=auto 6443
node@node1:~$
node@node1:~$
node@node1:~$
node@node1:~$ sudo reboot
Connection to node1 closed by remote host.
Connection to node1 closed.
abc@xyz:~$ ssh node@node1
node@node1's password:
node@node1:~$ kubectl get nodes
No resources found.
The connection to the server 172.16.2.171:6443 was refused - did you specify the right host or port?
node@node1:~$
node@node1:~$ ps -ef | grep 6443
node 7083 1920 0 13:36 pts/0 00:00:00 grep --color=auto 6443
</code></pre>
| <p>Your <code>kubelet</code> service is not running. Try to view its logs:</p>
<pre><code>$ journalctl -u kubelet
</code></pre>
<p>To start the service:</p>
<pre><code>$ sudo systemctl start kubelet
</code></pre>
<p>If you want <code>kubelet</code> to start during boot you'll need to enable it. First of all, check the <code>kubelet</code> service status:</p>
<pre><code>$ systemctl status kubelet
</code></pre>
<p>There will be a line:</p>
<pre><code> ...
Loaded: loaded (/etc/systemd/system/kubelet.service; (enabled|disabled)
...
</code></pre>
<p>"disabled" entry means you should enable it:</p>
<pre><code>$ sudo systemctl enable kubelet
</code></pre>
<p>But it is highly likely that it is already enabled, because this was done by the "systemd vendor preset", so you will have to debug why <code>kubelet</code> fails. You can post the log output here and Stack Overflow's community will help you.</p>
|
<p>An ingress controller is a Layer 7 construct. Does it bypass the Service (VIP) and Layer 4 kube proxy?</p>
| <p>In a nutshell: Ingress deals with North-South traffic (bringing traffic from the outside world into the cluster), while a service acts as a load balancer, routing the traffic to one of its pods. So, if I understand your question correctly, the answer is no: Ingress and services work together to get traffic from a client outside of the cluster to a certain pod.</p>
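<p>A minimal sketch of how the two pieces fit together (names are assumptions): the Ingress rule refers to a Service by name, and the Service selects the pods the traffic eventually lands on:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: web-svc    # the L7 rule points at a Service...
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web                      # ...and the Service selects the pods
  ports:
  - port: 80
    targetPort: 8080
</code></pre>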
<p>You can read more about the topic in an excellent blog post series by Mark Betz (linked from <a href="https://mhausenblas.info/cn-ref/" rel="noreferrer">here</a>, in the "3rd-party articles" section).</p>
|
<p>We currently have 2 Kubernetes clusters:</p>
<ul>
<li><p>One setup with Kops running on AWS</p></li>
<li><p>One setup with Kubeadm running on our own hardware</p></li>
</ul>
<p>We want to combine them to only have a single cluster to manage. </p>
<p>The master could end up being on AWS or on our servers, both are fine.</p>
<p>We can't find a way to add nodes configured with one cluster to the other. </p>
<ul>
<li><p><code>kubeadm</code> is not made available on nodes setup with Kops, so we can't do eg <code>kubeadm token create --print-join-command</code></p></li>
<li><p>Kops doesn't seem to have utilities to let us add arbitrary nodes, see <a href="https://stackoverflow.com/questions/50248179/how-to-add-an-node-to-my-kops-cluster-node-in-here-is-my-external-instance">how to add an node to my kops cluster? (node in here is my external instance)</a></p></li>
</ul>
<p>This issue raises the same question but was left unanswered: <a href="https://github.com/kubernetes/kops/issues/5024" rel="nofollow noreferrer">https://github.com/kubernetes/kops/issues/5024</a></p>
| <p>You can join the nodes manually, but this is really not a recommended way of doing things.</p>
<p>If you're using kubeadm, you probably already have all the relevant components installed on the workers to have them join in a valid way. What I'd say the process to follow is:</p>
<p>run <code>kubeadm reset</code> on the on-prem node in question</p>
<p>login to the kops node, and examine the kubelet configuration:</p>
<p><code>systemctl cat kubelet</code></p>
<p>In here, you'll see the kubelet config is specified at <code>/etc/sysconfig/kubelet</code>. You'll need to copy that file and ensure the on-prem node has it in its systemd startup config</p>
<p>Copy the relevant config over to the on-prem node. You'll need to remove any references to the AWS cloud provider stuff, as well as make sure the hostname is valid. Here's an example config I copied from a kops node and modified:</p>
<pre><code>DAEMON_ARGS="--allow-privileged=true --cgroup-root=/ --cluster-dns=100.64.0.10 --cluster-domain=cluster.local --enable-debugging-handlers=true - --feature-gates=ExperimentalCriticalPodAnnotation=true --hostname-override=<my_dns_name> --kubeconfig=/var/lib/kubelet/kubeconfig --network-plugin=cni --node-labels=kops.k8s.io/instancegroup=onpremnodes,kubernetes.io/role=node,node-role.kubernetes.io/node= --non-masquerade-cidr=100.64.0.0/10 --pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0 --pod-manifest-path=/etc/kubernetes/manifests --register-schedulable=true --v=2 --cni-bin-dir=/opt/cni/bin/ --cni-conf-dir=/etc/cni/net.d/"
HOME="/root"
</code></pre>
<p>Also, examine the kubelet kubeconfig configuration (it should be at <code>/var/lib/kubelet/kubeconfig</code>). This is the config which tells the kubelet which API server to register with. Ensure that exists on the on-prem node</p>
<p>This should get your node joining the API. You may have to go through some debugging as you go through this process.</p>
<p>I really don't recommend doing this though, for the following reasons:</p>
<ul>
<li>Unless you use node labels in a sane way, you're going to have issues provisioning cloud elements. The kubelet will interact with the AWS API regularly, so if you try to use a Service of type LoadBalancer or any cloud volumes, you'll need to pin the workloads to specific nodes. You'll need to make heavy use of <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="nofollow noreferrer">taints and tolerations.</a></li>
<li>Kubernetes workers aren't designed to connect over a WAN. You're probably going to see issues at some point with network latency etc.</li>
<li>If you do choose to go down this route, you'll need to ensure you have TLS configured in both directions for the API <-> kubelet communication, or a VPN.</li>
</ul>
|
<p>How can I rename a route that has been created via the web console?
I go to <code>Applications>Routes</code>, select the route name, then <code>Action>Edit YAML</code>, and I want to achieve the following change, from <code>test.site</code> to <code>old.test.site</code>.</p>
<p><strong>Current route yml config</strong></p>
<pre><code>...
metadata:
name: test
selfLink: /oapi/v1/namespaces/keycloak/routes/test
...
spec:
host: test.site
...
status:
ingress:
- conditions:
- lastTransitionTime: '2017-12-13T02:19:22Z'
status: 'True'
type: Admitted
host: test.site
</code></pre>
<p><strong>Attempt</strong></p>
<pre><code>...
metadata:
name: test
selfLink: /oapi/v1/namespaces/keycloak/routes/test
...
spec:
host: old.test.site
...
status:
ingress:
- conditions:
- lastTransitionTime: '2017-12-13T02:19:22Z'
status: 'True'
type: Admitted
host: old.test.site
</code></pre>
<p>I get the following error messages:</p>
<blockquote>
<p>Failed to process the resource.
Reason: Route "test" is invalid: spec.host: Invalid value: "old.test.site": field is immutable</p>
</blockquote>
| <p>As <a href="https://stackoverflow.com/users/128141/graham-dumpleton">Graham Dumpleton</a> wrote:</p>
<p>As far as I know you can't edit the host in place for an existing route. From the command line try </p>
<pre><code>oc get route test -o yaml > route.yaml
</code></pre>
<p>Then edit the <code>route.yaml</code> and run </p>
<pre><code>oc replace route test -f route.yaml
</code></pre>
<p>The <code>replace</code> action may allow you to do it.
If not, after editing the local copy, try </p>
<pre><code>oc delete route test
</code></pre>
<p>and </p>
<pre><code>oc apply -f route.yaml
</code></pre>
<p>When you edit the file, you can delete the whole <code>status</code> section. </p>
<p>But keep in mind that some fields are required and cannot be changed in place; that's why you had a problem with the direct modification. </p>
|
<p>Setup: Kubernetes cluster on AKS with nginx-kubernetes ingress. Azure Application Gateway routing domain with SSL certificate to nginx-kubernetes.</p>
<p>No problems serving everything in Kubernetes.</p>
<p>Now I moved static content to Azure Blob Storage. There's an option to use a custom domain, which works fine, but does not allow to use a custom SSL certificate. The only possible way is to set up a CDN and use the Verizon plan to use custom SSL certificates.</p>
<p>I'd prefer to keep all the routing in the ingress configuration, since some subroutes are directed to different Kubernetes services. Is there a way to mask and rewrite a path to the external blob storage url in nginx-kubernetes? Or is there any other available option that proxies an external url through ingress?</p>
<p>I don't mind having direct blob storage URLs for resources, but the main entry point should use the custom domain.</p>
| <p>Not exactly the answer to the question, but the answer to the problem. Unfortunately this isn't documented very well. The solution is to create a service with a type of "ExternalName". According to <a href="https://akomljen.com/kubernetes-tips-part-1/" rel="noreferrer">https://akomljen.com/kubernetes-tips-part-1/</a> the service should look like this:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: external-service
namespace: default
spec:
type: ExternalName
externalName: full.qualified.domain.name
</code></pre>
<p>I just tried it and it works like a charm.</p>
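<p>To tie it back to the routing question, an ingress path can then point at the ExternalName service like any other backend. A rough sketch (host, path and port are assumptions, and depending on the backend you may need additional nginx annotations, e.g. to rewrite the Host header):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: static-content
spec:
  rules:
  - host: www.mydomain.example
    http:
      paths:
      - path: /static
        backend:
          serviceName: external-service   # the ExternalName service from above
          servicePort: 443
</code></pre>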
|
<p>I'm creating a Kubernetes PVC and a Deploy that uses it.</p>
<p>In the yaml it is specified that uid and gid must be 1000.</p>
<p>But when deployed the volume is mounted with different IDs so I have no write access on it.</p>
<p>How can I specify effectively uid and gid for a PVC?</p>
<p>PVC yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jmdlcbdata
annotations:
pv.beta.kubernetes.io/gid: "1000"
volume.beta.kubernetes.io/mount-options: "uid=1000,gid=1000"
volume.beta.kubernetes.io/storage-class: default
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "2Gi"
storageClassName: "default"
</code></pre>
<p>Deploy yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
name: jmdlcbempty
namespace: default
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
creationTimestamp: null
labels:
name: jmdlcbempty
spec:
securityContext:
runAsUser: 1000
fsGroup: 1000
volumes:
- name: jmdlcbdata
persistentVolumeClaim:
claimName: jmdlcbdata
containers:
- name: myalpine
image: "alpine"
command:
- /bin/sh
- "-c"
- "sleep 60m"
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /usr/share/logstash/data
name: jmdlcbdata
</code></pre>
<p>And here is the dir list:</p>
<pre class="lang-yaml prettyprint-override"><code>$ kubectl get pvc; kubectl get pods;
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
jmdlcbdata Bound pvc-6dfcdb29-8a0a-11e8-938b-1a5d4ff12be9 20Gi RWO default 2m
NAME READY STATUS RESTARTS AGE
jmdlcbempty-68cd675757-q4mll 1/1 Running 0 6s
$ kubectl exec -it jmdlcbempty-68cd675757-q4mll -- ls -ltr /usr/share/logstash/
total 4
drwxr-xr-x 2 nobody 42949672 4096 Jul 17 21:44 data
</code></pre>
<p>I'm working on a IBM's Bluemix cluster.</p>
<p>Thanks.</p>
| <p>After some experiments, finally, I can provide an answer. </p>
<p>There are several ways to run processes in a Container from specific UID and GID:</p>
<ol>
<li><p><code>runAsUser</code> field in <code>securityContext</code> in a Pod definition specifies a user ID for the first process runs in Containers in the Pod. </p></li>
<li><p><code>fsGroup</code> field in <code>securityContext</code> in a Pod specifies what group ID is associated with all Containers in the Pod. This group ID is also associated with volumes mounted to the Pod and with any files created in these volumes.</p></li>
<li><p>When a Pod consumes a PersistentVolume that has a <code>pv.beta.kubernetes.io/gid</code> annotation, the annotated GID is applied to all Containers in the Pod in the same way that GIDs specified in the Pod’s security context are. </p></li>
</ol>
<p>Note, every GID, whether it originates from a PersistentVolume annotation or the Pod’s specification, is applied to the first process run in each Container.</p>
<p>Also, there are several ways to set up mount options for <code>PersistentVolume</code>s. A <code>PersistentVolume</code> is a piece of storage in the cluster that has been provisioned by an administrator. Also, it can be provisioned dynamically using a <code>StorageClass</code>. Therefore, you can specify mount options in a <code>PersistentVolume</code> when you create it manually. Or you can specify them in <code>StorageClass</code>, and every PersistentVolume requested from that class by a <code>PersistentVolumeClaim</code> will have these options.</p>
<p>It is better to use the <code>mountOptions</code> attribute rather than the <code>volume.beta.kubernetes.io/mount-options</code> annotation, and the <code>storageClassName</code> attribute instead of the <code>volume.beta.kubernetes.io/storage-class</code> annotation. These annotations were used before the attributes existed; they still work today, but they will be fully deprecated in a future Kubernetes release. Here is an example:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: with-permissions
provisioner: <your-provider>
parameters:
<option-for your-provider>
reclaimPolicy: Retain
mountOptions: #these options
- uid=1000
- gid=1000
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: test
spec:
accessModes:
- "ReadWriteOnce"
resources:
requests:
storage: "2Gi"
storageClassName: "with-permissions" #these options
</code></pre>
<p>Note that mount options are not validated, so mount will simply fail if one is invalid. And you can use <code>uid=1000, gid=1000</code> mount options for file systems like FAT or NTFS, but not for EXT4, for example.</p>
<p>Referring to your configuration:</p>
<ol>
<li><p>In your PVC yaml, <code>volume.beta.kubernetes.io/mount-options: "uid=1000,gid=1000"</code> has no effect, because mount options belong on a StorageClass or PV, not on a PVC.</p></li>
<li><p>You specified both <code>storageClassName: "default"</code> and <code>volume.beta.kubernetes.io/storage-class: default</code> in your PVC yaml, but they do the same thing. Also, the <code>default</code> <code>StorageClass</code> has no mount options set by default.</p></li>
<li><p>The <code>pv.beta.kubernetes.io/gid: "1000"</code> annotation in your PVC yaml does the same as the <code>securityContext.fsGroup: 1000</code> option in the Deployment definition, so the former is redundant.</p></li>
</ol>
<p>Try to create a <code>StorageClass</code> with the required mount options (<code>uid=1000, gid=1000</code>) and use a PVC to request a PV from it, as in the example above. After that, use a <code>Deployment</code> definition with a <code>securityContext</code> to set up access to the mounted PVC. But make sure that you are using mount options available for your file system.</p>
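<p>To check the result, you can compare the ownership of the mounted directory before and after. A quick sanity check (pod name taken from your output; adjust as needed):</p>
<pre><code># Which uid/gid does the container run as?
kubectl exec -it jmdlcbempty-68cd675757-q4mll -- id
# Who owns the mounted directory now?
kubectl exec -it jmdlcbempty-68cd675757-q4mll -- ls -ln /usr/share/logstash/
</code></pre>
<p>With the mount options and <code>securityContext</code> in place, <code>data</code> should then show up owned by <code>1000:1000</code> instead of <code>nobody</code>.</p>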
|
<p>To make ingress work <a href="https://medium.com/google-cloud/understanding-kubernetes-networking-ingress-1bc341c84078" rel="nofollow noreferrer">as far as I understand it</a> you need to create all services (that are using ingress controllers) as type NodePort.</p>
<p>Therefore is 2768 the service limit for ingress (maximum available NodePorts)?</p>
| <p>Your understanding isn't necessarily correct.</p>
<p>It depends on your environment, cloud provider, ingress controller, etc. Because the ingress controller is provisioned inside the cluster, all services that require an ingress can use ClusterIP, and the ingress controller will route traffic to them.</p>
<p>Again, depending on your platform, the only service that <em>needs</em> to be type=NodePort is the service attached to your ingress controller deployment. The rest can be ClusterIP.</p>
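<p>As a minimal sketch (names and host are placeholders, and it assumes an ingress controller such as nginx is already installed), a backend Service can stay on ClusterIP and the Ingress simply points at it:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: ClusterIP        # no NodePort required for ingress-backed services
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-app
          servicePort: 80
</code></pre>
<p>So the NodePort range doesn't limit how many services you can put behind the ingress controller.</p>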
|
<p>Is it possible to use the Ingress Controller function in Kubernetes without a load balancer (in Digital Ocean). </p>
<p>Is there any other mechanism to allow a domain name to map to a Kubernetes service; for instance if I host two WordPress sites on a Kubernetes cluster:</p>
<p>==> WP Site 1: Node Port 80
==> WP Site 2: Node Port 8080</p>
<p>How does a domain name map to the container port 8080 without explicitly entering the port number.</p>
<p>Any help is appreciated.</p>
| <p>DNS doesn't support adding port numbers, you need an ingress controller (which essentially acts like a reverse proxy) to do this.</p>
<p>If you install the <a href="https://github.com/digitalocean/digitalocean-cloud-controller-manager" rel="noreferrer">digital ocean cloud controller manager</a> you'll be able to provision loadbalancers using services with type LoadBalancer. You can then deploy a standard ingress controller, like the <a href="https://github.com/kubernetes/ingress-nginx" rel="noreferrer">nginx ingress controller</a> and give the service type=LoadBalancer.</p>
<p>This then becomes the ingress into your cluster, and you only have a single LoadBalancer, which keeps costs down.</p>
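<p>As a rough sketch of what that looks like for the two WordPress sites (hostnames and service names are placeholders), both backends keep ordinary ClusterIP Services on port 80 and the Ingress does the host-based routing:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wordpress-sites
spec:
  rules:
  - host: site1.example.com
    http:
      paths:
      - backend:
          serviceName: wp-site-1
          servicePort: 80
  - host: site2.example.com
    http:
      paths:
      - backend:
          serviceName: wp-site-2
          servicePort: 80
</code></pre>
<p>Point both domains' DNS records at the load balancer created for the ingress controller, and no port numbers ever appear in the URLs.</p>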
|
<p>I'm using <a href="https://kubernetes.io/docs/setup/minikube/" rel="nofollow noreferrer">minikube</a> to test out a local Kubernetes scenario. I have 2 <a href="http://flask.pocoo.org/" rel="nofollow noreferrer">Python Flask</a> applications named 'frontend.py' and 'interceptor.py'. Basically I want to do a GET to the frontend application which will then perform a POST to the interceptor application, and receive a response back from the interceptor. I have this working if I manually find the interceptor application's IP and provide it in my 'frontend.py' script. </p>
<p>My question: <strong>Is there a recommended way to automatically determine the IP address of an application running in one pod's container from an application inside another pod's container?</strong> </p>
<p>I'm wondering if I might not even need to gather the IP in the application themselves if there is some other way to expose the pods' info to each other via the <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Kubernetes Service</a> that I create, but I haven't had any luck yet.</p>
<p>Below are my services, and deployments.</p>
<p><strong>flask-frontend-deployment.yaml</strong></p>
<pre><code>apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: frontend-deployment
labels:
app: flask
spec:
selector:
matchLabels:
app: flask
replicas: 1
template:
metadata:
labels:
app: flask
tier: frontend
spec:
containers:
- name: flask-frontend
image: alec/flask/frontend:1.0
resources:
requests:
cpu: 100m
memory: 100Mi
env:
- name: GET_HOSTS_FROM
value: dns
ports:
- containerPort: 5001
</code></pre>
<p><strong>flask-frontend-service.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: frontend-service
spec:
type: NodePort
selector:
app: flask
ports:
- protocol: TCP
port: 5001
targetPort: 5001
</code></pre>
<p><strong>flask-interceptor-deployment.yaml</strong></p>
<pre><code>apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
name: interceptor-deployment
labels:
app: flask
spec:
selector:
matchLabels:
app: inter
role: mid
tier: middle
replicas: 1
template:
metadata:
labels:
app: inter
role: mid
tier: middle
spec:
containers:
- name: flask-interceptor
image: alec/flask/interceptor:1.0
resources:
requests:
cpu: 100m
memory: 100Mi
env:
- name: GET_HOSTS_FROM
value: env
ports:
- containerPort: 5002
</code></pre>
<p><strong>flask-interceptor-service.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: interceptor-service
spec:
ports:
- protocol: TCP
port: 5002
</code></pre>
<p><strong>frontend.py</strong></p>
<pre><code>from flask import Flask
from flask import request
import requests
app = Flask(__name__)
@app.route("/")
def hello():
address = 'http://172.17.0.5:5002/' #<--- this is the IP I want to get automatically
r = requests.post(address, data="---[Token]---")
return "Frontend Here. Response from Backend = "+str(r.content)
if __name__ == "__main__":
app.run(debug=True,host='0.0.0.0',port=5001)
</code></pre>
<p><strong>interceptor.py</strong></p>
<pre><code>from flask import Flask
from flask import request
import requests
app = Flask(__name__)
content = ""
@app.route("/", methods=['POST'])
def testPost():
if request.method == 'POST':
return "Received POST --->>> " + str(request.data)
else:
return "Post didnt work"
@app.route("/", methods=['GET'])
def hello():
return "Hello from the interceptor!"
if __name__ == "__main__":
app.run(debug=True,host='0.0.0.0',port=5002)
</code></pre>
| <p>As long as you have the <code>kube-dns</code> add-on enabled in <code>minikube</code> you should be able to perform DNS-based service discovery. I would say that this is the recommended way of getting hold of the IP addresses of your resources.</p>
<p>To double check that you have <code>kube-dns</code> enabled run <code>minikube addons list</code>, it should be enabled by default.</p>
<p>The DNS names created for you will by default be derived from the <code>metadata: name</code> field of your resource(s). Assuming you'll be using the <code>default</code> namespace, your Services' DNS names would be:</p>
<p><code>interceptor-service.default.svc.cluster.local</code></p>
<p>and</p>
<p><code>frontend-service.default.svc.cluster.local</code></p>
<p>You could use these fully qualified domain-names within your application to reach your Services or use a shorter version e.g. <code>frontend-service</code> as-is. This is possible since your Pods <code>resolv.conf</code> will be configured to look in <code>default.svc.cluster.local</code> when querying for <code>frontend-service</code>.</p>
<p>To test this (and noted in the comments), you should now be able to change this line:</p>
<p><code>address = 'http://172.17.0.5:5002/'</code></p>
<p>to</p>
<p><code>address = 'http://interceptor-service:5002/'</code></p>
<p>in your Flask app code.</p>
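<p>If you want to sanity-check DNS resolution before touching the code, you can do it from inside the cluster with a throwaway Pod (busybox here is just an example image):</p>
<pre><code>kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup interceptor-service
</code></pre>
<p>It should resolve to the Service's ClusterIP, confirming that the short name works from within the <code>default</code> namespace.</p>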
<p>Please see <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">this</a> page for more detailed information about DNS for Services and Pods in Kubernetes.</p>
|
<p>I came across an open source Kubernetes project <a href="https://github.com/kubernetes/kops" rel="noreferrer">KOPS</a> and AWS Kubernetes service EKS. Both these products allow installation of a Kubernetes cluster. However, I wonder why one would pick EKS over KOPS or vice versa if one has not run any of them earlier. </p>
<p>This question does not ask which one is better, but rather asks for a comparison.</p>
| <p>The two are largely the same. At the time of writing, these are the differences I'm aware of between the two offerings:</p>
<p>EKS:</p>
<ul>
<li>Fully managed control plane from AWS - you have no control over the masters</li>
<li><a href="https://docs.aws.amazon.com/eks/latest/userguide/managing-auth.html" rel="noreferrer">AWS native authentication IAM authentication with the cluster</a></li>
<li><a href="https://aws.amazon.com/blogs/opensource/networking-foundation-eks-aws-cni-calico/" rel="noreferrer">VPC level networking for pods</a> meaning you can use things like security groups at the cluster/pod level</li>
</ul>
<p>kops:</p>
<ul>
<li>Support for more Kubernetes features, such as <a href="https://github.com/kubernetes/kops/blob/master/docs/cluster_spec.md" rel="noreferrer">API server options</a></li>
<li>Auto-provisioned nodes use the built-in kops <code>nodeup</code> tool</li>
<li>More flexibility over Kubernetes versions; EKS only has a few versions available right now</li>
</ul>
|
<p>Please help me. I'm following this guide: <a href="https://pleasereleaseme.net/deploy-a-dockerized-asp-net-core-application-to-kubernetes-on-azure-using-a-vsts-ci-cd-pipeline-part-1/" rel="nofollow noreferrer">https://pleasereleaseme.net/deploy-a-dockerized-asp-net-core-application-to-kubernetes-on-azure-using-a-vsts-ci-cd-pipeline-part-1/</a>. My docker images is successfully built and pushed to my private container registry and my CD pipeline is okay as well. But when I tried to check it using kubectl get pods the status is always pending, when I tried to use <code>kubectl describe pod k8s-aspnetcore-deployment-64648bb5ff-fxg2k</code> the message is:</p>
<pre><code>Name: k8s-aspnetcore-deployment-64648bb5ff-fxg2k
Namespace: default
Node: <none>
Labels: app=k8s-aspnetcore
pod-template-hash=2020466199
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicaSet/k8s-aspnetcore-deployment-64648bb5ff
Containers:
k8s-aspnetcore:
Image: mycontainerregistryb007.azurecr.io/k8saspnetcore:2033
Port: 80/TCP
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-tr892 (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
default-token-tr892:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-tr892
Optional: false
QoS Class: BestEffort
Node-Selectors: beta.kubernetes.io/os=windows
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 3m (x57 over 19m) default-scheduler 0/1 nodes are available: 1 MatchNodeSelector.
</code></pre>
<p>Here is for kubectl describe deployment:</p>
<pre><code>Name: k8s-aspnetcore-deployment
Namespace: default
CreationTimestamp: Sat, 21 Jul 2018 13:41:52 +0000
Labels: app=k8s-aspnetcore
Annotations: deployment.kubernetes.io/revision=2
Selector: app=k8s-aspnetcore
Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 5
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: app=k8s-aspnetcore
Containers:
k8s-aspnetcore:
Image: mycontainerregistryb007.azurecr.io/k8saspnetcore:2033
Port: 80/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: k8s-aspnetcore-deployment-64648bb5ff (1/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 26m deployment-controller Scaled up replica set k8s-aspnetcore-deployment-7f756cc78c to 1
Normal ScalingReplicaSet 26m deployment-controller Scaled up replica set k8s-aspnetcore-deployment-64648bb5ff to 1
Normal ScalingReplicaSet 26m deployment-controller Scaled down replica set k8s-aspnetcore-deployment-7f756cc78c to 0
</code></pre>
<p>Here is for kubectl describe service:</p>
<pre><code>Name: k8s-aspnetcore-service
Namespace: default
Labels: version=test
Annotations: <none>
Selector: app=k8s-aspnetcore
Type: LoadBalancer
IP: 10.0.26.188
LoadBalancer Ingress: 40.112.73.28
Port: <unset> 80/TCP
TargetPort: 80/TCP
NodePort: <unset> 30282/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 26m service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 25m service-controller Ensured load balancer
</code></pre>
<p>I don't know if my setup for Kubernetes Cluster is wrong, but here is the script I used:</p>
<pre><code>#!/bin/bash
azureSubscriptionId="xxxxxxx-xxxxx-xxxx-xxx-xxxxxxxx"
resourceGroup="k8sResourceGroup"
clusterName="k8sCluster"
location="northeurope"
# Useful if you have more than one Aure subscription
az account set --subscription $azureSubscriptionId
# Resource group for cluster - only available in certain regions at time of writing
az group create --location $location --name $resourceGroup
# Create actual cluster
az aks create --resource-group $resourceGroup --name $clusterName --node-count 1 --generate-ssh-keys
# Creates a config file at ~/.kube on local machine to tell kubectl which cluster it should work with
az aks get-credentials --resource-group $resourceGroup --name $clusterName
</code></pre>
<p>Here is my deployment.yaml:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: k8s-aspnetcore-deployment
spec:
replicas: 1
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
minReadySeconds: 5
template:
metadata:
labels:
app: k8s-aspnetcore
spec:
containers:
- name: k8s-aspnetcore
image: mycontainerregistryb007.azurecr.io/k8saspnetcore
ports:
- containerPort: 80
imagePullSecrets:
- name: k8ssecret
nodeSelector:
"beta.kubernetes.io/os": windows
</code></pre>
<p>Here is my service.yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: k8s-aspnetcore-service
labels:
version: test
spec:
selector:
app: k8s-aspnetcore
ports:
- port: 80
type: LoadBalancer
</code></pre>
| <p>The reason they are in Pending state is described in the message at the bottom of the <code>describe pod</code> output: <code>MatchNodeSelector</code>. That means Kubernetes didn't find a Node in your cluster that was able to fulfill the Node selection criteria specified in the PodSpec.</p>
<p>Specifically, it's very likely these lines:</p>
<pre><code>nodeSelector:
"beta.kubernetes.io/os": windows
</code></pre>
<p>Only a <code>kubectl describe nodes</code> would tell whether there are any Nodes that could possibly fulfill those criteria.</p>
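<p>As a sketch (not verified against your cluster), you could compare the Node's labels with the selector and adjust the deployment accordingly:</p>
<pre><code># See what labels the node actually carries
kubectl get nodes --show-labels

# A default AKS node pool runs Linux, so this selector can never match:
#   nodeSelector:
#     "beta.kubernetes.io/os": windows
# Either remove the nodeSelector block from deployment.yaml, or point it at a
# label the node really has, e.g. "beta.kubernetes.io/os": linux
</code></pre>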
|
<p>I am trying to enable some admission controllers on EKS. How do you see the existing admission controllers and enable new ones?</p>
| <p>I don't believe this is possible at this time. The control plane is managed by Amazon, and it's not possible to modify it.</p>
<p>If you need a Kubernetes cluster in AWS with these kind of options, use <a href="https://github.com/kubernetes/kops" rel="noreferrer">kops</a></p>
|
<p>I'm trying to deploy my Dockerized React app to Kubernetes. I believe I've dockerized it correctly, but i'm having trouble accessing the exposed pod.</p>
<p>I don't have experience in Docker or Kubernetes, so any help would be appreciated.</p>
<p>My React app is just static files (from npm run build) being served from Tomcat.</p>
<p>My Dockerfile is below. In summary, I put my app in the Tomcat folder and expose port 8080.</p>
<pre><code>FROM private-docker-registry.com/repo/tomcat:latest
EXPOSE 8080:8080
# Copy build directory to Tomcat webapps directory
RUN mkdir -p /tomcat/webapps/app
COPY /build/sample-app /tomcat/webapps/app
# Create a symbolic link to ROOT -- this way app starts at root path (localhost:8080)
RUN ln -s /tomcat/webapps/app /tomcat/webapps/ROOT
# Start Tomcat
ENTRYPOINT ["catalina.sh", "run"]
</code></pre>
<p>I built and pushed the Docker image to the private Docker registry.
I verified that the container runs correctly by running it like this:</p>
<pre><code>docker run -p 8080:8080 private-docker-registry.com/repo/sample-app:latest
</code></pre>
<p>Then, if I go to localhost:8080, I see the homepage of my React app.</p>
<p>Now, the trouble I'm having is deploying to Kubernetes and accessing the app externally.</p>
<p>Here's my deployment.yaml file:</p>
<pre><code>kind: Deployment
apiVersion: apps/v1beta2
metadata:
name: sample-app
namespace: dev
labels:
app: sample-app
spec:
replicas: 1
selector:
matchLabels:
app: sample-app
template:
metadata:
labels:
app: sample-app
spec:
containers:
- name: sample-app
image: private-docker-registry.com/repo/sample-app:latest
ports:
- containerPort: 8080
protocol: TCP
nodeSelector:
TNTRole: luxkube
---
kind: Service
apiVersion: v1
metadata:
name: sample-app
labels:
app: sample-app
spec:
selector:
app: sample-app
type: NodePort
ports:
- port: 80
targetPort: 8080
protocol: TCP
</code></pre>
<p>I created the deployment and service by running
kubectl --namespace=dev create -f deployment.yaml</p>
<pre><code>Output of 'describe deployment'
Name: sample-app
Namespace: dev
CreationTimestamp: Sat, 21 Jul 2018 12:27:30 -0400
Labels: app=sample-app
Annotations: deployment.kubernetes.io/revision=1
Selector: app=sample-app
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=sample-app
Containers:
sample-app:
Image: private-docker-registry.com/repo/sample-app:latest
Port: 8080/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: sample-app-bb6f59b9 (1/1 replicas created)
Events: <none>
Output of 'describe service'
Name: sample-app
Namespace: fab-dev
Labels: app=sample-app
Annotations: <none>
Selector: app=sample-app
Type: NodePort
IP: 10.96.29.199
Port: <unset> 80/TCP
TargetPort: 8080/TCP
NodePort: <unset> 34604/TCP
Endpoints: 192.168.138.145:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>Now I don't know which IP and port I should be using to access the app.
I have tried every combination but none has loaded my app. I believe the port should be 80, so if I just have the IP, I should be able to go to the browser and access the React app by going to http://.</p>
<p>Does anyone have suggestions?</p>
| <p>The short version is that the Service is listening on the same TCP/IP port on every Node in your cluster (<code>34604</code>) as is shown in the output of <code>describe service</code>:</p>
<pre><code>NodePort: <unset> 34604
</code></pre>
<p>If you wish to access the application through a "nice" URL, you'll want a load balancer that can translate the hostname into the in-cluster IP and port combination. That's what an Ingress controller is designed to do, but it isn't the only way -- changing the Service to be <code>type: LoadBalancer</code> will do that for you, if you're running in a cloud environment where Kubernetes knows how to programmatically create load balancers for you.</p>
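<p>Concretely, a quick way to test it (the node IP is whatever your cluster reports; nothing below is specific to your setup):</p>
<pre><code># Find a node address (EXTERNAL-IP or INTERNAL-IP, depending on where you call from)
kubectl get nodes -o wide

# Then hit any node on the NodePort shown by `describe service`
curl http://<node-ip>:34604/
</code></pre>
<p>If you'd rather have a stable external address, change <code>type: NodePort</code> to <code>type: LoadBalancer</code> in the Service and use the external IP that shows up in <code>kubectl get svc sample-app</code>, assuming your cluster runs on a cloud provider that supports it.</p>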
|
<p>With a Kubernetes cluster up and running, and the ability to SSH to the master with ssh-keys and run kubectl commands there, I want to run kubectl commands on my local machine. So I try to set up the configuration, following the <a href="https://kubernetes-v1-4.github.io/docs/user-guide/kubectl/kubectl_config_set-cluster/" rel="nofollow noreferrer">kubectl config</a> docs:</p>
<pre><code>kubectl config set-cluster mykube --server=https://<master-ip>:6443
kubectl config set-context mykube --cluster=mykube --user=mykube-adm
kubectl config set-credentials mykube-adm --client-key=path/to/private/keyfile
</code></pre>
<p>Activate the context:</p>
<pre><code>kubectl config use-context mykube
</code></pre>
<p>When I run a kubectl command:</p>
<pre><code>kubectl get nodes
</code></pre>
<p>It returns:</p>
<blockquote>
<p>The connection to the server localhost:8080 was refused - did you specify the right host or port?</p>
</blockquote>
<p><strong>The output of <code>kubectl config view</code></strong></p>
<pre><code>apiVersion: v1
clusters:
- cluster:
server: https://<master-ip>:6443
name: mykubecontexts:
- context:
cluster: mykube
user: mykube-adm
name: mykube
current-context: mykube
kind: Config
preferences: {}
users:
- name: mykube-adm
user:
client-key: path/to/private/keyfile
</code></pre>
| <p>Unfortunately, the <code>kubectl</code> config file above is incorrect. It looks like a newline was lost during manual formatting or copy-pasting.</p>
<p>A newline is missing in this part (<code>name: mykubecontexts:</code>):</p>
<pre><code>clusters:
- cluster:
server: https://<master-ip>:6443
name: mykubecontexts:
- context:
cluster: mykube
user: mykube-adm
name: mykube
</code></pre>
<p>Correct one is:</p>
<pre><code>clusters:
- cluster:
server: https://<master-ip>:6443
name: mykube
contexts:
- context:
cluster: mykube
user: mykube-adm
name: mykube
</code></pre>
<p>That's why cluster's name is <code>mykubecontexts:</code>:</p>
<pre><code>clusters:
- cluster:
server: https://<master-ip>:6443
name: mykubecontexts:
</code></pre>
<p>and that's why there is no context in it, because <code>contexts:</code> is not defined.</p>
<p><code>kubectl</code> cannot find context <code>mykube</code> and switches to default one where <code>server=localhost:8080</code> is by default.</p>
<p><code>kubectl</code> config is located in <code>${HOME}/.kube/config</code> file by default if <code>--kubeconfig</code> flag or <code>$KUBECONFIG</code> environment variable are not set.</p>
<p>Please correct it to the following one:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
server: https://<master-ip>:6443
name: mykube
contexts:
- context:
cluster: mykube
user: mykube-adm
name: mykube
current-context: mykube
kind: Config
preferences: {}
users:
- name: mykube-adm
user:
client-key: path/to/private/keyfile
</code></pre>
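<p>After correcting the file, a quick sanity check (assuming the key path is valid) should confirm the context is picked up:</p>
<pre><code>kubectl config current-context   # should print "mykube"
kubectl config view              # cluster/context/user should match the config above
kubectl get nodes                # should now go to https://<master-ip>:6443, not localhost:8080
</code></pre>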
|
<p>I am trying to install Minikube on a GCP VM. I am running into an issue where the OS is complaining that VT-X/AMD-v needs to be enabled. Are there any specific instructions for setting this up on GCP?</p>
| <p><a href="https://cloud.google.com/compute/docs/instances/enable-nested-virtualization-vm-instances" rel="nofollow noreferrer">Nested Virtualization</a> is supported on GCP and I can confirm the documentation I've linked is up to date and workable.</p>
<p>Quoting the 3 basic points here that you need:</p>
<ul>
<li>A supported OS
<ul>
<li>CentOS 7 with kernel version 3.10</li>
<li>Debian 9 with kernel version 4.9</li>
<li>Debian 8 with kernel version 3.16</li>
<li>RHEL 7 with kernel version 3.10</li>
<li>SLES 12.2 with kernel version 4.4</li>
<li>SLES 12.1 with kernel version 3.12</li>
<li>Ubuntu 16.04 LTS with kernel version 4.4</li>
<li>Ubuntu 14.04 LTS with kernel version 3.13</li>
</ul></li>
<li>Create an <strong>image</strong> using the special licence <code>https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx</code> (this is offered at no additional cost; it simply signals GCE that you want the feature enabled on instances using this image)
<ul>
<li>Create it using an already existing <strong>disk</strong> (for example): <code>gcloud compute images create nested-vm-image --source-disk disk1 --source-disk-zone us-central1-a --licenses "https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"</code> (You will have to create disk1 yourself, for example by starting an instance from an OS image, and deleting the instance afterwards while keeping the boot disk)</li>
<li>Create it using an already existing <strong>image</strong> with (for example): <code>gcloud compute images create nested-vm-image --source-image=debian-10-buster-v20200326 --source-image-project=debian-cloud --licenses="https://www.googleapis.com/compute/v1/projects/vm-options/global/licenses/enable-vmx"</code></li>
</ul></li>
<li>Create an <strong>instance</strong> from a nested virtualization enabled image. Something like: <code>gcloud compute instances create example-nested-vm --zone us-central1-b --image nested-vm-image</code> . Keep in mind that you need to pick a zone that has at least Haswell CPUs.</li>
</ul>
<p>SSH into the new instance and verify that the feature is enabled by running <code>grep vmx /proc/cpuinfo</code>. If you get any output it means that the feature is enabled successfully.</p>
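<p>From there, minikube still needs a hypervisor and a matching driver on the instance. A rough sketch for a Debian/Ubuntu image (package names vary by distribution, and the driver download URL is the one documented by minikube at the time of writing):</p>
<pre><code># Install KVM/libvirt
sudo apt-get update && sudo apt-get install -y qemu-kvm libvirt-clients libvirt-daemon-system

# Install the kvm2 driver and start minikube with it
curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-kvm2
chmod +x docker-machine-driver-kvm2 && sudo mv docker-machine-driver-kvm2 /usr/local/bin/
minikube start --vm-driver=kvm2
</code></pre>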
|
<p>I am working with Minikube and I have an alpine pod with one container.<br>
When I run:<br>
<code>kubectl exec -it -v=6 alpinec1-7c65db48b4-v2gpc /bin/sh</code> </p>
<p>I receive a shell and I can run any command (<code>ifconfig</code>, etc.) inside it. </p>
<p>But when I tried to run <code>sh</code> with <code>-c</code> it failed:</p>
<pre><code>root:~# kubectl exec -it -v=6 alpinec1-7c65db48b4-v2gpc /bin/sh -c 'ifconfig'
I0722 05:45:25.091111 80392 loader.go:357] Config loaded from file /home/root/.kube/config
I0722 05:45:25.111876 80392 round_trippers.go:405] GET https://192.168.190.143:8443/api/v1/namespaces/default/pods/alpinec1-7c65db48b4-v2gpc 200 OK in 16 milliseconds
I0722 05:45:25.232564 80392 round_trippers.go:405] POST https://192.168.190.143:8443/api/v1/namespaces/default/pods/alpinec1-7c65db48b4-v2gpc/exec?command=%2Fbin%2Fsh&container=ifconfig&container=ifconfig&stdin=true&stdout=true&tty=true 400 Bad Request in 13 milliseconds
I0722 05:45:25.232921 80392 helpers.go:201] server response object: [{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "container ifconfig is not valid for pod alpinec1-7c65db48b4-v2gpc",
"reason": "BadRequest",
"code": 400
}]
F0722 05:45:25.233095 80392 helpers.go:119] Error from server (BadRequest): container ifconfig is not valid for pod alpinec1-7c65db48b4-v2gpc
</code></pre>
| <p>kubectl interprets the <code>-c</code> flag not as a flag for <code>ifconfig</code>, but as a flag for the <code>kubectl exec</code> command itself -- which specifies the exact container of a Pod in which the command should be executed; this is also the reason that <code>kubectl</code> looks for a <em>container</em> named "ifconfig" in your Pod. See <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec" rel="noreferrer">the documentation</a> for more information.</p>
<p>Instead, use <code>--</code> to denote flags that should not be interpreted by <code>kubectl exec</code> any more but instead be passed to the invoked command (<code>ifconfig</code>, in this case) as-is:</p>
<pre><code>$ kubectl exec -it -v=6 alpinec1-7c65db48b4-v2gpc -- /bin/sh -c 'ifconfig'
</code></pre>
<p>Also note that in this case, you do not really need to invoke ifconfig from a shell; you could also just directly call <code>ifconfig</code> without using <code>/bin/sh</code>:</p>
<pre><code>$ kubectl exec -it -v=6 alpinec1-7c65db48b4-v2gpc -- ifconfig
</code></pre>
|
<p>I run my application in one pod and the Mongo database in another pod.
For my application to start successfully, it needs to know the IP address where Mongo is running.</p>
<p>I have questions below:</p>
<ol>
<li>How do I get to know the Mongo pod's IP address so that I can configure it in my application?</li>
<li>My application will run on some IP & port, and this is provided as part of some configuration file. But as these are containerized and Kubernetes assigns the Pod IP address, how can my application pick up this IP address as its own IP?</li>
</ol>
| <p>You need to expose MongoDB using a Kubernetes Service. With a Service, the application does not need to know the actual IP address of the Pod; it can resolve MongoDB by the service name instead.</p>
<p>Reference: <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a></p>
<p>An example using mysql:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
name: mysql
name: mysql
spec:
ports:
- port: 3306
selector:
name: mysql
---
apiVersion: v1
kind: Pod
metadata:
name: mysql
labels:
name: mysql
spec:
containers:
  - image: mysql
    name: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: wppassword
    ports:
    - containerPort: 3306
      name: mysql
</code></pre>
<p>If an application container in the same namespace needs to reach the mysql container, it can connect directly via <code>mysql:3306</code> without using the Pod IP address, or via <code>mysql.namespace_name:3306</code> if the app is in a different namespace.</p>
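<p>Applied to your case, a minimal sketch (labels and names are placeholders) would be a Service in front of the Mongo Pod, with the application using the name instead of an IP:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  selector:
    app: mongo          # must match the labels on your Mongo pod
  ports:
  - port: 27017
    targetPort: 27017
</code></pre>
<p>The application can then use a connection string like <code>mongodb://mongo:27017/mydb</code> (or <code>mongodb://mongo.namespace_name:27017/mydb</code> from another namespace), so no Pod IP ever needs to appear in its configuration.</p>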
|