<p>I'm having an issue because an application was originally configured to run on docker-compose.
I managed to port and rewrite the .yaml deployment files to Kubernetes; however, the issue lies in the communication between the pods. </p>
<p>The frontend communicates with the backend to access the services, and I assume that, since they should be on the same network, the frontend calls the services on localhost.
I don't have access to the code, as it is a proprietary application that was developed by a company and does not support Kubernetes, so modifying the code is out of the question.</p>
<p>I believe the main reason is that the frontend and backend are running in different pods, with different IPs. </p>
<p>When the frontend tries to call the APIs, it does not find the service, and returns an error.
Therefore, I'm trying to deploy both the frontend image and backend image into the same pod, so they share the same Cluster IP.</p>
<p>Unfortunately, I do not know how to make a yaml file to create both containers within a single pod.</p>
<p>Is it possible to have both frontend and backend containers running on the same pod, or would there be another way to make the containers communicate (maybe a proxy)?</p>
| <p>Yes, you just add entries to the <code>containers</code> section in your yaml file, example:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  restartPolicy: Never
  containers:
  - name: nginx-container
    image: nginx
  - name: debian-container
    image: debian
</code></pre>
|
<p>I got an issue when I try to access an exposed <code>kubernetes</code> service through browser. Below is my Environment.</p>
<p>I created two <code>ubuntu</code> EC2 instances (with all ports open in the security group) and installed all Kubernetes-related tools like kubectl, kubeadm, docker, and the calico network.</p>
<p>I created an <code>nginx</code> pod, scaled it to 3 and exposed it with type <b>LoadBalancer</b>. When I curl from the master or worker node to the exposed nginx it works fine (with the public or private IP). But it does not work if I curl from outside; the request times out. I tried to delete the service and expose it again with NodePort, but I still could not access it from outside. I ensured the security group allows ingress. Is there a way to debug why it cannot be accessed from outside, or am I missing something?</p>
<p>I am not running <code>cloud controller manager</code> but <code>kube-controller-manager</code>. Will this be an issue?</p>
<p>Below is the output of all Kubernetes components:</p>
<pre><code>ubuntu@ip-172-31-29-98:~$ kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default pod/nginx-6f858d4d45-2wtlh 1/1 Running 0 51m
default pod/nginx-6f858d4d45-5dkws 1/1 Running 0 51m
default pod/nginx-6f858d4d45-h9cwg 1/1 Running 0 51m
kube-system pod/calico-etcd-82xkv 1/1 Running 1 18h
kube-system pod/calico-kube-controllers-74b888b647-prr2q 1/1 Running 1 18h
kube-system pod/calico-node-kbckk 2/2 Running 4 17h
kube-system pod/calico-node-n5zhr 2/2 Running 3 18h
kube-system pod/coredns-78fcdf6894-qjhlq 1/1 Running 1 18h
kube-system pod/coredns-78fcdf6894-sm7c9 1/1 Running 1 18h
kube-system pod/etcd-ip-172-31-29-98 1/1 Running 1 18h
kube-system pod/kube-apiserver-ip-172-31-29-98 1/1 Running 1 18h
kube-system pod/kube-controller-manager-ip-172-31-29-98 1/1 Running 1 18h
kube-system pod/kube-proxy-jxg88 1/1 Running 1 18h
kube-system pod/kube-proxy-knx59 1/1 Running 1 17h
kube-system pod/kube-scheduler-ip-172-31-29-98 1/1 Running 1 18h
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 18h
default service/nginx LoadBalancer 10.99.144.149 <pending> 80:31808/TCP 45m
kube-system service/calico-etcd ClusterIP 10.96.232.136 <none> 6666/TCP 18h
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 18h
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/calico-etcd 1 1 1 1 1 node-role.kubernetes.io/master= 18h
kube-system daemonset.apps/calico-node 2 2 2 2 2 <none> 18h
kube-system daemonset.apps/kube-proxy 2 2 2 2 2 beta.kubernetes.io/arch=amd64 18h
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
default deployment.apps/nginx 3 3 3 3 51m
kube-system deployment.apps/calico-kube-controllers 1 1 1 1 18h
kube-system deployment.apps/calico-policy-controller 0 0 0 0 18h
kube-system deployment.apps/coredns 2 2 2 2 18h
NAMESPACE NAME DESIRED CURRENT READY AGE
default replicaset.apps/nginx-6f858d4d45 3 3 3 51m
kube-system replicaset.apps/calico-kube-controllers-74b888b647 1 1 1 18h
kube-system replicaset.apps/calico-policy-controller-55b469c8fd 0 0 0 18h
kube-system replicaset.apps/coredns-78fcdf6894 2 2 2 18h
</code></pre>
<p><strong>Edit 1:</strong>
Tried to do the same in <code>GCloud</code>, and it is the same there. The <code>nginx</code> service is accessible through the private/public IP inside the nodes, but when I curl from outside, it does not work. I spun up a simple Python server on one of the nodes and I am able to access it from outside. Only the services exposed through <code>kubernetes</code> are not curl-able from outside. </p>
<p>I think I am missing some fundamental understanding about <code>kubernetes</code> networking (especially in the cloud). Can I get any help from experts?</p>
| <p>How did you create your cluster? <code>kubeadm</code>? You need some custom configs for your cluster to run with AWS. For example your <code>kube-controller-manager</code> and <code>kube-apiserver</code> need to have the option <code>--cloud-provider=aws</code>. Same for all your kubelets. </p>
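<p>For illustration, on a kubeadm cluster this usually means editing the static pod manifests on the master and the kubelet settings on every node. A rough sketch (the file path and surrounding flags are assumptions; your manifests may differ):</p>
<pre><code># /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt) -- same idea for kube-controller-manager.yaml
spec:
  containers:
  - command:
    - kube-apiserver
    - --cloud-provider=aws   # added
    # ...keep the existing flags as they are...
</code></pre>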
<p>I'd recommend using <a href="https://github.com/kubernetes/kops" rel="nofollow noreferrer">kops</a> if you don't want to deal with this.</p>
|
<p>I have an installer that spins up two pods in my CI flow, let's call them web and activemq. When the web pod starts it tries to communicate with the activemq pod using the k8s assigned amq-deployment-0.activemq pod name. </p>
<p>Randomly, the web will get an unknown host exception when trying to access amq-deployment1.activemq. If I restart the web pod in this situation the web pod will have no problem communicating with the activemq pod. </p>
<p>I've logged into the web pod when this happens and the /etc/resolv.conf and /etc/hosts files look fine. The host machine's /etc/resolv.conf and /etc/hosts are sparse, with nothing that looks questionable.</p>
<p>Information:
There is only 1 worker node.</p>
<p>kubectl --version
Kubernetes v1.8.3+icp+ee</p>
<p>Any ideas on how to go about debugging this issue? I can't think of a good reason for it to happen randomly nor resolve itself on a pod restart. </p>
<p>If there is other useful information needed, I can get it. Thanks in advance.</p>
<p>For activeMQ we do have this service file</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: activemq
  labels:
    app: myapp
    env: dev
spec:
  ports:
  - port: 8161
    protocol: TCP
    targetPort: 8161
    name: http
  - port: 61616
    protocol: TCP
    targetPort: 61616
    name: amq
  selector:
    component: analytics-amq
    app: myapp
    environment: dev
    type: fa-core
  clusterIP: None
</code></pre>
<p>And this ActiveMQ stateful set (this is the template)</p>
<pre><code>kind: StatefulSet
apiVersion: apps/v1beta1
metadata:
name: pa-amq-deployment
spec:
replicas: {{ activemqs }}
updateStrategy:
type: RollingUpdate
serviceName: "activemq"
template:
metadata:
labels:
component: analytics-amq
app: myapp
environment: dev
type: fa-core
spec:
containers:
- name: pa-amq
image: default/myco/activemq:latest
imagePullPolicy: Always
resources:
limits:
cpu: 150m
memory: 1Gi
livenessProbe:
exec:
command:
- /etc/init.d/activemq
- status
initialDelaySeconds: 10
periodSeconds: 15
failureThreshold: 16
ports:
- containerPort: 8161
protocol: TCP
name: http
- containerPort: 61616
protocol: TCP
name: amq
envFrom:
- configMapRef:
name: pa-activemq-conf-all
- secretRef:
name: pa-activemq-secret
volumeMounts:
- name: timezone
mountPath: /etc/localtime
volumes:
- name: timezone
hostPath:
path: /usr/share/zoneinfo/UTC
</code></pre>
<p>The Web stateful set:</p>
<pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: pa-web-deployment
spec:
replicas: 1
updateStrategy:
type: RollingUpdate
serviceName: "pa-web"
template:
metadata:
labels:
component: analytics-web
app: myapp
environment: dev
type: fa-core
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: component
operator: In
values:
- analytics-web
topologyKey: kubernetes.io/hostname
containers:
- name: pa-web
image: default/myco/web:latest
imagePullPolicy: Always
resources:
limits:
cpu: 1
memory: 2Gi
readinessProbe:
httpGet:
path: /versions
port: 8080
initialDelaySeconds: 30
periodSeconds: 15
failureThreshold: 76
livenessProbe:
httpGet:
path: /versions
port: 8080
initialDelaySeconds: 30
periodSeconds: 15
failureThreshold: 80
securityContext:
privileged: true
ports:
- containerPort: 8080
name: http
protocol: TCP
envFrom:
- configMapRef:
name: pa-web-conf-all
- secretRef:
name: pa-web-secret
volumeMounts:
- name: shared-volume
mountPath: /MySharedPath
- name: timezone
mountPath: /etc/localtime
volumes:
- nfs:
server: 10.100.10.23
path: /MySharedPath
name: shared-volume
- name: timezone
hostPath:
path: /usr/share/zoneinfo/UTC
</code></pre>
<p>This web pod also has a similar "unknown host" problem finding an external database we have configured; that issue is likewise resolved by restarting the pod. Here is the configuration of that external service. Maybe it is easier to tackle the problem from this angle? ActiveMQ has no problem using the database service name to find the DB and start up.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: dbhost
  labels:
    app: myapp
    env: dev
spec:
  type: ExternalName
  externalName: mydb.host.com
</code></pre>
| <p>Is it possible that it is a question of which pod, and the app in its container, is started up first and which second?</p>
<p>In any case, connecting using a <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Service</a> and not the pod name would be recommended as the pod's name assigned by Kubernetes changes between pod restarts.</p>
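<p>For example, instead of relying on the Kubernetes-assigned pod name, the web pod could be pointed at the Service's stable DNS name. A sketch only; the variable names below are hypothetical and depend on what your application actually reads, and the namespace should be adjusted to yours:</p>
<pre><code>env:
- name: AMQ_HOST   # hypothetical variable read by the web application
  value: activemq.default.svc.cluster.local
- name: AMQ_PORT
  value: "61616"
</code></pre>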
<p>A way to test connectivity, is to use <code>telnet</code> (or <a href="https://curl.haxx.se/" rel="nofollow noreferrer"><code>curl</code></a> for the protocols it supports), if found in the image:</p>
<pre><code>telnet <host/pod/Service> <port>
</code></pre>
|
<p>I have a Spring application built into a Docker image with the following command in <code>dockerfile</code></p>
<pre><code>CMD cd /opt/app/jar \
&& java -Dspring.config.location=file:/opt/app/config/ -Dspring.profiles.active=test -jar *.jar
</code></pre>
<p>When creating the app on OpenShift with </p>
<pre><code>oc new-app --name=test-app --docker-image=MyImage
</code></pre>
<p>and inspect the log with <code>oc logs <pod_name></code>, I see this error:</p>
<pre><code>java.lang.IllegalStateException: Logback configuration error detected:
ERROR in ch.qos.logback.core.rolling.RollingFileAppender[LOGFILE] - Failed to create parent directories for [/opt/app/jar/../log/debug.log]
ERROR in ch.qos.logback.core.rolling.RollingFileAppender[LOGFILE] - openFile(/opt/app/jar/../log/debug.log,true) call failed. java.io.FileNotFoundException: /opt/app/jar/../log/debug.log (No such file or directory)
</code></pre>
<p>However, when I run the image directly with <code>docker run -it <image_ID> /bin/bash</code>, and then execute the <code>java -jar</code> command above, it runs fine.</p>
<p>Here is the snippet from my <code>logback.xml</code> file:</p>
<pre><code><appender name="LOGFILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
<file>${user.dir}/../log/debug.log</file>
<encoder>
<pattern>${FILE_LOG_PATTERN}</pattern>
<charset>utf8</charset>
</encoder>
<rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
<fileNamePattern>${user.dir}/../log/archived/debug.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
<maxFileSize>50MB</maxFileSize>
<maxHistory>10</maxHistory>
</rollingPolicy>
</appender>
</code></pre>
<p>Could you please advise what I am missing?</p>
<p>Versions I use:</p>
<pre><code># oc version
oc v3.7.14
kubernetes v1.7.6+a08f5eeb62
features: Basic-Auth GSSAPI Kerberos SPNEGO
# docker version
Client:
Version: 1.12.6
API version: 1.24
Package version: docker-1.12.6-71.git3e8e77d.el7.x86_64
Go version: go1.8.3
Git commit: 3e8e77d/1.12.6
Built: Wed Dec 13 12:18:58 2017
OS/Arch: linux/amd64
Server:
Version: 1.12.6
API version: 1.24
Package version: docker-1.12.6-71.git3e8e77d.el7.x86_64
Go version: go1.8.3
Git commit: 3e8e77d/1.12.6
Built: Wed Dec 13 12:18:58 2017
OS/Arch: linux/amd64
</code></pre>
| <p>Is it possible that it is an issue with permissions?</p>
<p>Unless something has changed...for better security OpenShift by default runs containers using a user with random UID; that user is a member of the root group.</p>
<p>So, in addition to the command @Rico suggested to be added to Dockerfile, I would add:</p>
<pre><code>RUN mkdir -p /opt/app/log \
&& chown -R :root /opt/app/log \
&& chmod -R 0775 /opt/app/log
</code></pre>
<p>This <a href="https://apisimulator.io/run-docker-containers-non-root-users-random-user-ids/" rel="nofollow noreferrer">blog article</a> has more info about running Docker containers with non-root users or random user IDs. (Disclaimer: I am affiliated with that web site)</p>
|
<p>(While learning Kubernetes I never really found any good resources explaining this)</p>
<p>Scenario: <br>
I own mywebsite1.com and mywebsite2.com and I want to host them both inside a Kubernetes Cluster. <br><br></p>
<p>I deploy a generic cloud ingress controller according to the following website with 2 <br>kubectl apply -f < url > commands. (mandatory.yaml and generic ingress.yaml) <br>
<a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/</a></p>
<p>So the question is what does that architecture look like? and how does the data flow into the Cluster?</p>
| <p>I convert 2 certificates to 2 .key and 2 .crt files <br>
I use those files to make 2 TLS secrets (1 for each website so they'll have HTTPS enabled) <br> <br>
I create 2 Ingress Objects:</p>
<ul>
<li><p>one that says website1.com/, points to a service called website1fe, and references website1's HTTPS/TLS certificate secret. <br> (The website1fe service only listens on port 80, and forwards traffic to pods spawned by a website1fe deployment)</p>
</li>
<li><p>the other says website2.com/, points to a service called website2fe, and references website2's HTTPS/TLS certificate secret. <br> (The website2fe service only listens on port 80, and forwards traffic to pods spawned by a website2fe deployment)</p>
</li>
</ul>
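<p>For illustration, the first of those two Ingress objects might look roughly like the sketch below (names such as <code>website1fe</code> come from the description above, while the secret name <code>website1-tls</code> is an assumption):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: website1
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  tls:
  - hosts:
    - website1.com
    secretName: website1-tls   # the TLS secret created from website1's .crt/.key
  rules:
  - host: website1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: website1fe
          servicePort: 80
</code></pre>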
<p>I have a 3 Node Kubernetes Cluster that exists in a Private Subnet. <br>
They have IPs</p>
<pre><code> 10.1.1.10 10.1.1.11 10.1.1.12
</code></pre>
<p>When I ran the 2 <br>
kubectl apply -f < url > commands <br>
Those commands generated: <br></p>
<ul>
<li>A Kubernetes deployment containing pods running Nginx L7 LB software that declaratively configure themselves based on Ingress .yaml objects stored in etcd. Because the nginx L7 LB pods are self-configuring, they're referred to as Ingress Controller pods. (These nginx ingress controller pods listen on ports 80 and 443.)</li>
<li>A Kubernetes Service of type LoadBalancer. A Service of type LoadBalancer uses NodePorts behind the scenes (NodePorts are safe to use when the nodes have private IPs). The NodePorts are picked randomly from the range 30000 - 32767, and the cloud API automatically links the cloud LB to the correct random NodePort; alternatively, you can use a Service of type NodePort and gain the option to pick the NodePort explicitly. For clarity's sake I'll say the NodePort service is listening on ports 30080 and 30443 of every node in the cluster. A cloud LB gets auto-provisioned outside of the cluster with a public IP address (using default settings), and it auto-routes traffic to the NodePort that the Ingress Controller is exposed on. (An example of traffic flow: LB:443 --> NP:30443 --> IngressControllerPod:443 --> Grafana:3000)</li>
</ul>
<p>kubectl get svc --all-namespaces
<br> Gives the IPv4 IP address of the L4 LB (let's say it's the publicly routable IP 1.2.3.4)</p>
<p>Since I own both domains: I configure internet DNS so that website1.com and website2.com both point to 1.2.3.4</p>
<p>Note: The ingress controller is cloud provider aware so it automatically did the following reverse proxy/load balancing configuration: <br></p>
<pre><code>L4LB 1.2.3.4:80 --(LB between)--> 10.1.1.10:30080, 10.1.1.11:30080, 10.1.1.12:30080
L4LB 1.2.3.4:443 --(LB between)--> 10.1.1.10:30443, 10.1.1.11:30443, 10.1.1.12:30443
</code></pre>
<p>KubeProxy makes it so that requests on any node's port 30080 or 30443 get forwarded inside the cluster to the Nginx L7 LB/Ingress Controller Service, which then forwards the traffic to the L7 Nginx LB Pods. <br>
The L7 Nginx LB pods terminate* the HTTPS connection and forward traffic to website1.com and website2.com services, which are listening on unencrypted port 80. <br> (It's ok that it's unencrypted because we're in the cluster where no one would be sniffing the traffic.) (*note sometimes the Cloud LB terminates HTTPS and then forwards to ingress controller over cleartext port 80 but this isn't so bad b/c the clear text happens over private IP space) <br>
(The Nginx L7 LB knows which inner cluster service/website to forward to based on the L7(http://url) address that traffic is coming in on)</p>
<hr />
<p>Note a mistake to avoid:
Let's say that website1.com wants to access some resources that exist on website2.com</p>
<p>Well website2.com actually has 2 IP addresses and 2 DNS names.<br>
website2fe.default.svc.cluster.local <-- inner cluster resolvable DNS address <br>
website2.com <-- Externally resolving DNS address <br>
<br>
Instead of having website1 access resources via website2.com
You should have website1 access resources via website2fe.default.svc.cluster.local
(It's more efficient routing)</p>
|
<p>I'm working on a Python script to update ConfigMaps programmatically.</p>
<p>An example script is shown below. </p>
<pre><code>import requests
headers = {"Content-Type": "application/json-patch+json"}
configData = {
"apiVersion": "v1",
"kind": "ConfigMap",
"data": {
"test2.load": "testimtest"
},
"metadata": {
"name": "nginx2"
}
}
r = requests.patch("http://localhost:8080/api/v1/namespaces/default/configmaps/nginx2", json=configData)
</code></pre>
<p>The interesting side of this problem is that I have no problem with the POST and GET methods, but when I want to update a Kubernetes ConfigMap with the HTTP PATCH method I get </p>
<pre><code> "reason":"UnsupportedMediaType" //STATUS_CODE 415
</code></pre>
<p>How can I handle this problem? </p>
| <p>I suggest you use a Kubernetes client library, instead of making the raw HTTP calls yourself. Then you don't need to figure out the low-level connection stuff, as the library will abstract that away for you.</p>
<p>I've been using <a href="https://github.com/kelproject/pykube" rel="nofollow noreferrer">Pykube</a>, which provides a nice pythonic API, though it does appear to be abandoned now.</p>
<p>You can also use the official <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">client-python</a>, which is actively maintained. The library is a bit more clunky, as it's based on an autogenerated OpenAPI client, but it covers lots of use-cases like streaming results.</p>
|
<p>I'm having difficulties getting my Ingress controller running on Google Container Engine. I want to use an NGINX Ingress Controller with Basic Auth and use a reserved global static IP name (this can be created in the External IP addresses section of the Google Cloud Admin interface). When I use the gce class everything works fine except for the Basic Auth (which I think is not supported by the gce class), and when I try to use the nginx class the Ingress Controller launches but the IP address that I reserved in the Google Cloud Admin interface will not be attached to the Ingress Controller. Does anyone know how to get this working? Here is my config file:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webserver
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "myreservedipname"
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-realm: "Auth required"
    ingress.kubernetes.io/auth-secret: htpasswd
spec:
  tls:
  - secretName: tls
  backend:
    serviceName: webserver
    servicePort: 80
</code></pre>
| <p>I found a solution with helm.</p>
<pre><code>helm install --name nginx-ingress stable/nginx-ingress \
--set controller.service.loadBalancerIP=<YOUR_EXTERNAL_IP>
</code></pre>
<p>You should use the <code>external-ip</code> and not the name you gave with gcloud.</p>
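<p>Under the hood that Helm flag just sets <code>loadBalancerIP</code> on the controller's Service. If you manage the controller without Helm, the equivalent would look roughly like the sketch below (placeholder names and values; the selector must match your controller pods' labels):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
spec:
  type: LoadBalancer
  loadBalancerIP: &lt;YOUR_EXTERNAL_IP&gt;   # the reserved static IP, by address rather than by name
  ports:
  - name: http
    port: 80
  - name: https
    port: 443
  selector:
    app: nginx-ingress   # assumed label on the controller pods
</code></pre>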
<p>Also, in my case I also added <code>--set rbac.create=true</code> for permissions.</p>
|
<p>I am deploying a stateful application in K8S.</p>
<p>Before that, I'm trying to implement an example.</p>
<p>Before deploying MySQL in my cluster, I have created a PV and a PVC.</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
  name: mysql-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
</code></pre>
<p>At this point in time, I have not edit or created any new <code>StorageClass</code>.</p>
<p>I then go on to deploy the application using a <code>volumeMount</code> in my deployment.</p>
<pre><code>.
.
volumeMounts:
- name: mysql-persistent-storage
  mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
  persistentVolumeClaim:
    claimName: mysql-pv-claim
</code></pre>
<p>I bring up the application successfully, take it down (first the pod and then the deployment), bring the application back, and notice that my application data persists under <code>/var/lib/mysql</code>.</p>
<p>I later noticed that <code>/mnt/data</code> does NOT exist on my host machine. I am working in minikube.</p>
<p>I looked into the storage class and it seems to be using :</p>
<pre><code>StorageClass: manual
</code></pre>
<p>But if I check all my storage classes, I see only this one :</p>
<pre><code># kubectl describe storageclass
Name: standard
IsDefaultClass: Yes
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.beta.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile"},"name":"standard","namespace":""},"provisioner":"k8s.io/minikube-hostpath"}
,storageclass.beta.kubernetes.io/is-default-class=true
Provisioner: k8s.io/minikube-hostpath
Parameters: <none>
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: Immediate
Events: <none>
</code></pre>
<p>But the one that my-sql is using is not there.</p>
<p>I need help in understanding this, please. Where is the PV <code>/mnt/data</code>? </p>
<p>On my host machine, looking for <code>/mnt/data</code> gives: </p>
<pre><code># cd /mnt/data
cd: no such file or directory: /mnt/data
</code></pre>
| <p>Minikube is a Virtual Machine (VM) based all-in-one solution. So you have one node where the entire control plane lives, and this is also your only worker node:</p>
<pre><code>$ kubectl get node
NAME STATUS ROLES AGE VERSION
minikube Ready master 1d v1.10.0
</code></pre>
<p>Now, it's one node, a VM, that hosts your Kubernetes cluster. So all host-related actions have to be done on said VM:</p>
<pre><code>$ minikube ssh
_ _
_ _ ( ) ( )
___ ___ (_) ___ (_)| |/') _ _ | |_ __
/' _ ` _ `\| |/' _ `\| || , < ( ) ( )| '_`\ /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )( ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)
$ ls -al /mnt
total 4
drwxr-xr-x 3 root root 60 Sep 8 12:38 .
drwxr-xr-x 17 root root 460 Sep 8 12:38 ..
drwxr-xr-x 7 root root 4096 Sep 8 12:38 vda1
</code></pre>
<p>And here you have your <code>/mnt</code> directory.</p>
|
<p>We successfully integrated 'service-locator-dns' in Lagom and deployed it in Kubernetes. All services in the Lagom project are properly resolved with Kubernetes SRV requests.</p>
<p>But even statically defined (in build.sbt) non-Lagom projects also go through the <code>name-translators</code> and <code>srv-translators</code> and end up not resolving.</p>
<p>I have raised the issue for the same in Github <a href="https://github.com/lightbend/service-locator-dns/issues/29" rel="nofollow noreferrer">https://github.com/lightbend/service-locator-dns/issues/29</a></p>
<p>Can we avoid this with changes in name-translators itself or do we need to do any extra changes?</p>
<p>It will be very helpful for us if you please provide support or reference any documentation.</p>
<p>Log in kubernetes</p>
<p>log</p>
<pre><code>Resolving: premium-calculator
Translated premium-calculator to _http-lagom-api._tcp.premium-calculator.staging.svc.cluster.local
Resolving _http-lagom-api._tcp.premium-calculator.staging.svc.cluster.local (SRV)
Message to /10.114.0.10:53: Message(16,<QUERY,RD,SUCCESS>,List(Question(_http-lagom-api._tcp.premium-calculator.staging.svc.cluster.local,SRV,IN)),List(),List(),List())
Received message from /10.114.0.10:53: ByteString(0, 16, -127, -125, 0, 1, 0, 0, 0, 1, 0, 0, 15, 95, 104, 116, 116, 112, 45, 108, 97, 103, 111, 109, 45, 97, 112, 105, 4, 95, 116, 99, 112, 18, 112, 114, 101, 109, 105, 117, 109, 45, 99, 97, 108, 99, 117, 108, 97, 116, 111, 114, 7, 115, 116, 97, 103, 105, 110, 103, 3, 115, 118, 99, 7, 99, 108, 117, 115, 116, 101, 114, 5, 108, 111, 99, 97, 108, 0, 0, 33, 0, 1, 7, 99, 108, 117, 115, 116, 101, 114, 5, 108, 111, 99, 97, 108, 0, 0, 6)... and [76] more
Decoded: Message(16,<AN,QUERY,RD,RA,NAME_ERROR>,Vector(Question(_http-lagom-api._tcp.premium-calculator.staging.svc.cluster.local,SRV,IN)),Vector(),Vector(UnknownRecord(cluster.local,60,6,1,ByteString(2, 110, 115, 3, 100, 110, 115, 7, 99, 108, 117, 115, 116, 101, 114, 5, 108, 111, 99, 97, 108, 0, 10, 104, 111, 115, 116, 109, 97, 115, 116, 101, 114, 7, 99, 108, 117, 115, 116, 101, 114, 5, 108, 111, 99, 97, 108, 0, 90, -80, -107, 80, 0, 0, 112, -128, 0, 0, 28, 32, 0, 9, 58, -128, 0, 0, 0, 60))),Vector())
Resolved: Vector()
java.lang.IllegalStateException: Service premium-calculator was not found by service locator
</code></pre>
<p>service trait</p>
<pre><code>trait PremiumCalculator extends Service {
def getPremiums(channelName: String): ServiceCall[JsValue, JsValue]
override final def descriptor = {
import Service._
named("premium-calculator")
.withCalls(
restCall(Method.POST, "/api/v2/premium/:channelName", getPremiums _))
.withAutoAcl(true)
}
}
</code></pre>
<p>in build.sbt</p>
<pre><code>lagomUnmanagedServices in ThisBuild := Map(
"premium-calculator" -> "https://test.in",
)
</code></pre>
| <p>For locating a non-Lagom/third-party service in Lagom on Kubernetes, we have to use Lagom's service locator, like this:</p>
<pre><code>lagom.services {
"premium-calculator" = "https://test.in"
}
</code></pre>
<p>Also, we have to use <code>ConfigurationServiceLocator</code> to locate the service:</p>
<pre><code>if(environment.isProd()) {
bind(ServiceLocator.class).to(ConfigurationServiceLocator.class);
}
</code></pre>
<p>Here <code>ConfigurationServiceLocator</code> locates the service via configuration (as the name suggests).</p>
<p>I hope this helps!</p>
|
<p>I have a docker image that I run with </p>
<pre><code>docker run --name test -h test -p 9043:9043 -p 9443:9443 -d ibmcom/websphere-traditional:install
</code></pre>
<p>I am trying to put it into a Kubernetes deployment file and I have this: </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: websphere
spec:
replicas: 1
template:
metadata:
labels:
app: websphere
spec:
containers:
- name: websphere
image: ibmcom/websphere-traditional:install
ports:
- containerPort: 9443
resources:
requests:
memory: 500Mi
cpu: 0.5
limits:
memory: 500Mi
cpu: 0.5
imagePullPolicy: Always
</code></pre>
<p>my service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: websphere
labels:
app: websphere
spec:
type: NodePort #Exposes the service as a node ports
ports:
- port: 9443
protocol: TCP
targetPort: 9443
selector:
app: websphere
</code></pre>
<p>May I have guidance on how to map 2 ports in my deployment file?</p>
| <p>You can add as many ports as you need.</p>
<p>Here your <code>deployment.yml</code>:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: websphere
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: websphere
    spec:
      containers:
      - name: websphere
        image: ibmcom/websphere-traditional:install
        ports:
        - containerPort: 9043
        - containerPort: 9443
        resources:
          requests:
            memory: 500Mi
            cpu: 0.5
          limits:
            memory: 500Mi
            cpu: 0.5
        imagePullPolicy: IfNotPresent
</code></pre>
<p>Here your <code>service.yml</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: websphere
  labels:
    app: websphere
spec:
  type: NodePort #Exposes the service as a node ports
  ports:
  - port: 9043
    name: hello
    protocol: TCP
    targetPort: 9043
    nodePort: 30043
  - port: 9443
    name: privet
    protocol: TCP
    targetPort: 9443
    nodePort: 30443
  selector:
    app: websphere
</code></pre>
<p>Check on your kubernetes <code>api-server</code> configuration what is the range for nodePorts (usually <code>30000-32767</code>, but it's configurable).</p>
<p><strong>EDIT</strong></p>
<p>If I remove the <code>resources</code> section from deployment.yml, it starts correctly (after about 5 mins).
Here is a snippet of the logs:</p>
<blockquote>
<p>[9/10/18 8:08:06:004 UTC] 00000051 webcontainer I
com.ibm.ws.webcontainer.VirtualHostImpl addWebApplication SRVE0250I:
Web Module Default Web Application has been bound to
default_host[<em>:9080,</em>:80,<em>:9443,</em>:506 0,<em>:5061,</em>:443].</p>
</blockquote>
<p>Problems come connecting to it (I use ingress with traefik), because of certificates (I suppose):</p>
<blockquote>
<p>[9/10/18 10:15:08:413 UTC] 000000a4 SSLHandshakeE E SSLC0008E:
Unable to initialize SSL connection. Unauthorized access was denied
or security settings have expired. Exception is
javax.net.ssl.SSLException: Unrecognized SSL message, plaintext
connection?</p>
</blockquote>
<p>To solve that (I didn't go further) this may help: <a href="https://stackoverflow.com/questions/37025096/sslhandshakee-e-sslc0008e-unable-to-initialize-ssl-connection-unauthorized-acc">SSLHandshakeE E SSLC0008E: Unable to initialize SSL connection. Unauthorized access was denied or security settings have expired</a></p>
<p>Trying to connect with <code>port-forward</code>:</p>
<p><a href="https://i.stack.imgur.com/8G1K2.png" rel="noreferrer"><img src="https://i.stack.imgur.com/8G1K2.png" alt="enter image description here"></a></p>
<p>and using dthe browser to connect, I land on this page:</p>
<p><a href="https://i.stack.imgur.com/qYGXW.png" rel="noreferrer"><img src="https://i.stack.imgur.com/qYGXW.png" alt="enter image description here"></a></p>
|
<p>Could someone explain the benefits/issues with hosting a database in Kubernetes via a persistent volume claim combined with a storage volume over using an actual cloud database resource? </p>
| <p>It's essentially a trade-off: convenience vs control. Take a concrete example: let's say you pay Amazon money to use <a href="https://aws.amazon.com/athena/" rel="nofollow noreferrer">Athena</a>, which is really just a nicely packaged version of <a href="https://prestodb.io/" rel="nofollow noreferrer">Facebook Presto</a> which AWS kindly operates for you in exchange for $$$. You could run Presto on EKS yourself, but why would you. </p>
<p>Now, let's say you want to or need to use Apache Drill or Apache Impala. Amazon doesn't offer it. Nor does any of the other big public cloud providers at time of writing, as far as I know.</p>
<p>Another thought: what if you want to migrate off of AWS? Your data has gravity as well.</p>
|
<p>I'm getting the node IP address instead of the client IP. Is it possible to get the client IP with a service of type <code>LoadBalancer</code>? Or will I need to use a ingress controller?</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: app-svc
labels:
name: app-svc
environment: dev
spec:
type: LoadBalancer
loadBalancerIP: XXX.XXX.XXX.XXX
ports:
- name: http-port
port: 80
targetPort: 80
protocol: TCP
selector:
name: app-deploy
</code></pre>
| <p>You do not need any Ingress controller. However it is required to set the value of the <code>spec.externalTrafficPolicy</code> Service field to "Local" (the default is "Cluster") in Microsoft Azure.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  ...
</code></pre>
<p>See <a href="https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-loadbalancer" rel="noreferrer">Using source IP</a>.</p>
|
<p>I'm new with service mesh thing, so I did some PoC of basic implementation of microservices in kubernetes with istio. </p>
<p>I have 2 Deployments which are supposed to talk to each other using gRPC. When I call the grpc server it returns the error <code>rpc error: code = Internal desc = server closed the stream without sending trailers</code></p>
<p>This is my grpc Service config: </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: grpcserver
  labels:
    app: grpcserver
spec:
  ports:
  - port: 8080
    name: http
  selector:
    app: grpcserver
</code></pre>
| <p>Quoting Istio <a href="https://istio.io/docs/setup/kubernetes/spec-requirements/" rel="nofollow noreferrer">docs</a>, </p>
<blockquote>
<p>Service ports must be named. The port names must be of the form {protocol}[-{suffix}] with http, http2, grpc, mongo, or redis as the protocol in order to take advantage of Istio’s routing features.</p>
</blockquote>
<p>So the Service configuration should be:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: grpcserver
  labels:
    app: grpcserver
spec:
  ports:
  - port: 8080
    name: grpc
  selector:
    app: grpcserver
</code></pre>
|
<p>Currently I use docker-compose to arrange my application, which consists of 3 docker images: a postgresql database and 2 wildfly application servers (Frontend-ui, backend).</p>
<p>My <strong>docker-compose.yml</strong> looks like this:</p>
<pre><code>version: '3.0'
services:
my-webgui-service:
image: test/mywebgui
ports:
- "18081:8080"
links:
- my-app-service
my-app-service:
image: test/myapp
ports:
- "18080:8080"
- "29990:9990"
links:
- db-service
db-service:
image: test/postgres
ports:
- "15432:5432
</code></pre>
<p>Now, I would like to implement the same thing via Kubernetes. </p>
<p>Is it possible to arrange this in a single yaml file that contains the configuration for the services, deployments and pods?
I thought it would be easier to manage automated deployments when not having separated yml files.</p>
<p>Is this a best practice? </p>
<p>Best regards, Shane</p>
| <p>Yes, it's possible: simply separate the different resources such as deployments, services, etc. with <code>---</code>. Concerning whether it's good practice or not: that's rather a matter of taste. If you have everything in one file it's more self-contained, but for <code>kubectl apply -f</code> it doesn't really matter, since it operates on directories as well.</p>
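<p>A minimal sketch of what such a combined file could look like for the database piece (the image name and port come from the compose file above; everything else is assumed):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: db-service
spec:
  ports:
  - port: 5432
  selector:
    app: db-service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: db-service
  template:
    metadata:
      labels:
        app: db-service
    spec:
      containers:
      - name: postgres
        image: test/postgres
        ports:
        - containerPort: 5432
</code></pre>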
|
<p>I am using helm to deploy StatefulSet, below is yaml</p>
<pre><code>---
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
name: {{ .Values.database.mongo.storageClassName }}
labels:
for: for-mongo-statefulset
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-ssd
reclaimPolicy: Retain
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: mongo
spec:
serviceName: {{ .Values.database.mongo.serviceName }}
replicas: {{ .Values.database.mongo.replicas }}
template:
metadata:
labels:
role: mongo
environment: prod
spec:
serviceAccountName: {{ .Values.serviceAccount }}
terminationGracePeriodSeconds: 10
containers:
- name: mongo
image: mongo
command:
- mongod
- "--bind_ip"
- 0.0.0.0
- "--replSet"
- {{ .Values.database.mongo.replicaSet }}
- "--smallfiles"
- "--noprealloc"
ports:
- containerPort: {{ .Values.database.mongo.port }}
volumeMounts:
- name: {{ .Values.database.mongo.storageName }}
mountPath: /data/db
- name: mongo-sidecar
image: cvallance/mongo-k8s-sidecar
env:
- name: MONGO_SIDECAR_POD_LABELS
value: "role=mongo,environment=prod"
- name: KUBERNETES_MONGO_SERVICE_NAME
value: {{ .Values.database.mongo.serviceName }}
volumeClaimTemplates:
- metadata:
name: {{ .Values.database.mongo.storageName }}
spec:
storageClassName: {{ .Values.database.mongo.storageClassName }}
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 100Gi
</code></pre>
<p>On <code>helm install . -n release-name</code> it creates <code>StorageClass</code>, <code>PersistentVolume</code> & <code>PersistentVolumeClaim</code>.</p>
<p>If I delete the release with <code>helm delete release-name --purge</code>, it keeps the PV and PVC, which is fine. But it deletes the <code>StorageClass</code>, even though I have specified <code>reclaimPolicy: Retain</code> on the SC.</p>
<p>Is this expected behaviour? </p>
<p><strong>Helm version</strong></p>
<pre><code>Client: v2.10.0+g9ad53aa
Server: v2.10.0+g9ad53aa
</code></pre>
<p><strong>Kubernetes version</strong></p>
<pre><code>Client Version: v1.11.1
Server Version: v1.9.7-gke.5
</code></pre>
<p><strong>Update</strong></p>
<p>I assumed <code>reclaimPolicy</code> applied to both the StorageClass and the PV/PVC. Thanks to @Pablo for clearing up my understanding regarding <code>reclaimPolicy</code>.</p>
<blockquote>
<p><strong><a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#reclaim-policy" rel="nofollow noreferrer">Reclaim Policy</a></strong></p>
<p>Persistent Volumes that are dynamically created by a
storage class will have the reclaim policy specified in the
reclaimPolicy field of the class, which can be either Delete or
Retain. If no <code>reclaimPolicy</code> is specified when a StorageClass object is
created, it will default to Delete</p>
</blockquote>
<p>Is there anything similar to <code>reclaimPolicy</code> which will tell helm/kubernetes to not delete <code>StorageClass</code> when performing <code>helm delete release-name --purge</code>?</p>
| <p>The reclaim policy listed in the StorageClass object is used for the persistent volumes, not the storage class itself, meaning that the PVs and PVCs that are created using that storage class will inherit the reclaim policy set in the storage class.</p>
<p>You can find more info on that here: <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/storage-classes/</a></p>
|
<p>Is it possible to tell Kubernetes to just throw away my current pod and recreate it again every 48 hours? Is there some type of a scheduler on Google Cloud Kubernetes? Or can I just configure my deployment this way?
I have a Node.js application containerized with Docker running inside of a Kubernetes cluster on Google Cloud Platform.
Thank you in advance!</p>
| <p>The true "Kubernetes way" to resolve this issue is to design a <code>ReadinessProbe</code>/<code>LivenessProbe</code> for your app's deployment/statefulset/pod. Once your <code>Pod</code> goes down, correct probes will handle it and your <code>Pod</code> will be recreated fully automatically.</p>
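<p>For illustration only, a liveness probe in the deployment's container spec could look like the sketch below (the <code>/healthz</code> endpoint and the port are assumptions about your Node.js app):</p>
<pre><code>livenessProbe:
  httpGet:
    path: /healthz   # assumed health endpoint exposed by your app
    port: 3000       # assumed container port
  initialDelaySeconds: 15
  periodSeconds: 20
  failureThreshold: 3
</code></pre>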
<p>P.S.: you are the one who knows your app better than anyone. Try to work out what causes the "every 48h" issues and then define the right probes. Good luck!</p>
<p>Link: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/</a></p>
|
<p>my pvc.yaml</p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: database-disk
  labels:
    stage: production
    name: database
    app: mysql
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 2Gi
</code></pre>
<p>When I ran <code>kubectl apply -f pvc.yaml</code> I got the following error:
<code>Normal FailedBinding 12h (x83 over 13h) persistentvolume-controller no persistent volumes available for this claim and no storage class is set</code></p>
<p>The same PVC worked fine on "GKE" (Google Kubernetes Engine) but is failing in my local cluster using <a href="https://github.com/ubuntu/microk8s" rel="noreferrer">microk8s</a>.</p>
| <p>Did you create any PV in your cluster? </p>
<p>PVs and StorageClasses on local clusters have to be created manually by the cluster admin.</p>
<p>Check out <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/" rel="noreferrer">Kubernetes documentation</a> for the details:</p>
<blockquote>
<ol>
<li><p>A cluster administrator creates a PersistentVolume that is backed by physical storage. The administrator does not associate the volume
with any Pod.</p></li>
<li><p>A cluster user creates a PersistentVolumeClaim, which gets automatically bound to a suitable PersistentVolume.</p></li>
<li><p>The user creates a Pod that uses the PersistentVolumeClaim as storage.</p></li>
</ol>
</blockquote>
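<p>For illustration, a hostPath PersistentVolume that could satisfy the claim above might look like the sketch below (the name and path are assumptions; since the PVC sets no <code>storageClassName</code>, the PV here leaves the class empty as well):</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
  name: database-disk-pv     # assumed name
spec:
  storageClassName: ""       # empty class so it can bind to a claim with no storageClassName
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/database"    # assumed path on the node
</code></pre>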
|
<p>So this is my current setup. </p>
<p>I have a k8s cluster with the nginx ingress controller installed. I installed it using helm. </p>
<p>So I have a simple apple service as below:</p>
<pre><code>kind: Pod
apiVersion: v1
metadata:
name: apple-app
labels:
app: apple
spec:
containers:
- name: apple-app
image: hashicorp/http-echo
args:
- "-text=apple"
---
kind: Service
apiVersion: v1
metadata:
name: apple-service
spec:
selector:
app: apple
ports:
- port: 5678 # Default port for image
</code></pre>
<p>and then I did a kubectl apply -f apples.yaml</p>
<p>Now I have an ingress.yaml as below. </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: example-ingress
annotations:
ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /apple
backend:
serviceName: apple-service
servicePort: 5678
</code></pre>
<p>and then I ran kubectl apply -f ingress.yaml</p>
<p>My ingress controller doesn't have an external IP address. </p>
<p>But even without the external IP, I did a</p>
<pre><code>kubectl exec -it nginxdeploy-nginx-ingress-controller-5d6ddbb677-774xc /bin/bash
</code></pre>
<p>and tried doing a curl -kL <a href="http://localhost/apples" rel="nofollow noreferrer">http://localhost/apples</a></p>
<p>and it's giving me a 503 error. </p>
<p>Can anybody help with this?</p>
| <p>I've tested your configuration, and it seems to be working fine to me.</p>
<p>Pod responds fine:</p>
<pre><code>$ kubectl describe pod apple-app
Name: apple-app
Namespace: default
Node: kube-helm/10.156.0.2
Start Time: Mon, 10 Sep 2018 11:53:57 +0000
Labels: app=apple
Annotations: <none>
Status: Running
IP: 192.168.73.73
...
$ curl http://192.168.73.73:5678
apple
</code></pre>
<p>Service responds fine:</p>
<pre><code>$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
apple-service ClusterIP 10.111.93.194 <none> 5678/TCP 1m
$ curl http://10.111.93.194:5678
apple
</code></pre>
<p>Ingress also responds fine, but by default it redirects http to https:</p>
<pre><code>$ kubectl exec -it nginx-ingress-controller-6c9fcdf8d9-ggrcs -n ingress-nginx /bin/bash
www-data@nginx-ingress-controller-6c9fcdf8d9-ggrcs:/etc/nginx$ curl http://localhost/apple
<html>
<head><title>308 Permanent Redirect</title></head>
<body bgcolor="white">
<center><h1>308 Permanent Redirect</h1></center>
<hr><center>nginx/1.13.12</center>
</body>
</html>
www-data@nginx-ingress-controller-6c9fcdf8d9-ggrcs:/etc/nginx$ curl -k https://localhost/apple
apple
</code></pre>
<p>If you check the nginx configuration in controller pod, you will see that redirect configuration for /apple location:</p>
<pre><code>www-data@nginx-ingress-controller-6c9fcdf8d9-ggrcs:/etc/nginx$ more /etc/nginx/nginx.conf
...
location /apple {
set $namespace "default";
set $ingress_name "example-ingress";
set $service_name "apple-service";
set $service_port "5678";
set $location_path "/apple";
rewrite_by_lua_block {
}
log_by_lua_block {
monitor.call()
}
if ($scheme = https) {
more_set_headers "Strict-Transport-Security: max-age=1572
4800; includeSubDomains";
}
port_in_redirect off;
set $proxy_upstream_name "default-apple-service-5678";
# enforce ssl on server side
if ($redirect_to_https) {
return 308 https://$best_http_host$request_uri;
}
client_max_body_size "1m";
proxy_set_header Host $best_http_host;
# Pass the extracted client certificate to the backend
# Allow websocket connections
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
proxy_set_header X-Request-ID $req_id;
proxy_set_header X-Real-IP $the_real_ip;
proxy_set_header X-Forwarded-For $the_real_ip;
proxy_set_header X-Forwarded-Host $best_http_host;
proxy_set_header X-Forwarded-Port $pass_port;
proxy_set_header X-Forwarded-Proto $pass_access_scheme;
proxy_set_header X-Original-URI $request_uri;
proxy_set_header X-Scheme $pass_access_scheme;
# Pass the original X-Forwarded-For
proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
# mitigate HTTPoxy Vulnerability
# https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
proxy_set_header Proxy "";
# Custom headers to proxied server
proxy_connect_timeout 5s;
proxy_send_timeout 60s;
proxy_read_timeout 60s;
proxy_buffering "off";
proxy_buffer_size "4k";
proxy_buffers 4 "4k";
proxy_request_buffering "on";
proxy_http_version 1.1;
proxy_cookie_domain off;
proxy_cookie_path off;
# In case of errors try the next upstream server before returning an error
proxy_next_upstream error timeout;
proxy_next_upstream_tries 3;
proxy_pass http://default-apple-service-5678;
proxy_redirect off;
}
</code></pre>
<p>You can disable this default behavior by adding <a href="https://github.com/kubernetes/ingress-nginx/issues/1567#issuecomment-364649925" rel="nofollow noreferrer">annotations</a>:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /apple
        backend:
          serviceName: apple-service
          servicePort: 5678
www-data@nginx-ingress-controller-6c9fcdf8d9-ggrcs:/etc/nginx$ curl http://localhost/apple
apple
</code></pre>
|
<p>What does AWS' Elastic Kubernetes Service (EKS) <strong>do exactly</strong> if so much configuration is needed in CloudFormation which is (yet) another AWS service?</p>
<p>I followed the AWS EKS Getting Started in the docs at (<a href="https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/eks-ug.pdf</a>) where it seems CloudFormation knowledge is <strong>heavily</strong> required to run EKS.<br>
Am I mistaken or something? </p>
<p>So in addition to learning the Kubernetes .yaml manifest definitions, to run k8s on EKS, AWS expects you to learn their CloudFormation .yaml configuration manifests as well (which are all PascalCase as opposed to k8s' camelCase, I might add)? </p>
<p>I understand that EKS does some management of latest version of k8s and control plane, and is "secure by default" but other than that? </p>
<p>Why wouldn't I just run k8s on AWS using kops then, and deal with the slightly outdated k8s versions?<br>
Or am I supposed to do EKS + CloudFormation + kops at which point GKE looks like a really tempting alternative? </p>
<p>Update:<br>
After looking into EKS in detail and seeing how reliant it is on CloudFormation manifests, at this point I'm really thinking EKS is just a thin wrapper over CloudFormation.<br>
Likely a business response to the alarming popularity of k8s and GKE in general, with no substance to back the service.<br>
Hopefully this helps save the time of anyone evaluating the half-baked service that is EKS. </p>
| <p>To run Kubernetes on AWS you have basically 2 options:</p>
<ul>
<li>using kops, it will create Master nodes + workers node under the hood, in plain EC2 machines</li>
<li>EKS + a CloudFormation workers stack (you can also use Terraform as an alternative to deploy the workers, or eksctl, which will create both the EKS cluster and the workers. I recommend you follow <a href="https://eksworkshop.com/" rel="noreferrer">this workshop</a>)</li>
</ul>
<p>EKS alone provides only the master nodes of a kubernetes cluster, in a highly available setup. You still need to add the worker nodes, where your containers will be created.</p>
<p>I tried both kops and EKS + workers, and I ended up using EKS, because I found it easier to set up and maintain, and more fault-tolerant. </p>
|
<p>I am using a Kubernetes Service of type ClusterIP, which exposes a deployment.
In my container I want to use the Service IP (cluster IP). Is there any way I can get the IP address inside the Pod/container? </p>
<p>Is it possible to get the cluster IP from Service name?</p>
| <p>Yes, via the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#discovering-services" rel="nofollow noreferrer">environment-level service discovery</a> mechanism. Note, however, that any service that you want to access like this must be created <strong>before</strong> the pod itself has been launched; otherwise the environment variables are not populated.</p>
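<p>For example, for a Service named <code>my-service</code> (a hypothetical name) exposing port 80, containers started after the Service exists see variables along these lines (the IP value here is made up):</p>
<pre><code>MY_SERVICE_SERVICE_HOST=10.0.170.92
MY_SERVICE_SERVICE_PORT=80
</code></pre>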
|
<p>I have a server running on Kubernetes to handle hourly processing jobs. Thinking of using a service to expose the pods, and using an (external) cron job to hit the load balancer so that kubernetes can autoscale to handle the higher load as required. However in implementation, if the cron job sends, say, 100 requests at the same time while there's only 1 pod, all the traffic will go to that pod whereas subsequently spun up pods will still not have any traffic to handle. </p>
<p>How can I get around this issue? Is it possible for me to scale up the pods first using a cron job before making the requests? Or should I make requests with a time delay so as to give time for the pods to get spun up? Or other suggestions are also welcome!</p>
| <p>If you're looking for serverless-style instant scale-up, something like <a href="https://github.com/knative/" rel="nofollow noreferrer">https://github.com/knative/</a> might be something you can use on top of Kubernetes/GKE.</p>
<p>Other than that, the only way to scale up pods on Kubernetes today is the Horizontal Pod Autoscaler, which looks at CPU/memory averages (and if you're on GKE, it can use Custom Stackdriver Metrics that you can expose from your app using Prometheus etc.).</p>
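<p>A minimal HPA sketch (the deployment name <code>worker</code> and the numbers are placeholders):</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 60
</code></pre>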
|
<p>As far as I can see, GKE seems to be slightly more complex for configuring and deploying an application (using Kubernetes manifest files directly, Helm charts, or something else?). Furthermore, it does not seem to have better pod failure detection or better performance.</p>
<p>Why should we use GKE whereas there is GAE which only needs dispatch.yaml, app.yaml files and gcloud cli to deploy ?</p>
<p>Is there any technical or financial feedback against GAE ?</p>
<p>Finally, how can we make a choice between GKE and GAE ? What whould be the reason to not choose GAE ?</p>
| <p>Google Kubernetes Engine(GKE) is a cluster manager and orchestration system for running your Docker containers. Google App Engine(GAE) is basically google managed containers. </p>
<p>They both try to provide you similar main benefits(scalability, redundancy, rollouts, rollbacks, etc.). <strong>The main difference is in their philosophy: GKE tries to provide you very fine grained control over everything about your cluster. GAE tries to get you run your apps with as little configuration/management as possible.</strong></p>
<p>With GKE you have more control, but also more work for you. You need to configure the network, security, software updates etc. With GAE you don't need to worry about many of these things, and you can focus on your app.</p>
|
<p>When I am trying to test the configuration of kubectl</p>
<pre><code>kubectl get svc
</code></pre>
<p>I am getting this:</p>
<blockquote>
<p>error: the server doesn't have a resource type "svc"</p>
</blockquote>
<p>when I try this command</p>
<pre><code>kubectl get services
</code></pre>
<p>I am getting this error:</p>
<blockquote>
<p>The connection to the server localhost:8080 was refused - did you specify the right host or port?</p>
</blockquote>
<p>and I am following this user guide to deploy a Kubernetes application on my Mac:</p>
<p><a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#eks-create-cluster" rel="noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#eks-create-cluster</a></p>
<pre><code>Admins-MacBook-Pro:~ Harshin$ kubectl version --short --client
Client Version: v1.10.3
</code></pre>
| <p>Make a copy of the config file and resolve this issue:</p>
<pre><code>sudo mkdir ~/.kube
sudo cp /etc/kubernetes/admin.conf ~/.kube/
cd ~/.kube
sudo mv admin.conf config
sudo service kubelet restart
</code></pre>
|
<p>I've set up Prometheus to collect metrics from my pods and nodes.
I've also set up the Prometheus custom metrics adapter.</p>
<p>How can I use the metrics provided by Prometheus to autoscale my pods? I tried to google it, but I only find examples of custom pods that provide their metrics on their /metrics URL. I would like to be able to autoscale any of my pods that already have a Prometheus metric, based on CPU or memory usage.</p>
<p>I can visualize all the metrics in Grafana for all my pods and nodes, but I can't find a way to use them with autoscaling.</p>
| <p>You need to create an HPA (Horizontal Pod Autoscaler)</p>
<p>More info <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-multiple-metrics" rel="nofollow noreferrer">here</a></p>
<p><a href="https://github.com/stefanprodan/k8s-prom-hpa" rel="nofollow noreferrer">This is a good tool</a> showing you how to use an HPA with custom metrics either using a the K8s metrics server or Prometheus.</p>
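<p>As a rough sketch, an HPA driven by a custom metric served through the adapter could look like this (the metric name <code>http_requests</code>, the target value and the deployment name <code>myapp</code> are assumptions; they depend on what your adapter actually exposes):</p>
<pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: http_requests       # assumed custom metric exposed via the adapter
      targetAverageValue: 100
</code></pre>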
|
<p>I am using <a href="https://github.com/kubernetes-sigs/kubebuilder" rel="nofollow noreferrer">kubebuilder</a> to create kubernetes operator project. After running the project init command described in <a href="https://book.kubebuilder.io/quick_start.html" rel="nofollow noreferrer">quickstart guide</a></p>
<pre><code>kubebuilder init --domain k8s.io --license apache2 --owner "The Kubernetes Authors"
</code></pre>
<p><code>dep ensure</code> returns the error log given below.</p>
<pre><code>Solving failure: No versions of k8s.io/client-go met constraints:
v8.0.0: Could not introduce k8s.io/[email protected], as it is not allowed by constraints from the following projects:
kubernetes-1.10.1 from (root)
kubernetes-1.10.1 from sigs.k8s.io/controller-runtime@master
v7.0.0: Could not introduce k8s.io/[email protected], as it is not allowed by constraints from the following projects:
kubernetes-1.10.1 from (root)
kubernetes-1.10.1 from sigs.k8s.io/controller-runtime@master
v6.0.0: Could not introduce k8s.io/[email protected], as it is not allowed by constraints from the following projects:
kubernetes-1.10.1 from (root)
kubernetes-1.10.1 from sigs.k8s.io/controller-runtime@master
</code></pre>
| <p>Try using the latest <code>kubebuilder</code> from <a href="https://github.com/kubernetes-sigs/kubebuilder/releases" rel="nofollow noreferrer">here</a>. It's likely that the dependencies for the version in the quick start are out of date.</p>
<p>It works fine for me with <code>v1.0.3</code></p>
<pre><code>~/go/src/github.com $ kubebuilder init --domain k8s.io --license apache2 --owner "The Kubernetes Authors"
Run `dep ensure` to fetch dependencies (Recommended) [y/n]?
y
dep ensure
Running make...
make
go generate ./pkg/... ./cmd/...
go fmt ./pkg/... ./cmd/...
go vet ./pkg/... ./cmd/...
go run vendor/sigs.k8s.io/controller-tools/cmd/controller-gen/main.go all
CRD manifests generated under '/root/go/src/github.com/config/crds'
RBAC manifests generated under '/root/go/src/github.com/config/rbac'
go test ./pkg/... ./cmd/... -coverprofile cover.out
? github.com/pkg/apis [no test files]
? github.com/pkg/controller [no test files]
ok github.com/pkg/errors 0.207s coverage: 100.0% of statements
? github.com/cmd/manager [no test files]
go build -o bin/manager github.com/cmd/manager
Next: Define a resource with:
$ kubebuilder create api
</code></pre>
|
<p>This is now the fourth time I set up a kubernetes cluster. It's always the same setup: basic k8s, traefik as reverse proxy, dashboard, prometheus, elk-stack. But this time something with the traefik deployment is odd...</p>
<p>So for all other clusters I just deployed my default setup with some rbac entries, a config map containing the toml file, the actual deployment, a service and the web-ui:</p>
<p>RBAC:</p>
<pre><code>---
apiVersion: v1
kind: ServiceAccount
metadata:
name: traefik-ingress-controller
namespace: infra
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
rules:
- apiGroups:
- ""
resources:
- services
- endpoints
- secrets
verbs:
- get
- list
- watch
- apiGroups:
- extensions
resources:
- ingresses
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: traefik-ingress-controller
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
name: traefik-ingress-controller
namespace: infra
</code></pre>
<p>ConfigMap:</p>
<pre><code>---
apiVersion: v1
kind: ConfigMap
metadata:
name: traefik-toml
labels:
name: traefik-toml
namespace: infra
data:
traefik.toml: |-
defaultEntryPoints = ["http","https"]
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.http.redirect]
entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[[entryPoints.https.tls.certificates]]
CertFile = "/ssl/external/<EXTERNAL_URL>.crt"
KeyFile = "/ssl/external/<EXTERNAL_URL>.key"
[[entryPoints.https.tls.certificates]]
CertFile = "/ssl/internal/<INTERNAL_URL>.crt"
KeyFile = "/ssl/internal/<INTERNAL_URL>.key"
[accessLog]
</code></pre>
<p>Deployment</p>
<pre><code>---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: traefik-ingress-controller
namespace: infra
labels:
k8s-app: traefik-ingress-lb
spec:
replicas: 1
selector:
matchLabels:
k8s-app: traefik-ingress-lb
template:
metadata:
labels:
k8s-app: traefik-ingress-lb
name: traefik-ingress-lb
spec:
serviceAccountName: traefik-ingress-controller
terminationGracePeriodSeconds: 60
containers:
- image: traefik:v1.6.5
name: traefik-ingress-lb
volumeMounts:
- mountPath: /ssl/external
name: ssl-external
- mountPath: /ssl/internal
name: ssl-internal
- name: traefik-toml
subPath: traefik.toml
mountPath: /config/traefik.toml
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
- name: admin
containerPort: 8080
args:
- --configfile=/config/traefik.toml
- --api
- --kubernetes
- --logLevel=INFO
volumes:
- name: ssl-external
secret:
secretName: <EXTERNAL_URL>.cert
- name: ssl-internal
secret:
secretName: <INTERNAL_URL>.cert
- name: traefik-toml
configMap:
name: traefik-toml
</code></pre>
<p>Service:</p>
<pre><code>---
kind: Service
apiVersion: v1
metadata:
name: traefik-ingress-service
namespace: infra
spec:
selector:
k8s-app: traefik-ingress-lb
ports:
- protocol: TCP
port: 80
name: web
- protocol: TCP
port: 443
name: sweb
externalIPs:
- <WORKER IP 1>
- <WORKER IP 2>
</code></pre>
<p>This works nicely for the other clusters, but on the new one (where I did not set up Kubernetes myself), there is the following error in the logs every 30 seconds (the <em>Error checking new version</em> message not that often!):</p>
<pre><code>E0827 14:29:49.566294 1 reflector.go:205] github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0827 14:29:49.572633 1 reflector.go:205] github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
E0827 14:29:49.592844 1 reflector.go:205] github.com/containous/traefik/vendor/k8s.io/client-go/informers/factory.go:86: Failed to list *v1beta1.Ingress: Get https://10.96.0.1:443/apis/extensions/v1beta1/ingresses?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
time="2018-08-27T14:30:00Z" level=warning msg="Error checking new version: Get https://update.traefik.io/repos/containous/traefik/releases: dial tcp: i/o timeout"
</code></pre>
<p>Has someone any idea? Is this a known issue? I could not find any known problems on this topic..</p>
<p>Thanks in advance!</p>
| <p>I managed to fix the problem:</p>
<p>The problem was a faulty iptables FORWARD policy, that is set by newer docker engines: <a href="https://github.com/moby/moby/issues/35777" rel="nofollow noreferrer">https://github.com/moby/moby/issues/35777</a></p>
<p>Currently we have a workaround that keeps setting the policy back to ACCEPT.</p>
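<p>In case it helps anyone, the workaround is essentially just resetting the policy on each node (we re-apply it periodically, e.g. from cron):</p>
<pre><code>sudo iptables -P FORWARD ACCEPT
</code></pre>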
<p>If we have a <em>real</em> fix I will hopefully remember to come back here and post it :)</p>
|
<p>I am trying to deploy minio in kubernetes using helm stable charts,
and when I try to check the status of the release </p>
<ul>
<li>helm status minio</li>
</ul>
<p>the pod desired capacity is 4, but the current capacity is 0.
I tried to look at the journalctl logs for any messages from kubelet, but found none.
I have attached all the helm charts; can someone please point out what I am doing wrong?</p>
<pre><code>---
# Source: minio/templates/secrets.yaml
apiVersion: v1
kind: Secret
metadata:
name: RELEASE-NAME-minio
labels:
app: minio
chart: minio-1.7.0
release: RELEASE-NAME
heritage: Tiller
type: Opaque
data:
accesskey: RFJMVEFEQU1DRjNUQTVVTVhOMDY=
secretkey: bHQwWk9zWmp5MFpvMmxXN3gxeHlFWmF5bXNPUkpLM1VTb3VqeEdrdw==
---
# Source: minio/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: RELEASE-NAME-minio
labels:
app: minio
chart: minio-1.7.0
release: RELEASE-NAME
heritage: Tiller
data:
initialize: |-
#!/bin/sh
set -e ; # Have script exit in the event of a failed command.
# connectToMinio
# Use a check-sleep-check loop to wait for Minio service to be available
connectToMinio() {
ATTEMPTS=0 ; LIMIT=29 ; # Allow 30 attempts
set -e ; # fail if we can't read the keys.
ACCESS=$(cat /config/accesskey) ; SECRET=$(cat /config/secretkey) ;
set +e ; # The connections to minio are allowed to fail.
echo "Connecting to Minio server: http://$MINIO_ENDPOINT:$MINIO_PORT" ;
MC_COMMAND="mc config host add myminio http://$MINIO_ENDPOINT:$MINIO_PORT $ACCESS $SECRET" ;
$MC_COMMAND ;
STATUS=$? ;
until [ $STATUS = 0 ]
do
ATTEMPTS=`expr $ATTEMPTS + 1` ;
echo \"Failed attempts: $ATTEMPTS\" ;
if [ $ATTEMPTS -gt $LIMIT ]; then
exit 1 ;
fi ;
sleep 2 ; # 1 second intervals between attempts
$MC_COMMAND ;
STATUS=$? ;
done ;
set -e ; # reset `e` as active
return 0
}
# checkBucketExists ($bucket)
# Check if the bucket exists, by using the exit code of `mc ls`
checkBucketExists() {
BUCKET=$1
CMD=$(/usr/bin/mc ls myminio/$BUCKET > /dev/null 2>&1)
return $?
}
# createBucket ($bucket, $policy, $purge)
# Ensure bucket exists, purging if asked to
createBucket() {
BUCKET=$1
POLICY=$2
PURGE=$3
# Purge the bucket, if set & exists
# Since PURGE is user input, check explicitly for `true`
if [ $PURGE = true ]; then
if checkBucketExists $BUCKET ; then
echo "Purging bucket '$BUCKET'."
set +e ; # don't exit if this fails
/usr/bin/mc rm -r --force myminio/$BUCKET
set -e ; # reset `e` as active
else
echo "Bucket '$BUCKET' does not exist, skipping purge."
fi
fi
# Create the bucket if it does not exist
if ! checkBucketExists $BUCKET ; then
echo "Creating bucket '$BUCKET'"
/usr/bin/mc mb myminio/$BUCKET
else
echo "Bucket '$BUCKET' already exists."
fi
# At this point, the bucket should exist, skip checking for existence
# Set policy on the bucket
echo "Setting policy of bucket '$BUCKET' to '$POLICY'."
/usr/bin/mc policy $POLICY myminio/$BUCKET
}
# Try connecting to Minio instance
connectToMinio
# Create the bucket
createBucket bucket none false
config.json: |-
{
"version": "26",
"credential": {
"accessKey": "DR06",
"secretKey": "lt0ZxGkw"
},
"region": "us-east-1",
"browser": "on",
"worm": "off",
"domain": "",
"storageclass": {
"standard": "",
"rrs": ""
},
"cache": {
"drives": [],
"expiry": 90,
"maxuse": 80,
"exclude": []
},
"notify": {
"amqp": {
"1": {
"enable": false,
"url": "",
"exchange": "",
"routingKey": "",
"exchangeType": "",
"deliveryMode": 0,
"mandatory": false,
"immediate": false,
"durable": false,
"internal": false,
"noWait": false,
"autoDeleted": false
}
},
"nats": {
"1": {
"enable": false,
"address": "",
"subject": "",
"username": "",
"password": "",
"token": "",
"secure": false,
"pingInterval": 0,
"streaming": {
"enable": false,
"clusterID": "",
"clientID": "",
"async": false,
"maxPubAcksInflight": 0
}
}
},
"elasticsearch": {
"1": {
"enable": false,
"format": "namespace",
"url": "",
"index": ""
}
},
"redis": {
"1": {
"enable": false,
"format": "namespace",
"address": "",
"password": "",
"key": ""
}
},
"postgresql": {
"1": {
"enable": false,
"format": "namespace",
"connectionString": "",
"table": "",
"host": "",
"port": "",
"user": "",
"password": "",
"database": ""
}
},
"kafka": {
"1": {
"enable": false,
"brokers": null,
"topic": ""
}
},
"webhook": {
"1": {
"enable": false,
"endpoint": ""
}
},
"mysql": {
"1": {
"enable": false,
"format": "namespace",
"dsnString": "",
"table": "",
"host": "",
"port": "",
"user": "",
"password": "",
"database": ""
}
},
"mqtt": {
"1": {
"enable": false,
"broker": "",
"topic": "",
"qos": 0,
"clientId": "",
"username": "",
"password": "",
"reconnectInterval": 0,
"keepAliveInterval": 0
}
}
}
}
---
# Source: minio/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: RELEASE-NAME-minio
labels:
app: minio
chart: minio-1.7.0
release: RELEASE-NAME
heritage: Tiller
spec:
type: ClusterIP
clusterIP: None
ports:
- name: service
port: 9000
targetPort: 9000
protocol: TCP
selector:
app: minio
release: RELEASE-NAME
---
# Source: minio/templates/statefulset.yaml
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: RELEASE-NAME-minio
labels:
app: minio
chart: minio-1.7.0
release: RELEASE-NAME
heritage: Tiller
spec:
serviceName: RELEASE-NAME-minio
replicas: 4
selector:
matchLabels:
app: minio
release: RELEASE-NAME
template:
metadata:
name: RELEASE-NAME-minio
labels:
app: minio
release: RELEASE-NAME
spec:
containers:
- name: minio
image: node1:5000/minio/minio:RELEASE.2018-09-01T00-38-25Z
imagePullPolicy: IfNotPresent
command: [ "/bin/sh",
"-ce",
"cp /tmp/config.json &&
/usr/bin/docker-entrypoint.sh minio -C server
http://RELEASE-NAME-minio-0.RELEASE-NAME-minio.default.svc.cluster.local/export
http://RELEASE-NAME-minio-1.RELEASE-NAME-minio.default.svc.cluster.local/export
http://RELEASE-NAME-minio-2.RELEASE-NAME-minio.default.svc.cluster.local/export
http://RELEASE-NAME-minio-3.RELEASE-NAME-minio.default.svc.cluster.local/export" ]
volumeMounts:
- name: export
mountPath: /export
- name: minio-server-config
mountPath: "/tmp/config.json"
subPath: config.json
- name: minio-config-dir
mountPath:
ports:
- name: service
containerPort: 9000
env:
- name: MINIO_ACCESS_KEY
valueFrom:
secretKeyRef:
name: RELEASE-NAME-minio
key: accesskey
- name: MINIO_SECRET_KEY
valueFrom:
secretKeyRef:
name: RELEASE-NAME-minio
key: secretkey
livenessProbe:
tcpSocket:
port: service
initialDelaySeconds: 5
periodSeconds: 30
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
readinessProbe:
tcpSocket:
port: service
periodSeconds: 15
timeoutSeconds: 1
successThreshold: 1
failureThreshold: 3
resources:
requests:
cpu: 250m
memory: 256Mi
volumes:
- name: minio-user
secret:
secretName: RELEASE-NAME-minio
- name: minio-server-config
configMap:
name: RELEASE-NAME-minio
- name: minio-config-dir
emptyDir: {}
volumeClaimTemplates:
- metadata:
name: export
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: local-fast
resources:
requests:
storage: 49Gi
---
# Source: minio/templates/ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: RELEASE-NAME-minio
labels:
app: minio
chart: minio-1.7.0
release: RELEASE-NAME
heritage: Tiller
annotations:
nginx.ingress.kubernetes.io/affinity: cookie
nginx.ingress.kubernetes.io/session-cookie-hash: sha1
nginx.ingress.kubernetes.io/session-cookie-name: route
spec:
tls:
- hosts:
- minio.sample.com
secretName: tls-secret
rules:
- host: minio.sample.com
http:
paths:
- path: /
backend:
serviceName: RELEASE-NAME-minio
servicePort: 9000
</code></pre>
| <p>I suspect you are not getting the physical volume. Check your <code>kube-controller-manager</code> logs on your active master. This will vary depending on the cloud you are using: AWS, GCP, Azure, Openstack, etc. The <code>kube-controller-manager</code> is usually running on a docker container on the master. So you can do something like:</p>
<pre><code>docker logs <kube-controller-manager-container>
</code></pre>
<p>Also, check:</p>
<pre><code>kubectl get pvc
kubectl get pv
</code></pre>
<p>Hope it helps.</p>
|
<p>I have a kubernetes service that works in a leader/follower fashion, so only one of 2 pods (for HA) can accept the connection. I want to make my service publicly available with traefik ingress controller. </p>
<p>One of the ways to achieve that is to tweak the readiness probe for the service, so the follower pod fails readiness checks until it becomes the master. But I don't like this approach, because it makes it hard to tell whether a pod is actually failing or just waiting to become the master.</p>
<p>Any thoughts are welcome</p>
| <p>So you don't need to use the k8s readiness probe for the traefik ingress, you can use a traefik backend health check defined in your k8s ingress through annotations. This way you don't forward to backends that are not active. For example:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: specific-deployment
  annotations:
    traefik.backend.healthcheck.port: "8080"
traefik.backend.healthcheck.scheme: http
traefik.backend.healthcheck.path: /health
spec:
rules:
- host: specific.minikube
http:
paths:
- path: /
backend:
serviceName: stilton
servicePort: http
</code></pre>
<p>This way you can use the k8s readiness probes for your pods.</p>
|
<p>I am using an AKS cluster on Azure. I am trying to discover a service using DNS (<a href="http://my-api.default.svc.cluster.local:3000/" rel="nofollow noreferrer">http://my-api.default.svc.cluster.local:3000/</a>), but it's not working (This site can’t be reached). With the service IP endpoint everything is working fine.</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-api
labels:
app: my-api
spec:
replicas: 1
selector:
matchLabels:
app: my-api
template:
metadata:
labels:
app: my-api
spec:
containers:
- name: my-api
image: test.azurecr.io/my-api:latest
ports:
- containerPort: 3000
imagePullSecrets:
- name: testsecret
---
apiVersion: v1
kind: Service
metadata:
name: my-api
spec:
selector:
app: my-api
ports:
- protocol: TCP
port: 3000
targetPort: 3000
</code></pre>
<blockquote>
<p>kubectl describe services kube-dns --namespace kube-system</p>
</blockquote>
<pre><code>Name: kube-dns
Namespace: kube-system
Labels: addonmanager.kubernetes.io/mode=Reconcile
k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kube-dns","kubernet...
Selector: k8s-app=kube-dns
Type: ClusterIP
IP: 10.10.110.110
Port: dns 53/UDP
TargetPort: 53/UDP
Endpoints: 10.10.100.54:53,10.10.100.64:53
Port: dns-tcp 53/TCP
TargetPort: 53/TCP
Endpoints: 10.10.100.54:53,10.10.100.64:53
Session Affinity: None
Events: <none>
</code></pre>
<blockquote>
<p>kubectl describe svc my-api</p>
</blockquote>
<pre><code>Name: my-api
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"my-api","namespace":"default"},"spec":{"ports":[{"port":3000,"protocol":...
Selector: app=my-api
Type: ClusterIP
IP: 10.10.110.104
Port: <unset> 3000/TCP
TargetPort: 3000/TCP
Endpoints: 10.10.100.42:3000
Session Affinity: None
Events: <none>
</code></pre>
<p><strong>From Second POD</strong></p>
<pre><code>kubectl exec -it second-pod /bin/bash
curl my-api.default.svc.cluster.local:3000
Response: {"value":"Hello world2"}
</code></pre>
<p><strong>From Second POD website is running which is using the same endpoint but it's not connecting to the service.</strong></p>
<p><a href="https://i.stack.imgur.com/VSCfp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VSCfp.png" alt="enter image description here"></a></p>
| <p>Fixing the indentation of your yaml file, I was able to launch the deployment and service successfully. Also the DNS resolution worked fine.</p>
<p>Differences:</p>
<ul>
<li>Fixed indentation</li>
<li>Used <code>test1</code> namespaces instead of <code>default</code></li>
<li>Used containerPort <code>80</code> instead of <code>3000</code></li>
<li>Used my image</li>
</ul>
<p>Deployment:</p>
<pre><code>apiVersion: apps/v1beta2
kind: Deployment
metadata:
labels:
app: my-api
name: my-api
namespace: test1
spec:
replicas: 1
selector:
matchLabels:
app: my-api
template:
metadata:
labels:
app: my-api
spec:
containers:
- image: leodotcloud/swiss-army-knife
name: my-api
ports:
- containerPort: 80
protocol: TCP
</code></pre>
<p>Service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-api
namespace: test1
spec:
ports:
- port: 3000
protocol: TCP
targetPort: 80
selector:
app: my-api
type: ClusterIP
</code></pre>
<p><a href="https://i.stack.imgur.com/e4nN9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/e4nN9.png" alt="second-pod-test"></a></p>
<p>Debugging steps:</p>
<ul>
<li>Install tcpdump inside both of the kube-dns containers and start capturing DNS traffic (with filters from the second pod IP)</li>
<li>From inside the second pod, run <code>curl</code> or <code>dig</code> command using the FQDN.</li>
<li>Check if the DNS query packets are reaching the kube-dns containers.</li>
<li>If not, check for networking issues.</li>
<li>If the DNS resolution is working, then start tcpdump inside your application container and check if the curl packet is reaching the container.</li>
<li>Check the source and destination IP address of the packets.</li>
<li>Check the iptables rules on the hosts.</li>
<li>Check sysctl settings.</li>
</ul>
|
<p>I have deployed cockroachdb with a <a href="https://github.com/helm/charts/tree/master/stable/cockroachdb" rel="nofollow noreferrer">stable helm chart</a>.
Unfortunately, I didn't realize the default conf gives me a very small 1Gi, unresizable persistent volume.
I also didn't realize that the cockroachdb was using quite a lot of space to monitor itself with time series.</p>
<p>Now, my persistent volumes are full, my cockroachdb pods are crashing:</p>
<pre><code>log: exiting because of error: log: cannot create log: open /cockroach/cockroach-data/logs/cockroach.ckdb-cockroachdb-0.root.2018-09-09T14_53_47Z.000001.log: no space left on device
</code></pre>
<p>And I can't resize the volume:</p>
<pre><code>kubectl patch pvc datadir-ckdb-cockroachdb-0 -p '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'
The PersistentVolumeClaim "datadir-ckdb-cockroachdb-0" is invalid: spec: Forbidden: field is immutable after creation
</code></pre>
<p>Now I'm stuck, as I can't run a node to get my data back.
Is there any way out of this? I would at least like to retrieve my data. My service has crashed anyway.</p>
<p>Second question: if I want to avoid this in the future, what values should be used to have dynamically resizable volumes on GKE?</p>
<p>Third question: should the default in the helm chart really stay like that?</p>
| <p>As <a href="https://stackoverflow.com/users/9231144/patrick-w">https://stackoverflow.com/users/9231144/patrick-w</a> mentioned, automatic resize of the volumes isn't possible until Kubernetes/GKE version 1.11.</p>
<p>In the meantime, it is possible to manually resize them by editing the disks in the <a href="https://console.cloud.google.com/compute/disks" rel="nofollow noreferrer">GCE management console</a>. Go there, click on the disks you want to resize, click the Edit button near the top of the page, type in the new desired size of the disk in GB, and click "Save". You'll then have to SSH into the relevant pods (e.g. <code>kubectl exec -it ckdb-cockroachdb-0 bash</code>) and resize the filesystem to use the new disk capacity with a command like <code>resize2fs</code>.</p>
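<p>For completeness, the same can be done from the command line; this is only a rough sketch where the disk name, zone and device path are placeholders you have to look up yourself, and it assumes the container image ships <code>resize2fs</code>:</p>
<pre><code># grow the PD backing the PV (find its name with `kubectl describe pv`)
gcloud compute disks resize <pd-name> --size 10GB --zone <zone>

# find the device the data directory is mounted from, then grow the filesystem
kubectl exec -it ckdb-cockroachdb-0 -- df -h /cockroach/cockroach-data
kubectl exec -it ckdb-cockroachdb-0 -- resize2fs <device-shown-by-df>
</code></pre>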
<p>As for your question about changing the default disk size in the Helm Chart, it's a fair question. But what would a good default size be? Too low, and it's easy for this to happen. Too high, and it won't work in environments that don't have large enough disks for the deployment to succeed. In particular, <code>minikube</code> uses tmpfs-backed volumes, so their size is quite limited by the memory of your machine. At the very least, warning in the output after instantiating the Chart seems warranted.</p>
|
<p>After installing just the basic Kubernetes packages and working with minikube, I have started just the basic kube-system pods. I'm trying to investigate why the kube-dns is not able to resolve domain names</p>
<p>Here are the versions I'm using</p>
<pre><code>Client:
Version: 18.06.1-ce
API version: 1.38
Go version: go1.10.3
Git commit: e68fc7a
Built: Tue Aug 21 17:24:56 2018
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 18.06.1-ce
API version: 1.38 (minimum version 1.12)
Go version: go1.10.3
Git commit: e68fc7a
Built: Tue Aug 21 17:23:21 2018
OS/Arch: linux/amd64
Experimental: false
minikube version: v0.28.2
</code></pre>
<p>Kubectl:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:17:28Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Kubeadm:</p>
<pre><code>kubeadm version: &version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>VirtualBox:</p>
<pre><code>Version 5.2.18 r124319 (Qt5.6.2)
</code></pre>
<p>Here are the system pods I have deployed:</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
default busybox 1/1 Running 0 31m
kube-system etcd-minikube 1/1 Running 0 32m
kube-system kube-addon-manager-minikube 1/1 Running 0 33m
kube-system kube-apiserver-minikube 1/1 Running 0 33m
kube-system kube-controller-manager-minikube 1/1 Running 0 33m
kube-system kube-dns-86f4d74b45-xjfmv 3/3 Running 2 33m
kube-system kube-proxy-2kkzk 1/1 Running 0 33m
kube-system kube-scheduler-minikube 1/1 Running 0 33m
kube-system kubernetes-dashboard-5498ccf677-pz87g 1/1 Running 0 33m
kube-system storage-provisioner 1/1 Running 0 33m
</code></pre>
<p>I've also deployed busybox to allow me to execute commands inside the containers</p>
<pre><code>kubectl exec busybox -- cat /etc/resolv.conf
nameserver 10.96.0.10
search default.svc.cluster.local svc.cluster.local cluster.local mapleworks.com
options ndots:5
</code></pre>
<p>and </p>
<pre><code>kubectl exec busybox nslookup google.com
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'google.com'
command terminated with exit code 1
</code></pre>
<p>The same commands run on the VM itself yield the following:</p>
<pre><code>cat /etc/resolv.conf
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 127.0.1.1
search mapleworks.com <<< OUR local DNS server
nslookup google.com
Server: 127.0.1.1
Address: 127.0.1.1#53
Non-authoritative answer:
Name: google.com
Address: 172.217.13.174
</code></pre>
<p>Questions:
kube-dns is using the default nameserver 10.96.0.10 whereas I would have expected the VM nameserver would have been imported into kubernetes.</p>
<p>While this same nameserver deployed on a native Windows or Mac platform is able to properly resolve domain names, this VM has an issue with it.</p>
<p>Is this some sort of Firewall issue as I've seen mentioned in some other posts?</p>
<p>I have inspected the kube-dns container logs but the most relevant are from the sidecar container.</p>
<pre><code>I0910 15:47:17.667100 1 main.go:51] Version v1.14.8
I0910 15:47:17.667195 1 server.go:45] Starting server (options {DnsMasqPort:53 DnsMasqAddr:127.0.0.1 DnsMasqPollIntervalMs:5000 Probes:[{Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33} {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33}] PrometheusAddr:0.0.0.0 PrometheusPort:10054 PrometheusPath:/metrics PrometheusNamespace:kubedns})
I0910 15:47:17.667240 1 dnsprobe.go:75] Starting dnsProbe {Label:kubedns Server:127.0.0.1:10053 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33}
I0910 15:47:17.668244 1 dnsprobe.go:75] Starting dnsProbe {Label:dnsmasq Server:127.0.0.1:53 Name:kubernetes.default.svc.cluster.local. Interval:5s Type:33}
W0910 15:50:04.780281 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:34535->127.0.0.1:53: i/o timeout
W0910 15:50:11.781236 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:50887->127.0.0.1:53: i/o timeout
W0910 15:50:24.844065 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:52865->127.0.0.1:53: i/o timeout
W0910 15:50:31.845587 1 server.go:64] Error getting metrics from dnsmasq: read udp 127.0.0.1:42053->127.0.0.1:53: i/o timeout
</code></pre>
<p>The i/o timeouts correspond to the manual DNS query I've performed on google.com, I think</p>
<p>Otherwise I see here the localhost address and the port 53</p>
<p>I just don't know what is going on...</p>
| <p>Each <code>kubelet</code> in a k8s cluster has <code>--cluster-dns</code> option. This option, in fact, provides a <code>Service</code> name for kube-dns <code>Deployment</code>. Each <code>kube-dns</code> Pod, in turn, has <code>dnsmasq</code> container, which is using a list of nameservers from the k8s node. You can check it in <code>dnsmasq</code> container's logs:</p>
<pre><code>I0720 03:49:51.081031 1 nanny.go:116] dnsmasq[13]: reading /etc/resolv.conf
I0720 03:49:51.081068 1 nanny.go:116] dnsmasq[13]: using nameserver 127.0.0.1#10053 for domain ip6.arpa
I0720 03:49:51.081099 1 nanny.go:116] dnsmasq[13]: using nameserver 127.0.0.1#10053 for domain in-addr.arpa
I0720 03:49:51.081130 1 nanny.go:116] dnsmasq[13]: using nameserver 127.0.0.1#10053 for domain cluster.local
I0720 03:49:51.081160 1 nanny.go:116] dnsmasq[13]: using nameserver <nameserver_1>#53
I0720 03:49:51.081190 1 nanny.go:116] dnsmasq[13]: using nameserver <nameserver_2>#53
I0720 03:49:51.081222 1 nanny.go:116] dnsmasq[13]: using nameserver <nameserver_N>#53
</code></pre>
<p>When any <code>Pod</code> is created, by default, it has got <code>nameserver <CLUSTER_DNS_IP></code> entry in <code>/etc/resolve.conf</code>. That's how any Pod can (or cannot) resolve certain domain name - through the <code>kube-dns</code> service.</p>
<p>For example, my cluster-dns is 10.233.0.3:</p>
<pre><code>$ kubectl -n test run -it --image=alpine:3.6 alpine -- sh
If you don't see a command prompt, try pressing enter.
/ # cat /etc/resolv.conf
nameserver 10.233.0.3
search test.svc.cluster.local svc.cluster.local cluster.local test.kz
/ # nslookup kubernetes-charts.storage.googleapis.com 10.233.0.3
Server: 10.233.0.3
Address 1: 10.233.0.3 kube-dns.kube-system.svc.cluster.local
Name: kubernetes-charts.storage.googleapis.com
Address 1: 74.125.131.128 lu-in-f128.1e100.net
Address 2: 2a00:1450:4010:c05::80 li-in-x80.1e100.net
</code></pre>
<p>So, if a <code>Node</code> (where the <code>kube-dns</code> is scheduled to) can resolve certain domain name, then any Pod can do the same.</p>
|
<p>I basically want to find the hard eviction strategy that kubelet is currently using.<br>
I checked the settings in the /etc/systemd/system/kubelet.service file on my K8s node. In that the strategy I mentioned is as follows:<br>
<code>--eviction-hard=nodefs.available<3Gi</code> </p>
<p>However, my pods seem to be evicted when the nodefs.available is <10% (the default Kubernetes setting).
I have been unable to find a way to know the current parameters that are being used by kubernetes.</p>
| <p>It is possible to <a href="https://kubernetes.io/docs/tasks/administer-cluster/reconfigure-kubelet/#generate-the-configuration-file" rel="noreferrer">dump the current kubelet configuration</a> using <code>kubectl proxy</code> along with the <code>/api/v1/nodes/${TARGET_NODE_FOR_KUBELET}/proxy/configz</code> path; see the linked Kubernetes docs for details.</p>
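<p>A rough example of what that looks like in practice (the node name is a placeholder); look for the <code>evictionHard</code> entries in the JSON output:</p>
<pre><code># proxy the API server locally
kubectl proxy --port=8001 &

# dump the live kubelet configuration for a given node
NODE_NAME="<your-node-name>"
curl -sSL "http://localhost:8001/api/v1/nodes/${NODE_NAME}/proxy/configz"
</code></pre>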
|
<p>I got an issue when I try to access an exposed <code>kubernetes</code> service through browser. Below is my Environment.</p>
<p>created two <code>ubuntu</code> EC2 instances(with all ports open in security group) and installed all kubernetes related tools like kubectl, kubeadm, docker, calico network.</p>
<p>created <code>nginx</code> pod, scaled it to 3 and exposed it with type <b>LoadBalancer</b>. When I curl from master or worker node to the exposed nginx it works fine(with public or private ip). But it does not work if i curl from outside. The request is timed out. I tried to delete service and expose it again with NodePort. But still I could not access from outside. I ensured the security group allows ingress. Is there a way to debug why it cannot be accessed from outside or I am missing something</p>
<p>I am not running <code>cloud controller manager</code> but <code>kube-controller-manager</code>. Will this be an issue.? </p>
<p>below is the output of all kubernetes components</p>
<pre><code>ubuntu@ip-172-31-29-98:~$ kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default pod/nginx-6f858d4d45-2wtlh 1/1 Running 0 51m
default pod/nginx-6f858d4d45-5dkws 1/1 Running 0 51m
default pod/nginx-6f858d4d45-h9cwg 1/1 Running 0 51m
kube-system pod/calico-etcd-82xkv 1/1 Running 1 18h
kube-system pod/calico-kube-controllers-74b888b647-prr2q 1/1 Running 1 18h
kube-system pod/calico-node-kbckk 2/2 Running 4 17h
kube-system pod/calico-node-n5zhr 2/2 Running 3 18h
kube-system pod/coredns-78fcdf6894-qjhlq 1/1 Running 1 18h
kube-system pod/coredns-78fcdf6894-sm7c9 1/1 Running 1 18h
kube-system pod/etcd-ip-172-31-29-98 1/1 Running 1 18h
kube-system pod/kube-apiserver-ip-172-31-29-98 1/1 Running 1 18h
kube-system pod/kube-controller-manager-ip-172-31-29-98 1/1 Running 1 18h
kube-system pod/kube-proxy-jxg88 1/1 Running 1 18h
kube-system pod/kube-proxy-knx59 1/1 Running 1 17h
kube-system pod/kube-scheduler-ip-172-31-29-98 1/1 Running 1 18h
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 18h
default service/nginx LoadBalancer 10.99.144.149 <pending> 80:31808/TCP 45m
kube-system service/calico-etcd ClusterIP 10.96.232.136 <none> 6666/TCP 18h
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 18h
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/calico-etcd 1 1 1 1 1 node-role.kubernetes.io/master= 18h
kube-system daemonset.apps/calico-node 2 2 2 2 2 <none> 18h
kube-system daemonset.apps/kube-proxy 2 2 2 2 2 beta.kubernetes.io/arch=amd64 18h
NAMESPACE NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
default deployment.apps/nginx 3 3 3 3 51m
kube-system deployment.apps/calico-kube-controllers 1 1 1 1 18h
kube-system deployment.apps/calico-policy-controller 0 0 0 0 18h
kube-system deployment.apps/coredns 2 2 2 2 18h
NAMESPACE NAME DESIRED CURRENT READY AGE
default replicaset.apps/nginx-6f858d4d45 3 3 3 51m
kube-system replicaset.apps/calico-kube-controllers-74b888b647 1 1 1 18h
kube-system replicaset.apps/calico-policy-controller-55b469c8fd 0 0 0 18h
kube-system replicaset.apps/coredns-78fcdf6894 2 2 2 18h
</code></pre>
<p><strong>Edit 1:</strong>
Tried to do the same in <code>GCloud</code>. It is the same in <code>GCloud</code> too. The <code>nginx</code> Service is accessible through the private/public IP inside the nodes. But when I curl from outside, it does not work. I spun up a simple Python server on one of the nodes and I am able to access it from outside. Only the services exposed through <code>kubernetes</code> are not curl-able from outside. </p>
<p>I think I am missing some fundamental understanding about <code>kubernetes</code> networking(especially in cloud). Can I get any help from experts?</p>
| <p>The internet firewall on my network was blocking the access to specific ports. There was no issues with the Cloud Provider Firewall or Instance Firewall.</p>
|
<p>I have a server running on Kubernetes to handle hourly processing jobs. Thinking of using a service to expose the pods, and using an (external) cron job to hit the load balancer so that kubernetes can autoscale to handle the higher load as required. However in implementation, if the cron job sends, say, 100 requests at the same time while there's only 1 pod, all the traffic will go to that pod whereas subsequently spun up pods will still not have any traffic to handle. </p>
<p>How can I get around this issue? Is it possible for me to scale up the pods first using a cron job before making the requests? Or should I make requests with a time delay so as to give time for the pods to get spun up? Or other suggestions are also welcome!</p>
| <p>I wrote a simple client-go based application which you can pair with a CronJob to scale the deployment up or down. You can take inspiration from it and write it yourself, or just use it. I hope this helps.</p>
<p><a href="https://github.com/balchua/boink" rel="nofollow noreferrer">https://github.com/balchua/boink</a></p>
|
<p>I have a Kubernetes Service that selects by doing:</p>
<pre><code>spec:
selector:
backend: nlp-server
</code></pre>
<p>If there are multiple <code>Pods</code> which match the selector, which <code>Pod</code> does the <code>Service</code> route a request to? </p>
<p>I am using the default <code>ClusterIP</code> setup. Search for "ClusterIP: Exposes the service on a cluster-internal IP. Choosing this value makes the service only reachable from within the cluster. This is the default ServiceType." in the <a href="https://kubernetes.io/docs/concepts/services-networking/" rel="nofollow noreferrer">docs</a></p>
<p>If I want the Service to route to a Pod that makes sense (having lesser load),<br>
is the <a href="https://kubernetes.io/docs/concepts/services-networking/#internal-load-balancer" rel="nofollow noreferrer">internal load-balancer</a> what I need?</p>
| <p>In a nutshell, no you don't need the internal load-balancer you linked to. The <code>Service</code> resource <em>is</em> indeed a load-balancer. Depending on the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#virtual-ips-and-service-proxies" rel="noreferrer">proxy mode</a> it could be round-robin or random. If you're going with the default (iptables-based proxy) it would be a <a href="http://kubernetesbyexample.com/services/" rel="noreferrer">random pod selected</a> every time you hit the virtual IP of the service.</p>
<p>Note: you <em>could</em> use the internal load-balancer type, typically in a cloud environment <a href="https://medium.com/google-cloud/internal-load-balancing-for-kubernetes-services-on-google-cloud-f8aef11fb1c4" rel="noreferrer">such as GKE</a>, for example to cut down on costs when all you need is cluster-internal connectivity, however they are (as far as I know) usually L4 load-balancers. </p>
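<p>For completeness, on GKE the internal variant is just a normal <code>LoadBalancer</code> service plus an annotation; a rough sketch (ports are placeholders):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nlp-server
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    backend: nlp-server
  ports:
  - port: 80
    targetPort: 8080   # whatever your pods actually listen on
</code></pre>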
|
<p>I have set up Kubernetes secrets.</p>
<pre><code>kubectl create secret generic mysecret --from-file=mysecret=/home/ubuntu/secret.txt
</code></pre>
<p>And this secret can be converted to plaintext using the same <code>kubectl</code> command:</p>
<pre><code>kubectl get secret mysecret -o yaml
# and base64 decode
</code></pre>
<p>How do I limit access to this secret? I only want a certain pods and only me as an operator to have access to this secret.</p>
| <p>OK, so you need to define a (cluster) role and then bind it to you (== human user is the target entity) and/or to a service account (== app is the target entity) which you then <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/" rel="noreferrer">use in the pod</a> instead of the <code>default</code> one.</p>
<p>The respective <code>secretadmin</code> role (or choose whatever name you prefer) would look something like this (vary verbs as required):</p>
<pre><code>$ kubectl create clusterrole secretadmin \
--verb=get --verb=list --verb=create --verb=update \
--resource=secret \
--namespace=mysuperproject
</code></pre>
<p>Once you've defined the role, you can attach (or: bind) it to a certain entity. Let's go through the case of the service account (similar then for a human user, just simpler). So first we need to create the service account, here called <code>thepowerfulapp</code> which you will then use in your deployment/pod/whatever:</p>
<pre><code>$ kubectl -n mysuperproject create sa thepowerfulapp
</code></pre>
<p>And now it's time to tie everything together with the following binding called <code>canadminsecret</code></p>
<pre><code>$ kubectl create clusterrolebinding canadminsecret \
  --clusterrole=secretadmin \
--serviceaccount=mysuperproject:thepowerfulapp \
--namespace=mysuperproject
</code></pre>
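<p>And to close the loop, a minimal sketch of wiring that service account into a pod spec (the image is a placeholder):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: thepowerfulapp
  namespace: mysuperproject
spec:
  serviceAccountName: thepowerfulapp
  containers:
  - name: app
    image: your-app-image
</code></pre>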
|
<p>I have a daemonset which deploys containers to check the status of a few mount points. This daemonset deployment is done from a Python script, and soon after the deployment command I am collecting the logs that are created by the daemonset pods. I see that the logs are getting copied soon after the daemonset creation is triggered and, for the same reason, the logs are not complete. </p>
<p>After some investigation, I found that the daemonset pods are still executing the script and writing logs, while the main script has already jumped to the next command and copies the logs using the <code>kubectl cp</code> command. </p>
<p>Is there any way I can add a wait so that the logs are copied only after the script execution is completed?</p>
<p>This is the part of code I have</p>
<pre><code># Create diag pod
cmd = "kubectl apply -f diagnostic_daemon.yaml"
(rc, cmd_out, cmd_err) = cmdHandle.cmd_run(cmd)
if cmd_err.strip():
print "ERROR: Unbale to create diag POD. Exiting!!!"
print "> " + REDC + cmd_err.strip() + ENDC
sys.exit(1)
# wait for the daemonset pod to reach running state
check_daemonset_state("cos-plugin-diag")
print "\n*****Collecting cos-plugin-diag logs*****"
# Collect diag pod logs
global nodeQdigl
</code></pre>
| <p>So you also need to check for the pods to be in <code>Running</code> state (all containers in the pod)</p>
<pre><code># wait for the pod
check_pod_state("cos-plugin-diag")
</code></pre>
<p>So if the Pod has 2 containers, for example, you should check for <code>2/2</code></p>
<p>Example:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
calico-node-9wnst 2/2 Running 0 6d
</code></pre>
<p>The other aspect is that you need to check for a string in the log output that marks the end of your log, which you can check with</p>
<pre><code>kubectl logs <pod-name> -c <container-if-multiple-containers> -n <your-namespace>
</code></pre>
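<p>A rough sketch of such a wait in Python (the sentinel string is an assumption; use whatever final line your diag script actually prints). You would call this between <code>check_daemonset_state(...)</code> and the <code>kubectl cp</code> step:</p>
<pre><code>import time
import subprocess

def wait_for_log_completion(pod, sentinel="diag collection complete", timeout=300, interval=5):
    # Poll the pod logs until the sentinel line shows up or we time out.
    deadline = time.time() + timeout
    while time.time() < deadline:
        logs = subprocess.check_output(["kubectl", "logs", pod]).decode("utf-8", "ignore")
        if sentinel in logs:
            return True
        time.sleep(interval)
    return False
</code></pre>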
<p>Hope it helps</p>
|
<p>We are trying to implement Presto with Kubernetes. We have a kubernetes cluster running on cloud as a service. I tried to google on this but could not find a conclusive result as to what may be the best practices to deploy Presto with Kubernetes. Though there exists the official github of Presto - but does not help. Below are the two questions I am trying to seek an answer for:</p>
<ol>
<li>What should be the best approach to configure Presto with Kubernetes - metrics such as ideal worker replicas?</li>
<li>How can we go ahead and performance test this deployment?</li>
</ol>
| <p>You could install with the official helm chart from <a href="https://github.com/helm/charts/tree/master/stable/presto" rel="noreferrer">https://github.com/helm/charts/tree/master/stable/presto</a>. It provides an option to set the number of workers. With the official chart you should be able to ask questions in the Kubernetes charts slack channel (through <a href="http://slack.k8s.io" rel="noreferrer">http://slack.k8s.io</a>) and raise issues in GitHub if you hit any. Or there are non-helm examples such as <a href="https://github.com/dharmeshkakadia/presto-kubernetes" rel="noreferrer">https://github.com/dharmeshkakadia/presto-kubernetes</a>.</p>
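<p>For example, something along these lines; the exact value name for the worker count is an assumption here, so double-check the chart's <code>values.yaml</code>:</p>
<pre><code>helm install --name my-presto stable/presto --set server.workers=3
</code></pre>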
<p>The question of how many workers isn't specific to Kubernetes. It's a question of how much and what kind of load you will need the deployment to handle and will also depend on what hardware your Kubernetes cluster is using. If you're not sure then perhaps you can deploy with the defaults and adjust as needed. This is suggested by <a href="https://prestodb.io/presto-admin/docs/current/installation/presto-configuration.html" rel="noreferrer">https://prestodb.io/presto-admin/docs/current/installation/presto-configuration.html</a> You'll find some of the settings such as memory per node set in the Deployment parts of the kubenernetes yaml descriptors or in the values.yaml in the case of the helm chart. </p>
<p>To performance test your deployment you will need test data and can then run queries against the cluster. So the same process you would follow outside of Kubernetes. There are tools to help such as <a href="https://www.lewuathe.com/use-benchto-for-evaluation-of-presto.html" rel="noreferrer">https://www.lewuathe.com/use-benchto-for-evaluation-of-presto.html</a> or <a href="https://github.com/prestodb/tempto" rel="noreferrer">https://github.com/prestodb/tempto</a> You may also want to look at <a href="https://kognitio.com/blog/presto-performance-powerful-or-problematic/" rel="noreferrer">https://kognitio.com/blog/presto-performance-powerful-or-problematic/</a> </p>
|
<p>I am trying to create a deployment on a K8s cluster with one master and two worker nodes. The cluster is running on 3 AWS EC2 instances. I have been using this environment for quite some time to play with Kubernetes. Three days ago, I started to see the status of all the pods change from <code>Running</code> to <code>ContainerCreating</code>. Only the pods that are scheduled on the master are shown as <code>Running</code>. The pods running on worker nodes are shown as <code>ContainerCreating</code>. When I run <code>kubectl describe pod <podname></code>, it shows the following in the events:</p>
<pre><code> Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 34s default-scheduler Successfully assigned nginx-8586cf59-5h2dp to ip-172-31-20-57
Normal SuccessfulMountVolume 34s kubelet, ip-172-31-20-57 MountVolume.SetUp succeeded for volume "default-token-wz7rs"
Warning FailedCreatePodSandBox 4s kubelet, ip-172-31-20-57 Failed create pod sandbox.
Normal SandboxChanged 3s kubelet, ip-172-31-20-57 Pod sandbox changed, it will be killed and re-created.
</code></pre>
<p>This error has been bugging me now. I tried to search around online on related error but I couldn't get anything specific. I did kubeadm reset on the cluster including master and worker nodes and brought up the cluster again. The nodes status shows ready. But I run into the same problem again whenever I try to create a deployment using the below command for example:</p>
<pre><code>kubectl run nginx --image=nginx --replicas=2
</code></pre>
| <p>This can occur if you specify a limit or request on memory and use the wrong unit.</p>
<p>Below triggered the message:</p>
<pre><code>resources:
limits:
cpu: "300m"
memory: "256m"
requests:
cpu: "50m"
memory: "64m"
</code></pre>
<p>The correct line would be:</p>
<pre><code>resources:
limits:
cpu: "300m"
memory: "256Mi"
requests:
cpu: "50m"
memory: "64Mi"
</code></pre>
|
<h1>Helm _helpers.tpl?</h1>
<p>Helm allows for the use of <a href="https://golang.org/pkg/text/template/" rel="noreferrer">Go templating</a> in resource files for Kubernetes. </p>
<p>A file named <code>_helpers.tpl</code> is usually used to define Go template helpers with this syntax:</p>
<pre><code>{{- define "yourFnName" -}}
{{- printf "%s-%s" .Values.name .Values.version | trunc 63 -}}
{{- end -}}
</code></pre>
<p>Which you can then use in your <code>*.yaml</code> resource files like so:</p>
<pre><code>{{ template "yourFnName" . }}
</code></pre>
<h1>The Question</h1>
<p>How can I use the helpers I define, in other helper definitions? </p>
<p>For example, what if I have a helper for the application name, and want to use that in the definition for a helper which determines the ingress host name? </p>
<p>I have tried calling helpers in other definitions a couple different ways. Given this basic helper function:</p>
<pre><code>{{- define "host" -}}
{{- printf "%.example.com" <Somehow get result of "name" helper here> -}}
{{- end -}}
</code></pre>
<p>I have tried the following:</p>
<pre><code>{{- printf "%.example.com" {{ template "name" . }} -}}
{{- printf "%.example.com" {{- template "name" . -}} -}}
{{- printf "%.example.com" ( template "name" . ) -}}
{{- printf "%.example.com" template "name" . -}}
# Separator
{{- $name := {{ template "environment" . }} -}}
{{- printf "%.example.com" $name -}}
# Separator
{{- $name := template "environment" . -}}
{{- printf "%.example.com" $name -}}
# Separator
{{- $name := environment -}}
{{- printf "%.example.com" $name -}}
</code></pre>
<p>Is it possible to even do this? If so, how?</p>
| <p>You can use <code>(include ... )</code> syntax. Example of including previously defined template <code>foo</code>:</p>
<pre><code>{{- define "bar" -}}
{{- printf "%s-%s" (include "foo" .) .Release.Namespace | trunc 63 | trimSuffix "-" -}}
{{- end -}}
</code></pre>
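<p>Applied to the example from the question, the <code>host</code> helper would then look something like this (note the <code>%s</code> verb in the format string):</p>
<pre><code>{{- define "host" -}}
{{- printf "%s.example.com" (include "name" .) -}}
{{- end -}}
</code></pre>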
|
<p>Basically, I have a Deployment that creates 3 containers which scale automatically: PHP-FPM, NGINX and the container that contains the application, all set up with secrets, services and ingress. The application also shares the project between PHP-FPM and NGINX, so it's all set up.</p>
<p>Since I want to explore more with K8s, I decided to create a pod with Redis that also mounts a persistent disk (but that's not important). I have also created a service for Redis, and all works perfectly fine if I SSH into the Redis container and run <code>redis-cli</code>.</p>
<p>The fun part is that the project can't connect to the pod that Redis is on. I understand that the containers within the same pod share the same "local" network and can be accessed using <code>localhost</code>.</p>
<p>How do I connect my project to the Redis server that is running in another pod and scales independently? What's wrong with the Redis service?</p>
<hr>
<p>My Redis service is this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: redis-service
spec:
ports:
- port: 6379
targetPort: 6379
selector:
app: redis
</code></pre>
<p>My Redis pod is powered by a deployment configuration file (i don't necessarily scale it, but i'll look forward into it):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: redis
labels:
app: redis
spec:
selector:
matchLabels:
app: redis
strategy:
type: Recreate
template:
metadata:
labels:
app: redis
spec:
volumes:
- name: redis-persistent-volume
persistentVolumeClaim:
claimName: redis-pvc
containers:
- image: redis:4.0.11
command: ['redis-server']
name: redis
imagePullPolicy: Always
resources:
limits:
cpu: 250m
memory: 512Mi
requests:
cpu: 250m
memory: 512Mi
ports:
- containerPort: 6379
name: redis
volumeMounts:
- name: redis-persistent-volume
mountPath: /data
</code></pre>
<p>Also, when i tap into the <code>kubectl get service</code>, the Redis server has a Cluster IP:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 21h
nginx-service NodePort 10.100.111.16 <none> 80:30312/TCP 21h
redis-service ClusterIP 10.99.80.141 <none> 6379/TCP 6s
</code></pre>
| <blockquote>
<p>How do I connect my project to the Redis server that is running in another pod and scales independently?</p>
</blockquote>
<p>You have three possible states here:</p>
<ul>
<li><p>To connect to Redis pod from <strong>within</strong> any other pod running <strong>in the same namespace</strong> as Redis pod is running. In this case you will use service name <code>redis-service</code> and designates service port <code>6379</code> to reach it over it's current ClusterIP (kube-dns is making DNS resolution for you there). I'm guessing that you are asking for this scenario.</p>
</li>
<li><p>Here is just an example of accessing one pod from within another pod (in your case). First run:</p>
<pre><code> kubectl run -it --rm test --image=busybox --restart=Never -- sh
</code></pre>
<p>this will run new test pod and give you <code>sh</code> within that pod. Now if you type <code>nslookup redis-service</code> there (within test pod) you will check that DNS is working correctly between pods. You can also try to see if redis port is actually open with <code>nc -zv redis-service 6379</code>. If your kube-dns is working properly you should see that the port is opened.</p>
</li>
<li><p>To connect to Redis pod from <strong>within</strong> any other pod running in the same kubernetes cluster but <strong>in different namespace</strong>. In this case, you will use FQDN consisting of the service name and namespace name like it is given in <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#namespaces-and-dns" rel="noreferrer">the documentation</a>.</p>
</li>
<li><p>To connect to Redis pod from <strong>outside</strong> of the kubernetes cluster. In this case, you will need some kind of ingress or nodePort of similar mechanism to expose redis service to outside world. More on this you can read in <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/" rel="noreferrer">the official documentation</a>.</p>
</li>
</ul>
|
<p>For my AKS container setup, I would like to pass the requested number of replicas of a given statefulset to each pod through environment variables.</p>
<p>I was trying to do this without repeating myself (once in the "replicas" setting and once in the setting of the environment variables).</p>
<p>The only real solution I could find of how to do this was using anchors and aliases as such (based on <a href="https://stackoverflow.com/questions/48632096/kubernetes-statefulset-obtain-spec-replicas-metadata-and-reference-elsewhere-i">Kubernetes StatefulSet - obtain spec.replicas metadata and reference elsewhere in configuration</a>):</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: solr
spec:
selector:
matchLabels:
app: solr
serviceName: solr-hs
replicas: &numReplicas 3
updateStrategy:
type: RollingUpdate
# Make sure pods get created sequentially
podManagementPolicy: OrderedReady
template:
metadata:
labels:
app: solr
spec:
containers:
- name: kubernetes-solr
imagePullPolicy: Always
image: "..."
resources:
requests:
memory: "8Gi"
cpu: "0.5"
ports:
- containerPort: 8983
env:
- name: N_O_REPLICAS
value: *numReplicas
</code></pre>
<p>Unfortunately, it seems like the "env" value has to be a string, and the integer value of "replicas" does not get cast or transformed. The following error gets thrown instead:</p>
<blockquote>
<p>v1.EnvVar.v1.EnvVar.Value: ReadString: expects " or n, but found 3,
error found in #10 byte of ...|,"value":3},{"name":|..., bigger
context ...|:"solr-config"}}},{"name":"N_O_REPLICAS","value":3},</p>
</blockquote>
<p>I tried casting to a string manually by writing:</p>
<pre><code>value: !!str *numReplicas
</code></pre>
<p>But this also doesn't work and throws the following error:</p>
<blockquote>
<p>error converting YAML to JSON: yaml: line 52: did not find expected key</p>
</blockquote>
<p>Is there any way to create a Kubernetes YAML file that allows the reusing of integer values as strings? Or is there another solution for this particular situation?</p>
| <p>Although your approach is interesting, <code>!!str</code> is not a casting operator and the <a href="http://yaml.org/spec/1.2/spec.html#id2768011" rel="nofollow noreferrer">YAML specification</a> clearly indicates that what you tried is not going to work:</p>
<blockquote>
<p>When a node has more than one occurrence (using aliases), tag resolution must depend only on the path to the first (anchored) occurrence of the node.</p>
</blockquote>
<p>So within YAML this is not possible, unless the parser/loader is non-conformant.</p>
<p>The best solution for your problem, IMO, would be that kubernetes explicitly casts all parameters that will be environment variables to a string before adding them to the environment. That way you could use booleans, dates, etc. as well. </p>
<p>You can also use any templating system that you like to generate the YAML input for kubernetes as long as such a system allows you to "stringify" your integer parameter.</p>
|
<p>For my AKS container setup, I would like to pass the requested number of replicas of a given statefulset to each pod through environment variables.</p>
<p>I was trying to do this without repeating myself (once in the "replicas" setting and once in the setting of the environment variables).</p>
<p>The only real solution I could find of how to do this was using anchors and aliases as such (based on <a href="https://stackoverflow.com/questions/48632096/kubernetes-statefulset-obtain-spec-replicas-metadata-and-reference-elsewhere-i">Kubernetes StatefulSet - obtain spec.replicas metadata and reference elsewhere in configuration</a>):</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: solr
spec:
selector:
matchLabels:
app: solr
serviceName: solr-hs
replicas: &numReplicas 3
updateStrategy:
type: RollingUpdate
# Make sure pods get created sequentially
podManagementPolicy: OrderedReady
template:
metadata:
labels:
app: solr
spec:
containers:
- name: kubernetes-solr
imagePullPolicy: Always
image: "..."
resources:
requests:
memory: "8Gi"
cpu: "0.5"
ports:
- containerPort: 8983
env:
- name: N_O_REPLICAS
value: *numReplicas
</code></pre>
<p>Unfortunately, it seems like the "env" value has to be a string, and the integer value of "replicas" does not get cast or transformed. The following error gets thrown instead:</p>
<blockquote>
<p>v1.EnvVar.v1.EnvVar.Value: ReadString: expects " or n, but found 3,
error found in #10 byte of ...|,"value":3},{"name":|..., bigger
context ...|:"solr-config"}}},{"name":"N_O_REPLICAS","value":3},</p>
</blockquote>
<p>I tried casting to a string manually by writing:</p>
<pre><code>value: !!str *numReplicas
</code></pre>
<p>But this also doesn't work and throws the following error:</p>
<blockquote>
<p>error converting YAML to JSON: yaml: line 52: did not find expected key</p>
</blockquote>
<p>Is there any way to create a Kubernetes YAML file that allows the reusing of integer values as strings? Or is there another solution for this particular situation?</p>
| <p><a href="https://helm.sh" rel="nofollow noreferrer">Helm</a> is what you need. </p>
<p>Actually, <code>Helm</code> is more than what you need, but it has a template engine (just like Ansible) which can help with your case. Moreover, using <code>Helm</code> with Kubernetes is almost mandatory these days, because it has a huge library of charts that let you deploy software such as the Elastic stack or Redis with (almost) one command. So try it out; it could improve your work with Kubernetes.</p>
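<p>A rough sketch of how this removes the duplication: keep the count in <code>values.yaml</code> (the name <code>replicaCount</code> is just a convention) and reference it in both places, quoting it where a string is required:</p>
<pre><code># values.yaml
replicaCount: 3

# templates/statefulset.yaml (excerpt)
spec:
  replicas: {{ .Values.replicaCount }}
  ...
  template:
    spec:
      containers:
      - name: kubernetes-solr
        env:
        - name: N_O_REPLICAS
          value: {{ .Values.replicaCount | quote }}
</code></pre>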
|
<p>I have failed to use HostPath <code>/var/lib/docker/containers</code> as a volume with the following error:</p>
<pre><code> Error response from daemon: linux mounts: Path /var/lib/docker/containers is
mounted on /var/lib/docker/containers but it is not a shared or slave mount.
</code></pre>
<p>Here is my YAML spec (note: this is just an example for reproducing my problem in doing log collection): </p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test
namespace: logging
labels:
app: test
spec:
selector:
matchLabels:
app : test
template:
metadata:
labels:
app: test
spec:
containers:
- name: nginx
image: nginx:stable-alpine
securityContext:
privileged: true
ports:
- containerPort : 8003
volumeMounts:
- name: docker
mountPath: /var/lib/docker/containers
readOnly: true
volumes:
- name: docker
hostPath:
path: /var/lib/docker/containers
</code></pre>
<p>And my kubernetes version.</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.1",
GitCommit:"d4ab47518836c750f9949b9e0d387f20fb92260b", GitTreeState:"clean",
BuildDate:"2018-04-12T14:26:04Z", GoVersion:"go1.9.3", Compiler:"gc",
Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0",
GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean",
BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc",
Platform:"linux/amd64"}
</code></pre>
<p>Very much appreciating your help!</p>
| <blockquote>
<p>Very much appreciating your help!</p>
</blockquote>
<p>You are most probably hit by a version specific issue:</p>
<pre><code>/var/lib/docker/containers is intentionally mounted by Docker with private mount
propagation and thus conflicts with Kubernetes trying to mount this directory
as rslave when running the container
</code></pre>
<p>You should try with 1.10.3+ where it is resolved. See <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#changelog-since-v1102" rel="nofollow noreferrer">the official changelog</a> for kubernetes and check entry related to "Default mount propagation". Also check related (see the error) <a href="https://github.com/kubernetes/kubernetes/issues/62396" rel="nofollow noreferrer">fluentd issue</a> for more in-depth analysis.</p>
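<p>If upgrading is not immediately possible, one workaround that may help on 1.10.0 - 1.10.2 (a sketch only, not verified against your exact setup) is to request private propagation explicitly for that mount instead of relying on the problematic default:</p>
<pre><code>        volumeMounts:
        - name: docker
          mountPath: /var/lib/docker/containers
          readOnly: true
          # explicitly ask for no propagation, avoiding the rslave default
          # that conflicts with Docker's private mount of this directory
          mountPropagation: None
</code></pre>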
<p>Now, with that said...</p>
<p>David's question and word of caution in the comments still stand, and I second them: an nginx pod digging deep into Docker engine internals is quite an eyebrow raiser (hopefully it is there only for the sake of a minimal reproducible example, or for the log-collection case). Just make sure you know exactly what you are doing and why.</p>
|
<p>I am using the ELK stack (elasticsearch, logsash, kibana) for log processing and analysis in a Kubernetes (minikube) environment. To capture logs I am using filebeat. Logs are propagated successfully from filebeat through to elasticsearch and are viewable in Kibana. </p>
<p>My problem is that I am unable to get the pod name of the actual pod issuing log records. Rather I only get the filebeat podname which is gathering log files and not name of the pod that is originating log records.</p>
<p>The information I can get from filebeat are (as viewed in Kibana)</p>
<ul>
<li>beat.hostname: the value of this field is the filebeat pod name</li>
<li>beat.name: value is the filebeat pod name</li>
<li>host: value is the filebeat pod name</li>
</ul>
<p>I can also see/discern container information in Kibana which flow through from filebeat / logstash / elasticsearch:</p>
<ul>
<li>app: value is {log-container-id}-json.log</li>
<li>source: value is /hostfs/var/lib/docker/containers/{log-container-id}-json.log</li>
</ul>
<p>As shown above, I seem to be able to get the container Id but not the pod name.</p>
<p>To mitigate the situation, I could probably embed the pod-name in the actual log message and parse it from there, but I am hoping there is a solution in which I can configure filebeat to emit actual pod names.</p>
<p>Does anyone know how to configure filebeat (or other components) to capture kubernetes (minikube) pod names in their logs?</p>
<p>My current filebeat configuration is listed below:</p>
<p>ConfigMap is shown below:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat
namespace: logging
labels:
component: filebeat
data:
filebeat.yml: |
filebeat.prospectors:
- input_type: log
tags:
- host
paths:
- "/hostfs/var/log"
- "/hostfs/var/log/*"
- "/hostfs/var/log/*/*"
exclude_files:
- '\.[0-9]$'
- '\.[0-9]\.gz$'
- input_type: log
tags:
- docker
paths:
- /hostfs/var/lib/docker/containers/*/*-json.log
json:
keys_under_root: false
message_key: log
add_error_key: true
multiline:
pattern: '^[[:space:]]+|^Caused by:'
negate: false
match: after
output.logstash:
hosts: ["logstash:5044"]
logging.level: info
</code></pre>
<p>DaemonSet is shown below:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: filebeat
namespace: logging
spec:
template:
metadata:
labels:
component: filebeat
spec:
containers:
- name: filebeat
image: giantswarm/filebeat:5.2.2
imagePullPolicy: IfNotPresent
resources:
limits:
cpu: 100m
requests:
cpu: 100m
volumeMounts:
- name: config
mountPath: /etc/filebeat
readOnly: true
- name: hostfs-var-lib-docker-containers
mountPath: /hostfs/var/lib/docker/containers
readOnly: true
- name: hostfs-var-log
mountPath: /hostfs/var/log
readOnly: true
volumes:
- name: config
configMap:
name: filebeat
- name: hostfs-var-log
hostPath:
path: /var/log
- name: hostfs-var-lib-docker-containers
hostPath:
path: /var/lib/docker/containers
</code></pre>
| <p>For future people coming here, this is now built into a Filebeat processor:</p>
<pre><code>filebeat.prospectors:
- type: log
enabled: true
paths:
- /var/log/*.log
- /var/log/messages
- /var/log/syslog
- type: docker
containers.ids:
- "*"
processors:
- add_kubernetes_metadata:
in_cluster: true
- drop_event:
when:
equals:
kubernetes.container.name: "filebeat"
</code></pre>
<p>helm chart default values : <a href="https://github.com/helm/charts/blob/master/stable/filebeat/values.yaml" rel="nofollow noreferrer">https://github.com/helm/charts/blob/master/stable/filebeat/values.yaml</a></p>
<p>doc : <a href="https://www.elastic.co/guide/en/beats/filebeat/current/add-kubernetes-metadata.html" rel="nofollow noreferrer">https://www.elastic.co/guide/en/beats/filebeat/current/add-kubernetes-metadata.html</a></p>
|
<p>I have a grpc micro-service A running in kubernetes cluster. When this service is called, I would like to create an instance of another grpc micro-service B and run it in the same cluster. I have the image of the second service included in the container A. Now how can I create and run the micro-service B as another pod in the cluster. </p>
<p>Thanks
Madhu</p>
| <p>You do not need the image inside; what you need is a Kubernetes client that you will use to create the deployment/job/pod. Your pod needs a serviceaccount bound to an RBAC role/clusterrole that allows creating what you need (see the sketch below). This way you can build a service that, on demand, talks to the Kubernetes API and creates what you want.</p>
<p>All in all, it's a very similar concept to how operators work, so looking at <a href="https://github.com/operator-framework" rel="nofollow noreferrer">https://github.com/operator-framework</a> might provide some useful insights, but even just launching kubectl inside the pod might be good enough for your needs.</p>
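<p>For illustration, a minimal RBAC setup for service A could look like this; every name below is a placeholder, and the resources/verbs should be narrowed to whatever service B actually needs:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: service-a
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-creator
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "get", "list", "delete"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["create", "get", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: service-a-pod-creator
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-creator
subjects:
- kind: ServiceAccount
  name: service-a
  namespace: default
---
# Run service A with that service account so its Kubernetes client
# (or kubectl) is authorized to create the pods for service B.
apiVersion: v1
kind: Pod
metadata:
  name: service-a
  namespace: default
spec:
  serviceAccountName: service-a
  containers:
  - name: service-a
    image: my-service-a:latest   # placeholder image
</code></pre>
<p>Inside that pod, the automatically mounted token under <code>/var/run/secrets/kubernetes.io/serviceaccount</code> is picked up by <code>kubectl</code> and by the in-cluster configuration of the official client libraries, so service A can create service B's pod or deployment on demand.</p>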
|
<p>I am a beginner to Kubernetes and starting off with <a href="https://kubernetes.io/docs/tutorials/hello-minikube/#create-your-node-js-application" rel="noreferrer">this</a> tutorial. I installed <a href="https://www.virtualbox.org/wiki/Downloads" rel="noreferrer">VirtualBox</a> and expected to be able to start a cluster by using the command: </p>
<pre><code>minikube start
</code></pre>
<p>But I get the error: </p>
<pre><code>Starting local Kubernetes v1.10.0 cluster...
Starting VM...
E0911 13:34:45.394430 41676 start.go:174] Error starting host: Error
creating host: Error executing step: Creating VM.
: Error setting up host only network on machine start: The host-only
adapter we just created is not visible. This is a well known
VirtualBox bug. You might want to uninstall it and reinstall at least
version 5.0.12 that is is supposed to fix this issue.
</code></pre>
<p>It says that it is a well known bug in Virtualbox but I installed its latest version. Any ideas?</p>
| <p>Figured out the issue. VirtualBox was not installed correctly as Mac had blocked it. It wasn't obvious at first. </p>
<ul>
<li><p>Restarting won't work if VirtualBox isn't installed correctly. </p></li>
<li><p>System Preferences -> Security & Privacy -> Allow -> Then allow the software from its publisher (in this case Oracle)</p></li>
<li><p>Restart </p></li>
</ul>
<p>Now it worked as expected.</p>
|
<p>I'm trying to configure an Elastic IP with Amazon's Elastic Kubernetes Service so I can expose a static public IP address. So far it seems the only way to expose a static public IP address is through a load balancer, which is kind of a waste since I have a static private IP endpoint for the service but no way to expose it publicly. And I only need one instance of the service running, since HA isn't a requirement here. I have tried everything I can think of, even manually configuring an Elastic IP, but if that is the solution the steps are rather convoluted, and it seems odd that you would have to do such a thing. </p>
| <p>Short answer: AFAIK it's not possible through K8s. If you don't want to waste EIPs, use an Ingress Controller (something like Traefik or nginx); that way your ingress uses a single IP as a service and you can expose the other services through it.</p>
<p><a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">nginx</a></p>
<p><a href="https://docs.traefik.io/configuration/backends/kubernetes/" rel="nofollow noreferrer">traefik</a></p>
<p>Also, you can track or open issues in <a href="https://github.com/kubernetes-incubator/kube-aws/blob/master/proposals/plugins.md" rel="nofollow noreferrer">kube-aws</a> or in k8s itself in the <a href="https://github.com/kubernetes/kubernetes/tree/master/pkg/cloudprovider/providers" rel="nofollow noreferrer">AWS provider part of the code</a> </p>
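<p>Coming back to the ingress suggestion, here is a rough sketch of the fan-out idea (the host, paths and service names are made up); a single ingress controller, and therefore a single external IP, can front several services:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: fanout-ingress
  annotations:
    kubernetes.io/ingress.class: nginx   # or "traefik"
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /api
        backend:
          serviceName: api-svc     # existing ClusterIP service
          servicePort: 80
      - path: /web
        backend:
          serviceName: web-svc
          servicePort: 80
</code></pre>
<p>Only the ingress controller's own Service (of type LoadBalancer or NodePort) needs a public address; the backing services stay on ClusterIPs.</p>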
|
<p>I have a 4 node Kubernetes cluster, 1 x controller and 3 x workers. The following shows how they are configured with the versions.</p>
<p><code>
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-ctrl-1 Ready master 1h v1.11.2 192.168.191.100 <none> Ubuntu 18.04.1 LTS 4.15.0-1021-aws docker://18.6.1
turtle-host-01 Ready <none> 1h v1.11.2 192.168.191.53 <none> Ubuntu 18.04.1 LTS 4.15.0-29-generic docker://18.6.1
turtle-host-02 Ready <none> 1h v1.11.2 192.168.191.2 <none> Ubuntu 18.04.1 LTS 4.15.0-34-generic docker://18.6.1
turtle-host-03 Ready <none> 1h v1.11.2 192.168.191.3 <none> Ubuntu 18.04.1 LTS 4.15.0-33-generic docker://18.6.1
</code></p>
<p>Each of the nodes has two network interfaces, for argument's sake <code>eth0</code> and <code>eth1</code>. <code>eth1</code> is the network that I want the cluster to work on. I set up the controller using <code>kubeadm init</code> and passed <code>--api-advertise-address 192.168.191.100</code>. The worker nodes were then joined using this address.</p>
<p>Finally on each node I modified the kubelet service to have <code>--node-ip</code> set so that the layout looks as above.</p>
<p>The cluster appears to be working correctly and I can create pods, deployments etc. However the issue I have is that none of the pods are able to use the <code>kube-dns</code> service for DNS resolution.</p>
<p>This is not a problem with resolution, rather that the machines cannot connect to the DNS service to perform the resolution. For example if I run a <code>busybox</code> container and access it to perform <code>nslookup</code> i get the following:</p>
<p><code>
/ # nslookup www.google.co.uk
nslookup: read: Connection refused
nslookup: write to '10.96.0.10': Connection refused
</code></p>
<p>I have a feeling that this is down to not using the default network and because of that I suspect some Iptables rules are not correct, that being said these are just guesses.</p>
<p>I have tried both the Flannel overlay and now Weave net. The pod CIDR range is <code>10.32.0.0/16</code> and the service CIDR is as default.</p>
<p>I have noticed that with Kubernetes 1.11 there are now pods called <code>coredns</code> rather than one <code>kube-dns</code>.</p>
<p>I hope that this is a good place to ask this question. I am sure I am missing something small but vital so if anyone has any ideas that would be most welcome.</p>
<p><strong>Update #1:</strong></p>
<p>I should have said that the nodes are not all in the same place. I have a VPN running between them all and this is the network I want things to communicate over. It is an idea I had to try and have distributed nodes.</p>
<p><strong>Update #2:</strong></p>
<p>I saw another answer on SO (<a href="https://stackoverflow.com/questions/30992961/dns-in-kubernetes-not-working?answertab=active#tab-top">DNS in Kubernetes not working</a>) that suggested <code>kubelet</code> needed to have <code>--cluster-dns</code> and <code>--cluster-domain</code> set. This is indeed the case on my DEV K8s cluster that I have running at home (on one network).</p>
<p>However it is not the case on this cluster and I suspect this is down to a later version. I did add the two settings to all nodes in the cluster, but it did not make things work.</p>
<p><strong>Update #3</strong></p>
<p>The topology of the cluster is as follows.</p>
<ul>
<li>1 x Controller is in AWS</li>
<li>1 x Worker is in Azure</li>
<li>2 x Worker are physical machines in a colo Data Centre</li>
</ul>
<p>All machines are connected to each other using ZeroTier VPN on the 192.168.191.0/24 network.</p>
<p>I have <em>not</em> configured any special routing. I agree that this is probably where the issue is, but I am not 100% sure what this routing should be.</p>
<p>WRT <code>kube-dns</code> and <code>nginx</code>, I have not tainted my controller, so <code>nginx</code> is not on the master, nor is <code>busybox</code>. <code>nginx</code> and <code>busybox</code> are on workers 1 and 2 respectively.</p>
<p>I have used <code>netcat</code> to test connection to <code>kube-dns</code> and I get the following:</p>
<p><code>
/ # nc -vv 10.96.0.10 53
nc: 10.96.0.10 (10.96.0.10:53): Connection refused
sent 0, rcvd 0
/ # nc -uvv 10.96.0.10 53
10.96.0.10 (10.96.0.10:53) open
</code></p>
<p>The UDP connection does not complete.</p>
<p>I modified my setup so that I could run containers on the controller, so <code>kube-dns</code>, <code>nginx</code> and <code>busybox</code> are all on the controller, and I am able to connect and resolve DNS queries against 10.96.0.10.</p>
<p>So all this does point to routing or IPTables IMHO, I just need to work out what that should be.</p>
<p><strong>Update #4</strong></p>
<p>In response to comments I can confirm the following ping test results.</p>
<pre><code>Master -> Azure Worker (Internet) : SUCCESS : Traceroute SUCCESS
Master -> Azure Worker (VPN) : SUCCESS : Traceroute SUCCESS
Azure Worker -> Master (Internet) : SUCCESS : Traceroute FAIL (too many hops)
Azure Worker -> Master (VPN) : SUCCESS : Traceroute SUCCESS
Master -> Colo Worker 1 (Internet) : SUCCESS : Traceroute SUCCESS
Master -> Colo Worker 1 (VPN) : SUCCESS : Traceroute SUCCESS
Colo Worker 1 -> Master (Internet) : SUCCESS : Traceroute FAIL (too many hops)
Colo Worker 1 -> Master (VPN) : SUCCESS : Traceroute SUCCESS
</code></pre>
<p><strong>Update 5</strong></p>
<p>After running the tests above, it got me thinking about routing and I wondered if it was as simple as providing a route to the controller over the VPN for the service CIDR range (<code>10.96.0.0/12</code>).</p>
<p>So on a host, not included in the cluster, I added a route thus:</p>
<pre><code>route add -net 10.96.0.0/12 gw 192.168.191.100
</code></pre>
<p>And I could then resolve DNS using the <code>kube-dns</code> server address:</p>
<pre><code>nslookup www.google.co.uk 10.96.0.10
</code></pre>
<p>So I then added a route, as above, to one of the worker nodes and tried the same. But it is blocked and I do not get a response.
Given that I can resolve DNS over the VPN with the appropriate route from a non-kubernetes machine, I can only think that there is an IPTables rule that needs updating or adding.</p>
<p>I think this is almost there, just one last bit to fix.</p>
<p><em>I realise this is wrong, as it is the <code>kube-proxy</code> that should do the DNS resolution on each host. I am leaving it here for information.</em></p>
| <p>Sounds like you are running on AWS. I suspect that your AWS security group is not allowing DNS traffic to go through. You can try allowing all traffic to the Security Group(s) where all your master and nodes are, to see if that's the problem.</p>
<p><a href="https://i.stack.imgur.com/ta227.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ta227.png" alt="sg"></a></p>
<p>You can also check that all your masters and nodes are allowing routing:</p>
<pre><code>cat /proc/sys/net/ipv4/ip_forward
</code></pre>
<p>If not </p>
<pre><code>echo 1 > /proc/sys/net/ipv4/ip_forward
</code></pre>
<p>Hope it helps.</p>
|
<p>I have a 3-node bare-metal cluster installed with KUBEADM (built with Intel NUCs running CentOS 7).</p>
<p>The master node is provided with 2 different network interfaces, one for external access and the second is configured to be the DHCP server of the cluster local network.
IP forwarding and masquerading between the two networks is enabled through the following iptables rules: </p>
<p><code>iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -i {{lan_interface_name}} -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -t nat -A POSTROUTING -o {{wan_interface_name}} -j MASQUERADE
iptables -A FORWARD -i {{wan_interface_name}} -o {{lan_interface_name}} -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -I FORWARD -i {{lan_interface_name}} -o {{wan_interface_name}} -j ACCEPT
</code></p>
<p>In addition I also enabled forwarding between docker interface and lan interface for every node in the cluster</p>
<p><code>iptables -I INPUT -i docker0 -j ACCEPT
iptables -t nat -A POSTROUTING -o {{lan_interface_name}} -j MASQUERADE
iptables -I FORWARD -i {{lan_interface_name}} -o docker0 -m state --state RELATED,ESTABLISHED -j ACCEPT
iptables -I FORWARD -i docker0 -o {{lan_interface_name}} -j ACCEPT
</code></p>
<p>In order to be sure to have no iptables problems I enabled all the traffic by default</p>
<p><code>iptables -P OUTPUT ACCEPT
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
</code></p>
<p>While this is the /etc/sysctl.conf file</p>
<p><code>net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-iptables=1
</code></p>
<p>I followed the kubeadm instructions to launch a cluster with CALICO (<a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/</a>) and everything seems to work properly (even kube-dns is reachable from every pod on every node), except for the exposed services.</p>
<p>Just for testing I created a nginx deployment and exposed it through nodeport:
<code>
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 30m
nginx NodePort 10.96.227.49 <none> 80:31242/TCP 22m
</code></p>
<p>If I try to curl the cluster IP from the same node as the pod, everything is fine, while if I try to curl it from another node, the curl request times out. </p>
<p>Any help would be really appreciated. </p>
| <p>Kubernetes and Calico do convoluted things with <code>iptables</code>, so I believe you have a conflict or a blocking rule somewhere. I recommend disabling all the custom rules to start with, then installing Docker and then your k8s with <code>kubeadm</code>.</p>
|
<p>I have a 4 node Kubernetes cluster, 1 x controller and 3 x workers. The following shows how they are configured with the versions.</p>
<p><code>
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-ctrl-1 Ready master 1h v1.11.2 192.168.191.100 <none> Ubuntu 18.04.1 LTS 4.15.0-1021-aws docker://18.6.1
turtle-host-01 Ready <none> 1h v1.11.2 192.168.191.53 <none> Ubuntu 18.04.1 LTS 4.15.0-29-generic docker://18.6.1
turtle-host-02 Ready <none> 1h v1.11.2 192.168.191.2 <none> Ubuntu 18.04.1 LTS 4.15.0-34-generic docker://18.6.1
turtle-host-03 Ready <none> 1h v1.11.2 192.168.191.3 <none> Ubuntu 18.04.1 LTS 4.15.0-33-generic docker://18.6.1
</code></p>
<p>Each of the nodes has two network interfaces, for argument's sake <code>eth0</code> and <code>eth1</code>. <code>eth1</code> is the network that I want the cluster to work on. I set up the controller using <code>kubeadm init</code> and passed <code>--api-advertise-address 192.168.191.100</code>. The worker nodes were then joined using this address.</p>
<p>Finally on each node I modified the kubelet service to have <code>--node-ip</code> set so that the layout looks as above.</p>
<p>The cluster appears to be working correctly and I can create pods, deployments etc. However the issue I have is that none of the pods are able to use the <code>kube-dns</code> service for DNS resolution.</p>
<p>This is not a problem with resolution, rather that the machines cannot connect to the DNS service to perform the resolution. For example if I run a <code>busybox</code> container and access it to perform <code>nslookup</code> i get the following:</p>
<p><code>
/ # nslookup www.google.co.uk
nslookup: read: Connection refused
nslookup: write to '10.96.0.10': Connection refused
</code></p>
<p>I have a feeling that this is down to not using the default network and because of that I suspect some Iptables rules are not correct, that being said these are just guesses.</p>
<p>I have tried both the Flannel overlay and now Weave net. The pod CIDR range is <code>10.32.0.0/16</code> and the service CIDR is as default.</p>
<p>I have noticed that with Kubernetes 1.11 there are now pods called <code>coredns</code> rather than one <code>kube-dns</code>.</p>
<p>I hope that this is a good place to ask this question. I am sure I am missing something small but vital so if anyone has any ideas that would be most welcome.</p>
<p><strong>Update #1:</strong></p>
<p>I should have said that the nodes are not all in the same place. I have a VPN running between them all and this is the network I want things to communicate over. It is an idea I had to try and have distributed nodes.</p>
<p><strong>Update #2:</strong></p>
<p>I saw another answer on SO (<a href="https://stackoverflow.com/questions/30992961/dns-in-kubernetes-not-working?answertab=active#tab-top">DNS in Kubernetes not working</a>) that suggested <code>kubelet</code> needed to have <code>--cluster-dns</code> and <code>--cluster-domain</code> set. This is indeed the case on my DEV K8s cluster that I have running at home (on one network).</p>
<p>However it is not the case on this cluster and I suspect this is down to a later version. I did add the two settings to all nodes in the cluster, but it did not make things work.</p>
<p><strong>Update #3</strong></p>
<p>The topology of the cluster is as follows.</p>
<ul>
<li>1 x Controller is in AWS</li>
<li>1 x Worker is in Azure</li>
<li>2 x Worker are physical machines in a colo Data Centre</li>
</ul>
<p>All machines are connected to each other using ZeroTier VPN on the 192.168.191.0/24 network.</p>
<p>I have <em>not</em> configured any special routing. I agree that this is probably where the issue is, but I am not 100% sure what this routing should be.</p>
<p>WRT <code>kube-dns</code> and <code>nginx</code>, I have not tainted my controller, so <code>nginx</code> is not on the master, nor is <code>busybox</code>. <code>nginx</code> and <code>busybox</code> are on workers 1 and 2 respectively.</p>
<p>I have used <code>netcat</code> to test connection to <code>kube-dns</code> and I get the following:</p>
<p><code>
/ # nc -vv 10.96.0.10 53
nc: 10.96.0.10 (10.96.0.10:53): Connection refused
sent 0, rcvd 0
/ # nc -uvv 10.96.0.10 53
10.96.0.10 (10.96.0.10:53) open
</code></p>
<p>The UDP connection does not complete.</p>
<p>I modified my setup so that I could run containers on the controller, so <code>kube-dns</code>, <code>nginx</code> and <code>busybox</code> are all on the controller, and I am able to connect and resolve DNS queries against 10.96.0.10.</p>
<p>So all this does point to routing or IPTables IMHO, I just need to work out what that should be.</p>
<p><strong>Update #4</strong></p>
<p>In response to comments I can confirm the following ping test results.</p>
<pre><code>Master -> Azure Worker (Internet) : SUCCESS : Traceroute SUCCESS
Master -> Azure Worker (VPN) : SUCCESS : Traceroute SUCCESS
Azure Worker -> Master (Internet) : SUCCESS : Traceroute FAIL (too many hops)
Azure Worker -> Master (VPN) : SUCCESS : Traceroute SUCCESS
Master -> Colo Worker 1 (Internet) : SUCCESS : Traceroute SUCCESS
Master -> Colo Worker 1 (VPN) : SUCCESS : Traceroute SUCCESS
Colo Worker 1 -> Master (Internet) : SUCCESS : Traceroute FAIL (too many hops)
Colo Worker 1 -> Master (VPN) : SUCCESS : Traceroute SUCCESS
</code></pre>
<p><strong>Update 5</strong></p>
<p>After running the tests above, it got me thinking about routing and I wondered if it was as simple as providing a route to the controller over the VPN for the service CIDR range (<code>10.96.0.0/12</code>).</p>
<p>So on a host, not included in the cluster, I added a route thus:</p>
<pre><code>route add -net 10.96.0.0/12 gw 192.168.191.100
</code></pre>
<p>And I could then resolve DNS using the <code>kube-dns</code> server address:</p>
<pre><code>nslookup www.google.co.uk 10.96.0.10
</code></pre>
<p>So I then added a route, as above, to one of the worker nodes and tried the same. But it is blocked and I do not get a response.
Given that I can resolve DNS over the VPN with the appropriate route from a non-kubernetes machine, I can only think that there is an IPTables rule that needs updating or adding.</p>
<p>I think this is almost there, just one last bit to fix.</p>
<p><em>I realise this is wrong, as it is the <code>kube-proxy</code> that should do the DNS resolution on each host. I am leaving it here for information.</em></p>
| <p>Following the instructions on this <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">page</a>, try running this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
namespace: default
name: dns-example
spec:
containers:
- name: test
image: nginx
dnsPolicy: "None"
dnsConfig:
nameservers:
- 1.2.3.4
searches:
- ns1.svc.cluster.local
- my.dns.search.suffix
options:
- name: ndots
value: "2"
- name: edns0
</code></pre>
<p>and see whether a manual DNS configuration works, or whether you have a networking/DNS problem.</p>
|
<p>I am new to AWS EKS - I have an application for which I need one worker node (a pod) of the Kubernetes cluster to run on my on-premises infrastructure. Is that possible, and if yes, how can I achieve it?</p>
| <p>In theory you can run EKS and on-prem Kubernetes clusters at the same time, managing them via a single <a href="https://kubernetes.io/docs/concepts/cluster-administration/federation/" rel="nofollow noreferrer">federation</a> control plane. I've never tried it with EKS, but since EKS is mostly vanilla Kubernetes it should work.</p>
|
<p>While the <a href="https://github.com/kubernetes/client-go/blob/master/examples/out-of-cluster-client-configuration/main.go" rel="nofollow noreferrer">kubernetes golang api example for out-of-cluster authentication works fine</a>, and <a href="https://gist.github.com/innovia/fbba8259042f71db98ea8d4ad19bd708" rel="nofollow noreferrer">creating a service account and exporting the bearer token works great</a>, it feels silly to write the pieces to a temporary file only to tell the API to read it. Is there an API way to pass these pieces as an object rather than write to a file?</p>
<pre><code> clusterData := map[string]string{
"BEARER_TOKEN": bearerToken,
"CA_DATA": clusterCA,
"ENDPOINT": clusterUrl,
}
const kubeConfigTmpl = `
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: {{.CA_DATA}}
server: {{.HOST_IP_ADDRESS}}
name: kubernetes
contexts:
- context:
cluster: kubernetes
namespace: default
user: lamdba-serviceaccount-default-kubernetes
name: lamdba-serviceaccount-default-kubernetes
current-context: lamdba-serviceaccount-default-kubernetes
kind: Config
preferences: {}
users:
- name: lamdba-serviceaccount-default-kubernetes
user:
token: {{.BEARER_TOKEN}}
`
t := template.Must(template.New("registration").Parse(kubeConfigTmpl))
buf := &bytes.Buffer{}
if err := t.Execute(buf, clusterData); err != nil {
panic(err)
}
registrationPayload := buf.String()
d1 := []byte(registrationPayload)
err := ioutil.WriteFile("/tmp/config", d1, 0644)
</code></pre>
| <p>The <code>rest.Config</code> struct passed to the <code>NewForConfig</code> client constructors lets you specify the bearer token, the API server address and the CA data directly (via its <code>BearerToken</code>, <code>Host</code> and <code>TLSClientConfig.CAData</code> fields), so no temporary file is needed. </p>
|
<p>I want to set up a Traefik backend health check via a Kubernetes annotation, but it looks like <a href="https://docs.traefik.io/v1.7/configuration/backends/kubernetes/" rel="nofollow noreferrer">Kubernetes Ingress</a> does not support that functionality according to the official documentation.</p>
<p>Is there any particular reason why Traefik does not support that functionality for Kubernetes Ingress? I'm wondering because <a href="https://docs.traefik.io/v1.7/configuration/backends/mesos/" rel="nofollow noreferrer">Mesos</a> supports health checks for a backend.</p>
<p>I know that in Kubernetes you can configure readiness/liveness probes for the pods, but I have a leader/follower style service, so Traefik should route traffic only to the leader.</p>
<p><strong>UPD:</strong> </p>
<ul>
<li>Only the leader can accept connections from Traefik; a follower will refuse the connection.</li>
<li>I have two readiness checks in my mind:
<ul>
<li>Service is up and running, and ready to be elected as a leader (kubernetes readiness probe)</li>
<li>Service is up and running and promoted as a leader (traefik health check)</li>
</ul></li>
</ul>
| <p>Traefik relies on Kubernetes to provide an indication of the health of the underlying pods to ascertain whether they are ready to provide service. Kubernetes exposes two mechanisms in a pod to communicate information to the orchestration layer:</p>
<ul>
<li><strong>Liveness checks</strong> to provide an indication to Kubernetes when the process(es) running in the pod have transitioned to a broken state. A failing liveness check will cause Kubernetes to destroy the pod and recreate it.</li>
<li><strong>Readiness checks</strong> to determine when a pod is ready to provide service. A failing readiness check will cause the Endpoint Controller to remove the pod from the list of endpoints of any services it provides. However, it will remain running.</li>
</ul>
<p>In this instance, you would expose information to Traefik via a readiness check. Configure your pods with a readiness check which fails if they are in a state in which they should not receive any traffic. When the readiness state changes, Kubernetes will update the list of endpoints against any services which route traffic to the pod to add or remove the pod. Traefik will accordingly update its view of the world to add or remove the pod from the list of endpoints backing the Ingress.</p>
<p>There is no reason for this model to be incompatible with your master/follower architecture, provided each pod can ascertain whether it is the master or follower and provide an appropriate indication in its readiness check. However, without taking special care, there <strong>will</strong> be races between the master/follower state changing and Kubernetes updating its endpoints, as readiness probes are only made periodically. I recommend assuming this will be the case and building-in logic to reject requests received by non-master pods.</p>
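<p>As a hedged sketch of that idea (the probe command and file path are placeholders; your application must expose its leadership status in some way), a readiness probe like the one below keeps follower pods out of the Service endpoints without restarting them, while a separate liveness probe still guards against genuinely broken processes:</p>
<pre><code>    readinessProbe:
      exec:
        # hypothetical: succeeds only on the pod that currently holds leadership
        command: ["sh", "-c", "test -f /var/run/myapp/is-leader"]
      periodSeconds: 5
      failureThreshold: 1
    livenessProbe:
      httpGet:
        path: /healthz        # hypothetical liveness endpoint
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 10
</code></pre>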
<hr>
<p>As a future consideration to increase robustness, you might split the ingress layer of your service from the business logic implementing the master/follower system, allowing all instances to communicate with Traefik and enqueue work for consideration by whatever is the "master" node at this point.</p>
|
<p>I've installed metrics-server on kubernetes v1.11.2.</p>
<p>I'm running a bare-metal cluster using 3 nodes and 1 master</p>
<p>In the metrics-server log I have the following errors:</p>
<pre><code>E0907 14:29:51.774592 1 manager.go:102] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:vps01: unable to
fetch metrics from Kubelet vps01 (vps01): Get https://vps01:10250/stats/summary/: dial tcp: lookup vps01 on 10.96.0.10:53: no such host, unable to fully scr
ape metrics from source kubelet_summary:vps04: unable to fetch metrics from Kubelet vps04 (vps04): Get https://vps04:10250/stats/summary/: dial tcp: lookup
vps04 on 10.96.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:vps03: unable to fetch metrics from Kubelet vps03 (vps03):
Get https://vps03:10250/stats/summary/: dial tcp: lookup vps03 on 10.96.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:vp
s02: unable to fetch metrics from Kubelet vps02 (vps02): Get https://vps02:10250/stats/summary/: dial tcp: lookup vps02 on 10.96.0.10:53: no such host]
E0907 14:30:01.694794 1 reststorage.go:98] unable to fetch pod metrics for pod boxweb/boxweb-deployment-7756c49688-fz625: no metrics known for pod "bo
xweb/boxweb-deployment-7756c49688-fz625"
E0907 14:30:10.517886 1 reststorage.go:112] unable to fetch node metrics for node "vps01": no metrics known for node "vps01"
</code></pre>
<p>I also can't get any metrics using <code>kubectl top node vps01</code>.</p>
<p>The same goes for autoscaling; it is not working:</p>
<pre><code> unable to get metrics for resource cpu: unable to fetch metrics from
resource metrics API: the server could not find the requested resource (get pods.metrics.k8s.io)
</code></pre>
| <p>I found the following solution:</p>
<p>Change the <code>metrics-server-deployment.yaml</code> file and add the following flags, so that metrics-server contacts each kubelet via the node's InternalIP (the bare node names such as <code>vps01</code> are not resolvable from inside the pod) and skips kubelet certificate verification:</p>
<pre><code>command:
- /metrics-server
- --kubelet-preferred-address-types=InternalIP
- --kubelet-insecure-tls
</code></pre>
|
<p>While the <a href="https://github.com/kubernetes/client-go/blob/master/examples/out-of-cluster-client-configuration/main.go" rel="nofollow noreferrer">kubernetes golang api example for out-of-cluster authentication works fine</a>, and <a href="https://gist.github.com/innovia/fbba8259042f71db98ea8d4ad19bd708" rel="nofollow noreferrer">creating a service account and exporting the bearer token works great</a>, it feels silly to write the pieces to a temporary file only to tell the API to read it. Is there an API way to pass these pieces as an object rather than write to a file?</p>
<pre><code> clusterData := map[string]string{
"BEARER_TOKEN": bearerToken,
"CA_DATA": clusterCA,
"ENDPOINT": clusterUrl,
}
const kubeConfigTmpl = `
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: {{.CA_DATA}}
server: {{.HOST_IP_ADDRESS}}
name: kubernetes
contexts:
- context:
cluster: kubernetes
namespace: default
user: lamdba-serviceaccount-default-kubernetes
name: lamdba-serviceaccount-default-kubernetes
current-context: lamdba-serviceaccount-default-kubernetes
kind: Config
preferences: {}
users:
- name: lamdba-serviceaccount-default-kubernetes
user:
token: {{.BEARER_TOKEN}}
`
t := template.Must(template.New("registration").Parse(kubeConfigTmpl))
buf := &bytes.Buffer{}
if err := t.Execute(buf, clusterData); err != nil {
panic(err)
}
registrationPayload := buf.String()
d1 := []byte(registrationPayload)
err := ioutil.WriteFile("/tmp/config", d1, 0644)
</code></pre>
| <p>Looking at the source code, this should work:</p>
<pre><code>// error handling omitted for brevity
cc, _ := clientcmd.NewClientConfigFromBytes([]byte(d1))
config, _ := cc.ClientConfig()
clientset, _ := kubernetes.NewForConfig(config)
</code></pre>
|
<p>We have an <a href="https://cloud.google.com/load-balancing/docs/https/" rel="noreferrer">HTTP(s) Load Balancer</a> created by a kubernetes ingress, which points to a backend formed by a set of pods running nginx and Ruby on Rails.</p>
<p>Taking a look at the load balancer logs, we have detected an increasing number of requests with a response code of <code>0</code> and <a href="https://cloud.google.com/load-balancing/docs/https/https-logging-monitoring#what_is_logged" rel="noreferrer"><code>statusDetails</code></a> = <code>client_disconnected_before_any_response</code>.</p>
<p>We're trying to understand why this is happening, but we haven't found anything relevant. There is nothing in the nginx access or error logs.</p>
<p>This is happening for multiple kinds of requests, from GET to POST.</p>
<p>We also suspect that sometimes, despite the request being logged with that error, the request is actually passed to the backend. For instance we're seeing PG::UniqueViolation errors, due to identical sign up requests being sent twice to the backend in our sign up endpoint.</p>
<p>Any kind of help would be appreciated. Thanks!</p>
<hr />
<h2> UPDATE 1</h2>
<p>As requested <a href="https://gist.github.com/javiercr/ae9bdbea6bbcf29e0d39e24fdb483533" rel="noreferrer">here is the yaml</a> file for the ingress resource:</p>
<hr />
<h3> UPDATE 2</h3>
<p>I've created a log-based Stackdriver metric, to count the number of requests that present this behavior. Here is the chart:</p>
<p><a href="https://i.stack.imgur.com/a5cD1.png" rel="noreferrer"><img src="https://i.stack.imgur.com/a5cD1.png" alt="chart" /></a></p>
<p>The big peaks approximately match the timestamp for these kubernetes events:</p>
<p><a href="https://i.stack.imgur.com/A8FQj.png" rel="noreferrer"><img src="https://i.stack.imgur.com/A8FQj.png" alt="events" /></a></p>
<p>Full error: <code>Readiness probe failed: Get http://10.48.1.28:80/health_check: net/http: request canceled (Client.Timeout exceeded while awaiting headers)"</code></p>
<p>So it seems sometimes the readiness probe for the pods behind the backend fails, but not always.</p>
<p>Here is the definition of the readinessProbe</p>
<pre><code>readinessProbe:
failureThreshold: 3
httpGet:
httpHeaders:
- name: X-Forwarded-Proto
value: https
- name: Host
value: [redacted]
path: /health_check
port: 80
scheme: HTTP
initialDelaySeconds: 1
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 5
</code></pre>
| <p>A response code of 0 and statusDetails = client_disconnected_before_any_response means the client closed the connection before the Load Balancer was able to provide a response, as per this <a href="https://cloud.google.com/load-balancing/docs/https/https-logging-monitoring#statusdetail_http_failure_messages" rel="nofollow noreferrer">GCP documentation</a>.</p>
<p>Investigating why it did not respond in time, one of the reasons could be the difference between the <a href="https://cloud.google.com/load-balancing/docs/https/#timeouts_and_retries" rel="nofollow noreferrer">keepalive timeouts</a> of nginx and the GCP Load Balancer, even though that would most likely produce a backend_connection_closed_before_data_sent_to_client error caused by a <a href="https://blog.percy.io/tuning-nginx-behind-google-cloud-platform-http-s-load-balancer-305982ddb340" rel="nofollow noreferrer">502 Bad Gateway race condition</a>.</p>
<p>To make sure the backend responds to the request, and to see how long it takes, you can repeat this process a few times (since you still get some valid responses):</p>
<p>curl response time</p>
<p>$ curl -w "@curl.txt" -o /dev/null -s IP_HERE</p>
<p>curl.txt content (create and save this file first):</p>
<pre><code> time_namelookup: %{time_namelookup}\n
time_connect: %{time_connect}\n
time_appconnect: %{time_appconnect}\n
time_pretransfer: %{time_pretransfer}\n
time_redirect: %{time_redirect}\n
time_starttransfer: %{time_starttransfer}\n
----------\n
time_total: %{time_total}\n
</code></pre>
<p>If this is the case, please review the sign-up endpoint code for any kind of retry or loop that could explain the duplicated requests behind the PG::UniqueViolation errors that you mentioned.</p>
|
<p>I'm trying to configure an Elastic IP with Amazon's Elastic Kubernetes Service so I can expose a static public IP address. So far it seems the only way to expose a static public IP address is through a load balancer, which is kind of a waste since I have a static private IP endpoint for the service but no way to expose it publicly. And I only need one instance of the service running, since HA isn't a requirement here. I have tried everything I can think of, even manually configuring an Elastic IP, but if that is the solution the steps are rather convoluted, and it seems odd that you would have to do such a thing. </p>
| <p>The only possible way with EKS is to use a Load Balancer. In our case, we needed a fixed CNAME to use in Route53. We ended up using a Load Balancer that points to our web server, which was set up as a Deployment.
Like you, I thought that using a Load Balancer was a waste because we only have one deployment, but it ended up being quite helpful, especially because we configured the deployment with a readinessProbe; that way the balancer switches traffic to the Pod only when it's ready.
The Load Balancer CNAME is then used as a RecordSet in Route53.</p>
|
<p>How do I enable a port on Google Kubernetes Engine to accept websocket connections? Is there a way of doing so other than using an ingress controller? </p>
| <p>As per <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/service" rel="nofollow noreferrer">this article</a> in the GCP documentation, there are 4 ways that you may expose a Service to external applications. </p>
<p>It can be exposed with a ClusterIP, a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort</a>, a <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer" rel="nofollow noreferrer">(TCP/UDP) Load Balancer</a>, or an External Name. </p>
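<p>For plain WebSocket traffic without an ingress controller, a Service of type LoadBalancer is usually the simplest of those options, since the TCP network load balancer it provisions on GKE passes the WebSocket upgrade through untouched. A minimal sketch with made-up names and ports:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: ws-service
spec:
  type: LoadBalancer
  selector:
    app: ws-app          # must match your pods' labels
  ports:
  - name: websocket
    protocol: TCP
    port: 80             # port exposed on the external load balancer IP
    targetPort: 8080     # port your WebSocket server listens on in the pod
</code></pre>
<p>If you route the traffic through the HTTP(S) load balancer via an Ingress instead, remember that its backend service timeout (30 seconds by default) will cut long-lived WebSocket connections unless you raise it.</p>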
|
<p>I installed a filebeat -> logstash -> elasticsearch -> kibana stack in Kubernetes with helm charts :</p>
<pre><code>helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
helm install --name elastic --namespace monitoring incubator/elasticsearch --set client.replicas=1,master.replicas=2,data.replicas=1
helm install --name logstash --namespace monitoring incubator/logstash -f logstash_values.yaml
helm install --name filebeat stable/filebeat -f filebeat_values.yaml
helm install stable/kibana --name kibana --namespace monitoring
</code></pre>
<p>The logs are indexed in ES, but the "message" contains the whole string, not the defined fields. My grok filter doesn't seem to work in logstash conf.</p>
<p>There is no documentation at <a href="https://github.com/helm/charts/tree/master/incubator/logstash" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/incubator/logstash</a> about how to set the patterns.</p>
<p>Here is what I tried :</p>
<p>my log's format :</p>
<pre><code>10-09-2018 11:57:55.906 [Debug] [LOG] serviceName - Technical - my specific message - correlationId - userId - data - operation - error - stackTrace escaped on one line
</code></pre>
<p>logstash_values.yaml (from <a href="https://github.com/helm/charts/blob/master/incubator/logstash/values.yaml" rel="nofollow noreferrer">https://github.com/helm/charts/blob/master/incubator/logstash/values.yaml</a>) :</p>
<pre><code>elasticsearch:
host: elasticsearch-client.default.svc.cluster.local
port: 9200
patterns:
main: |-
(?<time>(?:2[0123]|[01]?[0-9]):(?:[0-5][0-9]):(?:(?:[0-5]?[0-9]|60)(?:[:.,][0-9]+)?)\.(?:[0-9]){3})} [(?<logLevel>.*)] [(?<code>.*)] (?<caller>.*) - (?<logMessageType>.*) - (?<message>.*) - (?<correlationId>.*) - (?<userId>.*) - (?<data>.*) - (?<operation>.*) - (?<error>.*) - (?<stackTrace>.*)
inputs:
main: |-
input {
beats {
port => 5044
}
}
filters:
outputs:
main: |-
output {
elasticsearch {
hosts => ["${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}"]
manage_template => false
index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
document_type => "%{[@metadata][type]}"
}
}
</code></pre>
<p>This becomes a Kubernetes configMap "logstash-patterns" :</p>
<pre><code>apiVersion: v1
kind: ConfigMap
data:
main: (?<time>(?:2[0123]|[01]?[0-9]):(?:[0-5][0-9]):(?:(?:[0-5]?[0-9]|60)(?:[:.,][0-9]+)?)\.(?:[0-9]){3}) [(?<code>.*)] [(?<logLevel>.*)] (?<service>.*) - (?<logMessageType>.*) - (?<message>.*) - (?<correlationId>.*) - (?<userId>.*) - (?<data>.*) - (?<operation>.*) - (?<error>.*) - (?<stackTrace>.*)
</code></pre>
<p>I don't see any error logs in logstash pod.</p>
<p>Do you have any idea how to configure patterns in logstash in Kubernetes ?</p>
<p>Thanks.</p>
| <p>I was confusing "pattern" and "filter".</p>
<p>In the Helm chart, "pattern" is for specifying custom grok patterns (<a href="https://grokdebug.herokuapp.com/patterns" rel="nofollow noreferrer">https://grokdebug.herokuapp.com/patterns</a>):</p>
<blockquote>
<p>MY_CUSTOM_ALL_CHARS .*</p>
</blockquote>
<p>My grok filter should be in the filter section:</p>
<pre><code>patterns:
# nothing here for me
filters:
main: |-
filter {
grok {
match => { "message" => "\{%{TIMESTAMP_ISO8601:time}\} \[%{DATA:logLevel}\] \[%{DATA:code}\] %{DATA:caller} &\$ %{DATA:logMessageType} &\$ %{DATA:message} &\$ %{DATA:correlationId} &\$ %{DATA:userId} &\$ %{DATA:data} &\$ %{DATA:operation} &\$ %{DATA:error} &\$ (?<stackTrace>.*)" }
overwrite => [ "message" ]
}
date {
match => ["time", "ISO8601"]
target => "time"
}
}
</code></pre>
|
<p>I got an issue with the deployment of a WordPress + MySQL application on a Kubernetes cluster.</p>
<p>When using <code>HorizontalPodAutoscaler</code> to autoscale my <code>wordpress</code> and <code>wordpress-mysql</code> deployments, it works fine for the <code>wordpress</code> one but not the <code>wordpress-mysql</code> one.<br>
Indeed, when multiple MySQL pods are created, some go into the <code>CrashLoopBackOff</code> status:</p>
<pre><code>$ kubectl get all -n wordpress
NAME READY STATUS RESTARTS AGE
po/wordpress-3874566264-7031k 1/1 Running 0 16h
po/wordpress-mysql-898811424-2bdnn 0/1 CrashLoopBackOff 6 16h
po/wordpress-mysql-898811424-dxj92 1/1 Running 146 16h
po/wordpress-mysql-898811424-vs29j 0/1 CrashLoopBackOff 148 16h
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/wordpress 10.254.121.190 10.0.0.13 80:30003/TCP 16h
svc/wordpress-mysql None <none> 3306/TCP 16h
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
hpa/wordpress Deployment/wordpress 28% / 80%, 0% / 80% 1 10 1 16h
hpa/wordpress-mysql Deployment/wordpress-mysql 90% / 80%, 0% / 80% 1 10 3 16h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/wordpress 1 1 1 1 16h
deploy/wordpress-mysql 3 3 3 1 16h
NAME DESIRED CURRENT READY AGE
rs/wordpress-3874566264 1 1 1 16h
rs/wordpress-mysql-898811424 3 3 1 16h
</code></pre>
<p>And when I take a look at their logs, I get this:</p>
<pre><code>$ kubectl logs -p wordpress-mysql-898811424-2bdnn -n wordpress
2018-09-12 08:04:12 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
2018-09-12 08:04:12 0 [Note] mysqld (mysqld 5.6.41) starting as process 436 ...
2018-09-12 08:04:12 436 [Note] Plugin 'FEDERATED' is disabled.
2018-09-12 08:04:12 436 [Note] InnoDB: Using atomics to ref count buffer pool pages
2018-09-12 08:04:12 436 [Note] InnoDB: The InnoDB memory heap is disabled
2018-09-12 08:04:12 436 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2018-09-12 08:04:12 436 [Note] InnoDB: Memory barrier is not used
2018-09-12 08:04:12 436 [Note] InnoDB: Compressed tables use zlib 1.2.3
2018-09-12 08:04:12 436 [Note] InnoDB: Using Linux native AIO
2018-09-12 08:04:12 436 [Note] InnoDB: Using CPU crc32 instructions
2018-09-12 08:04:12 436 [Note] InnoDB: Initializing buffer pool, size = 128.0M
2018-09-12 08:04:12 436 [Note] InnoDB: Completed initialization of buffer pool
2018-09-12 08:04:12 436 [ERROR] InnoDB: Unable to lock ./ibdata1, error: 11
2018-09-12 08:04:12 436 [Note] InnoDB: Check that you do not already have another mysqld process using the same InnoDB data or log files.
2018-09-12 08:04:12 436 [Note] InnoDB: Retrying to lock the first data file
2018-09-12 08:04:13 436 [ERROR] InnoDB: Unable to lock ./ibdata1, error: 11
2018-09-12 08:04:13 436 [Note] InnoDB: Check that you do not already have another mysqld process using the same InnoDB data or log files.
2018-09-12 08:04:14 436 [ERROR] InnoDB: Unable to lock ./ibdata1, error: 11
2018-09-12 08:04:14 436 [Note] InnoDB: Check that you do not already have another mysqld process using the same InnoDB data or log files.
2018-09-12 08:04:15 436 [ERROR] InnoDB: Unable to lock ./ibdata1, error: 11
[...]
2018-09-12 08:05:51 436 [Note] InnoDB: Check that you do not already have another mysqld process using the same InnoDB data or log files.
2018-09-12 08:05:52 436 [ERROR] InnoDB: Unable to lock ./ibdata1, error: 11
2018-09-12 08:05:52 436 [Note] InnoDB: Check that you do not already have another mysqld process using the same InnoDB data or log files.
2018-09-12 08:05:52 436 [Note] InnoDB: Unable to open the first data file
2018-09-12 08:05:52 7f57a329f5c0 InnoDB: Operating system error number 11 in a file operation.
InnoDB: Error number 11 means 'Resource temporarily unavailable'.
InnoDB: Some operating system error numbers are described at
InnoDB: http://dev.mysql.com/doc/refman/5.6/en/operating-system-error-codes.html
2018-09-12 08:05:52 436 [ERROR] InnoDB: Can't open './ibdata1'
2018-09-12 08:05:52 436 [ERROR] InnoDB: Could not open or create the system tablespace. If you tried to add new data files to the system tablespace, and it failed here, you should now edit innodb_data_file_path in my.cnf back to what it was, and remove the new ibdata files InnoDB created in this failed attempt. InnoDB only wrote those files full of zeros, but did not yet use them in any way. But be careful: do not remove old data files which contain your precious data!
2018-09-12 08:05:52 436 [ERROR] Plugin 'InnoDB' init function returned error.
2018-09-12 08:05:52 436 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
2018-09-12 08:05:52 436 [ERROR] Unknown/unsupported storage engine: InnoDB
2018-09-12 08:05:52 436 [ERROR] Aborting
2018-09-12 08:05:52 436 [Note] Binlog end
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'partition'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'PERFORMANCE_SCHEMA'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_SYS_DATAFILES'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_SYS_TABLESPACES'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN_COLS'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_SYS_FOREIGN'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_SYS_FIELDS'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_SYS_COLUMNS'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_SYS_INDEXES'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_SYS_TABLESTATS'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_SYS_TABLES'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_FT_INDEX_TABLE'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_FT_INDEX_CACHE'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_FT_CONFIG'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_FT_BEING_DELETED'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_FT_DELETED'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_FT_DEFAULT_STOPWORD'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_METRICS'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_BUFFER_POOL_STATS'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE_LRU'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_BUFFER_PAGE'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX_RESET'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_CMP_PER_INDEX'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_CMPMEM_RESET'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_CMPMEM'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_CMP_RESET'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_CMP'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_LOCK_WAITS'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_LOCKS'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'INNODB_TRX'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'BLACKHOLE'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'ARCHIVE'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'MRG_MYISAM'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'MyISAM'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'MEMORY'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'CSV'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'sha256_password'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'mysql_old_password'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'mysql_native_password'
2018-09-12 08:05:52 436 [Note] Shutting down plugin 'binlog'
2018-09-12 08:05:52 436 [Note] mysqld: Shutdown complete
</code></pre>
<p>So it might be quite normal because each MySQL pod is trying to access <code>./ibdata1</code> at the same time, but then here is my question: <strong>is it really possible to have several MySQL pods sharing the same data?</strong> If the answer is yes, then how should I proceed to avoid these annoying errors?</p>
<p>If you need some other information, just tell me and I will edit my post.</p>
<p>Thank you in advance for your help!</p>
| <blockquote>
<p>So it might be quite normal because each MySQL pod is trying to access ./ibdata1 at the same time</p>
</blockquote>
<p>Yes, if you try to do that (you didn't supply manifests), then that's the very reason you get the CrashLoopBackOff state. The first instance to start will lock the file, and all subsequent ones will fail.</p>
<blockquote>
<p>... trying to access ./ibdata1 at the same time ... is it really possible to have several MySQL pods sharing the same data?</p>
</blockquote>
<p>If we are talking about the same data folder (the very same persistent volume, hostPath or NFS share) used by two independent MySQL instances, then no, not really, and it is not advisable for a number of reasons.</p>
<p>If you need multiple MySQL instances (processes, containers or pods) sharing the same data (not the same data folder!), you need to use replication (with read replicas, for example), where each instance has its own data folder structure but they synchronise data between themselves in some manner. Here is one example of <a href="https://kubernetes.io/docs/tasks/run-application/run-replicated-stateful-application/" rel="nofollow noreferrer">a MySQL single-master topology with multiple slaves running asynchronous replication</a> in the official Kubernetes documentation. Note that this is neither a production setup nor an HA setup, just an illustration of a simple replication scenario to give you an idea.</p>
<p>Now, some simple questions: are you sure you can't handle the load with a single MySQL instance serving several WordPress instances? Are you trying to build an HA setup? The answer to each of those questions calls for a somewhat different approach and architecture than simply increasing the number of pods.</p>
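<p>If one database instance is enough, the usual pattern is to keep the HorizontalPodAutoscaler on the stateless WordPress tier only, delete <code>hpa/wordpress-mysql</code>, and pin MySQL to a single replica with a <code>Recreate</code> strategy so that no two MySQL pods ever hold the volume at the same time. A sketch shaped like the official WordPress tutorial follows; the label, secret and claim names are illustrative and may differ from yours:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  namespace: wordpress
spec:
  replicas: 1            # keep a single instance; do not attach an HPA
  strategy:
    type: Recreate       # never run the old and new pod against the PV at once
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - name: mysql
        image: mysql:5.6
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass        # hypothetical secret
              key: password
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim   # hypothetical claim name
</code></pre>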
|
<p>What happened:
I have been following these guidelines: <a href="https://kubernetes.io/docs/setup/minikube/" rel="noreferrer">https://kubernetes.io/docs/setup/minikube/</a> and I have the "connection refused" issue when trying to curl the application. Here are the steps I did:</p>
<pre><code>~~> minikube status
minikube: Stopped
cluster:
kubectl:
~~> minikube start
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
~~> kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.10 --port=9500
deployment.apps/hello-minikube created
~~> kubectl expose deployment hello-minikube --type=NodePort
service/hello-minikube exposed
~~> kubectl get pod
NAME READY STATUS RESTARTS AGE
hello-minikube-79577c5997-24gt8 1/1 Running 0 39s
~~> curl $(minikube service hello-minikube --url)
curl: (7) Failed to connect to 192.168.99.100 port 31779: Connection refused
</code></pre>
<p><strong>What I expect to happen:</strong>
When I curl the pod, it should give a proper reply (like in the quickstart: <a href="https://kubernetes.io/docs/setup/minikube/" rel="noreferrer">https://kubernetes.io/docs/setup/minikube/</a>)</p>
<p>minikube logs: <a href="https://docs.google.com/document/d/1o2-ebiZTsoCzQNSn_rQSkcuVzOJABmwT2KKzGoUQNiQ/edit" rel="noreferrer">https://docs.google.com/document/d/1o2-ebiZTsoCzQNSn_rQSkcuVzOJABmwT2KKzGoUQNiQ/edit</a></p>
| <p>Not sure where you got the port <code>9500</code> from, but that's the reason it doesn't work. NGINX serves on port <code>8080</code>. This should work (it does for me, at least):</p>
<pre><code>$ kubectl expose deployment hello-minikube \
--type=NodePort \
--port=8080 --target-port=8080
$ curl $(minikube service hello-minikube --url)
Hostname: hello-minikube-79577c5997-tf49z
Pod Information:
-no pod information available-
Server values:
server_version=nginx: 1.13.3 - lua: 10008
Request Information:
client_address=172.17.0.1
method=GET
real path=/
query=
request_version=1.1
request_scheme=http
request_uri=http://192.168.64.11:8080/
Request Headers:
accept=*/*
host=192.168.64.11:32141
user-agent=curl/7.54.0
Request Body:
-no body in request-
</code></pre>
|
<p>I wonder whether this could work.</p>
<p>We have services FOO and BAR running in the same cluster as a Docker registry. Let's imagine this cluster is for production, not for development.</p>
<p>We have a CI/CD system which is responsible for building images and pushing them to the Docker registry.</p>
<p>The Docker registry is used only inside the Kubernetes private network; we won't push or pull images outside of the cluster, because... why should we?</p>
<pre><code>+-----------------------------------------------+
| |
| KUBERNETES |
| +-------+ |
| | VCS | +----------+ |
| | <----------+ | |
| | | | CI/CD | |
| +-------+ +-------+ | |
| | +----------+ |
| | |
| | +-----+ |
| +--------v-----+ <-----+FOO | |
| | INSECURE | +-----+ |
| | DOCKER | +-------+ |
| | REGISTRY | <---------+BAR | |
| +--------------+ +-------+ |
+-----------------------------------------------+
^
|
|
+
USERS
</code></pre>
<p>Is it possible to create a Docker registry with a self-signed certificate, and set up Kubernetes to trust this registry?</p>
<p>Or is this overhead, and is it better to just use a proper certificate and go over the public network?</p>
<p>Where do you store production-ready Docker images, and where do you store staging ones?</p>
| <p>Well, this looks like a very theoretical question. The only question which can be answered unequivocally is:</p>
<blockquote>
<p>Is it possible to create docker registry with self signed certificate,
and setup kubernetes to trust this registry?</p>
</blockquote>
<p>Of course, you can deploy your own Docker registry, e.g. <code>Artifactory</code> or something else. You can definitely create a self-signed certificate and use it, just as you can use a certificate issued by one of the Certificate Authorities (note that it could be free, via <code>Let's Encrypt</code>, for example).
Moving forward, whether to trust a registry or not is not Kubernetes' task - it is the container runtime's task, e.g. <code>Docker</code> or <code>Rkt</code>. So, if you want to use a private registry, you will have to configure the runtime's client to work with your registry, whether it is secured or not.</p>
<p>Everything else is not as clear-cut as it might seem. The only thing I want to say is: practice shows that if you are going to do something, you have to do it your way.</p>
|
<p>I'm trying to run a <code>StatefulSet</code> on my Kubernetes Cluster which has preemptible nodes in it, but I don't want to run StatefulSets on preemptible nodes as they are available for 24hrs at max.</p>
<p>As mentioned in this <a href="https://medium.com/google-cloud/using-preemptible-vms-to-cut-kubernetes-engine-bills-in-half-de2481b8e814" rel="nofollow noreferrer">post</a>, We can do it with Deployments / Pods like this:</p>
<pre><code>affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: cloud.google.com/gke-preemptible
operator: DoesNotExist
</code></pre>
<p>But How am I supposed to do this on statefulsets?</p>
| <p>You can use it in the spec definition just like in deployments:</p>
<p>Example:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
selector:
matchLabels:
app: nginx # has to match .spec.template.metadata.labels
serviceName: "nginx"
replicas: 3 # by default is 1
template:
metadata:
labels:
app: nginx # has to match .spec.selector.matchLabels
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: cloud.google.com/gke-preemptible
operator: DoesNotExist
terminationGracePeriodSeconds: 10
containers:
- name: nginx
image: k8s.gcr.io/nginx-slim:0.8
ports:
- containerPort: 80
name: web
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
volumeClaimTemplates:
- metadata:
name: www
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "my-storage-class"
resources:
requests:
storage: 1Gi
</code></pre>
|
<p>I am new to EKS and am looking for the number of pods per node and the EC2 instance sizes for nodes that AWS recommends for EKS for better performance and HA.
I found the limitations set by Kubernetes.io <a href="https://kubernetes.io/docs/setup/cluster-large/" rel="nofollow noreferrer">here</a>, but I want to know AWS's school of thought when we run our clusters with EKS. You may share your experience too.</p>
<p>This is not a poll; I just want to know the standard usage.</p>
| <p>There is a hard limit to the number of pods that can be run on a particular worker instance type. This is because, by default, Amazon's VPC CNI assigns a subnet IP to each pod. <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-eni.html#AvailableIpPerENI" rel="noreferrer">This</a> page lists how many interfaces/ips per interface a particular instance type can have. One interface is reserved for the host, so you get your answer by ( Maximum Network Interfaces ) * ( IPv4/6 Addresses per Interface ) - 1. For example, with a t2.medium, you get 17 pods. </p>
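<p>As a quick illustration of the arithmetic (the ENI and IP counts below come from the AWS table linked above; double-check them for your instance type):</p>
<pre><code># t2.medium: 3 ENIs, 6 IPv4 addresses per ENI
$ echo $(( 3 * 6 - 1 ))
17
# m5.large: 3 ENIs, 10 IPv4 addresses per ENI
$ echo $(( 3 * 10 - 1 ))
29
</code></pre>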
<p>Picking the right type, as mentioned by RickyA, is an exercise you will have to go through.</p>
|
<p>I'm using Jenkins deployed on Kubernetes. The Jenkins pods are deployed in the 'kubernetes-plugin' namespace and use the service account 'jenkins', which is defined below:</p>
<pre><code>---
apiVersion: v1
kind: ServiceAccount
metadata:
name: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: jenkins
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get"]
- apiGroups: ["", "extensions", "apps"]
resources: ["deployments", "replicasets", "pods"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: jenkins
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: jenkins
subjects:
- kind: ServiceAccount
name: jenkins
</code></pre>
<p>But when I use <code>kubectl apply -f web-api-deploy.yaml -n default</code> in the Jenkins pipeline, it reports the following error:</p>
<pre><code>deployments.extensions "news-app-web-api-dev" is forbidden: User "system:serviceaccount:kubernetes-plugin:jenkins" cannot get deployments.extensions in the namespace "default"
</code></pre>
<p>which means: you cannot deploy to namespace 'default' when using service account 'jenkins' from namespace 'kubernetes-plugin'.</p>
<p>So is there a way to deploy a Deployment in another namespace? How?</p>
| <blockquote>
<p>So is there a way to deploy a Deployment in another namespace? How?</p>
</blockquote>
<p>If I'm not mistaken, <a href="https://github.com/jenkinsci/kubernetes-plugin" rel="nofollow noreferrer">this github project</a> gives steps for running in a different namespace. It all boils down to this:</p>
<p>You need to create a ServiceAccount, Role and RoleBinding in the other namespace and use them as noted in the documentation. Here is the relevant part:</p>
<pre><code>Ensure you create the namespaces and roles with the following commands,
then run the tests in namespace kubernetes-plugin with the service account
jenkins (edit src/test/kubernetes/service-account.yml to use a different
service account)
kubectl create namespace kubernetes-plugin-test
kubectl create namespace kubernetes-plugin-test-overridden-namespace
kubectl create namespace kubernetes-plugin-test-overridden-namespace2
kubectl apply -n kubernetes-plugin-test -f src/main/kubernetes/service-account.yml
kubectl apply -n kubernetes-plugin-test-overridden-namespace -f src/main/kubernetes/service-account.yml
kubectl apply -n kubernetes-plugin-test-overridden-namespace2 -f src/main/kubernetes/service-account.yml
kubectl apply -n kubernetes-plugin-test -f src/test/kubernetes/service-account.yml
kubectl apply -n kubernetes-plugin-test-overridden-namespace -f src/test/kubernetes/service-account.yml
kubectl apply -n kubernetes-plugin-test-overridden-namespace2 -f src/test/kubernetes/service-account.yml
</code></pre>
<p>Also applicable to your situation is to create new Role and RoleBinding in default namespace referencing jenkins ServiceAccount from kubernetes-plugin namespace like so:</p>
<pre><code>kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: role-jenkins-default
namespace: default
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get"]
- apiGroups: ["", "extensions", "apps"]
resources: ["deployments", "replicasets", "pods"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: roleb-jenkins-default
namespace: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: role-jenkins-default
subjects:
- kind: ServiceAccount
name: jenkins
namespace: kubernetes-plugin
</code></pre>
<p>Note that the <code>role-</code> and <code>roleb-</code> prefixes as well as the <code>-default</code> suffix are added to the names for clarity. The same goes for explicitly listing the namespace <code>default</code>, for easier bookkeeping and clarity.</p>
<p>This change should get you past the error mentioned in your question.</p>
|
<p>I have some apps in production working in Azure. All these applications belong to the same company and communicate with each other. I want to migrate them to Kubernetes.</p>
<p><strong>My question is:</strong> What are the best practices in this case and why ?</p>
<p>Some people recommend one cluster and multiple namespaces, and I don't know why.</p>
<p>For example: <a href="https://www.youtube.com/watch?v=xygE8DbwJ7c" rel="noreferrer">https://www.youtube.com/watch?v=xygE8DbwJ7c</a> recommends running apps within a single cluster using intra-cluster multi-tenancy, but the arguments for this choice are not enough for me.</p>
| <blockquote>
<p><strong>My question is:</strong> What are the best practices in this case? and why ?</p>
</blockquote>
<p>Answer is: it depends...</p>
<p>To try to summarize it from our experience:</p>
<ul>
<li><p>A cluster for each app is usually quite a waste of resources, especially given HA cluster requirements, and it can mainly be justified when a single app is comprised of a larger number of microservices that are naturally clustered together, or when special security considerations have to be taken into account. That is, however, in our experience, rarely the case (but it depends)...</p></li>
<li><p>Namespaces for apps in a cluster are more in line with our experience and needs, but again, this should not be overdone either (so, again, it depends) since, for example, your CNI can become a bottleneck, leading to one rogue app (or setup) degrading performance for other apps in seemingly unrelated cases. Load-balancing issues, rollout downtimes, clashes for resources and other things can happen if everything is crammed into one cluster at all cost. So this has its limits as well.</p></li>
<li><p>Best of both worlds - we started with a single cluster, and when we reached naturally separate (and separately performant) use cases (say, QA, dev and staging environments, or a different client with special security considerations, etc.) we migrated to more clusters, keeping reasonably namespaced apps in each cluster.</p></li>
</ul>
<p>So, all in all: depending on the available machine pool (number of nodes), the size of the cluster, the size of the apps themselves (microservice/service complexity), HA requirements, redundancy, security considerations, etc., you might want to fit everything into one cluster with namespaced apps, or separate things into several clusters (again with namespaced apps within each cluster), or keep everything totally separate with one app per cluster. So - it depends.</p>
|
<p>Currently I am having an issue with one of my services set to be a load balancer. I am trying to get source IP preservation as stated in the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip" rel="nofollow noreferrer">docs</a>. However, when I set the <code>externalTrafficPolicy</code> to <code>Local</code>, I lose all traffic to the service. Is there something I'm missing that is causing this to fail?</p>
<p><strong>Load Balancer Service:</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: loadbalancer
role: loadbalancer-service
name: lb-test
namespace: default
spec:
clusterIP: 10.3.249.57
externalTrafficPolicy: Local
ports:
- name: example service
nodePort: 30581
port: 8000
protocol: TCP
targetPort: 8000
selector:
app: loadbalancer-example
role: example
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: *example.ip*
</code></pre>
| <p>Could be several things. A couple of suggestions:</p>
<ol>
<li>Your service is getting an external IP and doesn't know how to reply back based on the local IP address of the pod.
<ul>
<li>Try running a sniffer on your pod to see if you are getting packets from the external source.</li>
<li>Try checking the logs of your application.</li>
</ul></li>
<li>Healthcheck in your load balancer is failing. Check the load balancer for your service on the GCP console (see the sketch after this list).
<ul>
<li>Check the instance port is listening. (probably not if your health check is failing)</li>
</ul></li>
</ol>
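<p>For point 2, here is a sketch of how you could probe the health check yourself (the node IP and port are placeholders). With <code>externalTrafficPolicy: Local</code>, Kubernetes allocates a <code>healthCheckNodePort</code> that the GCP load balancer probes, and only nodes that actually run a local pod of your service should pass it:</p>
<pre><code># find the allocated health check port on the service
$ kubectl get svc lb-test -o jsonpath='{.spec.healthCheckNodePort}'
# probe it against a node (replace <node-ip> and <port>); nodes without local endpoints return a failure
$ curl http://<node-ip>:<port>/healthz
</code></pre>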
<p><a href="https://i.stack.imgur.com/mZULU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mZULU.png" alt="lb"></a></p>
<p>Hope it helps.</p>
|
<p>I have a failing <code>public docker hub</code> container and if I <code>kubectl apply -f ...</code> with the same version, <code>:latest</code> in this case, I am getting:</p>
<pre><code>Container image "<name/name>:latest" already present on machine
</code></pre>
<p>I don't see the image anywhere, in this case I am running on Google Kubernetes Engine - and it is not in the google <code>container registry</code>.</p>
<p>The solution, or workaround, is of course to fix the code error in the Docker container, bump the version number and push again - then it all works and gets pulled.</p>
<p>But is there no way to clear the image in Kubernetes, something like in docker <code>docker rmi <name/name>:latest</code>?</p>
| <p>I think using the <code>latest</code> tag is not the best approach. But if it is necessary, the <a href="https://kubernetes.io/docs/concepts/containers/images/#updating-images" rel="nofollow noreferrer">official</a> workaround is <code>imagePullPolicy: Always</code>.</p>
<p>Why is this not the best way? You can find more info <a href="https://kubernetes.io/docs/concepts/configuration/overview/#container-images" rel="nofollow noreferrer">here</a>.</p>
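<p>A minimal sketch of what that looks like (the pod and container names are just examples):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-app                    # example name
spec:
  containers:
  - name: my-app
    image: <name/name>:latest
    imagePullPolicy: Always       # re-pull on every pod start, even if a :latest image is cached on the node
</code></pre>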
|
<p>I followed this tutorial: <a href="https://kubecloud.io/setting-up-a-highly-available-kubernetes-cluster-with-private-networking-on-aws-using-kops-65f7a94782ef" rel="nofollow noreferrer">Kubernetes Cluster with private networking on AWS using Kops</a></p>
<p>However, after creating the kubernetes cluster, I am getting the following error:</p>
<pre><code>$ kops validate cluster
Using cluster from kubectl context: k8s-cluster.mydomain.com
Validating cluster k8s-cluster.mydomain.com
</code></pre>
<h2>Error Message:</h2>
<pre><code>unexpected error during validation: error listing nodes: Get https://subdomain.eu-central-1.elb.amazonaws.com/api/v1/nodes: EOF
</code></pre>
<p>Any ideas on how to debug or resolve this issue?</p>
<hr>
<p>Steps I used to create are below:</p>
<p><strong>Setup VPC and Subnets</strong></p>
<p>Create the VPC</p>
<pre><code>$ aws ec2 create-vpc --cidr-block 10.0.0.0/16 --region eu-central-1
</code></pre>
<p>Allow DNS hostnames</p>
<pre><code>$ aws ec2 modify-vpc-attribute --vpc-id ${VPC_ID} --enable-dns-hostnames "{\"Value\":true}" --region ${REGION}
</code></pre>
<p>Create internet gateway</p>
<pre><code>$ aws ec2 create-internet-gateway --region ${REGION}
</code></pre>
<p>Attach internet gateway to VPC</p>
<pre><code>$ aws ec2 attach-internet-gateway --internet-gateway-id ${INTERNET_GATEWAY_ID} --vpc-id ${VPC_ID} --region ${REGION}
</code></pre>
<p>[PUBLIC SUBNETS] Create three public zones / subnets (3x)</p>
<pre><code>$ aws ec2 create-subnet --vpc-id ${VPC_ID} --cidr-block 10.0.0.0/20 --availability-zone ${AVAILABILITY_ZONE_1} --region ${REGION}
</code></pre>
<p>Set public subnets to auto-assign public ip to instances (3x)</p>
<pre><code>$ aws ec2 modify-subnet-attribute --subnet-id ${PUBLIC_SUBNET_1} --map-public-ip-on-launch --region ${REGION}
</code></pre>
<p>[PRIVATE SUBNETS] Create three private zones / subnets (3x)</p>
<pre><code>$ aws ec2 create-subnet --vpc-id ${VPC_ID} --cidr-block 10.0.48.0/20 --availability-zone ${AVAILABILITY_ZONE_1} --region ${REGION}
</code></pre>
<p>[Setup NAT Gateways] Allocate address (3x)</p>
<pre><code>$ aws ec2 allocate-address --domain vpc --region ${REGION}
</code></pre>
<p>Create NAT gateway for public zones (3x)</p>
<pre><code>$ aws ec2 create-nat-gateway --subnet-id ${PUBLIC_SUBNET_1} --allocation-id ${EIP_ALLOCATION_ID_1} --region ${REGION}
</code></pre>
<p>[CONFIGURE ROUTE TABLES] Create route table</p>
<pre><code>$ aws ec2 create-route-table --vpc-id ${VPC_ID} --region ${REGION}
</code></pre>
<p>Create route for internet gateway</p>
<pre><code>$ aws ec2 create-route --route-table-id ${RTB_PUBLIC_1} --destination-cidr-block 0.0.0.0/0 --gateway-id ${INTERNET_GATEWAY_ID} --region ${REGION}
</code></pre>
<p>Associate public subnets with route table (3x)</p>
<pre><code>$ aws ec2 associate-route-table --route-table-id ${RTB_PUBLIC_1} --subnet-id ${PUBLIC_SUBNET_1} --region ${REGION}
</code></pre>
<p>[ROUTE TABLE FOR PRIVATE ZONES] Create route table for each private zone (3x)</p>
<pre><code>$ aws ec2 create-route-table --vpc-id ${VPC_ID} --region ${REGION}
</code></pre>
<p>Create route to NAT Gateway (3x)</p>
<pre><code>$ aws ec2 create-route --route-table-id ${RTB_PRIVATE_1} --destination-cidr-block 0.0.0.0/0 --nat-gateway-id ${NAT_GW_1} --region ${REGION}
</code></pre>
<p>Associate subnets (3x)</p>
<pre><code>$ aws ec2 associate-route-table --route-table-id ${RTB_PRIVATE_1} --subnet-id ${PRIVATE_SUBNET_1} --region ${REGION}
</code></pre>
<hr>
<p><strong>Other Configuration</strong></p>
<p>Set up S3 Bucket as Kops state store</p>
<pre><code>$ aws s3api create-bucket --bucket my-state-store --region ${REGION} --create-bucket-configuration LocationConstraint=eu-central-1
</code></pre>
<p>Create cluster</p>
<pre><code>$ kops create cluster --node-count 3 --zones ${AVAILABILITY_ZONE_1},${AVAILABILITY_ZONE_2},${AVAILABILITY_ZONE_3} --master-zones ${AVAILABILITY_ZONE_1},${AVAILABILITY_ZONE_2},${AVAILABILITY_ZONE_3} --state ${KOPS_STATE_STORE} --dns-zone=${DNS_ZONE_PRIVATE_ID} --dns private --node-size m5.large --master-size m5.large --topology private --networking weave --vpc=${VPC_ID} --bastion ${NAME}
</code></pre>
<p>Edit cluster to config subnets</p>
<pre><code>$ kops edit cluster ${NAME}
</code></pre>
<p>Note: update subnets to correspond with created public/private subnets above</p>
<pre><code>$ kops update cluster ${NAME} --yes
</code></pre>
| <p>Issue resolved. It was not a <code>kops</code> problem; the issue was with the AWS M5 instance type and the Linux image version.</p>
<blockquote>
<p>The kops default Debian jessie images do not support nvme for EBS
volumes, which is used by the AWS M5 instance types. As a result,
masters fail to start, as they can not mount the EBS volumes.</p>
</blockquote>
<p>Source: <a href="https://github.com/kubernetes/kops/issues/4873" rel="nofollow noreferrer">https://github.com/kubernetes/kops/issues/4873</a></p>
|
<p>I was looking at the <a href="https://kubernetes.io/docs/getting-started-guides/windows/" rel="noreferrer">kubernetes documentation</a>, which seems to indicate Windows compatibility; however, it isn't completely clear to me whether Linux and Windows nodes can live together (I mean, in different VMs but in the same cluster).</p>
<p>I would like to know if there is any support for this scenario in <code>gcloud</code>, <code>azure</code> or <code>aws</code>, and also the procedure or an example to make it work - for instance, how to schedule a pod onto the correct VM (Windows or Linux) and how horizontal and cluster autoscalers work.</p>
<p>The use case is 2 APIs, one running on Windows (.NET Framework) and the other on Linux (Python/C++), and I want to be able to route to them, have them call each other, scale them and so on with Kubernetes. As a note, the <code>.NET Framework</code> application has dependencies (mainly for mathematical optimization) that cannot be ported to <code>.NET Core</code>; this implies that I cannot convert the application to be <a href="https://learn.microsoft.com/en-us/dotnet/core/linux-prerequisites?tabs=netcore2x" rel="noreferrer">linux-based</a>.</p>
| <p>Some history: containers are a Linux thing, so there are no containers per se on Windows. Docker created Docker for Windows, but essentially what it does is run a Hyper-V Linux VM (it used to be VirtualBox) and run your containers inside it. As of the latest Docker version, Microsoft has added capabilities to Hyper-V to allow running these containers more or less natively, making it easy to run <a href="https://learn.microsoft.com/en-us/aspnet/mvc/overview/deployment/docker-aspnetmvc" rel="noreferrer">.NET apps</a> in containers.</p>
<p>K8s is implemented in Golang, so it was generally easier to port the main components like the <code>kubelet</code>, <code>kube-proxy</code> and <code>kubectl</code> to Windows by using the Golang cross-compiler (or compiling natively on Windows).</p>
<p>A tricky part is the networking but looks like they've got it figured out in the <a href="https://kubernetes.io/docs/getting-started-guides/windows/#upstream-l3-routing-topology" rel="noreferrer">docs</a></p>
<p>As far as public cloud support from major providers:</p>
<ul>
<li><p>AWS </p>
<ul>
<li>Hypervisor: Modified Xen or KVM. No nested virtualization support.</li>
<li>VMs: Windows VMs. Can't take advantage of Hyper-V with nested virtualization, but can run Docker for Windows. </li>
<li>Bare-metal: (i3.metal as of this writing). Run Hyper-V natively and Docker for Windows.</li>
</ul></li>
<li><p>Azure</p>
<ul>
<li>Hypervisor: Hyper-V. Supports nested virtualization on some instance types.</li>
<li>VMs: Windows VMs, can use nested virtualization with Hyper-V, and can run Docker for Windows.</li>
<li>ACS, AKS, ACE: Should be able to take advantage of Hyper-V with nested virtualization and some cases natively.</li>
</ul></li>
<li><p>GCP</p>
<ul>
<li>Hypervisor: KVM. Supports nested virtualization on some instance types.</li>
<li>VMs: Windows VMs. Can run Hyper-V with nested virtualization and can run Docker for Windows.</li>
</ul></li>
</ul>
<p>Other than that, I don't know what else there is to it (beyond what's in the <a href="https://kubernetes.io/docs/getting-started-guides/windows/" rel="noreferrer">docs</a>) - the question is very broad. Just install Docker for Windows, set up networking, join your cluster with <code>kubeadm</code> and schedule your Windows workloads using the <code>nodeSelector</code> spec in your pods, making sure you label your Windows nodes with <code>beta.kubernetes.io/os=windows</code>.</p>
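<p>A minimal sketch of such a pod spec (the names and image are placeholders; the label is the one mentioned above):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: dotnet-api                       # example name
spec:
  nodeSelector:
    "beta.kubernetes.io/os": windows     # schedule only onto Windows nodes
  containers:
  - name: dotnet-api
    image: <your-windows-image>
</code></pre>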
<p>There's another good guide on how to set up Kubernetes with Windows nodes <a href="https://onedrive.live.com/view.aspx?resid=E2B6765015E5FA01!339&ithint=file%2Cdocx&app=Word&authkey=!AGvs_s_hWs7xHGs" rel="noreferrer">here</a>.</p>
|
<p>I'm deploying the <strong>hyperledger/fabric-couchdb</strong> docker image on Rancher-Kubernetes. In the cluster, it is not allowed to run containers as ROOT, so we need to select "Non-root" while deploying images.</p>
<p>After deploying <strong>hyperledger/fabric-couchdb</strong>, the pod is not getting started. When I checked the logs, the message is <strong>su-exec: setgroups: Operation not permitted</strong>. I have also attached a screenshot of the Event below. Please suggest what needs to be done to make it work, or whether I am doing something wrong here.</p>
<p><a href="https://i.stack.imgur.com/DIs27.png" rel="nofollow noreferrer">Event screenshot</a></p>
| <p>That's the problem: you are not running as 'root' and the container entrypoint executes a call to <code>setgroups</code>, which requires 'root'. You will have to either run as 'root' somehow, or modify your container image and its entrypoint so that the calls which require 'root' are made using something like 'sudo'.</p>
<p>Note that whatever user calls 'sudo' needs to have 'root'-like permissions to execute <code>setgroups</code>.</p>
|
<p>View the node status: <code>kubectl get csr</code></p>
<pre><code>[root@kube1 ~]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr--jJF_sRckTdhoqAOYB4fEaA06Juwv32d1RFwzcbbE0c 150m system:bootstrap:gn5vla Pending
node-csr-KMkTDLPqhj52YxZFS8vEOiqMt1NXVEcYvmvUJAhxhwg 150m system:bootstrap:xay6t6 Pending
node-csr-bv18tH4pK-xq7Ekwv0IuzD4CcBuvKjjdonBjpKqHuPQ 150m system:bootstrap:v1g4p2 Pending
</code></pre>
<p>Perform a refusal:</p>
<pre><code>kubectl get csr | grep Pending| awk '{print $1}' | xargs kubectl certificate deny
</code></pre>
<p>View the node status again: <code>kubectl get csr</code></p>
<pre><code>[root@kube1 ~]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr--jJF_sRckTdhoqAOYB4fEaA06Juwv32d1RFwzcbbE0c 150m system:bootstrap:gn5vla Denied
node-csr-KMkTDLPqhj52YxZFS8vEOiqMt1NXVEcYvmvUJAhxhwg 150m system:bootstrap:xay6t6 Denied
node-csr-bv18tH4pK-xq7Ekwv0IuzD4CcBuvKjjdonBjpKqHuPQ 150m system:bootstrap:v1g4p2 Denied
</code></pre>
<p>How can I do this to approve a CSR in the Denied state?</p>
| <p>Short answer: you can't. Once you deny a CSR you need to issue a new CSR and approve it if you want to. You can delete denied CSRs if you don't want to see them there with:</p>
<pre><code> kubectl delete csr <csr-name>
</code></pre>
<p>Additionally, To delete all denied requests use:</p>
<pre><code>kubectl get csr | grep Denied | awk '{print $1;}' | xargs kubectl delete csr
</code></pre>
|
<p>I just installed the controller via Helm, I can list the helm packages via <code>helm list</code>, but is it possible to list all the controllers running in the cluster via <code>kubectl</code> or <code>api-query</code>?</p>
| <p>If you mean replication controller then you can list them by <code>kubectl</code>:</p>
<pre><code>kubectl get replicationcontroller -n my-namespace
</code></pre>
<p>Or list them all from all the namespaces:</p>
<pre><code>kubectl get rc --all-namespaces
</code></pre>
<p>And you can also use API:</p>
<pre><code>curl http://localhost:8080/api/v1/replicationcontrollers
</code></pre>
<p><strong>Update:</strong>
You can list other controller types like <code>replicaset</code> (<code>rs</code>), <code>deployment</code> (<code>deploy</code>), <code>statefulset</code>, <code>daemonset</code> (<code>ds</code>) and <code>job</code> in the same way.</p>
|
<p>We've experienced 4 <code>AUTO_REPAIR_NODES</code> events (revealed by the command <code>gcloud container operations list</code>) on our GKE cluster during the past month. The consequence of node auto-repair is that the node gets recreated and is attached a new external IP, and that new external IP, which was not whitelisted by third-party services, eventually caused failures of services running on the new node.</p>
<p>I noticed that we have "<strong>Automatic node repair</strong>" enabled in our Kubernetes cluster and felt tempted to disable that, but before I do that, I need to know more about the situation. </p>
<p>My questions are:</p>
<ol>
<li>What are some common causes that makes a node unhealthy in the first place? I'm aware of this article <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-repair#node_repair_process" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-repair#node_repair_process</a> which says, "a node reports a <strong>NotReady</strong> status on consecutive checks over the given time threshold" would trigger auto repair. But what could cause a node to become <strong>NotReady</strong>?</li>
<li>I'm also aware of this article <a href="https://kubernetes.io/docs/concepts/architecture/nodes/#node-status" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/architecture/nodes/#node-status</a> which mentions the full list of node status: {OutOfDisk, Ready, MemoryPressure, PIDPressure, DiskPressure, NetworkUnavailable, ConfigOK}. I wonder, if any of {OutOfDisk, MemoryPressure, PIDPressure, DiskPressure, NetworkUnavailable} becomes true for a node, would that node becomes NotReady?</li>
<li>What negative consequences could I get after I disable "Automatic node repair" in the cluster? <strong>I'm basically wondering whether we could end up in a worse situation than auto-repaired nodes and newly-attached-not-whitelisted IP</strong>. Once "Automatic node repair" is disabled, then for the pods that are running on an Unhealthy node that would've been auto-repaired, would Kubernetes create new pods on other nodes?</li>
</ol>
| <p>The confusion here is that the 'Ready' and 'NotReady' states shown when you run <code>kubectl get nodes</code> are reported by the kube-apiserver, and these are independent - it is unclear from the docs how they relate to the kubelet states described <a href="https://kubernetes.io/docs/concepts/architecture/nodes/#node-status" rel="nofollow noreferrer">here</a>.
You can also see the kubelet states (in events) when you run <code>kubectl describe nodes</code>.</p>
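<p>For example (the node name is a placeholder):</p>
<pre><code>$ kubectl get nodes
$ kubectl describe node <node-name>   # look at the Conditions and Events sections
</code></pre>
<p>The <code>Conditions</code> section of that output lists Ready, MemoryPressure, DiskPressure, PIDPressure, etc. as reported for that node.</p>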
<p>To answer some parts of the questions:</p>
<ol>
<li><p>As reported by the kube-apiserver</p>
<ul>
<li>Kubelet down</li>
<li>docker or containerd or crio down (depending on the shim you are using)</li>
<li>kubelet states - unclear.</li>
</ul></li>
<li><p>For these, the kubelet will start evicting or not scheduling pods except for Ready (<a href="https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/out-of-resource/</a>). Unclear from the docs how these get reported from the kubeapi-server.</p></li>
<li><ul>
<li>You could have nodes on your cluster not being used and you'd be paying for that usage.</li>
<li>Yes, k8s will reschedule the pods after a certain (configurable) number of readiness probe failures. If the kubelet is down or the node is down, k8s will consider the pods down.</li>
<li>Assuming your nodes go down, you could end up with less capacity than you need to schedule your workloads, so k8s would not be able to schedule them anyway.</li>
</ul></li>
</ol>
<p>Hope it helps!</p>
|
<p>Is it possible to have the log output for Icecast read a variable from a header sent to the server? Currently I am setting up an Icecast server in Kubernetes and I'm trying to get source IP preservation onto the stream for analyzing log data. However, even with the necessary steps on the Kubernetes side, I am not seeing the source IP in the Icecast logs. However, I was able to sniff incoming requests and I am seeing:</p>
<pre><code>X-Real-IP: 142.x.x.x
X-Forwarded-For: 142.x.x.x
</code></pre>
<p>As headers coming into the server. </p>
<p>Is it possible to get these into the logs somehow?</p>
| <p>Not at the moment.</p>
<p>We plan to support reverse proxying in release 2.5.</p>
<p>Our general recommendation at the moment is to <em>not</em> reverse proxy Icecast due to many possible problems beyond just losing the originating IP address.</p>
<p>Just expose the Icecast ports directly to the Internet, e.g. through port forwarding.</p>
<p><strong>edit:</strong> <a href="https://kubernetes.io/docs/concepts/services-networking/service/#multi-port-services" rel="nofollow noreferrer">You can just declare <code>protocol: TCP</code> ports for your service.</a> See also "Proxy-mode: iptables".</p>
|
<p>I am trying to add a new user to an EKS cluster and give them access. So far I was able to add the user just by editing <code>configmap/aws-auth</code> (<code>kubectl edit -n kube-system configmap/aws-auth</code>) and adding the new user to </p>
<pre><code>mapUsers: |
- userarn: arn:aws:iam::123456789:user/user01
username: user01
groups:
- system:masters
</code></pre>
<p>How can I add a user to the EKS cluster and give them full access to a specific namespace, but nothing outside of it?</p>
<p>I tried to create a Role & RoleBinding as </p>
<pre><code>---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: namespace1
name: namespace1-user
rules:
- apiGroups: ["*"]
resources: ["*"]
verbs: ["*"]
# This role binding allows "user01" to read pods in the "namespace1" namespace.
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: namespace1-user-role-binding
namespace: namespace1
subjects:
- kind: User
name: user01
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: namespace1-user
</code></pre>
<p><code>user01</code> can see all the pods from other users with <code>kubectl get pods --all-namespaces</code>; is there any way to restrict this?</p>
| <p>Essentially what you want is to define a cluster role and use a role binding to apply it to a specific namespace. Using a cluster role (rather than a role) allows you to re-use it across namespaces. Using a role binding allows you to target a specific namespace rather than giving cluster-wide permissions.</p>
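<p>A sketch of what that could look like for the example in the question (the role and binding names are just examples). Also note that as long as <code>user01</code> is mapped to the <code>system:masters</code> group in <code>aws-auth</code>, they keep cluster-wide admin rights regardless of any RoleBinding, so drop that group mapping (or use a custom group) first:</p>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: namespace-admin            # reusable across namespaces
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: user01-namespace1-admin
  namespace: namespace1            # the binding scopes the permissions to this namespace only
subjects:
- kind: User
  name: user01
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: namespace-admin
  apiGroup: rbac.authorization.k8s.io
</code></pre>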
|
<p>I have a container that runs some data fetching from a MySQL database and simply displays the result in console.log(), and want to run this as a cron job in GKE. So far I have the container working on my local machine, and have successfully deployed this to GKE (in terms of there being no errors thrown so far as I can see). </p>
<p>However, the pods that were created were just left as Running instead of stopping after completion of the task. Are the pods supposed to stop automatically after executing all the code, or do they require an explicit instruction to stop - and if so, what is the command to terminate a pod after its creation (by the CronJob)?</p>
<p>I'm reading that there is supposedly some kind of termination grace period of ~30s by default, but after running a minutely-executed cronjob for ~20 minutes, all the pods were still running. I'm not sure if there's a way to terminate the pods from inside the code; otherwise it would be a little silly to have a cronjob generating lots of pods left running idly. My cronjob.yaml is below:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: test
spec:
schedule: "5 * * * *"
jobTemplate:
spec:
template:
spec:
containers:
- name: test
image: gcr.io/project/test:v1
# env:
# - name: "DELAY"
# value: 15
restartPolicy: OnFailure
</code></pre>
| <p>A <code>CronJob</code> is essentially a cookie cutter for jobs. That is, it knows how to create jobs and execute them at a certain time. Now, that being said, when looking at garbage collection and clean up behaviour of a <code>CronJob</code>, we can simply look at what the Kubernetes docs have to say about this topic <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#job-termination-and-cleanup" rel="nofollow noreferrer">in the context of jobs</a>:</p>
<blockquote>
<p>When a Job completes, no more Pods are created, but the Pods are not deleted either. Keeping them around allows you to still view the logs of completed pods to check for errors, warnings, or other diagnostic output. The job object also remains after it is completed so that you can view its status. It is up to the user to delete old jobs after noting their status. Delete the job with kubectl (e.g. <code>kubectl delete jobs/pi</code> or <code>kubectl delete -f ./job.yaml</code>). </p>
</blockquote>
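<p>So you either delete the finished jobs yourself (<code>kubectl get jobs</code>, then <code>kubectl delete job <job-name></code>), or - a sketch, assuming you are happy to only keep a few recent jobs around - let the CronJob clean up after itself with the history limit fields:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: test
spec:
  schedule: "5 * * * *"
  successfulJobsHistoryLimit: 3   # keep only the last 3 successful jobs (and their pods)
  failedJobsHistoryLimit: 1       # keep only the last failed job
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: test
            image: gcr.io/project/test:v1
          restartPolicy: OnFailure
</code></pre>
<p>Also note that a job's pod only leaves the <code>Running</code> state once the container's process actually exits - if your container never exits (e.g. it keeps a server or connection open), the pod will stay <code>Running</code> no matter what, so that is worth checking first in your image.</p>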
|
<p>I'm using Minikube for working with Kubernetes on my local machine, and would like to run a command on the VM just after startup (preferably before the Pods start). I can run it manually with <code>minikube ssh</code>, but that's a bit of a pain to do after every restart, and is difficult to wrap in a script.</p>
<p>Is there an easy way to do this?</p>
<p>The command in my case is this, so that paths on the VM match paths on my host machine:</p>
<pre><code>sudo mount --bind /hosthome/<user> /home/<user>
</code></pre>
| <p>Maybe the flags which you can pass to <code>minikube start</code> would be useful in your case:</p>
<pre><code> --mount This will start the mount daemon and automatically mount files into minikube
--mount-string string The argument to pass the minikube mount command on start (default "/home/user:/minikube-host")
</code></pre>
<p>Edit:
Maybe you could write a script for starting your minikube like this:</p>
<pre><code>minikube start && ssh -t -i ~/.minikube/machines/minikube/id_rsa docker@$(minikube ip) "sudo mount --bind /hosthome/<user> /home/<user>"
</code></pre>
<p>That will start minikube and issue the bind command over SSH right after startup.</p>
|
<p>When I want to execute this command:</p>
<pre><code>kubectl apply -f namespace.yaml
</code></pre>
<p>I had this error message:</p>
<pre><code>The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>
<p>Some answers have been posted in Stackoverflow like using <code>kubectl</code> with:</p>
<pre><code>--kubeconfig=~/.kube/config
</code></pre>
<p>But this didn't solve my problem.</p>
<p>This is my <code>namespace.yaml</code> file:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
name: my_namespace
</code></pre>
<p>I'm using GKE (1.10.6-gke.3).</p>
| <p>You need to configure kubectl to use your cluster as described here: <a href="https://cloud.google.com/kubernetes-engine/docs/quickstart" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/quickstart</a></p>
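<p>In short - assuming the <code>gcloud</code> SDK is installed and you fill in the placeholders with your cluster's name, zone and project - fetching the cluster credentials writes the right context into <code>~/.kube/config</code>:</p>
<pre><code>$ gcloud container clusters get-credentials <cluster-name> --zone <zone> --project <project-id>
$ kubectl config current-context   # should now point at the GKE cluster
$ kubectl apply -f namespace.yaml
</code></pre>
<p>Note that once you can connect, the apply will still fail because <code>my_namespace</code> is not a valid namespace name (underscores are not allowed); use something like <code>my-namespace</code> instead.</p>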
|
<p>In the Kubernetes <a href="https://kubernetes.io/docs/tutorials/hello-minikube/#create-your-node-js-application" rel="noreferrer">minikube tutorial</a> there is this command to use Minikube Docker daemon :</p>
<pre><code>$ eval $(minikube docker-env)
</code></pre>
<p>What exactly does this command do, that is, what exactly does <code>minikube docker-env</code> mean? </p>
| <p>The command <code>minikube docker-env</code> returns a set of Bash environment variable exports to configure your local environment to re-use the Docker daemon inside the Minikube instance.</p>
<p>Passing this output through <code>eval</code> causes bash to evaluate these exports and put them into effect.</p>
<p>You can review the specific commands which will be executed in your shell by omitting the evaluation step and running <code>minikube docker-env</code> directly. However, <em>this will not perform the configuration</em> – the output needs to be evaluated for that.</p>
<hr />
<p>This is a workflow optimization intended to improve your experience with building and running Docker images which you can run inside the minikube environment. It is not mandatory that you re-use minikube's Docker daemon to use minikube effectively, but doing so will significantly improve the speed of your code-build-test cycle.</p>
<p>In a normal workflow, you would have a separate Docker registry on your host machine to that in minikube, which necessitates the following process to build and run a Docker image inside minikube:</p>
<ol>
<li>Build the Docker image on the host machine.</li>
<li>Re-tag the built image in your local machine's image registry with a remote registry or that of the minikube instance.</li>
<li>Push the image to the remote registry or minikube.</li>
<li>(If using a remote registry) Configure minikube with the appropriate permissions to pull images from the registry.</li>
<li>Set up your deployment in minikube to use the image.</li>
</ol>
<p>By re-using the Docker registry inside Minikube, this becomes:</p>
<ol>
<li>Build the Docker image using Minikube's Docker instance. This pushes the image to Minikube's Docker registry.</li>
<li>Set up your deployment in minikube to use the image.</li>
</ol>
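<p>A short sketch of that workflow (the image name and workload are just examples):</p>
<pre><code># point the local docker CLI at the daemon inside minikube
$ eval $(minikube docker-env)
# build straight into minikube's image store
$ docker build -t my-app:dev .
# run it; the pull policy tells the kubelet to use the local image instead of pulling
$ kubectl run my-app --image=my-app:dev --image-pull-policy=Never
</code></pre>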
<hr />
<p>More details of the purpose can be found in the <a href="https://minikube.sigs.k8s.io/docs/handbook/pushing/#1-pushing-directly-to-the-in-cluster-docker-daemon-docker-env" rel="noreferrer">minikube docs</a>.</p>
|
<p>How can I aggregate log events into a single entry even though they are logged across multiple lines by the application logger, when the Docker container is deployed to a GCP Kubernetes cluster?</p>
<p>For AWS we can use the date-time format to identify the start of an event. What is the substitute in GCP?</p>
<p>Thanks.</p>
| <p>In my opinion, you need a dedicated solution to manage your logs really effectively.</p>
<p>One of the most popular solutions for aggregating/managing/sharing logs is <a href="https://www.elastic.co/products" rel="nofollow noreferrer">ELK stack</a>, i.e. <code>ElasticSearch, Logstash, Kibana</code> or another version of similar stack, but with <code>Fluentd</code> instead of <code>Logstash</code>: <code>EFK stack</code>.</p>
<p>The ELK Stack has a list of streamers or "<a href="https://www.elastic.co/products/beats" rel="nofollow noreferrer">data shippers</a>" called <code>beats</code>. One of them is <code>Filebeat</code>, which unsurprisingly works with files. In a nutshell, it can read a file via the <code>tail</code> method, so you can read any file.</p>
<p>The <code>Filebeat</code> <a href="https://www.elastic.co/guide/en/beats/filebeat/current/multiline-examples.html" rel="nofollow noreferrer">supports configuration options</a> to resolve your issue. They are:</p>
<pre><code>multiline.pattern:
multiline.negate:
multiline.match:
</code></pre>
<p>Generally, you should define a regular expression that unambiguously identifies the first line of each log event.</p>
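<p>A sketch of such a configuration, assuming your application log lines start with a date (the path and pattern are examples; on older Filebeat versions the top-level key is <code>filebeat.prospectors</code> instead of <code>filebeat.inputs</code>):</p>
<pre><code>filebeat.inputs:
- type: log
  paths:
    - /var/log/myapp/*.log
  # a line starting with a date marks the beginning of a new event;
  # continuation lines are appended to the previous event
  multiline.pattern: '^\d{4}-\d{2}-\d{2}'
  multiline.negate: true
  multiline.match: after
</code></pre>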
<p>So, try this out. This stack supports different types of integration with Kubernetes, e.g. in-cluster deployment and <a href="https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover.html" rel="nofollow noreferrer">autodiscover</a>.</p>
|
<p>What is the difference between Objects and Resources in the Kubernetes world?</p>
<p>I couldn't find it in <a href="https://kubernetes.io/docs/concepts/" rel="noreferrer">https://kubernetes.io/docs/concepts/</a>. It seems they make no explicit distinction between them, but objects appear to be treated as a higher-level concept than resources.</p>
| <p>A representation of a specific group+version+kind is an object. For example, a v1 Pod, or an apps/v1 Deployment. Those definitions can exist in manifest files, or be obtained from the apiserver.</p>
<p>A specific URL used to obtain the object is a resource. For example, a list of v1 Pod objects can be obtained from the <code>/api/v1/pods</code> resource. A specific v1 Pod object can be obtained from the <code>/api/v1/namespaces/<namespace-name>/pods/<pod-name></code> resource.</p>
<p>API discovery documents (like the one published at /api/v1) can be used to determine the resources that correspond to each object type.</p>
<p>Often, the same object can be retrieved from and submitted to multiple resources. For example, v1 Pod objects can be submitted to the following resource URLs:</p>
<ol>
<li><code>/api/v1/namespaces/<namespace-name>/pods/<pod-name></code></li>
<li><code>/api/v1/namespaces/<namespace-name>/pods/<pod-name>/status</code></li>
</ol>
<p>Distinct resources allow for different server-side behavior and access control. The first URL only allows updating parts of the pod spec and metadata. The second URL only allows updating the pod status, and access is typically only given to kubelets.</p>
<p>Authorization rules are based on the resources for particular requests.</p>
|
<p>I'm using this helm chart: <a href="https://github.com/helm/charts/tree/master/incubator/kafka" rel="nofollow noreferrer">https://github.com/helm/charts/tree/master/incubator/kafka</a> </p>
<p>and these overrides in values.yaml</p>
<pre><code>configurationOverrides:
advertised.listeners: |-
EXTERNAL://kafka-${KAFKA_BROKER_ID}.host-removed:$((31090 + ${KAFKA_BROKER_ID}))
listener.security.protocol.map: |-
PLAINTEXT:SASL_PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
sasl.enabled.mechanisms: SCRAM-SHA-256
auto.create.topics.enable: false
inter.broker.listener.name: PLAINTEXT
sasl.mechanism.inter.broker.protocol: SCRAM-SHA-256
listener.name.EXTERNAL.scram-sha-256.sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="password";
</code></pre>
<p>based on this documentation: <a href="https://kafka.apache.org/documentation/#security_jaas_broker" rel="nofollow noreferrer">https://kafka.apache.org/documentation/#security_jaas_broker</a> </p>
<p>(quick summary)</p>
<pre><code>Brokers may also configure JAAS using the broker configuration property sasl.jaas.config. The property name must be prefixed with the listener prefix including the SASL mechanism, i.e. listener.name.{listenerName}.{saslMechanism}.sasl.jaas.config. Only one login module may be specified in the config value. If multiple mechanisms are configured on a listener, configs must be provided for each mechanism using the listener and mechanism prefix
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
username="admin" \
password="admin-secret";
</code></pre>
<p>The problem is that when I start Kafka I get the following error: </p>
<pre><code>java.lang.IllegalArgumentException: Could not find a 'KafkaServer' or 'plaintext.KafkaServer' entry in the JAAS configuration. System property 'java.security.auth.login.config' is not set
</code></pre>
<p>According to the order of precedence, it should use the static JAAS file if the above config is NOT set.</p>
<pre><code>If JAAS configuration is defined at different levels, the order of precedence used is:
</code></pre>
<ul>
<li>Broker configuration property listener.name.{listenerName}.{saslMechanism}.sasl.jaas.config</li>
<li>{listenerName}.KafkaServer section of static JAAS configuration</li>
<li>KafkaServer section of static JAAS configuration</li>
</ul>
<p>The helm chart doesn't provide a way to configure this JAAS file, so using this property seems to be the desired way; I'm just confused as to what is configured incorrectly.</p>
<p>Note: The cluster works fine if I disable all SASL and just use plain text but that's not much good in a real environment.</p>
| <p>You've defined 2 listeners: <code>PLAINTEXT</code> and <code>EXTERNAL</code>, and you've mapped both to <code>SASL_PLAINTEXT</code>.</p>
<p>Is this really what you wanted to do? Or did you want <code>PLAINTEXT</code> to not require SASL but just be plaintext?</p>
<ul>
<li><p>If you really want both to be SASL, then both of them need a JAAS configuration. In your question, I only see a JAAS configuration for EXTERNAL:</p>
<pre><code>listener.name.EXTERNAL.scram-sha-256.sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="password";
</code></pre>
<p>As you've mapped <code>PLAINTEXT</code> to SASL_PLAINTEXT, it also requires a JAAS configuration. You can specify it using for example:</p>
<pre><code> listener.name.PLAINTEXT.scram-sha-256.sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="password";
</code></pre></li>
<li><p>If you wanted your <code>PLAINTEXT</code> listener to actually be Plaintext without SASL, then you need to update the listener mapping:</p>
<pre><code>listener.security.protocol.map: |-
PLAINTEXT:PLAINTEXT,EXTERNAL:SASL_PLAINTEXT
</code></pre></li>
</ul>
|
<p>I have a K8s config map that defines an ENVIRONMENT parameter.</p>
<p>That value is mounted as an environment variable on the deployment yaml using an excerpt in src/fabric8/deployment.yml:</p>
<pre><code>spec:
template:
spec:
containers:
- env:
- name: "ENVIRONMENT"
valueFrom:
configMapKeyRef:
name: global-configmap
key: ENVIRONMENT
</code></pre>
<p>I would like to use that ENVIRONMENT env variable to configure the <code>spring.active.profiles</code> property.</p>
<p>Is it supported in some way by fabric8 maven plugin? If not, can you suggest some workaround?</p>
| <p>One thing to note first is that the name given to the environment variable injected into the Pod and the key being used from the configmap don't have to match. So you could do:</p>
<pre><code> - env:
- name: SPRING_PROFILES_ACTIVE
valueFrom:
configMapKeyRef:
name: global-configmap
key: ENVIRONMENT
</code></pre>
<p>If ENVIRONMENT is a key within the configmap called <code>global-configmap</code>. If it's not then naturally you want to use whatever the key is that matches the value you're looking for (something like spring.profiles.active might be clearer if possible but from your description it sounds like you have an existing configmap called global-configmap with the key called ENVIRONMENT). I expect you'll need to call the environment variable (the name section) SPRING_PROFILES_ACTIVE because this will match to the property spring.profiles.active by <a href="https://github.com/spring-projects/spring-boot/wiki/Relaxed-Binding-2.0" rel="noreferrer">relaxed binding</a>.</p>
<p>Alternatively, you do have other options with the fabric8 maven plugin, which it seems you're using for generation. You could simply <a href="https://maven.fabric8.io/#how-do-i-define-an-environment-variable" rel="noreferrer">set an environment variable directly</a> or set the spring.profiles.active value directly in your property file, <a href="https://maven.fabric8.io/#how-do-i-mount-a-config-file-from-a-configmap" rel="noreferrer">which you could mount as a configmap</a>.</p>
|
<p>Is there a way to tell Kubernetes to just destroy a pod and create a new one if the liveness probe fails? What I see from logs now: my node js application is just restarted and runs in the same pod.</p>
<p>The liveness probe is defined in my YAML specification as follows:</p>
<pre><code>livenessProbe:
httpGet:
path: /app/check/status
port: 3000
httpHeaders:
- name: Accept
value: application/x-www-form-urlencoded
initialDelaySeconds: 60
periodSeconds: 60
</code></pre>
<p>Disclaimer:<br></p>
<p>I am fully aware that recreating a pod if a liveness probe fails is probably not the best idea, and the right way would be to get a notification that something is going on.</p>
| <p>Liveness and readiness probes are defined on <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/#define-a-liveness-command" rel="noreferrer">containers, not pods</a>. So if you have 1 container in your pod and you specify <code>restartPolicy: Never</code>, then your pod will go into a <code>Failed</code> state when the probe fails and will be scrapped at some point, based on the <code>terminated-pod-gc-threshold</code> value.</p>
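<p>A minimal sketch of that single-container case (the image name is a placeholder; note that this only applies to bare Pods or Jobs - pods managed by a Deployment must use <code>restartPolicy: Always</code>):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-node-app
spec:
  restartPolicy: Never        # a container killed by the liveness probe is not restarted
  containers:
  - name: app
    image: <your-image>
    livenessProbe:
      httpGet:
        path: /app/check/status
        port: 3000
      initialDelaySeconds: 60
      periodSeconds: 60
</code></pre>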
<p>If you have more than one container in your pod it becomes trickier, because your other container(s) keep running, leaving the pod in the <code>Running</code> status. You can build your own automation, or try <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-readiness-gate" rel="noreferrer">Pod Readiness</a>, which is still in alpha as of this writing.</p>
|
<p>I have Istio set up with mTLS globally enabled. I have verified it by bashing into a pod without an Envoy sidecar and running a curl command over HTTP, which failed, and then running a curl over HTTPS using the Istio certs, which worked.</p>
<p>When I port-forward to a service, e.g. <code>kubectl port-forward svc/my-svc 8080:80</code>, I am able to access my app by going to <code>http://localhost:8080</code>.
I would expect that this wouldn't work, since mTLS is being enforced.
How does this port-forward in Kubernetes work? Does it go directly to the node, thereby bypassing the sidecar?</p>
| <p>I found out that <strong>Istio</strong> works in parallel with the standard Kubernetes network services and does not affect traffic in the case of <code>port-forwarding</code>.</p>
<p>The <strong>Istio</strong> network service creates iptables rules exactly the same way that <code>kubectl port-forward</code> or any other standard command does.</p>
<p>Unfortunately, I did not find any official documentation with an explanation of how it works.</p>
|
<p>Is there a nice way to delete a specific key from a Kubernetes configmap, using kubectl?</p>
<p>Right now I run: </p>
<pre><code>$ kubectl edit configmap myconfigmap
</code></pre>
<p>and then I delete the entry but I would like a solution that can be run as a script.</p>
| <p>This works, but I wonder if there is a simpler way:</p>
<pre><code>$ kubectl patch configmap myconfigmap --type=json -p='[{"op": "remove", "path": "/data/mykey"}]'
</code></pre>
|
<p>I'd like to launch a pod which runs all the time in the cluster and can be used by users as a jump box for the k8s cluster. So I have created a <code>Deployment</code> and run a pod. However, I'm not sure what the <code>CMD</code> for this container should be. I could use <code>/bin/bash</code> or <code>/bin/sh</code>; however, when the user wants to attach to it, that would create another shell inside the pod, which I would not want. Any suggestions?</p>
<p><strong>Update</strong></p>
<p>I've put as <code>CMD</code> the following one:</p>
<pre><code>CMD [ "/bin/sh", "-c", "trap : TERM INT; (while true; do sleep 1000; done) & wait" ]
</code></pre>
<p>When I try to attach to the pod I get this:</p>
<pre><code>Unable to use a TTY - container test did not allocate one
If you don't see a command prompt, try pressing enter.
</code></pre>
<p>And pressing enter doesn't help.</p>
| <p>If you want a jump box, why not specify the <code>CMD</code> to be <code>sshd</code>, just like <a href="https://docs.docker.com/engine/examples/running_ssh_service/" rel="nofollow noreferrer">here</a>?</p>
<p>That should be able to allocate TTYs. </p>
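<p>A rough sketch of such an image, loosely based on the linked example (the user name and password are placeholders you should replace, ideally with key-based authentication):</p>
<pre><code>FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server \
 && mkdir /var/run/sshd
# non-root user for the jump box (example credentials - change them)
RUN useradd -m -s /bin/bash jump && echo 'jump:change-me' | chpasswd
EXPOSE 22
# run sshd in the foreground so the container keeps running
CMD ["/usr/sbin/sshd", "-D"]
</code></pre>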
|
<p>By mistake I created a service account to give admin permission for dashboard. But now I am unable to delete it.</p>
<p>The reason I want to get rid of that service account is that when I follow the steps here <a href="https://github.com/kubernetes/dashboard" rel="noreferrer">https://github.com/kubernetes/dashboard</a> and jump to the URL, it doesn't ask for a config/token anymore.</p>
<pre><code>$ kubectl get serviceaccount --all-namespaces | grep dashboard
NAMESPACE NAME SECRETS AGE
kube-system kubernetes-dashboard 1 44m
$ kubectl delete serviceaccount kubernetes-dashboard
Error from server (NotFound): serviceaccounts "kubernetes-dashboard" not found
</code></pre>
| <p>You have to specify the namespace when deleting it:</p>
<pre><code>kubectl delete serviceaccount -n kube-system kubernetes-dashboard
</code></pre>
|
<p>I'm installing Prometheus on GKE with Helm using the standard chart as in</p>
<p><code>helm install -n prom stable/prometheus --namespace hal</code></p>
<p>but I need to be able to pull up the Prometheus UI in the browser. I know that I can do it with port forwarding, as in</p>
<p><code>kubectl port-forward -n hal svc/prom-prometheus-server 8000:80</code></p>
<p>but I'm being told "No, just expose it." Of course, there's already a service so just doing</p>
<p><code>kubectl expose deploy -n hal prom-prometheus-server</code> </p>
<p>isn't going to work. I assume there's some value I can set in values.yaml that will give me an external IP, but I can't figure out what it is. </p>
<p>Or am I misunderstanding when they tell me "Just expose it"?</p>
| <p>It is generally a very bad idea to expose Prometheus itself as it has no authentication mechanism, but you can absolutely set up a LoadBalancer service or Ingress aimed at the HTTP port if you want.</p>
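<p>If you do want to go that way despite the warning above, a sketch - assuming the chart version you are using exposes <code>server.service.type</code> in its values (check with <code>helm inspect values stable/prometheus</code>):</p>
<pre><code># switch the existing release's server service to a LoadBalancer
$ helm upgrade prom stable/prometheus --namespace hal \
    --reuse-values --set server.service.type=LoadBalancer
</code></pre>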
<p>More commonly (and supported by the chart) you'll use Grafana for the public view and only connect to Prom itself via port-forward when needed for debugging.</p>
|
<p>I have a jenkins instance created using <code>docker run -d -v /Users/dlovison/Documents/DockerVolumes/jenkins_home:/var/jenkins_home -p 8080:8080 -p 50000:50000 jenkins/jenkins:lts</code> deployed only on my local environment</p>
<p>I would like to connect it to my remote OpenShift instance (openshift.com).</p>
<p>I followed the tutorial <a href="http://v1.uncontained.io/playbooks/continuous_delivery/external-jenkins-integration.html" rel="nofollow noreferrer">http://v1.uncontained.io/playbooks/continuous_delivery/external-jenkins-integration.html</a> and all steps are working except when my local Jenkins tries to connect via JNLP.</p>
<p>The errors are:</p>
<pre><code>Waiting for Pod to be scheduled (65/100): jenkins-slave-j8gmr-28v7x
Container is waiting jenkins-slave-j8gmr-28v7x [jnlp]: ContainerStateWaiting(message=Error response from daemon: create 110861e4dd42a6343dbd584f09a7967e714feb0155cf070aff167495c01ada39: error while creating volume path '/var/lib/docker/volumes/110861e4dd42a6343dbd584f09a7967e714feb0155cf070aff167495c01ada39/_data': mkdir /var/lib/docker/volumes/110861e4dd42a6343dbd584f09a7967e714feb0155cf070aff167495c01ada39: permission denied, reason=CreateContainerError, additionalProperties={})
Waiting for Pod to be scheduled (66/100): jenkins-slave-j8gmr-28v7x
Container is waiting jenkins-slave-j8gmr-28v7x [jnlp]: ContainerStateWaiting(message=Error response from daemon: create 110861e4dd42a6343dbd584f09a7967e714feb0155cf070aff167495c01ada39: error while creating volume path '/var/lib/docker/volumes/110861e4dd42a6343dbd584f09a7967e714feb0155cf070aff167495c01ada39/_data': mkdir /var/lib/docker/volumes/110861e4dd42a6343dbd584f09a7967e714feb0155cf070aff167495c01ada39: permission denied, reason=CreateContainerError, additionalProperties={})
</code></pre>
<p>My service account has the roles 'admin' and 'edit'.</p>
<p>Here is my pipeline</p>
<pre><code>apiVersion: v1
kind: BuildConfig
metadata:
name: sample-pipeline-v4
labels:
name: sample-pipeline-v4
spec:
strategy:
type: JenkinsPipeline
jenkinsPipelineStrategy:
env:
- name: "FOO"
value: "BAR"
jenkinsfile: |-
def label = "diego-pod-${UUID.randomUUID().toString()}"
podTemplate(label: label, containers: [
containerTemplate(name: 'maven', image: 'registry.access.redhat.com/openshift3/jenkins-slave-maven-rhel7', ttyEnabled: true, command: 'cat')
]) {
node(label) {
stage('Build a Maven project') {
git 'https://github.com/jenkinsci/kubernetes-plugin.git'
container('maven') {
sh 'mvn -B clean package'
}
}
}
}
</code></pre>
<p>I have the plugin <a href="https://github.com/openshift/jenkins-sync-plugin/" rel="nofollow noreferrer">https://github.com/openshift/jenkins-sync-plugin/</a> that does all the hard work.</p>
| <p>Basically, it cannot create a volume on your Kubernetes/OpenShift cluster. It's unclear from the question where your OpenShift cluster is running, so creating a volume will depend on your cloud/StorageClass. Can you try these commands?</p>
<pre><code>oc get pvc
oc get pv
</code></pre>
<p>With the output</p>
<pre><code>oc describe pvc <name-from-previous-step>
oc describe pv <name-from-previous-step>
</code></pre>
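<p>If no PV ever binds, it can also help to check whether the cluster has a (default) StorageClass available for dynamic provisioning:</p>
<pre><code>oc get storageclass
</code></pre>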
|
<p>I have an Angular 6 application that I'm required to deploy onto a Kubernetes cluster as a Docker container (Nginx Base Image).</p>
<p><strong>Question</strong>: How do I accommodate using the same Docker image that's built once, but to be able to run in dev & prod and point to different API_URL?</p>
<p>My <strong>Environment Files</strong>:</p>
<p><em>environment.ts</em></p>
<pre><code>export const environment = {
production: false,
API_URL: 'https://dev-server.domain.com'
};
</code></pre>
<p><em>environment.prod.ts</em></p>
<pre><code>export const environment = {
production: true,
API_URL: 'https://prod-server.domain.com'
};
</code></pre>
<p>My <strong>nginx.conf</strong></p>
<pre><code>worker_processes 1;
events {
worker_connections 1024;
}
http {
server {
listen 80;
server_name localhost;
root /usr/share/nginx/html;
index index.html index.htm;
include /etc/nginx/mime.types;
gzip on;
gzip_min_length 1000;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;
location / {
try_files $uri $uri/ /index.html;
}
}
}
</code></pre>
<p>My <strong>Dockerfile</strong>:</p>
<pre><code># base image
FROM nginx:alpine
# private and public mapping
EXPOSE 80
COPY nginx.conf /etc/nginx/nginx.conf
WORKDIR /usr/share/nginx/html
COPY dist/my_project .
</code></pre>
<p>At this point, I build my project on my local machine and optimize it for production:</p>
<pre><code>ng build --prod --build-optimizer
</code></pre>
<p>So, at this point my file structure looks like the following:</p>
<pre><code>root
|--Dockerfile
|--nginx.conf
|--angular.json
|--dist
|--my_project
|--styles.3ed0e5393a1386f6fc48.css
|--runtime.a66f828dca56eeb90e02.js
|--polyfills.2f5aa8fb3d2aea854d83.js
|--main.2b3e9b16d82428586ae5.js
|--index.html
|--favicon.ico
|--3rdpartylicenses.txt
|--assets
</code></pre>
<p>Now, I'm ready to create the docker image:</p>
<pre><code>docker build -t my_application_image .
</code></pre>
<p>Then, I push the image to the Docker registry, and it is then used in Kubernetes.</p>
<p>Now that you know my setup, can you suggest how I can modify this such that different environment files can be used for dev & prod?</p>
| <p>One way to solve it is to add an entrypoint script to your Docker image together with default prod values as environment variables. Then use only one environment.ts with placeholders, something like:</p>
<pre><code>export const environment = {
production: {{production}},
API_URL: '{{api_url}}'
};
</code></pre>
<p>Dockerfile with</p>
<pre><code>ENV PRODUCTION=true API_URL=https://prod-server.domain.com
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
# exec-form ENTRYPOINT resets the base image's CMD, so restate nginx's CMD
ENTRYPOINT ["/entrypoint.sh"]
CMD ["nginx", "-g", "daemon off;"]
</code></pre>
<p>and finally the entrypoint itself (it has to patch the built bundles, since only the <code>dist</code> output is copied into the image):</p>
<pre><code>#!/bin/sh
# nginx:alpine ships /bin/sh, not bash. After `ng build` the environment values
# live in the compiled bundles, so substitute the placeholders there. Using '|'
# as the sed delimiter avoids clashing with the slashes in API_URL.
cd /usr/share/nginx/html
sed -i "s|{{production}}|${PRODUCTION}|g" main.*.js
sed -i "s|{{api_url}}|${API_URL}|g" main.*.js
exec "$@"
</code></pre>
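<p>With that in place, each environment can set the variables in its Kubernetes Deployment instead of baking them into the image; a sketch (names and URLs are illustrative):</p>
<pre><code># dev Deployment: override the prod defaults from the Dockerfile
containers:
- name: my-app
  image: my_application_image:latest
  env:
  - name: PRODUCTION
    value: "false"
  - name: API_URL
    value: "https://dev-server.domain.com"
</code></pre>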
|
<p>I want to access my Kubernetes cluster API in Go to run the <code>kubectl</code> command and get the available namespaces in my k8s cluster, which is running on Google Cloud.</p>
<p>My sole purpose is to get the namespaces available in my cluster by running a <code>kubectl</code> command; kindly let me know if there is any alternative.</p>
| <p>You can start with <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer"><code>kubernetes/client-go</code></a>, the Go client for Kubernetes, made for talking to a kubernetes cluster. (not through kubectl though: directly through the Kubernetes API)</p>
<p>It includes a <a href="https://github.com/kubernetes/client-go/blob/03bfb9bdcfe5482795b999f39ca3ed9ad42ce5bb/listers/core/v1/namespace.go#L28-L35" rel="nofollow noreferrer"><code>NamespaceLister</code>, which helps list Namespaces</a>.</p>
<p>See "<a href="https://medium.com/programming-kubernetes/building-stuff-with-the-kubernetes-api-part-4-using-go-b1d0e3c1c899" rel="nofollow noreferrer">Building stuff with the Kubernetes API — Using Go</a>" from <strong><a href="https://twitter.com/VladimirVivien" rel="nofollow noreferrer">Vladimir Vivien</a></strong></p>
<p><a href="https://i.stack.imgur.com/xVxMG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xVxMG.png" alt="https://cdn-images-1.medium.com/max/1000/1*4noxYkVSvMPmlbt1kSOIHw.png"></a></p>
<p><a href="https://stackoverflow.com/users/396567/michael-hausenblas">Michael Hausenblas</a> (Developer Advocate at Red Hat) proposes <a href="https://stackoverflow.com/questions/52325091/how-to-access-the-kubernetes-api-in-go-and-run-kubectl-commands#comment91597517_52325186">in the comments</a> documentations with <a href="http://using-client-go.cloudnative.sh/" rel="nofollow noreferrer"><code>using-client-go.cloudnative.sh</code></a></p>
<blockquote>
<p>A versioned collection of snippets showing how to use <code>client-go</code>. </p>
</blockquote>
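<p>For the concrete task of listing namespaces, here is a minimal sketch using <code>client-go</code> (the kubeconfig path is an assumption, and newer client-go releases also require a <code>context.Context</code> as the first argument to <code>List</code>):</p>
<pre><code>package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client config from a kubeconfig file (gcloud writes one after
	// `gcloud container clusters get-credentials`).
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/.kube/config")
	if err != nil {
		panic(err)
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Equivalent of `kubectl get namespaces`.
	nsList, err := clientset.CoreV1().Namespaces().List(metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, ns := range nsList.Items {
		fmt.Println(ns.Name)
	}
}
</code></pre>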
|
<p>For our use-case, we need to access a lot of services via NodePort. By default, the NodePort range is 30000-32767. With <strong>kubeadm</strong>, I can set the port range via <em>--service-node-port-range</em> flag.</p>
<p>We are using Google Kubernetes Engine (GKE) cluster. How can I set the port range for a GKE cluster?</p>
| <p>In GKE, the control plane is managed by Google. This means you don't get to set things on the API Server yourself. That being said, I <em>believe</em> you can use the <code>kubemci</code> CLI tool to achieve it, see <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/multi-cluster-ingress" rel="nofollow noreferrer">Setting up a multi-cluster Ingress</a>.</p>
|
<p>The Kubernetes "Run to Completion" documentation says that jobs can be run in parallel, but is it possible to chain together a series of jobs that should run in sequential order (parallel and/or non-parallel)?</p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/</a></p>
<hr>
<p>Or is it up to the user to keep track of which jobs have finished and triggering the next job using a PubSub messaging service?</p>
| <p>I have used initContainers under the PodSpec in the past to solve problems like this: <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/init-containers/</a></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: myapp-pod
labels:
app: myapp
spec:
containers:
- name: myapp-container
image: busybox
command: ['sh', '-c', 'echo The app is running! && sleep 3600']
initContainers:
- name: init-myservice
image: busybox
command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
- name: init-mydb
image: busybox
command: ['sh', '-c', 'until nslookup mydb; do echo waiting for mydb; sleep 2; done;']
</code></pre>
<p>Also take a look here; chaining containers via the proposed "depends" keyword has been discussed as an option:</p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/1996" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/1996</a></p>
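<p>If you drive the jobs from outside the cluster (for example from a CI pipeline), <code>kubectl wait</code> can also provide the sequencing without any messaging service; a sketch with illustrative job names:</p>
<pre><code>kubectl apply -f job-a.yaml
# Block until job-a has completed (or give up after 10 minutes)
kubectl wait --for=condition=complete job/job-a --timeout=600s
kubectl apply -f job-b.yaml
</code></pre>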
|
<p>I'm using a Helm chart to deploy an app in the Kubernetes. After deployment, I want to copy a file from the chart repository to the container. </p>
<p>Currently I am doing this manually:</p>
<pre><code>kubectl cp custom-samples.json che-8467596d54-7c2hg:/data/templates
</code></pre>
<p>But I want to make this step a part of the deployment that will be performed automatically. Note that I took a look at <code>post-install</code> hooks but I'm not sure it's a good solution.</p>
<p>[UPD] I created this init container:</p>
<pre><code> - name: add-custom-samples
image: alpine:3.5
command: ["sh", "-c", "cd /data/templates; touch custom.json;"]
volumeMounts: [{
"mountPath": "/data",
"name": "che-data-volume"
}]
</code></pre>
<p>But the file <code>custom.json</code> is missing in the mounted volume.</p>
| <p><code>kubectl cp</code> (in the form you're using it) takes a file from the host on which it runs. If you are always initiating deployment from that same host, you could (in theory) arrange for the file to be copied - either by having <code>kubectl cp ...</code> in a script that you use to control the deployment or by setting up a watch and performing the copy every time a new container appears in the deployment that needs it.</p>
<p>However, it is probably better to have the pods <strong>get</strong> that file by themselves (e.g., over http or from github) as part of their startup. If you have control of the container's startup (e.g., you own its code or you can specify the command that it runs), you can do that easily. You could also define an init container that runs before the container that does the real work and have it pull the file into a volume that's shared with the main container.</p>
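<p>A sketch of that init-container approach (image, URL, and paths are illustrative): the init container downloads the file into an <code>emptyDir</code> volume that the main container mounts as well.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app-with-fetched-file
spec:
  volumes:
  - name: templates
    emptyDir: {}
  initContainers:
  - name: fetch-samples
    image: alpine:3.8
    # busybox wget is enough to pull a single file into the shared volume
    command: ["sh", "-c", "wget -O /data/templates/custom-samples.json https://example.com/custom-samples.json"]
    volumeMounts:
    - name: templates
      mountPath: /data/templates
  containers:
  - name: main
    image: my-app-image
    volumeMounts:
    - name: templates
      mountPath: /data/templates
</code></pre>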
|
<p>We try to deploy the hello world application from istio (booking info).</p>
<h1>Environment</h1>
<p>Region: Ireland
Service: EKS v2
Istio: 1.0.1
Helm:</p>
<pre><code> Client: Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
</code></pre>
<h1>Context</h1>
<p>We have intalled istio 1.0.1 with helm, with this command:</p>
<pre><code>helm install install/kubernetes/helm/istio --name istio --namespace istio-system --set sidecarInjectorWebhook.enabled=true --set galley.enabled=true
</code></pre>
<p>We also tried to install istio without galley and without auto sidecar injection, without success. Our ingress controller does not obtain an IP.</p>
<p>But unfortunately, our istio-ingressgateway has no external IP. The status PENDING means that the platform (here AWS) can't create a LoadBalancer.
That shouldn't be the case, because we were also successful doing this in region Oregon with EKS v1, where the LoadBalancer was created.</p>
<pre><code>kubectl get services -n istio-system -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) SELECTOR
istio-ingressgateway LoadBalancer 172.20.195.15 <pending> 80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:31020/TCP,8060:30312/TCP,853:31767/TCP15030:32216/TCP,15031:32384/TCP 17h app=istio-ingressgateway,istio=ingressgateway
</code></pre>
| <p>A public subnet in EKS is needed for the load balancer.
After adding a public subnet, everything works fine.</p>
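<p>For reference, these are the subnet tags AWS looks at when picking subnets for an internet-facing load balancer (the cluster name is a placeholder):</p>
<pre><code># on the public subnets of the VPC used by the EKS cluster
kubernetes.io/cluster/<your-cluster-name> = shared
kubernetes.io/role/elb                    = 1
</code></pre>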
|
<p><strong>NGINX Ingress controller version:</strong> 0.18.0
<strong>Kubernetes version (use kubectl version):</strong> 1.11.1
<strong>Cloud provider or hardware configuration:</strong> Azure (AKS)
<strong>Install tools:</strong> helm</p>
<p><strong>What happened:</strong> The Ingress controller gives a 400 error with a certain GET when a request url/header is "too long".</p>
<p><strong>What you expected to happen:</strong>
The request is passed on to the correct service and pod.</p>
<p><strong>How to reproduce it (as minimally and precisely as possible):</strong>
We use Keycloak for authentication. When a user logs in, a GET is done with an access token that is generated by Keycloak. The access token gives a user certain rights within the application. Users have roles that give them additional permissions. When a user has a lot of roles, the access token gets significantly longer, which causes the 400 on the ingress controller. With a user that has fewer roles it works fine, and we can see the GET request being passed on to the right service.</p>
<p><strong>Anything else we need to know:</strong>
We already tried adjusting the header buffer sizes from 4k to 8k and 16k, but that didn't change anything.
I set the ingress controller loglevel to debug, but it doesn't give any more info on the 400 error.</p>
| <p>We solved this issue by also raising the header limit on the backend service, a Spring Boot service with embedded Tomcat, via <code>server.max-http-header-size: 64000</code>. So both the Nginx ingress settings and the backend service settings had to be raised.</p>
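<p>For reference, a sketch of the two settings; the ConfigMap keys assume the ingress-nginx options, and the ConfigMap name/namespace depend on how the controller was installed:</p>
<pre><code># ingress-nginx controller ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  large-client-header-buffers: "4 64k"
  proxy-buffer-size: "64k"
</code></pre>
<p>and on the Spring Boot backend (application.yml):</p>
<pre><code>server:
  max-http-header-size: 64000
</code></pre>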
|
<p>I'm using a Helm chart to deploy an app in the Kubernetes. After deployment, I want to copy a file from the chart repository to the container. </p>
<p>Currently I am doing this manually:</p>
<pre><code>kubectl cp custom-samples.json che-8467596d54-7c2hg:/data/templates
</code></pre>
<p>But I want to make this step a part of the deployment that will be performed automatically. Note that I took a look at <code>post-install</code> hooks but I'm not sure it's a good solution.</p>
<p>[UPD] I created this init container:</p>
<pre><code> - name: add-custom-samples
image: alpine:3.5
command: ["sh", "-c", "cd /data/templates; touch custom.json;"]
volumeMounts: [{
"mountPath": "/data",
"name": "che-data-volume"
}]
</code></pre>
<p>But the file <code>custom.json</code> is missing in the mounted volume.</p>
| <p>You can <a href="https://docs.helm.sh/chart_template_guide/#accessing-files-inside-templates" rel="noreferrer">include your file in the Helm chart</a>. You'd generally include that in a Kubernetes ConfigMap object, which can then be <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#add-configmap-data-to-a-volume" rel="noreferrer">mounted in a Pod as a volume</a>.</p>
<p>You need to move the file to somewhere in the Helm chart directory; say it's in <code>charts/mychart/files/custom-samples.json</code>. You can create a ConfigMap in, say, <code>charts/mychart/templates/configmap.yaml</code> that would look like</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-configmap
data:
custom-samples.json" |-
{{ .Files.Get "custom-samples.json" | indent 4 }}
</code></pre>
<p>Then in your Deployment's Pod spec, you'd reference this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
spec:
template:
spec:
volumes:
- name: config
configMap:
name: {{ .Release.Name }}-configmap
containers:
- name: ...
volumeMounts:
- name: config
mountPath: /data/templates
</code></pre>
<p>Note that this approach causes the file to be stored as a Kubernetes object, and there are somewhat modest size limits; something that looks like a text file and is sized in kilobytes should be fine. Also, if there are other files in the <code>/data/templates</code> directory, this approach will cause them to be hidden in favor of whatever's in the ConfigMap.</p>
|