<p>When deploying my system using Kubernetes, the autofs service is not running in the container.</p>
<p>running <code>service autofs status</code> returns the following error:</p>
<p>[FAIL] automount is not running ... failed!</p>
<p>running <code>service autofs start</code> returns the following error:</p>
<p>[....] Starting automount.../usr/sbin/automount: test mount forbidden or incorrect kernel protocol version, kernel protocol version 5.00 or above required.
failed (no valid automount entries defined.).</p>
<ul>
<li>/etc/fstab file does exist in my file system.</li>
</ul>
|
<p>You probably didn't load the kernel module for it. Documentation: <a href="https://wiki.archlinux.org/index.php/Autofs" rel="nofollow noreferrer">autofs</a>.</p>
<p>Another possible reason for this error is that the <strong>/tmp</strong> directory is not present, or its permissions/ownership are wrong.
Also check that your <strong>/etc/fstab</strong> file exists.</p>
<p>Useful blog: <a href="https://www.linuxtechi.com/automount-nfs-share-in-linux-using-autofs/" rel="nofollow noreferrer">nfs-autofs</a>.</p>
|
<p>I am trying to mount a folder owned by a non-root user (xxxuser) in Kubernetes, and I use hostPath for the mount. But whenever the container starts, the folder is owned by user 1001, not xxxuser. It is always started with user 1001. How can I mount this folder as xxxuser?</p>
<p>There are many types of volumes, but I use hostPath. Before starting, I change the folder's user and group with the chown and chgrp commands, and then mount this folder as a volume. The container starts and I check the owner of the folder, but it is always user 1001. Such as:</p>
<p>drwxr-x---. 2 1001 1001 70 May 3 14:15 configutil/</p>
<pre><code>volumeMounts:
- name: configs
mountPath: /opt/KOBIL/SSMS/home/configutil
volumes:
- name: configs
hostPath:
path: /home/ssmsuser/configutil
type: Directory
</code></pre>
<p>What I want to see instead is:</p>
<p>drwxr-x---. 2 xxxuser xxxuser 70 May 3 14:15 configutil/</p>
|
<p>You may specify the desired owner of mounted volumes using following syntax:</p>
<pre><code>spec:
securityContext:
fsGroup: 2000
</code></pre>
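<p>As a minimal sketch of where that snippet fits (the UID/GID values and image are placeholders; use the actual IDs of xxxuser), the security context sits at the pod level next to the containers:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  securityContext:
    runAsUser: 1000   # placeholder: UID of xxxuser
    fsGroup: 2000     # placeholder: GID of xxxuser
  containers:
  - name: app
    image: image_name:tag   # placeholder image
    volumeMounts:
    - name: configs
      mountPath: /opt/KOBIL/SSMS/home/configutil
  volumes:
  - name: configs
    hostPath:
      path: /home/ssmsuser/configutil
      type: Directory
</code></pre>
<p>Note that <code>fsGroup</code> only changes ownership on volume types that support ownership management; for a hostPath volume you may still need to chown the directory on the node itself.</p>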
|
<p>How does an ingress forward https traffic to port 443 of the service (and eventually to 8443 on my container)? Do I have to make any changes to my ingress, or is this done automatically?</p>
<p>On GCP, I have a layer 4 balancer -> nginx-ingress controller -> ingress</p>
<p>My ingress is:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-keycloak
annotations:
kubernetes.io/ingress.class: "nginx"
certmanager.k8s.io/issuer: "letsencrypt-prod"
certmanager.k8s.io/acme-challenge-type: http01
spec:
tls:
- hosts:
- mysite.com
secretName: staging-iam-tls
rules:
- host: mysite.com
http:
paths:
- path: /auth
backend:
serviceName: keycloak-http
servicePort: 80
</code></pre>
<p>I searched online but I don't see explicit examples of hitting 443. It's always 80 (or 8080).</p>
<p>My service <code>keycloak-http</code> is (elided; my container is actually listening on 8443):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: 2019-05-15T12:45:58Z
labels:
app: keycloak
chart: keycloak-4.12.0
heritage: Tiller
release: keycloak
name: keycloak-http
namespace: default
..
spec:
clusterIP: ..
externalTrafficPolicy: Cluster
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: 8443
selector:
app: keycloak
release: keycloak
sessionAffinity: None
type: NodePort
status:
loadBalancer: {}
</code></pre>
|
<p>Try this: add the <code>nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"</code> annotation so the controller talks HTTPS to the upstream, and point <code>servicePort</code> at 443:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-keycloak
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
certmanager.k8s.io/issuer: "letsencrypt-prod"
certmanager.k8s.io/acme-challenge-type: http01
spec:
tls:
- hosts:
- mysite.com
secretName: staging-iam-tls
rules:
- host: mysite.com
http:
paths:
- path: /auth
backend:
serviceName: keycloak-http
servicePort: 443
</code></pre>
|
<p>I am new to Kubernetes and I am experimenting with these pods.
I have 3 pods running on 3 different nodes. One of the pod's apps is showing 90+% usage and I want to create a health check for that.
Is there any way to create a health check in Kubernetes?
If I set an 80% CPU limit, will Kubernetes create a new pod or not?</p>
|
<p>You need a <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaler</a> to scale pods. There is a simple <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">guide</a> that will walk you through creating one. Here's a resource example from the mentioned guide:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: php-apache
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: php-apache
minReplicas: 1
maxReplicas: 10
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 50
</code></pre>
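<p>Note that the HPA needs a metrics source: the walkthrough assumes the metrics-server is running in the cluster. The same autoscaler can also be created imperatively, as in the guide:</p>
<pre><code>kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
</code></pre>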
|
<p>We have a platform built using a microservices architecture, which is deployed using Kubernetes and Ingress. One of the platform's components is a Django REST API.
The yaml for the Ingress is below (I have changed only the service names & endpoints):</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: dare-ingress
annotations:
kubernetes.io/ingress.provider: nginx
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/rewrite-target: /$1
certmanager.k8s.io/issuers: "letsencrypt-prod"
certmanager.k8s.io/acme-challenge-type: http01
spec:
tls:
- hosts:
- demo-test.com
secretName: dare-ingress-tls
rules:
- host: demo-test.com
http:
paths:
- path: /prov/?(.*)
backend:
serviceName: prov
servicePort: 8082
- path: /(prov-ui/?(.*))
backend:
serviceName: prov-ui
servicePort: 8080
- path: /flask-api/?(.*)
backend:
serviceName: flask-api
servicePort: 80
- path: /django-rest/?(.*)
backend:
serviceName: django-rest
servicePort: 8000
</code></pre>
<p>The django component is the last one. I have a problem with the swagger documentation. While all the REST calls work fine, when I want to view the documentation the page does not load. This is because it requires login and the redirection to the documentation does not work.</p>
<p>I mean that, without Ingress the documentation url is for example: <a href="https://demo-test.com/docs" rel="nofollow noreferrer">https://demo-test.com/docs</a> but using Ingress, the url should be <a href="https://demo-test.com/django-rest/login" rel="nofollow noreferrer">https://demo-test.com/django-rest/login</a> and then <a href="https://demo-test.com/django-rest/docs" rel="nofollow noreferrer">https://demo-test.com/django-rest/docs</a> but the redirection does not work, I get a 404 error.
Does anyone have any idea how to fix this in Ingress?</p>
|
<p>I managed to fix the redirection error (and stay logged in) using the FORCE_SCRIPT_NAME as suggested in a comment in this <a href="https://stackoverflow.com/questions/58723145/hosting-django-on-aks-behind-nginx-ingress">thread</a> </p>
<p>However, now the swagger documentation is not properly loaded:</p>
<p><a href="https://i.stack.imgur.com/APvBG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/APvBG.png" alt="enter image description here"></a></p>
<p>I followed the suggestions from <a href="https://github.com/marcgibbons/django-rest-swagger/issues/758" rel="nofollow noreferrer">here</a> regarding the swagger documentation but still the page cannot be loaded properly. Any ideas?</p>
|
<p>I want to do the opposite of this question:</p>
<p><a href="https://stackoverflow.com/questions/46763148/how-to-create-secrets-using-kubernetes-python-client">How to create secrets using Kubernetes Python client?</a></p>
<p>i.e.:</p>
<p><strong>How do I read an existing secret from a kubernetes cluster via the kubernetes-python API?</strong></p>
<p>The use case is: I want to authenticate to mongodb (running in my cluster) from a jupyter notebook (also running in my cluster) without, for obvious reasons, saving the mongodb auth password inside the jupyter notebook.</p>
<p>Thanks!</p>
|
<ol>
<li>Install <a href="https://github.com/kubernetes-client/python#kubernetes-python-client" rel="noreferrer">Kubernetes client</a> for python</li>
<li>Now you can pull the secret. For example secret name - <code>mysql-pass</code>, namespace - <code>default</code></li>
</ol>
<pre><code>from kubernetes import client, config
config.load_kube_config()
v1 = client.CoreV1Api()
secret = v1.read_namespaced_secret("mysql-pass", "default")
print(secret)
</code></pre>
<ol start="3">
<li>If you need to extract the decoded password from the secret:</li>
</ol>
<pre><code>from kubernetes import client, config
import base64

config.load_kube_config()
v1 = client.CoreV1Api()
# .data is a dict whose values are base64-encoded strings
sec = v1.read_namespaced_secret("mysql-pass", "default").data
# decode the (single) value; index by key instead if the secret holds several entries
pas = base64.b64decode(next(iter(sec.values()))).decode("utf-8")
print(pas)
</code></pre>
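<p>Since the notebook in your use case runs inside the cluster, you can also authenticate with the pod's service account instead of a kubeconfig file (the service account needs RBAC permission to read secrets); the rest of the code stays the same:</p>
<pre><code>from kubernetes import client, config

# inside a pod: use the mounted service account token instead of ~/.kube/config
config.load_incluster_config()
v1 = client.CoreV1Api()
secret = v1.read_namespaced_secret("mysql-pass", "default")
</code></pre>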
<p>Hope this will help.</p>
|
<p>I have a bare-metal kubernetes cluster (1.13) and am running nginx ingress controller (deployed via helm into the default namespace, v0.22.0).</p>
<p>I have an ingress in a different namespace that attempts to use the nginx controller.</p>
<pre><code>#ingress.yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
name: myapp
annotations:
kubernetes.io/backend-protocol: https
nginx.ingress.kubernetes.io/enable-rewrite-log: "true"
nginx.ingress.kubernetes.io/rewrite-target: "/$1"
spec:
tls:
- hosts:
- my-host
secretName: tls-cert
rules:
- host: my-host
paths:
- backend:
servicename: my-service
servicePort: https
path: "/api/(.*)"
</code></pre>
<p>The nginx controller successfully finds the ingress, and says that there are endpoints. If I hit the endpoint, I get a 400, with no content. If I turn on <a href="https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md#custom-http-errors" rel="nofollow noreferrer">custom-http-headers</a> then I get a 404 from nginx; my service is not being hit. According to <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#enable-rewrite-log" rel="nofollow noreferrer">re-write logging</a>, the url is being re-written correctly.</p>
<p>I have also hit the service directly from inside the pod, and that works as well.</p>
<pre><code>#service.yaml
kind: Service
apiVersion: v1
metadata:
name: my-service
spec:
ports:
- name: https
protocol: TCP
port: 5000
targetPort: https
selector:
app: my-app
clusterIP: <redacted>
type: ClusterIP
sessionAffinity: None
</code></pre>
<p>What could be going wrong?</p>
<p><strong>EDIT</strong>: Disabling https everywhere still gives the same 400 error. However, if my app is expecting HTTPS requests and nginx is sending HTTP requests, then the requests do reach the app (but it can't process them).</p>
|
<p>Nginx will silently fail with a 400 if the request headers are invalid (for example, special characters in them). You can debug that by capturing the traffic with tcpdump.</p>
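<p>A rough sketch of how that capture could look, assuming tcpdump is available (or can be installed) in the ingress controller pod; the pod name and namespace are placeholders:</p>
<pre><code>kubectl -n <ingress-namespace> exec -it <nginx-ingress-controller-pod> -- \
  tcpdump -i any -A -s 0 'tcp port 80'
</code></pre>
<p>Checking the controller logs with <code>kubectl logs <nginx-ingress-controller-pod></code> for the 400 entries can also help narrow down which header is rejected.</p>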
|
<p>My goal is to create a <code>StatefulSet</code> in the <code>production</code> namespace and the <code>staging</code> namespace. I am able to create the production StatefulSet; however, when deploying one to the staging namespace, I receive the error:</p>
<pre><code>failed to connect to server [127.0.0.1:27017] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27017]
</code></pre>
<p>The YAML I am using for the staging setup is as so:</p>
<p><strong>staging-service.yml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mongodb-staging
namespace: staging
labels:
app: ethereumdb
environment: staging
spec:
ports:
- name: http
protocol: TCP
port: 27017
targetPort: 27017
clusterIP: None
selector:
role: mongodb
environment: staging
</code></pre>
<p><strong>staging-statefulset.yml</strong></p>
<pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: mongodb-staging
namespace: staging
labels:
app: ethereumdb
environment: staging
annotations:
prometheus.io.scrape: "true"
spec:
serviceName: "mongodb-staging"
replicas: 1
template:
metadata:
labels:
role: mongodb
environment: staging
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: role
operator: In
values:
- mongo
- key: environment
operator: In
values:
- staging
topologyKey: "kubernetes.io/hostname"
terminationGracePeriodSeconds: 10
containers:
- name: mongo
image: mongo
command:
- mongod
- "--replSet"
- rs0
- "--smallfiles"
- "--noprealloc"
- "--bind_ip_all"
- "--wiredTigerCacheSizeGB=0.5"
ports:
- containerPort: 27017
volumeMounts:
- name: mongo-persistent-storage
mountPath: /data/db
- name: mongo-sidecar
image: cvallance/mongo-k8s-sidecar
env:
- name: MONGO_SIDECAR_POD_LABELS
value: "role=mongodb,environment=staging"
- name: KUBERNETES_MONGO_SERVICE_NAME
value: "mongodb-staging"
volumeClaimTemplates:
- metadata:
name: mongo-persistent-storage
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: fast-storage
resources:
requests:
storage: 1Gi
</code></pre>
<p>The <code>production</code> namespace deployment differs only in:</p>
<ul>
<li><code>--replSet</code> value (<code>rs0</code> instead of <code>rs1</code>)</li>
<li>Use of the name 'production' to describe values</li>
</ul>
<p>Everything else remains identical in both deployments.</p>
<p>The only thing I can imagine is that it is not possible to run both these deployments on the port <code>27017</code> despite being in separate namespaces.</p>
<p>I am stuck as to what is causing the <code>failed to connect to server</code> error described above.</p>
<p><strong>Full log of the error</strong></p>
<pre><code>Error in workloop { MongoError: failed to connect to server [127.0.0.1:27017] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27017]
at Pool.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/topologies/server.js:336:35)
at Pool.emit (events.js:182:13)
at Connection.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/pool.js:280:12)
at Object.onceWrapper (events.js:273:13)
at Connection.emit (events.js:182:13)
at Socket.<anonymous> (/opt/cvallance/mongo-k8s-sidecar/node_modules/mongodb-core/lib/connection/connection.js:189:49)
at Object.onceWrapper (events.js:273:13)
at Socket.emit (events.js:182:13)
at emitErrorNT (internal/streams/destroy.js:82:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:50:3)
name: 'MongoError',
message:
'failed to connect to server [127.0.0.1:27017] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27017]' }
</code></pre>
|
<p>It seems that the problem is similar to
<a href="https://stackoverflow.com/questions/46523321/mongoerror-connect-econnrefused-127-0-0-127017">mongodb-error</a>, but you still have two databases listening on the same port.</p>
<p>In the context of two mongoDB databases listening on the same port:</p>
<p>The answer differs depending on what OS is being considered. In general though:</p>
<ul>
<li>For TCP, no. You can only have one application listening on the same port at one time. Now if you had 2 network cards, you could have one application listen on the first IP and the second one on the second IP using the same port number.</li>
<li>For UDP (Multicasts), multiple applications can subscribe to the same port.</li>
</ul>
<p>Since Linux kernel 3.9, however, multiple applications can listen on the same port using the SO_REUSEPORT option. More information is available in the lwn.net article on SO_REUSEPORT.</p>
<p><strong>But there is workaround.</strong> </p>
<p>Run the containers on different ports and set up Apache or Nginx in front. As Apache/Nginx works on port 80, you won't lose any traffic, since 80 is the common port.</p>
<p>I recommend Nginx: I find it much easier to set up a reverse proxy with Nginx, and it's lighter on resources compared to Apache.
For Nginx, you need to set it up and learn about Server Blocks:
"How To Install Nginx on Ubuntu 16.04" and
"How To Set Up Nginx Server Blocks (Virtual Hosts) on Ubuntu 16.04".
In the Server Blocks you need to use proxy_pass, which you can learn more about on the <a href="https://nginx.org/en/docs/" rel="nofollow noreferrer">nginx</a> site.</p>
|
<p>I'm building a microservices application using Kubernetes as the container orchestrator. The app is up and running now, but I have another issue. In my services I have a scheduled task that runs daily, and when a service gets deployed, multiple service instances run (based on the configured replica count), which results in multiple copies of the task running at the same time.
What I expect is that only ONE instance of the service runs the task rather than all of them. Is there any technique to handle this case?</p>
<h1>Technologies Stack</h1>
<ol>
<li>Kubernetes</li>
<li>ASP.NET Core to build the microservices</li>
<li>Bedrock implementation for CI/CD</li>
<li>Fabrikate</li>
<li>Traefik as a load balancer</li>
</ol>
<p>Please help me with this, Thanks!</p>
|
<p>You have 2 ways of solving this problem:</p>
<p>1) Use task coordination in your application, i.e. a lock such that only the instance owning the lock runs the task. Take a look at <a href="https://data-flair.training/blogs/zookeeper-locks/" rel="nofollow noreferrer">ZooKeeper</a> for distributed lock logic. This is the preferred solution.</p>
<p>2) Use a Kubernetes CronJob which runs daily, as sketched below.</p>
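<p>A minimal sketch of such a CronJob (the schedule and image are placeholders; <code>concurrencyPolicy: Forbid</code> prevents two runs from overlapping):</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: daily-task
spec:
  schedule: "0 3 * * *"        # every day at 03:00
  concurrencyPolicy: Forbid    # never start a new run while one is still active
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: task
            image: myregistry/my-task:latest   # placeholder image
          restartPolicy: OnFailure
</code></pre>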
|
<p>The worker nodes were upgraded to the latest k8s version and now those worker nodes are in 'NotReady' status. I see 'PIDPressure - False' when I do a 'kubectl describe' on these nodes.</p>
|
<blockquote>
<p>PIDPressure - False</p>
</blockquote>
<p>This is an expected condition of a healthy node; it means that the kubelet has sufficient PIDs available for normal operation.</p>
<p>Need more details:</p>
<ul>
<li>Kubelet version? <code>kubectl version</code></li>
<li>Can you share a full output of the describe command? <code>kubectl describe node <node-name></code></li>
<li>ssh to the node and run <code>sudo journalctl -u kubelet --all|tail</code></li>
</ul>
|
<p>I have some services running on the cluster, and the ALB is working fine. But I have a CloudFront distribution, and I want to use the cluster as an entry point because of some internal factors. So, I am trying to add an ingress to redirect the requests to the CloudFront distribution based on the default rule or a named host; either will work.</p>
<p>I tried 2 different ways, but no dice:</p>
<p>Creating an ExternalName service and the ingress:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
namespace: default
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: "internet-facing"
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS": 443}]'
alb.ingress.kubernetes.io/certificate-arn: <my-cert-arn>
alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": {
"Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
alb.ingress.kubernetes.io/group.name: <my-group-name>
spec:
rules:
- host: test.my-host.net
HTTP:
paths:
- backend:
serviceName: test
servicePort: use-annotation
path: /
---
apiVersion: v1
kind: Service
metadata:
name: test
spec:
type: ExternalName
externalName: test.my-host.net
</code></pre>
<p>I also tried to create the ingress with the redirect annotation on the ALB Ingress Controller v2, just like in the docs:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: "internet-facing"
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS": 443}]'
alb.ingress.kubernetes.io/certificate-arn: <my-cert-arn>
alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": {
"Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
alb.ingress.kubernetes.io/actions.redirect-to-eks: >
{"type":"redirect","redirectConfig":{"host":"my-
dist.cloudfront.net","path":"/","port":"443",
"protocol":"HTTPS","query":"k=v","statusCode":"HTTP_302"}}
alb.ingress.kubernetes.io/group.name: <my-group-name>
spec:
rules:
- host: <my-host-name>
HTTP:
paths:
- backend:
serviceName: ssl-redirect
servicePort: use-annotation
</code></pre>
|
<p>The reason it isn't working with the <code>ExternalName</code> service type is that it has no endpoints. ALB can set targets only to <code>instance</code> or <code>ip</code> (<a href="https://github.com/HotelsDotCom/alb-ingress-controller/blob/master/docs/ingress-resources.md" rel="noreferrer">documentation</a>), so <code>ExternalName</code> isn't an option here.</p>
<p>You can create a redirect with an annotation, but it is a bit tricky. <strong>First</strong>, you need at least two subnets with public access for the <code>alb.ingress.kubernetes.io/subnets</code> annotation. Subnets can be discovered automatically, but I don't know if it can pick the public ones from all subnets assigned to the EKS cluster, so it's better to set them explicitly. <strong>Second</strong>, you need to use the <code>alb.ingress.kubernetes.io/actions.<ACTION NAME></code> annotation (<a href="https://github.com/HotelsDotCom/alb-ingress-controller/blob/master/docs/ingress-resources.md" rel="noreferrer">documentation</a>), where <code><ACTION NAME></code> must match <code>serviceName</code> from the ingress rules. <strong>Third</strong>, you need to specify <code>Host</code> in the redirect config, or it will default to the <code>host</code> from the ingress <code>spec</code>.</p>
<p>Here is a working example:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: test-alb
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: "internet-facing"
alb.ingress.kubernetes.io/subnets: "public-subnet-id1,public-subnet-id2"
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS": 443}]'
alb.ingress.kubernetes.io/actions.redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Host":"example.com", "port":"443", "StatusCode": "HTTP_301"}}'
spec:
rules:
- host: alb-test.host.name
http:
paths:
- backend:
serviceName: redirect
servicePort: use-annotation
</code></pre>
<p>Your <code>ssl-redirect</code> annotation is almost correct, except that it redirects to <code><my-host-name></code>. I recommend using the web console (change the region in the link) <a href="https://console.aws.amazon.com/ec2/v2/home?region=eu-central-1#LoadBalancers:sort=loadBalancerName" rel="noreferrer">https://console.aws.amazon.com/ec2/v2/home?region=eu-central-1#LoadBalancers:sort=loadBalancerName</a> to see the generated redirect rules and debug other options.</p>
<p>There are other ways to solve this, for example a dedicated deployment with <code>nginx</code>. Or you can use the <a href="https://kubernetes.github.io/ingress-nginx/" rel="noreferrer">nginx ingress controller</a>; from my experience it is much easier to set up redirects with it.</p>
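<p>For reference, with the nginx ingress controller a redirect like this can be expressed as a single annotation. A minimal sketch (hostnames are just examples; the backend is required by the schema but is never reached because every request is redirected):</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: redirect-to-cloudfront
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/permanent-redirect: https://my-dist.cloudfront.net
spec:
  rules:
  - host: alb-test.host.name
    http:
      paths:
      - backend:
          serviceName: placeholder-service   # never actually hit
          servicePort: 80
</code></pre>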
|
<p>I have a docker image that I run with specific runtime arguments. When I install a helm chart to deploy the kubernetes pod with this image, the output from the pod is different from when I use 'docker run.' I found that I should use command and args parameters in the values.yml file and the templates/deployment directory but I'm still not getting the desired output.</p>
<p>I've tried different variations from these links but no luck:
<a href="https://stackoverflow.com/questions/44098164/how-to-pass-docker-run-flags-via-kubernetes-pod">How to pass docker run flags via kubernetes pod</a>
<a href="https://stackoverflow.com/questions/52636571/how-to-pass-dynamic-arguments-to-a-helm-chart-that-runs-a-job">How to pass dynamic arguments to a helm chart that runs a job</a>
<a href="https://stackoverflow.com/questions/59093384/how-to-pass-arguments-to-docker-container-in-kubernetes-or-openshift-through-com">How to pass arguments to Docker container in Kubernetes or OpenShift through command line?</a></p>
<p>Here's the docker run command:
<code>docker run -it --rm --network=host --ulimit rtptrio=0 --cap-add=sys_nice --ipc=private --sysctl fs.msqueue.msg_max="10000" image_name:tag</code></p>
|
<p>Please try something like this (note that capabilities are set per container, in the container's <code>securityContext</code>):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  hostNetwork: true
  containers:
  - name: main
    image: image_name:tag
    securityContext:
      capabilities:
        add: ["SYS_NICE"]
</code></pre>
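<p>The remaining docker run flags map to pod fields too, where an equivalent exists. As a sketch, the sysctl (name copied from the question) could be set at the pod level; note that namespaced sysctls outside the default safe set must be allowed on the kubelet via <code>--allowed-unsafe-sysctls</code>. There is no first-class pod field for <code>--ulimit</code>; that usually has to be configured at the node/container-runtime level.</p>
<pre><code>spec:
  securityContext:
    sysctls:
    - name: fs.msqueue.msg_max   # as given in the question; must be whitelisted on the kubelet
      value: "10000"
</code></pre>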
|
<p>I am using this command to deploy kubernetes dashboard:</p>
<pre><code> wget -c https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl create -f kubernetes-dashboard.yaml
</code></pre>
<p>and the result is:</p>
<pre><code>[root@iZuf63refzweg1d9dh94t8Z ~]# kubectl -n kube-system get svc kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard ClusterIP 10.254.19.89 <none> 443/TCP 15s
</code></pre>
<p>check the pod:</p>
<pre><code>[root@iZuf63refzweg1d9dh94t8Z ~]# kubectl get pod --namespace=kube-system
No resources found.
</code></pre>
<p>Is there any way to output logs from kubectl create, so I can know the status of the kubernetes-dashboard creation, where it is going wrong, and how to fix it? Right now I hardly know what is going wrong or what I should do to fix the problem.</p>
<pre><code>[root@iZuf63refzweg1d9dh94t8Z ~]# kubectl get all -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP 102d
service/kubernetes-dashboard ClusterIP 10.254.19.89 <none> 443/TCP 22h
service/metrics-server ClusterIP 10.43.96.112 <none> 443/TCP 102d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/kubernetes-dashboard 0/1 0 0 22h
NAME DESIRED CURRENT READY AGE
replicaset.apps/kubernetes-dashboard-7d75c474bb 1 1 0 9d
</code></pre>
|
<p>Make sure you follow all steps from here: <a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/" rel="nofollow noreferrer">kubernetes-dashboard-doc</a>.</p>
<p>Follow all the steps and delete the previous deployment first. I see that you don't use the command</p>
<pre><code>$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
</code></pre>
<p>which is in the official documentation - this may be the cause of the problem. But first, to see the system logs, simply execute:</p>
<pre><code>$ journalctl
</code></pre>
<p>Perhaps the most useful way of filtering is by the unit you are interested in. We can use the -u option to filter in this way.</p>
<p>For instance, to see all of the logs from an Nginx unit on our system, we can type:</p>
<pre><code>$ journalctl -u nginx.service
</code></pre>
<p>Some services spawn a variety of child processes to do their work. If you have scouted out the exact PID of the process you are interested in, you can filter by that as well.</p>
<p>For instance, if the PID we're interested in is 8088, we could type:</p>
<pre><code>$ journalctl _PID=8088
</code></pre>
<p>More information you can find here: <a href="https://www.digitalocean.com/community/tutorials/how-to-use-journalctl-to-view-and-manipulate-systemd-logs" rel="nofollow noreferrer">journalctl</a>.</p>
<p>Useful documentation: <a href="https://github.com/kubernetes/dashboard/blob/master/docs/README.md" rel="nofollow noreferrer">kubernetes-dashboard</a>.</p>
<p>Notice that your kubernetes-dashboard service doesn't have an EXTERNAL-IP associated with it.</p>
<p>Describe the deployment to see what happened there by simply executing the command:</p>
<pre><code>$ kubectl describe deployment kubernetes-dashboard -n kube-system
</code></pre>
|
<p>I am currently deploying <a href="https://airflow.apache.org/" rel="nofollow noreferrer">Apache Airflow</a> on K8s (on EKS). I have managed to successfully deploy an Ingress Controller (used <a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller" rel="nofollow noreferrer">AWS Load Balancer Controller</a>) and an ingress, and I can access the ingress through the internet. My objective is to be able to access the host and a path rather than the address.</p>
<p>In other words, right now to access the web-server I need to input an address like: <code>internal-k8s-airflow-deployment-aws-region.elb.amazonaws.com</code>. My goal is to access it using something like: <code>meow.myimaginarywebsite.com/airflow</code>.</p>
<h2>What I have tried to do</h2>
<p>I am using <a href="https://github.com/airflow-helm/charts/tree/main/charts/airflow" rel="nofollow noreferrer">stable airflow Helm chart</a> (not particularly looking for an answer referencing the chart), and I have modified the values for the web-server like:</p>
<pre class="lang-yaml prettyprint-override"><code>web:
## annotations for the web Ingress
annotations:
kubernetes.io/ingress.class: "alb"
alb.ingress.kubernetes.io/target-type: "ip"
labels: { }
## the path for the web Ingress
path: "/airflow"
## the hostname for the web Ingress
host: "meow.myimaginarywebsite.com"
livenessPath: ""
tls:
enabled: false
</code></pre>
<p>Running <code>kubectl get ingress -n dp-airflow</code> I get:</p>
<pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE
airflow-web <none> meow.myimaginarywebsite.com internal-k8s-airflow-deployment-aws-region.elb.amazonaws.com 80 141m
</code></pre>
<p>I have tried curling <code>meow.myimaginarywebsite.com</code> or <code>meow.myimaginarywebsite.com/airflow</code> but got: <code>Could not resolve host: meow.myimaginarywebsite.com/airflow</code></p>
<p>For more context, it's worth mentioning that <code>meow.myimaginarywebsite.com</code> is a hosted zone. For example, running <code>aws route53 list-hosted-zones</code> I get the following (I also verified that these hosted zones are associated with the VPC I am deploying EKS on):</p>
<pre><code>{
"HostedZones": [
{
"Id": "/hostedzone/ABCEEIAMAREDACTEDIDEEE",
"Name": "meow.myimaginarywebsite.com.",
"CallerReference": "terraform-2012012012012",
"Config": {
"Comment": "Managed by Terraform",
"PrivateZone": true
},
"ResourceRecordSetCount": 3
}
]
}
</code></pre>
<p>It's worth mentioning that I am new to this task, so I would benefit the most from a conceptual understanding of what I need to do or guidance to be on the right track. To re-state my objective, I basically want to be able to put something like <code>meow.myimaginarywebsite.com/airflow</code> into the browser and then be able to connect to the webserver, rather than something like: <code>internal-k8s-airflow-deployment-aws-region.elb.amazonaws.com</code>. I am also happy to provide further details.</p>
|
<p>First of all, you need to add an A or ALIAS record to the Route 53 zone, like this:</p>
<p><code>meow.myimaginarywebsite.com</code> --> <code>internal-k8s-airflow-deployment-aws-region.elb.amazonaws.com</code></p>
<p>After that you should at least be fine with DNS resolution.</p>
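<p>A sketch of doing that with the AWS CLI, reusing the hosted zone ID from the question; the ELB's canonical hosted zone ID is a placeholder you can look up in the EC2 console or via <code>aws elbv2 describe-load-balancers</code>:</p>
<pre><code>aws route53 change-resource-record-sets \
  --hosted-zone-id ABCEEIAMAREDACTEDIDEEE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "meow.myimaginarywebsite.com",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "<elb-canonical-hosted-zone-id>",
          "DNSName": "internal-k8s-airflow-deployment-aws-region.elb.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
</code></pre>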
|
<p>I have built a microservice backend deployed on kubernetes on Digital Ocean.</p>
<p>I am trying to connect my react code to the backend and getting the below error:</p>
<pre><code>Access to XMLHttpRequest at 'http://cultor.dev/api/users/signin' from origin 'http://localhost:3000'
has been blocked by CORS policy: Response to preflight request doesn't pass access control check:
Redirect is not allowed for a preflight request.
</code></pre>
<blockquote>
<p><strong>Index.ts</strong> settings:</p>
</blockquote>
<pre class="lang-js prettyprint-override"><code>app.use((req, res, next) => {
res.header('Access-Control-Allow-Origin', '*');
res.header('Access-Control-Allow-Headers', '*');
res.header('Access-Control-Request-Headers', '*');
if (req.method === "OPTIONS") {
res.header('Access-Control-Allow-Methods', '*');
return res.status(200).json({});
}
next();
});
</code></pre>
<p>I would really appreciate the help. Thanks!</p>
|
<p>Install the cors middleware with</p>
<pre><code>npm install cors
</code></pre>
<p>And you can use it directly like this:</p>
<pre><code>const cors = require('cors');
app.use(cors());
</code></pre>
<p>Or you can set specific options like this:</p>
<pre><code>const cors = require('cors');
const corsOpts = {
origin: '*',
credentials: true,
methods: ['GET','POST','HEAD','PUT','PATCH','DELETE'],
allowedHeaders: ['Content-Type'],
exposedHeaders: ['Content-Type']
};
app.use(cors(corsOpts));
</code></pre>
<p>You can replace origin with your website and allowedHeaders with the headers you're going to use.</p>
|
<p>I am following the <a href="https://kubernetes.github.io/ingress-nginx/deploy/#quick-start" rel="nofollow noreferrer">Quick start</a> instructions. I have other LoadBalancer services running on my cluster. They are exposing EXTERNAL-IP values just fine. NGINX Ingress Controller seems to be the only one having this issue.</p>
<p>I executed the first command:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/cloud/deploy.yaml
</code></pre>
<p>There seems to be an issue with my LoadBalancer service. It has already been more than 1h, but EXTERNAL-IP remains in <code><pending></code> state:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get svc ingress-nginx-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.106.240.88 <pending> 80:31352/TCP,443:31801/TCP 32m
</code></pre>
<p>How do I progress from here? Is this an issue with my provider?</p>
|
<p>My provider Oktawave replied, explaining additional annotations are necessary for LoadBalancers with 2 ports:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: wordpress-lb
annotations:
k44sServiceType: HTTP
k44sSslEnabled: "True"
labels:
app: hello-wordpress
spec:
ports:
- port: 80
name: http
protocol: TCP
- port: 443
name: https
protocol: TCP
selector:
app: hello-wordpress
type: LoadBalancer
</code></pre>
<p>I was able to get EXTERNAL-IP assigned to <code>ingress-nginx-controller</code> by editing the <a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/cloud/deploy.yaml" rel="nofollow noreferrer">YAML</a> to include these annotations:</p>
<pre class="lang-yaml prettyprint-override"><code>(...)
---
apiVersion: v1
kind: Service
metadata:
annotations:
k44sServiceType: HTTP
k44sSslEnabled: "True"
labels:
helm.sh/chart: ingress-nginx-4.0.10
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 1.1.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
type: LoadBalancer
externalTrafficPolicy: Local
ipFamilyPolicy: SingleStack
ipFamilies:
- IPv4
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
appProtocol: http
- name: https
port: 443
protocol: TCP
targetPort: https
appProtocol: https
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
---
(...)
</code></pre>
|
<p>I deployed WordPress and MySQL and everything works fine. I just want to know how I can access MySQL from the command line.</p>
<pre><code>My services
wordpress LoadBalancer 10.102.29.45 <pending> 80:31262/TCP 39m
wordpress-mysql ClusterIP None <none> 3306/TCP 42m
</code></pre>
<p>Describe pod</p>
<pre><code>kubectl describe pod wordpress-mysql-5fc97c66f7-jx42l
Name: wordpress-mysql-5fc97c66f7-jx42l
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: minikube/10.0.2.15
Start Time: Mon, 11 Mar 2019 08:49:19 +0100
Labels: app=wordpress
pod-template-hash=5fc97c66f7
tier=mysql
Annotations: <none>
Status: Running
IP: 172.17.0.15
Controlled By: ReplicaSet/wordpress-mysql-5fc97c66f7
Containers:
mysql:
Container ID: docker://dcfe9037ab14c3615aec4000f89f28992c52c40a03581215d921564e5b3bec58
Image: mysql:5.6
Image ID: docker-pullable://mysql@sha256:36cad5daaae69fbcc15dd33b9f25f35c41bbe7e06cb7df5e14d8b966fb26c1b4
Port: 3306/TCP
Host Port: 0/TCP
State: Running
Started: Mon, 11 Mar 2019 08:50:33 +0100
Ready: True
</code></pre>
<p>Is there an analogy with Docker here?
How do I query MySQL?</p>
<p>If I try with exec</p>
<pre><code>kubectl exec -it wordpress-mysql-5fc97c66f7-jx42l -- /bin/bash
root@wordpress-mysql-5fc97c66f7-jx42l:/# mysql
ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: NO)
</code></pre>
|
<p>To fix the issue, you have to set up a password for your MySQL user.</p>
<p>According to the official <a href="https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/#create-a-secret-for-mysql-password" rel="nofollow noreferrer">documentation</a>, you need to create a Secret object containing your password:</p>
<pre><code>kubectl create secret generic mysql-pass --from-literal=password=YOUR_PASSWORD
</code></pre>
<p>And update your deployment spec with the following part:</p>
<pre><code>env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: mysql-pass
key: password
</code></pre>
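<p>Once the pod has been redeployed with the secret in place, you can open a MySQL shell in the pod like this (the variable is expanded inside the container, hence the <code>sh -c</code> wrapper):</p>
<pre><code>kubectl exec -it wordpress-mysql-5fc97c66f7-jx42l -- \
  sh -c 'exec mysql -u root -p"$MYSQL_ROOT_PASSWORD"'
</code></pre>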
|
<p>I have a .bat file right now that runs a docker command with </p>
<pre><code>docker run --rm -it --env-file .env --name my-name my-name /bin/sh -c "mvn test -Dtest=my.runner.Runner"
</code></pre>
<p>I have a k8s config-map.yml and a deployment.yml with a configMapRef to replace the <strong>.env</strong> file, but I don’t know how to reference it in my docker run command. </p>
<p>It should work so that every time I deploy the image from my Docker registry, it picks up the ConfigMap and uses that for the repo's environment variables.</p>
|
<p>If you try to set the value of an environment variable from <strong>inside a RUN statement</strong> like <code>RUN export VARI=5 && ...</code>, you won't have access to it in any of the next RUN statements. The reason for this is that for each RUN statement a new container is launched from an intermediate image. The image is saved at the end of the command, but environment variables do not persist that way.</p>
<p>So first you should make the changes in your image, build it, and then change its reference in deployment.yaml. After that, just apply the ConfigMap and the Deployment, and everything should work fine. Trying to combine this process (docker run) with Kubernetes components is not a good approach.</p>
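<p>To answer the configMapRef part: once the ConfigMap exists, the deployment's container can pull all of its keys in as environment variables (replacing the <code>--env-file .env</code> flag). A minimal sketch, where the image and ConfigMap names are placeholders:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-name
  template:
    metadata:
      labels:
        app: my-name
    spec:
      containers:
      - name: my-name
        image: my-registry/my-name:tag          # placeholder image
        command: ["/bin/sh", "-c"]
        args: ["mvn test -Dtest=my.runner.Runner"]
        envFrom:
        - configMapRef:
            name: my-env-config                 # placeholder ConfigMap name
</code></pre>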
<p>Useful article: <a href="https://vsupalov.com/docker-arg-env-variable-guide/" rel="nofollow noreferrer">docker-arg-env</a>.</p>
<p>Overall documentation: <a href="https://docs.docker.com/engine/reference/commandline/run/" rel="nofollow noreferrer">docker-run</a>.</p>
|
<p><strong>Background:</strong> There are Cluster A and Cluster B in Azure AKS. I create a pod called <strong>Agent</strong> running a Linux container in cluster A in namespace <strong>test</strong> (which is a <strong>non-default</strong> namespace). In the Linux container, pwsh and kubectl are installed.</p>
<p><strong>Operation:</strong> Get into the pod/Agent in cluster A (kubectl exec -it pod/agent -- bash) and get the credentials of cluster B; the config file is set up with the cluster name and user name, but <strong>NO namespace</strong>.<br />
When I connect to cluster B from the pod/Agent and execute <code>kubectl get pods</code>, the resources within namespace <strong>test</strong> are returned instead of the resources within namespace <strong>default</strong>.<br />
Since there is no namespace called <strong>test</strong> in cluster B, no resources are returned.</p>
<p>So I wonder where the namespace <strong>test</strong> is defined/set up in the pod/Agent as the default namespace.</p>
<p>I spent some time trying to dive into the kubectl code on GitHub, without luck.</p>
<p>I also tried to use an alias, but that only works for bash/sh, not for pwsh. I don't want to change the command name kubectl, and if I do alias kubectl='kubectl -n default', pwsh gets stuck in a loop.</p>
<p>Any answer is appreciated.</p>
|
<p>From <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#directly-accessing-the-rest-api-1" rel="nofollow noreferrer">docs</a>:</p>
<blockquote>
<p>Finally, the default namespace to be used for namespaced API
operations is placed in a file at
/var/run/secrets/kubernetes.io/serviceaccount/namespace in each
container.</p>
</blockquote>
<p>Simple test from a pod:</p>
<pre><code>root@ubuntu:/# strace -eopenat kubectl get pod 2>&1 | grep namespace
openat(AT_FDCWD, "/var/run/secrets/kubernetes.io/serviceaccount/namespace", O_RDONLY|O_CLOEXEC) = 6
Error from server (Forbidden): pods is forbidden: User "system:serviceaccount:default:default" cannot list resource "pods" in API group "" in the namespace "default"
</code></pre>
<p>Directory <code>/run/secrets/kubernetes.io/serviceaccount</code> is by default always mounted to pod and contains serviceaccount token to access Kube API.</p>
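<p>If you want <code>kubectl</code> in that pod to default to a different namespace when talking to cluster B, without using an alias, you can set the namespace on the current kubeconfig context after fetching the credentials:</p>
<pre><code>kubectl config set-context --current --namespace=default
</code></pre>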
|
<p>In brief, these are the steps I have done:</p>
<ol>
<li><p>Launched <strong>2</strong> new <code>t3.small</code> instances in AWS, pre-tagged with key
<code>kubernetes.io/cluster/<cluster-name></code> and value <code>member</code>.</p></li>
<li><p>Tagged the new security group with the same tag and opened all the ports mentioned
here -
<a href="https://kubernetes.io/docs/setup/independent/install-kubeadm/#check-required-ports" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/install-kubeadm/#check-required-ports</a></p></li>
<li><p>Changed <code>hostname</code> to the output of <code>curl
http://169.254.169.254/latest/meta-data/local-hostname</code> and verified
with <code>hostnamectl</code></p></li>
<li><p>Rebooted</p></li>
<li><p>Configured aws with
<code>https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html</code></p></li>
<li><p>Created an <code>IAM role</code> with full (<code>"*"</code>) permissions and assigned it to EC2
instances.</p></li>
<li><p>Installed <code>kubelet kubeadm kubectl</code> using <code>apt-get</code></p></li>
<li><p>Created <code>/etc/default/kubelet</code> with content
<code>KUBELET_EXTRA_ARGS=--cloud-provider=aws</code></p></li>
<li><p>Ran <code>kubeadm init --pod-network-cidr=10.244.0.0/16</code> on one instance
and used the output to <code>kubeadm join ...</code> the other node.</p></li>
<li><p>Installed <a href="https://www.digitalocean.com/community/tutorials/how-to-install-software-on-kubernetes-clusters-with-the-helm-package-manager" rel="nofollow noreferrer">Helm</a>.</p></li>
<li><p>Installed <a href="https://akomljen.com/aws-cost-savings-by-utilizing-kubernetes-ingress-with-classic-elb/" rel="nofollow noreferrer">ingress controller</a> with default backend.</p></li>
</ol>
<p>Previously I tried the above steps but installed the ingress from the instructions on <a href="https://kubernetes.github.io/ingress-nginx/deploy/#aws" rel="nofollow noreferrer">kubernetes.github.io</a>. Both attempts ended up with the same status: <code>EXTERNAL-IP</code> as <code><pending></code>.</p>
<hr>
<p>Current status is :</p>
<p><code>kubectl get pods --all-namespaces -o wide</code></p>
<pre><code>NAMESPACE NAME IP NODE
ingress ingress-nginx-ingress-controller-77d989fb4d-qz4f5 10.244.1.13 ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
ingress ingress-nginx-ingress-default-backend-7f7bf55777-dhj75 10.244.1.12 ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kube-system coredns-86c58d9df4-bklt8 10.244.1.14 ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kube-system coredns-86c58d9df4-ftn8q 10.244.1.16 ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kube-system etcd-ip-XXX-XX-XX-XXX.ap-south-1.compute.internal 172.31.12.119 ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system kube-apiserver-ip-XXX-XX-XX-XXX.ap-south-1.compute.internal 172.31.12.119 ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system kube-controller-manager-ip-XXX-XX-XX-XXX.ap-south-1.compute.internal 172.31.12.119 ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system kube-flannel-ds-amd64-87k8p 172.31.12.119 ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system kube-flannel-ds-amd64-f4wft 172.31.3.106 ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kube-system kube-proxy-79cp2 172.31.3.106 ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
kube-system kube-proxy-sv7md 172.31.12.119 ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system kube-scheduler-ip-XXX-XX-XX-XXX.ap-south-1.compute.internal 172.31.12.119 ip-XXX-XX-XX-XXX.ap-south-1.compute.internal
kube-system tiller-deploy-5b7c66d59c-fgwcp 10.244.1.15 ip-YYY-YY-Y-YYY.ap-south-1.compute.internal
</code></pre>
<p><code>kubectl get svc --all-namespaces -o wide</code></p>
<pre><code>NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 73m <none>
ingress ingress-nginx-ingress-controller LoadBalancer 10.97.167.197 <pending> 80:32722/TCP,443:30374/TCP 59m app=nginx-ingress,component=controller,release=ingress
ingress ingress-nginx-ingress-default-backend ClusterIP 10.109.198.179 <none> 80/TCP 59m app=nginx-ingress,component=default-backend,release=ingress
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 73m k8s-app=kube-dns
kube-system tiller-deploy ClusterIP 10.96.216.119 <none> 44134/TCP 67m app=helm,name=tiller
</code></pre>
<hr>
<pre><code>kubectl describe service -n ingress ingress-nginx-ingress-controller
Name: ingress-nginx-ingress-controller
Namespace: ingress
Labels: app=nginx-ingress
chart=nginx-ingress-1.4.0
component=controller
heritage=Tiller
release=ingress
Annotations: service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: *
Selector: app=nginx-ingress,component=controller,release=ingress
Type: LoadBalancer
IP: 10.104.55.18
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 32318/TCP
Endpoints: 10.244.1.20:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 32560/TCP
Endpoints: 10.244.1.20:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>attached IAM role inline policy</p>
<pre><code>{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "*",
"Resource": "*"
}
]
}
</code></pre>
<hr>
<p>kubectl get nodes -o wide</p>
<pre><code>NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
ip-172-31-12-119.ap-south-1.compute.internal Ready master 6d19h v1.13.4 172.31.12.119 XX.XXX.XXX.XX Ubuntu 16.04.5 LTS 4.4.0-1077-aws docker://18.6.3
ip-172-31-3-106.ap-south-1.compute.internal Ready <none> 6d19h v1.13.4 172.31.3.106 XX.XXX.XX.XXX Ubuntu 16.04.5 LTS 4.4.0-1077-aws docker://18.6.3
</code></pre>
<hr>
<p>Could someone please point out what I am missing here, as everywhere on the internet it says a <code>Classic ELB</code> will be deployed automatically?</p>
|
<p>For AWS ELB (type Classic) you have to </p>
<ol>
<li><p>Explicitly specify <code>--cloud-provider=aws</code> in the kube service manifests
located in <code>/etc/kubernetes/manifests</code> on the master node:</p>
<p><code>kube-controller-manager.yaml
kube-apiserver.yaml</code></p></li>
<li><p>Restart services:</p>
<p><code>sudo systemctl daemon-reload</code></p>
<p><code>sudo systemctl restart kubelet</code></p></li>
</ol>
<hr>
<p>Add the flag alongside the other command arguments (at the bottom or top, as you wish). The result should be similar to:</p>
<p>in <em>kube-controller-manager.yaml</em> </p>
<pre><code>spec:
containers:
- command:
- kube-controller-manager
- --cloud-provider=aws
</code></pre>
<p>in <em>kube-apiserver.yaml</em> </p>
<pre><code>spec:
containers:
- command:
- kube-apiserver
- --cloud-provider=aws
</code></pre>
|
<p>I have an Ubuntu VM in VirtualBox on Windows 10. If I follow the instructions to install Minikube, I get a start error:</p>
<pre><code>> minikube start &
[1] 4297
vagrant@ubuntu-xenial:~$ o minikube v0.35.0 on linux (amd64)
> Creating virtualbox VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
@ Downloading Minikube ISO ...
184.42 MB / 184.42 MB [============================================] 100.00%
0s
! Unable to start VM: create: precreate: VBoxManage not found. Make sure
VirtualBox is installed and VBoxManage is in the path
</code></pre>
<p>Does it mean I need to install VirtualBox in the Ubuntu VM too? Kind of VB inside VB?</p>
<p>thanks</p>
|
<p>I'd recommend to install Minikube on your host OS (<a href="https://kubernetes.io/docs/tasks/tools/install-minikube/#windows" rel="nofollow noreferrer">Windows</a>) and use the already installed Virtual box as a hypervisor provider.</p>
<p>If for any reason you want to launch it on Ubuntu VM, there are two options:</p>
<p><strong>I.</strong> Minikube supports a --vm-driver=none option that runs the Kubernetes components on the host and not in a VM. Using this driver requires Docker and a Linux environment, but not a hypervisor. In this case you have to provide an address for your local API server </p>
<pre><code> `minikube start --vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost`
</code></pre>
<p>And then go and edit ~/.kube/config, replacing the server IP that was
detected from the main network interface with "localhost". For example:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/asuh/.minikube/ca.crt
server: https://localhost:8443
name: minikube
</code></pre>
<p><strong>II.</strong> Install VMware on Windows, run the Ubuntu VM inside it, and enable VT-x/AMD-V (nested virtualization) for that outer VM so VirtualBox can work inside it.</p>
<hr>
<p>Regarding the error you have at the moment:</p>
<blockquote>
<p>However now i get another error like: /usr/local/bin/minikube: cannot
execute binary file</p>
</blockquote>
<p>Make sure you have installed a proper version of Minikube. For your Ubuntu VM it should be</p>
<pre><code>curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \
&& chmod +x minikube
</code></pre>
|
<p>I am trying to create a Kubernetes cluster using kubespray with a single master and 3 worker nodes. I cloned the kubespray GitHub repository and am running the ansible playbook from my control node to form the cluster.</p>
<p>I am trying the following command:</p>
<pre><code>ansible-playbook \
-i inventory/sample/hosts.ini \
cluster.yml \
--become \
--ask-become-pass
</code></pre>
<p>When I run the command, 2 worker nodes get a final status of ok, but the master node shows failed with an error like the following:</p>
<pre><code>fatal: [mildevkub020]: FAILED! => {
"changed": false,
"msg": "error running kubectl (/usr/local/bin/kubectl apply
--force --filename=/etc/kubernetes/k8s-cluster-critical-pc.yml)
command (rc=1), out='', err='error: unable to recognize
\"/etc/kubernetes/k8s-cluster-critical-pc.yml\": Get
http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080:
connect: connection refused\n'"
}
</code></pre>
<p>I am adding the screenshot for the error below:</p>
<p><a href="https://i.stack.imgur.com/M3jkO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/M3jkO.png" alt="enter image description here" /></a></p>
<p><strong>Modification</strong></p>
<p>I removed my older kubespray repo and cloned the fresh one from the following link,</p>
<p><a href="https://github.com/kubernetes-sigs/kubespray.git" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kubespray.git</a></p>
<p>And updated my inventory, but I am still getting the same error. When I run the "journalctl" command for logs, I get the following:</p>
<pre><code>Oct 15 09:56:17 mildevdcr01 kernel: NX (Execute Disable) protection: active
Oct 15 09:56:17 mildevdcr01 kernel: SMBIOS 2.4 present.
Oct 15 09:56:17 mildevdcr01 kernel: DMI: VMware, Inc. VMware Virtual
Platform/440BX Desktop Reference Platform, BIOS 6.00 09/22/2009
Oct 15 09:56:17 mildevdcr01 kernel: Hypervisor detected: VMware
Oct 15 09:56:17 mildevdcr01 kernel: Kernel/User page tables isolation: disabled
Oct 15 09:56:17 mildevdcr01 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 15 09:56:17 mildevdcr01 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 15 09:56:17 mildevdcr01 kernel: AGP: No AGP bridge found
Oct 15 09:56:17 mildevdcr01 kernel: e820: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct 15 09:56:17 mildevdcr01 kernel: MTRR default type: uncachable
Oct 15 09:56:17 mildevdcr01 kernel: MTRR fixed ranges enabled:
Oct 15 09:56:17 mildevdcr01 kernel: 00000-9FFFF write-back
Oct 15 09:56:17 mildevdcr01 kernel: A0000-BFFFF uncachable
Oct 15 09:56:17 mildevdcr01 kernel: C0000-CBFFF write-protect
</code></pre>
<p>Error:</p>
<pre><code>fatal: [mildevkub020]: FAILED! => {"attempts": 10, "changed": false, "msg": "error running kubectl (/usr/local/bin/kubectl apply --force --filename=/etc/kubernetes/node-crb.yml) command (rc=1), out='', err='W1016 06:50:31.365172 22692 loader.go:223] Config not found: etc/kubernetes/admin.conf\nerror: unable to recognize \"/etc/kubernetes/node-crb.yml\": Get http://localhost:8080/api?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused\n'"}
</code></pre>
|
<p>Make sure that you have followed all the <a href="https://github.com/kubernetes-sigs/kubespray#requirements" rel="nofollow noreferrer">requirements</a> before cluster installation,
especially copying the ssh key to all the servers that are part of your inventory.</p>
<p>Reset environment after previous installation:</p>
<pre><code>$ sudo ansible-playbook -i inventory/mycluster/hosts.yml reset.yml -b -v \
--private-key=~/.ssh/private_key
</code></pre>
<p>Remember to change the <a href="https://github.com/kubernetes-sigs/kubespray/blob/master/inventory/sample/group_vars/k8s-cluster/k8s-cluster.yml" rel="nofollow noreferrer">cluster configuration</a> file and personalize it.
You can change the network plugin - the default is Calico.</p>
<p>Then run ansible playbook again using this command:</p>
<pre><code>$ sudo ansible-playbook -i inventory/sample/hosts.ini cluster.yml -b -v \
--private-key=~/.ssh/private_key
</code></pre>
<p>Try copying the /sample folder, renaming it, and then changing the k8s-cluster and hosts files.</p>
<p>Check the hosts file:
remember not to modify the children of k8s-cluster, like putting the etcd group into k8s-cluster, unless you are certain you want to do that.</p>
<pre><code>k8s-cluster ⊂ etcd => kube-node ∩ etcd = etcd
</code></pre>
<p>Example inventory file you can find here: <a href="https://github.com/kubernetes-sigs/kubespray/blob/master/docs/ansible.md" rel="nofollow noreferrer">inventory</a>.</p>
<p>If the problem still exists, please execute the journalctl command and check what the logs show.</p>
<p><strong>EDIT:</strong></p>
<p>As you provided more information: from your logs it looks like you have to
set the VM hardware version to the highest available in your VMware setup, and install all available updates on this system.</p>
|
<p>I'm creating a Cassandra cluster in Google Cloud Platform in Kubernetes.</p>
<p>I saw that Google provides different types of disks. The question is: are Google Kubernetes standard disks quick enough for Cassandra, or should I change to SSD disks?</p>
<p>I think that the best solution is local SSD disks, but I don't know if that is overkill.</p>
<p>Anyone have experience with this?</p>
<p><a href="https://cloud.google.com/compute/docs/disks/" rel="nofollow noreferrer">https://cloud.google.com/compute/docs/disks/</a></p>
<p><a href="https://i.stack.imgur.com/kYWHv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/kYWHv.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/cTJed.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cTJed.png" alt="enter image description here"></a></p>
|
<p>According to cassandra's <a href="http://cassandra.apache.org/doc/latest/operating/hardware.html" rel="nofollow noreferrer">documentation</a>, they recommend </p>
<blockquote>
<p>local ephemeral SSDs</p>
</blockquote>
<p>That said, you won't notice significant performance degradation running on regional/zonal SSDs.</p>
<p>It is more important to place the <strong>commit log</strong> (<code>commitlog_directory</code>) and the <strong>data directories</strong> (<code>data_file_directories</code>) on separate physical drives. </p>
|
<p>My application has it's backend and it's frontend. The frontend is currently hosted in a Google Cloud Storage bucket and I am migrating the backend from Compute Engine VMs to Kubernetes Engine Autopilot.</p>
<p>When migrating, does it make more sense for me to migrate everything to Kubernetes Engine or would I be better off keeping the frontend in the bucket? Backend and frontend are different projects in different git repositories.</p>
<p>I am asking because I saw that it is possible to manage Kubernetes services' exposure, even at the level of URL Maps and Load Balancer, so I thought of perhaps entrusting all my projects' (backend and frontend) hosting to Kubernetes, since I know that Kubernetes is a very complete and powerful solution.</p>
|
<p>There is no problem with keeping your frontend on Cloud Storage (or elsewhere) and having your backend on Kubernetes (GKE).</p>
<p>It's not a "perfect pattern" because you can't manage and deploy every part of your application with Kubernetes alone, and you don't have end-to-end control-plane management.</p>
<p>On one side you deploy your frontend and configure your load balancer. On the other side you deploy your backend on Kubernetes with YAML.</p>
<p>In addition, your application is not portable to another Kubernetes cluster (because it's not a full Kubernetes deployment but a hybrid between Kubernetes and Google Cloud, so you are partly tied to Google Cloud). But if that's not a requirement, it's fine.</p>
<hr />
<p>In the end, if you expose your app behind a load balancer with the frontend on Cloud Storage and the backend on GKE, the user will notice nothing. If one day you want to package your frontend in a container and deploy it on GKE, keep the same load balancer (or at least the same domain name) and your users won't notice the difference!</p>
<p>No worries for now, you can keep going! (And it's cheaper for now: you don't pay for compute to serve static resources from Cloud Storage.)</p>
|
<p>I am trying to add Zeppelin to a Kubernetes cluster.</p>
<p>So, using the Zeppelin (0.8.1) docker image from <a href="https://hub.docker.com/r/apache/zeppelin/tags" rel="nofollow noreferrer">apache/zeppelin</a>, I created a K8S Deployment and Service as follow : </p>
<p>Deployment : </p>
<pre><code>kind: Deployment
apiVersion: extensions/v1beta1
metadata:
name: zeppelin-k8s
spec:
replicas: 1
selector:
matchLabels:
component: zeppelin-k8s
template:
metadata:
labels:
component: zeppelin-k8s
spec:
containers:
- name: zeppelin-k8s
image: apache/zeppelin:0.8.1
ports:
- containerPort: 8080
resources:
requests:
cpu: 100m
</code></pre>
<p>Service : </p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: zeppelin-k8s
spec:
ports:
- name: zeppelin
port: 8080
targetPort: 8080
selector:
component: zeppelin-k8s
</code></pre>
<p>To expose the interface, I created the following Ingress : </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: minikube-ingress
annotations:
spec:
rules:
- host: spark-kubernetes
http:
paths:
- path: /zeppelin
backend:
serviceName: zeppelin-k8s
servicePort: 8080
</code></pre>
<p>Using the Kubernetes dashboard, everything looks fine (Deployment, Pods, Services and Replica Sets are green). There are a bunch of <code>jersey.internal</code> warnings in the Zeppelin Pod, but it <a href="https://issues.apache.org/jira/browse/ZEPPELIN-3504" rel="nofollow noreferrer">looks like they are not relevant</a>.</p>
<p>With all that, I expect to access the Zeppelin web interface through the URL <code>http://[MyIP]/zeppelin</code>.</p>
<p>But when I do that, I get :</p>
<pre><code>HTTP ERROR 404
Problem accessing /zeppelin. Reason:
Not Found
</code></pre>
<p>What am I missing to access Zeppelin interface ?</p>
<p>Note : </p>
<ul>
<li>I use a Minikube cluster with Kubernetes 1.14 </li>
<li>I also have a Spark cluster on my K8S cluster, and I am able to access the spark-master web-ui correctly in this way (Here I have omitted the spark part in the Ingress configuration)</li>
</ul>
|
<p>Why don't you just expose your Zeppelin service via NodePort?</p>
<p>1) Update the Service YAML as:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: zeppelin-k8s
spec:
ports:
- name: zeppelin
port: 8080
targetPort: 8080
type: NodePort
selector:
component: zeppelin-k8s
</code></pre>
<p>2) Expose access by</p>
<pre><code>minikube service zeppelin-k8s --url
</code></pre>
<p>3) Follow the URL that command prints.</p>
|
<p>I'm trying to containerize a workflow that touches NFS shares.
For a successful run it requires the user to have the default uid:gid plus access to an additional 4 or 5 group IDs.
The group IDs are random, and ideally I would like to avoid specifying a range of GIDs in the YAML file.
Is there an efficient way to get this done? Would anyone be able to show any examples in YAML or point me to reference documents please. Thanks</p>
|
<p>The setting is called <code>supplementalGroups</code>. Take a look at the example:</p>
<pre><code>apiVersion: v1
kind: Pod
...
spec:
containers:
- name: ...
image: ...
volumeMounts:
- name: nfs
mountPath: /mnt
securityContext:
supplementalGroups:
- 5555
- 6666
- 12345
volumes:
- name: nfs
nfs:
server: <nfs_server_ip_or_host>
path: /opt/nfs
</code></pre>
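<p>Once the pod is running, you can verify that the extra group IDs were applied (the pod name is an assumption):</p>
<pre><code>kubectl exec <pod-name> -- id
# the output should include the supplemental groups, e.g. groups=...,5555,6666,12345
</code></pre>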
|
<p>I want to connect to MySQL docker hosted on GCP Kubernetes through Python to edit a database.
I encounter the error: </p>
<pre><code>2003, "Can't connect to MySQL server on '35.200.250.69' ([Errno 61] Connection refused)"
</code></pre>
<p>I've also tried to connect through the MySQL client; that doesn't work either.</p>
<h2>Docker environment</h2>
<p>My Dockerfile:</p>
<pre><code>FROM mysql:latest
ENV MYSQL_ROOT_PASSWORD password
# Derived from official mysql image (our base image)
FROM mysql
# Add a database
ENV MYSQL_DATABASE test-db
ENV MYSQL_USER=dbuser
ENV MYSQL_PASSWORD=dbpassword
# Add the content of the sql-scripts/ directory to your image
# All scripts in docker-entrypoint-initdb.d/ are automatically
# executed during container startup
COPY ./sql-scripts/ /docker-entrypoint-initdb.d/
EXPOSE 50050
CMD echo "This is a test." | wc -
CMD ["mysqld"]
</code></pre>
<p>The <em>sql-scripts</em> folder content 2 files in it:</p>
<pre><code>CREATE USER 'newuser'@'%' IDENTIFIED BY 'newpassword';
GRANT ALL PRIVILEGES ON * to 'newuser'@'%';
</code></pre>
<p>and</p>
<pre><code>CREATE DATABASE test_db;
</code></pre>
<h2>Setting up GCP</h2>
<p>I launch the container with the following command:</p>
<pre><code>kubectl run test-mysql --image=gcr.io/data-sandbox-196216/test-mysql:latest --port=50050 --env="MYSQL_ROOT_PASSWORD=root_password"
</code></pre>
<p>on GCP, the container seems running properly:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
test-mysql LoadBalancer 10.19.249.10 35.200.250.69 50050:30626/TCP 2m
</code></pre>
<h2>Connect with Python</h2>
<p>And the python file to connect to the MySQL:</p>
<pre><code>import sqlalchemy as db
# specify database configurations
config = {
'host': '35.200.250.69',
'port': 50050,
'user': 'root',
'password': 'root_password',
'database': 'test_db'
}
db_user = config.get('user')
db_pwd = config.get('password')
db_host = config.get('host')
db_port = config.get('port')
db_name = config.get('database')
# specify connection string
connection_str = f'mysql+pymysql://{db_user}:{db_pwd}@{db_host}:{db_port}/{db_name}'
# connect to database
engine = db.create_engine(connection_str)
connection = engine.connect()
</code></pre>
<h2>What I want to do</h2>
<p>I would like to be able to write this MySQL database with Python, and read it on PowerBI.</p>
<p>Thanks for your help!</p>
|
<p>You have exposed port <strong>50050</strong> while the MySQL server is, by default, listening on port <strong>3306</strong>.</p>
<p><strong>Option I.</strong> Change the default port in <code>my.cnf</code> and set <code>port=50050</code> </p>
<p><strong>Option II.</strong> Expose default MySQL port</p>
<p>Dockerfile:</p>
<pre><code>FROM mysql:latest
ENV MYSQL_ROOT_PASSWORD password
# Add a database
ENV MYSQL_DATABASE test-db
ENV MYSQL_USER=dbuser
ENV MYSQL_PASSWORD=dbpassword
# Add the content of the sql-scripts/ directory to your image
# All scripts in docker-entrypoint-initdb.d/ are automatically
# executed during container startup
COPY ./sql-scripts/ /docker-entrypoint-initdb.d/
# Expose the default MySQL port instead of 50050
EXPOSE 3306
CMD ["mysqld"]
</code></pre>
<p>Start container:</p>
<pre><code>kubectl run test-mysql --image=gcr.io/data-sandbox-196216/test-mysql:latest --port=3306 --env="MYSQL_ROOT_PASSWORD=root_password"
</code></pre>
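<p>If the LoadBalancer service you already created still points at 50050, a hedged sketch of re-exposing the deployment on the default port (the deployment/service name is taken from your output, the other values are assumptions):</p>
<pre><code>kubectl delete service test-mysql
kubectl expose deployment test-mysql --type=LoadBalancer --port=3306 --target-port=3306
</code></pre>
<p>Then point the <code>port</code> in your Python config at 3306 instead of 50050.</p>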
|
<p>I have been reading here and there online, but the answers never come thoroughly explained. I hope this question, if answered, can provide an updated and thorough explanation of the matter.</p>
<p>Why would someone define a container with the following parameters:</p>
<pre><code>stdin: true
tty: true
</code></pre>
<p>Also if</p>
<pre><code>`docker run -it`
</code></pre>
<p>binds the executed container process to the calling client's stdin and tty, what would setting those flags on a container bind its process to?</p>
<p>I could only envision one scenario, which is: if the command is, let's say, bash, then you can <strong>attach</strong> to it (i.e. that running bash instance) later, after the container is running.</p>
<p>But then again, one could just run <code>docker run -it</code> when necessary. I mean, one launches a new bash and does whatever needs to be done. No need to attach to a running one.</p>
<p>So the first part of the question is:</p>
<p>a) What is happening under the hood ?</p>
<p>b) Why and when to use it, what difference does it make and what is the added value ?</p>
|
<p>AFAIK, setting <code>stdin: true</code> in the container spec will simply keep the container process's stdin open, waiting for somebody to attach to it with <code>kubectl attach</code>.</p>
<p>As for <code>tty: true</code> - this simply tells Kubernetes that stdin should also be a terminal. Some applications may change their behavior based on the fact that stdin is a terminal, e.g. add some interactivity, command completion, colored output and so on. But in most cases you generally don't need it.</p>
<p>Btw <code>kubectl exec -it POD bash</code> also contains the <code>-it</code> flags, but in that case they really are needed, because you're spawning a shell process in the container's namespace which expects both stdin and a terminal from the user.</p>
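<p>For illustration, a minimal hedged sketch of a pod spec with both fields set (the names and image are assumptions):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: shell-demo
spec:
  containers:
  - name: shell
    image: busybox:stable
    command: ["sh"]
    stdin: true   # keep stdin open so `kubectl attach -it shell-demo -c shell` works
    tty: true     # allocate a pseudo-terminal for that stdin
</code></pre>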
|
<p>I followed <a href="https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/master/deploy/single/all-in-one-postgres.yaml" rel="nofollow noreferrer">this</a> to install kong-ingress-controller on my master node. But when I deploy postgres-0 it stays pending because its volume claim is not bound. I am using my own cloud. Here is my YAML for creating the PersistentVolume: </p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: postgre-pv-volume
namespace : kong
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/var/lib/postgresql/data"
</code></pre>
<p>When I run <code>kubectl describe pod postgres-0 -n kong</code> </p>
<p>Result:</p>
<pre><code>Name: postgres-0
Namespace: kong
Priority: 0
Node: <none>
Labels: app=postgres
controller-revision-hash=postgres-59ccf8fcf7
statefulset.kubernetes.io/pod-name=postgres-0
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: StatefulSet/postgres
Containers:
postgres:
Image: postgres:9.5
Port: 5432/TCP
Host Port: 0/TCP
Environment:
POSTGRES_USER: kong
POSTGRES_PASSWORD: kong
POSTGRES_DB: kong
PGDATA: /var/lib/postgresql/data/pgdata
Mounts:
/var/lib/postgresql/data from datadir (rw,path="pgdata")
/var/run/secrets/kubernetes.io/serviceaccount from default-token-g7828 (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
datadir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: datadir-postgres-0
ReadOnly: false
default-token-g7828:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-g7828
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling <unknown> default-scheduler pod has unbound immediate PersistentVolumeClaims
Warning FailedScheduling <unknown> default-scheduler pod has unbound immediate PersistentVolumeClaims
</code></pre>
<p>Please help me. Thanks</p>
|
<p>The problem may lie in a bad or missing configuration of the <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/" rel="nofollow noreferrer">StorageClass</a>. </p>
<p><strong>1.</strong> First, ensure that you have a StorageClass called manual.</p>
<pre><code>$ kubectl get storageclass
</code></pre>
<p>The name of a StorageClass object is significant, and is how users can request a particular class. </p>
<p><strong>2.</strong> To create a StorageClass you have to define a configuration file; here is an example:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: manual
provisioner: xxx
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
- debug
volumeBindingMode: Immediate
</code></pre>
<p>Storage classes have a <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner" rel="nofollow noreferrer">provisioner</a> that determines what volume plugin is used for provisioning PVs. This field must be specified (xxx).</p>
<p>Take note on such definition:</p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
</code></pre>
<p><a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#local" rel="nofollow noreferrer">Local volumes</a> do not currently support dynamic provisioning, however a StorageClass should still be created to delay volume binding until pod scheduling. This is specified by the WaitForFirstConsumer volume binding mode.</p>
<p>Delaying volume binding allows the scheduler to consider all of a pod’s scheduling constraints when choosing an appropriate PersistentVolume for a PersistentVolumeClaim.</p>
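<p>As a hedged illustration, the claim template in the StatefulSet (which produces <code>datadir-postgres-0</code> from your describe output) must request the same class as your PV, otherwise the claim stays unbound; the size and access mode here are assumptions:</p>
<pre><code>  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      storageClassName: manual      # must match the PV's storageClassName
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
</code></pre>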
<p>Let me know if it helps.</p>
|
<p>I have a daemon running on my kubernetes cluster whose purpose is to accept gRPC requests and turn those into commands for creating, deleting, and viewing pods in the k8s cluster. It runs as a service in the cluster and is deployed through helm.</p>
<p>The helm chart creates a service account for the daemon, "tass-daemon", and gives it a cluster role that is supposed to allow it to manipulate pods in a specific namespace, "tass-arrays".</p>
<p>However, I'm finding that the service account does not appear to be working, with my daemon reporting a permissions error when it tries to contact the K8S API server:</p>
<pre><code>2021/03/04 21:17:48 pods is forbidden: User "system:serviceaccount:default:tass-daemon" cannot list resource "pods" in API group "" in the namespace "tass-arrays"
</code></pre>
<p>I confirmed that the code works if I use the default service account with a manually added clusterrole, but attempting to do the setup through the helm chart appears to not work.</p>
<p>However, if I compare the tass-daemon clusterrole to that of admin (which clearly has the permissions to manipulate pods in all namespaces), they appear to be identical:</p>
<pre><code>[maintainer@headnode helm]$ kubectl describe clusterrole admin | grep -i pods
pods [] [] [create delete deletecollection patch update get list watch]
pods/attach [] [] [get list watch create delete deletecollection patch update]
pods/exec [] [] [get list watch create delete deletecollection patch update]
pods/portforward [] [] [get list watch create delete deletecollection patch update]
pods/proxy [] [] [get list watch create delete deletecollection patch update]
pods/log [] [] [get list watch]
pods/status [] [] [get list watch]
[maintainer@headnode helm]$ kubectl describe clusterrole tass-daemon | grep -i pods
pods/attach [] [] [create delete deletecollection patch update get list watch]
pods [] [] [create delete deletecollection patch update get list watch]
pods.apps [] [] [create delete deletecollection patch update get list watch]
pods/status [] [] [get list watch]
</code></pre>
<p>Based on this setup, I would expect the tass-daemon service account to have the appropriate permissions for pod management.</p>
<p>The following is my clusterrole.yaml from my helm chart:</p>
<pre><code>{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
labels:
app: {{ template "tass-daemon.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "tass-daemon.fullname" . }}
namespace: "tass-arrays"
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- create delete deletecollection patch update get list watch
- apiGroups:
- ""
resources:
- pods/attach
verbs:
- create delete deletecollection patch update get list watch
- apiGroups:
- ""
resources:
- pods/status
verbs:
- get list watch
- apiGroups:
- apps
</code></pre>
<p>And my clusterrolebinding.yaml:</p>
<pre><code>{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
labels:
app: {{ template "tass-daemon.name" .}}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "tass-daemon.fullname" . }}
namespace: "tass-arrays"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: {{ template "tass-daemon.fullname" . }}
subjects:
- kind: ServiceAccount
name: {{ template "tass-daemon.fullname" . }}
namespace: {{ .Release.Namespace }}
{{- end -}}
</code></pre>
<p>If I change the roleRef name to "admin", it works, but admin is more permissive than we'd prefer.</p>
<p>And finally here's my serviceaccount.yaml:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app: {{ template "tass-daemon.name" . }}
chart: {{ .Chart.Name }}-{{ .Chart.Version }}
heritage: {{ .Release.Service }}
release: {{ .Release.Name }}
name: {{ template "tass-daemon.fullname" . }}
</code></pre>
<p>Clearly I'm doing something wrong, so what is the proper way to configure the clusterrole so that my daemon can manipulate pods in the "tass-arrays" namespace?</p>
|
<p>As I have mentioned in the comment section, the apiVersion <code>rbac.authorization.k8s.io/v1beta1</code> is deprecated; use <code>rbac.authorization.k8s.io/v1</code> instead.
The <code>v1</code> API is stable, and you should use the stable version whenever possible.</p>
<p>Read more: <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="nofollow noreferrer">rbac-kubernetes</a>.</p>
<p>About the problem with <code>RBAC</code>: the rules section of your <code>ClusterRole</code> should look like the following. Note that each verb has to be its own list item, not one space-separated string:</p>
<pre><code>rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list"]
</code></pre>
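<p>If the daemon also needs to create and delete pods, as the chart intends, a hedged fuller sketch could be:</p>
<pre><code>rules:
- apiGroups: [""]
  resources: ["pods", "pods/attach"]
  verbs: ["get", "list", "watch", "create", "delete", "deletecollection", "patch", "update"]
- apiGroups: [""]
  resources: ["pods/status"]
  verbs: ["get", "list", "watch"]
</code></pre>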
<p>See: <a href="https://stackoverflow.com/questions/52951777/listing-pods-from-inside-a-pod-forbidden">pod-rbac-forbidden</a>.</p>
|
<p>I have used GKE for years and I wanted to experiment with GKE in Autopilot mode. My initial expectation was that it starts with 0 worker nodes and, whenever I deploy a workload, it automatically scales the nodes based on the requested memory and CPU. However, after I created a GKE cluster, there is nothing related to nodes in the UI, but in the <code>kubectl get nodes</code> output I see there are 2 nodes. Do you have any idea how to start that cluster with no nodes initially?</p>
<p><a href="https://i.stack.imgur.com/fcqXO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fcqXO.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/wGkGW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wGkGW.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/73efy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/73efy.png" alt="enter image description here" /></a></p>
|
<p>The principle of GKE Autopilot is that you do NOT worry about the nodes; they are managed for you. No matter whether there are 1, 2 or 10 nodes in your cluster, you don't pay for them; you pay only when a pod runs in your cluster (CPU and memory usage over time).</p>
<p>So you can't control the number of nodes, the number of pools, or low-level management like that; it's something similar to a serverless product (Google prefers to call it a "nodeless" cluster).</p>
<p>On the other hand, it's great to already have provisioned resources in your cluster that you don't pay for; you will deploy and scale more quickly!</p>
<hr />
<p><strong>EDIT 1</strong></p>
<p>You can have a look to <a href="https://cloud.google.com/kubernetes-engine/pricing#cluster_management_fee_and_free_tier" rel="nofollow noreferrer">the pricing</a>. You have a flat fee of $74.40 per month ($0.10/hour) for the control plane. And then you pay your pods (CPU + Memory).</p>
<p>You have 1 free cluster per Billing account.</p>
|
<p>How can I reload the OpenShift config YAML automatically when a config value is updated?</p>
<p>Currently I am redeploying to reload the OpenShift config YAML. I don't want to redeploy the application one more time for a config value change.</p>
<p>Below is my config.yaml file. How can I write it so that the pod is triggered and redeployed automatically when a config value changes?</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: template.openshift.io/v1
kind: Template
metadata:
name: config
parameters:
- name: APPLICATION
required: true
- name: BLUE_GREEN_VERSION
required: true
- name: SQL_CONNECTION
required: true
- name: API_URL
required: true
objects:
- apiVersion: v1
kind: ConfigMap
metadata:
name: ${APPLICATION}-${BLUE_GREEN_VERSION}
data:
appsettings.json: |-
{
"ConnectionStrings": {
"SQLDB": "${SQL_CONNECTION}"
},
"URL":"${API_URL}",
"AllowedHosts": "*"
}
</code></pre>
|
<p>I was checking the watch command, and it did not behave as expected.</p>
<p>So I made a simple script and tested it.</p>
<pre><code>#!/bin/bash
# Usage: ./watch <file> <command> [interval_seconds]
# Record the initial modification time of the watched file
INITIAL_LAST_MODIFICATION_TIME=$(stat -c %Z "$1")
while true
do
    # Check the current modification time
    LAST_MODIFICATION_TIME=$(stat -c %Z "$1")
    # Execute the command if the file has changed
    if [[ "$LAST_MODIFICATION_TIME" != "$INITIAL_LAST_MODIFICATION_TIME" ]]
    then
        $2
        INITIAL_LAST_MODIFICATION_TIME=$LAST_MODIFICATION_TIME
    fi
    sleep ${3:-5}
done
</code></pre>
<p>Example usage will be,</p>
<pre><code>./name_of_the_script
{file_to_watch}
{command_to_execute_if_file_have_changed}
{time_to_wait_before_checking_again_in_seconds_default_is_5}
</code></pre>
<p>In your case, it would be something like the following:
<code>./watch config.yaml "oc apply -f config.yaml" 2</code></p>
<p>Hope this will be helpful. There are also a few third-party tools that provide more options, like <a href="https://github.com/eradman/entr" rel="nofollow noreferrer">entr</a>, <a href="https://github.com/inotify-tools/inotify-tools/wiki" rel="nofollow noreferrer">inotify-tools</a> and <a href="https://github.com/watchexec/watchexec" rel="nofollow noreferrer">watchexec</a>.</p>
|
<p>I am following this guide to consume secrets: <a href="https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/index.html#secrets-propertysource" rel="nofollow noreferrer">https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/index.html#secrets-propertysource</a>.</p>
<p>It says roughly.</p>
<ol>
<li><p>save secrets</p>
</li>
<li><p>reference secrets in deployment.yml file</p>
<pre><code> containers:
- env:
- name: DB_USERNAME
valueFrom:
secretKeyRef:
name: db-secret
key: username
- name: DB_PASSWORD
valueFrom:
secretKeyRef:
name: db-secret
key: password
</code></pre>
</li>
<li><p>Then it says "You can select the Secrets to consume in a number of ways:" and gives 3 examples. However, without doing any of these steps I can still see the secrets in my env perfectly. Furthermore, the operations in step 1 and step 2 operate independently of Spring Boot (save and move secrets into environment variables).</p>
</li>
</ol>
<p>My questions:</p>
<ol>
<li>If I make the changes suggested in step 3 what changes/improvements does it make for my container/app/pod?</li>
<li>Is there no way to be able to avoid all the mapping in step 1 and put all secrets in an env?</li>
<li>they write -Dspring.cloud.kubernetes.secrets.paths=/etc/secrets to source all secrets; how did they know the secrets were in a folder called /etc/secrets?</li>
</ol>
|
<p>You can mount all environment variables from a secret in the following way:</p>
<pre><code> containers:
- name: app
envFrom:
- secretRef:
name: db-secret
</code></pre>
<p>As for where Spring gets secrets from - I'm not an expert in Spring but it seems there is already an explanation in the link you provided:</p>
<blockquote>
<p>When enabled, the Fabric8SecretsPropertySource looks up Kubernetes for
Secrets from the following sources:</p>
<p>Reading recursively from secrets mounts</p>
<p>Named after the application (as defined by spring.application.name)</p>
<p>Matching some labels</p>
</blockquote>
<p>So it takes secrets from the secrets mount (if you mount them as volumes). It also scans the Kubernetes API for secrets (I guess in the same namespace the app is running in). It can do that by utilizing the Kubernetes serviceaccount token which by default is always <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server" rel="nofollow noreferrer">mounted</a> into the pod. What it can actually read depends on the RBAC permissions given to the pod's serviceaccount.</p>
<p>So it tries to search secrets using Kubernetes API and match them against application name or application labels.</p>
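<p>Regarding your third question, <code>/etc/secrets</code> is simply wherever you choose to mount the secret as a volume. A hedged sketch reusing the names from above (the volume name is an assumption):</p>
<pre><code>    containers:
    - name: app
      volumeMounts:
      - name: db-secret-vol
        mountPath: /etc/secrets
        readOnly: true
    volumes:
    - name: db-secret-vol
      secret:
        secretName: db-secret
</code></pre>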
|
<p>I'm hosting my frontend & backend servers with GKE (Gcloud Kubernetes Engine) with private nodes in a default VPC network like this</p>
<pre class="lang-sh prettyprint-override"><code>gcloud beta container clusters create-auto my-production-cluster \
--enable-private-nodes \
--network "projects/$PROJECT_ID/global/networks/default" \
--subnetwork "projects/$PROJECT_ID/regions/$_GKE_LOCATION/subnetworks/default" \
--cluster-ipv4-cidr "/17" \
--services-ipv4-cidr "/22"
</code></pre>
<p>I exec into pods using <code>kubectl</code> like this:</p>
<pre class="lang-sh prettyprint-override"><code>gcloud container clusters get-credentials my-production-cluster
kubectl exec --stdin --tty my-pod-abcd-xyz -- bash
</code></pre>
<p><strong>So my question is:</strong></p>
<ol>
<li>Is that safe? Can hackers access our cluster & pods somehow?</li>
<li>If it's not safe, what should I do to improve it?</li>
<li>Does a bastion host provide any benefit in my case? AFAIK, it doesn't because the cluster exposes only ports that I specify (ingress & load balancer). I only specify port 80 for Cloudflare HTTPS mapping</li>
</ol>
|
<p>It's a best practice to deploy a private cluster. That means the control plane and the workers are private and have no public IPs, so there is no public access and hackers on the internet can't reach them.</p>
<p>If you want to access those internal resources, you must be in the internal network. A common way is to have a bastion host with one leg in the public network and another in the internal network.</p>
<p>Another solution, if you want to interact with the control plane, is to use <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/authorized-networks" rel="nofollow noreferrer">authorized networks</a> to whitelist the IPs allowed to access the control plane. I don't like that solution much, but it exists!</p>
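<p>A hedged sketch of enabling that on an existing cluster (the cluster name comes from your command, the CIDR is an assumption):</p>
<pre><code>gcloud container clusters update my-production-cluster \
    --enable-master-authorized-networks \
    --master-authorized-networks 203.0.113.0/24
</code></pre>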
<p>In terms of security, yes, it's safer to keep your internal resources internal, but even in the case of public exposure you must present authorized credentials to access your control plane. It's an additional layer of security!</p>
<hr />
<p>Then, for your services, you can expose them externally through Load Balancer and Ingress config in K8S. No bastion requirement for the services.</p>
|
<p>I'm using Python
and want to be able to check the status of a pod with:</p>
<pre><code>kubernetes.client.CoreV1Api().read_namespaced_pod_status(name=name, namespace='default')
</code></pre>
<p>but this gives me a forbidden, 403 response,
while:</p>
<pre><code>kubernetes.client.CoreV1Api().list_pod_for_all_namespaces()
</code></pre>
<p>works fine.
The rights I have set up in a ClusterRole look like this:</p>
<pre><code>rules:
- apiGroups: ["", "extensions"]
resources: ["pods", "services", "ingresses"]
verbs: ["get", "watch", "list", "create", "delete"]
</code></pre>
<p>So what do I need to modify to make it work?</p>
|
<p>The pod's status is a sub-resource of the <code>pods</code> resource, so you have to add it to your ClusterRole as follows:</p>
<pre><code>rules:
- apiGroups: ["", "extensions"]
resources: ["pods","pods/status" "services", "ingresses"]
verbs: ["get", "watch", "list", "create", "delete"]
</code></pre>
|
<p>I have a 3 node K8 v1.21 cluster in AWS and looking for SOLID config to run a script using a cronjob. I have seen many documents on here and Google using cronjob and hostPath to Persistent Volumes/Claims to using ConfigMaps, the list goes one.</p>
<p>I keep getting "Back-off restarting failed container/CrashLoopBackOff" errors.</p>
<p>Any help is much appreciated.</p>
<p><a href="https://i.stack.imgur.com/XdRzy.jpg" rel="nofollow noreferrer">cronjob.yaml</a></p>
<p>The script I am trying to run is basic for testing only</p>
<pre><code>#!/bin/bash
kubectl create deployment nginx --image=nginx
</code></pre>
<p>Still getting the same error.</p>
<p><a href="https://i.stack.imgur.com/WHTSg.png" rel="nofollow noreferrer">kubectl describe pod/xxxx</a></p>
<p>This hostPath example works in an AWS cluster created using eksctl.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: redis-hostpath
spec:
containers:
- image: redis
name: redis-container
volumeMounts:
- mountPath: /test-mnt
name: test-vol
volumes:
- name: test-vol
hostPath:
path: /test-vol
</code></pre>
<p><strong>UPDATE</strong></p>
<p>Tried running your config in GCP on a fresh cluster. Only thing I changed was the /home/script.sh to /home/admin/script.sh</p>
<p>Did you test this on your cluster?</p>
<pre><code>Warning FailedPostStartHook 5m27s kubelet Exec lifecycle hook ([/home/mchung/script.sh]) for Container "busybox" in Pod "dumb-job-1635012900-qphqr_default(305c4ed4-08d1-4585-83e0-37a2bc008487)" failed - error: rpc error: code = Unknown desc = failed to exec in container: failed to create exec "0f9f72ccc6279542f18ebe77f497e8c2a8fd52f8dfad118c723a1ba025b05771": cannot exec in a deleted state: unknown, message: ""
Normal Killing 5m27s kubelet FailedPostStartHook
</code></pre>
|
<p>Assuming you're running it in a remote multi-node cluster (since you mentioned AWS in your question), <code>hostPath</code> is NOT a good option there for volume mounts. Your best choice would be to use a <strong>ConfigMap</strong> and mount it as a volume.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: redis-script
data:
script.sh: |
# write down your script here
</code></pre>
<p>And then:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: redis-job
spec:
schedule: '*/5 * * * *'
jobTemplate:
spec:
template:
spec:
          containers:
          - name: redis-container
            image: redis
            args:
            - /bin/sh
            - -c
            - /home/user/script.sh
            volumeMounts:
            - name: redis-data
              mountPath: /home/user/script.sh
              subPath: script.sh
          volumes:
          - name: redis-data
            configMap:
              name: redis-script
              defaultMode: 0755   # make the mounted script executable
</code></pre>
<p>Hope this helps. Let me know if you face any difficulties.</p>
<h2>Update:</h2>
<p>I think you're doing something wrong. <code>kubectl</code> isn't something you should run from another container / pod, because it requires the kubectl binary to exist in that container and an appropriate context to be set. I'm putting a working manifest below for you to understand the whole concept of running a script as part of a cron job:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: script-config
data:
script.sh: |-
name=StackOverflow
echo "I love $name <3"
---
apiVersion: batch/v1
kind: CronJob
metadata:
name: dumb-job
spec:
schedule: '*/1 * * * *' # every minute
jobTemplate:
spec:
template:
spec:
          containers:
          - name: busybox
            image: busybox:stable
            command:
            - sh
            - /home/script.sh        # run the mounted script as the container's main process
            volumeMounts:
            - name: some-volume
              mountPath: /home/script.sh
              subPath: script.sh     # mount just the script file at that path
          volumes:
          - name: some-volume
            configMap:
              name: script-config
          restartPolicy: OnFailure
</code></pre>
<p>What it does is print some text to STDOUT every minute. The script runs as the container's main command; running it from a <code>postStart</code> hook is fragile here, because the busybox container exits immediately and the hook can fire against an already-terminated container (which is the error you saw). Please also note that I have put in the script only commands the container is capable of executing, and <code>kubectl</code> is certainly not something that exists in that container out-of-the-box. I hope that is enough to answer your question.</p>
|
<p>So I was able to connect to a GKE cluster from a java project and run a job using this:
<a href="https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-examples/src/main/java/io/fabric8/kubernetes/examples/JobExample.java" rel="nofollow noreferrer">https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-examples/src/main/java/io/fabric8/kubernetes/examples/JobExample.java</a></p>
<p>All I needed was to configure my machine's local kubectl to point to the GKE cluster.</p>
<p>Now I want to ask if it is possible to trigger a job inside a GKE cluster from a Google Cloud Function, which means using the same library <a href="https://github.com/fabric8io/kubernetes-client" rel="nofollow noreferrer">https://github.com/fabric8io/kubernetes-client</a>
but from a serverless environment. I have tried to run it but obviously kubectl is not installed in the machine where the cloud function runs. I have seen something like this working using a lambda function from AWS that uses <a href="https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/ecs/AmazonECS.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/ecs/AmazonECS.html</a>
to run jobs in an ECS cluster. We're basically trying to migrate from that to GCP, so we're open to any suggestion regarding triggering the jobs in the cluster from some kind of code hosted in GCP in case the cloud function can't do it.</p>
|
<p>Yes, you can trigger your GKE job from a serverless environment. But you have to be aware of some edge cases.</p>
<p>With Cloud Functions, you don't manage the runtime environment. Therefore, you can't control what is installed in the container, and you can't install kubectl on it.</p>
<p>You have 2 solutions:</p>
<ul>
<li>Kubernetes exposes an API. From your Cloud Function you can simply call that API; kubectl is just an API-call wrapper, nothing more! Of course, it requires more effort, but if you want to stay on Cloud Functions you don't have any other choice (see the sketch after this list)</li>
<li>You can switch to Cloud Run. With Cloud Run, you can define your own container and therefore install kubectl in it, in addition to your webserver (you have to wrap your function in a webserver with Cloud Run, but it's pretty easy ;) )</li>
</ul>
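<p>As a hedged illustration of the first option, creating a Job is just an authenticated HTTP POST to the <code>batch/v1</code> endpoint; the endpoint, token and payload file here are assumptions:</p>
<pre><code># job.json holds a regular batch/v1 Job manifest in JSON form
curl --cacert ca.crt \
     -H "Authorization: Bearer ${TOKEN}" \
     -H "Content-Type: application/json" \
     -X POST \
     -d @job.json \
     https://${GKE_ENDPOINT}/apis/batch/v1/namespaces/default/jobs
</code></pre>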
<p>Whatever solution you choose, you also have to be aware of the GKE control plane exposure. If it is publicly exposed (generally not recommended), there is no issue. But if you have a private GKE cluster, your control plane is only accessible from the internal network. To solve that with a serverless product, you have to create a serverless VPC connector to bridge the serverless Google-managed VPC with your GKE control-plane VPC.</p>
|
<h2>Current Architecture</h2>
<p>I have a microservice (Let's call it the publisher) which generates events for a particular resource. There's a web app which shows the updated list of the resource, but this view doesn't talk to the microservice directly. There's a view service in between which calls the publisher whenevever the web app request the list of resources.<br />
To show the latest update, the web app uses the long polling method which calls the view service and the view service calls the publisher (The view service does more than that, for eg. collecting data from different other sources, but that's probably not relevant here). The publisher also publishes any update on all the resources using pubsub which the view service is currently not using.</p>
<h2>Proposed Changes</h2>
<p>To reduce the API calls and make the system more efficient, I am trying to implement websockets instead of the long polling. In theory it works like the web app will subscribe to events from the view service. The view service will listen to the resource update events from the publisher and whenever there's any message on the pubsub topic, it'll send an event to the web app.</p>
<h2>Problem</h2>
<p>The problem with websocket which I am experiencing now is that the view service is deployed using Kubernetes and thus can have multiple pods running at the same time. Different instances of the web app can listen to the events from those different pods, thus it may happen that the pubsub message is received by pod A, but the websocket listener which require this resource is connected to the pod B. If pod A ack the message to discard it since it can't find any event listener, we will lose the event and the web app will not have the updated data. If pod A nack the message so that it can be listened by any other pod which may be benefiting with that message, it may happen that there's no pod which have any websocket event listener which can be benefited with that message and the message will keep circulating blocking the pubsub queue forever.</p>
<h2>Possible Solutions</h2>
<p>The first solution which came to my mind is to create a different subscription for each pod, so every pod will receive the same event at least once and we won't be blocking the pubsub queue. However, the challenge in this approach is that the pods can die anytime, leaving their subscriptions abandoned, and after a few weeks I'll be dealing with tons of abandoned subscriptions with overflowing messages.</p>
<p>Another solution is to have a database where the pubsub messages are stored; the different pods would query it periodically to receive the events, check for any listener, and remove the message from there. But it doesn't solve the problem when there's no listener for the events. Additionally, I don't want to add a database just because of this issue (the current long polling is a much better architecture than this).</p>
<p>Third solution is to implement the websockets inside the publisher, however, this is the least possible solution as the codebase is really huge there and no one likes to add any new functionality there.</p>
<p>Final solution is to have just one pod of the view service all the time, but then it defeats the purpose of having a microservice and being on Kubernetes. Additionally, it won't be scalable.</p>
<h2>Question</h2>
<p>Is there any recommended way or any other way I can connect the pubsub events to the web app using websockets without adding unnecessary complexity? I would love an example if there's one available anywhere else.</p>
|
<p>There is no easy solution. First of all, in the websocket pattern, the pod is responsible for sending the events to the web app, and therefore for gathering the correct backend events for that web app. In this design, each pod needs to filter which messages it should deliver.</p>
<p>The most naive implementation would be to duplicate all the messages in all pods (and therefore in all subscriptions), but it's not really efficient and it costs money (in addition to the time spent discarding all the unneeded messages).</p>
<hr />
<p>We can imagine a more efficient solution: create, on each pod, a list of subscriptions, one per open webapp channel. On these subscriptions you can add a filter parameter. Of course, the publisher needs to add an attribute that the subscriptions can filter on.</p>
<p>When the session is over, the subscription must be deleted.</p>
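<p>A hedged sketch of such a per-session subscription, assuming the publisher adds a <code>sessionId</code> attribute to every message (topic, names and the expiration value are assumptions):</p>
<pre><code>gcloud pubsub subscriptions create view-sub-SESSION_ID \
    --topic=resource-updates \
    --message-filter='attributes.sessionId = "SESSION_ID"' \
    --expiration-period=1d   # lets abandoned subscriptions clean themselves up
</code></pre>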
<p>In case of a crash, I propose this pattern: store in a database the subscription ID, the filter content (the webapp session) and the pod ID in charge of the filtering and delivery. Then detect the pod crash, or run a scheduled job, to check that all the running pods are registered in the database. If a pod is in the database but not running, delete all its related subscriptions.</p>
<p>If you are able to detect the pod crash in real time, you can dispatch the active webapp sessions to the other running pods, or to the newly created one.</p>
<hr />
<p>As you can see, the design isn't simple and requires controls, checks, and garbage collection.</p>
|
<p>I am running a simple app based on an api and web interface in Kubernetes. However, I can't seem to get the api to talk to the web interface. In my local environment, I just define a variable API_URL in the web interface with eg. localhost:5001 and the web interface correctly connects to the api. As api and web are running in different pods I need to make them talk to each other via services in Kubernetes. So far, this is what I am doing, but without any luck.</p>
<p>I set-up a deployment for the API</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: api-deployment
spec:
replicas: 1
selector:
matchLabels:
component: api
template:
metadata:
labels:
component: api
spec:
containers:
- name: api
image: gcr.io/myproject-22edwx23/api:latest
ports:
- containerPort: 5001
</code></pre>
<p>I attach a service to it:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: api-cluster-ip-service
spec:
type: NodePort
selector:
component: api
ports:
- port: 5001
targetPort: 5001
</code></pre>
<p>and then create a web deployment that should connect to this api.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: web-deployment
spec:
replicas: 1
selector:
matchLabels:
component: web
template:
metadata:
labels:
component: web
spec:
containers:
- name: web
image: gcr.io/myproject-22edwx23/web:latest
ports:
- containerPort: 5000
env:
- name: API_URL
value: http://api-cluster-ip-service:5001
</code></pre>
<p>afterwards, I add a service for the web interface + ingress etc but that seems irrelevant for the issues. I am wondering if the setting of API_URL correctly picks up the host of the api via <a href="http://api-cluster-ip-service:5001" rel="nofollow noreferrer">http://api-cluster-ip-service:5001</a>?</p>
<p>Or can I not rely on Kubernetes providing the appropriate DNS for the api, and should the web app call the api via the public internet?</p>
|
<p>If you want to check <em>API_URL</em> variable value, simply run</p>
<pre><code>kubectl exec -it web-deployment-pod env | grep API_URL
</code></pre>
<p>The kube-dns service listens for service and endpoint events from the Kubernetes API and updates its DNS records as needed. These events are triggered when you create, update or delete Kubernetes services and their associated pods.</p>
<p>kubelet sets each new pod's search option in <strong>/etc/resolv.conf</strong></p>
<p>Still, if you want to make HTTP calls from one pod to another via a cluster service, it is recommended to refer to the service by its cluster DNS name, as follows:</p>
<pre><code>api-cluster-ip-service.default.svc.cluster.local
</code></pre>
<p>You should already have the service host and port assigned to environment variables within your web pod, so there's no need to re-invent them:</p>
<pre><code>sukhoversha@sukhoversha:~/GCP$ kk exec -it web-deployment-675f8fcf69-xmqt8 env | grep -i service
API_CLUSTER_IP_SERVICE_PORT=tcp://10.31.253.149:5001
API_CLUSTER_IP_SERVICE_PORT_5001_TCP=tcp://10.31.253.149:5001
API_CLUSTER_IP_SERVICE_PORT_5001_TCP_PORT=5001
API_CLUSTER_IP_SERVICE_PORT_5001_TCP_ADDR=10.31.253.149
</code></pre>
<p>To read more about <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#a-records" rel="nofollow noreferrer">DNS for Services</a>. A service defines <a href="https://kubernetes.io/docs/concepts/services-networking/service/#environment-variables" rel="nofollow noreferrer">environment variables naming the host and port</a>.</p>
|
<p>I have a GKE Ingress that creates a L7 Load Balancer. I'd like to use a Cloud Tasks Queue to manage asynchronous tasks for one of the web applications running behind the GKE Ingress. This documentation says it is possible to use Cloud Tasks with GKE <a href="https://cloud.google.com/tasks/docs/creating-http-target-tasks#java" rel="nofollow noreferrer">https://cloud.google.com/tasks/docs/creating-http-target-tasks#java</a>.</p>
<p>I'm connecting the dots here, I'd really appreciate it if someone can help answer these questions.</p>
<ul>
<li>What HTTP endpoint should I configure for the Cloud Tasks queue?</li>
</ul>
<p>Is it better to create a separate Internal HTTP load balancer to target the Kubernetes Services?</p>
|
<p>The HTTP endpoint is the public URL that you want to call to run your async task. Use the public IP/FQDN of your L7 load balancer, followed by the correct path to reach your service and trigger the correct endpoint on it.</p>
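<p>For example, a hedged sketch of enqueuing such a task with gcloud (the queue name, location and URL are assumptions):</p>
<pre><code>gcloud tasks create-http-task \
    --queue=my-queue \
    --location=us-central1 \
    --url=https://your-ingress-domain.example.com/api/async-work \
    --method=POST
</code></pre>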
<p>You can't use an internal HTTP load balancer (even though it would be a nice solution for increasing security and blocking external/unwanted calls). Indeed, Cloud Tasks (and Cloud Scheduler, Pub/Sub and others) can, for now, only reach public URLs, not private/VPC-internal IPs.</p>
|
<p>After applying the following <code>ResourceQuota</code> <code>compute-resources</code> to my GKE Cluster</p>
<pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
name: compute-resources
spec:
hard:
limits.cpu: "1"
limits.memory: 1Gi
</code></pre>
<p>and updating a <code>Deployment</code> to</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-service
spec:
selector:
matchLabels:
app: my-service
tier: backend
track: stable
replicas: 2
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 50%
template:
metadata:
labels:
app: my-service
tier: backend
track: stable
spec:
containers:
- name: my-service
image: registry/namespace/my-service:latest
ports:
- name: http
containerPort: 8080
resources:
requests:
memory: "128Mi"
cpu: "125m"
limits:
memory: "256Mi"
cpu: "125m"
</code></pre>
<p>the scheduling fails 100% of tries due to <code>pods "my-service-5bc4c68df6-4z8wp" is forbidden: failed quota: compute-resources: must specify limits.cpu,limits.memory</code>. Since <code>limits</code> and <code>requests</code> are specified and they fulfill the limit, I don't see a reason why the pods should be forbidden.</p>
<p><a href="https://stackoverflow.com/questions/32034827/how-pod-limits-resource-on-kubernetes-enforced-when-the-pod-exceed-limits-after">How pod limits resource on kubernetes enforced when the pod exceed limits after pods is created ?</a> is a different question.</p>
<p>I upgraded my cluster to 1.13.6-gke.0.</p>
|
<p>I was about to suggest testing within a separate namespace, but I see that you have already tried that.</p>
<p>As another workaround, try to set up default limits by enabling the LimitRanger admission controller and configuring it, e.g.:</p>
<pre><code>apiVersion: v1
kind: LimitRange
metadata:
name: cpu-limit-range
spec:
limits:
- default:
memory: 256Mi
cpu: 125m
defaultRequest:
cpu: 125m
memory: 128Mi
type: Container
</code></pre>
<p>Now if a Container is created in the default namespace, and the Container does not specify its own values for CPU request and CPU limit, the Container is given a default CPU limit of 125m and a default memory limit of 256Mi.</p>
<p>Also, after setting up the LimitRange, make sure you remove your deployment and that there are no pods stuck in a failed state. </p>
|
<p>I'm trying to get the visitor IP in my Laravel application, which uses Nginx on Google Kubernetes Engine behind a load balancer.</p>
<p>I have set up TrustProxies.php like this:</p>
<pre><code><?php
namespace App\Http\Middleware;
use Illuminate\Http\Request;
use Fideloper\Proxy\TrustProxies as Middleware;
class TrustProxies extends Middleware
{
/**
* The trusted proxies for this application.
*
* @var array
*/
protected $proxies = '*';
/**
* The headers that should be used to detect proxies.
*
* @var int
*/
protected $headers = Request::HEADER_X_FORWARDED_ALL;
}
</code></pre>
<p>I have also tried</p>
<pre><code>protected $proxies = '**';
</code></pre>
<p>And</p>
<pre><code>protected $proxies = ['loadbalancer_ip_here'];
</code></pre>
<p>No matter what I have tried, it will always return load balancer ip.</p>
<p>Might this be caused by Nginx configuration? Help appreciated.</p>
|
<p>You have to set the external traffic policy in your nginx Service: </p>
<pre><code>externalTrafficPolicy: "Local"
</code></pre>
<p>and also </p>
<pre><code>healthCheckNodePort: "numeric port number for the service"
</code></pre>
<p>More details in <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip" rel="nofollow noreferrer">Preserving the client source IP</a> doc</p>
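<p>For illustration, a hedged sketch of where these fields sit in the Service manifest (names and ports are assumptions; <code>healthCheckNodePort</code> can usually be left out and will be auto-assigned):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local   # preserve the client source IP
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
  selector:
    app: nginx-ingress
</code></pre>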
|
<p>I am currently trying to implement a CI/CD pipeline using Docker, Kubernetes and Jenkins. When I created the Kubernetes deployment YAML file for the pipeline, I did not include a time stamp; I was only using the imagePullPolicy with the <code>latest</code> tag in the YAML file. Regarding the latest pull, I already had a discussion here; the following is the link to that discussion,</p>
<p><a href="https://stackoverflow.com/questions/58539362/docker-image-not-pulling-latest-from-dockerhub-com-registry?noredirect=1#comment103401768_58539362">Docker image not pulling latest from dockerhub.com registry</a></p>
<p>After this discussion, I included the time stamp in my deployment YAML like the following:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test-kube-deployment
labels:
app: test-kube-deployment
spec:
replicas: 3
selector:
matchLabels:
app: test-kube-deployment
template:
metadata:
labels:
app: test-kube-deployment
annotations:
date: "+%H:%M:%S %d/%m/%y"
spec:
imagePullSecrets:
- name: "regcred"
containers:
- name: test-kube-deployment-container
image: spacestudymilletech010/spacestudykubernetes:latest
imagePullPolicy: Always
ports:
- name: http
containerPort: 8085
protocol: TCP
</code></pre>
<p>Here I modified my script to include the time stamp by adding the following in the template: </p>
<pre><code>annotations:
date: "+%H:%M:%S %d/%m/%y"
</code></pre>
<p>My service file like following,</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-service
spec:
type: NodePort
ports:
- port: 8085
targetPort: 8085
protocol: TCP
name: http
selector:
app: test-kube-deployment
</code></pre>
<p>My Jenkinsfile contains the following:</p>
<pre><code>stage ('imagebuild')
{
steps
{
sh 'docker build -f /var/lib/jenkins/workspace/jpipeline/pipeline/Dockerfile -t spacestudymilletech010/spacestudykubernetes:latest /var/lib/jenkins/workspace/jpipeline/pipeline'
sh 'docker login --username=<my-username> --password=<my-password>'
sh 'docker push spacestudymilletech010/spacestudykubernetes:latest'
}
}
stage ('Test Deployment')
{
steps
{
sh 'kubectl apply -f deployment/testdeployment.yaml'
sh 'kubectl apply -f deployment/testservice.yaml'
}
}
</code></pre>
<p>But the deployment is still not pulling the latest image from the Docker Hub registry. How can I modify these scripts to resolve the latest-pull problem?</p>
|
<blockquote>
<p>The default pull policy is <code>IfNotPresent</code>, which causes the kubelet to skip pulling an image if it already exists. If you would like to always force a pull, you can do one of the following:</p>
<ul>
<li>set the <code>imagePullPolicy</code> of the container to <code>Always</code>;</li>
<li>omit the <code>imagePullPolicy</code> and use <code>:latest</code> as the tag for the image to use;</li>
<li>omit the <code>imagePullPolicy</code> and the tag for the image to use;</li>
<li>enable the <code>AlwaysPullImages</code> admission controller.</li>
</ul>
</blockquote>
<p>Basically, either use <code>:latest</code> or use <code>imagePullPolicy: Always</code>. </p>
<p>Try it and let me know how it goes!</p>
<p>Referenced from <a href="https://kubernetes.io/docs/concepts/containers/images/" rel="nofollow noreferrer">here</a> </p>
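<p>One hedged note on the timestamp annotation from the question: the value <code>"+%H:%M:%S %d/%m/%y"</code> is applied literally, so the manifest never actually changes between builds and <code>kubectl apply</code> does not trigger a new rollout. Assuming kubectl 1.15+ on the Jenkins agent, the deployment stage could force a fresh rollout instead:</p>
<pre><code>stage ('Test Deployment')
{
  steps
  {
    sh 'kubectl apply -f deployment/testdeployment.yaml'
    sh 'kubectl apply -f deployment/testservice.yaml'
    // re-create the pods so imagePullPolicy: Always re-fetches the :latest image
    sh 'kubectl rollout restart deployment/test-kube-deployment'
  }
}
</code></pre>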
|
<p>I have the following configuration:</p>
<p>daemonset:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
name: nginx-ingress
namespace: nginx-ingress
spec:
selector:
matchLabels:
app: nginx-ingress
template:
metadata:
labels:
app: nginx-ingress
spec:
serviceAccountName: nginx-ingress
containers:
- image: nginx/nginx-ingress:1.4.2-alpine
imagePullPolicy: Always
name: nginx-ingress
ports:
- name: http
containerPort: 80
hostPort: 80
- name: https
containerPort: 443
hostPort: 443
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
args:
- -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
- -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
</code></pre>
<p>main config:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-config
namespace: nginx-ingress
data:
proxy-set-headers: "nginx-ingress/custom-headers"
proxy-connect-timeout: "11s"
proxy-read-timeout: "12s"
client-max-body-size: "5m"
gzip-level: "7"
use-gzip: "true"
use-geoip2: "true"
</code></pre>
<p>custom headers:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: custom-headers
namespace: nginx-ingress
data:
X-Forwarded-Host-Test: "US"
X-Using-Nginx-Controller: "true"
X-Country-Name: "UK"
</code></pre>
<p>I am encountering the following situations:</p>
<ul>
<li>If I change one of "proxy-connect-timeout", "proxy-read-timeout" or "client-max-body-size", I can see the changes appearing in the generated configs in the controller pods</li>
<li>If I change one of "gzip-level" (even tried "use-gzip") or "use-geoip2", I see no changes in the generated configs (eg: "gzip on;" is always commented out and there's no other mention of zip, the gzip level doesn't appear anywhere)</li>
<li>The custom headers from "ingress-nginx/custom-headers" are not added at all (was planning to use them to pass values from geoip2)</li>
</ul>
<p>Otherwise, all is well: the controller logs show that my only backend (an expressJs app that dumps headers) is served correctly, I get expected responses and so on.</p>
<p>I've copied as much as I could from the examples on github, making a minimum of changes but no results (including when looking at the examples for custom headers).</p>
<p>Any ideas or pointers would be greatly appreciated.</p>
<p>Thanks!</p>
|
<p>Use Ingress rule annotations. For example:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "server: hide";
      more_set_headers "X-Content-Type-Options: nosniff";
      more_set_headers "X-Frame-Options: DENY";
      more_set_headers "X-Xss-Protection: 1";
spec:
  tls:
  - hosts:
</code></pre>
<p>I used nginx server 1.15.9</p>
|
<p>Why can't I have a setup like the one below?
I want to mount vol1 into a pod under one subPath and vol2 into the same pod under another subPath, with both volumes backed by the same claim.</p>
<pre><code> volumes:
- name:vol1
persistentVolumeClaim:
claimName: testclaim
- name: vol2
persistentVolumeClaim:
claimName: testclaim
</code></pre>
<p>containers volume mounts:</p>
<pre><code> volumeMounts:
- name: vol1
mountPath: /test/
subPath: abc
- name: vol2
mountPath: /test2/
subPath: xyz
</code></pre>
<p>What is the alternative for this kind of setup?</p>
|
<p>Try this</p>
<pre><code> volumeMounts:
- name: vol1
mountPath: /test
subPath: abc
- name: vol1
mountPath: /test2
subPath: xyz
volumes:
- name: vol1
persistentVolumeClaim:
claimName: testclaim
</code></pre>
|
<p>If a Pod's status is <code>Failed</code>, Kubernetes will try to create new Pods until it reaches the <code>terminated-pod-gc-threshold</code> in <code>kube-controller-manager</code>. This leaves many <code>Failed</code> Pods in the cluster that need to be cleaned up.</p>
<p>Are there other reasons except <code>Evicted</code> that will cause Pod <code>Failed</code>?</p>
|
<p>There can be many causes for the POD status to be <code>FAILED</code>. You just need to check for problems (if any exist) by running the command</p>
<pre><code>kubectl -n <namespace> describe pod <pod-name>
</code></pre>
<p>Carefully check the <code>EVENTS</code> section, where all the events that occurred during POD creation are listed. Hopefully you can pinpoint the cause of failure from there.</p>
<p>However there are several reasons for POD failure, some of them are the following:</p>
<ul>
<li>Wrong image used for POD.</li>
<li>Wrong command/arguments are passed to the POD.</li>
<li>Kubelet failed to check POD liveness (i.e., the liveness probe failed).</li>
<li>POD failed health check.</li>
<li>Problem in network CNI plugin (misconfiguration of CNI plugin used for networking).</li>
</ul>
<p><br>For example:<br><br>
<a href="https://i.stack.imgur.com/VMHTf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VMHTf.png" alt="pod failed due to image pull error" /></a></p>
<p>In the above example, the image "not-so-busybox" couldn't be pulled as it doesn't exist, so the pod FAILED to run. The pod status and events clearly describe the problem.</p>
|
<p>Can we run minikube without RBAC? Please see the attached screenshot. It looks like RBAC is enabled by default.</p>
<p><a href="https://i.stack.imgur.com/3P1mZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3P1mZ.png" alt="enter image description here"></a></p>
|
<p>Since minikube v0.26.0 the default bootstrapper is kubeadm - which enables RBAC by default.</p>
<p>For the previous versions, you have to explicitly enable it by adding a flag </p>
<pre><code>-extra-config=apiserver.Authorization.Mode=RBAC
</code></pre>
<p>Otherwise it will not be enabled</p>
<p>To disable RBAC in latest versions of the minikube, set following flag</p>
<pre><code>--extra-config=apiserver.authorization-mode=AlwaysAllow
</code></pre>
<p>If that doesn't work, try the workaround suggested <a href="https://github.com/kubernetes/minikube/issues/2342#issuecomment-479179415" rel="nofollow noreferrer">here</a></p>
|
<p>This question is more to give me some direction on how to go about the problem in general, not a specific solution to the problem.</p>
<p>I have a working kubernetes cluster that's using an nginx ingress as the gate to the outside world. Right now everything is on minikube, but the end goal is to move it eventually to GKE, EKS or AKS (for on premise clients that want our software).</p>
<p>For this I'm going to use helm charts to parameterize the yaml files and ENV variables needed to set up the resources. I will keep using nginx as ingress to avoid maintaining alb ingress or other cloud-specific ingress controllers.</p>
<p>My question is:
I'm not sure how to manage TLS certificates and then how to point the ingress to a public domain for people to use it.</p>
<p>I wanted some guidance on how to go about this in general. Is the TLS certificate something that the user can provide to the helm chart before configuring it? Where can I see a small example of this? And finally, is the domain the responsibility of the helm chart, or is this something that has to be set up on the DNS provider (Route53, for example)? Is there an example you can suggest I take a look at?</p>
<p>Thanks a lot for the help.</p>
|
<p>Installing certificates using Helm is perfectly fine just make sure you don't accidentally put certificates into public Git repo. Best practice is to have those certificates only on your local laptop and added to .gitignore. After that you may tell Helm to <a href="https://helm.sh/docs/chart_template_guide/accessing_files/" rel="nofollow noreferrer">grab</a> those certificates from their directory.</p>
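<p>For illustration only, a minimal sketch (the file paths, secret name, and a <code>certs/</code> directory inside the chart are assumptions): a template that reads the local certificate files and renders them into a TLS Secret.</p>
<pre><code># templates/tls-secret.yaml (hypothetical) - reads certs/tls.crt and
# certs/tls.key from the chart directory and base64-encodes them
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-tls
type: kubernetes.io/tls
data:
  tls.crt: {{ .Files.Get "certs/tls.crt" | b64enc }}
  tls.key: {{ .Files.Get "certs/tls.key" | b64enc }}
</code></pre>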
<p>Regarding the DNS - you may use <a href="https://github.com/kubernetes-sigs/external-dns" rel="nofollow noreferrer">external-dns</a> to make Kubernetes create DNS records for you. You will need first to integrate external-dns with your DNS provider and then it will watch ingress resources for domain names and automatically create them.</p>
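<p>As a rough sketch (the hostname and the provider integration are assumptions), once external-dns is wired to your DNS provider it can pick up the desired record from an annotation like this:</p>
<pre><code># Hypothetical Service; external-dns creates app.example.com
# pointing at the load balancer IP once the provider is configured
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    external-dns.alpha.kubernetes.io/hostname: app.example.com
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
</code></pre>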
|
<p>I have a single node Kubernetes cluster. I want the pod I make to have access to /mnt/galahad on my local computer (which is the host for the cluster).</p>
<p>Here is my Kubernetes config yaml:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: galahad-test-distributor
namespace: galahad-test
spec:
volumes:
- name: place-for-stuff
hostPath:
path: /mnt/galahad
containers:
- name: galahad-test-distributor
image: vergilkilla/distributor:v9
volumeMounts:
- name: place-for-stuff
mountPath: /mnt
resources:
limits:
memory: "200Mi"
requests:
memory: "100Mi"
</code></pre>
<p>I start my pod like such:</p>
<pre><code>kubectl apply -f ./create-distributor.yaml -n galahad-test
</code></pre>
<p>I get a terminal into my newly-made pod:</p>
<pre><code>kubectl exec -it galahad-test-distributor -n galahad-test -- /bin/bash
</code></pre>
<p>I go to /mnt in my pod and it doesn't have anything from /mnt/galahad. I make a new file in the host /mnt/galahad folder - doesn't reflect in the pod. How do I achieve this functionality to have the host path files/etc. reflect in the pod? Is this possible in the somewhat straightforward way I am trying here (defining it per-pod definition without creating separate PersistentVolumes and PersistentVolumeRequests)?</p>
|
<p>Your yaml file looks good.</p>
<p>Using this configuration:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: galahad-test-distributor
namespace: galahad-test
spec:
volumes:
- name: place-for-stuff
hostPath:
path: /mnt/galahad
containers:
- name: galahad-test-distributor
image: busybox
args: [/bin/sh, -c,
'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
volumeMounts:
- name: place-for-stuff
mountPath: /mnt
resources:
limits:
memory: "200Mi"
requests:
memory: "100Mi"
</code></pre>
<p>I ran this and everything worked as expected:</p>
<pre><code>>>> kubectl apply -f create-distributor.yaml # side node: you don't need
# to specify the namespace here
# since it's inside the yaml file
pod/galahad-test-distributor created
>>> touch /mnt/galahad/file
>>> kubectl -n galahad-test exec galahad-test-distributor ls /mnt
file
</code></pre>
<p>Are you sure you are adding your files in the right place? For instance, if you are running your cluster inside a VM (e.g. minikube), make sure you are adding the files inside the VM, not on the machine hosting the VM.</p>
|
<p>For setting these values, it is unclear as to what the best practices are. Is the following an accurate <strong><em>generalization</em></strong> ?</p>
<p>Memory and CPU <em>request</em> values should be slightly higher than what the container requires to idle or do very minimal work.</p>
<p>Memory and CPU <em>limit</em> values should be slightly higher than what the container requires when operating at maximum capacity.</p>
<p>References:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container</a></li>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/</a></li>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/</a></li>
</ul>
|
<p>Request values should be set around the regular workload numbers. They shouldn't be close to the idle numbers, because request values are what the Scheduler uses to determine where to put Pods. More specifically, the Scheduler will schedule Pods on a Node as long as the sum of the requests is lower than the Node's maximum capacity. If your request values are too low, you risk overpopulating your Node, causing some of the Pods to get evicted. </p>
<p>For the limit, set it at the value beyond which you consider it acceptable for the container to be throttled (CPU) or OOM-killed (memory). It should be larger than a regular high load, or your Pod risks being killed whenever it experiences a peak in workload.</p>
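<p>For illustration (the numbers are assumptions, not recommendations), a container spec following this idea could look like:</p>
<pre><code># requests sized for the regular workload, limits above the expected peak
resources:
  requests:
    cpu: "250m"
    memory: "256Mi"
  limits:
    cpu: "500m"
    memory: "512Mi"
</code></pre>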
|
<p>I'm new to istio, and I want to access my app through istio ingress gateway, but I do not know why it does not work.
This is my <code>kubenetes_deploy.yaml</code> file content:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: batman
labels:
run: batman
spec:
#type: NodePort
ports:
- port: 8000
#nodePort: 32000
targetPort: 7000
#protocol: TCP
name: batman
selector:
run: batman
#version: v1
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: batman-v1
spec:
replicas: 1
selector:
matchLabels:
run: batman
template:
metadata:
labels:
run: batman
version: v1
spec:
containers:
- name: batman
image: leowu/batman:v1
ports:
- containerPort: 7000
env:
- name: MONGODB_URL
value: mongodb://localhost:27017/articles_demo_dev
- name: mongo
image: mongo
</code></pre>
<p>And here is my istio <code>ingress_gateway.yaml</code> config file:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: batman-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 15000
name: http
protocol: HTTP
hosts:
- "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: batman
spec:
hosts:
- "*"
gateways:
- batman-gateway
http:
- match:
route:
- destination:
host: batman
port:
number: 7000
</code></pre>
<p>I created the ingress gateway from the example, and it looks fine, but when I run <code>kubectl get svc istio-ingressgateway -n istio-system</code> I can't see the listening port <code>15000</code> in the output. I don't know why.</p>
<p>Is there anyone can help me? Thanks.</p>
|
<p>First of all, as @Abhyudit Jain mentioned you need to correct port in VirtualService to 8000</p>
<p>And then you just add another port to your <strong>istio-ingressgateway</strong> service</p>
<pre><code>kubectl edit svc istio-ingressgateway -n istio-system
</code></pre>
<p>add section:</p>
<pre><code>ports:
- name: http
nodePort: 30001
port: 15000
protocol: TCP
targetPort: 80
</code></pre>
<p>This will accept HTTP traffic on port <strong>15000</strong> and route it to your destination service on port <strong>8000</strong>.</p>
<p>simple schema as follows: </p>
<pre><code>incoming traffic --> istio-gateway service --> istio-gateway --> virtual service --> service --> pod
</code></pre>
|
<p>So I have GCP set up and Kubernetes, I have a web app (Apache OFBiz) running on pods in the GKE cluster. We have a domain that points itself to the web app, so essentially it's accessible from anywhere on the internet. Our issue is since this is a school project, we want to limit the access to the web app to the internal network on GCP, we want to simulate a VPN connection. I have a VPN gateway set up, but I have no idea what to do on any random computer to simulate a connection to the internal network on GCP. Do I need something else to make this work? What are the steps on the host to connect to GCP? And finally, how do I go about limiting access to the webapp so only people in the internal network have access to the webapp?</p>
|
<p>When I want to test a VPN, I simply create a new VPC in my project and connect the two with Cloud VPN. Then, in the new VPC, you can create VMs that simulate computers on the other side of the VPN and thus simulate what you want.</p>
|
<p>Starting point: I have a universal JS app (built on Next.js, Nuxt.js, Universal Angular, etc.) and I want it to be up & running on GCP (I think the same question can refer to AWS, principles are the same). The app is not a "real" back-end (with db connections, business logic, etc.), it's more kinda "frontend-backend" - all it does is only SSR of frontend. The app is containerized using Docker. The application should be production-ready (shouldn't be deployed on some beta services).</p>
<p>I have encountered 4 possible options:</p>
<ol>
<li>Compute engine</li>
<li>GKE (Kubernetes)</li>
<li>Cloud Run</li>
<li>App engine</li>
</ol>
<p>The question is next: what is the GCP service, that best fits this app's need?</p>
|
<p>I'm a big fan of Cloud Run, and I can't recommend a better place for this. Here is why, option by option:</p>
<ul>
<li>Compute Engine: a traditional server with all the boring things to manage yourself (high availability, backups, patching/updating/upgrading). It doesn't scale to 0, and for HA you need at least 3 VMs in the same zone. Quite expensive.</li>
<li>GKE: very similar to Compute Engine. In addition, you need Kubernetes skills.</li>
<li>App Engine: a great solution, but less customizable than Cloud Run. In addition, you can't serve a container directly on App Engine standard; it's only possible on the flex version with a custom runtime (which doesn't scale to 0, but to 1). The main advantage here is the easier server management compared to Compute Engine, and native regional HA is included.</li>
</ul>
<p>For Cloud Run, Cloud Functions and App Engine (standard version with automatic/basic scaling mode), the service can scale to 0. Thus, when a request comes in, the service is started and takes a while before being able to serve the request (about 300-500 ms, unless you use a heavy framework like Spring Boot, in which case it takes several seconds).</p>
<p>If this cold start is a problem, you can set a minimum number of instances to keep one instance warm and thus avoid the cold start (see the gcloud sketch at the end of this answer).</p>
<ul>
<li>You can't do this with Cloud Functions</li>
<li>On App Engine, you pay, without any discount, for the unused instance (kept warm but not serving traffic)</li>
<li>With Cloud Run, you pay 10x less for the instance cost when it's idle (a 90% discount).</li>
</ul>
<p>Sadly, the min-instances setting on Cloud Run is still in Beta (I'm sure it will very soon be GA, but today it's not "production ready" as you put it).</p>
<p><em>Note: from my experience, Beta version are production ready, you simply don't have financial compensation in case of issue</em></p>
<p>IMO, I recommend you test Cloud Run (which is in GA) without the min-instances param and see if the cold start is a real issue for you. If it is, you have the beta param, and it may well be GA by the time you need it!</p>
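<p>For reference, a hedged sketch of the deploy command (service name, image and region are placeholders, and the exact flag/release track may differ depending on your gcloud version):</p>
<pre><code># keep one instance warm to avoid cold starts
gcloud beta run deploy my-service \
  --image gcr.io/my-project/my-image \
  --region us-central1 \
  --platform managed \
  --min-instances 1
</code></pre>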
|
<p>When you edit a deployment to update the docker image, I need to run a one-time script which changes parts of my application database and sends an email that the rolling upgrade process is complete and the result is passed / failed.</p>
<p>Is there a hook where I can attach this script to?</p>
|
<p>No, there is no such thing in Kubernetes. Usually this is done by a CI/CD pipeline.</p>
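<p>A minimal sketch of such a pipeline step (deployment, label, script and mail-sender names are assumptions): wait for the rollout to finish, then run the one-time script and send the notification.</p>
<pre><code># update the image and wait until the rollout completes
kubectl set image deployment/myapp myapp=myrepo/myapp:v2
kubectl rollout status deployment/myapp --timeout=300s

# run the one-time script inside one of the new pods
POD=$(kubectl get pods -l app=myapp -o jsonpath='{.items[0].metadata.name}')
kubectl exec "$POD" -- /scripts/post-upgrade.sh && STATUS=passed || STATUS=failed

# hypothetical notification step
./send-mail.sh "Rolling upgrade complete: $STATUS"
</code></pre>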
|
<p>I have GKE cluster, 2 node pools and around 9 helm charts, only 1 helm chart should use one node pool and all other charts use the other one, I made on this 1 chart the node affinity to go only to the specific node pool - but is there a way to create node anti-affinity on node pools? or do I have to create anti-affinity on all the other 8 charts so they use only second node pool? Seems kinda excessive and there should be easier way but I don't see it in docs.</p>
|
<p>The principle is that pods are scheduled onto nodes. When this operation is performed, the constraints (CPU, memory, affinity, nodeSelector, ...) are checked and enforced.</p>
<p>So, if you want to prevent execution on the specific node pool for the 8 other charts, yes, you need to explicitly set the affinity or anti-affinity on each pod.</p>
<p>You can also use the NodeSelector feature for this.</p>
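<p>As an illustrative sketch (the pool name <code>special-pool</code> is an assumption), on GKE you can keep the other charts off that pool by matching the built-in node pool label with a <code>NotIn</code> operator:</p>
<pre><code>affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: cloud.google.com/gke-nodepool
          operator: NotIn
          values:
          - special-pool
</code></pre>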
|
<h1>My Objective</h1>
<p>I want to use GCP impersonation to fetch my GKE cluster credentials. And then I want to run <code>kubectl</code> commands.</p>
<h1>Initial Context</h1>
<ol>
<li>I have a GCP project called <code>rakib-example-project</code></li>
<li>I have 2 ServiceAccounts in the project called:
<ul>
<li><strong>[email protected]</strong>
<ul>
<li>it has the project-wide <code>roles/owner</code> role - so it can do anything and everything inside the GCP project</li>
</ul>
</li>
<li><strong>[email protected]</strong>
<ul>
<li>it only has the project-wide <code>roles/iam.serviceAccountTokenCreator</code> role - so it can <strong>impersonate</strong> the <strong>owner</strong> ServiceAccount in the GCP project</li>
</ul>
</li>
</ul>
</li>
<li>I have 1 GKE cluster in the project called <code>my-gke-cluster</code></li>
</ol>
<h1>The Problem</h1>
<p>✅ I have authenticated as the <strong>executor</strong> ServiceAccount:</p>
<pre class="lang-sh prettyprint-override"><code>$ gcloud auth activate-service-account --key-file=my_executor_sa_key.json
Activated service account credentials for: [[email protected]]
</code></pre>
<p>✅ I have fetched GKE cluster credentials by impersonating the <strong>owner</strong>:</p>
<pre class="lang-sh prettyprint-override"><code>$ gcloud container clusters get-credentials my-gke-cluster \
--zone asia-southeast1-a \
--project rakib-example-project \
--impersonate-service-account=owner@rakib-example-project.iam.gserviceaccount.com
WARNING: This command is using service account impersonation. All API calls will be executed as [[email protected]].
WARNING: This command is using service account impersonation. All API calls will be executed as [[email protected]].
Fetching cluster endpoint and auth data.
kubeconfig entry generated for my-gke-cluster.
</code></pre>
<p>❌ I am failing to list cluster nodes due to missing <code>container.nodes.list</code> permission:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get nodes
Error from server (Forbidden): nodes is forbidden: User "[email protected]" cannot list resource "nodes" in API group "" at the cluster scope: requires one of ["container.nodes.list"] permission(s).
</code></pre>
<p>But I have already impersonated the <strong>Owner</strong> ServiceAccount. Why would it still have missing permissions? 😧😧😧</p>
<h1>My Limitations</h1>
<p>It works well if i grant my <strong>executor</strong> ServiceAccount the <code>roles/container.admin</code> role. However, I am not allowed to grant such roles to my <strong>executor</strong> ServiceAccount due to compliance requirements. I can only impersonate the <strong>owner</strong> ServiceAccount and THEN do whatever I want through it - not directly.</p>
|
<p>If you have a look at your kubeconfig file at this location <code>~/.kube/config</code>, you can see the list of authorizations and the secrets, such as:</p>
<pre><code>- name: gke_gdglyon-cloudrun_us-central1-c_knative
user:
auth-provider:
config:
access-token: ya29.<secret>-9XQmaTQodj4kS39w
cmd-args: config config-helper --format=json
cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud
expiry: "2020-08-25T17:48:39Z"
expiry-key: '{.credential.token_expiry}'
token-key: '{.credential.access_token}'
name: gcp
</code></pre>
<p>You see external references (<code>expiry-key</code> and <code>token-key</code>) and a <code>cmd-path</code>. The command path is interesting because it is called whenever a new token needs to be generated.</p>
<p>However, you don't see any mention of the impersonation. You have to add it to the gcloud configuration so it is used by default. For this, add it to your config like this:</p>
<pre><code>gcloud config set auth/impersonate_service_account [email protected]
</code></pre>
<p>Now, every use of the gcloud CLI will use the impersonated service account, and that's what you need to generate a valid access_token to reach your GKE cluster.</p>
|
<p>I'm new to istio. I have a simple ingress gateway yaml file, and the listening port is 26931, but after I applied the yaml, the port 26931 does not appear in the set of ports which the ingress gateway exposes. So am I missing some necessary step or something else?</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: batman-gateway
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 26931
name: http
protocol: HTTP
hosts:
- "*"
</code></pre>
|
<p>You are exposing ports not with the Gateway object, but with the istio-ingressgateway service. </p>
<pre><code>kubectl edit svc istio-ingressgateway -n istio-system
</code></pre>
<p>So if you want to expose port 26931, you should do it in the ingress gateway service: </p>
<pre><code> ports:
- name: http
nodePort: 30001
port: 26931
protocol: TCP
targetPort: 80
</code></pre>
<p>Also commented on your previous post- <a href="https://stackoverflow.com/questions/56643594">How to configure ingress gateway in istio?</a></p>
|
<p>I am new to the whole container-orchestration world and was wondering if the microservices I deploy with Kubernetes need a secure connection too or if the Ingress TLS termination is enough.</p>
<p>For example I have an NGINX microservice with currently no SSL/TLS setup whatsoever. When users communicate with this microservice, the connection is encrypted because I set up an Ingress with TLS termination.</p>
<p>Are there any security drawbacks in such a scenario? I find it very hard to find proper literature regarding this topic.</p>
|
<p>It definitely will work, I mean ingress with TLS termination. It depends on the security requirements of your project. If you are OK with unencrypted traffic inside your cluster, you can go with it.</p>
<p>Though, if you will be running microservices in production, the best practice for secure service-to-service communication is an <a href="https://istio.io/docs/concepts/security/" rel="nofollow noreferrer">Istio mesh</a> with <a href="https://istio.io/docs/concepts/security/#mutual-tls-authentication" rel="nofollow noreferrer">mutual TLS authentication</a>.</p>
<p>What it does is inject a sidecar proxy (Envoy) for each of your services:
<a href="https://i.stack.imgur.com/BUt8E.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BUt8E.png" alt="enter image description here"></a></p>
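<p>For illustration, a minimal sketch of enforcing mutual TLS mesh-wide, assuming a recent Istio release that serves the <code>PeerAuthentication</code> API and uses <code>istio-system</code> as the root namespace:</p>
<pre><code>apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
</code></pre>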
|
<p>i set up a testservice in Kubernetes in my default namespace. Following this <a href="https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/" rel="nofollow noreferrer">Kubernetes Tutorial</a></p>
<pre><code>NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default example-service NodePort 10.102.218.101 <none> 8080:30242/TCP 3h38m
</code></pre>
<p>A curl to
<code>http://my.server.com:30242</code> returns the correct output:
<code>Hello Kubernetes!</code>
Now I want to set up an ingress which makes the application available on a different endpoint without using port 30242.
So I set up an ingress: </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: 5ingress
annotations:
ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: my.server.com
http:
paths:
- path: /testing
backend:
serviceName: example-service
servicePort: 80
</code></pre>
<p>The Ingress is deployed but the new path is not working. <code>kubectl get ingress --all-namespaces</code> shows:</p>
<pre><code>kube-system 5ingress my.server.com 80, 443 16m
</code></pre>
<p>A curl to <a href="http://my.server.com/testing" rel="nofollow noreferrer">http://my.server.com/testing</a>
returns:</p>
<pre><code><html>
<head><title>308 Permanent Redirect</title></head>
<body>
<center><h1>308 Permanent Redirect</h1></center>
<hr><center>nginx/1.15.10</center>
</body>
</html>
</code></pre>
<p>My expected output would be that curl <a href="http://my.server.com/testing" rel="nofollow noreferrer">http://my.server.com/testing</a> returns Hello Kubernetes! What do i miss and how can i solve it ?</p>
|
<p>The first step will be correcting servicePort in your ingress spec.</p>
<p>Your <code>example-service</code> is exposed on port <strong>8080</strong> while servicePort in the ingress definition is <strong>80</strong> .</p>
<p>It would also be good to see your ingress controller description, if there is one.</p>
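<p>Based on the service output above, the backend section should look like this (only the <code>servicePort</code> changes):</p>
<pre><code>      paths:
      - path: /testing
        backend:
          serviceName: example-service
          servicePort: 8080
</code></pre>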
|
<p>I am writing ansible scripts for deploying services using Kubernetes, I am stuck with a step that is for the post-deployment process:</p>
<p>I have deployed a service with "<strong>replicas: 3</strong>", and all the replicas are up and running. Now my problem is that I have to do a migration, for which I have to get into each container and run a script already present there.</p>
<p>I can do it manually by getting into the container individually and then run the script but this will again require manual intervention.</p>
<p>What I want to achieve is that once the deployment is done and all the replicas are up and running, I run the scripts by getting into the containers, with all these steps performed by the Ansible script and no manual effort required.</p>
<p>Is there a way to do this?</p>
|
<p>Take a look at <a href="https://docs.ansible.com/ansible/latest/collections/community/kubernetes/k8s_exec_module.html" rel="nofollow noreferrer">k8s_exec</a> module.</p>
<pre><code>- name: Check RC status of command executed
community.kubernetes.k8s_exec:
namespace: myproject
pod: busybox-test
command: cmd_with_non_zero_exit_code
register: command_status
ignore_errors: True
- name: Check last command status
debug:
msg: "cmd failed"
when: command_status.return_code != 0
</code></pre>
|
<p>I have a single image that I'm trying to deploy to an AKS cluster. The image is stored in Azure container registry and I'm simply trying to apply the YAML file to get it loaded into AKS using the following command:</p>
<blockquote>
<p>kubectl apply -f myPath\myimage.yaml</p>
</blockquote>
<p>kubectl keeps complaining that I'm missing the required "selector" field and that the field "spec" is unknown. This seems like a basic image configuration so I don't know what else to try.</p>
<blockquote>
<p>kubectl : error: error validating "myimage.yaml": error validating
data: [ValidationError(Deployment.spec): unknown field "spec" in
io.k8s.api.apps.v1.DeploymentSpec, ValidationError(Deployment.spec):
missing required field "selector" in
io.k8s.api.apps.v1.DeploymentSpec]; if you choose to ignore these
errors, turn validation off with --validate=false At line:1 char:1</p>
</blockquote>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: myimage
spec:
replicas: 1
template:
metadata:
labels:
app: myimage
spec:
containers:
- name: myimage
image: mycontainers.azurecr.io/myimage:v1
ports:
- containerPort: 5000
</code></pre>
|
<p>As specified in the error message, <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">deployments</a> require a selector field inside their spec. You can look at the link for some examples.</p>
<p>Also, do note that there are two spec fields. One for the deployment and one for the pod used as template. Your spec for the pod is misaligned. It should be one level deeper.</p>
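<p>A corrected sketch of the manifest from the question (same names and image), with the required <code>selector</code> added and the Pod template's <code>spec</code> indented one level deeper:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: myimage
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myimage
  template:
    metadata:
      labels:
        app: myimage
    spec:
      containers:
      - name: myimage
        image: mycontainers.azurecr.io/myimage:v1
        ports:
        - containerPort: 5000
</code></pre>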
|
<p>I've installed Seldon on a K8s cluster with Istio enabled. I want to use Istio to secure the REST APIs using security protocols from GCP (such as IAP or JWT using a service account). What is the configuration needed to enforce both authentication and authorization for APIs deployed using Seldon Core? Would really appreciate it if there were some examples or boilerplate YAML files I could follow.</p>
|
<p>You can use IAP on your backend if you have an HTTPS load balancer. So, configure your cluster to use an <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">external HTTPS load balancer</a>. Because you use Istio with TLS termination, I recommend having a look at <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-xlb#https_tls_between_load_balancer_and_your_application" rel="nofollow noreferrer">this part of the documentation</a>.</p>
<p>Then, you can go to the IAP menu and activate it on the backend of your choice.</p>
|
<p>I've read a lot about Services (NodePort and LB) <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/</a> but I still have a dilemma about what to use.
I have an AKS cluster in Azure. In the same Virtual Network I have a VM outside the cluster that should target a specific app (container pod) on port 9000.
The app in the container pod runs on port 9000.
I have two options:</p>
<ul>
<li>expose that service as a NodePort on some port in the range 30000-32767, let's say 30001, but in that case I must change all my outside VMs and apps so that they no longer target and connect to port 9000, which is the regular port for this application, but to this new port 30001, which for this app really sounds strange</li>
<li>expose that service in Azure as a LoadBalancer (I can do that because it is a cloud platform), but I do not like that in that case it will expose my service via a public address. This is very bad; I do not want this to be accessible from the internet on a public IP address.</li>
</ul>
<p>I am really confused about what I should choose.
I would appreciate any advice.</p>
<p>Thank you</p>
|
<p>There is a good option to create an <a href="https://learn.microsoft.com/en-us/azure/aks/internal-lb" rel="nofollow noreferrer">Internal Load Balancer</a>, which is accessible only within the Virtual Network.</p>
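<p>A minimal sketch (the service name, port and selector are assumptions); the annotation is what makes AKS provision the load balancer with a private IP inside the VNet:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - port: 9000
    targetPort: 9000
</code></pre>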
|
<p>How can my containerized app determine its own current resource utilization, as well as the maximum limit allocated by Kubernetes? Is there an API to get this info from cAdvisor and/or Kubelet?</p>
<p>For example, my container is allowed to use maximum 1 core, and it's currently consuming 800 millicores. In this situation, I want to drop/reject all incoming requests that are marked as "low priority".</p>
<p>-How can I see my resource utilization & limit from within my container?</p>
<p>Note that this assumes auto-scaling is not available, e.g. when cluster resources are exhausted, or our app is not allowed to auto-scale (further).</p>
|
<p>You can use the Kubernetes <a href="https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/" rel="nofollow noreferrer">Downward Api</a> to fetch the limits and requests. The syntax is:</p>
<pre><code>volumes:
- name: podinfo
downwardAPI:
items:
- path: "cpu_limit"
resourceFieldRef:
containerName: client-container
resource: limits.cpu
divisor: 1m
</code></pre>
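<p>Alternatively, the same values can be exposed as environment variables instead of a volume (the variable names here are just examples), which the application can read directly:</p>
<pre><code>env:
- name: MY_CPU_LIMIT
  valueFrom:
    resourceFieldRef:
      containerName: client-container
      resource: limits.cpu
      divisor: 1m
- name: MY_MEM_LIMIT
  valueFrom:
    resourceFieldRef:
      containerName: client-container
      resource: limits.memory
</code></pre>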
|
<p>I installed a "nginx ingress controller" on my GKE cluster.
I followed <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">this guide</a> to install the nginx ingress controller in the GKE.</p>
<p>When deploying resources for the service and ingress resource I realized that the ingress controller was at <code>0/1</code>
<a href="https://i.stack.imgur.com/r3luB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/r3luB.png" alt="enter image description here"></a></p>
<p>Events telling me:</p>
<pre><code>0/1 nodes are available: 1 node(s) didn't match node selector.
</code></pre>
<p>Now I checked the yaml/describe: <a href="https://pastebin.com/QG3GKxh1" rel="nofollow noreferrer">https://pastebin.com/QG3GKxh1</a>
And found that:</p>
<pre><code>nodeSelector:
kubernetes.io/os: linux
</code></pre>
<p>Which looks fine in my opinion. Since I just used the command of the guide to install the controller I have no idea what went wrong from my side.</p>
<h2>Solution:</h2>
<p>The provided answer showed me the way. My node was labeled with <code>beta.kubernetes.io/os: linux</code> while the controller was looking for <code>kubernetes.io/os: linux</code>.
Changing the <code>nodeSelector</code> in the controller worked.</p>
|
<p><code>nodeSelector</code> is used to constraint the nodes on which your Pods can be scheduled.</p>
<p>With:</p>
<pre><code>nodeSelector:
kubernetes.io/os: linux
</code></pre>
<p>You are saying that Pods must be assigned to a node that has the label
<code>kubernetes.io/os: linux</code>. If none of your nodes has that label, the Pod will never get scheduled.</p>
<p>Removing the selector from the nginx ingress controller or adding the label <code>kubernetes.io/os: linux</code> to any node should fix your issue.</p>
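<p>If you prefer to label the node instead of editing the controller, something like this should work (the node name is a placeholder):</p>
<pre><code>kubectl label node <node-name> kubernetes.io/os=linux
</code></pre>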
|
<p>Flannel on node restarts always.</p>
<p>Log as follows:</p>
<pre><code>root@debian:~# docker logs faa668852544
I0425 07:14:37.721766 1 main.go:514] Determining IP address of default interface
I0425 07:14:37.724855 1 main.go:527] Using interface with name eth0 and address 192.168.50.19
I0425 07:14:37.815135 1 main.go:544] Defaulting external address to interface address (192.168.50.19)
E0425 07:15:07.825910 1 main.go:241] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-arm-bg9rn': Get https://10.96.0.1:443/api/v1/namespaces/kube-system/pods/kube-flannel-ds-arm-bg9rn: dial tcp 10.96.0.1:443: i/o timeout
</code></pre>
<p>master configuration:
<code>ubuntu: 16.04</code></p>
<p>node:</p>
<pre><code>embedded system with debian rootfs(linux4.9).
kubernetes version:v1.14.1
docker version:18.09
flannel version:v0.11.0
</code></pre>
<p>I hope to get flannel running normally on the node.</p>
|
<p>First, for flannel to work correctly, you must pass <code>--pod-network-cidr=10.244.0.0/16</code> to kubeadm init.</p>
<pre><code>kubeadm init --pod-network-cidr=10.244.0.0/16
</code></pre>
<p>Set <code>/proc/sys/net/bridge/bridge-nf-call-iptables</code> to <strong>1</strong> by running </p>
<pre><code>sysctl net.bridge.bridge-nf-call-iptables=1
</code></pre>
<p>Next is to create the clusterrole and clusterrolebinding</p>
<p>as follows:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
</code></pre>
|
<p>I have a situation where on my GCP Cluster , I have two ingresses(A) and (B), which take me to my default-backend service (my UI). These ingresses are of type "gce" since I am hosting my applications on Google-cloud and letting it do the load balancing for me</p>
<p>The kind of situation I am in requires me to have two ingresses (taking into account some factors for downtime and outages). I am trying to explore an option where I can set up some kind of internal k8s service/deployment that can take calls from my Ingress (A) and redirect them to Ingress (B), where Ingress (B) will take me to my default backend UI.</p>
<p>How do I go about solving this problem?</p>
<p>In the example below, how do I write my <code>My-redirection-service-to-app-system-test</code>. My end users will only have to type "system.test.com" in their browsers and that should take them to my UI</p>
<p>It's likely a Kubernetes service/deployment; I'm uncertain how to proceed.</p>
<p>Thanks for the help in advance</p>
<p>Example Ingress(B)</p>
<pre><code> rules:
- host: app.system.test.com
http:
paths:
- path: /ui/*
backend:
          serviceName: ui-frontend
servicePort: 80
</code></pre>
<p>Example Ingress (A)</p>
<pre><code> rules:
- host: system.test.com
http:
paths:
- path: /ui/*
backend:
serviceName: My-redirection-service-to-app-system-test
servicePort: 80
</code></pre>
|
<p>I haven't tried this myself but I would consider using a Service of type <a href="https://kubernetes.io/docs/concepts/services-networking/service/#externalname" rel="nofollow noreferrer">ExternalName</a>. Those services can map requests directed at them to any DNS name. </p>
<p>Something like this should do the trick:</p>
<p>Service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-service
namespace: namespace
spec:
type: ExternalName
externalName: app.system.test.com
</code></pre>
<p>ingress A</p>
<pre><code> rules:
- host: system.test.com
http:
paths:
- path: /ui/*
backend:
serviceName: my-service
servicePort: 80
</code></pre>
<p>Ingress B</p>
<pre><code>rules:
- host: app.system.test.com
http:
paths:
- path: /ui/*
backend:
          serviceName: ui-frontend
servicePort: 80
</code></pre>
|
<p><strong>Google Cloud Platform has made hybrid- and multi-cloud computing</strong> a reality through Anthos which is an open application modernization platform. <strong>How does Anthos work for distributed data platforms?</strong> </p>
<p>For example, I have my data in Teradata On-premise, AWS Redshift and Azure Snowflake. Can Anthos joins all datasets and allow users to query or perform reporting with low latency? What is the equivalent of GCP Anthos in AWS and Azure? </p>
|
<p>Your question is wide. Anthos is designed for managing and distributing containers across several K8S clusters.</p>
<p>For a simpler view, imagine this: you have the Anthos master, and its direct nodes are K8S masters. If you ask the Anthos master to deploy a pod on AWS, for example, the Anthos master forwards the query to the K8S master deployed on EKS, and your pod is deployed on AWS.</p>
<p>Now, rethink your question: what about the data? Nothing magic: if your data are shared across several clusters, you have to federate them with a system designed for this. It's quite similar to having only one cluster with data on different nodes.</p>
<p>Anyway, you point here at the real next challenge of multi-cloud/hybrid deployment. Solutions will emerge to fill this empty space.</p>
<p>Finally, your last point: the Azure and AWS equivalents. There aren't any. </p>
<p>The newest Azure Arc seems to be light: it only allows managing VMs outside the Azure platform with an agent on them. Nothing as manageable as Anthos. For example: you have 3 VMs on GCP and you manage them with Azure Arc. You deployed NGINX on each and you want to set up a load balancer in front of your 3 VMs. I don't see how you can do this with Azure Arc. With Anthos, it's simply a K8S service exposition -> the load balancer will be deployed according to the cloud platform implementation.</p>
<p>About AWS, Outposts is a hardware solution: you have to buy AWS-specific hardware and plug it into your on-prem infrastructure. More on-prem investment in your move-to-cloud strategy? Hard to justify. And it's not compatible with other cloud providers. <strong>BUT</strong> re:Invent is coming next month. Maybe an outsider?</p>
|
<p>I have a GKE cluster setup, dev and stg let's say, and I want apps running in pods on stg nodes to connect to the dev master and execute some commands on that GKE cluster. I have all the setup I need, and when I add the node IP addresses by hand everything works fine, but the IPs keep changing,</p>
<p>so my question is how can I add to Master authorised networks the ever-changing default-pool IPs of nodes from the other cluster?</p>
<hr>
<p>EDIT: I think I found the solution: it's not the node IP but the NAT IP that I have to add to authorized networks, so assuming those don't change I just need to add the NAT, I guess, unless someone knows a better solution?</p>
|
<p>I'm not sure that you are doing the correct thing. In kubernetes, your communication is performed between services, which represent deployed pods, on one or several nodes.</p>
<p>When you communicate with the outside, you reach an endpoint (an API or a specific port). The endpoint is materialized by a loadbalancer that routes the traffic.</p>
<p>Only the kubernetes master cares about the nodes as resource providers (CPU, memory, GPU, ...) inside the cluster. You should never have to reach a cluster node directly without using the standard way. </p>
<p>Potentially you can reach a NodePort service exposed on NodeIP:nodePort.</p>
|
<p>I am trying to kill a container using client-go and the <a href="https://github.com/kubernetes-sigs/e2e-framework/blob/1af0fd64ebd2474f40cbb1a29c8997ed56aba89d/klient/k8s/resources/resources.go#L293" rel="nofollow noreferrer">e2e</a> framework in Golang, but am not able to do it successfully.
An example of the full implementation can be accessed at <a href="https://github.com/kubernetes-sigs/e2e-framework/tree/1af0fd64ebd2474f40cbb1a29c8997ed56aba89d/examples/pod_exec" rel="nofollow noreferrer">e2e</a>; apart from this, I am using the kind image "kindest/node:v1.26.6".</p>
<p>I have tried the following commands but none using the following pieces of code.</p>
<pre><code>args := []string{"kill", "1"}
var stdout, stderr bytes.Buffer
err := cfg.Client().Resources().ExecInPod(ctx, namespace, podName, containerName, args, &stdout, &stderr)
</code></pre>
<pre><code>args = []string{"/bin/sh", "-c", "'kill", "1'"}
err = cfg.Client().Resources().ExecInPod(ctx, namespace, podName, containerName, args, &stdout, &stderr)
</code></pre>
<pre><code>args = []string{"/bin/sh", "-c", "\"kill 1\""}
err = cfg.Client().Resources().ExecInPod(ctx, namespace, podName, containerName, args, &stdout, &stderr)
</code></pre>
<p>But all of them are giving errors. Some are giving:</p>
<p>exec failed: unable to start container process: exec: "kill": executable file not found in $PATH: unknown"</p>
<p>while some are giving</p>
<p>"command terminated with exit code 127" or
"command terminated with exit code 2"</p>
<p>I have also tried the following and it works, but in this case I have a dependency on kubectl, which I want to avoid.</p>
<pre><code>cmdString := fmt.Sprintf("/c kubectl exec -it %s -n %s -c %s -- bash -c 'kill 1'", podName, namespace, containerName)
args := strings.Split(cmdString, " ")
cmd := exec.Command("powershell", args...)
err := cmd.Run()
</code></pre>
|
<p>AFAIK it's generally not possible to kill a container from within a container. That's because PID 1 inside container ignores any signal.</p>
<p>From <a href="https://docs.docker.com/engine/reference/run/#foreground" rel="nofollow noreferrer">Docker documentation</a>:</p>
<blockquote>
<p>A process running as PID 1 inside a container is treated specially by
Linux: it ignores any signal with the default action. As a result, the
process will not terminate on SIGINT or SIGTERM unless it is coded to
do so.</p>
</blockquote>
<p>Instead you should rely on <code>kubectl delete pod</code> functionality.</p>
|
<p>I have the following network policy for restricting access to a frontend service page: </p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
namespace: namespace-a
name: allow-frontend-access-from-external-ip
spec:
podSelector:
matchLabels:
app: frontend-service
ingress:
- from:
- ipBlock:
cidr: 0.0.0.0/0
ports:
- protocol: TCP
port: 443
egress:
- to:
- ipBlock:
cidr: 0.0.0.0/0
ports:
- protocol: TCP
port: 443
</code></pre>
<p>My question is: can I enforce HTTPS with my egress rule (port restriction on 443) and if so, how does this work? Assuming a client connects to the frontend-service, the client chooses a random port on his machine for this connection, how does Kubernetes know about that port, or is there a kind of port mapping in the cluster so the traffic back to the client is on port 443 and gets mapped back to the clients original port when leaving the cluster?</p>
|
<p>You might have a wrong understanding of the network policy (NP). </p>
<p>This is how you should interpret this section:</p>
<pre><code>egress:
- to:
- ipBlock:
cidr: 0.0.0.0/0
ports:
- protocol: TCP
port: 443
</code></pre>
<p>Allow outgoing traffic from the selected pods to TCP port <code>443</code> at any destination within the <code>0.0.0.0/0</code> CIDR. </p>
<p>The thing you are asking </p>
<blockquote>
<p>how does Kubernetes know about that port, or is there a kind of port
mapping in the cluster so the traffic back to the client is on port
443 and gets mapped back to the clients original port when leaving the
cluster?</p>
</blockquote>
<p>is managed by kube-proxy in following way:</p>
<p>For the traffic that goes from pod to external addresses, Kubernetes simply uses SNAT. What it does is replace the pod’s internal source IP:port with the host’s IP:port. When the return packet comes back to the host, it rewrites the pod’s IP:port as the destination and sends it back to the original pod. The whole process is transparent to the original pod, which is not aware of the address translation at all.</p>
<p>Take a look at <a href="https://kubernetes.io/blog/2019/03/29/kube-proxy-subtleties-debugging-an-intermittent-connection-reset/" rel="nofollow noreferrer">Kubernetes networking basics</a> for a better understanding. </p>
|
<p>I have created a private EKS cluster using the Terraform EKS module, but the node group failed to join the cluster.
The auth config seems to be correct.</p>
<pre><code>data:
mapRoles: |
- groups:
- system:bootstrappers
- system:nodes
rolearn: arn:aws:iam::accountId:role/eks-node-role
username: system:node:{{EC2PrivateDNSName}}
</code></pre>
<p>However, when I check the control plane log, I see the following; it seems like the private DNS name is not correctly populated when authenticating with EKS. What could be the reason for that?
How does the node retrieve the privateDNSName? We are working in a restricted VPC environment, and I suspect it might be some incorrect security group configuration.</p>
<pre><code>annotations.authorization.k8s.io/decision
forbid
annotations.authorization.k8s.io/reason
unknown node for user "system:node:"
</code></pre>
|
<p>Make sure to create the proper node IAM role: <a href="https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/create-node-role.html</a></p>
<p>Then follow the usual steps to troubleshoot node join failure. It always helps me to find the issue: <a href="https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting.html#worker-node-fail" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/troubleshooting.html#worker-node-fail</a></p>
|
<p>Obviously, I'm doing something in the wrong way, but I can't understand where is the problem. I'm a new to Kubernetes.</p>
<p>There is a Node.js app; I'm able to wrap it in Docker and deploy it to Google Compute Engine (it works with a Git trigger and locally). The most important thing here - there are env variables, some of them secret, encrypted with a key. Google also uses the key to decrypt the values and give them to the application during the build process (everything is done based on the Google docs). Now I'm trying to change the <code>cloudbuild.yaml</code> file to get a Kubernetes config.</p>
<p><strong>cloudbuild.yaml</strong> (part of settings may be redundant after switching from Docker to Kubernetes). Without marked section below in <code>cloudbuild.yaml</code> I'm getting the following error:</p>
<blockquote>
<p>Error merging substitutions and validating build: Error validating
build: key "_DB_HOST" in the substitution data is not matched in the
template;key "_STATIC_SECRET" in the substitution data is not matched
in the template;key "_TYPEORM_DATABASE" in the substitution data is
not matched in the template;key "_TYPEORM_PASSWORD" in the
substitution data is not matched in the template;key
"_TYPEORM_USERNAME" in the substitution data is not matched in the
template</p>
</blockquote>
<p>which is correct because Google considers unused substitutions as errors. But if I leave marked section I'm getting this error:</p>
<blockquote>
<p>Error merging substitutions and validating build: Error validating
build: invalid .secrets field: secret 0 defines no secretEnvs</p>
</blockquote>
<p>which is totally unclear for me.</p>
<p>cloudbuild file:</p>
<pre><code>steps:
- name: 'gcr.io/cloud-builders/docker'
entrypoint: 'bash'
args: [
'-c',
'docker pull gcr.io/$PROJECT_ID/myproject:latest || exit 0'
]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'-t',
'gcr.io/$PROJECT_ID/myproject:$BRANCH_NAME-$COMMIT_SHA',
'-t',
'gcr.io/$PROJECT_ID/myproject:latest',
# <<<<<------- START OF DESCRIBED SECTION
'DB_HOST=${_DB_HOST}',
'TYPEORM_DATABASE=${_TYPEORM_DATABASE}',
'TYPEORM_PASSWORD=${_TYPEORM_PASSWORD}',
'TYPEORM_USERNAME=${_TYPEORM_USERNAME}',
'STATIC_SECRET=${_STATIC_SECRET}',
# <<<<<------- END OF DESCRIBED SECTION
'.'
]
- name: 'gcr.io/cloud-builders/kubectl'
args: [ 'apply', '-f', '/' ]
env:
- 'CLOUDSDK_COMPUTE_ZONE=<region>'
- 'CLOUDSDK_CONTAINER_CLUSTER=myproject'
- name: 'gcr.io/cloud-builders/kubectl'
args: [
'set',
'image',
'deployment',
'myproject',
'myproject=gcr.io/$PROJECT_ID/myproject:$BRANCH_NAME-$COMMIT_SHA'
]
env:
- 'CLOUDSDK_COMPUTE_ZONE=<region>'
- 'CLOUDSDK_CONTAINER_CLUSTER=myproject'
- 'DB_PORT=5432'
- 'DB_SCHEMA=public'
- 'TYPEORM_CONNECTION=postgres'
- 'FE=myproject'
- 'V=1'
- 'CLEAR_DB=true'
- 'BUCKET_NAME=myproject'
- 'BUCKET_TYPE=google'
- 'KMS_KEY_NAME=storagekey'
secretEnv:
- DB_HOST,
- TYPEORM_DATABASE,
- TYPEORM_PASSWORD,
- TYPEORM_USERNAME,
- STATIC_SECRET
timeout: 1600s
substitutions:
_DB_HOST: $DB_HOST
_TYPEORM_DATABASE: $TYPEORM_DATABASE
_TYPEORM_PASSWORD: $TYPEORM_PASSWORD
_TYPEORM_USERNAME: $TYPEORM_USERNAME
_STATIC_SECRET: $STATIC_SECRET
secrets:
- kmsKeyName: projects/myproject/locations/global/keyRings/storage/cryptoKeys/storagekey
- secretEnv:
DB_HOST: <encrypted base64 here>
TYPEORM_DATABASE: <encrypted base64 here>
TYPEORM_PASSWORD: <encrypted base64 here>
TYPEORM_USERNAME: <encrypted base64 here>
STATIC_SECRET: <encrypted base64 here>
images:
- 'gcr.io/$PROJECT_ID/myproject:$BRANCH_NAME-$COMMIT_SHA'
- 'gcr.io/$PROJECT_ID/myproject:latest'
</code></pre>
<p><strong>secret.yaml</strong> file (registered in kubectl as it should be):</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: myproject
type: Opaque
data:
DB_HOST: <encrypted base64 here>
TYPEORM_DATABASE: <encrypted base64 here>
TYPEORM_PASSWORD: <encrypted base64 here>
TYPEORM_USERNAME: <encrypted base64 here>
STATIC_SECRET: <encrypted base64 here>
</code></pre>
<p><strong>pod.yaml</strong> file</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: myproject
spec:
containers:
- name: myproject
image: gcr.io/myproject/myproject:latest
# project ID is valid here, don't bother on mock values
env:
- name: DB_HOST
valueFrom:
secretKeyRef:
name: myproject
key: DB_HOST
- name: TYPEORM_DATABASE
valueFrom:
secretKeyRef:
name: myproject
key: TYPEORM_DATABASE
- name: TYPEORM_PASSWORD
valueFrom:
secretKeyRef:
name: myproject
key: TYPEORM_PASSWORD
- name: TYPEORM_USERNAME
valueFrom:
secretKeyRef:
name: myproject
key: TYPEORM_USERNAME
- name: STATIC_SECRET
valueFrom:
secretKeyRef:
name: myproject
key: STATIC_SECRET
restartPolicy: Never
</code></pre>
|
<p>I think you are mixing too many things: your legacy build and your new one. If your secrets are already set in your cluster, you don't need them at build time.</p>
<p>Try this, with only the required steps for deploying (no substitutions, no secrets, no KMS):</p>
<pre><code>steps:
- name: 'gcr.io/cloud-builders/docker'
entrypoint: 'bash'
args: [
'-c',
'docker pull gcr.io/$PROJECT_ID/myproject:latest || exit 0'
]
- name: 'gcr.io/cloud-builders/docker'
args: [
'build',
'-t',
'gcr.io/$PROJECT_ID/myproject:$BRANCH_NAME-$COMMIT_SHA',
'-t',
'gcr.io/$PROJECT_ID/myproject:latest',
'.'
]
- name: 'gcr.io/cloud-builders/kubectl'
args: [ 'apply', '-f', '/' ]
env:
- 'CLOUDSDK_COMPUTE_ZONE=<region>'
- 'CLOUDSDK_CONTAINER_CLUSTER=myproject'
- name: 'gcr.io/cloud-builders/kubectl'
args: [
'set',
'image',
'deployment',
'myproject',
'myproject=gcr.io/$PROJECT_ID/myproject:$BRANCH_NAME-$COMMIT_SHA'
]
env:
- 'CLOUDSDK_COMPUTE_ZONE=<region>'
- 'CLOUDSDK_CONTAINER_CLUSTER=myproject'
- 'DB_PORT=5432'
- 'DB_SCHEMA=public'
- 'TYPEORM_CONNECTION=postgres'
- 'FE=myproject'
- 'V=1'
- 'CLEAR_DB=true'
- 'BUCKET_NAME=myproject'
- 'BUCKET_TYPE=google'
- 'KMS_KEY_NAME=storagekey'
timeout: 1600s
images:
- 'gcr.io/$PROJECT_ID/myproject:$BRANCH_NAME-$COMMIT_SHA'
- 'gcr.io/$PROJECT_ID/myproject:latest'
</code></pre>
|
<p>I am trying to deploy my infra on GCE and I host UI using GCP Storage Buckets and Backend APIs using a GKE cluster. I already deployed my front-end apps and added a load balancer to route requests to the UI bucket. I would like to use the same load balancer for the API traffic as well. I am wondering if it is possible to use one single load balancer for both workloads. When I create an Ingress resource on a GKE cluster, a new HTTP(S) load balancer gets automatically created as explained in this <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/load-balance-ingress" rel="nofollow noreferrer">tutorial</a>. Is it possible to create an Ingress rule which only adds a HTTP rule to an existing load balancer but does not create a new load balancer?</p>
|
<p>You can expose your service as a <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-intro/" rel="nofollow noreferrer"><code>NodePort</code></a> (change the type <code>LoadBalancer</code> to <code>NodePort</code>).</p>
<p>Then, you can create a new backend in your loadbalancer, select your Instance Group (your node pool), and set the correct port.</p>
<p>It should work.</p>
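<p>A minimal sketch of the NodePort Service (names and ports are assumptions); the <code>nodePort</code> is what the load balancer backend would target on the instance group:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  type: NodePort
  selector:
    app: api
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 30080
</code></pre>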
|
<p>I have a kubernetes service which I put behind a load balancer. The load balancer is on a regional static IP. The reason I can't use a global IP is because when I assign it to my service, it refuses to accept it. Others have faced the same <a href="https://serverfault.com/questions/796881/error-creating-gce-load-balancer-requested-address-ip-is-neither-static-nor-ass">problem</a>.</p>
<p>I'm trying to assign a SSL certificate to the TCP load balancer(regional IP) created but in the Frontend configuration, I don't see an option.</p>
<p>If I use a global IP, I can see the option to create/assign a certificate but my service refuses the IP as shown in the link above.
How can I assign SSL certificates to a regional IP used as a load balancer for a Kubernetes service? Or, if you know a way for my service to accept a load balancer on a global IP for a Kubernetes service, please let me know.</p>
<p>Note: I have disabled the default gce ingress controller and I'm using my own ingress controller. So it does not create an external ip automatically.</p>
|
<p>If you use a regional TCP balancer then it is simply impossible to assign a certificate to the load balancer, because it operates at layer 4 (TCP) while SSL is at layer 7. That's why you don't see an option to assign a certificate.</p>
<p>You need to assign SSL certificates at the ingress controller level, like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: foo
namespace: default
spec:
tls:
- hosts:
- foo.bar.com
secretName: foo-secret
rules:
- host: foo.bar.com
http:
paths:
- backend:
serviceName: foo
servicePort: 80
path: /
</code></pre>
|
<p>How do I delete the failed jobs in a Kubernetes cluster using a cron job in GKE? When I tried to delete the failed jobs using the following YAML, it deleted all the jobs (including running ones).</p>
<pre><code>
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: XXX
namespace: XXX
spec:
schedule: "*/30 * * * *"
failedJobsHistoryLimit: 1
successfulJobsHistoryLimit: 1
jobTemplate:
spec:
template:
spec:
serviceAccountName: XXX
containers:
- name: kubectl-runner
image: bitnami/kubectl:latest
command: ["sh", "-c", "kubectl delete jobs $(kubectl get jobs | awk '$2 ~ 1/1' | awk '{print $1}')"]
restartPolicy: OnFailure
</code></pre>
|
<p>This one visually looks better for me:</p>
<pre><code>kubectl delete job --field-selector=status.phase==Failed
</code></pre>
|
<p>There is this Kubernetes cluster with n nodes, where some of the nodes are fitted with multiple NVIDIA 1080Ti GPU cards.</p>
<p>I have two kinds of pods:
1. GPU enabled - these need to be scheduled on GPU-fitted nodes, where a pod will only use one of the GPU cards present on that node.
2. CPU only - these can be scheduled anywhere, preferably on CPU-only nodes.</p>
<p>Scheduling problem is addressed clearly <a href="https://stackoverflow.com/questions/53859237/kubernetes-scheduling-for-expensive-resources">in this</a> answer.</p>
<p>Issue:
When scheduling a GPU-enabled pod on a GPU-fitted node, I want to be able to decide which GPU card among those multiple GPU cards my pod is going to use. Further, I was thinking of a load balancer sitting transparently between the GPU hardware and the pods that would decide the mapping.</p>
<p>Any help around this architecture would be deeply appreciated. Thank you!</p>
|
<p>You have to use the <a href="https://github.com/NVIDIA/k8s-device-plugin" rel="nofollow noreferrer">official NVIDIA GPU device plugin</a> rather than the one suggested by GCE. There is a possibility to schedule GPUs by attributes.</p>
<p>Pods can then be steered to nodes based on the GPU attributes that are advertised on the node. For example:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
  - name: cuda-container
    image: nvidia/cuda:9.0-base
    command: ["sleep"]
    args: ["100000"]
    resources:
      limits:
        nvidia.com/gpu: 1
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: "nvidia.com/gpu-memory"
            operator: Gt
            values: ["8000"] # change value to appropriate mem for GPU
</code></pre>
<p>Check Kubernetes on NVIDIA GPUs <a href="https://docs.nvidia.com/datacenter/kubernetes/kubernetes-install-guide/index.html#abstract" rel="nofollow noreferrer">Installation Guide</a></p>
<p>Hope this will help</p>
|
<p>I have developed an Openshift template which basically creates two objects (A cluster & a container operator).</p>
<p>I understand that templates run <code>oc create</code> under the hood. So, in case any of these two objects already exists then trying to create the objects through template would through an error. Is there any way to override this behaviour? I want my template to re-configure the object even if it exists. </p>
|
<p>You can use "oc process" which renders template into set of manifests:</p>
<pre><code>oc process foo PARAM1=VALUE1 PARAM2=VALUE2 | oc apply -f -
</code></pre>
<p>or</p>
<pre><code>oc process -f template.json PARAM1=VALUE1 PARAM2=VALUE2 | oc apply -f -
</code></pre>
|
<p>I don't find autoscaling/v2beta2 or v2beta1 when I run the command <code>kubectl api-versions</code>, but I need it for memory autoscaling. What should I do?</p>
<p>To enable autoscaling/v2beta2</p>
|
<p>Most likely you're using latest Minikube with Kubernetes 1.26 where <code>autoscaling/v2beta2</code> API is no longer served:</p>
<blockquote>
<p>The autoscaling/v2beta2 API version of HorizontalPodAutoscaler is no
longer served as of v1.26.</p>
</blockquote>
<p>Read more: <a href="https://kubernetes.io/docs/reference/using-api/deprecation-guide/#horizontalpodautoscaler-v126" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/using-api/deprecation-guide/#horizontalpodautoscaler-v126</a></p>
<p>So the solution might be either changing the API version to <code>autoscaling/v2</code> in your manifests or using an older version of Minikube/Kubernetes.</p>
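<p>A minimal sketch using the served <code>autoscaling/v2</code> API with a memory metric (the deployment name, replica bounds and the 80% threshold are assumptions):</p>
<pre><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
</code></pre>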
|
<p>I have defined an object with a few attributes in my <code>values.yaml</code> file:</p>
<pre><code>serverOptions:
defaultUrl:
p1: abc
p2: def
cpu_request:
p1: abc
p2: def
mem_request:
p1: abc
p2: def
</code></pre>
<p>I am saving these data to a <code>server_options</code> json file in <code>configmap.yaml</code> using this code:</p>
<pre><code>data:
server_options.json: |
{{ toJson .Values.serverOptions }}
</code></pre>
<p>It works but the initial "list" of attributes gets alphabetically ordered. This is the file's content</p>
<blockquote>
<p>{"cpu_request":{"p1":"abc","p2":"def"},"defaultUrl":{"p1":"abc","p2":"def"},"mem_request":{"p1":"abc","p2":"def"}}</p>
</blockquote>
<p>Is there a way to keep the original ordering?</p>
|
<p>JSON objects aren't ordered, so no, that's not possible. The keys may be alphabetically ordered when printed, but that's only for readability.</p>
|
<p>In <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#add-imagepullsecrets-to-a-service-account" rel="nofollow noreferrer">this</a> Kubernetes documentation, why do you need the extra step of running <code>kubectl replace serviceaccount</code>?</p>
<p>I can see that the name of the <code>imagePullSecrets</code> is wrong alright, but I would expect <code>kubectl patch serviceaccount</code> to do this - well, it does not, so there must be a reason?</p>
|
<p>It's for the sake of convenience. Imagine you have several typical deployments that use the same serviceaccount and a number of images from a Docker registry that requires authentication. By incorporating imagePullSecrets inside the serviceaccount you can now specify only serviceAccountName in your deployments - imagePullSecrets will be added automatically. </p>
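<p>A minimal sketch of what that looks like (names are placeholders, the registry secret is assumed to exist already):</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-sa                   # placeholder name
imagePullSecrets:
- name: my-registry-secret       # existing docker-registry secret
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app
  template:
    metadata:
      labels:
        app: app
    spec:
      serviceAccountName: app-sa # no imagePullSecrets needed here anymore
      containers:
      - name: app
        image: registry.example.com/app:latest
</code></pre>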
<p>I would not say this is a very cool feature, but in some cases it can be useful.</p>
|
<p>We are trying to replace our existing PSPs in kubernetes with OPA policies using Gatekeeper. I'm using the default templates provided by Gatekeeper <a href="https://github.com/open-policy-agent/gatekeeper-library/tree/master/library/pod-security-policy" rel="nofollow noreferrer">https://github.com/open-policy-agent/gatekeeper-library/tree/master/library/pod-security-policy</a> and defined corresponding constraints.</p>
<p>However, I can't figure out how I can apply a policy to a specific <code>ServiceAccount</code>.
For example, how do I apply the <code>allow-privilege-escalation</code> policy only to a ServiceAccount named awsnode?</p>
<p>In PSPs I create a Role/ClusterRole for required <code>podsecuritypolicies</code> and create a RoleBinding to allow awsnode ServiceAccount to use required PSP. I'm struggling to understand how to achieve the same using Gatekeeper OPA policies?</p>
<p>Thank you.</p>
|
<p>I think a possible solution to applying an OPA Gatekeeper policy (a ConstraintTemplate) to a specific ServiceAccount, is to make the OPA/Rego policy code reflect that filter / selection logic. Since you said you're using pre-existing policies from the gatekeeper-library, maybe changing the policy code isn't an option for you. But if changing it <em>is</em> an option, I think your OPA/Rego policy can take into account the pod's serviceAccount field. Keep in mind with OPA Gatekeeper, the input to the Rego policy code is the entire admission request, including the spec of the pod (assuming it's pod creations that you're trying to check).</p>
<p>So part of the input to the Rego policy code might be like</p>
<pre><code> "spec": {
"volumes": [... ],
"containers": [
{
"name": "apache",
"image": "docker.io/apache:latest",
"ports": [... ],
"env": [... ],
"resources": {},
"volumeMounts": [... ],
"imagePullPolicy": "IfNotPresent",
"securityContext": {... }
}
],
"restartPolicy": "Always",
"terminationGracePeriodSeconds": 30,
"dnsPolicy": "ClusterFirst",
"serviceAccountName": "apache-service-account",
"serviceAccount": "apache-service-account",
</code></pre>
<p>So gatekeeper-library's <a href="https://github.com/open-policy-agent/gatekeeper-library/blob/master/library/pod-security-policy/allow-privilege-escalation/template.yaml#L57" rel="nofollow noreferrer">allow-privilege-escalation</a> references <code>input.review.object.spec.containers</code> and finds an array of containers like "apache". Similarly, you could modify the policy code to reference <code>input.review.object.spec.serviceAccount</code> and find "apache-service-account". From there, it's a matter of using that information to make sure the rule "violation" only matches if the service account is one you want to apply to.</p>
<p>Beyond that, it's possible to then take the expected service account name and make it a ConstraintTemplate <a href="https://open-policy-agent.github.io/gatekeeper/website/docs/howto#the-parameters-field" rel="nofollow noreferrer">parameter</a>, to make your new policy more flexible/useable.</p>
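<p>Purely as an illustration, if you extended the template's Rego and openAPIV3Schema to accept an exemption list, the Constraint could then look roughly like this; the <code>exemptServiceAccounts</code> parameter is hypothetical and only works once you add it to the template yourself:</p>
<pre><code>apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPAllowPrivilegeEscalationContainer
metadata:
  name: psp-allow-privilege-escalation-container
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
  parameters:
    exemptServiceAccounts:       # hypothetical parameter, not part of the stock library template
    - awsnode
</code></pre>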
<p>Hope this helps!</p>
|
<p>I'm currently creating a Kubernetes deployment. In this deployment I have the replicas value set to X, and I want to create X volumes that are not empty when the corresponding pod is restarted.
I'm not using any cloud provider infrastructure, so please avoid commands that rely on cloud services.</p>
<p>I've been searching for an answer in the Kubernetes docs, and my first try was to create one huge persistent volume and one persistent volume claim per pod bound to that PV, but it doesn't seem to work...</p>
<p>My expectation is to have X volumes that are not shared between pods and that do not die when a pod is killed because of a liveness probe.
I'm open to any possibility that can do the trick!</p>
|
<p>Deployment replicas all use the same volume. There is no possibility currently to create independent volumes per replica.</p>
<p>StatefulSets can define <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#components" rel="noreferrer">volumeClaimTemplates</a> which means one or more independent volumes per replica. For that to work StorageClass must be capable of dynamic volume provisioning.</p>
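<p>A minimal sketch of that approach (names, image and storage class are placeholders); each replica gets its own PVC (data-my-app-0, data-my-app-1, ...):</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app
spec:
  serviceName: my-app
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx             # placeholder image
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: standard # must support dynamic provisioning, or pre-create matching PVs
      resources:
        requests:
          storage: 1Gi
</code></pre>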
|
<p>I have running kubernetes cluster in europe region and there are 4 nodes running in same region within cluster GCP, Now my requirement is i want 2 node in Asia region and other 2 node keep in europe, Is it possible to run node in multi region within cluster ? Or Can we setup node pool region wise to the cluster?</p>
<p>I am not talking about multi region cluster and cluster federation.</p>
|
<p>If you use GKE - then no, it is not possible to use nodes from different regions in single cluster. But it is possible to use several clusters and one Istio control plane. Read more here: <a href="https://istio.io/docs/concepts/multicluster-deployments" rel="nofollow noreferrer">https://istio.io/docs/concepts/multicluster-deployments</a></p>
<p>If you are using vanilla Kubernetes on GCP Compute instances - then yes, it is possible to create multi-region cluster in single VPC.</p>
|
<p>I deployed a custom scheduler after following the instructions step by step as described in the Kubernetes documentation.</p>
<p>Here's a link: <a href="https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/</a></p>
<p>Pods that I specify should be scheduled using the scheduler that I deployed ("my-scheduler") stay in Pending.</p>
<pre><code>Kubectl version : -Client: v1.14.1
-Server: v1.14.0
kubeadm version : v1.14.1
alisd@kubeMaster:~$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-node-944jv 2/2 Running 4 45h
coredns-fb8b8dccf-hzzwf 1/1 Running 2 45h
coredns-fb8b8dccf-zb228 1/1 Running 2 45h
etcd-kubemaster 1/1 Running 3 45h
kube-apiserver-kubemaster 1/1 Running 3 45h
kube-controller-manager-kubemaster 1/1 Running 3 45h
kube-proxy-l6wrc 1/1 Running 3 45h
kube-scheduler-kubemaster 1/1 Running 3 45h
my-scheduler-66cf896bfb-8j8sr 1/1 Running 2 45h
alisd@kubeMaster:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
annotation-second-scheduler 0/1 Pending 0 4s
alisd@kubeMaster:~$ kubectl describe pod annotation-second-scheduler
Name: annotation-second-scheduler
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: <none>
Labels: name=multischeduler-example
Annotations: <none>
Status: Pending
IP:
Containers:
pod-with-second-annotation-container:
Image: k8s.gcr.io/pause:2.0
Port: <none>
Host Port: <none>
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-jclk7 (ro)
Volumes:
default-token-jclk7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-jclk7
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
alisd@kubeMaster:~$ kubectl logs -f my-scheduler-66cf896bfb-8j8sr -n kube-system
E0426 14:44:01.742799 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0426 14:44:02.743952 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:serviceaccount:kube-system:my-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
</code></pre>
<p>.....</p>
<pre><code>alisd@kubeMaster:~$ kubectl get clusterrolebinding
NAME AGE
calico-node 46h
cluster-admin 46h
kubeadm:kubelet-bootstrap 46h
kubeadm:node-autoapprove-bootstrap 46h
kubeadm:node-autoapprove-certificate-rotation 46h
kubeadm:node-proxier 46h
my-scheduler-as-kube-scheduler 46h
</code></pre>
<p>......</p>
<pre><code>alisd@kubeMaster:~$ kubectl describe clusterrolebinding my-scheduler-as-kube-scheduler
Name: my-scheduler-as-kube-scheduler
Labels: <none>
Annotations: <none>
Role:
Kind: ClusterRole
Name: system:kube-scheduler
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount my-scheduler kube-system
</code></pre>
<p>........</p>
<pre><code>alisd@kubeMaster:~$ kubectl describe serviceaccount my-scheduler -n kube-systemName: my-scheduler
Namespace: kube-system
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: my-scheduler-token-68pvk
Tokens: my-scheduler-token-68pvk
Events: <none>
</code></pre>
<p>.......</p>
|
<p>I've found a solution</p>
<p>Add these lines:</p>
<pre><code>- apiGroups:
- storage.k8s.io
resources:
- storageclasses
verbs:
- watch
- list
- get
</code></pre>
<p>to the end of the rules shown by this command (it opens the ClusterRole in an editor for you to edit):</p>
<pre><code>kubectl edit clusterrole system:kube-scheduler
</code></pre>
<p>The pod using the scheduler that I deployed is now Running </p>
<pre><code>alisd@kubeMaster:~$ kubectl get pods
NAME READY STATUS RESTARTS AGE
annotation-second-scheduler 1/1 Running 0 9m33s
</code></pre>
<p>......</p>
<pre><code>kubectl describe pod annotation-second-scheduler
</code></pre>
<p>...... </p>
<pre><code> Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 12m my-scheduler Successfully assigned default/annotation-second-scheduler to kubemaster
Normal Pulled 12m kubelet, kubemaster Container image "k8s.gcr.io/pause:2.0" already present on machine
Normal Created 12m kubelet, kubemaster Created container pod-with-second-annotation-container
Normal Started 12m kubelet, kubemaster Started container pod-with-second-annotation-container
</code></pre>
|
<p>Kubernetes version - 1.8</p>
<ol>
<li>Created statefulset for postgres database with pvc</li>
<li>Added some tables to database</li>
<li>Restarted pod by scaling statefulset to 0 and then again 1</li>
<li>Created tables in step # 2 are no longer available</li>
</ol>
<p>Tried another scenario with the same steps on a docker-for-desktop cluster, k8s version 1.10</p>
<ol>
<li>Created statefulset for postgres database with pvc</li>
<li>Added some tables to database</li>
<li>Restarted docker for desktop</li>
<li>Created tables in step # 2 are no longer available</li>
</ol>
<p>k8s manifest</p>
<pre><code> apiVersion: v1
kind: ConfigMap
metadata:
name: postgres-config
labels:
app: postgres
data:
POSTGRES_DB: kong
POSTGRES_USER: kong
POSTGRES_PASSWORD: kong
PGDATA: /var/lib/postgresql/data/pgdata
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: postgres-pv
labels:
app: postgres
spec:
storageClassName: manual
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/postgresql/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-pvc
labels:
app: postgres
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
spec:
ports:
- name: pgql
port: 5432
targetPort: 5432
protocol: TCP
selector:
app: postgres
---
apiVersion: apps/v1beta2 # for k8s versions before 1.9.0 use apps/v1beta2 and before 1.8.0 use extensions/v1beta1
kind: StatefulSet
metadata:
name: postgres
spec:
serviceName: "postgres"
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:9.6
imagePullPolicy: "IfNotPresent"
ports:
- containerPort: 5432
envFrom:
- configMapRef:
name: postgres-config
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgredb
volumes:
- name: postgredb
persistentVolumeClaim:
claimName: postgres-pvc
---
</code></pre>
|
<p>If you have multiple nodes, the issue you see is totally expected: a hostPath volume lives only on the node the pod happens to land on. So if you want to keep using hostPath as a PersistentVolume in a multi-node cluster, you must back it with a shared filesystem like GlusterFS or Ceph and place your /mnt/postgresql/data folder onto that shared filesystem (mounted at the same path on every node).</p>
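<p>As a rough sketch of what that can look like with an NFS export instead (same idea as GlusterFS/Ceph; server address and path are placeholders), the volume is then reachable from every node and the data survives rescheduling:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
  labels:
    app: postgres
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: 10.0.0.50           # placeholder: your NFS server
    path: /exports/postgresql   # placeholder: exported directory
</code></pre>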
|
<p>I am experimenting with Kubernetes on DigitalOcean.
As a test case, I am trying to deploy a Jenkins instance to my cluster with a persistent volume.</p>
<p>My deployment yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: jenkins-deployment
labels:
app: jenkins
spec:
replicas: 1
selector:
matchLabels:
app: jenkins
template:
metadata:
labels:
app: jenkins
spec:
containers:
- name: jenkins
image: jenkins/jenkins:lts
ports:
- containerPort: 8080
volumeMounts:
- name: jenkins-home
mountPath: /var/jenkins_home
volumes:
- name: jenkins-home
persistentVolumeClaim:
claimName: jenkins-pvc
</code></pre>
<p>My PV Claim</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: jenkins-pvc
spec:
accessModes:
- ReadWriteOnce
storageClassName: do-block-storage
resources:
requests:
storage: 30Gi
</code></pre>
<p>For some reason the pod keeps ending up in a <code>CrashLoopBackOff</code> state.</p>
<p><code>kubectl describe pod <podname></code> gives me</p>
<pre><code>Name: jenkins-deployment-bb5857d76-j2f2w
Namespace: default
Priority: 0
Node: cc-pool-bg6c/10.138.123.186
Start Time: Sun, 15 Sep 2019 22:18:56 +0200
Labels: app=jenkins
pod-template-hash=bb5857d76
Annotations: <none>
Status: Running
IP: 10.244.0.166
Controlled By: ReplicaSet/jenkins-deployment-bb5857d76
Containers:
jenkins:
Container ID: docker://4eaadebb917001d8d3eaaa3b043e1b58b6269f929b9e95c4b08d88b0098d29d6
Image: jenkins/jenkins:lts
Image ID: docker-pullable://jenkins/jenkins@sha256:7cfe34701992434cc08bfd40e80e04ab406522214cf9bbefa57a5432a123b340
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Sun, 15 Sep 2019 22:35:14 +0200
Finished: Sun, 15 Sep 2019 22:35:14 +0200
Ready: False
Restart Count: 8
Environment: <none>
Mounts:
/var/jenkins_home from jenkins-home (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wd6p7 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
jenkins-home:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: jenkins-pvc
ReadOnly: false
default-token-wd6p7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-wd6p7
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 19m default-scheduler Successfully assigned default/jenkins-deployment-bb5857d76-j2f2w to cc-pool-bg6c
Normal SuccessfulAttachVolume 19m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-cb772fdb-492b-4ef5-a63e-4e483b8798fd"
Normal Pulled 17m (x5 over 19m) kubelet, cc-pool-bg6c Container image "jenkins/jenkins:lts" already present on machine
Normal Created 17m (x5 over 19m) kubelet, cc-pool-bg6c Created container jenkins
Normal Started 17m (x5 over 19m) kubelet, cc-pool-bg6c Started container jenkins
Warning BackOff 4m8s (x72 over 19m) kubelet, cc-pool-bg6c Back-off restarting failed container
</code></pre>
<p>Could anyone help me point out what is wrong here, or where to look for that matter?</p>
<p>Many thanks in advance.</p>
|
<p>Looks like you don't have permission to write to the volume.
Try running the container as root using security contexts:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: jenkins-deployment
labels:
app: jenkins
spec:
replicas: 1
selector:
matchLabels:
app: jenkins
template:
metadata:
labels:
app: jenkins
spec:
securityContext:
fsGroup: 1000
runAsUser: 0
containers:
- name: jenkins
image: jenkins/jenkins:lts
ports:
- containerPort: 8080
volumeMounts:
- name: jenkins-home
mountPath: /var/jenkins_home
volumes:
- name: jenkins-home
persistentVolumeClaim:
claimName: jenkins-pvc
</code></pre>
|
<p>I have set up ingress-nginx using Helm via <code>helm install --name x2f1 stable/nginx-ingress --namespace ingress-nginx</code>, together with this service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: x2f1-ingress-nginx-svc
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
type: NodePort
ports:
- name: http
port: 80
targetPort: 80
protocol: TCP
nodePort: 30080
- name: https
port: 443
targetPort: 443
protocol: TCP
nodePort: 30443
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
</code></pre>
<p>running svc and po's:</p>
<pre><code>[ottuser@ottorc01 ~]$ kubectl get svc,po -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/x2f1-ingress-nginx-svc NodePort 192.168.34.116 <none> 80:30080/TCP,443:30443/TCP 2d18h
service/x2f1-nginx-ingress-controller LoadBalancer 192.168.188.188 <pending> 80:32427/TCP,443:31726/TCP 2d18h
service/x2f1-nginx-ingress-default-backend ClusterIP 192.168.156.175 <none> 80/TCP 2d18h
NAME READY STATUS RESTARTS AGE
pod/x2f1-nginx-ingress-controller-cd5fbd447-c4fqm 1/1 Running 0 2d18h
pod/x2f1-nginx-ingress-default-backend-67f8db4966-nlgdd 1/1 Running 0 2d18h
</code></pre>
<p>After that, my nodePort 30080 is only listed under tcp6, and because of this I'm getting "connection refused" when I try to access it from another VM.</p>
<pre><code>[ottuser@ottorc01 ~]$ netstat -tln | grep '30080'
tcp6 3 0 :::30080 :::* LISTEN
</code></pre>
<pre><code>[ottuser@ottwrk02 ~]$ netstat -tln
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:10249 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:6443 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:2379 0.0.0.0:* LISTEN
tcp 0 0 10.18.0.10:2379 0.0.0.0:* LISTEN
tcp 0 0 10.18.0.10:2380 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:8081 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:33372 0.0.0.0:* LISTEN
tcp6 0 0 :::10250 :::* LISTEN
tcp6 0 0 :::30443 :::* LISTEN
tcp6 0 0 :::32427 :::* LISTEN
tcp6 0 0 :::31726 :::* LISTEN
tcp6 0 0 :::10256 :::* LISTEN
tcp6 0 0 :::22 :::* LISTEN
tcp6 0 0 :::30462 :::* LISTEN
tcp6 0 0 :::30080 :::* LISTEN
</code></pre>
<p>Logs from <code>pod/x2f1-nginx-ingress-controller-cd5fbd447-c4fqm</code>:</p>
<pre><code>[ottuser@ottorc01 ~]$ kubectl logs pod/x2f1-nginx-ingress-controller-cd5fbd447-c4fqm -n ingress-nginx --tail 50
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.24.1
Build: git-ce418168f
Repository: https://github.com/kubernetes/ingress-nginx
-------------------------------------------------------------------------------
I0621 11:48:26.952213 6 flags.go:185] Watching for Ingress class: nginx
W0621 11:48:26.952772 6 flags.go:214] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
nginx version: nginx/1.15.10
W0621 11:48:26.961458 6 client_config.go:549] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0621 11:48:26.961913 6 main.go:205] Creating API client for https://192.168.0.1:443
I0621 11:48:26.980673 6 main.go:249] Running in Kubernetes cluster version v1.14 (v1.14.1) - git (clean) commit b7394102d6ef778017f2ca4046abbaa23b88c290 - platform linux/amd64
I0621 11:48:26.986341 6 main.go:102] Validated ingress-nginx/x2f1-nginx-ingress-default-backend as the default backend.
I0621 11:48:27.339581 6 main.go:124] Created fake certificate with PemFileName: /etc/ingress-controller/ssl/default-fake-certificate.pem
I0621 11:48:27.384666 6 nginx.go:265] Starting NGINX Ingress controller
I0621 11:48:27.403396 6 event.go:209] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"x2f1-nginx-ingress-controller", UID:"89b4caf0-941a-11e9-a0fb-005056010a71", APIVersion:"v1", ResourceVersion:"1347806", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/x2f1-nginx-ingress-controller
I0621 11:48:28.585472 6 nginx.go:311] Starting NGINX process
I0621 11:48:28.585630 6 leaderelection.go:217] attempting to acquire leader lease ingress-nginx/ingress-controller-leader-nginx...
W0621 11:48:28.586778 6 controller.go:373] Service "ingress-nginx/x2f1-nginx-ingress-default-backend" does not have any active Endpoint
I0621 11:48:28.586878 6 controller.go:170] Configuration changes detected, backend reload required.
I0621 11:48:28.592786 6 status.go:86] new leader elected: x2f1-ngin-nginx-ingress-controller-567f495994-hmcqq
I0621 11:48:28.761600 6 controller.go:188] Backend successfully reloaded.
I0621 11:48:28.761677 6 controller.go:202] Initial sync, sleeping for 1 second.
[21/Jun/2019:11:48:29 +0000]TCP200000.001
W0621 11:48:32.444623 6 controller.go:373] Service "ingress-nginx/x2f1-nginx-ingress-default-backend" does not have any active Endpoint
[21/Jun/2019:11:48:35 +0000]TCP200000.000
I0621 11:49:05.793313 6 status.go:86] new leader elected: x2f1-nginx-ingress-controller-cd5fbd447-c4fqm
I0621 11:49:05.793331 6 leaderelection.go:227] successfully acquired lease ingress-nginx/ingress-controller-leader-nginx
I0621 11:53:08.579333 6 controller.go:170] Configuration changes detected, backend reload required.
I0621 11:53:08.579639 6 event.go:209] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ott", Name:"hie-01-hie", UID:"32678e25-941b-11e9-a0fb-005056010a71", APIVersion:"extensions/v1beta1", ResourceVersion:"1348532", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ott/hie-01-hie
I0621 11:53:08.764204 6 controller.go:188] Backend successfully reloaded.
[21/Jun/2019:11:53:08 +0000]TCP200000.000
I0621 11:54:05.812798 6 status.go:295] updating Ingress ott/hie-01-hie status from [] to [{ }]
</code></pre>
<pre><code>[ottuser@ottorc01 ~]$ sudo ss -l -t -p | grep 30080
LISTEN 3 128 :::30080 :::* users:(("kube-proxy",pid=29346,fd=15))
</code></pre>
<p>Is there any way to debug this in more depth, or to make that port listen on tcp/IPv4? If something is still unclear from my side, let me know. Thanks in advance.</p>
|
<p>It's not a problem with tcp6.</p>
<p>On most modern Linux distros, including Container Linux, listening on
tcp6 also implies tcp4. </p>
<p>The issue itself is with your <code>x2f1-ingress-nginx-svc</code> service, and specifically with its selectors, which do not match any pod</p>
<pre><code>selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
</code></pre>
<p>If you will do </p>
<pre><code>kubectl get ep -n ingress-nginx
</code></pre>
<p>you will see that there's no endpoints for that service</p>
<pre><code>NAME ENDPOINTS AGE
x2f1-ingress-nginx-svc <none> 13m
</code></pre>
<p>Now the question is what do you want to expose with this service? </p>
<p>For instance, if you will be exposing <code>x2f1-nginx-ingress-controller</code> (even though helm already created an appropriate service), your yaml should look like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: x2f1-ingress-nginx-svc
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
type: NodePort
ports:
- name: http
port: 80
targetPort: 80
protocol: TCP
nodePort: 30080
- name: https
port: 443
targetPort: 443
protocol: TCP
nodePort: 30443
selector:
app: nginx-ingress
component: controller
</code></pre>
|
<p>When we launch the EKS cluster using the below manifest, it creates an ALB. We have a default ALB that we are using, let's call it EKS-ALB, and the hosted zone routes traffic to this EKS-ALB. We gave it the tags <strong>ingress.k8s.aws/resource:LoadBalancer, ingress.k8s.aws/stack:test-alb, elbv2.k8s.aws/cluster: EKS</strong>. But when we delete the manifest, it deletes the default ALB and we need to reconfigure the hosted zone again with the new ALB that gets created on the next deployment. Is there any way to stop the ingress controller from deleting the ALB and have it delete only the listeners and target group?</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-nginx-rule
namespace: test
annotations:
alb.ingress.kubernetes.io/group.name: test-alb
alb.ingress.kubernetes.io/scheme: internal
alb.ingress.kubernetes.io/target-type: instance
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
alb.ingress.kubernetes.io/ssl-redirect: '443'
alb.ingress.kubernetes.io/healthcheck-port: traffic-port
alb.ingress.kubernetes.io/healthcheck-path: /index.html
alb.ingress.kubernetes.io/success-codes: 200-399
alb.ingress.kubernetes.io/security-groups: eks-test-alb-sg
spec:
ingressClassName: alb
rules:
- host: test.eks.abc.com
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: test-svc
port:
number: 5005
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-dep
namespace: test
labels:
app: test
spec:
replicas: 1
restartPolicy:
selector:
matchLabels:
app: test
template:
metadata:
labels:
app: test
spec:
containers:
- name: test
image: Imagepath
imagePullPolicy: IfNotPresent
ports:
- containerPort: 5005
resources:
requests:
memory: "256Mi"
cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
name: test-svc
namespace: test
labels:
app: test
spec:
type: NodePort
ports:
- port: 5005
targetPort: 80
protocol: TCP
selector:
app: test
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
name: test-scaler
namespace: test
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: test-dep
minReplicas: 1
maxReplicas: 5
targetCPUUtilizationPercentage: 60
---
</code></pre>
|
<p>To keep the existing ALB from being deleted when the group.name annotation is enabled, the following conditions need to be met:</p>
<ol>
<li>ALB should be tagged with below 3 tags:</li>
</ol>
<pre><code>alb.ingress.kubernetes.io/group.name: test-alb
alb.ingress.kubernetes.io/scheme: internal
alb.ingress.kubernetes.io/target-type: instance
</code></pre>
<ol start="2">
<li>Create a dummy ingress with the same group name with the below manifest.</li>
</ol>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-nginx-rule
namespace: test
annotations:
alb.ingress.kubernetes.io/group.name: test-alb
alb.ingress.kubernetes.io/scheme: internal
alb.ingress.kubernetes.io/target-type: instance
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
alb.ingress.kubernetes.io/ssl-redirect: '443'
alb.ingress.kubernetes.io/healthcheck-port: traffic-port
alb.ingress.kubernetes.io/healthcheck-path: /index.html
alb.ingress.kubernetes.io/success-codes: 200-399
alb.ingress.kubernetes.io/security-groups: eks-test-alb-sg
spec:
ingressClassName: alb
rules:
- host: dummy.eks.abc.com
http:
paths:
- pathType: Prefix
path: "/"
backend:
service:
name: test-svc
port:
number: 5005
</code></pre>
<p>After deploying the above manifest, an ingress will be created using the same ALB, and the listener will have a rule that returns 443 if the host is dummy.eks.abc.com. It's a create-and-forget type of manifest, so after creating this ingress, even when we delete all the running deployments and services (except the dummy manifest above), the ALB will remain.</p>
|
<p>Can a pod with a web application that is running in GKE have an appspot.com subdomain, just like in GAE?</p>
<p>I have a cluster in GKE, and within this I have some services, including a web application that uses a ngnix ingress. Currently I do not want to acquire a paid domain, but I would like to expose my web application in a subdomain appspot.com, is this possible?</p>
<p>I have read that the applications found in GAE are automatically associated with a subdomain of appspot.com, but is it possible to expose applications in GKE?</p>
|
<p>Unfortunately, no. Domain appspot.com is specific for GAE and some other fully managed GCloud services but not for GKE. In GKE you have to do everything yourself - create domain, expose your app on load balancer, create record and all that stuff. But domains are so cheap, why not buy one?</p>
|
<p>I installed Istio with</p>
<pre><code>gateways.istio-egressgateway.enabled = true
</code></pre>
<p>When I try to connect to an external database I receive an error.
I do not have a domain (only an IP and port), so I defined the following rules:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: external-db
spec:
hosts:
- external-db.tcp.svc
addresses:
- 190.64.31.232/32
ports:
- number: 3306
name: tcp
protocol: TCP
location: MESH_EXTERNAL
resolution: STATIC
endpoints:
- address: 190.64.31.232
</code></pre>
<p>Then I open a shell in my system (deployed in my service mesh),
and it can't resolve the name: </p>
<pre><code>$ ping external-db.tcp.svc
ping: ceip-db.tcp.svc: Name or service not known
</code></pre>
<p>But i can connect using the ip address</p>
<pre><code>$ ping 190.64.31.232
PING 190.64.31.232 (190.64.31.232) 56(84) bytes of data.
64 bytes from 190.64.31.232: icmp_seq=1 ttl=249 time=1.35 ms
64 bytes from 190.64.31.232: icmp_seq=2 ttl=249 time=1.42 ms
</code></pre>
<p>What is happening? Do I have to connect using the domain or the IP?
Can I define an internal domain for my external IP? </p>
|
<p>You can create a headless service with a hardcoded IP endpoint: </p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: external-db
spec:
clusterIP: None
ports:
- protocol: TCP
port: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
name: external-db
subsets:
- addresses:
- ip: 190.64.31.232
ports:
- port: 3306
</code></pre>
<p>And then you may add to your ServiceEntry a host <code>external-db.default.svc.cluster.local</code></p>
|
<p>I have to install Kubernetes v1.13.7 on Ubuntu 18.04.2 LTS in an internal network environment.</p>
<p>I can use Docker and a USB device.</p>
<p>Actually, I can't download files from the internet directly.</p>
<p>For one thing, I installed the API server / controller / scheduler / etcd / CoreDNS / proxy / flannel through the docker load command.</p>
<p>But now I should install kubeadm / kubelet / kubectl, and I haven't installed these yet.</p>
<p>How can I install Kubernetes?</p>
<p>Please let me know your experience or point me to relevant websites.</p>
|
<p>Here is a step-by-step <a href="https://gist.github.com/onuryilmaz/89a29261652299d7cf768223fd61da02#download-kubernetes-rpms" rel="nofollow noreferrer">instruction</a>. </p>
<p>As for the Kubernetes part, you can download the packages from an online workstation:</p>
<pre><code>wget https://packages.cloud.google.com/yum/pool/e6aef7b2b7d9e5bd4db1e5747ebbc9f1f97bbfb8c7817ad68028565ca263a672-kubectl-1.6.0.x86_64.rpm
wget https://packages.cloud.google.com/yum/pool/af8567f1ba6f8dc1d43b60702d45c02aca88607b0e721d76897e70f6a6e53115-kubelet-1.6.0.x86_64.rpm
wget https://packages.cloud.google.com/yum/pool/e7a4403227dd24036f3b0615663a371c4e07a95be5fee53505e647fd8ae58aa6-kubernetes-cni-0.5.1.x86_64.rpm
wget https://packages.cloud.google.com/yum/pool/5116fa4b73c700823cfc76b3cedff6622f2fbd0a3d2fa09bce6d93329771e291-kubeadm-1.6.0.x86_64.rpm
</code></pre>
<p>and then just copy it over to your offline server via internal network </p>
<pre><code>scp <folder_with_rpms>/*.rpm <user>@<server>:<path>/<to>/<remote>/<folder>
</code></pre>
<p>Lastly, install packages</p>
<pre><code>yum install -y *.rpm
systemctl enable kubelet && systemctl start kubelet
</code></pre>
|
<p>I recently started to explore k8s extensions and got introduced to two concepts:</p>
<ol>
<li>CRD.</li>
<li>Service catalogs.</li>
</ol>
<p>They look pretty similar to me. The only difference, to my understanding, is that CRDs are deployed inside the same cluster to be consumed, whereas catalogs expose services that live outside the cluster, for example a database service (a client can order a MySQL cluster which will be accessible from their own cluster). </p>
<p>My query here is:</p>
<p>Is my understanding correct? If yes, is there any other scenario where I would want to create a catalog and not a CRD?</p>
|
<p>Yes, your understanding is correct. Taken from <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/service-catalog" rel="nofollow noreferrer">official documentation</a>: </p>
<blockquote>
<h3>Example use case</h3>
<p>An application developer wants to use message queuing as part of their application running in a Kubernetes cluster.
However, they do not want to deal with the overhead of setting such a
service up and administering it themselves. Fortunately, there is a
cloud provider that offers message queuing as a managed service
through its service broker.</p>
<p>A cluster operator can setup Service Catalog and use it to communicate
with the cloud provider’s service broker to provision an instance of
the message queuing service and make it available to the application
within the Kubernetes cluster. The application developer therefore
does not need to be concerned with the implementation details or
management of the message queue. The application can simply use it as
a service.</p>
</blockquote>
<p>With CRD you are responsible for provisioning resources, running backend logic and so on.</p>
<p>More info can be found in this <a href="https://www.youtube.com/watch?v=7wdUa4Ulwxg" rel="nofollow noreferrer">KubeCon 2018 presentation</a>.</p>
|
<p>I'm migrating some of my applications, which today run on EC2 with Auto Scaling, to Kubernetes.</p>
<p>Today my Auto Scaling is based on the <code>ApproximateNumberOfMessagesVisible</code> metric from SQS queues (which I configured in CloudWatch).</p>
<p>I'm trying to figure out if I can use this metric to scale pods of my application in an AWS EKS environment.</p>
|
<ol>
<li>Install <a href="https://github.com/awslabs/k8s-cloudwatch-adapter" rel="nofollow noreferrer">k8s-cloudwatch-adapter</a> </li>
<li><a href="https://github.com/awslabs/k8s-cloudwatch-adapter/blob/master/samples/sqs/README.md" rel="nofollow noreferrer">Deploy HPA</a> with custom metrics from AWS SQS (a sketch follows below).</li>
</ol>
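<p>A rough sketch of step 2, assuming the adapter exposes an external metric whose name matches the ExternalMetric resource you create from the adapter's SQS sample (deployment name, metric name and target value are placeholders):</p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: sqs-consumer
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sqs-consumer          # placeholder: your worker deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: sqs-queue-length  # placeholder: must match the adapter's ExternalMetric name
      target:
        type: AverageValue
        averageValue: "30"      # aim for ~30 visible messages per replica
</code></pre>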
|
<p>I am writing a pipeline with kubernetes in google cloud.</p>
<p>Sometimes I need to activate a few pods within a second, where each pod is a single task.</p>
<p>I plan to call kubectl run with a Kubernetes Job and wait for it to complete (polling all the running pods every second) before activating the next step in the pipeline.</p>
<p>I will also monitor the cluster size to make sure I am not exceeding the max CPU/RAM usage.</p>
<p>I can run tens of thousands of jobs at the same time.</p>
<p>I am not using standard pipelines because I need to create a dynamic number of tasks in the pipeline.</p>
<p>I am running the batch operation so I can handle the delay.</p>
<p>Is it the best approach? How long does it take to create a pod in Kubernetes?</p>
|
<p>If you want to run tens of thousands of jobs at the same time - you will definitely need to plan resource allocation. You need to estimate the number of nodes that you need. After that you may create all nodes at once, or use the GKE cluster autoscaler for automatically adding new nodes in response to resource demand. If you preallocate all nodes at once - you will probably have a high bill at the end of the month. But pods can be created very quickly. If you create only a small number of nodes initially and use the cluster autoscaler - you will face large delays, because nodes take several minutes to start. You must decide which approach to take.</p>
<p>If you use the cluster autoscaler - do not forget to specify the maximum number of nodes in the cluster.</p>
<p>Another important thing - you should put your jobs into the Guaranteed <a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/" rel="nofollow noreferrer">quality of service</a> class in Kubernetes. Otherwise, if you use Best Effort or Burstable pods - you will end up in an eviction nightmare, which is really terrible and uncontrolled.</p>
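<p>A minimal sketch of a Guaranteed-QoS Job: requests and limits are set and equal for every container (image and values are placeholders):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: pipeline-task
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: task
        image: busybox           # placeholder image
        command: ["sh", "-c", "echo working; sleep 10"]
        resources:
          requests:
            cpu: "500m"
            memory: "256Mi"
          limits:
            cpu: "500m"          # limits equal to requests gives Guaranteed QoS
            memory: "256Mi"
</code></pre>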
|
<p>Our application uses RabbitMQ with only a single node. It is run in a single Kubernetes pod.</p>
<p>We use durable/persistent queues, but any time that our cloud instance is brought down and back up, and the RabbitMQ pod is restarted, our existing durable/persistent queues are gone.</p>
<p>At first, I though that it was an issue with the volume that the queues were stored on not being persistent, but that turned out not to be the case. </p>
<p>It appears that the queue data is stored in <code>/var/lib/rabbitmq/mnesia/<user@hostname></code>. Since the pod's hostname changes each time, it creates a new set of data for the new hostname and loses access to the previously persisted queue. I have many sets of files built up in the mnesia folder, all from previous restarts.</p>
<p>How can I prevent this behavior?</p>
<p>The closest answer that I could find is in <a href="https://stackoverflow.com/questions/46892531/messages-dont-survive-pod-restarts-in-rabbitmq-autocluster-kubernetes-installat">this question</a>, but if I'm reading it correctly, this would only work if you have multiple nodes in a cluster simultaneously, sharing queue data. I'm not sure it would work with a single node. Or would it?</p>
|
<p>What helped in our case was to set <code>hostname: <static-host-value></code></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
spec:
replicas: 1
...
template:
metadata:
labels:
app: rabbitmq
spec:
...
containers:
- name: rabbitmq
image: rabbitmq:3-management
...
hostname: rmq-host
</code></pre>
|
<p>Here is a yaml file that has been created to be deployed in Kubernetes. I would like to know, since there are no resource requests and limits in the file, how does Kubernetes know the resource requests and limits to run it with? How can I fetch that information?</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: rss-site
labels:
app: web
spec:
containers:
- name: front-end
image: nginx
ports:
- containerPort: 80
- name: rss-reader
image: nickchase/rss-php-nginx:v1
ports:
- containerPort: 88
</code></pre>
|
<p>You can "kubectl describe" your pod and see what actual resources got assigned. With <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/memory-default-namespace/#create-a-limitrange-and-a-pod" rel="nofollow noreferrer">LimitRange</a> Kubernetes can assign default requests and limits to pod if not part of its spec. </p>
<p>If there are no requests/limits assigned - your pod will become of <a href="https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/#create-a-pod-that-gets-assigned-a-qos-class-of-besteffort" rel="nofollow noreferrer">Best Effort</a> quality of service and can be Evicted in case of resource pressure on node.</p>
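<p>For illustration, a LimitRange like the sketch below (values are placeholders) gives containers in its namespace default requests and limits whenever the pod spec omits them:</p>
<pre><code>apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
spec:
  limits:
  - type: Container
    defaultRequest:              # applied as the request when none is specified
      cpu: 100m
      memory: 128Mi
    default:                     # applied as the limit when none is specified
      cpu: 500m
      memory: 256Mi
</code></pre>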
|
<p>The Heketi pod was restarted on our Kubernetes cluster and now I'm struggling with how to change the glusterfs storage class resturl to the new Heketi endpoint.<br>
What are the safest options without any data loss on our PVCs?
I was able to recreate the Kubernetes cluster (v1.11.10) on our test environment and start investigating it. When I tried to edit the storage class I got:</p>
<pre><code>"StorageClass.storage.k8s.io "glusterfs" is invalid: parameters Forbidden: updates to parameters are forbidden."
</code></pre>
<p>We are using Kubernetes v1.11.10.<br>
I tried to create a new storage class with the correct heketi endpoint, but I couldn't edit the PVCs:</p>
<pre><code>PersistentVolumeClaim "test-pvc" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
</code></pre>
<p>I was only able to delete the old storage class and create a new one with the correct heketi resturl.</p>
|
<p>You may try to use "kubectl replace" like this:</p>
<pre><code>kubectl replace -f storage-class.yaml --force
</code></pre>
<p>Just make sure that you use the Heketi Service name as the REST URL to avoid such issues in the future.</p>
|
<p>I have Redis nodes:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
pod/redis-haproxy-deployment-65497cd78d-659tq 1/1 Running 0 31m
pod/redis-sentinel-node-0 3/3 Running 0 81m
pod/redis-sentinel-node-1 3/3 Running 0 80m
pod/redis-sentinel-node-2 3/3 Running 0 80m
pod/ubuntu 1/1 Running 0 85m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/redis-haproxy-balancer ClusterIP 10.43.92.106 <none> 6379/TCP 31m
service/redis-sentinel-headless ClusterIP None <none> 6379/TCP,26379/TCP 99m
service/redis-sentinel-metrics ClusterIP 10.43.72.97 <none> 9121/TCP 99m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/redis-haproxy-deployment 1/1 1 1 31m
NAME DESIRED CURRENT READY AGE
replicaset.apps/redis-haproxy-deployment-65497cd78d 1 1 1 31m
NAME READY AGE
statefulset.apps/redis-sentinel-node 3/3 99m
</code></pre>
<p>I connect to the master redis using the following command:</p>
<pre><code>redis-cli -h redis-haproxy-balancer
redis-haproxy-balancer:6379> keys *
1) "sdf"
2) "sdf12"
3) "s4df12"
4) "s4df1"
5) "fsafsdf"
6) "!s4d!1"
7) "s4d!1"
</code></pre>
<p>Here is my configuration file haproxy.cfg:</p>
<pre><code>global
daemon
maxconn 256
defaults REDIS
mode tcp
timeout connect 3s
timeout server 3s
timeout client 3s
frontend front_redis
bind 0.0.0.0:6379
use_backend redis_cluster
backend redis_cluster
mode tcp
option tcp-check
tcp-check comment PING\ phase
tcp-check send PING\r\n
tcp-check expect string +PONG
tcp-check comment role\ check
tcp-check send info\ replication\r\n
tcp-check expect string role:master
tcp-check comment QUIT\ phase
tcp-check send QUIT\r\n
tcp-check expect string +OK
server redis-0 redis-sentinel-node-0.redis-sentinel-headless:6379 maxconn 1024 check inter 1s
server redis-1 redis-sentinel-node-1.redis-sentinel-headless:6379 maxconn 1024 check inter 1s
server redis-2 redis-sentinel-node-2.redis-sentinel-headless:6379 maxconn 1024 check inter 1s
</code></pre>
<p>Here is the service I go to in order to get to the master redis - haproxy-service.yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: redis-haproxy-balancer
spec:
type: ClusterIP
selector:
app: redis-haproxy
ports:
- protocol: TCP
port: 6379
targetPort: 6379
</code></pre>
<p>here is a deployment that refers to a configuration file - redis-haproxy-deployment.yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-haproxy-deployment
labels:
app: redis-haproxy
spec:
replicas: 1
selector:
matchLabels:
app: redis-haproxy
template:
metadata:
labels:
app: redis-haproxy
spec:
containers:
- name: redis-haproxy
image: haproxy:lts-alpine
volumeMounts:
- name: redis-haproxy-config-volume
mountPath: /usr/local/etc/haproxy/haproxy.cfg
subPath: haproxy.cfg
ports:
- containerPort: 6379
volumes:
- name: redis-haproxy-config-volume
configMap:
name: redis-haproxy-config
items:
- key: haproxy.cfg
path: haproxy.cfg
</code></pre>
<p>After restarting redis I cannot connect to it with redis-haproxy-balancer...</p>
<pre><code>[NOTICE] (1) : New worker (8) forked
[NOTICE] (1) : Loading success.
[WARNING] (8) : Server redis_cluster/redis-0 is DOWN, reason: Layer7 timeout, info: " at step 6 of tcp-check (expect string 'role:master')", check duration: 1000ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] (8) : Server redis_cluster/redis-1 is DOWN, reason: Layer7 timeout, info: " at step 6 of tcp-check (expect string 'role:master')", check duration: 1005ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[WARNING] (8) : Server redis_cluster/redis-2 is DOWN, reason: Layer7 timeout, info: " at step 6 of tcp-check (expect string 'role:master')", check duration: 1001ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
[ALERT] (8) : backend 'redis_cluster' has no server available!
</code></pre>
<p>It only works by connecting directly: redis-sentinel-node-0.redis-sentinel-headless</p>
<p>What is wrong with my haproxy?</p>
|
<p>The problem lies with HAProxy <a href="https://docs.haproxy.org/2.6/configuration.html#5.3" rel="nofollow noreferrer">only resolving DNS names once</a>, at startup. It becomes a problem when you re-create pods, as the IP address of the backend service may get changed. Unfortunately, there is no way to simply tell HAProxy to try resolving them again!</p>
<p>However, you can specify your own DNS resolver section with the "hold valid" option to determine how long it will keep the resolution results, together with the "parse-resolv-conf" option. Specify this resolver in your server line.</p>
<pre><code>resolvers globaldnspolicy
parse-resolv-conf
hold valid 30s
...
listen myservice:
bind *:8080 accept-proxy
mode http
server theserver theserver.mynamespace.svc.cluster.local check resolvers globaldnspolicy
</code></pre>
<p>"parse-resolv-conf" tells HAProxy to parse /etc/resolv.conf, so that you do not need to hardcode the DNS servers on your own. I found this more elegant, since we're using Kubernetes and/or do not have the IP addresses of the DNS services.</p>
<p>"hold valid" tells HAProxy to cache the results, not indefinitely. I do not have a specific reason to justify the 30-second number, but I thought it would be good to still have some caching to avoid hammering the DNS services.</p>
<p>Since we are using services in Kubernetes: There seems to be some difference in DNS resolution behaviour, once this is done; it would appear that we have to now resolve with the FQDN, otherwise we end up with NX domain errors. As documented for HAProxy: DNS names are by default, resolved with the libc functions (likely getHostByName etc), but HAProxy now makes queries with the DNS servers it learns about. I do not consider myself an expert, but the documentation about <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer">DNS for Services and Pods</a> describes the expected behaviour and I think it explains what happened for me. So you need to enter the FQDN of your services, in order to keep things working.</p>
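<p>Applied to the configuration from the question, that roughly means adding the resolvers section and referencing it on each server line with the FQDN - a sketch, assuming the pods live in the default namespace (adjust the namespace and the 30s hold value as needed, and keep the existing tcp-check options in the backend):</p>
<pre><code>resolvers globaldnspolicy
    parse-resolv-conf
    hold valid 30s

backend redis_cluster
    mode tcp
    # existing tcp-check options stay here unchanged
    server redis-0 redis-sentinel-node-0.redis-sentinel-headless.default.svc.cluster.local:6379 maxconn 1024 check inter 1s resolvers globaldnspolicy
    server redis-1 redis-sentinel-node-1.redis-sentinel-headless.default.svc.cluster.local:6379 maxconn 1024 check inter 1s resolvers globaldnspolicy
    server redis-2 redis-sentinel-node-2.redis-sentinel-headless.default.svc.cluster.local:6379 maxconn 1024 check inter 1s resolvers globaldnspolicy
</code></pre>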
|
<p>For testing, I created a single-node Kubernetes cluster using VirtualBox.
I created one Pod listening on port 4646, then I created a LoadBalancer service for that Pod.</p>
<p>Yaml file for Pod:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: simple-app
labels:
app: simple-app
spec:
containers:
...
name: test-flask-app
ports:
- containerPort: 4646
</code></pre>
<p>Yaml file for LoadBalancer:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: simple-app-lb
spec:
type: LoadBalancer
ports:
- port: 88
protocol: TCP
targetPort: 4646
selector:
app: simple-app
</code></pre>
<p>Output of command <code>kubectl get nodes -o wide</code>:</p>
<pre><code>NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
server Ready master 20h v1.14.1 10.0.2.8 <none> Ubuntu 18.04.2 LTS 4.15.0-48-generic docker://18.6.2
</code></pre>
<p>Output of command <code>kubectl get all</code></p>
<pre><code>NAME READY STATUS RESTARTS AGE
pod/simple-app 1/1 Running 5 20h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 20h
service/simple-app-lb LoadBalancer 10.98.241.39 <pending> 88:32319/TCP 20h
</code></pre>
<p>From another virtual machine located in the same network I'd like to get access to the LoadBalancer via the IP address of the machine running Kubernetes and port 88. If I run the following command I get the output below:</p>
<pre><code>sergey@server2:~$ curl 10.0.2.8:88
curl: (7) Failed to connect to 10.0.2.8 port 88: connection refused
</code></pre>
<p>But if I use port 32319 I get the access:</p>
<pre><code>sergey@server2:~$ curl 10.0.2.8:32319
{"msg":"superuser, Hello from GeoServer"}
</code></pre>
<p>Also I can get the access if I am on the machine with Kubernetes:</p>
<pre><code>sergey@server:~$ curl 10.98.241.39:88
{"msg":"superuser, Hello from GeoServer"}
</code></pre>
<p>For what reason can't I get access via the node's IP address and the service port 88?</p>
|
<p>Under the hood Load Balancer service is also a NodePort, that's why you can connect to NodeIP:32319. You can read more about NodePort services here: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#nodeport</a></p>
<p>Also, you should see that your LoadBalancer EXTERNAL-IP stays in Pending state forever, which means no real load balancer was created - because you are not running in a cloud and there is no provider to provision one. So in your case you can access the LoadBalancer service only via ClusterIP:88 from the Kube nodes, or via NodeIP:32319 from outside of the Kubernetes cluster.</p>
|