Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>I want to customize the arguments to kube-apiserver so that I can inspect the requests being made and debug a failing request. However, I'm not sure what I need to modify. I know that kube-apiserver can't be deployed with helm, because helm requires an API server in order to do its job. So it must be deployed via some other bootstrapping method, but looking at the kind source code I can't figure out what that method is.</p>
| Robin Green | <p>The kube-apiserver is configured with a static manifest file, which is stored in <code>/etc/kubernetes/manifests/kube-apiserver.yaml</code>.</p>
<p>So first, find the ID of the Docker container that acts as the Kubernetes control-plane node in kind:</p>
<pre><code>docker ps|grep cluster-control-plane
</code></pre>
<p>Get a shell in it:</p>
<pre><code>docker exec -it 4aeedccce928 bash
</code></pre>
<p>Install an editor (e.g. emacs) and edit the aforementioned file to add/remove/replace <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/" rel="nofollow noreferrer">the desired arguments</a>:</p>
<pre><code>apt-get update
apt-get install emacs-nox
emacs /etc/kubernetes/manifests/kube-apiserver.yaml
</code></pre>
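<p>For example, to see more detail about incoming requests you might raise the API server's log verbosity or enable audit logging. This is only a sketch of the relevant fragment of the manifest, assuming a standard kubeadm-generated file; the flag values are illustrative:</p>
<pre><code>spec:
  containers:
  - command:
    - kube-apiserver
    # ... keep the existing kubeadm-generated flags as they are ...
    - --v=6              # added: log each request handled by the API server
    - --audit-log-path=- # added: write audit events to stdout (only useful together with an audit policy file)
</code></pre>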
<p>Kubernetes will detect the file change and automatically restart the server, which can be validated with:</p>
<pre><code>ps -Afl|grep kube-apiserver
</code></pre>
<p>If it crashes on startup, you can find the logs using</p>
<pre><code>apt-get install less
less /var/log/pods/kube-system_kube-apiserver-cluster-control-plane_*/*/*.log
</code></pre>
<p>If the container fails to start at all, there will not be any log file there - check the manifest file for syntax errors by referring to the Kubernetes documentation.</p>
| Robin Green |
<p>I am trying to deploy a simple node-red container on kubernetes locally to monitor its usage and at the same time create a persistent volume storage so that my node red work is saved. However, I can't get it to deploy to kubernetes. I have created a Deployment.yaml file with the following code</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nodered
name: nodered
spec:
replicas: 1
selector:
matchLabels:
app: nodered
template:
metadata:
labels:
app: nodered
spec:
containers:
- name: nodered
image: nodered/node-red:latest
limits:
memory: 512Mi
cpu: "1"
requests:
memory: 256Mi
cpu: "0.2"
ports:
- containerPort: 1880
volumeMounts:
- name: nodered-claim
mountPath: /data/nodered
# subPath: nodered <-- not needed in your case
volumes:
- name: nodered-claim
persistentVolumeClaim:
claimName: nodered-claim
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: small-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /data
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- minikube
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nodered-claim
spec:
storageClassName: local-storage
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
</code></pre>
<p>I am getting this error in powershell:</p>
<pre><code>kubectl apply -f ./Deployment.yaml
persistentvolume/small-pv unchanged
persistentvolumeclaim/nodered-claim unchanged
storageclass.storage.k8s.io/local-storage unchanged
Error from server (BadRequest): error when creating "./Deployment.yaml": Deployment in version "v1" cannot be handled as a Deployment: strict decoding error: unknown field "spec.template.spec.containers[0].cpu", unknown field "spec.template.spec.containers[0].limits", unknown field "spec.template.spec.containers[0].memory", unknown field "spec.template.spec.requests"
</code></pre>
<p>I want to deploy it to Kubernetes so that I can monitor the memory usage of the node red container.</p>
| Quazi Afnan Ahmed | <p>The <code>requests</code> &amp; <code>limits</code> sections need to be under a <code>resources</code> heading, as follows:</p>
<pre><code>    spec:
      containers:
      - name: nodered
        image: nodered/node-red:latest
        resources:
          limits:
            memory: 512Mi
            cpu: "1"
          requests:
            memory: 256Mi
            cpu: "0.2"
        ports:
        - containerPort: 1880
        volumeMounts:
        - name: nodered-claim
          mountPath: /data/nodered
          # subPath: nodered <-- not needed in your case
</code></pre>
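<p>Once the pod is running you can watch its memory usage with <code>kubectl top</code>, assuming the metrics-server add-on is enabled in your cluster (e.g. <code>minikube addons enable metrics-server</code> on minikube):</p>
<pre><code>kubectl top pod -l app=nodered
</code></pre>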
| hardillb |
<p>I get the error: <strong><em>python: can't open file 'app.py': [Errno 2] No such file or directory</em></strong> when I try to create a deployment. I have included the folder structure, deployment and PVC manifests.</p>
<p>When I create a container from the docker image which I built using the docker file below, it runs fine - STATUS: Running.</p>
<p>I suspect it might have something to do with the persistent volumes or the way I have written my paths. I have tried the long-form (/var/www/code/order_service/app..) for my paths as well but face the same issue. </p>
<p>I'll appreciate any help. Thanks in advance guys.</p>
<p>Docker File</p>
<pre><code>FROM python:3-alpine3.10
COPY ./app/requirements.txt /app/requirements.txt
WORKDIR /app
RUN apk add --update \
bash \
curl \
py-mysqldb \
gcc \
libc-dev \
mariadb-dev \
nodejs \
npm \
&& pip install --upgrade pip \
&& pip install -r requirements.txt \
&& rm -rf /var/cache/apk/*
COPY ./app/package.json /app/package.json
RUN npm install
COPY ./app /app
CMD ["python", "app.py"]
</code></pre>
<p>Folder structure</p>
<pre><code>code
order_service
app
app.py
</code></pre>
<p>Here is my manifest:</p>
<pre><code>DEPLOYMENT
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: order
  name: order
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: order
  strategy:
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: order
    spec:
      containers:
      - image: order:1.0
        imagePullPolicy: IfNotPresent
        name: order
        ports:
        - containerPort: 5000
        resources: {}
        volumeMounts:
        - mountPath: ./app
          name: order-claim0
      restartPolicy: Always
      volumes:
      - name: order-claim0
        persistentVolumeClaim:
          claimName: order-claim0
status: {}
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: order-claim0
  name: order-claim0
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
status: {}
</code></pre>
| UJAY | <p>I think the point has been missed here.</p>
<p>In the <code>Dockerfile</code>, you put <code>app.py</code> into the image's <code>/app</code> folder:</p>
<pre><code>WORKDIR /app
COPY ./app /app
CMD ["python", "app.py"]
</code></pre>
<p>Then in Kubernetes, you try to replace the folder <code>/app</code> with a persistent volume.</p>
<p>But where does its original content come from?</p>
<pre><code> volumeMounts:
- mountPath: ./app
name: order-claim0
</code></pre>
<p>That's the reason it runs fine locally from the docker image, but fails when you run a command similar to the one below:</p>
<pre><code>docker run -ti --rm -v $(New_Local_folder_somewhere):/app order:1.0
</code></pre>
<p>That is because the folder <code>/app</code> has been replaced with a totally new mounted folder.</p>
<p>Second, use an absolute path rather than a relative path in this case:</p>
<pre><code> - mountPath: ./app
change to
- mountPath: /app
</code></pre>
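<p>If the goal is only to persist the data the app generates (rather than the code itself), a rough sketch is to mount the claim somewhere that does not shadow the image's code, for example a data subdirectory. The path below is hypothetical and depends on where your app actually writes:</p>
<pre><code>        volumeMounts:
        - mountPath: /app/data   # hypothetical data directory; the image's code in /app stays visible
          name: order-claim0
</code></pre>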
| BMW |
<p>The <a href="https://quay.io/repository/kubernetes-ingress-controller/nginx-ingress-controller?tag=latest&tab=tags" rel="nofollow noreferrer">nginx ingress controller</a> for Kubernetes uses the cap_net_bind_service capability, which is a Linux filesystem attribute, to obtain the permissions to open a privileged port (port 80). However, I have a kind test which creates a local Kubernetes cluster using docker containers as virtual nodes (docker inside docker) and starts up an nginx ingress controller pod. This controller pod works fine in Docker Desktop on Windows 10, but when I run the same test on Linux, the controller pod repeatedly crashes on startup, with:</p>
<pre><code>[17:27:34]nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied)
</code></pre>
<p>Yet the required capability exists in the nested Docker container:</p>
<pre><code>$ allpods=$(kubectl get pods)
$ ingresspod=$(echo "$allpods"|grep '^nginx-ingress-controller'|head -n1)
$ kubectl exec "${ingresspod%% *}" -- getcap -v /usr/local/nginx/sbin/nginx
/usr/local/nginx/sbin/nginx = cap_net_bind_service+ep
</code></pre>
<p>SELinux is enabled but in permissive mode on the Linux host.</p>
| Robin Green | <p>This turned out to be because on the Linux host, <code>dockerd</code> was being run with the <code>--no-new-privileges</code> option.</p>
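<p>If you hit the same problem, it is worth checking how <code>dockerd</code> is started on the host. The option can come from the daemon's command line or from <code>/etc/docker/daemon.json</code>; a daemon.json that triggers this behaviour would contain something like the snippet below (removing it and restarting dockerd should restore capability-based port binding):</p>
<pre><code>{
  "no-new-privileges": true
}
</code></pre>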
| Robin Green |
<p>I had Minikube installed on my Mac and then I removed it and replaced it with a <a href="https://medium.com/better-programming/local-k3s-cluster-made-easy-with-multipass-108bf6ce577c" rel="nofollow noreferrer">3-node cluster using Multipass and K3s</a>. Now my issue is that <code>kubectl</code> is still referring to Minikube when I execute its commands in terminal. </p>
<p>For instance when I run <code>kubectl get nodes</code> I get the following error:</p>
<pre><code>Error in configuration:
* unable to read client-cert /Users/hemodd/.minikube/client.crt for cluster1 due to open /Users/hemodd/.minikube/client.crt: no such file or directory
* unable to read client-key /Users/hemodd/.minikube/client.key for cluster1 due to open /Users/hemodd/.minikube/client.key: no such file or directory
* unable to read certificate-authority /Users/hemodd/.minikube/ca.crt for cluster1 due to open /Users/hemodd/.minikube/ca.crt: no such file or directory
</code></pre>
<p>I have followed these steps to remove Minikube but I am still getting the same error:</p>
<pre><code>minikube delete
brew uninstall minikube
rm -rf ~/.minikube
</code></pre>
<p>Any help is much appreciated.</p>
| HemOdd | <p>Your errors are mostly about wrong settings in kubeconfig; please double-check the local file <code>~/.kube/config</code>.</p>
<p>This is the default kubectl configuration file, and it includes three parts: clusters, contexts and users.</p>
<p>If you have two Kubernetes clusters (one is minikube, and the other is k3s), you should be able to set them up in the same file without conflict. </p>
<p>Read this first: <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/</a></p>
<p>Each cluster should have related certificates (and user certificates) set properly.</p>
<p>Then you can set the current context with the command below:</p>
<pre><code>kubectl config set-context XXXX
</code></pre>
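<p>If the stale minikube entries are the problem, you can also remove them from the kubeconfig explicitly. The entry names below are assumptions; check what they are actually called with <code>kubectl config get-contexts</code> first:</p>
<pre><code>kubectl config delete-context minikube
kubectl config delete-cluster minikube
kubectl config unset users.minikube
</code></pre>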
| BMW |
<p>I have my deployment scaled across multiple pods. The requirement is that whenever the app starts up I want a piece of code to be executed. But I just want one pod to execute this code. How can this be achieved?</p>
<p>Right now I have a way - I set a flag in my DB and have my pods read the flag, which ever reads and locks the flag first will have to process the task.</p>
<p>Are there any drawbacks in my approach ? Is there any better way to do this?</p>
| Kowshhal | <p>I believe this is the right approach. All pods need a way to know whether another pod is already processing the task, and doing that through the DB is the best option. The only drawback: what if the pod that picks up the task can't update the flag status? What would happen in that case?</p>
<p>The other option I could think of is publishing a message to a message queue (a queue maintained outside of your pods, perhaps provided by your cloud platform). The idea is that whenever a pod comes alive, it checks the queue and processes the task. It's similar to the database approach you have.</p>
| Em Ae |
<p>Is using a Job and a Persistent Volume Claim the best approach for making migrations and migrating models in a Django app deployed on Kubernetes?</p>
<p>Persistent Volume</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: do-block-storage
</code></pre>
<p>Job</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: django-migrations-job
spec:
  template:
    spec:
      containers:
      - name: app
        image: user/app:latest
        command: ["/bin/sh", "-c"]
        args: ["python manage.py makemigrations app; python manage.py migrate"]
        volumeMounts:
        - mountPath: "/container-code-dir/app/migrations"
          name: my-do-volume
      volumes:
      - name: my-do-volume
        persistentVolumeClaim:
          claimName: csi-pvc
</code></pre>
| cuscode | <p>Looks fine to me. Not sure if you need to run this job once, or every time a new pod comes up?</p>
<p>If it needs to run every time before the Django service pod starts, <a href="https://kubernetes.io/docs/concepts/workloads/pods/init-containers/" rel="nofollow noreferrer">Init Containers</a> may help.</p>
<p>Example: </p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
  - name: myapp-container
    image: busybox:1.28
    command: ['sh', '-c', 'echo The app is running! && sleep 3600']
  initContainers:
  - name: init-myservice
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup myservice; do echo waiting for myservice; sleep 2; done;']
  - name: init-mydb
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup mydb; do echo waiting for mydb; sleep 2; done;']
</code></pre>
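<p>Applied to your case, a rough sketch (not a drop-in manifest) of an init container that runs the migrations before the Django container starts, reusing the image and command from your Job, could look like this:</p>
<pre><code>      initContainers:
      - name: django-migrations
        image: user/app:latest
        command: ["/bin/sh", "-c"]
        args: ["python manage.py makemigrations app; python manage.py migrate"]
</code></pre>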
<p>You can do the same in a Deployment's pod template.</p>
| BMW |
<p>I am currently facing a situation where I need to deploy a small cluster (only 4 pods for now) which will be containing 4 different microservices. This cluster has to be duplicated so I can have one PRODUCTION cluster and one DEVELOPMENT cluster.</p>
<p>Even if it's not hard from my point of view (Creating a cluster and then uploading docker images to pods with parameters in order to use the right resources connection strings), I am stuck at the CD/CI part..</p>
<blockquote>
<p>From a CloudBuild trigger, how to push the Docker image to the right "cluster's pod", I have absolutely no idea AT ALL how to achieve it...</p>
</blockquote>
<p>There is my cloudbuild.yaml</p>
<pre><code>steps:
  #step 1 - Getting the previous (current) image
  - name: 'gcr.io/cloud-builders/docker'
    entrypoint: 'bash'
    args: [
      '-c',
      'docker pull gcr.io/{PROJECT_ID}/{SERVICE_NAME}:latest || exit 0'
    ]
  #step 2 - Build the image and push it to gcr.io
  - name: 'gcr.io/cloud-builders/docker'
    args: [
      'build',
      '-t',
      'gcr.io/{PROJECT_ID}/{SERVICE_NAME}',
      '-t',
      'gcr.io/{PROJECT_ID}/{SERVICE_NAME}:latest',
      '.'
    ]
  #step 3 - Deploy our container to our cluster
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['apply', '-f', 'service.yaml', '--force']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE={CLUSTER_ZONE}'
      - 'CLOUDSDK_CONTAINER_CLUSTER={CLUSTER_NAME}'
  #step 4 - Set the image
  - name: 'gcr.io/cloud-builders/kubectl'
    args: [
      'set',
      'image',
      'deployment',
      '{SERVICE_NAME}',
      '{SERVICE_NAME}=gcr.io/{PROJECT_ID}/{SERVICE_NAME}'
    ]
    env:
      - 'CLOUDSDK_COMPUTE_ZONE={CLUSTER_ZONE}'
      - 'CLOUDSDK_CONTAINER_CLUSTER={CLUSTER_NAME}'
# push images to Google Container Registry with tags
images: [
  'gcr.io/{PROJECT_ID}/{SERVICE_NAME}',
  'gcr.io/{PROJECT_ID}/{SERVICE_NAME}:latest'
]
</code></pre>
<p>Can anyone help me out? I don't really know in which direction to go to..</p>
| Emixam23 | <p>Do you know about <a href="https://helm.sh/docs/topics/charts/" rel="nofollow noreferrer">helm charts</a>? They are designed for deploying to different environments.</p>
<p>With different <code>values.yaml</code> files, you can quickly deploy the same source code base to different environments.</p>
<p>For example, you can name the values.yaml files after the environment:</p>
<pre><code>values-dev.yaml
values-sit.yaml
values-prod.yaml
</code></pre>
<p>The only differences are some variables, such as the environment (dev/sit/prod) and namespaces.</p>
<p>So when you run the deployment, it will be:</p>
<pre><code>env=${ENVIRONMENT}
helm install -f values-${env}.yaml myredis ./redis
</code></pre>
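<p>As an illustration (the keys here are made up; your chart defines its own values), the per-environment files might only differ in a few settings:</p>
<pre><code># values-dev.yaml
environment: dev
namespace: myapp-dev
replicaCount: 1

# values-prod.yaml
environment: prod
namespace: myapp-prod
replicaCount: 3
</code></pre>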
| BMW |
<p>If I run:</p>
<pre><code>kubectl exec spi-tools-dev-3449236037-08pau -it -- /bin/bash
</code></pre>
<p>I get an interactive shell, but something is eating ^p characters. If I type one ^p, nothing happens. When I type a second ^p, two get sent. In bash, I go two items back in my history. In emacs, I go up two lines.</p>
<p>What's going on here? Something is obviously interpreting ^p as a command/escape character, but I don't see anything in the kubernetes docs that talks about that.</p>
| Roy Smith | <p>It looks like the answer is:</p>
<ol>
<li>Yes, this is kubectl's emulation of docker's ctrl-p/ctrl-q detach sequence.</li>
<li>No, there's nothing you can do to change it.</li>
<li>See <a href="https://github.com/kubernetes/kubernetes/issues/79110" rel="nofollow noreferrer">this closed bug</a>.</li>
</ol>
<p>I'm running this under tmux, which in turn is under ssh. Each of which has their own in-band command signalling. It's amazing anything works at all :-)</p>
| Roy Smith |
<p>When running <code>kubectl get events</code>, is there a way to filter by events without knowing the name of the pod?</p>
<p>I am trying to do this with <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/kubernetes?view=azure-devops" rel="nofollow noreferrer">Azure Pipeline's <strong>Kubectl</strong> task</a>, which is limited to passing arguments to <code>kubectl get events</code>, but does not allow subshells and pipes, so <code>grep</code> and <code>awk</code> are not available.</p>
<p>I tried using <code>kubectl get events --field-selector involvedObject.name=my-microservice-name</code>, which works to an extent (i.e., for the deployment resource), but not for the pods.</p>
<p>Using <code>kubectl get events --field-selector app.kubernetes.io/name=my-microservice-name</code> returns no results, despite having that label configured as seen in <code>kubectl describe pod <my-microservice-name>-pod-name</code>.</p>
<p>Ideally if there is a way to use wildcards, such as <code>kubectl get events --field-selector involvedObject.name=*my-microservice-name*</code>, would be the best case scenario.</p>
<p>Any help is greatly appreciated.</p>
<p>Thanks!</p>
| HXK | <p>I don't have an Azure environment, but I can show how to get events for pods:</p>
<pre><code>master $ kubectl get events --field-selector involvedObject.kind=Pod
LAST SEEN TYPE REASON OBJECT MESSAGE
<unknown> Normal Scheduled pod/nginx Successfully assigned default/nginx to node01
5m13s Normal Pulling pod/nginx Pulling image "nginx"
5m8s Normal Pulled pod/nginx Successfully pulled image "nginx"
5m8s Normal Created pod/nginx Created container nginx
5m8s Normal Started pod/nginx Started container nginx
</code></pre>
<p>If you need to target a particular pod, use <code>involvedObject.kind</code> and <code>involvedObject.name</code> together:</p>
<pre><code>master $ kubectl run redis --image=redis --generator=run-pod/v1
master $ kubectl run nginx --image=nginx --generator=run-pod/v1
master $ kubectl get events --field-selector involvedObject.kind=Pod,involvedObject.name=nginx
LAST SEEN TYPE REASON OBJECT MESSAGE
<unknown> Normal Scheduled pod/nginx Successfully assigned default/nginx to node01
16m Normal Pulling pod/nginx Pulling image "nginx"
16m Normal Pulled pod/nginx Successfully pulled image "nginx"
16m Normal Created pod/nginx Created container nginx
16m Normal Started pod/nginx Started container nginx
</code></pre>
<p>The reason I knew <code>involvedObject.kind</code> works is that the JSON output shows the key exists:</p>
<pre><code> "involvedObject": {
"apiVersion": "v1",
"fieldPath": "spec.containers{nginx}",
"kind": "Pod",
"name": "nginx",
"namespace": "default",
"resourceVersion": "604",
"uid": "7ebaaf99-aa9c-402b-9517-1628d99c1763"
},
</code></pre>
<p>The other approach worth trying is <code>jsonpath</code>. Get the output in JSON format:</p>
<pre><code>kubectl get events -o json
</code></pre>
<p>then copy &amp; paste the JSON into <a href="https://jsonpath.com/" rel="nofollow noreferrer">https://jsonpath.com/</a> and experiment with <a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" rel="nofollow noreferrer">jsonpath expressions</a>.</p>
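<p>For example, a jsonpath expression like the one below (adjust the fields to what you need) prints one line per event with the involved object's name, so the output can be post-processed even without grep or awk:</p>
<pre><code>kubectl get events -o jsonpath='{range .items[*]}{.involvedObject.name}{"\t"}{.reason}{"\t"}{.message}{"\n"}{end}'
</code></pre>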
| BMW |
<p>By default, image-gc-high-threshold and image-gc-low-threshold values are 90 and 80% respectively.</p>
<p>We want to change them to 80 and 70. How can we change the Kubernetes image garbage collection threshold values?</p>
| Rahul Khengare | <p>Changing the garbage collection thresholds can be done using switches on the <code>kubelet</code>.</p>
<p>From <a href="https://kubernetes.io/docs/admin/kubelet/" rel="nofollow noreferrer">the docs</a></p>
<pre><code>--image-gc-high-threshold int32 The percent of disk usage after which image garbage collection is always run. (default 85)
--image-gc-low-threshold int32 The percent of disk usage before which image garbage collection is never run. Lowest disk usage to garbage collect to. (default 80)
</code></pre>
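<p>Where you put these depends on how the kubelet is started. On a kubeadm-based node you would typically set the equivalent fields in the kubelet's configuration file and restart the kubelet; a sketch, using the field names from the KubeletConfiguration API:</p>
<pre><code># e.g. added to /var/lib/kubelet/config.yaml on each node, then: systemctl restart kubelet
imageGCHighThresholdPercent: 80
imageGCLowThresholdPercent: 70
</code></pre>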
| coreypobrien |
<p>With the rise of containers, Kubernetes, 12 Factor etc, it has become easier to replicate an identical environment across dev, staging and production. However, there appears to be no common standard for domain name conventions.</p>
<p>As far as I can see it, there are two ways of doing it:</p>
<ul>
<li>Use subdomains:
<ul>
<li><code>*.dev.foobar.tld</code></li>
<li><code>*.staging.foobar.tld</code></li>
<li><code>*.foobar.tld</code></li>
</ul></li>
<li>Use separate domains:
<ul>
<li><code>*.foobar-dev.tld</code></li>
<li><code>*.foobar-staging.tld</code></li>
<li><code>*.foobar.tld</code></li>
</ul></li>
</ul>
<p>I can see up and downs with both approaches, but I'm curious what the common practise is.</p>
<p>As a side-note, Cloudflare will not issue you certificates for sub-sub domains (e.g. <code>*.stage.foobar.tld</code>).</p>
| vpetersson | <blockquote>
<p>There are only two hard things in Computer Science: cache invalidation
and naming things.</p>
<p>-- Phil Karlton</p>
</blockquote>
<p>Depends on the company size.</p>
<p>Small businesses usually go for dashes and get the wildcard certificate.
So they would have <code>dev.example.com, test.example.com</code></p>
<p>In larger enterprises they usually have a DNS infrastructure rolled out and the provisioning processes takes care of the assignment. It usually looks like</p>
<pre><code>aws-eu-central-1.appName.staging.[teamName].example.com
</code></pre>
<p>They would either use their own self-signed certs with the CA on all servers or have the money for the SANs.</p>
<p>For more inspiration:</p>
<p><a href="https://blog.serverdensity.com/server-naming-conventions-and-best-practices/" rel="noreferrer">https://blog.serverdensity.com/server-naming-conventions-and-best-practices/</a></p>
<p><a href="https://mnx.io/blog/a-proper-server-naming-scheme/" rel="noreferrer">https://mnx.io/blog/a-proper-server-naming-scheme/</a></p>
<p><a href="https://namingschemes.com/" rel="noreferrer">https://namingschemes.com/</a></p>
| Marcus Maxwell |
<p>I have a couple Charts which all need access to the same Kubernetes Secret. My initial plan was to create a Chart just for those Secrets but it seems <a href="https://github.com/helm/helm/issues/4670" rel="nofollow noreferrer">Helm doesn't like that</a>. I am thinking this must be a common problem and am wondering what folks generally do to solve this problem?</p>
<p>Thanks!</p>
| adelbertc | <p>Best practice is: don't store sensitive secrets in Kubernetes clusters. A Kubernetes Secret is <strong>encoded</strong>, not <strong>encrypted</strong>.</p>
<p>You can reference the secret via AWS SSM/Secrets Manager, HashiCorp Vault or other similar services.</p>
<p><a href="https://github.com/aws-samples/aws-workshop-for-kubernetes/tree/master/04-path-security-and-networking/401-configmaps-and-secrets" rel="nofollow noreferrer">https://github.com/aws-samples/aws-workshop-for-kubernetes/tree/master/04-path-security-and-networking/401-configmaps-and-secrets</a></p>
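<p>If the secret does have to live in the cluster, one common pattern is to create it once, outside of any chart, and have every chart reference it only by name. The secret name below is just an example:</p>
<pre><code>kubectl create secret generic shared-credentials --from-literal=apiKey=<value>
</code></pre>
<p>Each chart's Deployment can then pull it in with <code>envFrom</code>/<code>secretKeyRef</code> against <code>shared-credentials</code>, so no single chart owns the secret.</p>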
| BMW |
<p>AKS- Can't log into one of the worker nodes (VM). I assigned the public IP as per <a href="https://gist.github.com/tsaarni/624d5406e442f08fe11083169c059a68" rel="nofollow noreferrer">https://gist.github.com/tsaarni/624d5406e442f08fe11083169c059a68</a> but still no luck. I get the error below:</p>
<blockquote>
<p>JohnDoeMac:.kube john_doe$ ssh [email protected]
Permission denied (publickey).</p>
</blockquote>
<p>Here subscription ID looks like: e84ff951-xxxxxxxxxxxx</p>
| codeRelix | <p>If you create AKS from the Azure portal, you can specify the user name of the VM.</p>
<p>In that case, the user name is not <strong>azureuser</strong> any more.</p>
<p>You can find the user name and public key in the Azure portal:</p>
<p><a href="https://i.stack.imgur.com/GTelq.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GTelq.jpg" alt="enter image description here" /></a></p>
| SILENCE |
<p>I've built a docker image based on <a href="https://hub.docker.com/_/httpd" rel="nofollow noreferrer">httpd:2.4</a>. In my k8s deployment I've defined the following <code>securityContext</code>:</p>
<pre><code>securityContext:
  privileged: false
  runAsNonRoot: true
  runAsUser: 431
  allowPrivilegeEscalation: false
</code></pre>
<p>When I apply the deployment without this <code>securityContext</code> everything works fine, the server starts up correctly, etc. However when I add in the above <code>securityContext</code> my pod has the status <code>CrashLoopBackOff</code> and I get the following from <code>$ kubectl logs...</code></p>
<pre><code>(13)Permission denied: AH00072: make_sock: could not bind to address 0.0.0.0:80
no listening sockets available, shutting down
AH00015: Unable to open logs
</code></pre>
<p>From searching around online I've found that this is because apache needs to be root in order to run, so running as non-root will fail.</p>
<p>I've also found that <code>httpd.conf</code> has the following</p>
<pre><code>#
# If you wish httpd to run as a different user or group, you must run
# httpd as root initially and it will switch.
#
# User/Group: The name (or #number) of the user/group to run httpd as.
# It is usually good practice to create a dedicated user and group for
# running httpd, as with most system services.
#
User daemon
Group daemon
</code></pre>
<p>Which seems to suggest that if I don't use <code>runAsNonRoot</code> or <code>runAsUser</code> in <code>securityContext</code> it should automatically switch to whichever user I specify in <code>httpd.conf</code>. In my case I created a user/group <code>swuser</code> and edited <code>httpd.conf</code> accordingly. However when I run the image locally with <code>docker run -p 5000:80 my-registry/my-image:1.0.12-alpha</code> and then run <code>docker exec -it my-container whoami</code> it prints <code>root</code>.</p>
<p>So my question is, what can I do to run my container safely as non-root in k8s (and be sure it is non-root)</p>
| Mike S | <p>Run the apache on a port greater than 1024. </p>
<p>Ports below 1024 are privileged ports only available to the root user.</p>
<p>As you will have some ingress load balancer before that, it shouldn't matter :-)</p>
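<p>Concretely (a sketch; the file location depends on your image), that means changing the <code>Listen</code> directive in <code>httpd.conf</code> and keeping the container port and the Service's <code>targetPort</code> in sync with it:</p>
<pre><code># httpd.conf
Listen 8080
</code></pre>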
| flob |
<p>I understand that files / folders can be copied into a container using the command:</p>
<pre><code>kubectl cp /tmp/foo_dir <some-pod>:/tmp/bar_dir
</code></pre>
<p>However, I am looking to do this in a yaml file</p>
<p>How would I go about doing this? (Assuming that I am using a deployment for the container)</p>
| Kyle Blue | <p>The way you are going is the wrong direction. Kubernetes offers several ways to do this.</p>
<p>First, think about ConfigMaps:</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap</a></p>
<p>You can easily define the configuration files for your application running in a container.</p>
<p>If you know the files or folders exist on the worker nodes, you can use <code>hostPath</code> to mount them into the container, pinning the pod to a nominated node with <code>nodeName: node01</code> in the k8s YAML.</p>
<p><a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#hostpath</a></p>
<p>If the files or folders are generated temporarily, you can use <code>emptyDir</code>:</p>
<p><a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#emptydir</a></p>
| BMW |
<p>I want to be able to mount an unknown number of config files in /etc/configs</p>
<p>I have added some files to the configmap using:</p>
<blockquote>
<p>kubectl create configmap etc-configs --from-file=/tmp/etc-config</p>
</blockquote>
<p>The number of files and the file names are never going to be known in advance. I would like to be able to recreate the configmap, and the folder in the Kubernetes container should be updated after the sync interval.</p>
<p>I have tried to mount this but I'm not able to do so, the folder is always empty but I have data in the configmap.</p>
<pre><code>bofh$ kubectl describe configmap etc-configs
Name: etc-configs
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
file1.conf:
----
{
... trunkated ...
}
file2.conf:
----
{
... trunkated ...
}
file3.conf:
----
{
... trunkated ...
}
Events: <none>
</code></pre>
<p>I'm using this one in the container volumeMounts:</p>
<pre><code>- name: etc-configs
  mountPath: /etc/configs
</code></pre>
<p>And this is the volumes:</p>
<pre><code>- name: etc-configs
  configMap:
    name: etc-configs
</code></pre>
<p>I can mount individual items but not an entire directory.</p>
<p>Any suggestions about how to solve this?</p>
| Johan Ryberg | <p>You can mount the ConfigMap as a special volume into your container.</p>
<p>In this case, the mount folder will show each of the keys as a file in the mount folder and the files will have the map values as content.</p>
<p>From the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#populate-a-volume-with-data-stored-in-a-configmap" rel="noreferrer">Kubernetes documentation</a>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      ...
      volumeMounts:
      - name: config-volume
        mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: special-config
</code></pre>
| sola |
<p>I am trying to deploy an app to Kubernetes cluster via Helm charts. Every time I try to deploy the app I get </p>
<blockquote>
<p>"Liveness probe failed: Get <a href="http://172.17.0.7:80/" rel="nofollow noreferrer">http://172.17.0.7:80/</a>: dial tcp
172.17.0.7:80: connect: connection refused" and "Readiness probe failed: Get <a href="http://172.17.0.7:80/" rel="nofollow noreferrer">http://172.17.0.7:80/</a>: dial tcp 172.17.0.7:80: connect:
connection refused"</p>
</blockquote>
<p>This is my deployment.yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "mychart.fullname" . }}
  labels:
    {{- include "mychart.labels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "mychart.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "mychart.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      serviceAccountName: {{ include "mychart.serviceAccountName" . }}
      securityContext:
        {{- toYaml .Values.podSecurityContext | nindent 8 }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: nikovlyubomir/docker-spring-boot:latest
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            initialDelaySeconds: 200
            httpGet:
              path: /
              port: 80
          readinessProbe:
            initialDelaySeconds: 200
            httpGet:
              path: /
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
</code></pre>
<p>I read that a possible solution might be adding a larger initialDelaySeconds to both probes, but this did not resolve my issue.</p>
<p>Any opinion? </p>
| Javista | <p>Since I can pull the image, I gave it a try:</p>
<pre><code>$ docker run -d nikovlyubomir/docker-spring-boot:latest
9ac42a1228a610ae424217f9a2b93cabfe1d3141fe49e0665cc71cb8b2e3e0fd
</code></pre>
<p>I got logs</p>
<pre><code>$ docker logs 9ac
...
2020-03-08 02:02:30.552 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 1993 (http) with context path ''
</code></pre>
<p>It seems the application starts on port 1993, not 80.</p>
<p>Then I checked the port and connection in the container:</p>
<pre><code>$ docker exec -ti 9ac bash
root@9ac42a1228a6:/# curl localhost:1993
{"timestamp":"2020-03-08T02:03:12.104+0000","status":404,"error":"Not Found","message":"No message available","path":"/"}
root@9ac42a1228a6:/# curl localhost:1993/actuator/health
{"timestamp":"2020-03-08T02:04:01.348+0000","status":404,"error":"Not Found","message":"No message available","path":"/actuator/health"}
root@9ac42a1228a6:/# curl localhost:80
curl: (7) Failed to connect to localhost port 80: Connection refused
root@9ac42a1228a6:/# curl localhost:80/actuator/health
curl: (7) Failed to connect to localhost port 80: Connection refused
</code></pre>
<p>So make sure the probe path (<code>/</code> or another endpoint) is set properly, and that the probe port matches the port the application actually listens on (<code>1993</code>, not <code>80</code>).</p>
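<p>Assuming 1993 really is the port your Spring Boot app listens on, the probe section would become something like this sketch (note that <code>/</code> returned 404 above, so point the probes at an endpoint that actually returns 200, e.g. an actuator health endpoint if one is enabled):</p>
<pre><code>          ports:
            - name: http
              containerPort: 1993
              protocol: TCP
          livenessProbe:
            initialDelaySeconds: 200
            httpGet:
              path: /       # replace with an endpoint that returns 200
              port: http
          readinessProbe:
            initialDelaySeconds: 200
            httpGet:
              path: /       # replace with an endpoint that returns 200
              port: http
</code></pre>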
| BMW |
<p>I'm providing an external-facing REST GET API service in a kubernetes pod on AWS EKS. I had configured an ALB Ingress for this service which enforces Cognito user pool authentication. Cognito is configured with <code>Authorization code grant</code> with the <code>openid</code> OAuth scope enabled.</p>
<p>If I invoke my REST API from the browser, I get redirected to the Cognito login page. After a sucessful authentication on the form here, I can access my REST GET API just fine. This works, but this is not what I'd like to achieve.</p>
<p>Instead of this, I would need to use a <code>Bearer</code> token, after getting successfully authenticated. So first I invoke <a href="https://cognito-idp.ap-southeast-1.amazonaws.com" rel="noreferrer">https://cognito-idp.ap-southeast-1.amazonaws.com</a> using Postman with the request:</p>
<pre><code> "AuthParameters" : {
"USERNAME" : "<email>",
"PASSWORD" : "<mypass>",
"SECRET_HASH" : "<correctly calculated hash>"
},
"AuthFlow" : "USER_PASSWORD_AUTH",
"ClientId" : "<cognito user pool id>"
}
</code></pre>
<p>and I get a successful response like:</p>
<pre><code> "AuthenticationResult": {
"AccessToken": "...",
"ExpiresIn": 3600,
"IdToken": "...",
"RefreshToken": "...",
"TokenType": "Bearer"
},
"ChallengeParameters": {}
}
</code></pre>
<p>In the last step I'm trying to invoke my REST API service passing the <code>Authorization</code> HTTP header with the value <code>Bearer <AccessToken></code> but I still get a HTML response with the login page.</p>
<p>How can I configure Cognito to accept my Bearer token for this call as an authenticated identity?</p>
| Kristof Jozsa | <p>Quoting AWS support on this topic: "the Bearer token can not be used instead of the session cookie because in a flow involving bearer token would lead to generating the session cookie".</p>
<p>So unfortunately this usecase is not possible to implemented as of today.</p>
| Kristof Jozsa |
<p>I am using an init container (k8s version: v1.15.2) to set up the skywalking (6.5.0) jar file before container startup. But I could not find the file and directory the init container creates. This is my init container definition:</p>
<pre><code>"initContainers": [
  {
    "name": "init-agent",
    "image": "registry.cn-shenzhen.aliyuncs.com/dabai_app_k8s/dabai_fat/skywalking-agent:6.5.0",
    "command": [
      "sh",
      "-c",
      "set -ex;mkdir -p /skywalking/agent;cp -r /opt/skywalking/agent/* /skywalking/agent;"
    ],
    "resources": {},
    "volumeMounts": [
      {
        "name": "agent",
        "mountPath": "/skywalking/agent"
      },
      {
        "name": "default-token-xnrwt",
        "readOnly": true,
        "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount"
      }
    ],
    "terminationMessagePath": "/dev/termination-log",
    "terminationMessagePolicy": "File",
    "imagePullPolicy": "Always"
  }
]
</code></pre>
<p>Now the init container executes successfully; I checked the log output, which looks like this:</p>
<pre><code>~/Library/Mobile Documents/com~apple~CloudDocs/Document/source/dabai/microservice/soa-red-envelope on develop_jiangxiaoqiang! ⌚ 14:48:26
$ kubectl logs soa-red-envelope-service-85758d88cb-rmtcj -c init-agent
+ mkdir -p /skywalking/agent
+ cp -r /opt/skywalking/agent/activations /opt/skywalking/agent/bootstrap-plugins /opt/skywalking/agent/config /opt/skywalking/agent/logs /opt/skywalking/agent/optional-plugins /opt/skywalking/agent/plugins /opt/skywalking/agent/skywalking-agent.jar /skywalking/agent
</code></pre>
<p>Now something I am confused about: where is that directory located? Where are the files I copied? I logged into my container and could not find the jar file:</p>
<pre><code>~/Library/Mobile Documents/com~apple~CloudDocs/Document/source/dabai/microservice/soa-red-envelope on develop_jiangxiaoqiang! ⌚ 14:50:55
$ kubectl exec -it soa-red-envelope-service-85758d88cb-rmtcj /bin/ash
/ # ls
bin data dev etc home lib media mnt opt proc root run sbin srv sys tmp usr var
/ # cd /opt/
/opt # ls
data
/opt #
</code></pre>
<p>Now I am starting my app to collect metrics data like this:</p>
<pre><code>ENTRYPOINT exec java -Xmx1g -Xms1g -Dapp.id=$APP_ID -javaagent:/skywalking/agent/skywalking-agent.jar -Dskywalking.agent.service_name=soa-red-envelope-service -Dskywalking.collector.backend_service=10.254.35.220:11800 -jar /root/soa-red-envelope-service-1.0.0-SNAPSHOT.jar
</code></pre>
<p>Obviously it gives me a "could not find the jar file" error:</p>
<pre><code>Error occurred during initialization of VM
agent library failed to init: instrument
Error opening zip file or JAR manifest missing : /skywalking/agent/skywalking-agent.jar
</code></pre>
<p>So what should I do to fix this? I have already searched the internet but found no useful way to solve my situation.</p>
| Dolphin | <p>You should go through this document first:</p>
<p><a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#hostpath</a></p>
<p>Use <code>hostPath</code> as a sample:</p>
<pre><code> volumes:
 - name: agent
   hostPath:
     # directory location on host
     path: /agent
     # this field is optional
     type: Directory
</code></pre>
<p>You need to reference it in both the init container and the normal container.</p>
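<p>A sketch of what "reference it in both" means, using the <code>agent</code> volume above (an <code>emptyDir</code> volume would also work here if the jar only needs to live as long as the pod):</p>
<pre><code>  initContainers:
  - name: init-agent
    # ... image and command as in your JSON above ...
    volumeMounts:
    - name: agent
      mountPath: /skywalking/agent
  containers:
  - name: app
    # ... your application container ...
    volumeMounts:
    - name: agent
      mountPath: /skywalking/agent
</code></pre>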
| BMW |
<p>I have set up a Kubernetes cluster on Ubuntu 16.04 with a master and a worker. I deployed an application and created a NodePort service as below.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: hello-app-deployment
spec:
  selector:
    matchLabels:
      app: hello-app
  replicas: 1
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: yeasy/simple-web:latest
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: hello-app-service
spec:
  selector:
    app: hello-app
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 80
    nodePort: 30020
    name: hello-app-port
  type: NodePort
</code></pre>
<p>The pods and the service are created for the same:</p>
<pre><code>NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/hello-app-deployment-6bfdc9c668-smsgq 1/1 Running 0 83m 10.32.0.3 k8s-worker-1 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/hello-app-service NodePort 10.106.91.145 <none> 8000:30020/TCP 83m app=hello-app
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 10h <none>
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.apps/hello-app-deployment 1/1 1 1 83m hello-app yeasy/simple-web:latest app=hello-app
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
replicaset.apps/hello-app-deployment-6bfdc9c668 1 1 1 83m hello-app yeasy/simple-web:latest app=hello-app,pod-template-hash=6bfdc9c668
</code></pre>
<p>I am able to access the application from the host where it is deployed:</p>
<pre class="lang-bash prettyprint-override"><code>kubeuser@kube-worker-1:~$ curl http://kube-worker-1:30020
Hello!
</code></pre>
<p>But when I access from master node or other worker nodes it doesn't connect.</p>
<pre class="lang-bash prettyprint-override"><code>kubeuser@k8s-master:~$ curl http://k8s-master:30020
curl: (7) Failed to connect to k8s-master port 30020: Connection refused
kubeuser@k8s-master:~$ curl http://localhost:30020
curl: (7) Failed to connect to localhost port 30020: Connection refused
kubeuser@k8s-master:~$ curl http://k8s-worker-2:30020
Failed to connect to k8s-worker-2 port 30020: No route to host
kubeuser@k8s-worker-2:~$ curl http://localhost:30020
Failed to connect to localhost port 30020: No route to host
</code></pre>
<p>I created the cluster with the pod CIDR as below:</p>
<pre class="lang-bash prettyprint-override"><code>kubeadm init --pod-network-cidr=192.168.0.0/16
</code></pre>
<p>The following is the iptables-save result:</p>
<pre><code>*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:OUTPUT ACCEPT [30:1891]
:POSTROUTING ACCEPT [30:1891]
:DOCKER - [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-NODEPORTS - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-SEP-3DU66DE6VORVEQVD - [0:0]
:KUBE-SEP-6UWAUPYDDOV5SU5B - [0:0]
:KUBE-SEP-S4MK5EVI7CLHCCS6 - [0:0]
:KUBE-SEP-SWLOBIBPXYBP7G2Z - [0:0]
:KUBE-SEP-SZZ7MOWKTWUFXIJT - [0:0]
:KUBE-SEP-UJJNLSZU6HL4F5UO - [0:0]
:KUBE-SEP-ZCHNBYOGFZRFKYMA - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-SVC-ERIFXISQEP7F7OF4 - [0:0]
:KUBE-SVC-JD5MR3NA4I4DYORP - [0:0]
:KUBE-SVC-NPX46M4PTMTKRN6Y - [0:0]
:KUBE-SVC-TCOU7JCQXEZGVUNU - [0:0]
:OUTPUT_direct - [0:0]
:POSTROUTING_ZONES - [0:0]
:POSTROUTING_ZONES_SOURCE - [0:0]
:POSTROUTING_direct - [0:0]
:POST_public - [0:0]
:POST_public_allow - [0:0]
:POST_public_deny - [0:0]
:POST_public_log - [0:0]
:PREROUTING_ZONES - [0:0]
:PREROUTING_ZONES_SOURCE - [0:0]
:PREROUTING_direct - [0:0]
:PRE_public - [0:0]
:PRE_public_allow - [0:0]
:PRE_public_deny - [0:0]
:PRE_public_log - [0:0]
:WEAVE - [0:0]
:WEAVE-CANARY - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A PREROUTING -j PREROUTING_direct
-A PREROUTING -j PREROUTING_ZONES_SOURCE
-A PREROUTING -j PREROUTING_ZONES
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j OUTPUT_direct
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -j POSTROUTING_direct
-A POSTROUTING -j POSTROUTING_ZONES_SOURCE
-A POSTROUTING -j POSTROUTING_ZONES
-A POSTROUTING -j WEAVE
-A DOCKER -i docker0 -j RETURN
-A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-POSTROUTING -m mark ! --mark 0x4000/0x4000 -j RETURN
-A KUBE-POSTROUTING -j MARK --set-xmark 0x4000/0x0
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -j MASQUERADE
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
-A KUBE-SEP-3DU66DE6VORVEQVD -s 10.32.0.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-3DU66DE6VORVEQVD -p udp -m udp -j DNAT --to-destination 10.32.0.3:53
-A KUBE-SEP-6UWAUPYDDOV5SU5B -s 10.111.1.158/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-6UWAUPYDDOV5SU5B -p tcp -m tcp -j DNAT --to-destination 10.111.1.158:6443
-A KUBE-SEP-S4MK5EVI7CLHCCS6 -s 10.32.0.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-S4MK5EVI7CLHCCS6 -p tcp -m tcp -j DNAT --to-destination 10.32.0.3:53
-A KUBE-SEP-SWLOBIBPXYBP7G2Z -s 10.32.0.2/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-SWLOBIBPXYBP7G2Z -p tcp -m tcp -j DNAT --to-destination 10.32.0.2:9153
-A KUBE-SEP-SZZ7MOWKTWUFXIJT -s 10.32.0.2/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-SZZ7MOWKTWUFXIJT -p udp -m udp -j DNAT --to-destination 10.32.0.2:53
-A KUBE-SEP-UJJNLSZU6HL4F5UO -s 10.32.0.2/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-UJJNLSZU6HL4F5UO -p tcp -m tcp -j DNAT --to-destination 10.32.0.2:53
-A KUBE-SEP-ZCHNBYOGFZRFKYMA -s 10.32.0.3/32 -j KUBE-MARK-MASQ
-A KUBE-SEP-ZCHNBYOGFZRFKYMA -p tcp -m tcp -j DNAT --to-destination 10.32.0.3:9153
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:dns-tcp cluster IP" -m tcp --dport 53 -j KUBE-SVC-ERIFXISQEP7F7OF4
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p tcp -m comment --comment "kube-system/kube-dns:metrics cluster IP" -m tcp --dport 9153 -j KUBE-SVC-JD5MR3NA4I4DYORP
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.1/32 -p tcp -m comment --comment "default/kubernetes:https cluster IP" -m tcp --dport 443 -j KUBE-SVC-NPX46M4PTMTKRN6Y
-A KUBE-SERVICES ! -s 192.168.0.0/16 -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.96.0.10/32 -p udp -m comment --comment "kube-system/kube-dns:dns cluster IP" -m udp --dport 53 -j KUBE-SVC-TCOU7JCQXEZGVUNU
-A KUBE-SERVICES -m comment --comment "kubernetes service nodeports; NOTE: this must be the last rule in this chain" -m addrtype --dst-type LOCAL -j KUBE-NODEPORTS
-A KUBE-SVC-ERIFXISQEP7F7OF4 -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-UJJNLSZU6HL4F5UO
-A KUBE-SVC-ERIFXISQEP7F7OF4 -j KUBE-SEP-S4MK5EVI7CLHCCS6
-A KUBE-SVC-JD5MR3NA4I4DYORP -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SWLOBIBPXYBP7G2Z
-A KUBE-SVC-JD5MR3NA4I4DYORP -j KUBE-SEP-ZCHNBYOGFZRFKYMA
-A KUBE-SVC-NPX46M4PTMTKRN6Y -j KUBE-SEP-6UWAUPYDDOV5SU5B
-A KUBE-SVC-TCOU7JCQXEZGVUNU -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-SZZ7MOWKTWUFXIJT
-A KUBE-SVC-TCOU7JCQXEZGVUNU -j KUBE-SEP-3DU66DE6VORVEQVD
-A POSTROUTING_ZONES -g POST_public
-A POST_public -j POST_public_log
-A POST_public -j POST_public_deny
-A POST_public -j POST_public_allow
-A PREROUTING_ZONES -g PRE_public
-A PRE_public -j PRE_public_log
-A PRE_public -j PRE_public_deny
-A PRE_public -j PRE_public_allow
-A WEAVE -m set --match-set weaver-no-masq-local dst -m comment --comment "Prevent SNAT to locally running containers" -j RETURN
-A WEAVE -s 10.32.0.0/12 -d 224.0.0.0/4 -j RETURN
-A WEAVE ! -s 10.32.0.0/12 -d 10.32.0.0/12 -j MASQUERADE
-A WEAVE -s 10.32.0.0/12 ! -d 10.32.0.0/12 -j MASQUERADE
COMMIT
# Completed on Sun Aug 16 17:11:47 2020
# Generated by iptables-save v1.6.0 on Sun Aug 16 17:11:47 2020
*security
:INPUT ACCEPT [1417084:253669465]
:FORWARD ACCEPT [4:488]
:OUTPUT ACCEPT [1414939:285083560]
:FORWARD_direct - [0:0]
:INPUT_direct - [0:0]
:OUTPUT_direct - [0:0]
-A INPUT -j INPUT_direct
-A FORWARD -j FORWARD_direct
-A OUTPUT -j OUTPUT_direct
COMMIT
# Completed on Sun Aug 16 17:11:47 2020
# Generated by iptables-save v1.6.0 on Sun Aug 16 17:11:47 2020
*raw
:PREROUTING ACCEPT [1417204:253747905]
:OUTPUT ACCEPT [1414959:285085300]
:OUTPUT_direct - [0:0]
:PREROUTING_direct - [0:0]
-A PREROUTING -j PREROUTING_direct
-A OUTPUT -j OUTPUT_direct
COMMIT
# Completed on Sun Aug 16 17:11:47 2020
# Generated by iptables-save v1.6.0 on Sun Aug 16 17:11:47 2020
*mangle
:PREROUTING ACCEPT [1401943:246825511]
:INPUT ACCEPT [1401934:246824763]
:FORWARD ACCEPT [4:488]
:OUTPUT ACCEPT [1399691:277923964]
:POSTROUTING ACCEPT [1399681:277923072]
:FORWARD_direct - [0:0]
:INPUT_direct - [0:0]
:OUTPUT_direct - [0:0]
:POSTROUTING_direct - [0:0]
:PREROUTING_ZONES - [0:0]
:PREROUTING_ZONES_SOURCE - [0:0]
:PREROUTING_direct - [0:0]
:PRE_public - [0:0]
:PRE_public_allow - [0:0]
:PRE_public_deny - [0:0]
:PRE_public_log - [0:0]
:WEAVE-CANARY - [0:0]
-A PREROUTING -j PREROUTING_direct
-A PREROUTING -j PREROUTING_ZONES_SOURCE
-A PREROUTING -j PREROUTING_ZONES
-A INPUT -j INPUT_direct
-A FORWARD -j FORWARD_direct
-A OUTPUT -j OUTPUT_direct
-A POSTROUTING -j POSTROUTING_direct
-A PREROUTING_ZONES -g PRE_public
-A PRE_public -j PRE_public_log
-A PRE_public -j PRE_public_deny
-A PRE_public -j PRE_public_allow
COMMIT
# Completed on Sun Aug 16 17:11:47 2020
# Generated by iptables-save v1.6.0 on Sun Aug 16 17:11:47 2020
*filter
:INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [2897:591977]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
:FORWARD_IN_ZONES - [0:0]
:FORWARD_IN_ZONES_SOURCE - [0:0]
:FORWARD_OUT_ZONES - [0:0]
:FORWARD_OUT_ZONES_SOURCE - [0:0]
:FORWARD_direct - [0:0]
:FWDI_public - [0:0]
:FWDI_public_allow - [0:0]
:FWDI_public_deny - [0:0]
:FWDI_public_log - [0:0]
:FWDO_public - [0:0]
:FWDO_public_allow - [0:0]
:FWDO_public_deny - [0:0]
:FWDO_public_log - [0:0]
:INPUT_ZONES - [0:0]
:INPUT_ZONES_SOURCE - [0:0]
:INPUT_direct - [0:0]
:IN_public - [0:0]
:IN_public_allow - [0:0]
:IN_public_deny - [0:0]
:IN_public_log - [0:0]
:KUBE-EXTERNAL-SERVICES - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-FORWARD - [0:0]
:KUBE-SERVICES - [0:0]
:OUTPUT_direct - [0:0]
:WEAVE-CANARY - [0:0]
:WEAVE-NPC - [0:0]
:WEAVE-NPC-DEFAULT - [0:0]
:WEAVE-NPC-EGRESS - [0:0]
:WEAVE-NPC-EGRESS-ACCEPT - [0:0]
:WEAVE-NPC-EGRESS-CUSTOM - [0:0]
:WEAVE-NPC-EGRESS-DEFAULT - [0:0]
:WEAVE-NPC-INGRESS - [0:0]
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A INPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes externally-visible service portals" -j KUBE-EXTERNAL-SERVICES
-A INPUT -j KUBE-FIREWALL
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -j INPUT_direct
-A INPUT -j INPUT_ZONES_SOURCE
-A INPUT -j INPUT_ZONES
-A INPUT -p icmp -j ACCEPT
-A INPUT -m conntrack --ctstate INVALID -j DROP
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A INPUT -d 127.0.0.1/32 -p tcp -m tcp --dport 6784 -m addrtype ! --src-type LOCAL -m conntrack ! --ctstate RELATED,ESTABLISHED -m comment --comment "Block non-local access to Weave Net control port" -j DROP
-A INPUT -i weave -j WEAVE-NPC-EGRESS
-A FORWARD -i weave -m comment --comment "NOTE: this must go before \'-j KUBE-FORWARD\'" -j WEAVE-NPC-EGRESS
-A FORWARD -o weave -m comment --comment "NOTE: this must go before \'-j KUBE-FORWARD\'" -j WEAVE-NPC
-A FORWARD -o weave -m state --state NEW -j NFLOG --nflog-group 86
-A FORWARD -o weave -j DROP
-A FORWARD -i weave ! -o weave -j ACCEPT
-A FORWARD -o weave -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A FORWARD -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -i lo -j ACCEPT
-A FORWARD -j FORWARD_direct
-A FORWARD -j FORWARD_IN_ZONES_SOURCE
-A FORWARD -j FORWARD_IN_ZONES
-A FORWARD -j FORWARD_OUT_ZONES_SOURCE
-A FORWARD -j FORWARD_OUT_ZONES
-A FORWARD -p icmp -j ACCEPT
-A FORWARD -m conntrack --ctstate INVALID -j DROP
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
-A OUTPUT -m conntrack --ctstate NEW -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A OUTPUT -j KUBE-FIREWALL
-A OUTPUT -j OUTPUT_direct
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
-A FORWARD_IN_ZONES -g FWDI_public
-A FORWARD_OUT_ZONES -g FWDO_public
-A FWDI_public -j FWDI_public_log
-A FWDI_public -j FWDI_public_deny
-A FWDI_public -j FWDI_public_allow
-A FWDO_public -j FWDO_public_log
-A FWDO_public -j FWDO_public_deny
-A FWDO_public -j FWDO_public_allow
-A INPUT_ZONES -g IN_public
-A IN_public -j IN_public_log
-A IN_public -j IN_public_deny
-A IN_public -j IN_public_allow
-A IN_public_allow -p tcp -m tcp --dport 22 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 8080 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 10251 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 6443 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 30000:32767 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 10255 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 10252 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 2379:2380 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 10250 -m conntrack --ctstate NEW -j ACCEPT
-A IN_public_allow -p tcp -m tcp --dport 6784 -m conntrack --ctstate NEW -j ACCEPT
-A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
-A KUBE-FIREWALL ! -s 127.0.0.0/8 -d 127.0.0.0/8 -m comment --comment "block incoming localnet connections" -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP
-A KUBE-FORWARD -m conntrack --ctstate INVALID -j DROP
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -s 192.168.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -d 192.168.0.0/16 -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC -d 224.0.0.0/4 -j ACCEPT
-A WEAVE-NPC -m physdev --physdev-out vethwe-bridge --physdev-is-bridged -j ACCEPT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-DEFAULT
-A WEAVE-NPC -m state --state NEW -j WEAVE-NPC-INGRESS
-A WEAVE-NPC-DEFAULT -m set --match-set weave-Rzff}h:=]JaaJl/G;(XJpGjZ[ dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-public" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-P.B|!ZhkAr5q=XZ?3}tMBA+0 dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-system" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-;rGqyMIl1HN^cfDki~Z$3]6!N dst -m comment --comment "DefaultAllow ingress isolation for namespace: default" -j ACCEPT
-A WEAVE-NPC-DEFAULT -m set --match-set weave-]B*(W?)t*z5O17G044[gUo#$l dst -m comment --comment "DefaultAllow ingress isolation for namespace: kube-node-lease" -j ACCEPT
-A WEAVE-NPC-EGRESS -m state --state RELATED,ESTABLISHED -j ACCEPT
-A WEAVE-NPC-EGRESS -m physdev --physdev-in vethwe-bridge --physdev-is-bridged -j RETURN
-A WEAVE-NPC-EGRESS -m addrtype --dst-type LOCAL -j RETURN
-A WEAVE-NPC-EGRESS -d 224.0.0.0/4 -j RETURN
-A WEAVE-NPC-EGRESS -m state --state NEW -j WEAVE-NPC-EGRESS-DEFAULT
-A WEAVE-NPC-EGRESS -m state --state NEW -m mark ! --mark 0x40000/0x40000 -j WEAVE-NPC-EGRESS-CUSTOM
-A WEAVE-NPC-EGRESS -m state --state NEW -m mark ! --mark 0x40000/0x40000 -j NFLOG --nflog-group 86
-A WEAVE-NPC-EGRESS-ACCEPT -j MARK --set-xmark 0x40000/0x40000
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-41s)5vQ^o/xWGz6a20N:~?#|E src -m comment --comment "DefaultAllow egress isolation for namespace: kube-public" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-41s)5vQ^o/xWGz6a20N:~?#|E src -m comment --comment "DefaultAllow egress isolation for namespace: kube-public" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src -m comment --comment "DefaultAllow egress isolation for namespace: kube-system" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-E1ney4o[ojNrLk.6rOHi;7MPE src -m comment --comment "DefaultAllow egress isolation for namespace: kube-system" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-s_+ChJId4Uy_$}G;WdH|~TK)I src -m comment --comment "DefaultAllow egress isolation for namespace: default" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-s_+ChJId4Uy_$}G;WdH|~TK)I src -m comment --comment "DefaultAllow egress isolation for namespace: default" -j RETURN
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-sui%__gZ}{kX~oZgI_Ttqp=Dp src -m comment --comment "DefaultAllow egress isolation for namespace: kube-node-lease" -j WEAVE-NPC-EGRESS-ACCEPT
-A WEAVE-NPC-EGRESS-DEFAULT -m set --match-set weave-sui%__gZ}{kX~oZgI_Ttqp=Dp src -m comment --comment "DefaultAllow egress isolation for namespace: kube-node-lease" -j RETURN
</code></pre>
<pre><code>weave status connections
-> 10.111.1.156:6783 failed IP allocation was seeded by different peers (received: [2a:21:42:e0:5d:5f(k8s-worker-1)], ours: [12:35:b2:39:cf:7d(k8s-master)]), retry: 2020-08-17 08:15:51.155197759 +0000 UTC m=+68737.225153235
</code></pre>
<p>weave status in weave-pod</p>
<pre><code> Version: 2.7.0 (failed to check latest version - see logs; next check at 2020/08/17 13:35:46)
Service: router
Protocol: weave 1..2
Name: 12:35:b2:39:cf:7d(k8s-master)
Encryption: disabled
PeerDiscovery: enabled
Targets: 1
Connections: 1 (1 failed)
Peers: 1
TrustedSubnets: none
Service: ipam
Status: ready
Range: 10.32.0.0/12
DefaultSubnet: 10.32.0.0/12
</code></pre>
<p>I tried the solutions in these links but they didn't work: <a href="https://stackoverflow.com/q/46667659/513494">solution1</a> and <a href="https://stackoverflow.com/q/53775084/513494">solution2</a></p>
<p>Please let me know what could be the possible reason for master to not serve on the published NodePort.</p>
| Prasad | <p>Finally, it worked. The problem was that the ports required by Weave were not open in the firewall, as mentioned in <a href="https://github.com/kubernetes/kops/issues/1311" rel="nofollow noreferrer">this issue</a>.</p>
<p>I also deleted the Weave deployment in Kubernetes, removed /var/lib/weave/weave-netdata.db, and deployed Weave again, and it worked.</p>
| Prasad |
<p>How to constraint a K8s Job to a maximum amount of memory? I've tried the spec below, similar to a pod, but it's not recognized as a valid descriptor:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: countdown
spec:
template:
metadata:
name: countdown
spec:
containers:
- name: counter
image: centos:7
command:
- "bin/bash"
- "-c"
- "for i in 9 8 7 6 5 4 3 2 1 ; do echo $i ; done"
resources:
limits:
memory: "1000Mi"
restartPolicy: Never
</code></pre>
| pditommaso | <p>I didn't see any error in your job yaml file</p>
<pre><code>$ kk apply -f job.yaml
job.batch/countdown created
$ kk get pod
NAME READY STATUS RESTARTS AGE
countdown-5dckl 0/1 Completed 0 95s
</code></pre>
| BMW |
<p>I have a micro service scaled out across several pods in a Google Cloud Kubernetes Engine. Being in a multi-cloud-shop, we have our logging/monitoring/telemetry in Azure Application Insights.
Our data should be kept inside Europe, so our GCP Kubernetes cluster is set up with </p>
<pre><code>Master zone: europe-west1-b
Node zones: europe-west1-b
</code></pre>
<p>When I create a node pool on this cluster, the nodes apparently has the zone europe-west1-b (as expected), seen from the Google Cloud Platform Console "Node details".</p>
<p>However, in Azure Application Insights, from the telemetry reported from the applications running in pods in this node pool, the client_City is reported as "Mountain View" and client_StateOrProvince is "California", and some cases "Ann Arbor" in "Michigan".</p>
<p>At first I waived this strange location as just some inter-cloud-issue (e.g. defaulting to something strange when not filling out the information as expected on the receiving end, or similar). </p>
<p>But now, Application Insights actually pointed out that there is a quite significant performance difference depending on if my pod is running in Michigan or in California, which lead me to belive that these fields are actually correct.</p>
<p>Is GCP fooling me? Am I looking at the wrong place? How can I make sure my GCP Kubernetes nodes are running in Europe?</p>
<p>This is essential for me to know, both from a GCPR perspective, and of course performance (latency) wise.</p>
| audunsol | <p>Azure Application Insights is fooling you, because the external IP was registered by Google in California, without considering that these addresses are used by data centers distributed all over the globe. I also have a GCE instance deployed to Frankfurt am Main, while the IP appears as if it were Mountain View. <a href="https://cloud.google.com/stackdriver/" rel="nofollow noreferrer">StackDriver</a> might report the actual locations (and not some vague GeoIP locations).</p>
| Martin Zeitler |
<p>Kubernetes already provides a way to manage configuration with <code>ConfigMap</code>.</p>
<p>However, I have a question/problem here.</p>
<p>If I have multiple applications with different needs deployed in Kubernetes, all these deployments might share and access some common config variables. Is it possible for ConfigMap to use a common config variable?</p>
| edwin | <p>There are two ways to do that. </p>
<ul>
<li><strong>Kustomize</strong> - customization of Kubernetes YAML configurations (developed under kubernetes-sigs, and has been integrated into the <code>kubectl</code> command line). But currently it isn't mature enough compared with a <code>helm</code> chart.</li>
</ul>
<p><a href="https://github.com/kubernetes-sigs/kustomize" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/kustomize</a></p>
<ul>
<li><strong>Helm chart</strong> - the Kubernetes package manager. Its <code>values.yaml</code> can define the values shared across several configuration files (in your case, ConfigMaps) via template variables; see the sketch below.</li>
</ul>
<p><a href="https://helm.sh/" rel="nofollow noreferrer">https://helm.sh/</a></p>
| BMW |
<p>I'm attempting to read the contents of a Kubernetes Secret using <code>kube-rs</code>. The secret contains a key named "apiKey".</p>
<p>I seem to be able to pull the secret from the kube-apiserver (debug logging shows the correct contents of the secret) but I can't get to the value itself as it's returned as a <code>ByteString</code>.</p>
<p>I can't find a way to convert the <code>ByteString</code> to a <code>String</code>.</p>
<p>Could anyone kindly shed some light on this? My code is below, including my failed attempts and the related error messages.</p>
<pre><code>use kube::{Client, api::{Api, ResourceExt, ListParams, PostParams}};
use k8s_openapi::api::core::v1::Secret;
use bytes::Bytes;
use std::collections::BTreeMap;
use k8s_openapi::ByteString;
async fn get_credentials_from_secret(secret_namespace: &String, secret_name: &String) -> Result<String, kube::Error> {
let client = Client::try_default().await?;
let secrets: Api<Secret> = Api::namespaced(client, secret_namespace);
let secret: Secret = secrets.get(secret_name).await?;
let secret_data: BTreeMap<String, ByteString> = secret.data.unwrap();
let api_key = &secret_data["apiKey"];
println!("{:?}", api_key); // This prints 'ByteString([97, 112, 105, 107, 101, 121])'
// let api_key_string: String = api_key.into(); // the trait `From<&ByteString>` is not implemented for `std::string::String`
// let api_key_slice = api_key.as_slice(); // method not found in `&ByteString
// let api_key_string: String = api_key.serialize(); // method not found in `&ByteString`
// let api_key_string = String::from(api_key); // the trait `From<&ByteString>` is not implemented for `std::string::String`
Ok("Secret data here ideally!".to_string())
}
</code></pre>
| Harry | <p>I faced the same issue. I saw that it had some serialization traits at <a href="https://docs.rs/k8s-openapi/0.13.1/src/k8s_openapi/lib.rs.html#482-506" rel="nofollow noreferrer">https://docs.rs/k8s-openapi/0.13.1/src/k8s_openapi/lib.rs.html#482-506</a>, so I resorted to using the <code>serde_json</code> crate which worked just fine.</p>
<pre><code>use k8s_openapi::ByteString;
...
let some_byte_str = ByteString("foobar".as_bytes().to_vec());
serde_json::to_string(&some_byte_str).unwrap();
</code></pre>
| digitalfoo |
<p>I'm currently running through the secure deployment guide for CockroachDB on Kubernetes and while it works exactly as expected, but I'm searching for ways to streamline/automate the deployment. I'm using <code>Configs</code> to deploy, and I would honestly just like to be able to automate the final step (after <code>kubectl create -f cockroachdb-statefulset.yaml</code>). I've been searching around for guides on streamlining deployments, but I haven't come up with much. Is there a way to complete the following after the config application:</p>
<pre><code>kubectl exec -it cockroachdb-0 \
-- /cockroach/cockroach init \
--certs-dir=/cockroach/cockroach-certs
</code></pre>
<p>Perhaps as part of an <code>initContainer</code> in the <code>cockroachdb-statefulset.yaml</code> config?</p>
<p>I'm also looking for a way to automate the creation of a db/user account, so any insight there would be greatly appreciated.</p>
<p>Thanks!</p>
| tparrott | <p>Take a look at <a href="https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/" rel="nofollow noreferrer">Kubernetes Jobs</a>:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: pi
spec:
template:
spec:
containers:
- name: pi
image: perl
command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
restartPolicy: Never
backoffLimit: 4
</code></pre>
<p>You can integrate this YAML into your deployment process, but I do think you need to write a small wrapper script to confirm the CockroachDB service is up and healthy first.</p>
<p>So the job's command could be:</p>
<pre><code>while true;
do
if `command to check health`; then
# run kubernetes exec
exit
else
sleep 5
fi
done
</code></pre>
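<p>As a rough sketch of that idea as a one-shot bootstrap Job (the image tag, host name and client-certificate secret name below are assumptions based on the usual CockroachDB StatefulSet/secure-deployment naming, so adjust them to your setup):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: cockroachdb-init
spec:
  backoffLimit: 10
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: init
        image: cockroachdb/cockroach:v19.2.2   # assumed tag, match your cluster version
        command:
        - /bin/bash
        - -c
        - |
          # retry until the first node answers, then initialize the cluster once
          for i in $(seq 1 30); do
            /cockroach/cockroach init \
              --certs-dir=/cockroach-certs \
              --host=cockroachdb-0.cockroachdb && exit 0
            sleep 5
          done
          exit 1
        volumeMounts:
        - name: client-certs
          mountPath: /cockroach-certs
      volumes:
      - name: client-certs
        secret:
          secretName: cockroachdb.client.root   # assumed secret holding ca.crt/client.root.crt/client.root.key
          defaultMode: 256                      # 0400, cockroach refuses world-readable keys
</code></pre>
<p>Note that <code>cockroach init</code> fails on an already-initialized cluster, so a Job like this is only meant to be applied once when the cluster is first created; the database and user creation could be added as further <code>cockroach sql</code> steps in the same script.</p>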
| BMW |
<p>I was reading about <a href="https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/" rel="nofollow noreferrer"><strong>Pod Priority and Preemption</strong></a>, I had a question in my mind. </p>
<p>Lets say A is higher priority pod and B is lower one. B is already running , A came along and now it eviction had to be happened. Note that B is of type JOB. I wanted to ask, If B is evicted. will it be rescheduled later?</p>
| Talha Irfan | <p>Answer is Yes.</p>
<blockquote>
<p>If a pending Pod has inter-pod affinity to one or more of the lower-priority Pods on the Node, the inter-Pod affinity rule cannot be satisfied in the absence of those lower-priority Pods. In this case, the scheduler does not preempt any Pods on the Node. Instead, it looks for another Node. The scheduler might find a suitable Node or it might not. There is no guarantee that the pending Pod can be scheduled</p>
</blockquote>
| BMW |
<p>I'm trying to configure SSL certificates in kubernetes with cert-manager, istio ingress and LetsEncrypt. I have installed istio with helm, cert-manager, created ClusterIssuer and then I'm trying to create a Certificate. The acme challenge can't be validated, i'm trying to do it with http01 and can't figure it out how to use istio ingress for this. Istio is deployed with following options:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>helm install --name istio install/kubernetes/helm/istio `
--namespace istio-system `
--set global.controlPlaneSecurityEnabled=true `
--set grafana.enabled=true`
--set tracing.enabled=true
--set kiali.enabled=true `
--set ingress.enabled=true</code></pre>
</div>
</div>
</p>
<p>Certificate configuration:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: example.com
namespace: istio-system
spec:
secretName: example.com
issuerRef:
name: letsencrypt-staging
kind: ClusterIssuer
commonName: 'example.com'
dnsNames:
- example.com
acme:
config:
- http01:
ingress: istio-ingress
domains:
- example.com</code></pre>
</div>
</div>
</p>
<p>When trying this way, for some reason, istio-ingress can't be found, but when trying to specify ingressClass: some-name, instead of ingress: istio-ingress, I get 404 because example.com/.well-known/acme-challenge/token can't be reached.
How can this be solved? Thank you!</p>
| Raducu Ilie Radu | <p>Istio ingress has been deprecated; you can use the Ingress Gateway with the DNS challenge instead.</p>
<p>Define a generic public ingress gateway:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: public-gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- "*"
tls:
httpsRedirect: true
- port:
number: 443
name: https
protocol: HTTPS
hosts:
- "*"
tls:
mode: SIMPLE
privateKey: /etc/istio/ingressgateway-certs/tls.key
serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
</code></pre>
<p>Create an issuer using one of the DNS providers supported by cert-manager. Here is the config for GCP CloudDNS:</p>
<pre><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: Issuer
metadata:
name: letsencrypt-prod
namespace: istio-system
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: [email protected]
privateKeySecretRef:
name: letsencrypt-prod
dns01:
providers:
- name: cloud-dns
clouddns:
serviceAccountSecretRef:
name: cert-manager-credentials
key: gcp-dns-admin.json
project: my-gcp-project
</code></pre>
<p>Create a wildcard cert with:</p>
<pre><code>apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
name: istio-gateway
namespace: istio-system
spec:
  secretName: istio-ingressgateway-certs
issuerRef:
name: letsencrypt-prod
commonName: "*.example.com"
acme:
config:
- dns01:
provider: cloud-dns
domains:
- "*.example.com"
- "example.com"
</code></pre>
<p>It takes of couple of minutes for cert-manager to issue the cert:</p>
<pre><code>kubectl -n istio-system describe certificate istio-gateway
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CertIssued 1m52s cert-manager Certificate issued successfully
</code></pre>
<p>You can find a step-by-step guide on setting up Istio ingress on GKE with Let's Encrypt here <a href="https://docs.flagger.app/install/flagger-install-on-google-cloud#cloud-dns-setup" rel="nofollow noreferrer">https://docs.flagger.app/install/flagger-install-on-google-cloud#cloud-dns-setup</a></p>
| Stefan P. |
<p>I'm fairly novice in GCP and would like to ask a question:</p>
<p>I have two private clusters in the same region with internal LB (all in one VPC), currently pods from both clusters are able to communicate with each other over HTTP.</p>
<p>As far as I understand from the documentation - internal LB is a regional product, therefore if the private clusters were located in different regions the above scenario wouldn't be possible.</p>
<p>What do I need to do in order to make pods of two private clusters which are located on different regions to be able to communicate with each other?</p>
<p>My guess is that I have to define external LB for both of those clusters and using firewall rules allow communication only cluster to cluster via external IP and block all communication from the outside world.</p>
| Medvednic | <p>Since these are different <a href="https://cloud.google.com/vpc/docs/vpc#ip-ranges" rel="nofollow noreferrer">IP ranges</a> (at least in auto mode), it may not help that it is a global VPC. If that is the case, you'd have to add a <a href="https://cloud.google.com/vpn/docs/how-to/creating-route-based-vpns" rel="nofollow noreferrer">VPN tunnel</a> in order to route these network segments. Also consider adding two tunnels: one for ingress and one for egress traffic.</p>
<p>An alternative to VPN tunnels might be <a href="https://cloud.google.com/vpc/docs/vpc-peering" rel="nofollow noreferrer">VPC Network Peering</a>, where the main difference is:</p>
<blockquote>
<p>Peered VPC networks remain administratively separate. Routes, firewalls, VPNs, and other traffic management tools are administered and applied separately in each of the VPC networks.</p>
</blockquote>
| Martin Zeitler |
<h2>Problem</h2>
<p>I am trying to implement a Horizontal Pod Autoscaler (HPA) on my AKS cluster. However, I'm unable to retrieve the GPU metrics (auto-generated by Azure) that my HPA requires to scale.</p>
<h2>Example</h2>
<p>As a reference, see <a href="https://learn.microsoft.com/en-us/azure/aks/tutorial-kubernetes-scale#autoscale-pods" rel="nofollow noreferrer">this example</a> where the HPA scales based on <code>targetCPUUtilizationPercentage: 50</code>. That is, the HPA will deploy more/less pods to achieve a target of an average CPU utilization across all pods. Ideally, I want to achieve the same with the GPU.</p>
<h2>Setup</h2>
<p>I have deployed an AKS cluster with Azure Monitor enabled and my node size set to <code>Standard_NC6_Promo</code> - Azure's VM option that comes equipped with Nvidia's Tesla K80 GPU. However, in order to utilize the GPU, you must first install the appropriate plugin into your cluster, as explained <a href="https://learn.microsoft.com/en-us/azure/aks/gpu-cluster" rel="nofollow noreferrer">here</a>. Once you install this plugin a number of GPU metrics are automatically collected by Azure and logged to a table named "InsightsMetrics" (<a href="https://learn.microsoft.com/en-us/azure/azure-monitor/insights/container-insights-gpu-monitoring" rel="nofollow noreferrer">see</a>). From what I can read, the metric <code>containerGpuDutyCycle</code> will be the most beneficial for monitoring GPU utilization.</p>
<h2>Current Situation</h2>
<p>I can successfully see the insight metrics gathered by installed plugin, where one of the metrics is <code>containerGpuDutyCycle</code>.</p>
<p><a href="https://i.stack.imgur.com/UONW7.png" rel="nofollow noreferrer">InsightsMetrics table inside of Logs tab of Kubernetes Service on Azure Portal</a></p>
<p>Now how to expose/provide this metric to my HPA?</p>
<h2>Possible Solutions</h2>
<p>What I've noticed is that if you navigate to the <strong>Metrics</strong> tab of your AKS cluster, you cannot retrieve these GPU metrics. I assume this is because these GPU "metrics" are technically logs and not "official" metrics. However, azure does support something called <a href="https://learn.microsoft.com/en-us/azure/azure-monitor/platform/app-insights-metrics" rel="nofollow noreferrer">log-based metrics</a>, where the results of log queries can be treated as an "official" metric, but nowhere do I see how to create my own custom log-based metric.</p>
<p>Furthermore, <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-metrics-apis" rel="nofollow noreferrer">Kubernetes supports custom and external metrics</a> through their Metrics API, where metrics can be retrieved from external sources (such as Azure's Application Insights). Azure has an implementation of the Metrics API called <a href="https://github.com/Azure/azure-k8s-metrics-adapter" rel="nofollow noreferrer">Azure Kubernetes Metrics Adapter</a>. Perhaps I need to expose the <code>containerGpuDutyCycle</code> metric as an external metric using this? If so, how do I reference/expose the metric as external/custom?</p>
<h2>Alternative Solutions</h2>
<p>My main concern is exposing the GPU metrics for my HPA. I'm using Azure's Kubernetes Metrics Adapter for now as I assumed it would better integrate into my AKS cluster (same eco-system). However, it's in alpha stage (not production ready). If anyone can solve my problem using an alternative metric adapter (e.g. <a href="https://prometheus.io/" rel="nofollow noreferrer">Prometheus</a>), that would still be very helpful.</p>
<p>Many thanks for any light you can shed on this issue.</p>
| KarFan-FS | <p>I managed to do this recently (just this week). I'll outline my solution and all the gotchas, in case that helps.</p>
<p>Starting with an AKS cluster, I installed the following components in order to harvest the GPU metrics:</p>
<ol>
<li>nvidia-device-plugin - to make GPU metrics collectable</li>
<li>dcgm-exporter - a daemonset to reveal GPU metrics on each node</li>
<li>kube-prometheus-stack - to harvest the GPU metrics and store them</li>
<li>prometheus-adapter - to make harvested, stored metrics available to the k8s metrics server</li>
</ol>
<p>The AKS cluster comes with a metrics server built in, so you don't need to worry about that. It is also possible to provision the cluster with the nvidia-device-plugin already applied, but currently not possible via terraform (<a href="https://stackoverflow.com/questions/66117018/is-it-possible-to-use-aks-custom-headers-with-the-azurerm-kubernetes-cluster-res">Is it possible to use aks custom headers with the azurerm_kubernetes_cluster resource?</a>), which is how I was deploying my cluster.</p>
<p>To install all this stuff I used a script much like the following:</p>
<pre><code>helm repo add nvdp https://nvidia.github.io/k8s-device-plugin
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo add gpu-helm-charts https://nvidia.github.io/gpu-monitoring-tools/helm-charts
helm repo update
echo "Installing the NVIDIA device plugin..."
helm install nvdp/nvidia-device-plugin \
--generate-name \
--set migStrategy=mixed \
--version=0.9.0
echo "Installing the Prometheus/Grafana stack..."
helm install prometheus-community/kube-prometheus-stack \
--create-namespace --namespace prometheus \
--generate-name \
--values ./kube-prometheus-stack.values
prometheus_service=$(kubectl get svc -nprometheus -lapp=kube-prometheus-stack-prometheus -ojsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}')
helm install prometheus-adapter prometheus-community/prometheus-adapter \
--namespace prometheus \
--set rbac.create=true,prometheus.url=http://${prometheus_service}.prometheus.svc.cluster.local,prometheus.port=9090
helm install gpu-helm-charts/dcgm-exporter \
--generate-name
</code></pre>
<p>Actually, I'm lying about the <code>dcgm-exporter</code>. I was experiencing a problem (my first "gotcha") where the <code>dcgm-exporter</code> was not responding to liveness requests in time, and was consistently entering a <code>CrashLoopBackoff</code> status (<a href="https://github.com/NVIDIA/gpu-monitoring-tools/issues/120" rel="nofollow noreferrer">https://github.com/NVIDIA/gpu-monitoring-tools/issues/120</a>). To get around this, I created my own <code>dcgm-exporter</code> k8s config (by taking details from here and modifying them slightly: <a href="https://github.com/NVIDIA/gpu-monitoring-tools" rel="nofollow noreferrer">https://github.com/NVIDIA/gpu-monitoring-tools</a>) and applied it.
In doing this I experienced my second "gotcha", which was that in the latest <code>dcgm-exporter</code> images they have removed some GPU metrics, such as <code>DCGM_FI_DEV_GPU_UTIL</code>, largely because these metrics are resource intensive to collect (see <a href="https://github.com/NVIDIA/gpu-monitoring-tools/issues/143" rel="nofollow noreferrer">https://github.com/NVIDIA/gpu-monitoring-tools/issues/143</a>). If you want to re-enable them make sure you run the <code>dcgm-exporter</code> with the arguments set as: <code>["-f", "/etc/dcgm-exporter/dcp-metrics-included.csv"]</code> OR you can create your own image and supply your own metrics list, which is what I did by using this Dockerfile:</p>
<pre><code>FROM nvcr.io/nvidia/k8s/dcgm-exporter:2.1.4-2.3.1-ubuntu18.04
RUN sed -i -e '/^# DCGM_FI_DEV_GPU_UTIL.*/s/^#\ //' /etc/dcgm-exporter/default-counters.csv
ENTRYPOINT ["/usr/local/dcgm/dcgm-exporter-entrypoint.sh"]
</code></pre>
<p>Another thing you can see from the above script is that I also used my own Prometheus helm chart values file. I followed the instructions from nvidia's site (<a href="https://docs.nvidia.com/datacenter/cloud-native/kubernetes/dcgme2e.html" rel="nofollow noreferrer">https://docs.nvidia.com/datacenter/cloud-native/kubernetes/dcgme2e.html</a>), but found my third "gotcha" in the <code>additionalScrapeConfig</code>.</p>
<p>What I learned was that, in the final deployment, the HPA has to be in the same namespace as the service it's scaling (identified by <code>targetRef</code>), otherwise it can't find it to scale it, as you probably already know.</p>
<p><strong>But just as importantly</strong> the <code>dcgm-metrics</code> <code>Service</code> <em>also has to be in the same namespace</em>, otherwise the HPA can't find the metrics it needs to scale by.
So, I changed the <code>additionalScrapeConfig</code> to target the relevant namespace. I'm sure there's a way to use the <code>additionalScrapeConfig.relabel_configs</code> section to enable you to keep <code>dcgm-exporter</code> in a different namespace and still have the HPA find the metrics, but I haven't had time to learn that voodoo yet.</p>
<p>Once I had all of that, I could check that the DCGM metrics were being made available to the kube metrics server:</p>
<pre><code>$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 | jq -r . | grep DCGM_FI_DEV_GPU_UTIL
</code></pre>
<p>In the resulting list you <em>really</em> want to see a <code>services</code> entry, like so:</p>
<pre><code>"name": "jobs.batch/DCGM_FI_DEV_GPU_UTIL",
"name": "namespaces/DCGM_FI_DEV_GPU_UTIL",
"name": "services/DCGM_FI_DEV_GPU_UTIL",
"name": "pods/DCGM_FI_DEV_GPU_UTIL",
</code></pre>
<p>If you don't it probably means that the dcgm-exporter deployment you used is missing the <code>ServiceAccount</code> component, and also the HPA still won't work.</p>
<p>Finally, I wrote my HPA something like this:</p>
<pre><code>apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: my-app-hpa
namespace: my-namespace
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: my-app
minReplicas: X
maxReplicas: Y
...
metrics:
- type: Object
object:
metricName: DCGM_FI_DEV_GPU_UTIL
targetValue: 80
target:
kind: Service
name: dcgm-exporter
</code></pre>
<p>and it all worked.</p>
<p>I hope this helps! I spent so long trying different methods shown by people on consultancy company blogs, medium posts etc before discovering that people who write these pieces have already made assumptions about your deployment which affect details that you really need to know about (eg: the namespacing issue).</p>
| ndtreviv |
<p>I'm trying to set up my mosquitto server inside a Kubernetes cluster and somehow I'm getting the following error and I can't figure out why.
Could someone help me?</p>
<p><strong>Error:</strong></p>
<pre><code>1551171948: mosquitto version 1.4.10 (build date Wed, 13 Feb 2019 00:45:38 +0000) starting
1551171948: Config loaded from /etc/mosquitto/mosquitto.conf.
1551171948: |-- *** auth-plug: startup
1551171948: |-- ** Configured order: http
1551171948: |-- with_tls=false
1551171948: |-- getuser_uri=/api/mosquitto/users
1551171948: |-- superuser_uri=/api/mosquitto/admins
1551171948: |-- aclcheck_uri=/api/mosquitto/permissions
1551171948: |-- getuser_params=(null)
1551171948: |-- superuser_params=(null)
1551171948: |-- aclcheck_paramsi=(null)
1551171948: Opening ipv4 listen socket on port 1883.
1551171948: Error: Cannot assign requested address
</code></pre>
<p><strong>Mosquitto.conf:</strong></p>
<pre><code>allow_duplicate_messages false
connection_messages true
log_dest stdout stderr
log_timestamp true
log_type all
persistence false
listener 1883 mosquitto
allow_anonymous true
# Public
# listener 8883 0.0.0.0
listener 9001 0.0.0.0
protocol websockets
allow_anonymous false
auth_plugin /usr/lib/mosquitto-auth-plugin/auth-plugin.so
auth_opt_backends http
auth_opt_http_ip 127.0.0.1
auth_opt_http_getuser_uri /api/mosquitto/users
auth_opt_http_superuser_uri /api/mosquitto/admins
auth_opt_http_aclcheck_uri /api/mosquitto/permissions
auth_opt_acl_cacheseconds 1
auth_opt_auth_cacheseconds 0
</code></pre>
<p><strong>Kubernetes.yaml:</strong></p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: mosquitto
spec:
replicas: 1
template:
metadata:
labels:
app: mosquitto
spec:
imagePullSecrets:
- name: abb-login
containers:
- name: mosquitto
image: ****mosquitto:develop
imagePullPolicy: Always
ports:
- containerPort: 9001
protocol: TCP
- containerPort: 1883
protocol: TCP
- containerPort: 8883
protocol: TCP
resources: {}
---
apiVersion: v1
kind: Service
metadata:
name: mosquitto
spec:
ports:
- name: "9001"
port: 9001
targetPort: 9001
protocol: TCP
- name: "1883"
port: 1883
targetPort: 1883
protocol: TCP
- name: "8883"
port: 8883
targetPort: 8883
protocol: TCP
selector:
app: mosquitto
</code></pre>
| raven | <p>The problem is with the listener on port 1883; this can be determined because the log hasn't got to the 9001 listener yet.</p>
<p>The problem is most likely because mosquitto can not resolve the IP address of the hostname <code>mosquitto</code>. When passing a hostname the name must resolve to a valid IP address. The same problem has been discussed in <a href="https://stackoverflow.com/questions/54863408/facing-error-while-using-tls-with-mosquitto/54865869#54865869">this</a> recent answer. It could also be that <code>mosquitto</code> is resolving to an address that is not bound to any of the interfaces on the actual machine (e.g. if Address Translation is being used).</p>
<p>Also for the 9001 listener rather than passing <code>0.0.0.0</code> you can just not include a bind address and the default is to listen on all interfaces.</p>
| hardillb |
<p>I am trying to create a custom template for a coder.com installation on a self hosted kubernetes cluster.</p>
<p>When I try to crate a new workspace from the template, I get the following error.</p>
<pre><code>Error: namespaces is forbidden: User "system:serviceaccount:coder:coder" cannot create resource "namespaces" in API group "" at the cluster scope
on main.tf line 42, in resource "kubernetes_namespace" "workspace":
42: resource "kubernetes_namespace" "workspace" {
</code></pre>
<p>I have tried to apply the following ClusterRole and ClusterRoleBinding, but I get the same error.</p>
<pre><code># a cluster role that allows creating namespaces
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: coder-create-namespace
rules:
- apiGroups:
- ""
resources: ["*"]
verbs: ["get","watch","create","edit","delete","view"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: coder-create-namespace
namespace:
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: coder-create-namespace
subjects:
- kind: User
name: coder
namespace: coder
---
</code></pre>
<p>This is my coder template terraform file.</p>
<pre><code>terraform {
required_providers {
coder = {
source = "coder/coder"
version = "~> 0.11.0"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = "~> 2.22"
}
}
}
provider "coder" {
}
variable "use_kubeconfig" {
type = bool
description = <<-EOF
Use host kubeconfig? (true/false)
Set this to false if the Coder host is itself running as a Pod on the same
Kubernetes cluster as you are deploying workspaces to.
Set this to true if the Coder host is running outside the Kubernetes cluster
for workspaces. A valid "~/.kube/config" must be present on the Coder host.
EOF
default = false
}
# Info about the current workspace
data "coder_workspace" "me" {}
# Used for all resources created by this template
locals {
name = "coder-ws-${lower(data.coder_workspace.me.owner)}-${lower(data.coder_workspace.me.name)}"
labels = {
"app.kubernetes.io/managed-by" = "coder"
}
}
resource "kubernetes_namespace" "workspace" {
metadata {
name = local.name
labels = local.labels
}
}
</code></pre>
| PrestonDocks | <p>This was the solution to my problem.</p>
<pre><code># a cluster role that allows creating namespaces
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: coder-create-namespace
rules:
- apiGroups: [""]
resources:
- namespaces
verbs:
- create
- get
- list
- watch
- delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: coder-create-namespace
namespace:
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: coder-create-namespace
subjects:
- kind: ServiceAccount
name: coder
namespace: coder
---
</code></pre>
| PrestonDocks |
<p>using a standard istio deployment in a kubernetes cluster I am trying to add an initContainer to my pod deployment, which does additional database setup.</p>
<p>Using the cluster IP of the database doesn't work either. But I can connect to the database from my computer using port-forwarding.</p>
<p>This container is fairly simple:</p>
<pre><code> spec:
initContainers:
- name: create-database
image: tmaier/postgresql-client
args:
- sh
- -c
- |
psql "postgresql://$DB_USER:$DB_PASSWORD@db-host:5432" -c "CREATE DATABASE fusionauth ENCODING 'UTF-8' LC_CTYPE 'en_US.UTF-8' LC_COLLATE 'en_US.UTF-8' TEMPLATE template0"
psql "postgresql://$DB_USER:$DB_PASSWORD@db-host:5432" -c "CREATE ROLE user WITH LOGIN PASSWORD 'password';"
psql "postgresql://$DB_USER:$DB_PASSWORD@db-host:5432" -c "GRANT ALL PRIVILEGES ON DATABASE fusionauth TO user; ALTER DATABASE fusionauth OWNER TO user;"
</code></pre>
<p>This kubernetes initContainer according to what I can see runs before the "istio-init" container. Is that the reason why it cannot resolve the db-host:5432 to the ip of the pod running the postgres service?</p>
<p>The error message in the init-container is:</p>
<pre><code>psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
</code></pre>
<p>The same command from fully initialized pod works just fine.</p>
| Janos Veres | <p>You can't access services inside the mesh without the Envoy sidecar; your init container runs alone with no sidecars. In order to reach the DB service from an init container you need to expose the DB with a ClusterIP service that has a different name from the Istio Virtual Service of that DB.</p>
<p>You could create a service named <code>db-direct</code> like:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: db-direct
labels:
app: db
spec:
type: ClusterIP
selector:
app: db
ports:
- name: db
port: 5432
protocol: TCP
targetPort: 5432
</code></pre>
<p>And in your init container use <code>db-direct:5432</code>.</p>
| Stefan P. |
<p>So I currently have a self-managed certificate, but I want to switch to a google-managed certificate. The google docs for it say to keep the old certificate active while the new one is provisioned. When I try to create a google-managed certificate for the same ingress IP, I get the following error: <code>Invalid value for field 'resource.IPAddress': 'xx.xxx.xx.xx'. Specified IP address is in-use and would result in a conflict.</code></p>
<p>How can I keep the old certificate active, like it tells me to, if it won't let me start provisioning a certificate for the same ingress?</p>
| Peter R | <p>This can happen if 2 load balancers are sharing the same IP address (<a href="https://github.com/GoogleCloudPlatform/k8s-multicluster-ingress/blob/master/troubleshooting.md" rel="nofollow noreferrer">source</a>). Most likely you would have to detach that IP, or add another IP and then swap once the certificate has been provisioned. It's difficult to tell from the error message alone, without knowing which command had been issued.</p>
| Martin Zeitler |
<p>I see <a href="https://github.com/dgkanatsios/CKAD-exercises/blob/master/d.configuration.md#create-and-display-a-configmap-from-a-file-giving-the-key-special" rel="nofollow noreferrer">here</a> a syntax like this:</p>
<pre><code>kubectl create cm configmap4 --from-file=special=config4.txt
</code></pre>
<p>I did not find a description of what the repetition of = and the <strong>special</strong> key mean here.
Kubernetes documentation <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-configmaps-from-files" rel="nofollow noreferrer">here</a> only shows a single use of <strong>=</strong> after <strong>--from-file</strong> when creating ConfigMaps with kubectl.</p>
| Farshid | <p>It appears from generating the YAML that this middle key (<strong>special</strong> in the question's example) becomes the data key, and the whole content of the file is stored nested under it as the value.</p>
<p>The generated ConfigMap looks like this:</p>
<pre><code>apiVersion: v1
data:
special: |
var3=val3
var4=val4
kind: ConfigMap
metadata:
creationTimestamp: "2019-06-01T08:20:15Z"
name: configmap4
namespace: default
resourceVersion: "123320"
selfLink: /api/v1/namespaces/default/configmaps/configmap4
uid: 1582b155-8446-11e9-87b7-0800277f619d
</code></pre>
| Farshid |
<p>We were deploying ActiveMQ in Azure Kubernetes Service (AKS), with the ActiveMQ data folder mounted on an Azure managed disk as a persistent volume claim. Below is the YAML used for deployment.
<strong>ActiveMQ Image used</strong>: rmohr/activemq
Kubernetes Version: v1.15.7</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: activemqcontainer
spec:
replicas: 1
selector:
matchLabels:
app: activemqcontainer
template:
metadata:
labels:
app: activemqcontainer
spec:
securityContext:
runAsUser: 1000
fsGroup: 2000
runAsNonRoot: false
containers:
- name: web
image: azureregistry.azurecr.io/rmohractivemq
imagePullPolicy: IfNotPresent
ports:
- containerPort: 61616
volumeMounts:
- mountPath: /opt/activemq/data
subPath: data
name: volume
- mountPath: /opt/apache-activemq-5.15.6/conf/activemq.xml
name: config-xml
subPath: activemq.xml
imagePullSecrets:
- name: secret
volumes:
- name: config-xml
configMap:
name: active-mq-xml
- name: volume
persistentVolumeClaim:
claimName: azure-managed-disk
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: azure-managed-disk
spec:
accessModes:
- ReadWriteOnce
storageClassName: managed-premium
resources:
requests:
storage: 100Gi
</code></pre>
<p>Getting below error.</p>
<pre><code>WARN | Failed startup of context o.e.j.w.WebAppContext@517566b{/admin,file:/opt/apache-activemq-5.15.6/webapps/admin/,null}
java.lang.IllegalStateException: Parent for temp dir not configured correctly: writeable=false
at org.eclipse.jetty.webapp.WebInfConfiguration.makeTempDirectory(WebInfConfiguration.java:336)[jetty-all-9.2.25.v20180606.jar:9.2.25.v20180606]
at org.eclipse.jetty.webapp.WebInfConfiguration.resolveTempDirectory(WebInfConfiguration.java:304)[jetty-all-9.2.25.v20180606.jar:9.2.25.v20180606]
at org.eclipse.jetty.webapp.WebInfConfiguration.preConfigure(WebInfConfiguration.java:69)[jetty-all-9.2.25.v20180606.jar:9.2.25.v20180606]
at org.eclipse.jetty.webapp.WebAppContext.preConfigure(WebAppContext.java:468)[jetty-all-9.2.25.v20180606.jar:9.2.25.v20180606]
at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:504)[jetty-all-9.2.25.v20180606.jar:9.2.25.v20180606]
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)[jetty-all-9.2.25.v20180606.jar:9.2.25.v20180606]
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)[jetty-all-9.2.25.v20180606.jar:9.2.25.v20180606]
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)[jetty-all-9.2.25.v20180606.jar:9.2.25.v20180606]
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)[jetty-all-9.2.25.v20180606.jar:9.2.2
</code></pre>
| Gowri Shankar | <p>It's a warning from the ActiveMQ web admin console. Jetty, which hosts the web console, is unable to create a temp directory.</p>
<pre><code>WARN | Failed startup of context o.e.j.w.WebAppContext@517566b{/admin,file:/opt/apache-activemq-5.15.6/webapps/admin/,null}
java.lang.IllegalStateException: Parent for temp dir not configured correctly: writeable=false
</code></pre>
<p>You can override the default temp directory by setting the environment variable ACTIVEMQ_TMP in the container spec, as below:</p>
<pre><code> env:
- name: ACTIVEMQ_TMP
      value: "/tmp"
</code></pre>
| Dhananjay |
<p>I am building a service which creates on demand node red instance on Kubernetes. This service needs to have custom authentication, and some other service specific data in a JSON file.</p>
<p>Every instance of node red will have a Persistent Volume associated with it, so one way I thought of doing this was to attach the PVC to a pod and copy the files into the PV, and then start the node red deployment over the modified PVC.</p>
<p>I use following script to accomplish this</p>
<pre><code>def paste_file_into_pod(self, src_path, dest_path):
dir_name= path.dirname(src_path)
bname = path.basename(src_path)
exec_command = ['/bin/sh', '-c', 'cd {src}; tar cf - {base}'.format(src=dir_name, base=bname)]
with tempfile.TemporaryFile() as tar_buffer:
resp = stream(self.k8_client.connect_get_namespaced_pod_exec, self.kube_methods.component_name, self.kube_methods.namespace,
command=exec_command,
stderr=True, stdin=True,
stdout=True, tty=False,
_preload_content=False)
print(resp)
while resp.is_open():
resp.update(timeout=1)
if resp.peek_stdout():
out = resp.read_stdout()
tar_buffer.write(out.encode('utf-8'))
if resp.peek_stderr():
print('STDERR: {0}'.format(resp.read_stderr()))
resp.close()
tar_buffer.flush()
tar_buffer.seek(0)
with tarfile.open(fileobj=tar_buffer, mode='r:') as tar:
subdir_and_files = [tarinfo for tarinfo in tar.getmembers()]
tar.extractall(path=dest_path, members=subdir_and_files)
</code></pre>
<p>This seems like a very messy way to do this. Can someone suggest a quick and easy way to start node red in Kubernetes with custom settings.js and some additional files for config?</p>
| ROHIT SHARMA | <p>The better approach is not to use a PV for flow storage, but to use a <a href="https://nodered.org/docs/api/storage/" rel="nofollow noreferrer">Storage Plugin</a> to save flows in a central database. There are several already in existence using DBs like MongoDB</p>
<p>You can extend the existing Node-RED container to include a modified <code>settings.js</code> in <code>/data</code> that includes the details for the storage and authentication plugins and uses environment variables to set the instance-specific values at start up.</p>
<p>Examples here: <a href="https://www.hardill.me.uk/wordpress/tag/multi-tenant/" rel="nofollow noreferrer">https://www.hardill.me.uk/wordpress/tag/multi-tenant/</a></p>
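<p>As a rough sketch of the environment-variable approach, each tenant's Deployment could then pass its own values to the container, and the custom <code>settings.js</code> would read them with <code>process.env</code>; all variable, image and secret names below are made up for illustration:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodered-tenant-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodered-tenant-a
  template:
    metadata:
      labels:
        app: nodered-tenant-a
    spec:
      containers:
      - name: node-red
        image: my-registry/custom-node-red:latest   # image with the modified settings.js baked in
        ports:
        - containerPort: 1880
        env:
        - name: STORAGE_MODULE_URL           # read by settings.js to configure the storage plugin
          value: mongodb://flows-db/nodered-tenant-a
        - name: ADMIN_AUTH_SECRET            # read by settings.js to configure adminAuth
          valueFrom:
            secretKeyRef:
              name: nodered-tenant-a-auth
              key: secret
</code></pre>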
| hardillb |
<p>I have deployed kube-state-metrics into the kube-system namespace, and prometheus-operator is running in the same cluster. I've written the ServiceMonitor file below for sending metrics to Prometheus, but it is not working. Please find the files below.</p>
<p>Servicemonitor.yaml</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: kube-state-metrics
labels:
app.kubernetes.io/name: kube-state-metrics
namespace: kube-system
spec:
selector:
matchLabels:
prometheus-scrape: "true"
endpoints:
- port: metrics
path: /metrics
targetPort: 8080
honorLabels: true
scheme: https
tlsConfig:
insecureSkipVerify: true
</code></pre>
<p>Prometheus-deploy.yaml</p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
annotations:
argocd.argoproj.io/sync-wave: "1"
name: prometheus
labels:
name: prometheus
spec:
serviceAccountName: prometheus
serviceMonitorSelector: {}
serviceMonitorNamespaceSelector:
matchLabels:
prometheus-scrape: "true"
podMonitorSelector: {}
podMonitorNamespaceSelector:
matchLabels:
prometheus-scrape: "true"
resources:
requests:
memory: 400Mi
enableAdminAPI: false
additionalScrapeConfigs:
name: additional-scrape-configs
key: prometheus-additional.yaml
</code></pre>
<p>Can any one please help me out regarding this issue.</p>
<p>Thanks.</p>
| Dasari Sai Kumar | <p>The ServiceMonitor's <code>selector.matchLabels</code> must match the labels on the kube-state-metrics <code>Service</code> object itself. Check whether your Service carries that label.</p>
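<p>For example, with the ServiceMonitor from the question (which selects <code>prometheus-scrape: "true"</code> and scrapes an endpoint port named <code>metrics</code>), the kube-state-metrics Service would need to look roughly like this sketch; the exact port name and namespace are assumptions taken from your ServiceMonitor:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: kube-state-metrics
  namespace: kube-system
  labels:
    app.kubernetes.io/name: kube-state-metrics
    prometheus-scrape: "true"    # must match the ServiceMonitor's selector.matchLabels
spec:
  selector:
    app.kubernetes.io/name: kube-state-metrics
  ports:
  - name: metrics                # must match the ServiceMonitor's endpoints[].port
    port: 8080
    targetPort: 8080
</code></pre>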
| Arda |
<p>When 1 of my 6 pods is connected to MQTT in AWS IoT Core, another pod will also try to connect with the same clientId from the env config of the Node server. This leads to a disconnection and a reconnection to the new pod. This happens continuously, and the topic <code>$aws/events/presence/connected/#</code> receives multiple messages per second. This makes the MQTT client unstable.</p>
<p><img src="https://i.stack.imgur.com/DYx3U.png" alt="connection code" /></p>
<p>I tried to lock the connection of the MQTT client on just 1 pod by storing the status of the client connection in the database.</p>
<p><a href="https://i.stack.imgur.com/oTJYL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oTJYL.png" alt="enter image description here" /></a></p>
<p>However, this leads to another problem: when I call an API to publish to a topic using the MQTT client, the API cannot know which pod has the MQTT client connected.</p>
| Vũ Thành An | <p>Can you not just leave the <code>clientId</code> value empty and let the client assign a random one?</p>
| hardillb |
<p>Our setup:</p>
<p>We are using kubernetes in GCP.
We have pods that write logs to a shared volume, with a sidecar container that sucks up our logs for our logging system.
We cannot just use stdout instead for this process.</p>
<p>Some of these pods are long lived and are filling up disk space because of no log rotation.</p>
<p>Question:
What is the easiest way to prevent the disk space from filling up here (without scheduling pod restarts)?</p>
<p>I have been attempting to install logrotate using: <code>RUN apt-get install -y logrotate</code> in our Dockerfile and placing a logrotate config file in <code>/etc/logrotate.d/dynamicproxy</code> but it doesnt seem to get run. <code>/var/lib/logrotate/status</code> never gets generated.</p>
<p>I feel like I am barking up the wrong tree or missing something integral to getting this working. Any help would be appreciated.</p>
| Tyler Zale | <p>We ended up writing our own daemonset to properly collect the logs from the nodes instead of the container level. We then stopped writing to shared volumes from the containers and logged to stdout only.</p>
<p>We used fluentd to ship the logs around.</p>
<p><a href="https://github.com/splunk/splunk-connect-for-kubernetes/tree/master/helm-chart/splunk-kubernetes-logging" rel="nofollow noreferrer">https://github.com/splunk/splunk-connect-for-kubernetes/tree/master/helm-chart/splunk-kubernetes-logging</a></p>
| Tyler Zale |
<p>I have successfully built Docker images and ran them in a Docker swarm. When I attempt to build an image and run it with Docker Desktop's Kubernetes cluster:</p>
<pre><code>docker build -t myimage -f myDockerFile .
</code></pre>
<p>(the above successfully creates an image in the docker local registry)</p>
<pre><code>kubectl run myapp --image=myimage:latest
</code></pre>
<p>(as far as I understand, this is the same as using the kubectl create deployment command)</p>
<p>The above command successfully creates a deployment, but when it makes a pod, the pod status always shows:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
myapp-<a random alphanumeric string> 0/1 ImagePullBackoff 0 <age>
</code></pre>
<p>I am not sure why it is having trouble pulling the image - does it maybe not know where the docker local images are?</p>
| JakeJ | <p>I just had the exact same problem. Boils down to the <code>imagePullPolicy</code>:</p>
<pre><code>PC:~$ kubectl explain deployment.spec.template.spec.containers.imagePullPolicy
KIND: Deployment
VERSION: extensions/v1beta1
FIELD: imagePullPolicy <string>
DESCRIPTION:
Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always
if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated.
More info:
https://kubernetes.io/docs/concepts/containers/images#updating-images
</code></pre>
<p>Specifically, the part that says: <em>Defaults to Always if :latest tag is specified</em>.</p>
<p>That means, you created a local image, but, because you use the <code>:latest</code> it will try to find it in whatever remote repository you configured (by default docker hub) rather than using your local. Simply change your command to:</p>
<pre><code>kubectl run myapp --image=myimage:latest --image-pull-policy Never
</code></pre>
<p>or</p>
<pre><code>kubectl run myapp --image=myimage:latest --image-pull-policy IfNotPresent
</code></pre>
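<p>If you create a Deployment from a manifest instead of <code>kubectl run</code>, the equivalent (sketched with placeholder names) is to set the field on the container:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myimage:latest
        imagePullPolicy: Never    # or IfNotPresent, so the locally built image is used
</code></pre>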
| Lucas |
<p>When i run</p>
<pre><code>oc import-image centos:7 --confirm true
</code></pre>
<p>I am getting</p>
<pre><code>The import completed with errors.
Name: centos
Namespace: pd-kube-ci
Created: Less than a second ago
Labels: <none>
Annotations: openshift.io/image.dockerRepositoryCheck=2018-12-27T21:00:26Z
Docker Pull Spec: docker-registry.default.svc:5000/pd-kube-ci/centos
Image Lookup: local=false
Unique Images: 0
Tags: 1
7
tagged from centos:7
! error: Import failed (InternalError): Internal error occurred: Get https://registry-1.docker.io/v2/: proxyconnect tcp: EOF
Less than a second ago
error: tag 7 failed: Internal error occurred: Get https://registry-1.docker.io/v2/: proxyconnect tcp: EOF
</code></pre>
<p>For the life of me, I cannot find the source of <code>proxyconnect tcp: EOF</code>. It's not found anywhere in the OpenShift/Kubernetes source. Google knows next to nothing about that.</p>
<p>I have also verified that i can <code>docker pull centos</code> from each node (including master and infra nodes). Its only when openshift tries to pull that image.</p>
<p>Any ideas?</p>
| Lucas | <p>Turns out it was a mis-configuration in our <code>openshift_https_proxy</code> ansible var. Specifically we had:</p>
<pre><code>openshift_https_proxy=https://proxy.mycompany.com:8443
</code></pre>
<p>And we should have had</p>
<pre><code>openshift_https_proxy=http://proxy.mycompany.com:8443
</code></pre>
<p>To fix this, we had to edit <code>/etc/origin/master/master.env</code> on the masters and <code>/etc/sysconfig/docker</code> on all nodes, then restart per the <a href="https://docs.okd.io/3.11/install_config/http_proxies.html" rel="nofollow noreferrer">Working with HTTP Proxies</a> documentation.</p>
| Lucas |
<p>I'm using Kubernetes 1.11 on Digital Ocean. When I try to use kubectl top node I get this error:</p>
<pre><code>Error from server (NotFound): the server could not find the requested resource (get services http:heapster:)
</code></pre>
<p>But as stated in the docs, Heapster is deprecated and no longer required from Kubernetes 1.10.</p>
| FakeAccount | <p>If you are running a newer version of Kubernetes and still receiving this error, there is probably a problem with your installation.</p>
<p>Please note that to install metrics server on kubernetes, you should first clone it by typing:</p>
<pre><code>git clone https://github.com/kodekloudhub/kubernetes-metrics-server.git
</code></pre>
<p>then you should install it, <strong>WITHOUT GOING INTO THE CREATED FOLDER AND WITHOUT MENTIONING A SPECIFIC YAML FILE</strong>, only via:</p>
<pre><code>kubectl create -f kubernetes-metrics-server/
</code></pre>
<p>In this way all services and components are installed correctly and you can run:</p>
<pre><code>kubectl top nodes
</code></pre>
<p>or </p>
<pre><code>kubectl top pods
</code></pre>
<p>and get the correct result.</p>
| Farshid |
<p>I have a dotnet application, which is not working as a non-root user even though I am exposing it on port 5000, greater than the 1024 requirement.</p>
<pre><code>WORKDIR /app
EXPOSE 5000
COPY app $local_artifact_path
RUN chown www-data:www-data /app /app/*
RUN chmod 777 /app
USER www-data
ENTRYPOINT dotnet $app_entry_point
</code></pre>
<p>The stacktrace is</p>
<pre><code>warn: Microsoft.AspNetCore.DataProtection.Repositories.EphemeralXmlRepository[50]
Using an in-memory repository. Keys will not be persisted to storage.
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[59]
Neither user profile nor HKLM registry available. Using an ephemeral key repository. Protected data will be unavailable when application exits.
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
No XML encryptor configured. Key {551dd8d6-67f6-4c6a-b5a4-9ea86b69593b} may be persisted to storage in unencrypted form.
crit: Microsoft.AspNetCore.Server.Kestrel[0]
Unable to start Kestrel.
System.Net.Sockets.SocketException (13): Permission denied
at System.Net.Sockets.Socket.UpdateStatusAfterSocketErrorAndThrowException(SocketError error, String callerName)
at System.Net.Sockets.Socket.DoBind(EndPoint endPointSnapshot, SocketAddress socketAddress)
at System.Net.Sockets.Socket.Bind(EndPoint localEP)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.Bind()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketTransportFactory.BindAsync(EndPoint endpoint, CancellationToken cancellationToken)
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.<>c__DisplayClass21_0`1.<<StartAsync>g__OnBind|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.BindEndpointAsync(ListenOptions endpoint, AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.ListenOptions.BindAsync(AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.AnyIPListenOptions.BindAsync(AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.AddressesStrategy.BindAsync(AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.BindAsync(IServerAddressesFeature addresses, KestrelServerOptions serverOptions, ILogger logger, Func`
2 createBinding)
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.StartAsync[TContext](IHttpApplication`1 application, CancellationToken cancellationToken)
Unhandled exception. System.Net.Sockets.SocketException (13): Permission denied
at System.Net.Sockets.Socket.UpdateStatusAfterSocketErrorAndThrowException(SocketError error, String callerName)
at System.Net.Sockets.Socket.DoBind(EndPoint endPointSnapshot, SocketAddress socketAddress)
at System.Net.Sockets.Socket.Bind(EndPoint localEP)
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketConnectionListener.Bind()
at Microsoft.AspNetCore.Server.Kestrel.Transport.Sockets.SocketTransportFactory.BindAsync(EndPoint endpoint, CancellationToken cancellationToken)
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.<>c__DisplayClass21_0`1.<<StartAsync>g__OnBind|0>d.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.BindEndpointAsync(ListenOptions endpoint, AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.ListenOptions.BindAsync(AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.AnyIPListenOptions.BindAsync(AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.AddressesStrategy.BindAsync(AddressBindContext context)
at Microsoft.AspNetCore.Server.Kestrel.Core.Internal.AddressBinder.BindAsync(IServerAddressesFeature addresses, KestrelServerOptions serverOptions, ILogger logger, Func`2 createBinding)
at Microsoft.AspNetCore.Server.Kestrel.Core.KestrelServer.StartAsync[TContext](IHttpApplication`1 application, CancellationToken cancellationToken)
at Microsoft.AspNetCore.Hosting.GenericWebHostService.StartAsync(CancellationToken cancellationToken)
at Microsoft.Extensions.Hosting.Internal.Host.StartAsync(CancellationToken cancellationToken)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost host, CancellationToken token)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.RunAsync(IHost host, CancellationToken token)
at Microsoft.Extensions.Hosting.HostingAbstractionsHostExtensions.Run(IHost host)
at SanUserManagementService.Program.Main(String[] args) in /home/jankins/workspace/Daniel/dotnet/SanUserManagementService/Program.cs:line 10
Aborted (core dumped)
</code></pre>
<p>Any help related to this will be appreciated!
Thanks!</p>
| Taseer Ahmed | <p>In my case the <code>ASPNETCORE_URLS</code> setting in environment variables or <code>appsettings.json</code> was set to <code>http://+:80</code>.</p>
<p>Changing it to <code>http://+:5000</code> worked. Make sure you change your Docker port bindings as well, or load balancer settings if using AWS.</p>
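<p>Since the app runs in Kubernetes, one way to set this (a sketch with placeholder names, not the only option) is an environment variable on the container so Kestrel binds to the unprivileged port:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: dotnet-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dotnet-app
  template:
    metadata:
      labels:
        app: dotnet-app
    spec:
      containers:
      - name: dotnet-app
        image: myregistry/dotnet-app:latest   # placeholder image
        ports:
        - containerPort: 5000
        env:
        - name: ASPNETCORE_URLS
          value: "http://+:5000"              # bind Kestrel to port 5000 instead of a privileged port
</code></pre>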
| Mathias Lykkegaard Lorenzen |
<p>I have a newbie question; I could not find the answer.</p>
<p>Let's assume that I have 2 pods, each behind a different service in Kubernetes. One pod must have HTTP_PROXY set.</p>
<p>When I make an internal HTTP request between services from proxy-pod to no-proxy-pod it won't work, because the proxy cannot reach internal IPs.</p>
<p>When I make internal HTTP requests between services, for example an HTTP GET from no-proxy-pod to proxy-pod, does the response from proxy-pod go through the proxy or directly?</p>
<p>Is there a way to decide whether it goes through the proxy or not?</p>
| xbubus | <p>You probably want to set the <code>NO_PROXY</code> environment variable to the list of hosts that should not use the proxy.</p>
<p>See this SuperUser question/answer for more details</p>
<p><a href="https://superuser.com/questions/944958/are-http-proxy-https-proxy-and-no-proxy-environment-variables-standard">https://superuser.com/questions/944958/are-http-proxy-https-proxy-and-no-proxy-environment-variables-standard</a></p>
| hardillb |
<p>Trying to figure out how to authenticate with the storage API from within a GKE cluster.</p>
<p>Code:</p>
<pre><code>Storage storage = StorageOptions.newBuilder()
.setCredentials(ServiceAccountCredentials.getApplicationDefault())
.setProjectId(gcpProjectId)
.build().getService();
</code></pre>
<p><code>getApplicationDefault()</code> is documented to use these means to authenticate with the API:</p>
<ol>
<li>Credentials file pointed to by the {@code GOOGLE_APPLICATION_CREDENTIALS} environment variable</li>
<li>Credentials provided by the Google Cloud SDK {@code gcloud auth application-default login} command</li>
<li>Google App Engine built-in credentials</li>
<li>Google Cloud Shell built-in credentials</li>
<li>Google Compute Engine built-in credentials</li>
</ol>
<p>The application is using the GCP workload identity feature, so the application (in-cluster) service account is annotated with:</p>
<pre><code>serviceAccount.annotations.iam.gke.io/gcp-service-account: [email protected]
</code></pre>
<p>Now the call to the storage account fails with the following error:</p>
<pre><code>{
"code" : 403,
"errors" : [ {
"domain" : "global",
"message" : "Primary: /namespaces/my-project.svc.id.goog with additional claims does not have storage.objects.create access to the Google Cloud Storage object.",
"reason" : "forbidden"
} ],
"message" : "Primary: /namespaces/my-project.svc.id.goog with additional claims does not have storage.objects.create access to the Google Cloud Storage object."
}
</code></pre>
<p>This makes me think that the workload identity is not working correctly. I am expecting to receive an error message for my annotated service account and not the default one.</p>
<p>Is there anything else I should have been doing?</p>
| Moritz Schmitz v. Hülst | <p>The answer, in part, aside from the annotation syntax, is that, just like me, you probably didn't look closely enough at this part in the documentation:</p>
<pre><code> gcloud iam service-accounts add-iam-policy-binding \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:PROJECT_ID.svc.id.goog[K8S_NAMESPACE/KSA_NAME]" \
GSA_NAME@PROJECT_ID.iam.gserviceaccount.com
</code></pre>
<p>Notice the <code>PROJECT_ID.svc.id.goog[K8S_NAMESPACE/KSA_NAME]</code> piece. It's something they give no examples on as far as syntax but it looks like this in my terraform.</p>
<pre><code>resource "google_project_iam_member" "app-binding-2" {
role = "roles/iam.workloadIdentityUser"
member = "serviceAccount:${local.ws_vars["project-id"]}.svc.id.goog[mynamespace/myk8ssaname]"
}
</code></pre>
<p>Weirdly, I didn't know you could bind an IAM policy to a k8s service account; even more weirdly, you can create this binding in terraform even if the namespace doesn't exist yet, much less the service account. So you can run this first, before any deployments.</p>
<p>I truly wish Google would provide better documentation and support, this took me several hours to figure out.</p>
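<p>For completeness, the Kubernetes side of this is just the annotated service account mentioned in the question; a minimal sketch using the same placeholder names could be:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: myk8ssaname
  namespace: mynamespace
  annotations:
    iam.gke.io/gcp-service-account: [email protected]
</code></pre>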
| Nathan McKaskle |
<p>I'm new to Istio and learning from the official website examples. The one I can't understand is <a href="https://istio.io/docs/tasks/security/authn-policy/#globally-enabling-istio-mutual-tls" rel="nofollow noreferrer">globally enabling Istio mutual TLS</a>.</p>
<p>I can run the example with the yaml code present on the web. After that, I changed the <code>DestinationRule</code>:</p>
<pre><code>kubectl apply -f - <<EOF
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
name: "default"
namespace: "foo"
spec:
host: "*.local"
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
EOF
</code></pre>
<p>The only part I changed is replacing the namespace of the example from <code>istio-system</code> to <code>foo</code>. Then I switch to <code>foo</code> namespace, and I run the following test command:</p>
<pre><code>$ for from in "foo" "bar"; do for to in "foo" "bar"; do kubectl exec $(kubectl get pod -l app=sleep -n ${from} -o jsonpath={.items..metadata.name}) -c sleep -n ${from} -- curl "http://httpbin.${to}:8000/ip" -s -o /dev/null -w "sleep.${from} to httpbin.${to}: %{http_code}\n"; done; done
</code></pre>
<p>and the result is below:</p>
<pre><code>sleep.foo to httpbin.foo: 503
sleep.foo to httpbin.bar: 200
sleep.bar to httpbin.foo: 503
sleep.bar to httpbin.bar: 503
</code></pre>
<p>what I expect is:</p>
<pre><code>sleep.foo to httpbin.foo: 200
sleep.foo to httpbin.bar: 503
sleep.bar to httpbin.foo: 503
sleep.bar to httpbin.bar: 503
</code></pre>
<p>Following the official example, I set a mesh-wide authentication policy that enables mutual TLS and then configured client-side mutual TLS in namespace <code>foo</code>. I think it should work in namespace <code>foo</code>, but it does not.</p>
<p>Questions:</p>
<ol>
<li>why the status of <code>sleep.foo to httpbin.foo: 503</code> is 503 instead of 200?</li>
<li>why the status of <code>sleep.foo to httpbin.bar: 200</code> is 200 instead of 503?</li>
</ol>
<p>Can anyone explain this? Thanks.</p>
| leo | <p>You should wait for 1-2 minutes before the policies will be fully enforced. </p>
| Vadim Eisenberg |
<p>I am using the latest Kube cookbook for deploying a Kubernetes cluster in my environment using Chef. Here is my recipe, based on the Kube cookbook available in the <a href="https://supermarket.chef.io/cookbooks/kube" rel="nofollow noreferrer">chef supermarket</a>:</p>
<pre><code># Etcd
etcd_service 'default' do
action %w(create start)
end
# Kubernetes cluster
kube_apiserver 'default' do
service_cluster_ip_range '10.0.0.1/24'
etcd_servers 'http://127.0.0.1:2379'
insecure_bind_address '0.0.0.0'
action %w(create start)
end
group 'docker' do
members %w(kubernetes)
end
kube_scheduler 'default' do
master '127.0.0.1:8080'
action %w(create start)
end
kube_controller_manager 'default' do
master '127.0.0.1:8080'
action %w(create start)
end
</code></pre>
<p>Here is my metadata.rb</p>
<pre><code>depends 'etcd', '>= 6.0.0'
depends 'kube', '>= 4.0.0'
depends 'docker', '>= 7.0.0'
</code></pre>
<p>But after running the recipe I get the following error:</p>
<pre><code> ================================================================================
virtualbox-iso: Recipe Compile Error in /var/chef/cache/cookbooks/k8_master/recipes/default.rb
virtualbox-iso: ================================================================================
virtualbox-iso:
virtualbox-iso: NoMethodError
virtualbox-iso: -------------
virtualbox-iso: undefined method `kube_apiserver' for cookbook: k8_master, recipe: default :Chef::Recipe
virtualbox-iso:
virtualbox-iso: Cookbook Trace: (most recent call first)
virtualbox-iso: ----------------------------------------
virtualbox-iso: /var/chef/cache/cookbooks/k8_master/recipes/default.rb:48:in `from_file'
virtualbox-iso:
virtualbox-iso: Relevant File Content:
virtualbox-iso: ----------------------
virtualbox-iso: /var/chef/cache/cookbooks/k8_master/recipes/default.rb:
virtualbox-iso:
virtualbox-iso: 46: # Kubernetes cluster
virtualbox-iso: 47:
virtualbox-iso: 48>> kube_apiserver 'default' do
virtualbox-iso: 49: service_cluster_ip_range '10.0.0.1/24'
virtualbox-iso: 50: etcd_servers 'http://127.0.0.1:2379'
virtualbox-iso: 51: insecure_bind_address '0.0.0.0'
virtualbox-iso: 52: action %w(create start)
virtualbox-iso: 53: end
virtualbox-iso:
virtualbox-iso: System Info:
virtualbox-iso: ------------
virtualbox-iso: chef_version=16.4.41
virtualbox-iso: platform=centos
virtualbox-iso: platform_version=7.8.2003
virtualbox-iso: ruby=ruby 2.7.1p83 (2020-03-31 revision a0c7c23c9c) [x86_64-linux]
virtualbox-iso: program_name=/bin/chef-client
virtualbox-iso: executable=/opt/chef/bin/chef-client
virtualbox-iso:
virtualbox-iso:
</code></pre>
<p>I followed exactly what is specified in the <a href="https://github.com/aespinosa/cookbook-kube" rel="nofollow noreferrer">Readme</a>.
Can someone tell me what's wrong here?</p>
| Hassnain Alvi | <p>You need to use the v5.0.0 version of that cookbook which has support for Chef Infra Client 16:</p>
<p><a href="https://github.com/aespinosa/cookbook-kube/commit/f95626f6ce00b9f8c9cf94fbcb87dfffb74d00c2" rel="nofollow noreferrer">https://github.com/aespinosa/cookbook-kube/commit/f95626f6ce00b9f8c9cf94fbcb87dfffb74d00c2</a></p>
| lamont |
<p>We have an Airflow (Celery executor) setup that can run tasks on our K8s cluster. The tasks that use KubernetesPodOperator can access K8s secrets <a href="https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/operators.html#how-to-use-cluster-configmaps-secrets-and-volumes-with-pod" rel="nofollow noreferrer">as described in the documentation</a>. The rest of the tasks run on Celery workers outside of the K8s cluster.</p>
<p>How can tasks using other operators (e.g., SqlSensor) access the same K8s secrets as the tasks using KubernetesPodOperator?</p>
| SergiyKolesnikov | <p>You can map the secrets as volumes or variables into your Worker Pods and they will be available for all tasks - either as a specific directory or as environment variables.</p>
<p>You just have to modify the Helm Chart (or whatever deployment you use) to use those.</p>
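<p>As an illustration only (where exactly this fragment goes depends on the chart or manifests you use), the worker pod spec could expose a secret both ways roughly like this; the secret name is a placeholder:</p>
<pre><code>containers:
  - name: worker
    envFrom:
      - secretRef:
          name: my-task-secret          # exposed as environment variables
    volumeMounts:
      - name: my-task-secret
        mountPath: /opt/airflow/secrets # exposed as files in a directory
        readOnly: true
volumes:
  - name: my-task-secret
    secret:
      secretName: my-task-secret
</code></pre>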
| Jarek Potiuk |
<p>I have a MiniKube that is running and I deploy Airflow via docker-compose this way:</p>
<pre><code>---
version: '3'
x-airflow-common:
&airflow-common
# In order to add custom dependencies or upgrade provider packages you can use your extended image.
# Comment the image line, place your Dockerfile in the directory where you placed the docker-compose.yaml
# and uncomment the "build" line below, Then run `docker-compose build` to build the images.
image: ${AIRFLOW_IMAGE_NAME:-apache/airflow:2.1.3}
# build: .
environment:
&airflow-common-env
AIRFLOW__CORE__EXECUTOR: KubernetesExecutor
AIRFLOW__CORE__SQL_ALCHEMY_CONN: postgresql+psycopg2://airflow:airflow@postgres/airflow
AIRFLOW__CORE__FERNET_KEY: ''
AIRFLOW__CORE__DAGS_ARE_PAUSED_AT_CREATION: 'true'
# AIRFLOW__CORE__LOAD_EXAMPLES: 'true'
AIRFLOW__API__AUTH_BACKEND: 'airflow.api.auth.backend.basic_auth'
_PIP_ADDITIONAL_REQUIREMENTS: ${_PIP_ADDITIONAL_REQUIREMENTS:-}
volumes:
- ~/.kube:/home/airflow/.kube
- ./dags/:/opt/airflow/dags
- ./logs:/opt/airflow/logs
- ./plugins:/opt/airflow/plugins
user: "${AIRFLOW_UID:-50000}:${AIRFLOW_GID:-0}"
depends_on:
redis:
condition: service_healthy
postgres:
condition: service_healthy
services:
postgres:
image: postgres:13
environment:
POSTGRES_USER: airflow
POSTGRES_PASSWORD: airflow
POSTGRES_DB: airflow
volumes:
- postgres-db-volume:/var/lib/postgresql/data
healthcheck:
test: ["CMD", "pg_isready", "-U", "airflow"]
interval: 5s
retries: 5
restart: always
redis:
image: redis:latest
ports:
- 6379:6379
healthcheck:
test: ["CMD", "redis-cli", "ping"]
interval: 5s
timeout: 30s
retries: 50
restart: always
airflow-webserver:
<<: *airflow-common
command: webserver
ports:
- 8080:8080
healthcheck:
test: ["CMD", "curl", "--fail", "http://localhost:8080/health"]
interval: 10s
timeout: 10s
retries: 5
restart: always
airflow-scheduler:
<<: *airflow-common
command: scheduler
healthcheck:
test: ["CMD-SHELL", 'airflow jobs check --job-type SchedulerJob --hostname "$${HOSTNAME}"']
interval: 10s
timeout: 10s
retries: 5
restart: always
airflow-init:
<<: *airflow-common
entrypoint: /bin/bash
command:
- -c
- |
function ver() {
printf "%04d%04d%04d%04d" $${1//./ }
}
airflow_version=$$(gosu airflow airflow version)
airflow_version_comparable=$$(ver $${airflow_version})
min_airflow_version=2.1.0
min_airlfow_version_comparable=$$(ver $${min_airflow_version})
if (( airflow_version_comparable < min_airlfow_version_comparable )); then
echo -e "\033[1;31mERROR!!!: Too old Airflow version $${airflow_version}!\e[0m"
echo "The minimum Airflow version supported: $${min_airflow_version}. Only use this or higher!"
exit 1
fi
if [[ -z "${AIRFLOW_UID}" ]]; then
echo -e "\033[1;31mERROR!!!: AIRFLOW_UID not set!\e[0m"
echo "Please follow these instructions to set AIRFLOW_UID and AIRFLOW_GID environment variables:
https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html#initializing-environment"
exit 1
fi
one_meg=1048576
mem_available=$$(($$(getconf _PHYS_PAGES) * $$(getconf PAGE_SIZE) / one_meg))
cpus_available=$$(grep -cE 'cpu[0-9]+' /proc/stat)
disk_available=$$(df / | tail -1 | awk '{print $$4}')
warning_resources="false"
if (( mem_available < 4000 )) ; then
echo -e "\033[1;33mWARNING!!!: Not enough memory available for Docker.\e[0m"
echo "At least 4GB of memory required. You have $$(numfmt --to iec $$((mem_available * one_meg)))"
warning_resources="true"
fi
if (( cpus_available < 2 )); then
echo -e "\033[1;33mWARNING!!!: Not enough CPUS available for Docker.\e[0m"
echo "At least 2 CPUs recommended. You have $${cpus_available}"
warning_resources="true"
fi
if (( disk_available < one_meg * 10 )); then
echo -e "\033[1;33mWARNING!!!: Not enough Disk space available for Docker.\e[0m"
echo "At least 10 GBs recommended. You have $$(numfmt --to iec $$((disk_available * 1024 )))"
warning_resources="true"
fi
if [[ $${warning_resources} == "true" ]]; then
echo
echo -e "\033[1;33mWARNING!!!: You have not enough resources to run Airflow (see above)!\e[0m"
echo "Please follow the instructions to increase amount of resources available:"
echo " https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html#before-you-begin"
fi
mkdir -p /sources/logs /sources/dags /sources/plugins
chown -R "${AIRFLOW_UID}:${AIRFLOW_GID}" /sources/{logs,dags,plugins}
exec /entrypoint airflow version
environment:
<<: *airflow-common-env
_AIRFLOW_DB_UPGRADE: 'true'
_AIRFLOW_WWW_USER_CREATE: 'true'
_AIRFLOW_WWW_USER_USERNAME: ${_AIRFLOW_WWW_USER_USERNAME:-airflow}
_AIRFLOW_WWW_USER_PASSWORD: ${_AIRFLOW_WWW_USER_PASSWORD:-airflow}
user: "0:${AIRFLOW_GID:-0}"
volumes:
- .:/sources
volumes:
postgres-db-volume:
</code></pre>
<p>But the connection between Airflow and Kubernetes seems to fail (removing the AIRFLOW__CORE__EXECUTOR varenv allows the creation):</p>
<pre><code>airflow-scheduler_1 | Traceback (most recent call last):
airflow-scheduler_1 | File "/home/airflow/.local/bin/airflow", line 8, in <module>
airflow-scheduler_1 | sys.exit(main())
airflow-scheduler_1 | File "/home/airflow/.local/lib/python3.8/site-packages/airflow/__main__.py", line 40, in main
airflow-scheduler_1 | args.func(args)
airflow-scheduler_1 | File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/cli_parser.py", line 48, in command
airflow-scheduler_1 | return func(*args, **kwargs)
airflow-scheduler_1 | File "/home/airflow/.local/lib/python3.8/site-packages/airflow/utils/cli.py", line 91, in wrapper
airflow-scheduler_1 | return f(*args, **kwargs)
airflow-scheduler_1 | File "/home/airflow/.local/lib/python3.8/site-packages/airflow/cli/commands/scheduler_command.py", line 70, in scheduler
airflow-scheduler_1 | job.run()
airflow-scheduler_1 | File "/home/airflow/.local/lib/python3.8/site-packages/airflow/jobs/base_job.py", line 245, in run
airflow-scheduler_1 | self._execute()
airflow-scheduler_1 | File "/home/airflow/.local/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 686, in _execute
airflow-scheduler_1 | self.executor.start()
airflow-scheduler_1 | File "/home/airflow/.local/lib/python3.8/site-packages/airflow/executors/kubernetes_executor.py", line 485, in start
airflow-scheduler_1 | self.kube_client = get_kube_client()
airflow-scheduler_1 | File "/home/airflow/.local/lib/python3.8/site-packages/airflow/kubernetes/kube_client.py", line 145, in get_kube_client
airflow-scheduler_1 | client_conf = _get_kube_config(in_cluster, cluster_context, config_file)
airflow-scheduler_1 | File "/home/airflow/.local/lib/python3.8/site-packages/airflow/kubernetes/kube_client.py", line 40, in _get_kube_config
airflow-scheduler_1 | config.load_incluster_config()
airflow-scheduler_1 | File "/home/airflow/.local/lib/python3.8/site-packages/kubernetes/config/incluster_config.py", line 93, in load_incluster_config
airflow-scheduler_1 | InClusterConfigLoader(token_filename=SERVICE_TOKEN_FILENAME,
airflow-scheduler_1 | File "/home/airflow/.local/lib/python3.8/site-packages/kubernetes/config/incluster_config.py", line 45, in load_and_set
airflow-scheduler_1 | self._load_config()
airflow-scheduler_1 | File "/home/airflow/.local/lib/python3.8/site-packages/kubernetes/config/incluster_config.py", line 51, in _load_config
airflow-scheduler_1 | raise ConfigException("Service host/port is not set.")
airflow-scheduler_1 | kubernetes.config.config_exception.ConfigException: Service host/port is not set.
</code></pre>
<p>My idea is that the kube config file is not correctly found by the Airflow scheduler. I mounted the volume <code>~/.kube:/home/airflow/.kube</code> but can't find a way to make it work.</p>
| val | <p>Using Docker Compose to run KubernetesExecutor seems like a bad idea.</p>
<p>Why would you want to do it?</p>
<p>It makes a lot more sense to use the official Helm Chart - it's easier to manage and configure, you can easily deploy it to your minikube and it will work out-of-the-box with KubernetesExecutor.</p>
<p><a href="https://airflow.apache.org/docs/helm-chart/stable/index.html" rel="nofollow noreferrer">https://airflow.apache.org/docs/helm-chart/stable/index.html</a></p>
| Jarek Potiuk |
<p>I've been running into what should be a simple issue with my airflow scheduler. Every couple of weeks, the scheduler becomes <code>Evicted</code>. When I run a describe on the pod, the issue is because <code>The node was low on resource: ephemeral-storage. Container scheduler was using 14386916Ki, which exceeds its request of 0.</code></p>
<p>The question is twofold. First, why is the scheduler utilizing ephemeral-storage? And second, is it possible to add ephemeral-storage when running on eks?</p>
<p>Thanks!</p>
| JaMo | <p>I believe ephemeral storage is not an Airflow issue but rather a matter of how your K8s cluster is configured.</p>
<p>Assuming we are talking about OpenShift's ephemeral storage:</p>
<p><a href="https://docs.openshift.com/container-platform/4.9/storage/understanding-ephemeral-storage.html" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/4.9/storage/understanding-ephemeral-storage.html</a></p>
<p>This can be configured in your cluster and it will make "/var/log" ephemeral.</p>
<p>I think the problem is that <code>/var/logs</code> gets full. Possibly some of the system logs (not from Airflow but from some other processes running in the same container). I think a solution would be to have a job that cleans that system log periodically.</p>
<p>For example, we have this script that cleans up Airflow logs:</p>
<p><a href="https://github.com/apache/airflow/blob/main/scripts/in_container/prod/clean-logs.sh" rel="nofollow noreferrer">https://github.com/apache/airflow/blob/main/scripts/in_container/prod/clean-logs.sh</a></p>
| Jarek Potiuk |
<p>Behind the enterprise proxy,</p>
<p>what is the proper setting for kubernetes (and docker)?</p>
<ol>
<li>When setting http_proxy, https_proxy, no_proxy:</li>
</ol>
<p><code>export http_proxy="http://1.2.3.4:8080"</code></p>
<p>or</p>
<p><code>export http_proxy=http://1.2.3.4:8080</code></p>
<p>or</p>
<p><code>export http_proxy=1.2.3.4:8080</code></p>
<ol start="2">
<li><p>Should I set capitalized environment variables like HTTP_PROXY?</p></li>
<li><p>When I set no_proxy,</p></li>
</ol>
<p><code>export no_proxy=10.0.0.1,10.0.0.2,10.0.0.3</code></p>
<p>(all the kubernetes master and nodes )</p>
<p>or</p>
<p><code>export no_proxy=10.0.0.*</code></p>
<ol start="4">
<li><p>Should I set the file below?</p>
<pre><code>$ vi /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://1.2.3.4:8080" "HTTPS_PROXY=http://1.2.3.4:8080" "NO_PROXY=127.0.0.1,localhost,10.0.0.1,10.0.0.2,10.0.0.3"
</code></pre></li>
</ol>
<p>In this file, applied same rule with above question?</p>
<ol start="5">
<li>any other considerations?</li>
</ol>
<p>Thanks in advance.</p>
| hokwang | <p>We always include the scheme in our environment variables.</p>
<p>/etc/profile.d/proxy.sh:</p>
<pre><code>#!/bin/bash
export http_proxy=http://<proxy>:3128
export https_proxy=$http_proxy
export no_proxy=169.254.169.254,localhost,127.0.0.1
export HTTP_PROXY=$http_proxy
export HTTPS_PROXY=$https_proxy
export NO_PROXY=$no_proxy
</code></pre>
<p>/etc/systemd/system/docker.service.d/proxy.conf:</p>
<pre><code>[Service]
Environment="HTTPS_PROXY=https://<proxy>:3128/" "HTTP_PROXY=http://<proxy>:3128/"
</code></pre>
| Valdis R |
<p>I have a mutual TLS enabled Istio mesh. My setup is as follows</p>
<p><a href="https://i.stack.imgur.com/Vcxa4.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Vcxa4.png" alt="enter image description here"></a></p>
<ol>
<li>A service running inside a pod (Service container + envoy)</li>
<li>An envoy gateway which stays in front of the above service. An Istio Gateway and Virtual Service attached to this. It routes <code>/info/</code> route to the above service.</li>
<li>Another Istio Gateway configured for ingress using the default istio ingress pod. This also has Gateway+Virtual Service combination. The virtual service directs <code>/info/</code> path to the service described in 2</li>
</ol>
<p>I'm attempting to access the service from the ingress gateway using a curl command such as:</p>
<pre><code>$ curl -X GET http://istio-ingressgateway.istio-system:80/info/ -H "Authorization: Bearer $token" -v
</code></pre>
<p>But I'm getting a 503 not found error as below:</p>
<pre><code>$ curl -X GET http://istio-ingressgateway.istio-system:80/info/ -H "Authorization: Bearer $token" -v
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying 10.105.138.94...
* Connected to istio-ingressgateway.istio-system (10.105.138.94) port 80 (#0)
> GET /info/ HTTP/1.1
> Host: istio-ingressgateway.istio-system
> User-Agent: curl/7.47.0
> Accept: */*
> Authorization: Bearer ...
>
< HTTP/1.1 503 Service Unavailable
< content-length: 57
< content-type: text/plain
< date: Sat, 12 Jan 2019 13:30:13 GMT
< server: envoy
<
* Connection #0 to host istio-ingressgateway.istio-system left intact
</code></pre>
<p>I checked the logs of <code>istio-ingressgateway</code> pod and the following line was logged there</p>
<pre><code>[2019-01-13T05:40:16.517Z] "GET /info/ HTTP/1.1" 503 UH 0 19 6 - "10.244.0.5" "curl/7.47.0" "da02fdce-8bb5-90fe-b422-5c74fe28759b" "istio-ingressgateway.istio-system" "-"
</code></pre>
<p>If I log into the istio ingress pod and send the same request with curl, I get a successful 200 OK.</p>
<pre><code># curl hr--gateway-service.default/info/ -H "Authorization: Bearer $token" -v
</code></pre>
<p>Also, I managed to get a successful response for the same curl command when the mesh was created in mTLS disabled mode. There are no conflicts shown in mTLS setup.</p>
<p>Here are the config details for my service mesh in case you need additional info.</p>
<p><strong>Pods</strong></p>
<pre><code>$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default hr--gateway-deployment-688986c87c-z9nkh 1/1 Running 0 37m
default hr--hr-deployment-596946948d-c89bn 2/2 Running 0 37m
default hr--sts-deployment-694d7cff97-gjwdk 1/1 Running 0 37m
ingress-nginx default-http-backend-6586bc58b6-8qss6 1/1 Running 0 42m
ingress-nginx nginx-ingress-controller-6bd7c597cb-t4rwq 1/1 Running 0 42m
istio-system grafana-85dbf49c94-lfpbr 1/1 Running 0 42m
istio-system istio-citadel-545f49c58b-dq5lq 1/1 Running 0 42m
istio-system istio-cleanup-secrets-bh5ws 0/1 Completed 0 42m
istio-system istio-egressgateway-7d59954f4-qcnxm 1/1 Running 0 42m
istio-system istio-galley-5b6449c48f-72vkb 1/1 Running 0 42m
istio-system istio-grafana-post-install-lwmsf 0/1 Completed 0 42m
istio-system istio-ingressgateway-8455c8c6f7-5khtk 1/1 Running 0 42m
istio-system istio-pilot-58ff4d6647-bct4b 2/2 Running 0 42m
istio-system istio-policy-59685fd869-h7v94 2/2 Running 0 42m
istio-system istio-security-post-install-cqj6k 0/1 Completed 0 42m
istio-system istio-sidecar-injector-75b9866679-qg88s 1/1 Running 0 42m
istio-system istio-statsd-prom-bridge-549d687fd9-bspj2 1/1 Running 0 42m
istio-system istio-telemetry-6ccf9ddb96-hxnwv 2/2 Running 0 42m
istio-system istio-tracing-7596597bd7-m5pk8 1/1 Running 0 42m
istio-system prometheus-6ffc56584f-4cm5v 1/1 Running 0 42m
istio-system servicegraph-5d64b457b4-jttl9 1/1 Running 0 42m
kube-system coredns-78fcdf6894-rxw57 1/1 Running 0 50m
kube-system coredns-78fcdf6894-s4bg2 1/1 Running 0 50m
kube-system etcd-ubuntu 1/1 Running 0 49m
kube-system kube-apiserver-ubuntu 1/1 Running 0 49m
kube-system kube-controller-manager-ubuntu 1/1 Running 0 49m
kube-system kube-flannel-ds-9nvf9 1/1 Running 0 49m
kube-system kube-proxy-r868m 1/1 Running 0 50m
kube-system kube-scheduler-ubuntu 1/1 Running 0 49m
</code></pre>
<p><strong>Services</strong></p>
<pre><code>$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default hr--gateway-service ClusterIP 10.100.238.144 <none> 80/TCP,443/TCP 39m
default hr--hr-service ClusterIP 10.96.193.43 <none> 80/TCP 39m
default hr--sts-service ClusterIP 10.99.54.137 <none> 8080/TCP,8081/TCP,8090/TCP 39m
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 52m
ingress-nginx default-http-backend ClusterIP 10.109.166.229 <none> 80/TCP 44m
ingress-nginx ingress-nginx NodePort 10.108.9.180 192.168.60.3 80:31001/TCP,443:32315/TCP 44m
istio-system grafana ClusterIP 10.102.141.231 <none> 3000/TCP 44m
istio-system istio-citadel ClusterIP 10.101.128.187 <none> 8060/TCP,9093/TCP 44m
istio-system istio-egressgateway ClusterIP 10.102.157.204 <none> 80/TCP,443/TCP 44m
istio-system istio-galley ClusterIP 10.96.31.251 <none> 443/TCP,9093/TCP 44m
istio-system istio-ingressgateway LoadBalancer 10.105.138.94 <pending> 80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:31219/TCP,8060:31482/TCP,853:30034/TCP,15030:31544/TCP,15031:32652/TCP 44m
istio-system istio-pilot ClusterIP 10.100.170.73 <none> 15010/TCP,15011/TCP,8080/TCP,9093/TCP 44m
istio-system istio-policy ClusterIP 10.104.77.184 <none> 9091/TCP,15004/TCP,9093/TCP 44m
istio-system istio-sidecar-injector ClusterIP 10.100.180.152 <none> 443/TCP 44m
istio-system istio-statsd-prom-bridge ClusterIP 10.107.39.50 <none> 9102/TCP,9125/UDP 44m
istio-system istio-telemetry ClusterIP 10.110.55.232 <none> 9091/TCP,15004/TCP,9093/TCP,42422/TCP 44m
istio-system jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP 44m
istio-system jaeger-collector ClusterIP 10.102.43.21 <none> 14267/TCP,14268/TCP 44m
istio-system jaeger-query ClusterIP 10.104.182.189 <none> 16686/TCP 44m
istio-system prometheus ClusterIP 10.100.0.70 <none> 9090/TCP 44m
istio-system servicegraph ClusterIP 10.97.65.37 <none> 8088/TCP 44m
istio-system tracing ClusterIP 10.109.87.118 <none> 80/TCP 44m
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 52m
</code></pre>
<p><strong>Gateway and virtual service described in point 2</strong></p>
<pre><code>$ kubectl describe gateways.networking.istio.io hr--gateway
Name: hr--gateway
Namespace: default
API Version: networking.istio.io/v1alpha3
Kind: Gateway
Metadata:
...
Spec:
Selector:
App: hr--gateway
Servers:
Hosts:
*
Port:
Name: http2
Number: 80
Protocol: HTTP2
Hosts:
*
Port:
Name: https
Number: 443
Protocol: HTTPS
Tls:
Mode: PASSTHROUGH
$ kubectl describe virtualservices.networking.istio.io hr--gateway
Name: hr--gateway
Namespace: default
Labels: app=hr--gateway
Annotations: <none>
API Version: networking.istio.io/v1alpha3
Kind: VirtualService
Metadata:
...
Spec:
Gateways:
hr--gateway
Hosts:
*
Http:
Match:
Uri:
Prefix: /info/
Rewrite:
Uri: /
Route:
Destination:
Host: hr--hr-service
</code></pre>
<p><strong>Gateway and virtual service described in point 3</strong></p>
<pre><code>$ kubectl describe gateways.networking.istio.io ingress-gateway
Name: ingress-gateway
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"networking.istio.io/v1alpha3","kind":"Gateway","metadata":{"annotations":{},"name":"ingress-gateway","namespace":"default"},"spec":{"sel...
API Version: networking.istio.io/v1alpha3
Kind: Gateway
Metadata:
...
Spec:
Selector:
Istio: ingressgateway
Servers:
Hosts:
*
Port:
Name: http2
Number: 80
Protocol: HTTP2
$ kubectl describe virtualservices.networking.istio.io hr--gateway-ingress-vs
Name: hr--gateway-ingress-vs
Namespace: default
Labels: app=hr--gateway
API Version: networking.istio.io/v1alpha3
Kind: VirtualService
Metadata:
Spec:
Gateways:
ingress-gateway
Hosts:
*
Http:
Match:
Uri:
Prefix: /info/
Route:
Destination:
Host: hr--gateway-service
Events: <none>
</code></pre>
| Pasan W. | <p>The problem is probably as follows: <em>istio-ingressgateway</em> initiates mTLS to <em>hr--gateway-service</em> on port 80, but <em>hr--gateway-service</em> expects plain HTTP connections.</p>
<p>There are multiple solutions:</p>
<ol>
<li>Define a DestinationRule to instruct clients to disable mTLS on calls to <em>hr--gateway-service</em></li>
</ol>
<pre><code> apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: hr--gateway-service-disable-mtls
spec:
host: hr--gateway-service.default.svc.cluster.local
trafficPolicy:
tls:
mode: DISABLE
</code></pre>
<ol start="2">
<li>Instruct <em>hr-gateway-service</em> to accept mTLS connections. For that, configure the <a href="https://istio.io/docs/reference/config/networking/v1alpha3/gateway/#Server-TLSOptions" rel="noreferrer">server TLS options</a> on port 80 to be <code>MUTUAL</code> and to use Istio certificates and the private key. Specify <code>serverCertificate</code>, <code>caCertificates</code> and <code>privateKey</code> to be <code>/etc/certs/cert-chain.pem</code>, <code>/etc/certs/root-cert.pem</code>, <code>/etc/certs/key.pem</code>, respectively.</li>
</ol>
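<p>For the second option, a rough sketch of the Gateway from the question with mutual TLS on port 80, using the Istio-provisioned certificate paths, could look like this (untested, adjust to your setup):</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hr--gateway
spec:
  selector:
    app: hr--gateway
  servers:
  - port:
      number: 80
      name: http2
      protocol: HTTP2
    hosts:
    - "*"
    tls:
      mode: MUTUAL
      serverCertificate: /etc/certs/cert-chain.pem
      privateKey: /etc/certs/key.pem
      caCertificates: /etc/certs/root-cert.pem
</code></pre>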
| Vadim Eisenberg |
<p>I'm trying to deploy Airflow on kubernetes (on Azure Kubernetes Service) with the celery Executor. However, once a task is done, I get the following error while trying to access its logs:</p>
<pre><code>*** Log file does not exist: /opt/airflow/logs/maintenance/clean_events/2021-08-23T14:46:18.953030+00:00/1.log
*** Fetching from: http://airflow-worker-0.airflow-worker.airflow.svc.cluster.local:8793/log/maintenance/clean_events/2021-08-23T14:46:18.953030+00:00/1.log
*** Failed to fetch log file from worker. 403 Client Error: FORBIDDEN for url: http://airflow-worker-0.airflow-worker.airflow.svc.cluster.local:8793/log/maintenance/clean_events/2021-08-23T14:46:18.953030+00:00/1.log
For more information check: https://httpstatuses.com/403
</code></pre>
<p>my charts.yaml is pretty simple</p>
<pre class="lang-yaml prettyprint-override"><code>---
airflow:
image:
repository: myrepo.azurecr.io/maintenance-scripts
tag: latest
pullPolicy: Always
pullSecret: "secret"
executor: CeleryExecutor
config:
AIRFLOW__CORE__LOAD_EXAMPLES: "True"
AIRFLOW__KUBERNETES__DELETE_WORKER_PODS: "False"
users:
- username: admin
password: password
role: Admin
email: [email protected]
firstName: admin
lastName: admin
rbac:
create: true
serviceAccount:
create: true
#postgresql:
# enabled: true
workers:
enabled: true
redis:
enabled: true
flower:
enabled: false
global:
postgresql: {
storageClass: managed
}
persistence:
fixPermissions: true
storageClassName: managed
</code></pre>
<p>I have not been able to fix this, and it seems to be the most basic conf you can use on airflow. Does anyone know where this could come from?</p>
<p>Thanks a lot</p>
| Papotitu | <p>You need to have the same webserver secret configured for both webserver and workers: <a href="https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#secret-key" rel="nofollow noreferrer">https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#secret-key</a></p>
<p>It's been recently fixed as a potential security vulnerability - now you need to know the secret key to be able to retrieve logs (it was unauthenticated before).</p>
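<p>With the values layout shown in the question, one hedged way to do that is to set the same key explicitly for every component via the Airflow config, e.g. (generate the value once and keep it secret; exactly how you pass it in depends on the chart you use):</p>
<pre><code>airflow:
  config:
    AIRFLOW__WEBSERVER__SECRET_KEY: "one-long-random-string"   # must be identical on webserver and workers
</code></pre>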
| Jarek Potiuk |
<p>My objective is to be able to deploy airflow on Kubernetes using a custom image (placed in ECR. The reason I want to use this custom Image is because I want to deploy another tool (<a href="https://github.com/airflow-helm/charts/blob/main/charts/airflow/values.yaml#L9" rel="nofollow noreferrer">dbt</a>) with airflow in the same container (also open for other suggestions there)</p>
<p><strong>What actually worked:</strong>
I have managed to use <a href="https://github.com/airflow-helm/charts" rel="nofollow noreferrer">this Helm chart</a> (which uses the following <a href="https://github.com/airflow-helm/charts/blob/main/charts/airflow/values.yaml#L9" rel="nofollow noreferrer">image</a> as default) to deploy Airflow.</p>
<p><strong>What I tried to do and did not work:</strong>
I wanted to now exchange the default <a href="https://github.com/airflow-helm/charts/blob/main/charts/airflow/values.yaml#L9" rel="nofollow noreferrer">image</a> with my custom image in ECR, so I created <code>values.yaml</code>, that contains:</p>
<pre><code>airflow:
image:
repository: 000000000.dkr.ecr.eu-central-1.amazonaws.com/foo/meow
tag: latest
</code></pre>
<p>And then ran:</p>
<pre><code>helm upgrade airflow-pod airflow-stable/airflow --version "7.14.0" --values values.yaml
</code></pre>
<p>Which I expected to override the default yaml and pull the image from ECR instead. I then ran <code>describe pod airflow-pod</code>, and found the following log of the error (snippet):</p>
<pre><code> Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 12m default-scheduler Successfully assigned 000000000.dkr.ecr.eu-central-1.amazonaws.com/foo/meow to ip-10-0-0-0eu-central-1.compute.internal
Normal Pulling 12m kubelet Pulling image "000000000.dkr.ecr.eu-central-1.amazonaws.com/foo/meow:latest"
Normal Pulled 11m kubelet Successfully pulled image "000000000.dkr.ecr.eu-central-1.amazonaws.com/foo/meow:latest"
Normal Created 9m39s (x5 over 11m) kubelet Created container airflow-web
Normal Pulled 9m39s (x4 over 11m) kubelet Container image "000000000.dkr.ecr.eu-central-1.amazonaws.com/foo/meow:latest" already present on machine
Warning Failed 9m38s (x5 over 11m) kubelet Error: failed to start container "airflow-web": Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "/usr/bin/dumb-init": stat /usr/bin/dumb-init: no such file or directory: unknown
</code></pre>
<p><strong>What I have tried to confirm/fix the issue</strong></p>
<p>First, I tried to see if it is an ECR issue. So I put the same <a href="https://github.com/airflow-helm/charts/blob/main/charts/airflow/values.yaml#L9" rel="nofollow noreferrer">original image</a> in ECR (instead of my own image with dbt), and found that the same error above persists.</p>
<p>Second, I dug around and found the following <a href="https://stackoverflow.com/questions/61955393/weird-error-in-kubernetes-starting-container-process-caused-exec-usr-bin">question</a> that makes me think I can't use that airflow helm <a href="https://github.com/airflow-helm/charts/tree/main/charts/airflow" rel="nofollow noreferrer">chart</a> from an ECR repo (a non-official one).</p>
<p><strong>One last approach I took as an alternative path:</strong>
I tried to use the <a href="https://github.com/apache/airflow/tree/master/chart" rel="nofollow noreferrer">chart on the Apache airflow repo</a> like:</p>
<pre><code>helm install airflow . --namespace airflow-deploy --set executor=CeleryExecutor --set workers.keda.enabled=true --set workers.persistence.enabled=false
</code></pre>
<p>But I got the error:</p>
<pre><code>Error: failed post-install: timed out waiting for the condition
</code></pre>
| alt-f4 | <p>The original image seems to have the "dumb-init" binary in it, so it should work. However, if you use "imagePullPolicy: IfNotPresent" then Kubernetes might cache the image, and even if you re-upload a new image to ECR it might not be pulled (though I believe for <code>latest</code> it should be, unless some custom configuration of Kubernetes is in place).</p>
<p>See <a href="https://kubernetes.io/docs/concepts/containers/images/#updating-images" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/containers/images/#updating-images</a></p>
<p>You can always run the image and check its content locally. The official Docker image of Airflow has support for the <code>bash</code> command:</p>
<pre><code>docker run -it apache/airflow:1.10.12-python3.6 bash
airflow@18278a339579:/opt/airflow$ /usr/bin/dumb-init --help
dumb-init v1.2.2
Usage: /usr/bin/dumb-init [option] command [[arg] ...]
dumb-init is a simple process supervisor that forwards signals to children.
It is designed to run as PID1 in minimal container environments.
Optional arguments:
-c, --single-child Run in single-child mode.
In this mode, signals are only proxied to the
direct child and not any of its descendants.
-r, --rewrite s:r Rewrite received signal s to new signal r before proxying.
To ignore (not proxy) a signal, rewrite it to 0.
This option can be specified multiple times.
-v, --verbose Print debugging information to stderr.
-h, --help Print this help message and exit.
-V, --version Print the current version and exit.
Full help is available online at https://github.com/Yelp/dumb-init
</code></pre>
| Jarek Potiuk |
<p>We are now on our journey to break our monolith (on-prem pkg (rpm/ova)) into services (dockers).</p>
<p>In the process we are evaluating envoy/istio as our communication and security layer; it looks great when running as a sidecar in k8s, or with each service on a separate machine.</p>
<p>As we are going to deliver several services within one machine and can't deliver them within k8s, I'm not sure whether we can use envoy. I didn't find any reference on using envoy in other ways; are there additional deployment methods I can use to enjoy it?</p>
| user1447703 | <p>You can <a href="https://istio.io/latest/docs/examples/virtual-machines/" rel="nofollow noreferrer">run part of your services on Kubernetes and part on VMs</a>.</p>
| Vadim Eisenberg |
<p>I'm installing jupyterhub on k8s using <a href="https://z2jh.jupyter.org/en/stable/jupyterhub/installation.html#install-jupyterhub" rel="nofollow noreferrer">helm</a>.</p>
<pre><code>helm upgrade --cleanup-on-fail --install jupyterhub jupyterhub-2.0.0/jupyterhub/ --namespace my-NS --create-namespace --version=2.0.0 --values my-values.yaml --timeout 30m --debug
</code></pre>
<p>It's failing with an error when creating the hook-image-awaiter pods.</p>
<p>Error from helm debug:</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>upgrade.go:142: [debug] preparing upgrade for jupyterhub
upgrade.go:150: [debug] performing update for jupyterhub
upgrade.go:322: [debug] creating upgraded release for jupyterhub
client.go:310: [debug] Starting delete for "hook-image-puller" DaemonSet
client.go:128: [debug] creating 1 resource(s)
client.go:310: [debug] Starting delete for "hook-image-awaiter" ServiceAccount
client.go:128: [debug] creating 1 resource(s)
client.go:310: [debug] Starting delete for "hook-image-awaiter" Job
client.go:128: [debug] creating 1 resource(s)
client.go:540: [debug] Watching for changes to Job hook-image-awaiter with timeout of 30m0s
client.go:568: [debug] Add/Modify event for hook-image-awaiter: ADDED
client.go:607: [debug] hook-image-awaiter: Jobs active: 1, jobs failed: 0, jobs succeeded: 0
client.go:568: [debug] Add/Modify event for hook-image-awaiter: MODIFIED
client.go:607: [debug] hook-image-awaiter: Jobs active: 1, jobs failed: 1, jobs succeeded: 0
client.go:568: [debug] Add/Modify event for hook-image-awaiter: MODIFIED
client.go:607: [debug] hook-image-awaiter: Jobs active: 1, jobs failed: 2, jobs succeeded: 0
client.go:568: [debug] Add/Modify event for hook-image-awaiter: MODIFIED
client.go:607: [debug] hook-image-awaiter: Jobs active: 1, jobs failed: 3, jobs succeeded: 0
client.go:568: [debug] Add/Modify event for hook-image-awaiter: MODIFIED
client.go:607: [debug] hook-image-awaiter: Jobs active: 1, jobs failed: 4, jobs succeeded: 0
client.go:568: [debug] Add/Modify event for hook-image-awaiter: MODIFIED
client.go:607: [debug] hook-image-awaiter: Jobs active: 1, jobs failed: 5, jobs succeeded: 0
client.go:568: [debug] Add/Modify event for hook-image-awaiter: MODIFIED
client.go:607: [debug] hook-image-awaiter: Jobs active: 1, jobs failed: 6, jobs succeeded: 0
client.go:568: [debug] Add/Modify event for hook-image-awaiter: MODIFIED
upgrade.go:434: [debug] warning: Upgrade "jupyterhub" failed: pre-upgrade hooks failed: job failed: BackoffLimitExceeded
Error: UPGRADE FAILED: pre-upgrade hooks failed: job failed: BackoffLimitExceeded
helm.go:84: [debug] pre-upgrade hooks failed: job failed: BackoffLimitExceeded
UPGRADE FAILED</code></pre>
<p>There are some answers on the web, but they didn't help resolve the issue. I have tried the following:</p>
<ul>
<li>Increase helm install time out - didn't work</li>
<li>Failed to pull image: I can see jupyterhub/k8s-image-awaiter:2.0.0 got pulled and I do not see any errors when pods are described</li>
</ul>
<p><a href="https://i.stack.imgur.com/Nnscp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Nnscp.png" alt="enter image description here" /></a></p>
<p>hook-image-puller:
<a href="https://i.stack.imgur.com/L22Oy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/L22Oy.png" alt="enter image description here" /></a>
hook-image-awaiter:
<a href="https://i.stack.imgur.com/iUOkM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iUOkM.png" alt="enter image description here" /></a></p>
<p>Event logs:
<a href="https://i.stack.imgur.com/MjOFc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MjOFc.png" alt="enter image description here" /></a></p>
<p>PS: I disabled rbac in values.yaml (create=false) and am installing jupyterhub in a new namespace, while there is already one running in another namespace.</p>
| veeresh patil | <p>Disable the pre-pull hook:</p>
<p><a href="https://z2jh.jupyter.org/en/latest/administrator/optimization.html#pulling-images-before-users-arrive" rel="nofollow noreferrer">https://z2jh.jupyter.org/en/latest/administrator/optimization.html#pulling-images-before-users-arrive</a></p>
<pre><code>prePuller:
  hook:
    enabled: false
</code></pre>
| Mike Barry |
<p>I currently have airflow running in a Kubernetes cluster in Azure using the helm chart for Apache airflow. I am able to use the API from the VM where I port forward the web server, using the endpoint: http://localhost:8080/api/v1/dags/test_trigger/dagRuns</p>
<p>Can anyone point me in the right direction for how I can interact with the API from other locations, or just expose the API endpoint in general to be able to be called from other locations?</p>
<p>Thanks,</p>
| adan11 | <p>There is a short chapter in Airflow Helm Chart's Production Guide:</p>
<p><a href="https://airflow.apache.org/docs/helm-chart/stable/production-guide.html#accessing-the-airflow-ui" rel="nofollow noreferrer">https://airflow.apache.org/docs/helm-chart/stable/production-guide.html#accessing-the-airflow-ui</a></p>
<p>It's about setting up Ingress or LoadBalancer essentially.</p>
<p>Accessing the API server is the same as accessing the webserver - they use the same port and run in the webserver, so the pointers there should guide you in what to do.</p>
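<p>Once the webserver is reachable (via Ingress, LoadBalancer or a port-forward), calling the API is plain HTTP; for example, assuming the basic-auth API backend is enabled and using placeholder host and credentials:</p>
<pre><code>curl -X POST "http://AIRFLOW_HOST/api/v1/dags/test_trigger/dagRuns" \
    -H "Content-Type: application/json" \
    --user "USERNAME:PASSWORD" \
    -d '{"conf": {}}'
</code></pre>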
| Jarek Potiuk |
<p>The syntax for adding a dependency to a helm 3 chart looks like this (inside of chart.yaml).<br>
How can you specify a release name if you need multiple instances of a dependency?</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v2
name: shared
description: Ingress Controller and Certificate Manager
type: application
version: 0.1.1
appVersion: 0.1.0
dependencies:
- name: cert-manager
version: ~0.13
repository: https://charts.jetstack.io
</code></pre>
<p>In the CLI it's just <code>helm upgrade -i RELEASE_NAME CHART_NAME -n NAMESPACE</code>
But inside of Chart.yaml the option to specify a release seems to be missing.</p>
<p>The next question I have is if there's a weird way to do it, how would you write the values for each instance in the values.yaml file?</p>
| Stephen | <p>After 5 more minutes of searching I found that there's an <code>alias</code> field that can be added, like so:</p>
<pre class="lang-yaml prettyprint-override"><code>dependencies:
- name: cert-manager
alias: first-one
version: ~0.13
repository: https://charts.jetstack.io
- name: cert-manager
alias: second-one
version: ~0.13
repository: https://charts.jetstack.io
</code></pre>
<p>And in the values.yaml file</p>
<pre class="lang-yaml prettyprint-override"><code>first-one:
# values go here
second-one:
# values go here
</code></pre>
<p>Reference <a href="https://helm.sh/docs/topics/charts/#the-chartyaml-file" rel="nofollow noreferrer">https://helm.sh/docs/topics/charts/#the-chartyaml-file</a></p>
<p>Using cert-manager is just an example, I can't think of a use-case that would need two instances of that particular chart. I'm hoping to use it for brigade projects</p>
| Stephen |
<p>I am working on Airflow 1.10.</p>
<p>I have a problem with running commands via KubernetesPodOperator, where the entire command is evaluated at DAG runtime.</p>
<p>I am generating the command at DAG runtime, as some of the command's arguments depend on parameters passed by the user.</p>
<p>As I read from the documentation,
KubernetesPodOperator expects a list of strings or a list of Jinja templates:</p>
<pre><code> :param arguments: arguments of the entrypoint. (templated)
The docker image's CMD is used if this is not provided.
</code></pre>
<p>I have a PythonOperator which generates the command and pushes it to XCom, and a KubernetesPodOperator
whose arguments are the command generated by the PythonOperator.</p>
<pre><code>from airflow.operators.python_operator import PythonOperator
from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator
def command_maker():
import random # random is to illustrate that we don't know arguments value before runtime
return f"my_command {random.randint(1, 10)} --option {random.randint(1, 4)}"
def create_tasks(dag):
first = PythonOperator(
task_id="generate_command",
python_callable=command_maker,
provide_context=True,
dag=dag,
)
second = KubernetesPodOperator(
namespace='some_namespace',
image='some_image',
name='execute_command',
dag=dag,
arguments=[f'{{ ti.xcom_pull(dag_id="{dag.dag_id}", task_ids="generate_command", key="return_value")}}']
)
second.set_upstream(first)
</code></pre>
<p>Unfortunately KubernetesPodOperator doesn't run this command correctly, as it tries to run something like this:</p>
<pre><code>[my_command 4 --option 2]
</code></pre>
<p>Is there a way to evaluate this list at KubernetesPodOperator runtime,
or am I forced to push all runtime arguments into separate XComs?
I would like to avoid such a solution, as it would require a lot of changes in my project.</p>
<pre><code> arguments=[
"my_command",
f'{{ ti.xcom_pull(dag_id="{dag.dag_id}", task_ids="generate_command", key="first_argument")}}',
"--option",
f'{{ ti.xcom_pull(dag_id="{dag.dag_id}", task_ids="generate_command", key="second_argument")}}',
]
</code></pre>
| domandinho | <p>The problem is that the Jinja template returns the rendered value as a string by default.</p>
<p>In recent Airflow, however (as of Airflow 2.1.0), you can render the templates as native Python objects:</p>
<p><a href="https://airflow.apache.org/docs/apache-airflow/stable/concepts/operators.html#rendering-fields-as-native-python-objects" rel="nofollow noreferrer">https://airflow.apache.org/docs/apache-airflow/stable/concepts/operators.html#rendering-fields-as-native-python-objects</a></p>
<p>By using a <code>render_template_as_native_obj=True</code> parameter when you create DAG.</p>
<p>Then you need to format your output in the way that python's <code>literal_eval</code> will be able to convert it to python object. In your case you have to make the output similar to:</p>
<p><code>[ 'my_command', '4', '--option', '2' ]</code></p>
<p>Note that this parameter will return native objects for all your templates, so if they return some values that <a href="https://jinja.palletsprojects.com/en/2.11.x/nativetypes/" rel="nofollow noreferrer">literal_eval understands</a>, they will also be converted to native types (and you might have some unintended side-effects).</p>
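<p>Putting it together, an untested sketch of what the DAG from the question could look like with native rendering (imports shown for Airflow 2.x with the cncf.kubernetes provider; the dag id is a placeholder):</p>
<pre><code>import random
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import KubernetesPodOperator


def command_maker():
    # Return a real Python list; with native rendering the templated field
    # receives it as a list instead of a flattened string.
    return ["my_command", str(random.randint(1, 10)), "--option", str(random.randint(1, 4))]


with DAG(
    dag_id="native_rendering_example",
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,
    render_template_as_native_obj=True,   # the important switch
) as dag:
    generate_command = PythonOperator(
        task_id="generate_command",
        python_callable=command_maker,
    )

    execute_command = KubernetesPodOperator(
        task_id="execute_command",
        namespace="some_namespace",
        image="some_image",
        name="execute_command",
        # Rendered as a native Python list thanks to render_template_as_native_obj=True.
        arguments="{{ ti.xcom_pull(task_ids='generate_command') }}",
    )

    generate_command >> execute_command
</code></pre>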
| Jarek Potiuk |
<p>I would like to deploy Airflow locally on Minikube and have a local folder mounted for DAGs handling.</p>
<p>Airflow is deployed like this:</p>
<pre><code>helm install $AIRFLOW_NAME apache-airflow/airflow \
--values values.yml \
--set logs.persistence.enabled=true \
--namespace $AIRFLOW_NAMESPACE \
--kubeconfig ~/.kube/config
</code></pre>
<p>The <code>values.yml</code> looks like this:</p>
<pre><code>executor: KubernetesExecutor
config:
core:
dags_folder: /dags
webserver:
extraVolumes:
- name: dags
hostPath:
path: /path/dags
extraVolumeMounts:
- name: dags
mountPath: /dags
</code></pre>
<p><code>kubectl describe pods airflow-webserver --kubeconfig ~/.kube/config --namespace airflow</code>:</p>
<pre><code>Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: airflow-airflow-config
Optional: false
logs:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: airflow-logs
ReadOnly: false
dags:
Type: HostPath (bare host directory volume)
Path: /path/dags/
HostPathType:
airflow-webserver-token-xtq9h:
Type: Secret (a volume populated by a Secret)
SecretName: airflow-webserver-*
Optional: false
QoS Class: BestEffort
</code></pre>
<p>The volume dags appears to be correctly mounted but remains empty.
What could cause this behaviour?</p>
<p>Edit:
<code>kubectl describe pods airflow-scheduler-0 --kubeconfig ~/.kube/config --namespace airflow</code></p>
<pre><code> Mounts:
/opt/airflow/airflow.cfg from config (ro,path="airflow.cfg")
/opt/airflow/dags from dags (rw)
/opt/airflow/logs from logs (rw)
/opt/airflow/pod_templates/pod_template_file.yaml from config (ro,path="pod_template_file.yaml")
/var/run/secrets/kubernetes.io/serviceaccount from airflow-scheduler-token-9zfpv (ro)
</code></pre>
<pre><code>Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: airflow-airflow-config
Optional: false
dags:
Type: HostPath (bare host directory volume)
Path: /path/dags
HostPathType:
logs:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: airflow-logs
ReadOnly: false
airflow-scheduler-token-9zfpv:
Type: Secret (a volume populated by a Secret)
SecretName: airflow-scheduler-token-9zfpv
Optional: false
</code></pre>
| val | <p>Assuming that you have some dags in /path/dags already, you should mount your dags folder to the scheduler, not to the webserver (if you are using Airflow 2). The scheduler is the one that parses dags; the webserver only displays them based on information stored in the DB, so it does not actually need DAGs (it used to need them in Airflow 1.10 without serialization).</p>
<p>Also, I guess you should use LocalExecutor, not KubernetesExecutor, if you want to execute dags from a local folder - then the <code>dags</code> mounted to the scheduler will be available to the processes which are spawned from the scheduler in the same container.</p>
<p>If you want to run the Kubernetes Executor and want to mount a host folder, I believe you will need to add it as a mount to your pod template file (you can generate such a pod template file using the airflow CLI).</p>
<p>See <a href="https://airflow.apache.org/docs/apache-airflow/stable/executor/kubernetes.html#pod-template-file" rel="nofollow noreferrer">https://airflow.apache.org/docs/apache-airflow/stable/executor/kubernetes.html#pod-template-file</a></p>
| Jarek Potiuk |
<p>I am currently using the KubernetesPodOperator to run a Pod on a Kubernetes cluster. I am getting the below error:</p>
<blockquote>
<p>kubernetes.client.rest.ApiException: (403) Reason: Forbidden</p>
<p>HTTP response headers: HTTPHeaderDict({'Audit-Id': '',
'Cache-Control': 'no-cache, private', 'Content-Type':
'application/json', 'X-Content-Type-Options': 'nosniff', 'Date': 'Mon,
30 Aug 2021 00:12:57 GMT', 'Content-Length': '309'})</p>
<p>HTTP response body:
{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"pods
is forbidden: User
"system:serviceaccount:airflow10:airflow-worker-serviceaccount"
cannot list resource "pods" in API group "" in the namespace
"default"","reason":"Forbidden","details":{"kind":"pods"},"code":403}</p>
</blockquote>
<p>I can resolve this by running the below commands:</p>
<blockquote>
<p>kubectl create clusterrole pod-creator --verb=create,get,list,watch
--resource=pods</p>
<p>kubectl create clusterrolebinding pod-creator-clusterrolebinding
--clusterrole=pod-creator --serviceaccount=airflow10:airflow-worker-serviceaccount</p>
</blockquote>
<p>But I want to be able to set up the service account with the correct permissions inside airflow automatically. What would be a good approach to do this without having to run the above commands?</p>
| adan11 | <p>You can't really. You need to assign and create the roles when you deploy airflow; otherwise you would have a huge security risk, because the deployed application would be able to grant itself more permissions.</p>
<p>This can be done in multiple ways "automatically" if your intention was to somewhat automate the deployment. For example, if your airflow deployment is done via a Helm chart, the chart can add and configure the right resources to create the appropriate role bindings. You can see how our Official Helm chart does it:</p>
<ul>
<li><a href="https://github.com/apache/airflow/blob/main/chart/templates/rbac/pod-launcher-role.yaml" rel="nofollow noreferrer">https://github.com/apache/airflow/blob/main/chart/templates/rbac/pod-launcher-role.yaml</a></li>
<li><a href="https://github.com/apache/airflow/blob/main/chart/templates/rbac/pod-launcher-rolebinding.yaml" rel="nofollow noreferrer">https://github.com/apache/airflow/blob/main/chart/templates/rbac/pod-launcher-rolebinding.yaml</a></li>
</ul>
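<p>As an illustration of what such a chart ends up provisioning, a hand-written sketch of a Role/RoleBinding pair for the namespace and service account from the question might look like this (the exact verbs you need may differ):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: airflow-pod-launcher
  namespace: airflow10
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create", "get", "list", "watch", "delete"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: airflow-pod-launcher
  namespace: airflow10
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: airflow-pod-launcher
subjects:
  - kind: ServiceAccount
    name: airflow-worker-serviceaccount
    namespace: airflow10
</code></pre>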
| Jarek Potiuk |
<p>I'm quite new in docker and VPNs so I don't know what should be the best way to achieve this.</p>
<p>Context:
I use airflow in Google Cloud to schedule some tasks. These tasks are dockerized, so each task is the execution of a docker container with a script (using KubernetesPodOperator).</p>
<p>For this use case I need the connection to be made through the VPN and then run the script.
To connect to the VPN (locally) I use a user, a password and a CA certificate.</p>
<p>I've seen some ways to do it, but all of them use another docker image as VPN or with a bridge using host vpn.</p>
<p>What's the best way to develop a solution for this?</p>
| Miguel Angel Alvarez Rodriguez | <p>I think what you saw is good advice.</p>
<p>There are a number of projects that show how it could be done - one example here: <a href="https://gitlab.com/dealako/k8s-sidecar-vpn" rel="nofollow noreferrer">https://gitlab.com/dealako/k8s-sidecar-vpn</a></p>
<p>Using sidecar for VPN connection is usually a good idea. It has a number of advantages:</p>
<ul>
<li>allows you to use existing VPN images so that you do not have to add the VPN software to your images</li>
<li>Allows to use exactly the same VPN image and configuration for multiple pods/services</li>
<li>allows you to keep your secrets (user/password) available only to the VPN, and the VPN will only expose a plain TCP/HTTP connection available only to your service - your service/task will never access the secrets, which makes it a very secure way of storing the secrets and handling authentication</li>
</ul>
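<p>A very rough sketch of the pod shape (the image names, secret layout and capabilities are placeholders to adapt to the VPN image you pick):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: task-with-vpn
spec:
  containers:
    - name: task
      image: my-script-image          # your dockerized script
    - name: vpn
      image: my-vpn-client-image      # any VPN client image you trust
      securityContext:
        capabilities:
          add: ["NET_ADMIN"]
      volumeMounts:
        - name: vpn-credentials
          mountPath: /etc/vpn
          readOnly: true
  volumes:
    - name: vpn-credentials
      secret:
        secretName: vpn-credentials   # user, password and CA certificate
</code></pre>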
| Jarek Potiuk |
<p>I have a single-node kubernetes cluster running on GKE. All the load is running on the single node, separated by namespaces.</p>
<p>Now I would like to implement auto-scaling. Is it possible to scale microservices out to a new node while keeping one pod running on my main node only?</p>
<p>What I am thinking:</p>
<p>Main node: running everything with 1-pod availability (Redis, Elasticsearch)</p>
<p>Scaled-up node: scaled-up replicas of the stateless microservices only</p>
<p>So is there any way I can implement this using the <code>node auto scaler</code> or using <code>affinity</code>?</p>
<p>The issue is that right now I am running graylog, elasticsearch, redis and rabbitmq on a single node as <code>statefulsets</code> backed by volumes; to add <code>affinity</code> to all of them I would have to redeploy everything and edit the yaml files.</p>
| Harsh Manvar | <p>I'm not sure that I understand your question correctly, but if I do, then you may try to use taints and tolerations (possibly combined with node affinity). Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. All the details are available in the documentation <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="nofollow noreferrer">here</a>. A minimal sketch is shown below.</p>
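<p>A minimal sketch of how this could look (the taint key/value and node name are just examples): taint the scaled-up node(s) so that, by default, nothing lands there, and add a matching toleration only to the stateless microservice deployments:</p>
<pre><code># taint the additional node(s)
kubectl taint nodes scaled-node-1 workload=stateless:NoSchedule
</code></pre>
<pre><code># pod template of the stateless deployments
spec:
  template:
    spec:
      tolerations:
      - key: "workload"
        operator: "Equal"
        value: "stateless"
        effect: "NoSchedule"
</code></pre>
<p>The statefulsets without the toleration will keep being scheduled on the untainted main node.</p>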
| Lachezar Balev |
<p>Here is my deployment template:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
name: XXX
version: {{ xxx-version }}
deploy_time: "{{ xxx-time }}"
name: XXX
spec:
replicas: 1
revisionHistoryLimit : 0
strategy:
type : "RollingUpdate"
rollingUpdate:
maxUnavailable : 0%
maxSurge : 100%
selector:
matchLabels:
name: XXX
version: {{ xxx-version }}
deploy_time: "{{ xxx-time }}"
template:
metadata:
labels:
name: XXX
version: {{ xxx-version }}
deploy_time: "{{ xxx-time }}"
spec:
containers:
- image: docker-registry:{{ xxx-version }}
name: XXX
ports:
- name: XXX
containerPort: 9000
</code></pre>
| karthikcru | <p>The key section in the documentation that's relevant to this issues is:</p>
<blockquote>
<p>Existing Replica Set controlling Pods whose labels match <code>.spec.selector</code>but whose template does not match <code>.spec.template</code> are scaled down. Eventually, the new Replica Set will be scaled to <code>.spec.replicas</code> and all old Replica Sets will be scaled to 0.</p>
</blockquote>
<p><a href="http://kubernetes.io/docs/user-guide/deployments/" rel="noreferrer">http://kubernetes.io/docs/user-guide/deployments/</a></p>
<p>So the spec.selector should not vary across multiple deployments:</p>
<pre><code>selector:
matchLabels:
name: XXX
version: {{ xxx-version }}
deploy_time: "{{ xxx-time }}"
</code></pre>
<p>should become:</p>
<pre><code>selector:
matchLabels:
name: XXX
</code></pre>
<p>The rest of the labels can remain the same</p>
| manojlds |
<p>My VM (virtual machine) has multiple virtual network cards, so it has multiple IPs. When I installed Kubernetes, etcd was automatically installed and configured, and it automatically selected a default IP, but this IP is not the one I want it to listen on. Where and how can I configure etcd to listen on the right IP?</p>
<p>I installed Kubernetes and the first control plane node (master01) works (it is Ready), but when I join the second control plane node (master02) I get an error like this: "error execution phase check-etcd: error syncing endpoints with etc: dial tcp 10.0.2.15:2379: connect: connection refused". So I checked the etcd process and found that one of its flags is "--advertise-client-urls=10.0.2.15:2379"; that IP is not the one I want it to listen on. My real IP is 192.168.56.101 and I want etcd to listen on it. What should I do?</p>
<p>My Kubernetes cluster version is v1.14.1.</p>
<p>I hope to make etcd listen on the correct IP so that the second Kubernetes master node can join the cluster successfully.</p>
| Esc | <p>Judging by the error message it looks like you're using <code>kubeadm</code>. You need to add <code>extraArgs</code> to your etcd in <code>ClusterConfiguration</code>, something like (untested):</p>
<pre><code>apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
etcd:
local:
...
extraArgs:
advertise-client-urls: "https://192.168.56.101:2379"
listen-client-urls: "https://192.168.56.101:2379,https://127.0.0.1:2379"
...
</code></pre>
<p>Also see the <code>ClusterConfiguration</code> documentation: <a href="https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta1#LocalEtcd" rel="nofollow noreferrer">https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta1#LocalEtcd</a></p>
| johnharris85 |
<p><strong>What I want to do:</strong></p>
<p>For my own educational purposes I'm trying to start a kube API server and register a kubelet as a node with it. I'm doing this exercise in a Vagrant box which runs Linux and a Docker runtime. </p>
<p><strong>What I did so far is:</strong></p>
<ol>
<li>I've run a dockerized etcd using the host network:</li>
</ol>
<pre><code>$docker run --volume=$PWD/etcd-data:/default.etcd --detach --net=host quay.io/coreos/etcd
$docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3f4a42fce24a quay.io/coreos/etcd "/usr/local/bin/etcd" 2 hours ago Up 2 hours awesome_bartik
</code></pre>
<ol start="2">
<li>I've started the API server connecting it to etcd</li>
</ol>
<pre><code>$./kube-apiserver --etcd-servers=http://127.0.0.1:2379 --service-cluster-ip-range=10.0.0.0/16
</code></pre>
<p>The server v.1.16 is up and running as seen next:</p>
<pre><code>$curl http://localhost:8080/version
{
"major": "1",
"minor": "16",
"gitVersion": "v1.16.0",
"gitCommit": "2bd9643cee5b3b3a5ecbd3af49d09018f0773c77",
"gitTreeState": "clean",
"buildDate": "2019-09-18T14:27:17Z",
"goVersion": "go1.12.9",
"compiler": "gc",
"platform": "linux/amd64"
}
</code></pre>
<p>No nodes are registered yet.</p>
<p><strong>What I can't yet achieve:</strong></p>
<p>Now I want to start the kubelet and register it as a Node. In earlier versions this was maybe possible with the <code>--api-servers</code> flag but this flag is already removed and the configuration is supposed to be in a separate kubelet config file.</p>
<p>My question is how to configure the access to the API server in the kubelet configuration file? Similar discussion is available <a href="https://github.com/kubernetes/kubernetes/issues/36745" rel="nofollow noreferrer">here</a> but it did not help me too much. The kubelet configuration options are available <a href="https://github.com/kubernetes/kubelet/blob/master/config/v1beta1/types.go" rel="nofollow noreferrer">here</a>.</p>
<p>So far the config file looks like this... Seems that <code>staticPodURL</code> is definitely not the right config :-)</p>
<pre><code>kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
staticPodURL: http://localhost:8080
failSwapOn: false
authentication:
anonymous:
enabled: true
webhook:
enabled: false
authorization:
mode: AlwaysAllow
</code></pre>
| Lachezar Balev | <p>After a good amount of digging I've managed to make the kubelet register into the kube-api server which opens my way for further building of a small k8s cluster component by component.</p>
<p>The flag that I was looking for in the kubelet config is the following:</p>
<blockquote>
<p>--kubeconfig string</p>
<p>Path to a kubeconfig file, specifying how to connect to the API server. Providing --kubeconfig enables API server
mode, omitting --kubeconfig enables standalone mode.</p>
</blockquote>
<p>Now I have two config files:</p>
<pre><code>$ cat 02.kubelet-api-server-config.yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
server: http://localhost:8080
name: kubernetes
contexts:
- context:
cluster: kubernetes
name: system:node:java2dayskube@kubernetes
current-context: system:node:java2dayskube@kubernetes
preferences: {}
$ cat 02.kubelet-base-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
staticPodURL: http://localhost:8080
failSwapOn: false
authentication:
anonymous:
enabled: true
webhook:
enabled: false
authorization:
mode: AlwaysAllow
</code></pre>
<p>As stated above the API server is up and running so I can start the kubelet now:</p>
<pre><code>sudo ./kubelet --config=02.kubelet-base-config.yaml --kubeconfig=02.kubelet-api-server-config.yaml
</code></pre>
<p>Obviously the kubelet registered itself as a node in the API server (details skipped for brevity):</p>
<pre><code>$ curl http://localhost:8080/api/v1/nodes
{
"kind": "NodeList",
"apiVersion": "v1",
"metadata": {
"selfLink": "/api/v1/nodes",
"resourceVersion": "62"
},
"items": [
{
"metadata": {
"name": "vagrant",
...
"creationTimestamp": "2019-11-03T09:12:18Z",
"labels": {
"beta.kubernetes.io/arch": "amd64",
"beta.kubernetes.io/os": "linux",
"kubernetes.io/arch": "amd64",
"kubernetes.io/hostname": "vagrant",
"kubernetes.io/os": "linux"
},
"annotations": {
"volumes.kubernetes.io/controller-managed-attach-detach": "true"
}
},
"spec": {
"taints": [
{
"key": "node.kubernetes.io/not-ready",
"effect": "NoSchedule"
}
]
}
...
}
</code></pre>
<p>I've managed to create one pod by making a POST request to the api-server. The kubelet was notified and span the corresponding docker containers.</p>
| Lachezar Balev |
<p>I was checking Kubernetes <a href="https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy" rel="nofollow noreferrer">documentation</a> for pulling images. In that, I saw two policies IfNotPresent and Always. In "Always" its stated that</p>
<blockquote>
<p>If the kubelet has a container image with that exact digest cached locally, the kubelet uses its cached image; otherwise, the kubelet pulls the image with the resolved digest, and uses that image to launch the container.</p>
</blockquote>
<p>I am unable to understand what is local here. Is it a node, pod, or cluster? What is the difference between Always and IfNotPresent if it is at node level? It's very confusing.</p>
| Akshit Bansal | <p>When you use an image WITHOUT a tag, Kubernetes will assume that you want the latest version of the image, which is identified by the latest tag by default. If you have multiple versions of the same image in your repository with different tags, such as img1:1.0.0, img1:1.1.0, and img1:latest, Kubernetes will use the image with the tag specified in the pod specification.</p>
<p>If you use IfNotPresent and the image with the specified tag is already present on the worker node, Kubernetes will use that image to start the container, even if there is a newer version of the image available in the repository with the same tag.</p>
<p>If you use Always, however, Kubernetes will always attempt to download the latest version of the image with the specified tag from the repository, even if a cached copy of the image is already present on the worker node. This can be useful if you want to ensure that your containers are always running the latest version of the image.</p>
<p>consider a scenario where a container is running on a worker node with img1:latest as the latest tag, and then the container restarts or reschedules on another worker node with the same tag pointing to an older version of the image, IfNotPresent will use the local image present on the worker node, while Always will attempt to download the latest version of the image from the repository.</p>
<p>However, it's important to note that the behavior of Always is based on the digest of the image, not the tag. The digest is a unique identifier for a specific version of an image that is based on the content of the image. When you specify Always, Kubernetes will check the digest of the image on the worker node against the digest of the latest version of the image in the repository with the same tag. If the digests match, Kubernetes will use the cached copy of the image on the worker node. If the digests differ, Kubernetes will download the latest version of the image from the repository and use it to start the container.</p>
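<p>For completeness, the policy is set per container in the pod spec. A minimal sketch (the image name is just an example):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: img1-pod
spec:
  containers:
  - name: app
    image: img1:1.1.0          # with an immutable tag, IfNotPresent is usually sufficient
    imagePullPolicy: Always    # forces the kubelet to resolve the digest against the registry on every start
</code></pre>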
| Viswesn |
<p>I have a question related to Kubernetes Ingress-nginx, I want to use <a href="http://nginx.org/en/docs/http/ngx_http_map_module.html" rel="nofollow noreferrer">ngx_http_map_module</a> to define a new attribute for <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/log-format/" rel="nofollow noreferrer">log-format-upstream</a>. The respective part in <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/" rel="nofollow noreferrer">helm chart</a> where I have defined my map looks like this:</p>
<pre><code>containerPort:
http: 80
https: 443
config:
log-format-upstream: $time_iso8601, $proxy_protocol_addr, $proxy_add_x_forwarded_for, $req_id, $remote_user, $bytes_sent, $request_time, $status, $host, $server_protocol, $uri $uri_category
http-snippet: |
map $uri $uri_category {
~(*/)([0-9]{3,}+)(/*)$ $2;
}
configAnnotations: {}
</code></pre>
<p>However, it gives me following error:</p>
<pre><code>nginx: [emerg] unexpected "{" in /tmp/nginx-cfg1517276787:255
</code></pre>
<p>The line 255 in the config looks like this:</p>
<pre><code> proxy_ssl_session_reuse on;
map $uri $uri_category { #Line: 255
~(*/)([0-9]{3,}+)(/*)$ $2;
}
upstream upstream_balancer {
</code></pre>
<p>I suspect that I haven't defined <code>http-snippet</code> and the map at the right location in the chart, but I am not sure where exactly they should go either.</p>
| Maven | <p><a href="https://stackoverflow.com/a/49440631">Related answer</a>: Surround the regex in double-quotes; Nginx uses <code>{</code> and <code>}</code> for defining blocks.</p>
<p>For example:</p>
<pre><code> map $uri $uri_category {
"~[0-9]{3,}" 'FOO';
}
server {
location / {
try_files $uri$uri_category =404;
}
}
</code></pre>
<p>That config appends <code>'FOO'</code> to three+ consecutive digits.</p>
<pre><code>/123 -> /123FOO
/4444 -> /4444FOO
</code></pre>
<hr />
<p>In your case, I think the regex should be something like:</p>
<p><code>"~(.*/)([0-9]{3,})(/.*)$" $2;</code></p>
| Eric Fortis |
<p>I am trying to create EKS Fargate cluster and deploy example Spring Boot application with 1 endpoint, I successfully create stack with following CloudFormation script:</p>
<pre><code>---
AWSTemplateFormatVersion: '2010-09-09'
Description: 'AWS CloudFormation template for EKS Fargate managed Kubernetes cluster with exposed endpoints'
Resources:
VPC:
Type: AWS::EC2::VPC
Properties:
CidrBlock: 10.0.0.0/16
EnableDnsSupport: true
EnableDnsHostnames: true
InternetGateway:
Type: AWS::EC2::InternetGateway
VPCGatewayAttachment:
Type: AWS::EC2::VPCGatewayAttachment
Properties:
VpcId: !Ref VPC
InternetGatewayId: !Ref InternetGateway
PublicSubnet:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref VPC
CidrBlock: 10.0.2.0/24
MapPublicIpOnLaunch: true
AvailabilityZone: !Select [ 0, !GetAZs '' ]
PrivateSubnetA:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref VPC
CidrBlock: 10.0.0.0/24
AvailabilityZone: !Select [ 0, !GetAZs '' ]
PrivateSubnetB:
Type: AWS::EC2::Subnet
Properties:
VpcId: !Ref VPC
CidrBlock: 10.0.1.0/24
AvailabilityZone: !Select [ 1, !GetAZs '' ]
PublicRouteTable:
Type: AWS::EC2::RouteTable
Properties:
VpcId: !Ref VPC
PublicRoute:
Type: AWS::EC2::Route
Properties:
RouteTableId: !Ref PublicRouteTable
DestinationCidrBlock: 0.0.0.0/0
GatewayId: !Ref InternetGateway
SubnetRouteTableAssociationA:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
SubnetId: !Ref PublicSubnet
RouteTableId: !Ref PublicRouteTable
EIP:
Type: AWS::EC2::EIP
NatGateway:
Type: AWS::EC2::NatGateway
Properties:
SubnetId: !Ref PublicSubnet
AllocationId: !GetAtt EIP.AllocationId
PrivateRouteTable:
Type: AWS::EC2::RouteTable
Properties:
VpcId: !Ref VPC
PrivateRoute:
Type: AWS::EC2::Route
Properties:
RouteTableId: !Ref PrivateRouteTable
DestinationCidrBlock: 0.0.0.0/0
NatGatewayId: !Ref NatGateway
PrivateSubnetRouteTableAssociationA:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
SubnetId: !Ref PrivateSubnetA
RouteTableId: !Ref PrivateRouteTable
PrivateSubnetRouteTableAssociationB:
Type: AWS::EC2::SubnetRouteTableAssociation
Properties:
SubnetId: !Ref PrivateSubnetB
RouteTableId: !Ref PrivateRouteTable
EKSCluster:
Type: AWS::EKS::Cluster
Properties:
Name: EKSFargateCluster
Version: '1.26'
ResourcesVpcConfig:
SubnetIds:
- !Ref PrivateSubnetA
- !Ref PrivateSubnetB
RoleArn: !GetAtt EKSClusterRole.Arn
FargateProfile:
Type: AWS::EKS::FargateProfile
Properties:
ClusterName: !Ref EKSCluster
FargateProfileName: FargateProfile
PodExecutionRoleArn: !GetAtt FargatePodExecutionRole.Arn
Selectors:
- Namespace: default
Subnets:
- !Ref PrivateSubnetA
- !Ref PrivateSubnetB
FargateProfileCoredns:
Type: AWS::EKS::FargateProfile
Properties:
ClusterName: !Ref EKSCluster
FargateProfileName: CorednsProfile
PodExecutionRoleArn: !GetAtt FargatePodExecutionRole.Arn
Selectors:
- Namespace: kube-system
Labels:
- Key: k8s-app
Value: kube-dns
Subnets:
- !Ref PrivateSubnetA
- !Ref PrivateSubnetB
FargatePodExecutionRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Principal:
Service:
- eks-fargate-pods.amazonaws.com
Action:
- sts:AssumeRole
ManagedPolicyArns:
- arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy
EKSClusterRole:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: '2012-10-17'
Statement:
- Effect: Allow
Principal:
Service:
- eks.amazonaws.com
Action:
- sts:AssumeRole
ManagedPolicyArns:
- arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
- arn:aws:iam::aws:policy/AmazonEKSVPCResourceController
</code></pre>
<p>I run following command to path the CoreDNS for Fargate:</p>
<pre><code>kubectl patch deployment coredns \
-n kube-system \
--type json \
-p='[{"op": "remove", "path": "/spec/template/metadata/annotations/eks.amazonaws.com~1compute-type"}]'
</code></pre>
<p>Then I deploy my example application image from public ECR with the following Kubernetes manifest:</p>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: example-app
spec:
replicas: 2
selector:
matchLabels:
app: example-app
template:
metadata:
labels:
app: example-app
spec:
containers:
- name: ventu
image: public.ecr.aws/not_real_url/public_ecr_name:latest
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: example-service
spec:
type: LoadBalancer
selector:
app: example-app
ports:
- protocol: TCP
port: 80
targetPort: 8080
</code></pre>
<p>Then when I run:</p>
<pre><code>kubectl get svc
</code></pre>
<p>I see result:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
example-service LoadBalancer 172.20.228.77 aa0116829ac2647a7bf39a97bffb0183-1208408433.eu-central-1.elb.amazonaws.com 80:31915/TCP 16m
kubernetes ClusterIP 172.20.0.1 <none> 443/TCP 29m
</code></pre>
<p>However, when I try to reach the EXTERNAL-IP of my LoadBalancer example-service, I get an empty response; I can't reach my application on the only path defined in my Spring Boot application: /api/v1/info</p>
<pre><code>server.port=8080
server.servlet.context-path=/api/v1
</code></pre>
<p>What am I missing?</p>
<p>A couple of additional details:</p>
<ul>
<li>my pods spin up successfully, I can see Spring Boot logging when I run kubectl logs pod-name</li>
<li>my coredns pods spin up correctly as well</li>
<li>I use busybox to test my cluster's dns, and everything seems to be working too</li>
</ul>
| Jakub Zak | <p>I solved my issue by following this <a href="https://repost.aws/knowledge-center/eks-alb-ingress-controller-fargate" rel="nofollow noreferrer">guide</a>.</p>
<p>I then exported the resulting stack into my CloudFormation script.</p>
<p>Then, to deploy my application, I updated my Kubernetes manifest to:</p>
<pre><code>---
apiVersion: v1
kind: Namespace
metadata:
name: example
---
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: example
name: deployment-example-be-app
spec:
selector:
matchLabels:
app.kubernetes.io/name: example-be-app
replicas: 2
template:
metadata:
labels:
app.kubernetes.io/name: example-be-app
spec:
containers:
- name: example-be-app
image: public.ecr.aws/fake_url/example:latest
imagePullPolicy: Always
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
namespace: example
name: service-example-be-app
annotations:
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
service.beta.kubernetes.io/aws-load-balancer-type: external
service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
type: LoadBalancer
ports:
- port: 80
targetPort: 8080
protocol: TCP
selector:
app.kubernetes.io/name: example-be-app
</code></pre>
<p>Now I access my example application form browser.</p>
| Jakub Zak |
<p>Currently I’m trying to get the API server connected with my Keycloak.</p>
<p>When I use the OIDC information from the user, everything works fine, but the groups seem to be ignored.
The API server is running with the parameters</p>
<pre><code> --oidc-ca-file=/etc/kubernetes/ssl/ca.pem
--oidc-client-id=kubernetes
--oidc-groups-claim=groups
--oidc-groups-prefix=oidc:
--oidc-issuer-url=https://keycloak.example.com/auth/realms/master
--oidc-username-claim=preferred_username
--oidc-username-prefix=oidc:
</code></pre>
<p>I added a ClusterRole and ClusterRoleBinding</p>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: developer-role
rules:
- apiGroups: [""]
resources: ["namespaces","pods"]
verbs: ["get", "watch", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: developer-crb
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: developer-role
subjects:
- kind: User
name: "oidc:myuser"
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>And for my user “myuser” everything works fine.</p>
<p>But when I change the ClusterRoleBinding subject to a Group</p>
<pre><code>....
subjects:
- kind: User
name: "oidc:group1"
apiGroup: rbac.authorization.k8s.io
...
</code></pre>
<p>I receive forbidden.</p>
<p>I tried to debug the jwt token and the group seems to be included:</p>
<pre><code>{
...
"groups": [
"group1",
"group2",
"group3"
],
...
}
</code></pre>
<p>Any ideas why my groups are ignored/my ClusterRoleBinding not working?</p>
| Heiko | <pre><code>....
subjects:
- kind: User
name: "oidc:group1"
apiGroup: rbac.authorization.k8s.io
...
</code></pre>
<p>should be:</p>
<pre><code>....
subjects:
- kind: Group
name: "oidc:group1"
apiGroup: rbac.authorization.k8s.io
...
</code></pre>
| johnharris85 |
<p>I am new to Argo Workflows and following along with <a href="https://youtu.be/XySJb-WmL3Q?t=1247" rel="nofollow noreferrer">this tutorial</a>.</p>
<p>Following along with it, we are to create a service account and then attach the pre-existing <code>workflow-role</code> to the service account, like this:</p>
<pre class="lang-sh prettyprint-override"><code>> kubectl create serviceaccount mike
serviceaccount/mike created # Response from my terminal
> kubectl create rolebinding mike --serviceaccount=argo:mike --role=workflow-role
rolebinding.rbac.authorization.k8s.io/mike created # Response from my terminal
</code></pre>
<p>But then when I tried to submit a job using that service account, it said that there is no such role <code>workflow-role</code>:</p>
<pre class="lang-sh prettyprint-override"><code>Message: Error (exit code 1): pods "mike-cli-hello-svlmn" is forbidden: User
"system:serviceaccount:argo:mike" cannot patch resource "pods" in API group "" in the namespace
"argo": RBAC: role.rbac.authorization.k8s.io "workflow-role" not found
</code></pre>
<p>(I also do not understand why my default API group is null, but I'm assuming that is unrelated.)</p>
<p>I then checked, and indeed there is no such role:</p>
<pre class="lang-sh prettyprint-override"><code>❯ kubectl get role
NAME CREATED AT
agent 2022-02-28T21:38:31Z
argo-role 2022-02-28T21:38:31Z
argo-server-role 2022-02-28T21:38:32Z
executor 2022-02-28T21:38:32Z
pod-manager 2022-02-28T21:38:32Z
submit-workflow-template 2022-02-28T21:38:32Z
workflow-manager 2022-02-28T21:38:32Z
</code></pre>
<p>Could it be that the role is <code>workflow-manager</code>? That sounds more like an automated service to manage the pipeline / DAG or something similar.</p>
<p>I am obviously quite new to Argo. I have successfully launched jobs, but not when trying to use that newly created service account.</p>
<p>Should Argo have a default role of <code>workflow-role</code>? How do I create it?</p>
| Mike Williamson | <p>Actually, I think I got it, but if someone sees this, a confirmation would be nice.</p>
<p>I created a role file as follows:</p>
<pre class="lang-sh prettyprint-override"><code>role.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: workflow
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- watch
- patch
- apiGroups:
- ""
resources:
- pods/log
verbs:
- get
- watch
</code></pre>
<p>I then created the role via the standard</p>
<pre class="lang-sh prettyprint-override"><code>kubectl apply -f role.yaml
</code></pre>
<p>Then created the role-binding same as above:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl create rolebinding mike --serviceaccount=argo:mike --role=workflow
</code></pre>
<p>Then I could submit jobs with the new service account without a problem:</p>
<pre class="lang-sh prettyprint-override"><code>argo submit --serviceaccount mike --watch argo_tutorial.yaml
</code></pre>
| Mike Williamson |
<p>In kubernetes you can set "readOnlyRootFilesystem: true" for a container, in order to make the container's file system read-only, thus making it more secure.</p>
<p>However, in my particular case, my application still needs to write some files, so I need to add some read-write volume mounts for some particular paths.</p>
<p>Now my question is: if I introduce these writable locations into my setup, does it really make a difference from a security point of view if the rest of the file system is read-only?</p>
| adrians | <p>Yep, having the main filesystem in the container as read-only with specific locations (for example logging or temporary files) as read-write will, in general, improve the overall security of your container.</p>
<p>An attacker trying to compromise your contained application from the outside, likely won't know which directories are read-write, so would have difficulty in placing their payloads onto disk.</p>
<p>It's not a perfect defence, by any manner of means, however it's a good layer of security, and if you know which directories need to be read-write, a relatively simple step to implement.</p>
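<p>A minimal sketch of that pattern, with the root filesystem read-only and only the paths the application genuinely needs mounted read-write (the paths here are examples):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: myapp:latest                # example image
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
    - name: tmp
      mountPath: /tmp                  # writable scratch space
    - name: app-data
      mountPath: /var/lib/myapp        # example: the directory the app writes to
  volumes:
  - name: tmp
    emptyDir: {}
  - name: app-data
    emptyDir: {}
</code></pre>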
| Rory McCune |
<p>Currently, our CI/CD environment is cloud based and runs on Kubernetes.
Kubernetes cloud providers recently removed the docker daemon, due to performance advantages. For example, Google Kubernetes Engine or IBM Cloud Kubernetes only feature a containerd runtime, to <strong>run</strong> but not <strong>build</strong> container images.</p>
<p>Many tools like <a href="https://github.com/GoogleContainerTools/kaniko" rel="nofollow noreferrer">kaniko</a> or <a href="https://github.com/GoogleContainerTools/jib" rel="nofollow noreferrer">jib</a> fill this gap. They provide a way to build docker images very effectively without requiring a docker daemon.</p>
<p><strong>Here comes the Problem:</strong></p>
<ol>
<li>Image "registry-x.com/repo/app1:v1-snapshot" gets build from jib in CI to registry-x.</li>
<li>Image "registry-x.com/repo/app1:v1-snapshot" is then at some point of time deployed and tested and needs to be delivered to the registry Y if the test is successfull as well as needs to be marked as stable release in registry X.</li>
</ol>
<p>So Image "registry-x.com/repo/app1:v1-snapshot" needs to be tagged from "registry-x.com/repo/app1:v1-snapshot" to "registry-x.com/web/app1:v1-release" and then it needs additionally to be tagged with "registry-y.com/web/app1:v1-release" and both need to be pushed.</p>
<p>Outcome: The Snapshot image from development is available in both registries with a release tag.</p>
<p>So how do I do these three simple operations (pull, tag, push) without a docker daemon? It seems like kaniko and jib are not an option here. </p>
<p>I don't want to order a VM only to get a docker daemon to do these operations. I also know that Jib is capable of pushing to multiple registries, but it is not able to just rename images.</p>
<p>Relates also to this Question from last year:
<a href="https://stackoverflow.com/questions/44974656/clone-an-image-from-a-docker-registry-to-another">Clone an image from a docker registry to another</a></p>
<p>Regards, Leon</p>
| LeonG | <p>Docker Registry provides an <a href="https://docs.docker.com/registry/spec/api/" rel="nofollow noreferrer">HTTP API</a>, so you could use those methods to pull and push images (a small <code>curl</code> sketch follows the list below). </p>
<p>There are several libraries providing an higher abstraction layer over it (<a href="https://github.com/heroku/docker-registry-client" rel="nofollow noreferrer">docker-registry-client in Go</a>, <a href="https://www.npmjs.com/package/docker-registry-client" rel="nofollow noreferrer">docker-registry-client in Js</a>, etc).</p>
<p>In any case, the flow will be </p>
<ul>
<li><p><a href="https://docs.docker.com/registry/spec/api/#pulling-an-image" rel="nofollow noreferrer">Pulling an image</a> involves:</p>
<ul>
<li><a href="https://docs.docker.com/registry/spec/api/#manifest" rel="nofollow noreferrer">Retrieve manifests</a> from <code>registry-x.com/repo/app1:v1-snapshot</code>.</li>
<li><a href="https://docs.docker.com/registry/spec/api/#blob" rel="nofollow noreferrer">Download</a> of the layers (blobs) named on the manifest.</li>
</ul></li>
<li><p><a href="https://docs.docker.com/registry/spec/api/#pushing-an-image" rel="nofollow noreferrer">Pushing an image</a> involves:</p>
<ul>
<li>Upload all layers you previously downloaded</li>
<li>Modify the original manifest with your new version</li>
<li>Upload the new manifest</li>
</ul></li>
</ul>
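<p>To make the flow concrete, here is a hedged sketch using plain <code>curl</code> against the Registry HTTP API (registry host, repository and tags are taken from the question; authentication headers are omitted). Within a single registry you can "retag" an image by re-uploading only its manifest, since the blobs are already present:</p>
<pre><code>REGISTRY=registry-x.com
REPO=repo/app1

# 1. Fetch the v2 manifest of the snapshot tag (the Accept header matters,
#    otherwise some registries return a legacy schema1 manifest)
curl -sS -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  "https://${REGISTRY}/v2/${REPO}/manifests/v1-snapshot" -o manifest.json

# 2. Push the same manifest back under the release tag - no blob transfer needed
curl -sS -X PUT \
  -H "Content-Type: application/vnd.docker.distribution.manifest.v2+json" \
  --data-binary @manifest.json \
  "https://${REGISTRY}/v2/${REPO}/manifests/v1-release"
</code></pre>
<p>For the copy to registry-y.com you additionally have to download each layer listed in the manifest and upload it there, which is exactly what the libraries above wrap for you.</p>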
| Gonzalo Matheu |
<p>I heard ElasticSearch is changing its license to SSPL. Because of that, it will no longer be considered OSS (open-source software).</p>
<p>Do you know of a better OSS as a replacement for ElasticSearch?</p>
<p>I hope the suggested OSS has an official image on Docker Hub, since I will also be using it in Kubernetes.</p>
| lemont80 | <p>The <a href="https://opensearch.org/" rel="nofollow noreferrer">OpenSearch</a> alternative provided by AWS could be a good option.
It's forked from Elasticsearch and provides the same <a href="https://aws.amazon.com/fr/blogs/aws/amazon-elasticsearch-service-is-now-amazon-opensearch-service-and-supports-opensearch-10/" rel="nofollow noreferrer">features</a>.</p>
| YLR |
<p>I'm running into issues trying to deploy a stateful MongoDB replica set with the sidecar from cvallance while running Istio 0.8. If I leave Istio out of the mix everything works, but when Istio is enabled the mongo-sidecars can't find each other and the replica set is not configured. Below is my mongo deployment and service.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
service: mongo-test
environment: test
name: mongo-test
namespace: test
spec:
ports:
- name: mongo
port: 27017
clusterIP: None
selector:
service: mongo-test
role: mongo-test
environment: test
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongo-test
namespace: test
spec:
serviceName: "mongo-test"
replicas: 3
selector:
matchLabels:
service: mongo-test
template:
metadata:
labels:
role: mongo-test
environment: test
service: mongo-test
spec:
serviceAccountName: mongo-test-serviceaccount
terminationGracePeriodSeconds: 60
containers:
- name: mongo
image: mongo:3.6.5
resources:
requests:
cpu: "10m"
command:
- mongod
- "--bind_ip_all"
- "--replSet"
- rs0
- "--smallfiles"
- "--noprealloc"
ports:
- containerPort: 27017
volumeMounts:
- name: mongo-persistent-storage
mountPath: /data/db
- name: mongo-sidecar
image: cvallance/mongo-k8s-sidecar
resources:
requests:
cpu: "10m"
env:
- name: MONGO_SIDECAR_POD_LABELS
value: "role=mongo-test,environment=test"
volumeClaimTemplates:
- metadata:
name: mongo-persistent-storage
annotations:
volumes.beta.kubernetes.io/storage-class: "mongo-ssd"
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 100Gi
</code></pre>
| jelums | <p>Istio does not support mutual TLS for statefulsets, at least up to v1.0.2.</p>
| Paul Ma |
<p>I have deployed a simple dotnet core app into Kubernetes. The service which is exposed is as below </p>
<pre><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2020-01-17T18:07:23Z"
labels:
app.kubernetes.io/instance: expo-api
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: expo-api
app.kubernetes.io/version: 0.0.4
helm.sh/chart: expo-api-0.0.4
name: expo-api-service
namespace: default
resourceVersion: "997971"
selfLink: /api/v1/namespaces/default/services/expo-api-service
uid: 144b9d1d-87d2-4096-9851-9563266b2099
spec:
clusterIP: 10.12.0.122
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
selector:
app.kubernetes.io/instance: expo-api
app.kubernetes.io/name: expo-api
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
</code></pre>
<p>The ingress controller I am using is nginx ingress controller and the simple ingress rules are set as below - </p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: "true"
creationTimestamp: "2020-01-17T18:07:24Z"
generation: 3
labels:
app.kubernetes.io/instance: expo-api
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: expo-api
app.kubernetes.io/version: 0.0.4
helm.sh/chart: expo-api-0.0.4
name: expo-api
namespace: default
resourceVersion: "1004650"
selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/expo-api
uid: efef4e15-ed0a-417f-8b34-4e0f46cb1e70
spec:
rules:
- http:
paths:
- backend:
serviceName: expo-api-service
servicePort: 80
path: /expense
status:
loadBalancer:
ingress:
- ip: 34.70.45.62
</code></pre>
<p>The dotnet core app which has a simple start up - </p>
<pre><code>public class Startup
{
public Startup(IConfiguration configuration)
{
Configuration = configuration;
}
public IConfiguration Configuration { get; }
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
services.AddControllers();
}
// This method gets called by the runtime. Use this method to configure the HTTP request pipeline.
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
app.UseHttpsRedirection();
app.UseRouting();
app.UseAuthorization();
app.UseEndpoints(endpoints =>
{
endpoints.MapControllers();
});
}
}
</code></pre>
<p>This is the ingress output - </p>
<pre><code>Name: expo-api
Namespace: default
Address: 34.70.45.62
Default backend: default-http-backend:80 (10.8.0.9:8080)
Rules:
Host Path Backends
---- ---- --------
*
/expense expo-api-service:80 (10.8.0.26:80,10.8.0.27:80,10.8.1.14:80)
Annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: true
Events: <none>
</code></pre>
<p>Below is the nginx ingress controller setup -</p>
<pre><code>Name: nginx-nginx-ingress-controller
Namespace: default
Labels: app=nginx-ingress
chart=nginx-ingress-1.29.2
component=controller
heritage=Helm
release=nginx
Annotations: <none>
Selector: app=nginx-ingress,component=controller,release=nginx
Type: LoadBalancer
IP: 10.12.0.107
LoadBalancer Ingress: 34.66.164.70
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 30144/TCP
Endpoints: 10.8.1.6:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 30469/TCP
Endpoints: 10.8.1.6:443
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
</code></pre>
<p>The issue is: when I change the ingress rule path to just <code>/</code> and access it using <code>curl 34.66.164.70/weatherforecast</code>, it works perfectly fine.</p>
<p>However, when I change the ingress path to <code>/expense</code> and try to access it using <code>curl 34.66.164.70/expense/weatherforecast</code>, the output is an error:</p>
<pre><code>{
"type": "https://tools.ietf.org/html/rfc7231#section-6.5.1",
"title": "One or more validation errors occurred.",
"status": 400,
"traceId": "|4dec8cf0-4fddb4d168cb9569.",
"errors": {
"id": [
"The value 'weatherforecast' is not valid."
]
}
}
</code></pre>
<p>I am unable to understand what the issue behind this is. Is it coming from the dotnet core side or from Kubernetes? If from dotnet, what may be the resolution, and if from Kubernetes, what is the expected resolution?</p>
| Joy | <p><strong>ORIGINAL</strong>: Thanks to @heyzling's insight I found a solution. I changed the app path in the code in <code>startup.cs</code>. The issue was that the API originally was not expecting a route prefix for all controllers, hence it was giving the error. So I had to make a slight change in <code>startup.cs</code> to add <code>app.UsePathBase("/expense")</code>. Below is the configuration which I added:</p>
<pre><code>public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
if (env.IsDevelopment())
{
app.UseDeveloperExceptionPage();
}
app.UsePathBase("/expense"); // this is the added configuration which identifies the ingress path rule individually.
app.UseHttpsRedirection();
app.UseRouting();
app.UseAuthorization();
app.UseEndpoints(endpoints =>
{
endpoints.MapControllers();
});
}
</code></pre>
<p>Ideally I feel this isn't a good design, since the Kubernetes ingress and the dotnet core routes should know nothing about each other. They ideally should not depend on each other to make the routing rules work. If someone has a better solution, please do post it. The above solves my purpose but I am not happy with it.</p>
<hr />
<p><strong>UPDATE 2</strong>: Thanks to @heyzling, I finally found the solution.
It turns out the ingress had to rewrite the URL and forward the actual API URL that the dotnet code expects to the running Docker container.</p>
<p>Here is the code example:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/use-regex: "true"
labels:
app.kubernetes.io/name: expo-api
name: expo-api
namespace: default
spec:
rules:
- http:
paths:
- backend:
serviceName: expo-api-service
servicePort: 80
path: /expense(/|$)(.*)
</code></pre>
<p>So now you can do both -</p>
<pre><code>curl 35.192.198.231/expense/weatherforecast
curl 35.192.198.231/expense/fakeapi
</code></pre>
<p>it would rewrite and forward the url as - </p>
<pre><code>localhost:80/weatherforecast
localhost:80/fakeapi
</code></pre>
<p>inside the container. Hence it works as expected. This way we <code>DO NOT</code> require <code>app.UsePathBase("/expense")</code> anymore, and neither dotnet core nor the ingress has to know anything about the other.</p>
| Joy |
<p>I'm trying to get pods scheduled on the master node. I successfully untainted the node:</p>
<blockquote>
<p>kubectl taint node mymasternode
node-role.kubernetes.io/master:NoSchedule-</p>
<p>node/mymasternode untainted</p>
</blockquote>
<p>But then, after changing replicas to 4 in the deploy.yaml and applying it, all the pods are scheduled on the nodes that were already workers.</p>
<p>Is there an extra step needed to get pods scheduled on the master node as well?</p>
| Serve Laurijssen | <p>To get pods scheduled on Control plane nodes which have a taint applied (which most Kubernetes distributions will do), you need to add a toleration to your manifests, as described in <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/" rel="nofollow noreferrer">their documentation</a>, rather than untaint the control plane node. Untainting the control plane node can be dangerous as if you run out of resources on that node, your cluster's operation is likely to suffer.</p>
<p>Something like the following should work</p>
<pre class="lang-yaml prettyprint-override"><code> tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
</code></pre>
<p>If you're looking to get a pod scheduled to every node, usually the approach is to create a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">daemonset</a> with that toleration applied.</p>
<p>If you need to have a pod scheduled to a control plane node, without using a daemonset, it's possible to combine a toleration with scheduling information to get it assigned to a specific node. The simplest approach to this is to specify the target <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodename" rel="nofollow noreferrer">node name</a> in the manifest.</p>
<p>This isn't a very flexible approach, so for example if you wanted to assign pods to any control plane node, you could apply a label to those nodes and use a <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector" rel="nofollow noreferrer">node selector</a> combined with the toleration to get the workloads assigned there.</p>
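<p>A minimal sketch of that labelling approach (the label key and value are just examples): label the control plane node, then combine a node selector with the toleration above in the pod template:</p>
<pre class="lang-yaml prettyprint-override"><code># applied beforehand with: kubectl label node mymasternode dedicated=control-plane-workloads
spec:
  template:
    spec:
      nodeSelector:
        dedicated: control-plane-workloads
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
</code></pre>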
| Rory McCune |
<p>Is there any command to revert back to previous configuration on a resource?</p>
<p>For example, if I have a Service kind resource created declaratively, and then I change the ports manually, how can I discard live changes so the original definition that created the resource is reapplied?</p>
<p>Is there any tracking on previous applied configs? it could be even nicer if we could say: reconfigure my service to current appied config - 2 versions.</p>
<p>EDIT: I know deployments have rollout options, but I am wondering about a Kind-wise mechanism</p>
| Whimusical | <p>Since you're asking explicitly about the <code>last-applied-configuration</code> annotation...</p>
<p>Very simple:</p>
<pre><code>kubectl apply view-last-applied deployment/foobar-module | kubectl apply -f -
</code></pre>
<p>Given that <code>apply</code> composes via stdin ever so flexibly — there's no dedicated <code>kubectl apply revert-to-last-applied</code> subcommand, as it'd be redundant reimplementation of the simple pipe above.</p>
<p>One could also suspect, that such a <code>revert</code> built-in could never be made perfect, (as Nick_Kh notices) for complicated reasons. A subcommand named <code>revert</code> evokes a lot of expectation from users which it would never fulfill.</p>
<p>So we get a simplified approximation: a <code>spec.bak</code> saved in resource annotations, ready to be re-<code>apply</code>'d.</p>
| ulidtko |
<p>I have a simple demo Flask application that is deployed to kubernetes using minikube. I am able to access the app using the Services. But I am not able to connect using ingress.</p>
<p><strong>Services.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: services-app-service
spec:
selector:
app: services-app
type: ClusterIP
ports:
- protocol: TCP
port: 5000 # External connection
targetPort: 5000 # Internal connection
</code></pre>
<hr />
<pre><code>D:Path>kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
db ClusterIP None <none> 3306/TCP 120m
kubernetes ClusterIP 10.20.30.1 <none> 443/TCP 3h38m
services-app-service ClusterIP 10.20.30.40 <none> 5000/TCP 18m
</code></pre>
<p><strong>I am able to access the app using minikube.</strong></p>
<pre><code>D:Path>minikube service services-app-service --url
* service default/services-app-service has no node port
* Starting tunnel for service services-app-service.
|-----------|----------------------|-------------|------------------------|
| NAMESPACE | NAME | TARGET PORT | URL |
|-----------|----------------------|-------------|------------------------|
| default | services-app-service | | http://127.0.0.1:50759 |
|-----------|----------------------|-------------|------------------------|
http://127.0.0.1:50759
! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
</code></pre>
<p><strong>Ingress.yaml</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: services-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: mydemo.info
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: services-app-service
port:
number: 5000
</code></pre>
<hr />
<pre><code>D:Path>kubectl get ing
NAME CLASS HOSTS ADDRESS PORTS AGE
services-ingress <none> mydemo.info 192.168.40.1 80 15m
</code></pre>
<p>Is there any additional configuration required to access the app via ingress?</p>
| Rakesh | <p>The <strong>ingress</strong> and <strong>ingress-dns</strong> addons are currently only supported on <strong>Linux</strong>; they are not supported on Windows (a port-forward workaround is sketched after the version list below).
<strong><a href="https://minikube.sigs.k8s.io/docs/drivers/docker/#known-issues" rel="nofollow noreferrer">MoreInfo</a></strong></p>
<p><a href="https://i.stack.imgur.com/58Qc4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/58Qc4.png" alt="enter image description here" /></a></p>
<p><strong>Not Supported on Windows</strong>:</p>
<ul>
<li>minikube version: v1.16.0</li>
<li>minikube version: v1.17.1</li>
</ul>
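<p>As a workaround on Windows (a suggestion on my part, not something from the linked minikube page), you can still reach the Flask service without the ingress addon by port-forwarding it locally:</p>
<pre><code>kubectl port-forward svc/services-app-service 5000:5000
# then open http://localhost:5000/
</code></pre>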
| Rakesh |
<p>This is a bit of a silly setup, but here's what I'm looking at right now:</p>
<ul>
<li>I'm learning Kubernetes</li>
<li>I want to push custom code to my Kubernetes cluster, which means the code must be available as a Docker image available from <strong>some</strong> Docker repository (default is Docker Hub)</li>
<li>While I'm willing to pay for Docker Hub if I have to (though I'd rather avoid it), I have concerns about putting my custom code on a third-party service. <a href="https://www.docker.com/blog/checking-your-current-docker-pull-rate-limits-and-status/" rel="nofollow noreferrer">Sudden rate limits</a>, <a href="https://www.eweek.com/security/docker-hub-breached-impacting-190-000-accounts" rel="nofollow noreferrer">security breaches</a>, <a href="https://www.docker.com/pricing/resource-consumption-updates" rel="nofollow noreferrer">sudden ToS changes</a>, etc</li>
<li>To this end, I'm running my own Docker registry within my Kubernetes cluster</li>
<li>I do not want to configure the Docker clients running on the Kubernetes nodes to trust insecure (HTTP) Docker registries. If I do choose to pull any images from an external registry (e.g. public images like <code>nginx</code> I may pull from Docker Hub instead of hosting locally) then I don't want to be vulnerable to MITM attacks swapping out the image</li>
<li>Ultimately I will have a build tool within the cluster (Jenkins or otherwise) pull my code from git, build the image, and push it to my internal registry. Then all nodes pulling from the registry live within the cluster. Since the registry never needs to receive images from sources outside of the cluster or delivery them to sources outside of the cluster, the registry does not need a NodePort service but can instead be a ClusterIP service.... <em><strong>ultimately</strong></em></li>
<li>Until I have that ultimate setup ready, I'm building images on my local machine and wish to push them to the registry (from the internet)</li>
<li>Because I don't plan on making the registry accessible from the outside world (eventually), I can't utilize Let's Encrypt to generate valid certs for it (even if I were making my Docker registry available to the outside world, <a href="https://github.com/docker/distribution-library-image/issues/96" rel="nofollow noreferrer">I can't use Let's Encrypt, anyway</a> without writing some extra code to utilize certbot or something)</li>
</ul>
<p>My plan is to follow the example in <a href="https://stackoverflow.com/questions/53545732/how-do-i-access-a-private-docker-registry-with-a-self-signed-certificate-using-k">this StackOverflow post</a>: generate a self-signed cert and then launch the Docker registry using that certificate. Then use a DaemonSet to make this cert trusted on all nodes in the cluster.</p>
<p>Now that you have the setup, here's the crux of my issue: within my cluster my Docker registry can be accessed via a simple host name (e.g. "docker-registry"), but outside of my cluster I need to either access it via a node IP address or a domain name pointing at a node or a load balancer.</p>
<p>When generating my self-signed cert I was asked to provide a CN / FQDN for the certificate. I put in "docker-registry" -- the internal host name I plan to utilize. I then tried to access my registry locally to push an image to it:</p>
<pre><code>> docker pull ubuntu
> docker tag ubuntu example.com:5000/my-ubuntu
> docker push example.com:5000/my-ubuntu
The push refers to repository [example.com:5000/my-ubuntu]
Get https://example.com:5000/v2/: x509: certificate is valid for docker-registry, not example.com
</code></pre>
<p>I can generate a certificate for <code>example.com</code> instead of for <code>docker-registry</code>, however I worry that I'll have issues configuring the service or connecting to my registry from within my cluster if I provide my external domain like this instead of an internal host name.</p>
<p>This is why I'm wondering if I can just say that my self-signed cert applies to <em><strong>both</strong></em> <code>example.com</code> <em><strong>and</strong></em> <code>docker-registry</code>. If not, two other acceptable solutions would be:</p>
<ul>
<li>Can I tell the Docker client not to verify the host name and just trust the certificate implicitly?</li>
<li>Can I tell the Docker registry to deliver one of two <em><strong>different</strong></em> certificates based on the host name used to access it?</li>
</ul>
<p>If none of the three options are possible, then I can always just forego pushing images from my local machine and start the process of building images within the cluster -- but I was hoping to put that off until later. I'm learning a lot right now and trying to avoid getting distracted by tangential things.</p>
| stevendesu | <p>Probably the easiest way to solve your problem would be to use Docker's insecure-registry feature. The concern you mention about this in your post (that it would open you up to security risks later) probably won't apply as the feature works by specifying specific IP addresses or host names to trust.</p>
<p>For example you could configure something like</p>
<pre><code>{
"insecure-registries" : [ "10.10.10.10:5000" ]
}
</code></pre>
<p>and the only IP address that your Docker daemons will access without TLS is the one at that host and port number.</p>
<p>If you don't want to do that, then you'll need to get a trusted TLS certificate in place. The issue you mentioned about having multiple names per cert is usually handled with the <a href="https://en.wikipedia.org/wiki/Subject_Alternative_Name" rel="nofollow noreferrer">Subject Alternative Name</a> field in a cert. (indeed Kubernetes uses that feature quite a bit).</p>
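<p>If you do go the self-signed route, you can cover both names in one certificate via Subject Alternative Names. A sketch with OpenSSL 1.1.1+ (host names taken from the question):</p>
<pre><code>openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout registry.key -out registry.crt \
  -subj "/CN=docker-registry" \
  -addext "subjectAltName=DNS:docker-registry,DNS:example.com"
</code></pre>
<p>The Docker client validates the SAN list, so the same certificate would then be accepted both as <code>docker-registry</code> (in-cluster) and as <code>example.com</code> (from outside).</p>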
| Rory McCune |
<p>While testing SSH from one container to another in a K8s environment, I'm getting a strange issue: "matching key found", but the attempt ends with the error "Failed publickey".</p>
<p>I have tried with the "SYS_CHROOT" security capability and with privileged set to true on the pod and the container.</p>
<p>sshd config is below,</p>
<pre><code>PasswordAuthentication no
ChallengeResponseAuthentication no
UsePAM yes
</code></pre>
<p>ssh command output:</p>
<pre><code>[jboss@home]$ ssh -i key.txt [email protected] -p 2025 -v
OpenSSH_7.4p1, OpenSSL 1.0.2k-fips 26 Jan 2017
debug1: Reading configuration data /etc/ssh/ssh_config
debug1: /etc/ssh/ssh_config line 58: Applying options for *
debug1: Connecting to 10.128.2.190 [10.128.2.190] port 2025.
debug1: Connection established.
debug1: key_load_public: No such file or directory
debug1: identity file key.txt type -1
debug1: key_load_public: No such file or directory
debug1: identity file key.txt-cert type -1
debug1: Enabling compatibility mode for protocol 2.0
debug1: Local version string SSH-2.0-OpenSSH_7.4
debug1: Remote protocol version 2.0, remote software version OpenSSH_7.4
debug1: match: OpenSSH_7.4 pat OpenSSH* compat 0x04000000
debug1: Authenticating to 10.128.2.190:2025 as 'root'
debug1: SSH2_MSG_KEXINIT sent
debug1: SSH2_MSG_KEXINIT received
debug1: kex: algorithm: curve25519-sha256
debug1: kex: host key algorithm: ecdsa-sha2-nistp256
debug1: kex: server->client cipher: [email protected] MAC: <implicit> compression: none
debug1: kex: client->server cipher: [email protected] MAC: <implicit> compression: none
debug1: kex: curve25519-sha256 need=64 dh_need=64
debug1: kex: curve25519-sha256 need=64 dh_need=64
debug1: expecting SSH2_MSG_KEX_ECDH_REPLY
debug1: Server host key: ecdsa-sha2-nistp256 SHA256:j5XrSrnXj/IuqIbvYOu234KT/OhQm/8qBiazCtD2G5E
debug1: Host '[10.128.2.190]:2025' is known and matches the ECDSA host key.
debug1: Found key in /opt/jboss/.ssh/known_hosts:2
debug1: rekey after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug1: SSH2_MSG_NEWKEYS received
debug1: rekey after 134217728 blocks
debug1: SSH2_MSG_EXT_INFO received
debug1: kex_input_ext_info: server-sig-algs=<rsa-sha2-256,rsa-sha2-512>
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug1: Authentications that can continue: publickey
debug1: Next authentication method: publickey
debug1: Trying private key: key.txt
debug1: Authentications that can continue: publickey
debug1: No more authentication methods to try.
Permission denied (publickey).
</code></pre>
<p>sshd debug output:</p>
<pre><code>/usr/sbin/sshd -ddd -D -p 2025
debug2: load_server_config: filename /etc/ssh/sshd_config
debug2: load_server_config: done config len = 127
debug2: parse_server_config: config /etc/ssh/sshd_config len 127
debug3: /etc/ssh/sshd_config:2 setting Port 2022
debug3: /etc/ssh/sshd_config:7 setting PasswordAuthentication no
debug3: /etc/ssh/sshd_config:8 setting ChallengeResponseAuthentication no
debug3: /etc/ssh/sshd_config:9 setting UsePAM yes
debug3: /etc/ssh/sshd_config:10 setting SyslogFacility DAEMON
debug3: /etc/ssh/sshd_config:11 setting LogLevel DEBUG3
debug1: sshd version OpenSSH_7.4, OpenSSL 1.0.2k-fips 26 Jan 2017
debug1: private host key #0: ssh-rsa SHA256:bZPN1dSnLtGHMOgf5VJAMYYionA5GJo5fuKS0r4JtuA
debug1: private host key #1: ssh-dss SHA256:IFYQSI7Fn9WCcfIOiSdUvKR5hvJzhQd4u+3l+dNKfnc
debug1: private host key #2: ecdsa-sha2-nistp256 SHA256:j5XrSrnXj/IuqIbvYOu234KT/OhQm/8qBiazCtD2G5E
debug1: private host key #3: ssh-ed25519 SHA256:rO/wKAQObCmbaGu1F2vJMYLTDYr61+TWMsHDVBKJa1Q
debug1: rexec_argv[0]='/usr/sbin/sshd'
debug1: rexec_argv[1]='-ddd'
debug1: rexec_argv[2]='-D'
debug1: rexec_argv[3]='-p'
debug1: rexec_argv[4]='2025'
debug3: oom_adjust_setup
debug1: Set /proc/self/oom_score_adj from 1000 to -1000
debug2: fd 3 setting O_NONBLOCK
debug1: Bind to port 2025 on 0.0.0.0.
Server listening on 0.0.0.0 port 2025.
debug2: fd 4 setting O_NONBLOCK
debug3: sock_set_v6only: set socket 4 IPV6_V6ONLY
debug1: Bind to port 2025 on ::.
Server listening on :: port 2025.
debug3: fd 5 is not O_NONBLOCK
debug1: Server will not fork when running in debugging mode.
debug3: send_rexec_state: entering fd = 8 config len 127
debug3: ssh_msg_send: type 0
debug3: send_rexec_state: done
debug1: rexec start in 5 out 5 newsock 5 pipe -1 sock 8
debug1: inetd sockets after dupping: 3, 3
Connection from 10.131.1.10 port 41462 on 10.128.2.190 port 2025
debug1: Client protocol version 2.0; client software version OpenSSH_7.4
debug1: match: OpenSSH_7.4 pat OpenSSH* compat 0x04000000
debug1: Local version string SSH-2.0-OpenSSH_7.4
debug1: Enabling compatibility mode for protocol 2.0
debug2: fd 3 setting O_NONBLOCK
debug3: ssh_sandbox_init: preparing seccomp filter sandbox
debug2: Network child is on pid 1186
debug3: preauth child monitor started
debug1: SELinux support disabled [preauth]
debug3: privsep user:group 74:74 [preauth]
debug1: permanently_set_uid: 74/74 [preauth]
debug3: ssh_sandbox_child: setting PR_SET_NO_NEW_PRIVS [preauth]
debug3: ssh_sandbox_child: attaching seccomp filter program [preauth]
debug1: list_hostkey_types: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ssh-dss,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]
debug3: send packet: type 20 [preauth]
debug1: SSH2_MSG_KEXINIT sent [preauth]
debug3: receive packet: type 20 [preauth]
debug1: SSH2_MSG_KEXINIT received [preauth]
debug2: local server KEXINIT proposal [preauth]
debug2: KEX algorithms: curve25519-sha256,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1 [preauth]
debug2: host key algorithms: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ssh-dss,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]
debug2: ciphers ctos: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],aes128-cbc,aes192-cbc,aes256-cbc,blowfish-cbc,cast128-cbc,3des-cbc [preauth]
debug2: ciphers stoc: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],aes128-cbc,aes192-cbc,aes256-cbc,blowfish-cbc,cast128-cbc,3des-cbc [preauth]
debug2: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 [preauth]
debug2: MACs stoc: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 [preauth]
debug2: compression ctos: none,[email protected] [preauth]
debug2: compression stoc: none,[email protected] [preauth]
debug2: languages ctos: [preauth]
debug2: languages stoc: [preauth]
debug2: first_kex_follows 0 [preauth]
debug2: reserved 0 [preauth]
debug2: peer client KEXINIT proposal [preauth]
debug2: KEX algorithms: curve25519-sha256,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1,ext-info-c [preauth]
debug2: host key algorithms: [email protected],[email protected],[email protected],ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,[email protected],[email protected],[email protected],ssh-ed25519,rsa-sha2-512,rsa-sha2-256,ssh-rsa,ssh-dss [preauth]
debug2: ciphers ctos: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],aes128-cbc,aes192-cbc,aes256-cbc [preauth]
debug2: ciphers stoc: [email protected],aes128-ctr,aes192-ctr,aes256-ctr,[email protected],[email protected],aes128-cbc,aes192-cbc,aes256-cbc [preauth]
debug2: MACs ctos: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 [preauth]
debug2: MACs stoc: [email protected],[email protected],[email protected],[email protected],[email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512,hmac-sha1 [preauth]
debug2: compression ctos: none,[email protected],zlib [preauth]
debug2: compression stoc: none,[email protected],zlib [preauth]
debug2: languages ctos: [preauth]
debug2: languages stoc: [preauth]
debug2: first_kex_follows 0 [preauth]
debug2: reserved 0 [preauth]
debug1: kex: algorithm: curve25519-sha256 [preauth]
debug1: kex: host key algorithm: ecdsa-sha2-nistp256 [preauth]
debug1: kex: client->server cipher: [email protected] MAC: <implicit> compression: none [preauth]
debug1: kex: server->client cipher: [email protected] MAC: <implicit> compression: none [preauth]
debug1: kex: curve25519-sha256 need=64 dh_need=64 [preauth]
debug3: mm_request_send entering: type 120 [preauth]
debug3: mm_request_receive_expect entering: type 121 [preauth]
debug3: mm_request_receive entering [preauth]
debug3: mm_request_receive entering
debug3: monitor_read: checking request 120
debug3: mm_request_send entering: type 121
debug1: kex: curve25519-sha256 need=64 dh_need=64 [preauth]
debug3: mm_request_send entering: type 120 [preauth]
debug3: mm_request_receive_expect entering: type 121 [preauth]
debug3: mm_request_receive entering [preauth]
debug3: mm_request_receive entering
debug3: monitor_read: checking request 120
debug3: mm_request_send entering: type 121
debug1: expecting SSH2_MSG_KEX_ECDH_INIT [preauth]
debug3: receive packet: type 30 [preauth]
debug3: mm_key_sign entering [preauth]
debug3: mm_request_send entering: type 6 [preauth]
debug3: mm_key_sign: waiting for MONITOR_ANS_SIGN [preauth]
debug3: mm_request_receive_expect entering: type 7 [preauth]
debug3: mm_request_receive entering [preauth]
debug3: mm_request_receive entering
debug3: monitor_read: checking request 6
debug3: mm_answer_sign
debug3: mm_answer_sign: hostkey proof signature 0x557cd5190710(101)
debug3: mm_request_send entering: type 7
debug2: monitor_read: 6 used once, disabling now
debug3: send packet: type 31 [preauth]
debug3: send packet: type 21 [preauth]
debug2: set_newkeys: mode 1 [preauth]
debug1: rekey after 134217728 blocks [preauth]
debug1: SSH2_MSG_NEWKEYS sent [preauth]
debug1: expecting SSH2_MSG_NEWKEYS [preauth]
debug3: send packet: type 7 [preauth]
debug3: receive packet: type 21 [preauth]
debug1: SSH2_MSG_NEWKEYS received [preauth]
debug2: set_newkeys: mode 0 [preauth]
debug1: rekey after 134217728 blocks [preauth]
debug1: KEX done [preauth]
debug3: receive packet: type 5 [preauth]
debug3: send packet: type 6 [preauth]
debug3: receive packet: type 50 [preauth]
debug1: userauth-request for user root service ssh-connection method none [preauth]
debug1: attempt 0 failures 0 [preauth]
debug3: mm_getpwnamallow entering [preauth]
debug3: mm_request_send entering: type 8 [preauth]
debug3: mm_getpwnamallow: waiting for MONITOR_ANS_PWNAM [preauth]
debug3: mm_request_receive_expect entering: type 9 [preauth]
debug3: mm_request_receive entering [preauth]
debug3: mm_request_receive entering
debug3: monitor_read: checking request 8
debug3: mm_answer_pwnamallow
debug3: Trying to reverse map address 10.131.1.10.
debug2: parse_server_config: config reprocess config len 127
debug3: mm_answer_pwnamallow: sending MONITOR_ANS_PWNAM: 1
debug3: mm_request_send entering: type 9
debug2: monitor_read: 8 used once, disabling now
debug2: input_userauth_request: setting up authctxt for root [preauth]
debug3: mm_start_pam entering [preauth]
debug3: mm_request_send entering: type 100 [preauth]
debug3: mm_inform_authserv entering [preauth]
debug3: mm_request_send entering: type 4 [preauth]
debug3: mm_inform_authrole entering [preauth]
debug3: mm_request_send entering: type 80 [preauth]
debug2: input_userauth_request: try method none [preauth]
debug3: userauth_finish: failure partial=0 next methods="publickey" [preauth]
debug3: send packet: type 51 [preauth]
debug3: mm_request_receive entering
debug3: monitor_read: checking request 100
debug1: PAM: initializing for "root"
debug1: PAM: setting PAM_RHOST to "ip-10-131-1-10.ap-south-1.compute.internal"
debug1: PAM: setting PAM_TTY to "ssh"
debug2: monitor_read: 100 used once, disabling now
debug3: mm_request_receive entering
debug3: monitor_read: checking request 4
debug3: mm_answer_authserv: service=ssh-connection, style=
debug2: monitor_read: 4 used once, disabling now
debug3: mm_request_receive entering
debug3: monitor_read: checking request 80
debug3: mm_answer_authrole: role=
debug2: monitor_read: 80 used once, disabling now
debug3: receive packet: type 50 [preauth]
debug1: userauth-request for user root service ssh-connection method publickey [preauth]
debug1: attempt 1 failures 0 [preauth]
debug2: input_userauth_request: try method publickey [preauth]
debug3: userauth_pubkey: have signature for RSA SHA256:/7PPUU+YPuJeKNXZdPoShSqmlfL+rfae/Fb471C0Dyc [preauth]
debug3: mm_key_allowed entering [preauth]
debug3: mm_request_send entering: type 22 [preauth]
debug3: mm_key_allowed: waiting for MONITOR_ANS_KEYALLOWED [preauth]
debug3: mm_request_receive_expect entering: type 23 [preauth]
debug3: mm_request_receive entering [preauth]
debug3: mm_request_receive entering
debug3: monitor_read: checking request 22
debug3: mm_answer_keyallowed entering
debug3: mm_answer_keyallowed: key_from_blob: 0x557cd51913e0
debug1: temporarily_use_uid: 0/0 (e=0/0)
debug1: trying public key file /root/.ssh/authorized_keys
debug1: fd 4 clearing O_NONBLOCK
debug1: matching key found: file /root/.ssh/authorized_keys, line 1 RSA SHA256:/7PPUU+YPuJeKNXZdPoShSqmlfL+rfae/Fb471C0Dyc
debug1: restore_uid: 0/0
debug3: mm_answer_keyallowed: key 0x557cd51913e0 is allowed
debug3: mm_request_send entering: type 23
debug3: mm_key_verify entering [preauth]
debug3: mm_request_send entering: type 24 [preauth]
debug3: mm_key_verify: waiting for MONITOR_ANS_KEYVERIFY [preauth]
debug3: mm_request_receive_expect entering: type 25 [preauth]
debug3: mm_request_receive entering [preauth]
debug3: mm_request_receive entering
debug3: monitor_read: checking request 24
debug3: mm_answer_keyverify: key 0x557cd51912c0 signature unverified
debug3: mm_request_send entering: type 25
Failed publickey for root from 10.131.1.10 port 41462 ssh2: RSA SHA256:/7PPUU+YPuJeKNXZdPoShSqmlfL+rfae/Fb471C0Dyc
linux_audit_write_entry failed: Operation not permitted
debug1: do_cleanup
debug1: PAM: cleanup
debug3: PAM: sshpam_thread_cleanup entering
debug1: Killing privsep child 1186
linux_audit_write_entry failed: Operation not permitted
</code></pre>
| Karthik Murugan | <p>After adding AUDIT_WRITE capability to the container, it started working. Apparently both SYS_CHROOT and AUDIT_WRITE are required for the container running sshd to work</p>
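<p>For reference, a minimal sketch of how both capabilities could be granted in the pod spec (container name and image are placeholders, not taken from the original manifest):</p>
<pre><code>spec:
  containers:
  - name: sshd
    image: my-sshd-image:latest   # placeholder image
    securityContext:
      capabilities:
        add: ["SYS_CHROOT", "AUDIT_WRITE"]
</code></pre>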
| Karthik Murugan |
<p>I've got a username and password, how do I authenticate kubectl with them?</p>
<p>Which command do I run?</p>
<p>I've read through: <a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/authorization/</a> and <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/authentication/</a> though can not find any relevant information in there for this case.</p>
<hr>
<pre><code>kubectl config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif
</code></pre>
<p><a href="https://kubernetes-v1-4.github.io/docs/user-guide/kubectl/kubectl_config_set-credentials/" rel="nofollow noreferrer">https://kubernetes-v1-4.github.io/docs/user-guide/kubectl/kubectl_config_set-credentials/</a></p>
<hr>
<p>The above does not seem to work:
<code>kubectl get pods<br>
Error from server (Forbidden): pods is forbidden: User "client" cannot list pods in the namespace "default": Unknown user "client"
</code></p>
| Chris Stryczynski | <p>Kubernetes provides a number of different authentication mechanisms. Providing a username and password directly to the cluster (as opposed to using an OIDC provider) would indicate that you're using Basic authentication, which hasn't been the default option for a number of releases.</p>
<p>The syntax you've listed appears right, assuming that the cluster supports basic authentication.</p>
<p>The error you're seeing is similar to the one <a href="https://stackoverflow.com/questions/49075723/what-does-unknown-user-client-mean">here</a> which may suggest that the cluster you're using doesn't currently support the authentication method you're using.</p>
<p>Additional information about what Kubernetes distribution and version you're using would make it easier to provide a better answer, as there is a lot of variety in how k8s handles authentication.</p>
| Rory McCune |
<p>I am having an issue using grep with a regular expression.
I am trying to read and filter some logs on a server.</p>
<pre><code>kubectl logs -lname=ambassador --tail=40 | grep ACCESS | grep '" 200 ' | grep ' /api/entitlements '
</code></pre>
<p>so this returns some logs and it is fine but I need to search for all api including entitlements in their path. I tried this:</p>
<pre><code>kubectl logs -lname=ambassador --tail=40 | grep ACCESS | grep '" 200 ' | grep ' *entitlements* '
</code></pre>
<p>but nothing returns</p>
<p>can anyone help?</p>
| Learner | <p>You may use <code>awk</code> to avoid multiple <code>grep</code> command and do all filters in one command:</p>
<pre><code>kubectl logs -lname=ambassador --tail=40 | awk '/ACCESS/ && /" 200 / && /entitlements/'
</code></pre>
<p><code>/substr/</code> searches for regex pattern <code>substr</code> in each line of <code>awk</code>. <code>&&</code> ensures that all the given patterns are found in same line.</p>
| anubhava |
<p>I am setting up a <code>kind</code> cluster</p>
<pre><code>Creating cluster "kind" ...
✓ Ensuring node image (kindest/node:v1.22.1) 🖼
✓ Preparing nodes 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
✓ Waiting ≤ 5m0s for control-plane = Ready ⏳
• Ready after 0s 💚
</code></pre>
<p>and then trying to install ECK operator as per <a href="https://www.elastic.co/guide/en/cloud-on-k8s/1.6/k8s-deploy-eck.html" rel="noreferrer">instructions</a> about version 1.6</p>
<pre><code>kubectl apply -f https://download.elastic.co/downloads/eck/1.6.0/all-in-one.yaml
</code></pre>
<p>However the process fails, as if <code>kind</code> does not support CRDs...Is this the case?</p>
<pre><code>namespace/elastic-system created
serviceaccount/elastic-operator created
secret/elastic-webhook-server-cert created
configmap/elastic-operator created
clusterrole.rbac.authorization.k8s.io/elastic-operator created
clusterrole.rbac.authorization.k8s.io/elastic-operator-view created
clusterrole.rbac.authorization.k8s.io/elastic-operator-edit created
clusterrolebinding.rbac.authorization.k8s.io/elastic-operator created
service/elastic-webhook-server created
statefulset.apps/elastic-operator created
unable to recognize "https://download.elastic.co/downloads/eck/1.6.0/all-in-one.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
unable to recognize "https://download.elastic.co/downloads/eck/1.6.0/all-in-one.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
unable to recognize "https://download.elastic.co/downloads/eck/1.6.0/all-in-one.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
unable to recognize "https://download.elastic.co/downloads/eck/1.6.0/all-in-one.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
unable to recognize "https://download.elastic.co/downloads/eck/1.6.0/all-in-one.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
unable to recognize "https://download.elastic.co/downloads/eck/1.6.0/all-in-one.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
unable to recognize "https://download.elastic.co/downloads/eck/1.6.0/all-in-one.yaml": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
unable to recognize "https://download.elastic.co/downloads/eck/1.6.0/all-in-one.yaml": no matches for kind "ValidatingWebhookConfiguration" in version "admissionregistration.k8s.io/v1beta1"
</code></pre>
| pkaramol | <p>The problem you're seeing here isn't related to kind; instead, the manifest you're trying to apply is using outdated API versions, which were removed in Kubernetes 1.22.</p>
<p>Specifically, the manifest is using the v1beta1 version of the CustomResourceDefinition and ValidatingWebhookConfiguration objects:</p>
<pre><code>apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
</code></pre>
<p>As noted in <a href="https://kubernetes.io/blog/2021/07/14/upcoming-changes-in-kubernetes-1-22/" rel="noreferrer">this post</a> that was one of the versions removed when 1.22 went in.</p>
<p>There's a couple of fixes for this. Firstly you could get the manifest and just change the customresourcedefinitions to use the new API version <code>apiextensions.k8s.io/v1</code> and validatingadmissionwebhook to use <code>admissionregistration.k8s.io/v1</code>.</p>
<p>The other fix would be to use an older version of Kubernetes. If you use 1.21 or earlier, that issue shouldn't occur, so something like <code>kind create cluster --image=kindest/node:v1.21.2</code> should work.</p>
| Rory McCune |
<p>I am new to cluster container management, and this question is the basis for all the freshers over here.</p>
<p>I read some documentation, but still, my understanding is not too clear, so any leads.. helping to understand?</p>
<ol>
<li>Somewhere it is mentioned that Minikube is used to run Kubernetes locally. So if we want cluster management across my four-node Raspberry Pi setup, then Minikube is not an option?</li>
<li>Does Minikube support only single-node systems?</li>
<li>Docker Compose is a set of instructions and a YAML file to configure and start multiple Docker containers. Can we use this to start containers on different hosts? Then for simple orchestration where I need to call a container on the second host, I wouldn't need any cluster management, right?</li>
<li>What is the link between Docker Swarm and Kubernetes? Both are independent cluster management systems. Is it efficient to use Kubernetes on Raspberry Pi? Any issues, because I was told that Kubernetes on a single node takes up all the memory and CPU? Is that true?</li>
<li>Is there any other cluster management option for Raspberry Pi?</li>
</ol>
<p>I think answers to these 4-5 questions will help me understand this better.</p>
| stackjohnny | <p>Presuming that your goal here is to run a set of containers over a number of different Raspberry Pi based nodes:</p>
<ul>
<li><p>Minikube isn't really appropriate. This starts a single virtual machine on a Windows, MacOS or Linux and installs a Kubernetes cluster into it. It's generally used by developers to quickly start-up a cluster on their laptops or desktops for development and testing purposes.</p></li>
<li><p>Docker Compose is a system for managing sets of related containers. So for example if you had a web server and database that you wanted to manage together you could put them in a single Docker Compose file.</p></li>
<li><p>Docker Swarm is a system for managing sets of containers across multiple hosts. It's essentially an alternative to Kubernetes. It has fewer features than Kubernetes, but it is much simpler to set up.</p></li>
</ul>
<p>If you want a really simple multi-node Container cluster, I'd say that Docker swarm is a reasonable choice. If you explicitly want to experiment with Kubernetes, I'd say that kubeadm is a good option here. Kubernetes in general has higher resource requirements than Docker Swarm, so it could be somewhat less suited to it, although I know people have successfully run Kubernetes clusters on Raspberry Pis.</p>
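<p>For what it's worth, a minimal Docker Swarm setup is only a couple of commands (the IP address below is a placeholder for your manager Pi):</p>
<pre><code># on the Pi you want to act as the manager
docker swarm init --advertise-addr 192.168.1.10

# on each worker Pi, using the join token printed by the command above
docker swarm join --token <worker-token> 192.168.1.10:2377
</code></pre>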
| Rory McCune |
<p>I'm trying to deploy my NodeJS application to EKS and run 3 pods with exactly the same container.</p>
<p>Here's the error message:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
cm-deployment-7c86bb474c-5txqq 0/1 Pending 0 18s
cm-deployment-7c86bb474c-cd7qs 0/1 ImagePullBackOff 0 18s
cm-deployment-7c86bb474c-qxglx 0/1 ImagePullBackOff 0 18s
public-api-server-79b7f46bf9-wgpk6 0/1 ImagePullBackOff 0 2m30s
$ kubectl describe pod cm-deployment-7c86bb474c-5txqq
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 23s (x4 over 2m55s) default-scheduler 0/3 nodes are available: 3 Insufficient pods.
</code></pre>
<p>So it says that <code>0/3 nodes are available</code>. However, if I run
<code>kubectl get nodes --watch</code>:</p>
<pre><code>$ kubectl get nodes --watch
NAME STATUS ROLES AGE VERSION
ip-192-168-163-73.ap-northeast-2.compute.internal Ready <none> 6d7h v1.14.6-eks-5047ed
ip-192-168-172-235.ap-northeast-2.compute.internal Ready <none> 6d7h v1.14.6-eks-5047ed
ip-192-168-184-236.ap-northeast-2.compute.internal Ready <none> 6d7h v1.14.6-eks-5047ed
</code></pre>
<p>3 nodes are shown as Ready.</p>
<p>here are my configurations:</p>
<pre><code>aws-auth-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: aws-auth
namespace: kube-system
data:
mapRoles: |
- rolearn: [MY custom role ARN]
username: system:node:{{EC2PrivateDNSName}}
groups:
- system:bootstrappers
- system:nodes
</code></pre>
<pre><code>deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: cm-deployment
spec:
replicas: 3
selector:
matchLabels:
app: cm-literal
template:
metadata:
name: cm-literal-pod
labels:
app: cm-literal
spec:
containers:
- name: cm
image: docker.io/cjsjyh/public_test:1
imagePullPolicy: Always
ports:
- containerPort: 80
#imagePullSecrets:
# - name: regcred
env:
[my environment variables]
</code></pre>
<p>I applied both .yaml files</p>
<p>How can I solve this?
Thank you</p>
| J.S.C | <p>My guess, without running the manifests you've got, is that the image tag <code>1</code> on your image doesn't exist, so you're getting <code>ImagePullBackOff</code>, which usually means that the container runtime can't find the image to pull.</p>
<p>Looking at the Docker Hub <a href="https://hub.docker.com/r/cjsjyh/public_test/tags" rel="nofollow noreferrer">page</a> there's no <code>1</code> tag there, just <code>latest</code>. </p>
<p>So, either removing the tag or replacing <code>1</code> with <code>latest</code> may resolve your issue.</p>
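<p>For illustration, the relevant part of your Deployment would then look like this:</p>
<pre><code>      containers:
      - name: cm
        image: docker.io/cjsjyh/public_test:latest
        imagePullPolicy: Always
</code></pre>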
| Rory McCune |
<p>I have a K8S service (app-filestash-testing) running as follows:</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
app-filestash-testing ClusterIP 10.111.128.18 <none> 10000/TCP 18h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 20h
</code></pre>
<p>I used the following yaml file to create an Ingress trying reach this service:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: app-filestash-testing
spec:
rules:
- host: www.masternode.com
http:
paths:
- backend:
serviceName: app-filestash-testing
servicePort: 10000
</code></pre>
<p>In the <em>/etc/hosts</em> file, I made this change (I used the worker node public IP):</p>
<pre><code>127.0.0.1 localhost
xx.xxx.xxx.xxx www.masternode.com
</code></pre>
<p>However, when I checked the Ingress, I saw that the Ingress port is 80.</p>
<pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE
app-filestash-testing nginx www.masternode.com 80 14h
</code></pre>
<p>Currently the service is running and listening on port 10000, but the Ingress port is 80.</p>
<p>I am just wondering: is there any method/setting to change the port number of the Ingress to 10000? How can I reach this service through the Ingress? Is it possible to set the port number in the <em>/etc/hosts</em> file?</p>
<p>Thanks.</p>
| maantarng | <p>From: <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress</a></p>
<blockquote>
<p>An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.</p>
</blockquote>
<p>NodePort might be what you are looking for. More information and options are documented here: <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types</a></p>
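<p>As a rough sketch, a NodePort Service for your existing pods could look something like this (the selector and nodePort value are assumptions you would need to adapt):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: app-filestash-testing-nodeport
spec:
  type: NodePort
  selector:
    app: app-filestash-testing   # must match your pod labels
  ports:
  - port: 10000
    targetPort: 10000
    nodePort: 30100              # default allowed range is 30000-32767
</code></pre>
<p>The service would then be reachable on every node at <code>xx.xxx.xxx.xxx:30100</code>, without going through the Ingress at all.</p>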
| Michael Leaney |
<p>I am running processes in Jupyter (lab) in a JupyterHub-created container running on Kubernetes.</p>
<p>The processes are too RAM-intensive, to the extent that the pod sometimes gets evicted due to an OOM.</p>
<p><strong>Without modifications to my code/algorithms etc., how can I in the general case tell Jupyter(Lab) to use swap space/virtual memory when a predefined RAM limit is reached?</strong></p>
<p>PS This question has no answer mentioning swap space - <a href="https://stackoverflow.com/questions/58400437/jupyter-lab-freezes-the-computer-when-out-of-ram-how-to-prevent-it">Jupyter Lab freezes the computer when out of RAM - how to prevent it?</a></p>
| jtlz2 | <p>You can't actively control swap space.</p>
<p>In Kubernetes specifically, you just don't supply a memory limit for the Kubernetes pod.
That would at least not kill it because of OOM (out of memory). However, I doubt it would work well, because this will make the whole node run out of RAM, then swap and become extremely slow, and thus at some point be declared dead by the Kubernetes master. That in turn will cause the Pod to be rescheduled somewhere else and start all over again.</p>
<p>A more scalable approach for you might be to use out-of-core algorithms, that can operate on disk directly (so just attach a PV/PVC to your pod), but that depends on the algorithm or process you're using.</p>
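<p>A minimal sketch of attaching scratch space for such out-of-core processing (claim name, size and mount path are assumptions):</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: scratch-space
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 50Gi
---
# fragment of the notebook pod spec (not a complete manifest):
#   containers:
#   - name: notebook
#     volumeMounts:
#     - name: scratch
#       mountPath: /scratch
#   volumes:
#   - name: scratch
#     persistentVolumeClaim:
#       claimName: scratch-space
</code></pre>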
| Thomas Jungblut |
<p>We're trying run a PostgreSQL in a minikube (1.18.1) Kubernetes (1.16.15) cluster so that the database is reset on every pod redeployment.</p>
<p>In our deployment template, we have:</p>
<pre class="lang-yaml prettyprint-override"><code>containers:
- name: mock-db
image: postgres:13
imagePullPolicy: "IfNotPresent"
ports:
- name: postgres
containerPort: 5432
volumeMounts:
- mountPath: /docker-entrypoint-initdb.d
name: postgresql-init
- mountPath: /var/lib/postgresql/data
name: postgresql-data
volumes:
- name: postgresql-init
configMap:
name: "mock-db-init-scripts"
optional: false
- name: postgresql-data
emptyDir:
medium: Memory
</code></pre>
<p>Based on the <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">docs</a>, this should work as intended:</p>
<blockquote>
<p>When a Pod is removed from a node for any reason, the data in the emptyDir is deleted permanently.</p>
</blockquote>
<p>However, after <code>helm uninstall</code> and subsequent <code>helm upgrade --install</code>, I'm getting this in the container logs:</p>
<pre><code>PostgreSQL Database directory appears to contain a database; Skipping initialization
</code></pre>
<p>So apparently, that volume is <em>not</em> being cleared. Why? Do I need to change something in the configuration?</p>
| Raphael | <p>Here is what I missed:</p>
<ul>
<li>the init script failed and</li>
<li>the container immediately restarted.</li>
</ul>
<p>No problem with the volume, at all.</p>
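<p>For anyone hitting the same symptom, the failing init script and the restart are easy to confirm with (pod name is a placeholder):</p>
<pre><code># logs of the current container instance
kubectl logs <pod-name> -c mock-db

# logs of the previous, crashed instance
kubectl logs <pod-name> -c mock-db --previous

# restart count and last state of the container
kubectl describe pod <pod-name>
</code></pre>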
| Raphael |
<p>I am trying to allow some users in my org to forward ports to our production namespace in Kubernetes. However, I don't want them to be able to forward ports to all services. I want to restrict access to only certain services. Is this possible?</p>
<pre><code>kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: allow-port-forward-for-deployment-a
rules:
- apiGroups: [""]
resources: ["pods/portforward"]
verbs: ["get", "list", "create"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: allow-port-forward-for-deployment-a
namespace: production
subjects:
- kind: User
name: "[email protected]"
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: allow-port-forward-for-deployment-a
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>The above set up allows all services, but I don't want that.</p>
| VBoi | <p>I believe you can't. <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#referring-to-resources" rel="noreferrer">According to the docs</a></p>
<blockquote>
<p>Resources can also be referred to by name for certain requests through
the <code>resourceNames</code> list. When specified, requests can be restricted to
individual instances of a resource. To restrict a subject to only
“get” and “update” a single configmap, you would write:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: default
name: configmap-updater
rules:
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["my-configmap"]
verbs: ["update", "get"]
</code></pre>
<p><strong>Note that create requests
cannot be restricted by resourceName, as the object name is not known
at authorization time. The other exception is deletecollection.</strong></p>
</blockquote>
<p>Since you want to give the user permissions to <strong>create</strong> the forward ports, I don't think you can.</p>
| Jose Armesto |
<p>I have a Kubernetes cluster with Istio installed. I have two pods, for example sleep1 and sleep2 (containers with curl installed). I want to configure Istio to permit traffic from sleep1 to www.google.com and forbid traffic from sleep2 to www.google.com.</p>
<p>So, I created ServiceEntry:</p>
<pre><code>---
apiVersion: networking.istio.io/v1alpha3
kind: ServiceEntry
metadata:
name: google
spec:
hosts:
- www.google.com
- google.com
ports:
- name: http-port
protocol: HTTP
number: 80
resolution: DNS
</code></pre>
<p>Gateway</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: istio-egressgateway
spec:
selector:
istio: egressgateway
servers:
- port:
number: 80
name: http-port
protocol: HTTP
hosts:
- "*"
</code></pre>
<p>two virtualServices (mesh->egress, egress->google)</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: mesh-to-egress
spec:
hosts:
- www.google.com
- google.com
gateways:
- mesh
http:
- match:
- gateways:
- mesh
port: 80
route:
- destination:
host: istio-egressgateway.istio-system.svc.cluster.local
port:
number: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: egress-to-google-int
spec:
hosts:
- www.google.com
- google.com
gateways:
- istio-egressgateway
http:
- match:
- gateways:
- istio-egressgateway
port: 80
route:
- destination:
host: google.com
port:
number: 80
weight: 100
</code></pre>
<p>As a result, I can curl Google from both pods.</p>
<p>And the question again: can I permit traffic from sleep1 to www.google.com and forbid traffic from sleep2 to www.google.com? I know that this is possible to do with Kubernetes NetworkPolicy and black/white lists (<a href="https://istio.io/docs/tasks/policy-enforcement/denial-and-list/" rel="nofollow noreferrer">https://istio.io/docs/tasks/policy-enforcement/denial-and-list/</a>), but both methods forbid (permit) traffic only to specific IPs, or maybe I missed something?</p>
| Gcinbax | <p>You can create different service accounts for <code>sleep1</code> and <code>sleep2</code>. Then you <a href="https://istio.io/docs/tasks/security/authz-http/#enforcing-service-level-access-control" rel="nofollow noreferrer">create an RBAC policy</a> to limit access to the <code>istio-egressgateway</code> policy, so <code>sleep2</code> will not be able to access any egress traffic through the egress gateway. This should work with forbidding any egress traffic from the cluster, that does not originate from the egress gateway. See <a href="https://istio.io/docs/tasks/traffic-management/egress/egress-gateway/#additional-security-considerations" rel="nofollow noreferrer">https://istio.io/docs/tasks/traffic-management/egress/egress-gateway/#additional-security-considerations</a>.</p>
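<p>As an illustration, giving each workload its own identity could look roughly like this (the Istio-side policy itself is described in the linked docs):</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: sleep1
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: sleep2
---
# fragment of each Deployment's pod template:
#   spec:
#     serviceAccountName: sleep1   # or sleep2
</code></pre>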
<p>If you want to allow <code>sleep2</code> to access other services, but not <code>www.google.com</code>, you can use Mixer rules and handlers, see <a href="https://istio.io/blog/2018/egress-monitoring-access-control/#access-control-by-mixer-policy-checks-part-2" rel="nofollow noreferrer">this blog post</a>. It shows how to allow a certain URL path to a specific service account.</p>
| Vadim Eisenberg |
<p>I'm trying to accomplish a VERY common task for an application: </p>
<p>Assign a certificate and secure it with TLS/HTTPS.</p>
<p>I've spent nearly a day scouring thru documentation and trying multiple different tactics to get this working but nothing is working for me.</p>
<p>Initially I setup nginx-ingress on EKS using Helm by following the docs here: <a href="https://github.com/nginxinc/kubernetes-ingress" rel="noreferrer">https://github.com/nginxinc/kubernetes-ingress</a>. I tried to get the sample app working (cafe) using the following config:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: cafe-ingress
spec:
tls:
- hosts:
- cafe.example.com
secretName: cafe-secret
rules:
- host: cafe.example.com
http:
paths:
- path: /tea
backend:
serviceName: tea-svc
servicePort: 80
- path: /coffee
backend:
serviceName: coffee-svc
servicePort: 80
</code></pre>
<p>The ingress and all supported services/deploys worked fine but there's one major thing missing: the ingress doesn't have an associated address/ELB:</p>
<pre><code>NAME HOSTS ADDRESS PORTS AGE
cafe-ingress cafe.example.com 80, 443 12h
</code></pre>
<p>Service LoadBalancers create ELB resources, i.e.:</p>
<pre><code>testnodeapp LoadBalancer 172.20.4.161 a64b46f3588fe... 80:32107/TCP 13h
</code></pre>
<p>However, the Ingress is not creating an address. How do I get an Ingress controller exposed externally on EKS to handle TLS/HTTPS? </p>
| Ken J | <p>I've replicated every step necessary to get up and running on EKS with a secure ingress. I hope this helps anybody else that wants to get their application on EKS quickly and securely.</p>
<p>To get up and running on EKS:</p>
<ol>
<li><p>Deploy EKS using the CloudFormation template <a href="https://gist.github.com/kjenney/356cf4bb029ec0bb7f78fad8230530d5" rel="noreferrer">here</a>: Keep in mind that I've restricted access with the CidrIp: 193.22.12.32/32. Change this to suit your needs.</p></li>
<li><p>Install Client Tools. Follow the guide <a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html" rel="noreferrer">here</a>.</p></li>
<li>Configure the client. Follow the guide <a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#eks-configure-kubectl" rel="noreferrer">here</a>.</li>
<li>Enable the worker nodes. Follow the guide <a href="https://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html" rel="noreferrer">here</a>.</li>
</ol>
<p>You can verify that the cluster is up and running and you are pointing to it by running:</p>
<p><code>kubectl get svc</code></p>
<p>Now you launch a test application with the nginx ingress.</p>
<p>NOTE: <strong><em>Everything is placed under the ingress-nginx namespace. Ideally this would be templated to build under different namespaces, but for the purposes of this example it works.</em></strong></p>
<p>Deploy nginx-ingress:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml
</code></pre>
<p>Fetch rbac.yml from <a href="https://gist.github.com/kjenney/e1cd655ec2c646c7a700eac7b6488422" rel="noreferrer">here</a>. Run:</p>
<p><code>kubectl apply -f rbac.yml</code></p>
<p>Have a certificate and key ready for testing. Create the necessary secret like so:</p>
<p><code>kubectl create secret tls cafe-secret --key mycert.key --cert mycert.crt -n ingress-nginx</code></p>
<p>Copy coffee.yml from <a href="https://gist.github.com/kjenney/c0f9acc33bf6b9b38c9e1cc3102efb1b" rel="noreferrer">here</a>. Copy coffee-ingress.yml from <a href="https://gist.github.com/kjenney/cd980c01f7b1008b3ef61c99a6453bc8" rel="noreferrer">here</a>. Update the domain you want to run this under. Run them like so</p>
<pre><code>kubectl apply -f coffee.yaml
kubectl apply -f coffee-ingress.yaml
</code></pre>
<p>Update the CNAME for your domain to point to the ADDRESS for:</p>
<p><code>kubectl get ing -n ingress-nginx -o wide</code></p>
<p>Refresh DNS cache and test the domain. You should get a secure page with request stats. I've replicated this multiple times so if it fails to work for you check the steps, config, and certificate. Also, check the logs on the nginx-ingress-controller* pod. </p>
<p><code>kubectl logs pod/nginx-ingress-controller-*********** -n ingress-nginx</code></p>
<p>That should give you some indication of what's wrong.</p>
| Ken J |
<p>I setup my Kubernetes cluster using kops, and I did so from local machine. So my <code>.kube</code> directory is stored on my local machine, but I setup <code>kops</code> for state storage in <code>s3</code>.</p>
<p>I'm in the process of setting up my CI server now, and I want to run my <code>kubectl</code> commands from that box. How do I go about importing the existing state to that server?</p>
| djt | <p>To run <code>kubectl</code> command, you will need the cluster's apiServer URL and related credentials for authentication. Those data are by convention stored in <code>~/.kube/config</code> file. You may also view it via <code>kubectl config view</code> command.</p>
<p>In order to run <code>kubectl</code> on your CI server, you need to make sure the <code>~/.kube/config</code> file contains all the information that <code>kubectl</code> client needs. </p>
<p>With kops, a simple naive solution is to:</p>
<p>1) install kops, kubectl on your CI server</p>
<p>2) config the AWS access credential on your CI server (either via IAM Role or simply env vars), make sure it has access to your s3 state store path</p>
<p>3) set env var for kops to access your cluster:</p>
<pre><code> export NAME=${YOUR_CLUSTER_NAME}
export KOPS_STATE_STORE=s3://${YOUR_CLUSTER_KOPS_STATE_STORE}
</code></pre>
<p>4) Use kops export command to get the kubecfg needed for running kubectl</p>
<pre><code> kops export kubecfg ${YOUR_CLUSTER_NAME}
</code></pre>
<p>see <a href="https://github.com/kubernetes/kops/blob/master/docs/cli/kops_export.md" rel="noreferrer">https://github.com/kubernetes/kops/blob/master/docs/cli/kops_export.md</a></p>
<p>Now the <code>~/.kube/config</code> file on your CI server should contain all the information <code>kubectl</code> needs to access your cluster.</p>
<p>Note that this will use the default admin account on your CI server. To implement a more secure CI/CD environment, you should create a service account bound to a restricted permission scope (a single namespace or specific resource types, for example), and place its credentials on your CI server machine.</p>
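<p>A hedged sketch of such a scoped service account (namespace, resources and verbs are assumptions to adapt):</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: ci-deployer
  namespace: my-app
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ci-deployer
  namespace: my-app
rules:
- apiGroups: ["", "apps"]
  resources: ["deployments", "services", "pods"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ci-deployer
  namespace: my-app
subjects:
- kind: ServiceAccount
  name: ci-deployer
  namespace: my-app
roleRef:
  kind: Role
  name: ci-deployer
  apiGroup: rbac.authorization.k8s.io
</code></pre>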
| Chaoyu |
<p>I am following <a href="https://cloud.google.com/run/docs/quickstarts/prebuilt-deploy-gke" rel="nofollow noreferrer">this</a> tutorial to perform a so called quickstart on <code>gcp</code>'s <code>cloud run</code> and experiment a bit with it.</p>
<p>Leaving aside some delays and inconsistencies between the announced and the actual service availability, the scripted steps went well.</p>
<p>What I want to ask (couldn't find any documentation or explanation about it) is <strong>why</strong>, in order for me to access the service I need to pass to <code>curl</code> a specific <code>Host</code> header as indicated by the relevant tutorial:</p>
<pre><code>curl -v -H "Host: hello.default.example.com" YOUR-IP
</code></pre>
<p>Where <code>YOUR-IP</code> is the public IP of the Load Balancer created by the Istio-managed ingress gateway.</p>
| pkaramol | <p><a href="https://dzone.com/articles/the-three-http-routing-patterns-you-should-know" rel="nofollow noreferrer">Most proxies</a> that handle external traffic match requests based on the <code>Host</code> header. They use what's inside the <code>Host</code> header to decide which service send the request to. Without the <code>Host</code> header, they wouldn't know where to send the request.</p>
<blockquote>
<p>Host-based routing is what enables virtual servers on web servers.
It’s also used by application services like load balancing and ingress
controllers to achieve the same thing. One IP address, many hosts.</p>
<p>Host-based routing allows you to send a request for api.example.com
and for web.example.com to the same endpoint with the certainty it
will be delivered to the correct back-end application.</p>
</blockquote>
<p>That's typical in proxies/load balancers that are multi-tenant, meaning they handle traffic for totally different tenants/applications sitting behind the proxy.</p>
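<p>To make it concrete: the two requests below hit the same load balancer IP, yet the proxy can route them to completely different backends purely based on the <code>Host</code> header (the second hostname is just a made-up example):</p>
<pre><code>curl -H "Host: hello.default.example.com" http://YOUR-IP/
curl -H "Host: other-app.default.example.com" http://YOUR-IP/
</code></pre>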
| Jose Armesto |
<p>I am trying to enable mTLS in my mesh, which I already have working with Istio's sidecars.
The problem I have is that connections only work up to a certain point in the chain, and then they fail.</p>
<p>This is how the services are set up right now with my failing implementation of mTLS (simplified):</p>
<p><strong>Istio IngressGateway -> NGINX pod -> API Gateway -> Service A -> <em>[ Database ]</em> -> Service B</strong></p>
<p>First thing to note is that I was using a NGINX pod as a load balancer to proxy_pass my requests to my API Gateway or my frontend page. I tried keeping that without the istio IngressGateway but I wasn't able to make it work. Then I tried to use Istio IngressGateway and connect directly to the API Gateway with VirtualService but also fails for me. So I'm leaving it like this for the moment because it was the only way that my request got to the API Gateway successfully.</p>
<p>Another thing to note is that Service A first connects to a Database outside the mesh and then makes a request to Service B which is inside the mesh and with mTLS enabled.</p>
<p>NGINX, API Gateway, Service A and Service B are within the mesh with mTLS enabled and <strong>"istioctl authn tls-check"</strong> shows that status is OK.</p>
<p>NGINX and API Gateway are in a namespace called <strong>"gateway"</strong>, Database is in <strong>"auth"</strong> and Service A and Service B are in another one called <strong>"api"</strong>.</p>
<p>Istio IngressGateway is in namespace <strong>"istio-system"</strong> right now.</p>
<p>So the problem is that everything work if I set <strong>STRICT</strong> mode to the gateway namespace and <strong>PERMISSIVE</strong> to api, but once I set <strong>STRICT</strong> to api, I see the request getting into Service A, but then it fails to send the request to Service B with a 500.</p>
<p>This is the output when it fails that I can see in the istio-proxy container in the Service A pod:</p>
<pre><code>api/serviceA[istio-proxy]: [2019-09-02T12:59:55.366Z] "- - -" 0 - "-" "-" 1939 0 2 - "-" "-" "-" "-" "10.20.208.248:4567" outbound|4567||database.auth.svc.cluster.local 10.20.128.44:35366 10.20.208.248:4567
10.20.128.44:35364 -
api/serviceA[istio-proxy]: [2019-09-02T12:59:55.326Z] "POST /api/my-call HTTP/1.1" 500 - "-" "-" 74 90 60 24 "10.90.0.22, 127.0.0.1, 127.0.0.1" "PostmanRuntime/7.15.0" "14d93a85-192d-4aa7-aa45-1501a71d4924" "serviceA.api.svc.cluster.local:9090" "127.0.0.1:9090" inbound|9090|http-serviceA|serviceA.api.svc.cluster.local - 10.20.128.44:9090 127.0.0.1:0 outbound_.9090_._.serviceA.api.svc.cluster.local
</code></pre>
<p>No messages in ServiceB though.</p>
<p>Currently, I do not have a global MeshPolicy, and I am setting Policy and DestinationRule per namespace</p>
<p><strong>Policy:</strong></p>
<pre><code>apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
name: "default"
namespace: gateway
spec:
peers:
- mtls:
mode: STRICT
---
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
name: "default"
namespace: auth
spec:
peers:
- mtls:
mode: STRICT
---
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
name: "default"
namespace: api
spec:
peers:
- mtls:
mode: STRICT
</code></pre>
<p><strong>DestinationRule:</strong></p>
<pre><code>apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
name: "mutual-gateway"
namespace: "gateway"
spec:
host: "*.gateway.svc.cluster.local"
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
---
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
name: "mutual-api"
namespace: "api"
spec:
host: "*.api.svc.cluster.local"
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
---
apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
name: "mutual-auth"
namespace: "auth"
spec:
host: "*.auth.svc.cluster.local"
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
</code></pre>
<p>Then I have some DestinationRule to disable mTLS for Database (I have some other services in the same namespace that I want to enable with mTLS) and for Kubernetes API</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: "myDatabase"
namespace: "auth"
spec:
host: "database.auth.svc.cluster.local"
trafficPolicy:
tls:
mode: DISABLE
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: "k8s-api-server"
namespace: default
spec:
host: "kubernetes.default.svc.cluster.local"
trafficPolicy:
tls:
mode: DISABLE
</code></pre>
<p>Then I have my IngressGateway like so:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
name: ingress-gateway
namespace: istio-system
spec:
selector:
istio: ingressgateway # use istio default ingress gateway
servers:
- port:
number: 80
name: http
protocol: HTTP
hosts:
- my-api.example.com
tls:
httpsRedirect: true # sends 301 redirect for http requests
- port:
number: 443
name: https
protocol: HTTPS
tls:
mode: SIMPLE
serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
privateKey: /etc/istio/ingressgateway-certs/tls.key
hosts:
- my-api.example.com
</code></pre>
<p>And lastly, my VirtualServices:</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: ingress-nginx
namespace: gateway
spec:
hosts:
- my-api.example.com
gateways:
- ingress-gateway.istio-system
http:
- match:
- uri:
prefix: /
route:
- destination:
port:
number: 80
host: ingress.gateway.svc.cluster.local # this is NGINX pod
corsPolicy:
allowOrigin:
- my-api.example.com
allowMethods:
- POST
- GET
- DELETE
- PATCH
- OPTIONS
allowCredentials: true
allowHeaders:
- "*"
maxAge: "24h"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: api-gateway
namespace: gateway
spec:
hosts:
- my-api.example.com
- api-gateway.gateway.svc.cluster.local
gateways:
- mesh
http:
- match:
- uri:
prefix: /
route:
- destination:
port:
number: 80
host: api-gateway.gateway.svc.cluster.local
corsPolicy:
allowOrigin:
- my-api.example.com
allowMethods:
- POST
- GET
- DELETE
- PATCH
- OPTIONS
allowCredentials: true
allowHeaders:
- "*"
maxAge: "24h"
</code></pre>
<p>One thing that I don't understand is why I have to create a VirtualService for my API Gateway and why I have to use "mesh" in the gateways block. If I remove this block, my request doesn't reach the API Gateway, but if I keep it, it works and my requests even get to the next service (Service A), but not to the one after that.</p>
<p>Thanks for the help. I am really stuck with this.</p>
<p>Dump of listeners of ServiceA:</p>
<pre><code>ADDRESS PORT TYPE
10.20.128.44 9090 HTTP
10.20.253.21 443 TCP
10.20.255.77 80 TCP
10.20.240.26 443 TCP
0.0.0.0 7199 TCP
10.20.213.65 15011 TCP
0.0.0.0 7000 TCP
10.20.192.1 443 TCP
0.0.0.0 4568 TCP
0.0.0.0 4444 TCP
10.20.255.245 3306 TCP
0.0.0.0 7001 TCP
0.0.0.0 9160 TCP
10.20.218.226 443 TCP
10.20.239.14 42422 TCP
10.20.192.10 53 TCP
0.0.0.0 4567 TCP
10.20.225.206 443 TCP
10.20.225.166 443 TCP
10.20.207.244 5473 TCP
10.20.202.47 44134 TCP
10.20.227.251 3306 TCP
0.0.0.0 9042 TCP
10.20.207.141 3306 TCP
0.0.0.0 15014 TCP
0.0.0.0 9090 TCP
0.0.0.0 9091 TCP
0.0.0.0 9901 TCP
0.0.0.0 15010 TCP
0.0.0.0 15004 TCP
0.0.0.0 8060 TCP
0.0.0.0 8080 TCP
0.0.0.0 20001 TCP
0.0.0.0 80 TCP
0.0.0.0 10589 TCP
10.20.128.44 15020 TCP
0.0.0.0 15001 TCP
0.0.0.0 9000 TCP
10.20.219.237 9090 TCP
10.20.233.60 80 TCP
10.20.200.156 9100 TCP
10.20.204.239 9093 TCP
0.0.0.0 10055 TCP
0.0.0.0 10054 TCP
0.0.0.0 10251 TCP
0.0.0.0 10252 TCP
0.0.0.0 9093 TCP
0.0.0.0 6783 TCP
0.0.0.0 10250 TCP
10.20.217.136 443 TCP
0.0.0.0 15090 HTTP
</code></pre>
<p><strong>Dump clusters in json format:</strong> <a href="https://pastebin.com/73zmAPWg" rel="nofollow noreferrer">https://pastebin.com/73zmAPWg</a></p>
<p><strong>Dump listeners in json format:</strong> <a href="https://pastebin.com/Pk7ddPJ2" rel="nofollow noreferrer">https://pastebin.com/Pk7ddPJ2</a></p>
<p><strong>Curl command from serviceA container to serviceB:</strong> </p>
<pre><code>/opt/app # curl -X POST -v "http://serviceB.api.svc.cluster.local:4567/session/xxxxxxxx=?parameters=hi"
* Trying 10.20.228.217...
* TCP_NODELAY set
* Connected to serviceB.api.svc.cluster.local (10.20.228.217) port 4567 (#0)
> POST /session/xxxxxxxx=?parameters=hi HTTP/1.1
> Host: serviceB.api.svc.cluster.local:4567
> User-Agent: curl/7.61.1
> Accept: */*
>
* Empty reply from server
* Connection #0 to host serviceB.api.svc.cluster.local left intact
curl: (52) Empty reply from server
</code></pre>
<p>If I disable mTLS, request gets from serviceA to serviceB with Curl</p>
| codiaf | <p>General tips for debugging Istio service mesh:</p>
<ol>
<li>Check <a href="https://istio.io/docs/setup/additional-setup/requirements/" rel="nofollow noreferrer">the requirements for services and pods</a>.</li>
<li>Try a similar task to what you are trying to perform from the list of <a href="https://istio.io/docs/tasks/" rel="nofollow noreferrer">Istio tasks</a>. See if that task works and find the differences with your task.</li>
<li>Follow the instructions on <a href="https://istio.io/latest/docs/ops/common-problems/" rel="nofollow noreferrer">Istio common problems page</a>.</li>
</ol>
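<p>On top of that, a few commands that are handy for this specific kind of mTLS debugging (pod names are placeholders, and the exact flags can differ between Istio versions):</p>
<pre><code># check the mTLS settings between a client pod and the destination service
istioctl authn tls-check <serviceA-pod>.api serviceB.api.svc.cluster.local

# inspect the Envoy configuration of the client sidecar
istioctl proxy-config listeners <serviceA-pod> -n api
istioctl proxy-config clusters <serviceA-pod> -n api

# sidecar logs on both ends of the failing call
kubectl logs <serviceA-pod> -c istio-proxy -n api
kubectl logs <serviceB-pod> -c istio-proxy -n api
</code></pre>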
| Vadim Eisenberg |
<p>I have been struggling to push a docker image to a Google Container Registry using fabric8 maven plugin on a Jenkins pipeline. I have checked every question on stackoverflow but none of them solved my problem.</p>
<p>This is my setup:</p>
<p>Kubernetes Cluster running on Google Kubernetes Engine. I have deployed a pod with a Jenkins server that starts agents with the Kubernetes CI plugin, based on this custom image:</p>
<pre><code>FROM openjdk:8
RUN apt-get update && \
apt-get -y install \
apt-transport-https \
ca-certificates \
curl \
software-properties-common
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable" && \
apt-get update && \
apt-get -y install docker-ce
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz
RUN mkdir -p /usr/local/gcloud \
&& tar -C /usr/local/gcloud -xvf /tmp/google-cloud-sdk.tar.gz \
&& /usr/local/gcloud/google-cloud-sdk/install.sh
ENV PATH $PATH:/usr/local/gcloud/google-cloud-sdk/bin
</code></pre>
<p>This agent has Java 8 and Docker.</p>
<p>I want to use fabric8 maven plugin to perform the docker image build and push to Google Container Registry. The point is no matter what I do I always end up facing this error (unauthorized): </p>
<blockquote>
<p>Unable to push 'eu.gcr.io/myprojectid/gke-springboot-sample'
from registry 'eu.gcr.io' : unauthorized: You don't have the needed
permissions to perform this operation, and you may have invalid
credentials. To authenticate your request, follow the steps in:
<a href="https://cloud.google.com/container-registry/docs/advanced-authentication" rel="nofollow noreferrer">https://cloud.google.com/container-registry/docs/advanced-authentication</a>
-> [Help 1]</p>
</blockquote>
<p>NOTE: </p>
<ol>
<li><p>I have created a service account associated to my GC project and given it these permissions: project editor and storage admin.</p></li>
<li><p>I have configured a Global Jenkins Secret (cloud-registry-credentials) associated to the key.json file belonging to that service account</p></li>
</ol>
<p>There are all the things I have tried in my pipeline so far:</p>
<p><strong>1: oauth2accesstoken</strong> </p>
<pre><code>stage("Build docker image") {
agent {
label 'jenkins-slave'
}
steps {
container('jenkins-slave'){
script {
sh "./mvnw clean build fabric8:build"
withCredentials([file(credentialsId: 'cloud-registry-credentials', variable: 'crc')]) {
sh "gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://eu.gcr.io"
sh "./mvnw fabric8:push"
}
}
}
}
}
</code></pre>
<p><strong>OUTPUT:</strong></p>
<pre><code>+ gcloud auth print-access-token
+ docker login -u oauth2accesstoken --password-stdin https://eu.gcr.io
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[Pipeline] sh
Running shell script
+ ./mvnw fabric8:push
[INFO] F8> The push refers to a repository [eu.gcr.io/myprojectid/gke-springboot-sample]
#
[ERROR] F8> Unable to push 'eu.gcr.io/projectid/gke-springboot-sample' from registry 'eu.gcr.io' : unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication [unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication ]
</code></pre>
<p><strong>2: key.json</strong> </p>
<pre><code>stage("Build docker image") {
agent {
label 'jenkins-slave'
}
steps {
container('jenkins-slave'){
script {
sh "./mvnw fabric8:build"
withCredentials([file(credentialsId: 'cloud-registry-credentials', variable: 'crc')]) {
sh "docker login -u _json_key --password-stdin https://eu.gcr.io < ${crc}"
sh "gcloud auth activate-service-account --key-file=${crc}"
sh "./mvnw fabric8:push"
}
}
}
}
}
</code></pre>
<p><strong>OUTPUT:</strong></p>
<pre><code>+ docker login -u _json_key --password-stdin https://eu.gcr.io
WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[Pipeline] sh
Running shell script
+ gcloud auth activate-service-account --key-file=****
Activated service account credentials for: [[email protected]]
[Pipeline] sh
Running shell script
+ ./mvnw fabric8:push
[ERROR] F8> Unable to push 'eu.gcr.io/myprojectid/gke-springboot-sample' from registry 'eu.gcr.io' : unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication [unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication ]
</code></pre>
| codependent | <p>you might be looking for the Jenkins <a href="https://wiki.jenkins.io/display/JENKINS/Google+Container+Registry+Auth+Plugin" rel="nofollow noreferrer">Google Container Registry Auth Plugin</a>.</p>
<p>another possible reason may be that the service account <code>jenkins-cr</code> does not have the required role <code>storage.admin</code> assigned in Cloud IAM... here's the relevant <a href="https://cloud.google.com/container-registry/docs/access-control" rel="nofollow noreferrer">documentation</a>. I see you've assigned it already; maybe double-check that.</p>
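<p>for instance, the role assignment could be verified and, if needed, granted like this (project id and service account e-mail are placeholders):</p>
<pre><code># list the roles currently bound to the service account
gcloud projects get-iam-policy my-project \
  --flatten="bindings[].members" \
  --filter="bindings.members:jenkins-cr@my-project.iam.gserviceaccount.com" \
  --format="table(bindings.role)"

# grant the role required for pushing to Container Registry
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:jenkins-cr@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.admin"
</code></pre>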
<p>you might also be able to get support <a href="https://github.com/fabric8io/fabric8-maven-plugin/issues" rel="nofollow noreferrer">there</a>, in case this is rather related to the fabric8 plugin.</p>
<p>also, since the registry is backed by a storage bucket, that bucket's <code>ACL</code> could eventually interfere.</p>
| Martin Zeitler |
<p>I’m somewhat new to Kubernetes and not sure the standard way to do this. I’d like to have many instances of a single microservice, but with each of the containers parameterized slightly differently. (Perhaps an environment variable passed to the container that’s different for each instance, as specified in the container spec of the .yaml file?)</p>
<p>It seems like a single deployment with multiple replicas wouldn’t work. Yet, having n different deployments with very slightly different .yaml files seems a bit redundant. Is there some sort of templating solution perhaps?</p>
<p>Or should each microservice be identical and seek out its parameters from a central service?</p>
<p>I realize this could be interpreted as an “opinion question” but I am looking for typical solutions.</p>
| vmayer | <p>There are definitely several ways of doing it. One popular option is to use <a href="https://helm.sh/" rel="nofollow noreferrer">Helm</a>. Helm lets you define kubernetes manifests using Go templates, and package them on a single unit called a Helm Chart. Later on you can install this Chart (install is what Helm calls to save these manifests in the Kubernetes API). When installing the Helm Chart, you can pass arguments that will be used when rendering the templates. That way you can re-use pretty much everything, and just replace the significant bits of your manifests: Deployments, Services, etc.</p>
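<p>A tiny illustrative sketch (chart, value and image names are all made up): the per-instance difference can be reduced to a single templated environment variable in <code>templates/deployment.yaml</code>:</p>
<pre><code>      containers:
      - name: my-service
        image: my-service:1.0
        env:
        - name: INSTANCE_PARAM
          value: {{ .Values.instanceParam | quote }}
</code></pre>
<p>Then each instance is just another install of the same chart with a different value:</p>
<pre><code># Helm 3 syntax; Helm 2 would use --name instead
helm install instance-a ./my-chart --set instanceParam=foo
helm install instance-b ./my-chart --set instanceParam=bar
</code></pre>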
<p>There are <a href="https://github.com/helm/charts/tree/master/stable" rel="nofollow noreferrer">plenty of Helm charts available as open sources projects</a>, that you can use as an example on how to create your own Chart.</p>
<p>And many <a href="https://docs.bitnami.com/kubernetes/how-to/create-your-first-helm-chart/" rel="nofollow noreferrer">useful guides on how to create your first Helm Chart</a>.</p>
<p>Here you can find the <a href="https://helm.sh/docs/developing_charts/" rel="nofollow noreferrer">official docs on developing your own Charts</a>.</p>
| Jose Armesto |
<p>We are using the default ingress gateway for Istio. We would like to create two different ingress gateways, one using a private and one using a public external load balancer.</p>
<p>Is there any way to achieve this?</p>
| Santhosh Kumar A | <p>See <a href="https://github.com/istio-ecosystem/multi-mesh-examples/tree/master/add_hoc_limited_trust#deploy-a-private-ingress-gateway-in-the-second-cluster" rel="nofollow noreferrer">this example</a>, step 3: <em>Deploy a private ingress gateway and mount the new secrets as data volumes by the following command</em>. You may want to edit the helm values of the example, for example remove the mounted volumes with the certificates, change the name of the gateway, the namespace it is deployed to.</p>
| Vadim Eisenberg |
<p>My application is deployed on a Kubernetes Cluster that runs on Google Cloud. I want to fetch logs written by my application using <a href="https://cloud.google.com/logging/docs/reference/v2/rest/v2/logs/list" rel="nofollow noreferrer">Stackdriver's REST APIs for logging</a>.</p>
<p>From the above documentation page and <a href="https://googlecloudplatform.github.io/google-cloud-python/latest/logging/usage.html#retrieving-log-entries" rel="nofollow noreferrer">this example</a>, it seems that I can only list logs of a project, organization, billing account or folder.</p>
<p>I want to know if there are any REST APIs using which I can fetch logs of:</p>
<ul>
<li>A pod in a Kubernetes Cluster running on Google Cloud</li>
<li>A VM instance running on Google Cloud</li>
</ul>
| xennygrimmato | <p>you need to request per <a href="https://cloud.google.com/logging/docs/reference/v2/rest/v2/MonitoredResource" rel="nofollow noreferrer">MonitoredResource</a>, which permits instance names and alike... for GCE that would be <code>gce_instance</code> while for GKE it would be <code>container</code>. individual pods of a cluster can be filtered by their <code>cluster_name</code> & <code>pod_id</code>; the documentation for <a href="https://cloud.google.com/logging/docs/api/v2/resource-list" rel="nofollow noreferrer">resource-list</a> describes it:</p>
<blockquote>
<p><strong>container</strong> (GKE Container) A Google Container Engine (GKE) container instance.</p>
<p><strong>project_id</strong>: The identifier of the GCP project associated with this resource, such as "my-project".</p>
<p><strong>cluster_name</strong>: An immutable name for the cluster the container is running in.</p>
<p><strong>namespace_id</strong>: Immutable ID of the cluster namespace the container is running in.</p>
<p><strong>instance_id</strong>: Immutable ID of the GCE instance the container is running in.</p>
<p><strong>pod_id</strong>: Immutable ID of the pod the container is running in.</p>
<p><strong>container_name</strong>: Immutable name of the container.</p>
<p><strong>zone</strong>: The GCE zone in which the instance is running.</p>
</blockquote>
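<p>as a sketch, an <code>entries.list</code> request body filtering for a single pod could look roughly like this (project, cluster and pod ids are placeholders, and the exact resource type may vary with your cluster's logging agent):</p>
<pre><code>POST https://logging.googleapis.com/v2/entries:list

{
  "resourceNames": ["projects/my-project"],
  "filter": "resource.type=\"container\" AND resource.labels.cluster_name=\"my-cluster\" AND resource.labels.pod_id=\"my-pod\"",
  "orderBy": "timestamp desc",
  "pageSize": 50
}
</code></pre>
<p>for a single GCE VM the filter would instead be something like <code>resource.type="gce_instance" AND resource.labels.instance_id="1234567890"</code>.</p>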
| Martin Zeitler |