Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>How do I modify the PromQL query to get the sum of CPU usage across all the pods (replicas) belonging to one particular service?</p>
<p>To get the CPU usage of a single pod I use the following query:</p>
<pre><code>rate(container_cpu_usage_seconds_total{pod="pod name",container="container name"}[5m]) by (container)
</code></pre>
<p>How can I modify this?</p>
| jkmp | <p>You can use <code>sum(rate(container_cpu_usage_seconds_total{pod="pod name",container="container name"}[5m])) by (container)</code> to get the sum of all the pods (replicas) CPU usage belonging to one particular service.</p>
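<p>If the goal is to cover every replica behind the service rather than a single pod, one approach (a sketch only; the namespace and pod-name prefix below are assumptions, since container metrics do not carry a service label) is to match the pods by a regular expression and aggregate:</p>
<pre><code># Assumes the replicas are named like my-service-xxxxx; adjust namespace and prefix to your setup.
# container!="" drops the per-pod cgroup aggregate series.
sum(rate(container_cpu_usage_seconds_total{namespace="my-namespace", pod=~"my-service-.*", container!=""}[5m]))
</code></pre>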
| Amirreza Hashemi |
<p>I am trying to build and tag a docker image in Github Actions runner and am getting this error from the runner</p>
<pre class="lang-sh prettyprint-override"><code>unable to prepare context: path " " not found
Error: Process completed with exit code 1.
</code></pre>
<p>I have gone through all other similar issues on StackOverflow and implemented them but still, no way forward.</p>
<p>The interesting thing is, I have other microservices using similar workflow and Dockerfile working perfectly fine.</p>
<p><strong>My workflow</strong></p>
<pre class="lang-yaml prettyprint-override"><code>name: some-tests
on:
pull_request:
branches: [ main ]
jobs:
tests:
runs-on: ubuntu-latest
env:
AWS_REGION: us-east-1
IMAGE_NAME: service
IMAGE_TAG: 1.1.0
steps:
- name: Checkout
uses: actions/checkout@v2
- name: Create cluster
uses: helm/[email protected]
- name: Read secrets from AWS Secrets Manager into environment variables
uses: abhilash1in/[email protected]
id: read-secrets
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ env.AWS_REGION }}
secrets: |
users-service/secrets
parse-json: true
- name: Build and Tag Image
id: build-image
run: |
# Build a docker container and Tag
docker build --file Dockerfile \
--build-arg APP_API=$USERS_SERVICE_SECRETS_APP_API \
-t $IMAGE_NAME:$IMAGE_TAG .
echo "::set-output name=image::$IMAGE_NAME:$IMAGE_TAG"
- name: Push Image to Kind cluster
id: kind-cluster-image-push
env:
KIND_IMAGE: ${{ steps.build-image.outputs.image }}
CLUSTER_NAME: chart-testing
CLUSTER_CONTROLLER: chart-testing-control-plane
run: |
kind load docker-image $KIND_IMAGE --name $CLUSTER_NAME
docker exec $CLUSTER_CONTROLLER crictl images
</code></pre>
<p><strong>Dockerfile</strong></p>
<pre><code>FROM node:14 AS base
WORKDIR /app
FROM base AS development
COPY .npmrc .npmrc
COPY package.json ./
RUN npm install --production
RUN cp -R node_modules /tmp/node_modules
RUN npm install
RUN rm -f .npmrc
COPY . .
FROM development AS builder
COPY .npmrc .npmrc
RUN yarn run build
RUN rm -f .npmrc
RUN ls -la
FROM node:14-alpine AS production
# Install curl
RUN apk update && apk add curl
COPY --from=builder /tmp/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/package.json ./
ARG APP_API
# set environmental variables
ENV APP_API=$APP_API
EXPOSE ${PORT}
CMD [ "yarn", "start" ]
</code></pre>
<p>I guess the problem is coming from the building command or something, these are the different things I have tried</p>
<p><strong>I used --file explicitly with period (.)</strong></p>
<pre class="lang-yaml prettyprint-override"><code>docker build --file Dockerfile \
--build-arg APP_API=$USERS_SERVICE_SECRETS_APP_API \
-t $IMAGE_NAME:$IMAGE_TAG .
echo "::set-output name=image::$IMAGE_NAME:$IMAGE_TAG"
</code></pre>
<p><strong>I used only period (.)</strong></p>
<pre class="lang-yaml prettyprint-override"><code>docker build \
--build-arg APP_API=$USERS_SERVICE_SECRETS_APP_API \
-t $IMAGE_NAME:$IMAGE_TAG .
echo "::set-output name=image::$IMAGE_NAME:$IMAGE_TAG"
</code></pre>
<p><strong>I used relative path for Dockerfile (./Dockerfile)</strong></p>
<pre class="lang-yaml prettyprint-override"><code>docker build --file ./Dockerfile \
--build-arg APP_API=$USERS_SERVICE_SECRETS_APP_API \
-t $IMAGE_NAME:$IMAGE_TAG .
echo "::set-output name=image::$IMAGE_NAME:$IMAGE_TAG"
</code></pre>
<p><strong>I used relative path for the period (./)</strong></p>
<pre class="lang-yaml prettyprint-override"><code>docker build \
--build-arg APP_API=$USERS_SERVICE_SECRETS_APP_API \
-t $IMAGE_NAME:$IMAGE_TAG ./
echo "::set-output name=image::$IMAGE_NAME:$IMAGE_TAG"
</code></pre>
<p>I have literally exhausted everything I've read from SO</p>
| sam hassan | <p>The problem was basically a white-space issue, which nothing in the error output pointed to. Thanks to <a href="https://stackoverflow.com/a/69033092/16069603">this answer</a>.</p>
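<p>For reference, the usual culprit behind <code>path " " not found</code> is an invisible trailing space after one of the line-continuation backslashes in the <code>run</code> block, e.g. (illustrative only):</p>
<pre><code># A space after the backslash escapes the space instead of the newline,
# so the shell ends the command there and docker receives " " as the build context:
docker build --file Dockerfile \ 
  --build-arg APP_API=$USERS_SERVICE_SECRETS_APP_API \
  -t $IMAGE_NAME:$IMAGE_TAG .
</code></pre>
<p>Removing the stray whitespace after the backslash lets the command continue onto the next line as intended.</p>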
| sam hassan |
<p>I was trying to configure a new installation of Lens IDE to work with my remote cluster (on a remote server, on a VM), but encountered some errors and can't find a proper explanation for this case.</p>
<p>Lens expects a config file, I gave it to it from my cluster having it changed from</p>
<p><code>server: https://127.0.0.1:6443</code></p>
<p>to</p>
<p><code>server: https://</code><strong>(address to the remote server)</strong><code>:</code><strong>(assigned intermediate port to 6443 of the VM with the cluster)</strong></p>
<p>After which in Lens I'm getting this:</p>
<pre><code>2021/06/14 22:55:13 http: proxy error: x509: certificate is valid for 10.43.0.1, 127.0.0.1, 192.168.1.122, not (address to the remote server)
</code></pre>
<p>I can see that some cert has to be reconfigured, but I'm absolutely new to the thing.</p>
<p>Here the full contents of the original config file:</p>
<pre><code>apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0...
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: LS0...
    client-key-data: LS0...
</code></pre>
| Mike | <p>The solution is quite obvious and easy.</p>
<p>k3s has to add the new IP to the certificate. By default it includes only localhost and the IP of the node it's running on, so if you (like me) have some kind of machine in front of it (like a load balancer or a dedicated firewall), that machine's IP has to be added manually.</p>
<p>There are two ways how it can be done:</p>
<ol>
<li>During the installation of k3s:</li>
</ol>
<blockquote>
<pre><code>curl -sfL https://get.k3s.io | sh -s - server --tls-san <desired IP>
</code></pre>
</blockquote>
<ol start="2">
<li>Or this argument can be added to already installed k3s:</li>
</ol>
<blockquote>
<pre><code>sudo nano /etc/systemd/system/k3s.service
</code></pre>
</blockquote>
<blockquote>
<pre><code>ExecStart=/usr/local/bin/k3s \
server \
'--tls-san' \
'desired IP' \
</code></pre>
</blockquote>
<blockquote>
<pre><code>sudo systemctl daemon-reload
</code></pre>
</blockquote>
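<p>After editing the unit file, the service also has to be restarted for the new flag to take effect (a small addition to the steps above):</p>
<pre><code>sudo systemctl restart k3s
</code></pre>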
<p>P.S. I did, however, face some issues with the second method.</p>
| Mike |
<p>Currently I'm learning Kubernetes.
It's running in VirtualBox on my laptop.
I plan to deploy it on a 'real' network, but with very limited public IPs.
So the API service and Ingress-NGINX will be on private IP addresses (i.e. 192.168.x.y).</p>
<p>My question is: Can I do the SSL termination on ingress-nginx if it behind HA-Proxy that only reverse-proxying TCP?</p>
<p><a href="https://i.stack.imgur.com/8cps8.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8cps8.jpg" alt="enter image description here" /></a></p>
<p>Note : The line in red is the only physical ethernet network with Public IP Address</p>
<p>Sincerely</p>
<p>-bino-</p>
| Bino Oetomo | <p>Ingress is more like an API gateway (reverse proxy) which routes the request to a specific backend service based on, for instance, the URL.</p>
<p>SSL Termination is part of reverse proxy. Encrypting the traffic between clients and servers protects it as it crosses a public network like the Internet. But decryption and encryption can be computationally expensive. By decrypting incoming requests and encrypting server responses, the reverse proxy frees up resources on backend servers which they can then devote to their main purpose, serving content.</p>
<p>A reverse proxy acts as a website’s “public face.” Its address is the one advertised for the website, and it sits at the edge of the site’s network to accept requests from web browsers and mobile apps for the content hosted at the website.</p>
<p>For more information refer to this <a href="https://www.nginx.com/resources/glossary/reverse-proxy-vs-load-balancer/" rel="nofollow noreferrer">document</a>.</p>
<p>HAProxy is a reverse proxy for TCP and HTTP applications. Users can make use of HAProxy to improve the performance of websites and applications by distributing their workloads. Performance improvements include minimized response times and increased throughput.</p>
<p><a href="https://jhooq.com/ingress-controller-nginx/" rel="nofollow noreferrer">HAProxy Ingress Controller</a> - It does all the heavy lifting when it comes to managing external traffic into a kubernetes cluster.</p>
| Ramesh kollisetty |
<p>When I use the following command:</p>
<pre><code>k logs <podname>-<podhash>
</code></pre>
<p>I get the pod log as expected.</p>
<p>But suppose I have multiple instances of this pod. For example:</p>
<pre><code>k logs <podname>-<podhash1>
k logs <podname>-<podhash2>
k logs <podname>-<podhash3>
</code></pre>
<p>If I use</p>
<pre><code>k logs -l app=podname
</code></pre>
<p>shouldn't I get an output of some aggregation of all these pods?</p>
<p>Because I'm not.</p>
<p>I guess I get only last 10 lines of one or all of the logs.</p>
| dushkin | <p><code>k logs -l app=podname</code> will only print 10 lines per pod.</p>
<p>When a label selector is used, <code>kubectl logs</code> limits each pod's output to the last 10 lines by default (<code>--tail=10</code>).</p>
<p>By executing the below command you will get up to the last 100 lines from each matching pod:</p>
<pre><code>kubectl logs -l app=podname --tail=100
</code></pre>
<p>For logs from all containers in a pod, use:</p>
<pre><code>kubectl logs <podname> --all-containers=true
</code></pre>
| Mayur Kamble |
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: {{ include "backstage.fullname" . }}-backend
type: Opaque
stringData:
GH_APP_PRIVATEKEY_BASE:|-
{{ .Values.auth.ghApp.privateKey | quote | b64dec | indent 2 }}
</code></pre>
<p>Getting <code>error converting YAML to JSON: yaml: line 22: could not find expected ':'</code> as the result when</p>
<p>trying to store a base64 encoded string to <code>GH_APP_PRIVATEKEY_BASE</code></p>
<p>My application (backstage) is using helm charts to map the env secret.</p>
<p>I keep having trouble with storing/passing a multi-line RSA private key.</p>
<p>Currently I'm trying to base64-encode the private key into a one-liner, but I still fail at validating the secret file. I would love to know another approach, like passing a file with the key written in it.</p>
<p>BTW, I use <code>GITHUB_PRVATE_KEY=$(echo -n $GITHUB_PRVATE_KEY | base64 -w 0)</code> and</p>
<pre><code>helm_overrides="$helm_overrides --set auth.ghApp.clientSecret=$GITHUB_PRVATE_KEY"
</code></pre>
<p>in a GitHub Action to encode the private key.</p>
| Peter | <p>Try increasing the indent to 4:</p>
<pre><code>...
stringData:
  GH_APP_PRIVATEKEY_BASE: |-
{{ .Values.auth.ghApp.privateKey | quote | b64dec | indent 4 }}
</code></pre>
| gohm'c |
<p>Cert-manager/secret-for-certificate-mapper "msg"="unable to fetch certificate that owns the secret" "error"="Certificate.cert-manager.io "grafanaps-tls" not found"</p>
<p>So, from the investigation, I'm not able to find the grafanaps-tls certificate.</p>
<blockquote>
<pre><code>kubectl get certificates
NAME                 READY   SECRET               AGE
Alertmanagerdf-tls   False   alertmanagerdf-tls   1y61d
Prometheusps-tls     False   prometheusps-tls     1y58
</code></pre>
</blockquote>
<p>We have done the following: the NGINX ingress and cert-manager were outdated and no longer compatible with Kubernetes version 1.22. As a result, an upgrade of those components was initiated in order to restore pod operation.</p>
<p>The <code>cmctl check api -n cert-manager</code> command now reports that the cert-manager API has been upgraded to version 1.7 and that orphaned secrets have been cleaned up.</p>
<p>Cert-manager/webhook "msg"="Detected root CA rotation - regenerating serving certificates"</p>
<p>After a restart the logs looked mainly clean.</p>
<p>From my findings, the issue is in the integration of cert-manager with the Kubernetes ingress controller,
so I was mostly interested in the cert-manager configuration, particularly the <code>ingressshim</code> configuration and the <code>args</code> section.</p>
<p>It appears that the SSL certificate for several servers has expired and looks like the issue with the certificate resources or the integration of cert-manager with the Kubernetes ingress controller.</p>
<p><strong>Config:</strong></p>
<pre><code>C:\Windows\system32>kubectl describe deployment cert-manager-cabictor -n cert-manager
Name: cert-manager-cabictor
Namespace: cert-manager
CreationTimestamp: Thu, 01 Dec 2022 18:31:02 +0530
Labels: app=cabictor
app.kubernetes.io/component=cabictor
app.kubernetes.io/instance=cert-manager
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=cabictor
app.kubernetes.io/version=v1.7.3
helm.sh/chart=cert-manager-v1.7.3
Annotations: deployment.kubernetes.io/revision: 2
meta.helm.sh/release-name: cert-manager
meta.helm.sh/release-namespace: cert-manager
Selector: app.kubernetes.io/component=cabictor ,app.kubernetes.io/instance=cert-manager,app.kubernetes.io/name=cabictor
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=cabictor
app.kubernetes.io/component=cabictor
app.kubernetes.io/instance=cert-manager
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=cabictor
app.kubernetes.io/version=v1.7.3
helm.sh/chart=cert-manager-v1.7.3
Service Account: cert-manager-cabictor
Containers:
cert-manager:
Image: quay.io/jetstack/cert-manager-cabictor :v1.7.3
Port: <none>
Host Port: <none>
Args:
--v=2
--leader-election-namespace=kube-system
Environment:
POD_NAMESPACE: (v1:metadata.namespace)
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Progressing True NewReplicaSetAvailable
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: cert-manager-cabictor -5b65bcdbbd (1/1 replicas created)
Events: <none>
</code></pre>
<p>I was not able to identify and fix the root cause here.</p>
<p>What is the problem here, and how can it be resolved? Any help would be greatly appreciated</p>
| sam23 | <p><strong>Error message:</strong></p>
<p><strong>Cert-manager/secret-for-certificate-mapper "msg"="unable to fetch certificate that owns the secret" "error"="Certificate.cert-manager.io "grafanaps-tls" not found"</strong></p>
<p>The above error message says that the Certificate resource that owns the secret cannot be found in its namespace.</p>
<p>To view the secrets and certificates, execute the below commands:</p>
<p><code>kubectl get secrets -n monitoring</code></p>
<p><code>kubectl get certificate -n monitoring</code></p>
<p>The output will show the missing secret and certificate.</p>
<p>You can also use the solution from the following GitHub issue:</p>
<p><a href="https://github.com/cert-manager/cert-manager/issues/1944" rel="nofollow noreferrer">https://github.com/cert-manager/cert-manager/issues/1944</a></p>
<blockquote>
<blockquote>
<p>Script to clean up/fix this issue</p>
</blockquote>
<p>"unable to fetch a certificate that owns the secret"</p>
<p>This script will find TLS secrets in a given namespace which have no
matching certificate resource and delete them.</p>
<blockquote>
<p>Usage</p>
</blockquote>
<p>./clean-orphans.sh [namespace]</p>
<p>Specifying no namespace will check the default. You will be prompted
before anything is deleted.</p>
</blockquote>
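<p>A minimal sketch of what such a cleanup does (it assumes cert-manager's usual <code>cert-manager.io/certificate-name</code> annotation on issued secrets; review the output before deleting anything):</p>
<pre><code>#!/usr/bin/env bash
# List TLS secrets whose owning Certificate no longer exists in the namespace.
NS="${1:-default}"

for s in $(kubectl get secrets -n "$NS" --field-selector type=kubernetes.io/tls -o name); do
  cert=$(kubectl get "$s" -n "$NS" \
    -o jsonpath="{.metadata.annotations['cert-manager\.io/certificate-name']}")
  if [ -n "$cert" ] && ! kubectl get certificate -n "$NS" "$cert" >/dev/null 2>&1; then
    echo "orphaned secret: $s (owned by missing Certificate $cert)"
    # kubectl delete -n "$NS" "$s"   # uncomment only after reviewing the output
  fi
done
</code></pre>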
<p>For more information refer to the <a href="https://cert-manager.io/docs/usage/certificate/#cleaning-up-secrets-when-certificates-are-deleted" rel="nofollow noreferrer">cert-manager</a> documentation. You can also refer to the blog by <a href="https://blog.alexellis.io/expose-grafana-dashboards/" rel="nofollow noreferrer">Alex Ellis</a> on Grafana dashboards with TLS.</p>
| Mayur Kamble |
<p>We are planning to run our Azure DevOps build agents in Kubernetes pods, but going through the internet, we couldn't find any recommended approach to follow.</p>
<p>Details:</p>
<ul>
<li>Azure DevOps Server</li>
<li>AKS 1.19.11</li>
</ul>
<p>Looking for</p>
<ul>
<li>An AKS Kubernetes cluster where ADO can trigger its pipelines with the required dependencies.</li>
<li>The pods should scale out as the load initiated by ADO increases.</li>
<li>Is there any default Microsoft-provided image currently available for the build agents?</li>
<li>The image should be lightweight, with the build agent and the Zulu JDK (Debian) as we are running Java-based apps.</li>
</ul>
<p>Any suggestions highly appreciated</p>
| Vowneee | <p><a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops" rel="nofollow noreferrer">This article</a> provides instructions for running your Azure Pipelines agent in Docker. You can set up a self-hosted agent in Azure Pipelines to run inside a Windows Server Core (for Windows hosts), or Ubuntu container (for Linux hosts) with Docker.</p>
<blockquote>
<p>The image should be light weight with BuildAgents and the zulu jdk debian as we are running java based apps.</p>
</blockquote>
<h5><a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops#add-tools-and-customize-the-container" rel="nofollow noreferrer">Add tools and customize the container</a></h5>
<p>Once you have created a basic build agent, you can extend the Dockerfile to include additional tools and their dependencies, or build your own container by using this one as a base layer. Just make sure that the following are left untouched:</p>
<ul>
<li>The <code>start.sh</code> script is called by the Dockerfile.</li>
<li>The <code>start.sh</code> script is the last command in the Dockerfile.</li>
<li>Ensure that derivative containers don't remove any of the dependencies stated by the Dockerfile.</li>
</ul>
<blockquote>
<p><strong>Note:</strong> <a href="https://learn.microsoft.com/en-us/azure/aks/cluster-configuration#container-runtime-configuration" rel="nofollow noreferrer">Docker was replaced with containerd</a> in Kubernetes 1.19, and <strong>Docker-in-Docker</strong> became unavailable. A few use cases to run docker inside a docker container:</p>
<ul>
<li>One potential use case for docker in docker is for the CI pipeline, where you need to build and push docker images to a container registry after a successful code build.</li>
<li>Building Docker images with a VM is pretty straightforward. However, when you plan to use Jenkins <a href="https://devopscube.com/docker-containers-as-build-slaves-jenkins/" rel="nofollow noreferrer">Docker-based dynamic agents</a> for your CI/CD pipelines, docker in docker comes as a must-have functionality.</li>
<li>Sandboxed environments.</li>
<li>For experimental purposes on your local development workstation.</li>
</ul>
</blockquote>
<p>If your use case requires running docker inside a container then, you must use Kubernetes with version <= 1.18.x (currently not supported on Azure) as shown <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops#configure-secrets-and-deploy-a-replica-set" rel="nofollow noreferrer">here</a> or run the agent in an alternative <strong>docker</strong> environment as shown <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops#start-the-image-1" rel="nofollow noreferrer">here</a>.</p>
<p>Else if you are <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops#use-azure-kubernetes-service-cluster" rel="nofollow noreferrer">deploying the self hosted agent on AKS</a>, the <code>azdevops-deployment</code> <em>Deployment</em> at step 4, <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops#configure-secrets-and-deploy-a-replica-set" rel="nofollow noreferrer">here</a>, must be changed to:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: azdevops-deployment
  labels:
    app: azdevops-agent
spec:
  replicas: 1 #here is the configuration for the actual agent always running
  selector:
    matchLabels:
      app: azdevops-agent
  template:
    metadata:
      labels:
        app: azdevops-agent
    spec:
      containers:
        - name: azdevops-agent
          image: <acr-server>/dockeragent:latest
          env:
            - name: AZP_URL
              valueFrom:
                secretKeyRef:
                  name: azdevops
                  key: AZP_URL
            - name: AZP_TOKEN
              valueFrom:
                secretKeyRef:
                  name: azdevops
                  key: AZP_TOKEN
            - name: AZP_POOL
              valueFrom:
                secretKeyRef:
                  name: azdevops
                  key: AZP_POOL
</code></pre>
<blockquote>
<p>The scaling of pods should happen as the load from the ADO will be initiating</p>
</blockquote>
<p>You can use cluster-autoscaler and <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">horizontal pod autoscaler</a>. When combined, the horizontal pod autoscaler is focused on running the number of pods required to meet application demand. The cluster autoscaler is focused on running the number of nodes required to support the scheduled pods. [<a href="https://learn.microsoft.com/en-us/azure/aks/cluster-autoscaler#about-the-cluster-autoscaler" rel="nofollow noreferrer">Reference</a>]</p>
| Srijit_Bose-MSFT |
<p>Let's say there's a deployment named my-deployment which consists of 3 pods. Now we use port-forward to forward a local port to this deployment:</p>
<pre><code>kubectl port-forward deployment/my-deployment 8888 9999
</code></pre>
<p>My question is: when I visit localhost:8888 several times, which pod will the request be forwarded to? Always a fixed pod (like the first pod)? A random one? Or a round-robin strategy?</p>
| chenxinlong | <p><code>when I visit localhost:8888 several times, which pod will the request be forwarded to?</code></p>
<p>Will forward to the first pod sorted by name.</p>
<p><code>Always forward to a fixed pod(like first pod) ?</code></p>
<p>Fixed.</p>
<p><code>A random one? Or a round-robin strategy?</code></p>
<p>Fixed to the first pod sorted by name.</p>
<p>Suppose you have performed a port-forward and curled it successfully. Now if you scale the deployment to 0 and then back up, curling again will return an error. This is because the pod that the port was forwarded to was terminated during the scale to 0.</p>
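<p>A quick way to observe this behaviour (a sketch; the deployment and ports are taken from the question):</p>
<pre><code>kubectl port-forward deployment/my-deployment 8888 9999 &
curl localhost:8888        # served by the first pod, sorted by name
kubectl scale deployment/my-deployment --replicas=0
kubectl scale deployment/my-deployment --replicas=3
curl localhost:8888        # fails: the pod originally selected for the forward is gone
</code></pre>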
| gohm'c |
<p>In my final <code>deployment.yaml</code> created from helm template I would like to have liveness and readiness blocks only if in <code>values.yaml</code> the block <code>.Values.livenessReadinessProbe</code> <strong>doesn't exist</strong> or if <code>.Values.livenessReadinessProbe.enabled</code> <strong>is true</strong>.</p>
<p>I tried to do it so:</p>
<pre><code>{{- if or (not .Values.livenessReadinessProbe) (.Values.livenessReadinessProbe.enabled) }}
livenessProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8080
  initialDelaySeconds: 300
  failureThreshold: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /actuator/health/readiness
    port: 8080
  initialDelaySeconds: 200
  failureThreshold: 5
  periodSeconds: 10
{{- end }}
</code></pre>
<p>But I'm getting <code>nil pointer evaluating interface {}.enabled</code>, if <code>livenessReadinessProbe</code> is absent in <code>values.yaml</code>, so it seems like the second OR condition is being executed, even though the first condition is <code>true</code> (i.e. <code>.Values.livenessReadinessProbe</code> is absent).</p>
<p>How can I achieve it?</p>
<p>My <code>values.yaml</code> with existing <code>livenessReadinessProbe</code> value:</p>
<pre><code>livenessReadinessProbe:
  enabled: true
</code></pre>
<p>Thank you in advance!</p>
| Georgii Lvov | <blockquote>
<p>I would like to have liveness and readiness blocks only if in
values.yaml the block .Values.livenessReadinessProbe doesn't exist or
if .Values.livenessReadinessProbe.enabled is true.</p>
</blockquote>
<p>You can test it <a href="https://helm-playground.com/" rel="nofollow noreferrer">here</a>:</p>
<p>template.yaml</p>
<pre><code>{{- if .Values.livenessReadinessProbe }}
{{- if eq (.Values.livenessReadinessProbe.enabled) true }}
livenessProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8080
  initialDelaySeconds: 300
  failureThreshold: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /actuator/health/readiness
    port: 8080
  initialDelaySeconds: 200
  failureThreshold: 5
  periodSeconds: 10
{{- end }}
{{- end }}
</code></pre>
<p>values.yaml</p>
<pre><code>livenessReadinessProbe:
  enabled: true
</code></pre>
<p>See the output when you adjust the values.yaml:</p>
<ul>
<li>Delete <code>livenessReadinessProbe.enabled</code>, or set it to false.</li>
<li>Delete the entire value block.</li>
</ul>
| gohm'c |
<p>I am using <a href="https://learn.microsoft.com/en-us/azure/aks/node-updates-kured" rel="nofollow noreferrer">Kured</a> to perform safe reboots of our nodes to upgrade the OS and kernel versions.
In my understanding, it works by cordoning and draining the node, and the pods are scheduled on a new node with the older version. After the reboot, the nodes are uncordoned and back to the ready state and the temporary worker nodes get deleted.</p>
<p>It was perfectly fine until yesterday when one of the nodes failed to upgrade to the latest kernel version. It was on 5.4.0-1058-azure last week after a successful upgrade and it should be on 5.4.0-1059-azure yesterday after the latest patch, but it is using the old version 5.4.0-1047-azure (which I think is the version of the temporary node that got created).</p>
<p>Upon checking Log Analytics on Azure, it says that it failed to scale down.</p>
<p>Reason: ScaleDownFailed</p>
<p>Message: failed to drain the node, aborting ScaleDown</p>
<p><a href="https://i.stack.imgur.com/NTw84.png" rel="nofollow noreferrer">Error message</a></p>
<p>Any idea on why this is happening?</p>
| themochishifter | <p>Firstly, there is a little misunderstanding of the OS and Kernel patching process.</p>
<blockquote>
<p>In my understanding, it works by cordoning and draining the node, and the pods are scheduled on a new node with the older version.</p>
</blockquote>
<p>The new node that is/are added should come with the latest <a href="https://learn.microsoft.com/en-us/azure/aks/node-image-upgrade#check-if-your-node-pool-is-on-the-latest-node-image" rel="nofollow noreferrer">node image version</a> with latest security patches (which <em>usually</em> does not fall back to an older kernel version) available for the node pool. You can check out the AKS node image releases <a href="https://github.com/Azure/AKS/tree/2021-09-16/vhd-notes" rel="nofollow noreferrer">here</a>. <a href="https://learn.microsoft.com/en-us/azure/aks/node-updates-kured#node-upgrades" rel="nofollow noreferrer">Reference</a></p>
<p>However, it is <em><strong>not</strong></em> necessary that the pod(s) evicted by the drain operation from the node that is being rebooted at any point during the process has to land on the <em>surge node</em>. Evicted pod(S) might very well be scheduled on an existing node should the node fit the bill for scheduling these pods.</p>
<p>For every Pod that the scheduler discovers, the scheduler becomes responsible for finding the <strong>best Node for that Pod</strong> to run on. The scheduler reaches this placement decision taking into account the scheduling principles described <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/kube-scheduler/#scheduling" rel="nofollow noreferrer">here</a>.</p>
<p>The documentation, at the time of writing, might be a little misleading on this.</p>
<hr />
<p>About the error:</p>
<blockquote>
<p>Reason: ScaleDownFailed<br>
Message: failed to drain the node, aborting ScaleDown</p>
</blockquote>
<p>This might happen due to a number of reasons. Common ones might be:</p>
<ul>
<li><p>The scheduler could not find a suitable node to place evicted pods and the node pool could not scale up due to insufficient compute quota available. [<a href="https://learn.microsoft.com/en-us/azure/aks/upgrade-cluster#before-you-begin" rel="nofollow noreferrer">Reference</a>]</p>
</li>
<li><p>The scheduler could not find a suitable node to place evicted pods and the cluster could not scale up due to insufficient IP addresses in the node pool's subnet. [<a href="https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni#plan-ip-addressing-for-your-cluster" rel="nofollow noreferrer">Reference</a>]</p>
</li>
<li><p><code>PodDisruptionBudgets</code> (PDBs) did not allow for at least 1 pod replica to be moved at a time causing the drain/evict operation to fail. [<a href="https://learn.microsoft.com/en-us/azure/aks/upgrade-cluster#upgrade-an-aks-cluster" rel="nofollow noreferrer">Reference</a>]</p>
</li>
</ul>
<p>In general,</p>
<p>The <a href="https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/#eviction-api" rel="nofollow noreferrer">Eviction API</a> can respond in one of three ways:</p>
<ul>
<li>If the eviction is granted, then the Pod is deleted as if you sent a <code>DELETE</code> request to the Pod's URL and received back <code>200 OK</code>.</li>
<li>If the current state of affairs wouldn't allow an eviction by the rules set forth in the budget, you get back <code>429 Too Many Requests</code>. This is typically used for generic rate limiting of <em>any</em> requests, but here we mean that this request isn't allowed <em>right now</em> but it may be allowed later.</li>
<li>If there is some kind of misconfiguration; for example multiple PodDisruptionBudgets that refer the same Pod, you get a <code>500 Internal Server Error</code> response.</li>
</ul>
<p>For a given eviction request, there are two cases:</p>
<ul>
<li>There is no budget that matches this pod. In this case, the server always returns <code>200 OK</code>.</li>
<li>There is at least one budget. In this case, any of the three above responses may apply.</li>
</ul>
<p><strong>Stuck evictions</strong><br>
In some cases, an application may reach a broken state, one where unless you intervene the eviction API will never return anything other than 429 or 500.</p>
<p>For example: this can happen if ReplicaSet is creating Pods for your application but the replacement Pods do not become <code>Ready</code>. You can also see similar symptoms if the last Pod evicted has a very long termination grace period.</p>
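<p>To check whether a PodDisruptionBudget or a stuck workload is blocking the drain, a couple of starting points (the node name is a placeholder; <code>--dry-run=server</code> requires a reasonably recent kubectl):</p>
<pre><code># List all PodDisruptionBudgets and how many disruptions they currently allow
kubectl get pdb --all-namespaces

# Simulate the drain without actually evicting anything
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data --dry-run=server
</code></pre>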
<hr />
<p><strong>How to investigate further?</strong></p>
<ol>
<li><p>On the Azure Portal navigate to your AKS cluster</p>
</li>
<li><p>Go to <strong>Resource Health</strong> on the left hand menu as shown below and click on <strong>Diagnose and solve problems</strong></p>
<p><a href="https://i.stack.imgur.com/pjz9X.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pjz9X.png" alt="enter image description here" /></a></p>
</li>
<li><p>You should see something like the following</p>
<p><a href="https://i.stack.imgur.com/lRjwr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lRjwr.png" alt="enter image description here" /></a></p>
</li>
<li><p>If you click on each of the options, you should see a number of checks loading. You can set the time frame of impact on the top right hand corner of the screen as shown below (Please press the <code>Enter</code> key after you have set the correct timeframe). You can click on the <code>More Info</code> link on the right hand side of each entry for detailed information and recommended action.</p>
<p><a href="https://i.stack.imgur.com/BngP2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BngP2.png" alt="enter image description here" /></a></p>
</li>
</ol>
<p><strong>How to mitigate the issue?</strong></p>
<p>Once you have identified the issue and followed the recommendations to fix the same, please perform an <code>az aks upgrade</code> on the AKS cluster to the same Kubernetes version it is currently running. This should initiate a reconcile operation wherever required under the hood.</p>
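<p>A sketch of that reconcile step with the Azure CLI (resource group and cluster name are placeholders):</p>
<pre><code># Find the version the cluster is already running
CURRENT=$(az aks show -g <resource-group> -n <cluster-name> --query kubernetesVersion -o tsv)

# Upgrading to the same version triggers a reconcile of the cluster
az aks upgrade -g <resource-group> -n <cluster-name> --kubernetes-version "$CURRENT"
</code></pre>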
| Srijit_Bose-MSFT |
<p>Using the below command, labels can be added to a pod:</p>
<pre><code>kubectl label pod <pod-name> key1=value1 key2=value2 key3=value3
</code></pre>
<p>What is the best way to add or remove a label, say, [ env: dev ], from all pods running in a given namespace?</p>
| P Ekambaram | <blockquote>
<p>...to add or remove a label, say, [ env: dev ] from all pods running
in a given namespace.</p>
</blockquote>
<p>Try:</p>
<p><code>kubectl label pods --namespace <name> --all env=dev</code> # <-- add</p>
<p><code>kubectl label pods --namespace <name> --all env-</code> # <-- remove</p>
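<p>To change the value of a label that already exists, <code>--overwrite</code> is needed, for example:</p>
<p><code>kubectl label pods --namespace <name> --all env=prod --overwrite</code></p>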
| gohm'c |
<p>I have set a quota policy on my Kubernetes namespaces. I want to update the policy using a kubectl command. Is there a kubectl command to update the quota policy, e.g. <code>kubectl edit resourcequota tmc.orgp.large -n quota-mem-cpu-example</code>, where I can pass the updated CPU and memory values?</p>
<p><a href="https://i.stack.imgur.com/G8ZCB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/G8ZCB.png" alt="enter image description here" /></a></p>
<p>Currently, limits.cpu is 4; can I update it to 8 using the command line?</p>
| Aishvarya Suryawanshi | <p>Try:</p>
<p><code>kubectl patch -p '{"spec":{"hard":{"limits.cpu":"8"}}}' resourcequota tmc.orgp.large --namespace quota-mem-cpu-example</code></p>
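<p>To confirm the change took effect:</p>
<p><code>kubectl get resourcequota tmc.orgp.large --namespace quota-mem-cpu-example --output yaml</code></p>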
| gohm'c |
<p>I have set up my application to be served by a Kubernetes NGINX ingress in AKS. Today, while experimenting with Azure API Management, I tried to set it up so that all the traffic to the ingress controller would go through API Management. I pointed its backend service to the current public address of the ingress controller, but I was wondering: when I make the ingress controller private, or remove it altogether and rely on the Kubernetes Services instead, how could API Management access it, and how would I define the backend service in API Management? By the way, while provisioning the API Management instance, I added a new subnet to the existing virtual network of the AKS instance, so they are in the same network.</p>
| Mar Chal | <p>There are two modes of <a href="https://learn.microsoft.com/en-us/azure/api-management/api-management-using-with-vnet" rel="nofollow noreferrer">deploying API Management into a VNet</a> – External and Internal.</p>
<p>If API consumers do not reside in the cluster VNet, the External mode (Fig below) should be used. In this mode, the API Management gateway is injected into the cluster VNet but accessible from public internet via an external load balancer. It helps to hide the cluster completely while still allowing external clients to consume the microservices. Additionally, you can use Azure networking capabilities such as Network Security Groups (NSG) to restrict network traffic.</p>
<p><a href="https://i.stack.imgur.com/N6Usy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/N6Usy.png" alt="enter image description here" /></a></p>
<p>If all API consumers reside within the cluster VNet, then the Internal mode (Figure below) could be used. In this mode, the API Management gateway is injected into the cluster VNET and accessible only from within this VNet via an internal load balancer. There is no way to reach the API Management gateway or the AKS cluster from public internet.</p>
<p><a href="https://i.stack.imgur.com/IbA1Q.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IbA1Q.png" alt="enter image description here" /></a></p>
<p><strong>In both cases, the AKS cluster is not publicly visible</strong>. The Ingress Controller may not be necessary. Depending on your scenario and configuration, authentication might still be required between API Management and your microservices. For instance, if a Service Mesh is adopted, it always requires mutual TLS authentication.</p>
<p>Pros:</p>
<ul>
<li>The most secure option because the AKS cluster has no public endpoint</li>
<li>Simplifies cluster configuration since it has no public endpoint</li>
<li>Ability to hide both API Management and AKS inside the VNet using the Internal mode</li>
<li>Ability to control network traffic using Azure networking capabilities such as Network Security Groups (NSG)</li>
</ul>
<p>Cons:</p>
<ul>
<li>Increases complexity of deploying and configuring API Management to work inside the VNet</li>
</ul>
<p><a href="https://learn.microsoft.com/en-us/azure/api-management/api-management-kubernetes#option-3-deploy-apim-inside-the-cluster-vnet" rel="nofollow noreferrer">Reference</a></p>
<hr />
<p>To restrict access to your applications in Azure Kubernetes Service (AKS), you can create and use an internal load balancer. An internal load balancer makes a Kubernetes service accessible only to applications running in the same virtual network as the Kubernetes cluster.</p>
<p>You can either expose your the backends on the AKS cluster through <a href="https://learn.microsoft.com/en-us/azure/aks/ingress-internal-ip" rel="nofollow noreferrer">internal Ingress</a> or simply using <a href="https://learn.microsoft.com/en-us/azure/aks/internal-lb" rel="nofollow noreferrer">Services of type internal load balancer</a>.</p>
<p>You can then point the API Gateway's backend to the internal Ingress' Private IP address or the internal load balancers Service's EXTERNAL IP (which would also be a private IP address). These private IP addresses are accessible within the Virtual Network and any connected network (i.e. Azure virtual networks connected through peering or Vnet-to-Vnet Gateway, or on-premises networks connected to the AKS Vnet). In your case, if the API Gateway is deployed in the same Virtual Network then, it should be able to access these private IP addresses. If the API Gateway is deployed in a different Virtual Network, please connect it to the AKS virtual network using <a href="https://azure.microsoft.com/en-in/blog/vnet-peering-and-vpn-gateways/" rel="nofollow noreferrer">VNET Peering or Vnet-to-Vnet Gateway</a>, depending on your use-case.</p>
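<p>As a sketch of the internal load balancer option (the Service name, port, and selector are placeholders), the AKS-specific annotation below keeps the Service on a private IP inside the VNet:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: internal-app
</code></pre>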
| Srijit_Bose-MSFT |
<p>I'm trying to mount an existing Google Cloud Persistent Disk (balanced) to Jenkins in Kubernetes.
In the root of the disk there is a fully configured Jenkins installation. I want to bring up Jenkins in k8s with the configuration already prepared on the Google Persistent Disk.</p>
<p>I'm using the latest chart from the <a href="https://charts.jenkins.io" rel="nofollow noreferrer">https://charts.jenkins.io</a> repo.</p>
<p>Before running <code>helm install</code> I'm applying the PV and PVC.</p>
<p><strong>PV</strong> for existent disk:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-persistent-volume
spec:
  storageClassName: standard
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: pd.csi.storage.gke.io
    volumeHandle: projects/Project/zones/us-central1-a/disks/jenkins-pv
    fsType: ext4
</code></pre>
<p><strong>PVC</strong></p>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: jenkins-pvc
  namespace: jenkins
spec:
  volumeName: jenkins-persistent-volume
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "50Gi"
</code></pre>
<p><a href="https://i.stack.imgur.com/W1k4j.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W1k4j.png" alt="pv" /></a>
<a href="https://i.stack.imgur.com/XQaPK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XQaPK.png" alt="pvc" /></a></p>
<p>All files on the Google Persistent Disk have <strong>1000:1000</strong> permissions (uid, gid).</p>
<p>I made only one change in the official Helm chart, in the values file:</p>
<pre><code> existingClaim: "jenkins-pvc"
</code></pre>
<p>After running <code>helm install jenkins-master . -n jenkins</code>
I'm getting the following:
<a href="https://i.stack.imgur.com/wzbxM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wzbxM.png" alt="failed pod" /></a></p>
<p>Just to ensure that the problem is not on the GCP side,
I mounted the PVC into a busybox pod and it works perfectly.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
    - name: busybox
      image: busybox:1.32.0
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "while true; do echo $(date) >> /app/buffer; cat /app/buffer; sleep 5; done;"
      volumeMounts:
        - name: my-volume
          mountPath: /app
  volumes:
    - name: my-volume
      persistentVolumeClaim:
        claimName: jenkins-pvc
</code></pre>
<p><a href="https://i.stack.imgur.com/3PBIb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3PBIb.png" alt="busybox" /></a></p>
<p>I tried changing a lot of values in values.yaml, also tried older charts, or even <strong>Bitnami charts</strong> with a Deployment instead of a StatefulSet, but the error is always the same.
Could somebody show me the right way, please?</p>
<p><strong>Storage classes</strong>
<a href="https://i.stack.imgur.com/wgGbL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wgGbL.png" alt="storage classes" /></a></p>
| Артем Черемісін | <p>Try setting the <code>podSecurityContextOverride</code> and re-install:</p>
<pre><code>controller:
  podSecurityContextOverride:
    runAsUser: 1000
    runAsNonRoot: true
    supplementalGroups: [1000]
persistence:
  existingClaim: "jenkins-pvc"
</code></pre>
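<p>With those overrides saved in the chart's <code>values.yaml</code> (or an extra values file), re-installing from the same local chart as in the question might look like:</p>
<pre><code>helm upgrade --install jenkins-master . -n jenkins -f values.yaml
</code></pre>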
| gohm'c |
<p>We are trying to have multiple services communicate with each other without exposing them to the public.</p>
<p>I have a service like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: "v1"
kind: "Service"
metadata:
name: "config"
namespace: "noicenamespace"
labels:
app: "config"
spec:
ports:
- protocol: "TCP"
port: 8888
targetPort: 8888
selector:
app: "config"
type: "LoadBalancer"
loadBalancerIP: ""
</code></pre>
<p>Due to type LoadBalancer, the service is accessible on the public network, but we only want this service to be visible to our internal services in the cluster.</p>
<p>So if I comment out the loadBalancerIP and set the type to ClusterIP, my other pods can't access the service.
I tried specifying the service name like this:</p>
<pre><code>http://config.noicenamespace.svc.cluster.local:8888
</code></pre>
<p>But I get a timeout. We created the cluster from scratch on Google Kubernetes Engine.</p>
| LightSith | <p>This error <code>"Error from server: error dialing backend: dial timeout"</code> is related to the progressive introduction of Konnectivity network proxy in some clusters starting from GKE 1.19.</p>
<p>The Konnectivity network proxy (KNP) provides a TCP level proxy for master egress (kube-apiserver to cluster communication),</p>
<p>The Konnectivity service consists of two parts: the Konnectivity server in the control plane network and the Konnectivity agents in the nodes network. The Konnectivity agents initiate connections to the Konnectivity server and maintain the network connections.
After enabling the Konnectivity service, all control-plane-to-node traffic goes through these connections. Because of this, there must be a firewall rule that allows communication on the relevant port (use the port number and endpoint IP shown in the error message); otherwise dial timeout errors may occur.</p>
<p>By using this filter in Cloud Logging you can find the error logs related to konnectivity-agent connection timeouts caused by the missing firewall rule (note the IP address and the port number of the endpoint from the error and use those details in the firewall rule):</p>
<pre><code>resource.labels.cluster_name="cluster name"
"konnectivity-agent"
</code></pre>
<p>Add a firewall egress rule that allows connections to that port (use the port number displayed in the error message together with the IP of the endpoint). You could use the following command to add that rule, which should allow the konnectivity-agent to connect to the control plane.</p>
<pre><code>gcloud compute firewall-rules create gke-node-to-konnectivity-service \
  --allow=tcp:<port number> \
  --direction=EGRESS \
  --destination-ranges=<endpoint IP address> \
  --target-tags=<node name> \
  --priority=1000
</code></pre>
| Goli Nikitha |
<p>I'm trying to set some alarms based on ReplicaSet metrics, but Prometheus cannot find the <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/docs/replicaset-metrics.md" rel="nofollow noreferrer">replicaset kube-state-metrics</a> while browsing expressions. What could be the problem? On the Prometheus dashboard I can see lots of the metrics that are in the kube-state-metrics repo, but not the ReplicaSet ones. Any ideas?</p>
<p>Kube state metrics version: v1.9.7</p>
<p>Update:</p>
<p>For example, I can see most of <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/docs/deployment-metrics.md" rel="nofollow noreferrer">deployment metrics</a> on the dashboard, but no metrics for replica sets.</p>
<p><a href="https://i.stack.imgur.com/W2GXB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W2GXB.png" alt="enter image description here" /></a></p>
| cosmos-1905-14 | <p>This is a community wiki answer posted for better clarity. Feel free to expand it.</p>
<p>As @cosmos-1905-14 described, he checked the kube-state-metrics logs and found that the ServiceAccount did not have sufficient rights to access ReplicaSets. After he added the necessary rights, the issue was resolved.</p>
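<p>For reference, a minimal sketch of the RBAC rule kube-state-metrics needs for ReplicaSet metrics (the ClusterRole name is illustrative; in practice you would extend the role shipped with your kube-state-metrics deployment):</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kube-state-metrics
rules:
  - apiGroups: ["apps"]
    resources: ["replicasets"]
    verbs: ["list", "watch"]
</code></pre>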
| Andrew Skorkin |
<p>We have a basic AKS cluster set up and we need to whitelist this AKS cluster's outbound IP address in one of our services. I scanned the AKS cluster settings in the Azure portal, but I was not able to find any outbound IP address.</p>
<p>How do we get the outbound IP?</p>
<p>Thanks -Nen</p>
| nen | <p>If you are using an AKS cluster with a <strong>Standard SKU Load Balancer</strong> i.e.</p>
<pre><code>$ az aks show -g $RG -n akstest --query networkProfile.loadBalancerSku -o tsv
Standard
</code></pre>
<p>and the <code>outboundType</code> is set to <code>loadBalancer</code> i.e.</p>
<pre><code>$ az aks show -g $RG -n akstest --query networkProfile.outboundType -o tsv
loadBalancer
</code></pre>
<p>then you should be able to fetch the outbound IP addresses for the AKS cluster like (mind the capital <code>IP</code>):</p>
<pre><code>$ az aks show -g $RG -n akstest --query networkProfile.loadBalancerProfile.effectiveOutboundIPs[].id
[
"/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/MC_xxxxxx_xxxxxx_xxxxx/providers/Microsoft.Network/publicIPAddresses/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
]
# Using $PUBLIC_IP_RESOURCE_ID obtained from the last step
$ az network public-ip show --ids $PUBLIC_IP_RESOURCE_ID --query ipAddress -o tsv
xxx.xxx.xxx.xxx
</code></pre>
<p>For more information please check <a href="https://learn.microsoft.com/en-us/azure/aks/load-balancer-standard" rel="noreferrer">Use a public Standard Load Balancer in Azure Kubernetes Service (AKS)</a></p>
<hr />
<p>If you are using an AKS cluster with a <strong>Basic SKU Load Balancer</strong> i.e.</p>
<pre><code>$ az aks show -g $RG -n akstest --query networkProfile.loadBalancerSku -o tsv
Basic
</code></pre>
<p>and the <code>outboundType</code> is set to <code>loadBalancer</code> i.e.</p>
<pre><code>$ az aks show -g $RG -n akstest --query networkProfile.outboundType -o tsv
loadBalancer
</code></pre>
<p>Load Balancer Basic chooses a single frontend to be used for outbound flows when multiple (public) IP frontends are candidates for outbound flows. This <strong>selection is not configurable</strong>, and you should <strong>consider the selection algorithm to be</strong> <em><strong>random</strong></em>. This public IP address is only valid for the lifespan of that resource. If you delete the Kubernetes <code>LoadBalancer</code> service, the associated load balancer and IP address are also deleted. If you want to assign a specific IP address or retain an IP address for redeployed Kubernetes services, you can <a href="https://learn.microsoft.com/en-us/azure/aks/egress" rel="noreferrer">create and use a static public IP address</a>, as @nico-meisenzahl mentioned.</p>
<p>The static IP address works only as long as you have one Service on the AKS cluster (with a Basic Load Balancer). When multiple addresses are configured on the Azure Load Balancer, any of these public IP addresses are a candidate for outbound flows, and one is selected at random. Thus every time a Service gets added, you will have to add that corresponding IP address to the whitelist which isn't very scalable. [<a href="https://learn.microsoft.com/en-us/azure/aks/egress#create-a-service-with-the-static-ip" rel="noreferrer">Reference</a>]</p>
<hr />
<p>In the latter case, we would recommend setting <code>outBoundType</code> to <code>userDefinedRouting</code> at the time of AKS cluster creation. If <code>userDefinedRouting</code> is set, AKS won't automatically configure egress paths. The egress setup must be done by you.</p>
<p>The AKS cluster must be deployed into an existing virtual network with a subnet that has been previously configured because when not using standard load balancer (SLB) architecture, you must establish explicit egress. As such, this architecture requires explicitly sending egress traffic to an appliance like a firewall, gateway, proxy or to allow the Network Address Translation (NAT) to be done by a public IP assigned to the standard load balancer or appliance.</p>
<h5>Load balancer creation with userDefinedRouting</h5>
<p>AKS clusters with an outbound type of UDR receive a standard load balancer (SLB) only when the first Kubernetes service of type 'loadBalancer' is deployed. The load balancer is configured with a public IP address for inbound requests and a backend pool for <em>inbound</em> requests. Inbound rules are configured by the Azure cloud provider, but no <strong>outbound public IP address or outbound rules</strong> are configured as a result of having an outbound type of UDR. Your UDR will still be the only source for egress traffic.</p>
<p>Azure load balancers <a href="https://azure.microsoft.com/pricing/details/load-balancer/" rel="noreferrer">don't incur a charge until a rule is placed</a>.</p>
<p>[<strong>!! Important:</strong> Using outbound type is an advanced networking scenario and requires proper network configuration.]</p>
<p>Here's instructions to <a href="https://learn.microsoft.com/en-us/azure/aks/egress-outboundtype#deploy-a-cluster-with-outbound-type-of-udr-and-azure-firewall" rel="noreferrer">Deploy a cluster with outbound type of UDR and Azure Firewall</a></p>
| Srijit_Bose-MSFT |
<p>I set up a Kubernetes cluster on GKE that runs a UDP game server. How do I ask Kubernetes to start a new container from a game client, and how do I then get the IP address and port of that container so I can communicate with the server from the client?</p>
| Ben Baldwin | <p>You can create a Service in Google Kubernetes Engine that can be accessed via a public IP address. You can create a load-balanced, autoscaled service, which internally takes care of creating new pods when traffic increases.</p>
<p>Here is some documentation for creating a <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/exposing-apps" rel="nofollow noreferrer">service to access the cluster</a> and creating a <a href="https://cloud.google.com/architecture/udp-with-network-load-balancing" rel="nofollow noreferrer">load balancer service</a>. There are other equivalent ways to start new pods on demand using the <a href="https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/" rel="nofollow noreferrer">Kubernetes API</a>.</p>
<p>You can get the pod IP addresses by using this command: <code>kubectl get pods -o wide</code></p>
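<p>A sketch of such a Service for a UDP game server (the name, port, and selector are placeholders):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: game-server
spec:
  type: LoadBalancer
  selector:
    app: game-server
  ports:
    - protocol: UDP
      port: 7777
      targetPort: 7777
</code></pre>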
| Goli Nikitha |
<p>I have a service running in Kubernetes with the path prefix <code>/api</code>. Now I want to use Ingress to access it through the host address <code>example.com/service1/</code>, because I have multiple services. The problem is that the Ingress forwards all requests under <code>service1/</code> to the service with the <code>service1/</code> prefix still attached, but I want <code>example.com/service1/</code> to reach my service as just <code>/</code> (so a request to <code>example.com/service1/api</code> reaches the service as <code>/api</code>). Can I achieve something like this? I'm writing the Ingress configuration in the Helm chart of the service.
The Ingress configuration in the service chart's <code>values.yaml</code> looks like this:</p>
<pre><code>...
ingress:
  enabled: true
  className: ""
  annotations: {}
    # kubernetes.io/ingress.class: nginx // this comment was created when generating helm chart
  hosts:
    - host: example.com
      paths:
        - path: /service1(/|$)(.*)
          pathType: ImplementationSpecific
          backend:
            serviceName: $name
            servicePort: http
  tls: []
...
</code></pre>
<p>And <code>ingress.yaml</code> inside the <code>templates/</code> folder is the default file that was generated by <code>helm</code> when I created the chart for the service. It just uses values from <code>values.yaml</code> to configure the Ingress. I only found <a href="https://stackoverflow.com/questions/63532836/kubernetes-ingress-not-redirecting-to-correct-path">this question</a>, which basically says that I need to either add the prefix <code>service1/</code> to my service or just use <code>/api</code> in the Ingress configuration. But is there a solution suitable for my needs?</p>
| Arzybek | <p><em>This is a community wiki answer posted for better visibility. Feel free to expand it.</em></p>
<p>Based on the solution provided in the comments (method 1 example 2 in the Medium <a href="https://medium.com/ww-engineering/kubernetes-nginx-ingress-traffic-redirect-using-annotations-demystified-b7de846fb43d" rel="nofollow noreferrer">post</a> ), a possible <code>values.yaml</code> file for Ingress might looks like below.</p>
<pre><code>...
ingress:
  enabled: true
  className: ""
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: http://example.com/$2
  hosts:
    - host: example.com
      paths:
        - path: /service1(/|$)(.*)
          backend:
            serviceName: $name
            servicePort: http
  tls: []
...
</code></pre>
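<p>Note that with the standard ingress-nginx controller, <code>rewrite-target</code> is usually given a path capture rather than an absolute URL; if the goal is an internal path rewrite (no HTTP redirect visible to the client), the common form for the path above would be:</p>
<pre><code>nginx.ingress.kubernetes.io/rewrite-target: /$2
</code></pre>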
| Andrew Skorkin |
<p>New to Docker/K8s. I need to be able to mount all the containers (across all pods) on my K8s cluster to a shared file system, so that they can all read from and write to files on this shared file system. The file system needs to be something residing inside of -- or at the very least accessible to -- all containers in the K8s cluster.</p>
<p>As far as I can tell, I have two options:</p>
<ol>
<li>I'm guessing K8s offers some type of persistent, durable block/volume storage facility? Maybe PV or PVC?</li>
<li>Maybe launch a Dockerized Samba container and give my others containers access to it somehow?</li>
</ol>
<p>Does K8s offer this type of shared file system capability or do I need to do something like a Dockerized Samba?</p>
| hotmeatballsoup | <p>NFS is a common solution to provide the file-sharing facilities you describe. <a href="https://matthewpalmer.net/kubernetes-app-developer/articles/kubernetes-volumes-example-nfs-persistent-volume.html" rel="nofollow noreferrer">Here</a>'s a good explanation with an example to begin with. Samba can be used if your file server is Windows-based.</p>
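<p>A minimal sketch of the NFS approach (the server address and export path are placeholders); the <code>ReadWriteMany</code> access mode is what lets many pods across nodes mount the same volume:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10
    path: /exports/shared
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi
</code></pre>
<p>Each pod then mounts <code>shared-nfs-pvc</code> as a volume, the same way as any other PVC.</p>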
| gohm'c |
<p>We have deployed Apache Spark on Azure Kubernetes Services (AKS).</p>
<p>Able to submit spark application via CLI <a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html#cluster-mode" rel="nofollow noreferrer">https://spark.apache.org/docs/latest/running-on-kubernetes.html#cluster-mode</a></p>
<p><strong>Question</strong>: Is it possible to submit a spark job/run a spark application from Azure Data factory version 2? That way we can orchestrate spark application from data factory.</p>
| databash | <h5>High-Level Architecture</h5>
<p><a href="https://i.stack.imgur.com/4QFj3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4QFj3.png" alt="enter image description here" /></a></p>
<p>Quick explanation of the architecture flow:</p>
<ul>
<li><p>In order to connect any on-premise data sources to Azure, you can install an integration runtime (executable installer from Data Factory) on a dedicated VM. This allows Data Factory to create connections to these on-premise servers.</p>
</li>
<li><p>A Data Factory pipeline will load raw data into Data Lake Storage Gen 2. ADLS2 applies hierarchical namespace to blob storage (think folders). A downstream task in the pipeline will trigger a custom Azure Function.</p>
</li>
<li><p>Custom python Azure Function will load a config yaml from ADLS2, and submit a spark application to k8s service via k8s python client.
The container registry is essentially a docker hub in Azure. Docker images are deployed to the registry, and k8s will pull as required.</p>
</li>
<li><p>Upon submission by the Azure Function, k8s will pull the spark image from container registry and execute the spark application. Spark leverages the k8s scheduler to automatically spin up driver and executor pods. Once the application is complete, the executor pods self-terminate while the driver pod persists logs and remains in "completed" state (which uses no resources).</p>
</li>
</ul>
<p>ADF Pipeline Example:</p>
<p><a href="https://i.stack.imgur.com/AgiAN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AgiAN.png" alt="enter image description here" /></a></p>
<p>A few challenges with this setup:</p>
<ol>
<li>No programmatic way of submitting a spark application to k8s (running command line "kubectl" or "spark-submit" isn't going to cut it in production)</li>
<li>No OOTB method to orchestrate a spark application submission to k8s using Data Factory</li>
<li>Spark 2.4.5 with Hadoop 2.7 doesn't support read/writes to ADLS2, and cannot be built with Hadoop 3.2.1 (unless you have extensive dev knowledge of spark source code)</li>
</ol>
<p>Let's walk through the top secret tools used to make the magic happen. Deployed a custom <a href="https://github.com/GoogleCloudPlatform/spark-on-k8s-operator" rel="nofollow noreferrer">spark-on-k8s-operator</a> resource to the kubernetes cluster which allows submitting spark applications with a <a href="https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/examples/spark-py-pi.yaml" rel="nofollow noreferrer">yaml</a> file. However, the documentation only shows how to submit a spark app using cmd line <code>kubectl apply -f yaml</code>. To submit the spark application programmatically (ie - via REST API), leveraged the <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/docs/CustomObjectsApi.md" rel="nofollow noreferrer">CustomObjectsApi</a> from k8s python client SDK inside a python Azure Function. Why? Because ADF has an OOTB task to trigger Azure Functions. 🎉</p>
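<p>For illustration, the kind of yaml the operator accepts looks roughly like the sketch below; the image, main application file, service account and resource sizes are assumed placeholders, and it is an object of this shape that the Azure Function submits through the CustomObjectsApi:</p>
<pre><code>apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
  name: pyspark-example                                  # assumed name
  namespace: default
spec:
  type: Python
  pythonVersion: "3"
  mode: cluster
  image: "myregistry.azurecr.io/spark-py:3.0.0"          # assumed image in ACR
  mainApplicationFile: "local:///opt/spark/app/main.py"  # assumed path inside the image
  sparkVersion: "3.0.0"
  driver:
    cores: 1
    memory: "1g"
    serviceAccount: spark                                # assumed service account
  executor:
    cores: 1
    instances: 2
    memory: "1g"
</code></pre>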
<p>Spark 3.0.0-preview2 already has built-in integration with Hadoop 3.2+ so that's not so top secret, but there are a couple of things to look out for when you build the docker image. The bin/docker-image-tool.sh needs extra quotes on line 153 (I think this is just a problem with windows filesystem). To support read/write to ADLS2, you need to download the <a href="https://mvnrepository.com/artifact/org.apache.hadoop/hadoop-azure" rel="nofollow noreferrer">hadoop-azure jar</a> & <a href="https://mvnrepository.com/artifact/org.wildfly.openssl/wildfly-openssl" rel="nofollow noreferrer">wildfly.openssl</a> jar (place them in spark_home/jars). Finally, replace the kubernetes/dockerfiles/spark/entrypoint.sh with the one from Spark 2.4.5 pre-built for Hadoop 2.7+ (missing logic to support python driver).</p>
<p>Quick tips: package any custom jars into spark_home/jars before building your docker image & reference them as dependencies via "local:///opt/spark/jars", upload extra python libs to ADLS2 and use <code>sc.addPyFile(public_url_of_python_lib)</code> in your main application before importing.</p>
<p>Reference: <a href="https://www.linkedin.com/pulse/ultimate-evolution-spark-data-pipelines-azure-kubernetes-kenny-bui/" rel="nofollow noreferrer">https://www.linkedin.com/pulse/ultimate-evolution-spark-data-pipelines-azure-kubernetes-kenny-bui/</a></p>
| Srijit_Bose-MSFT |
<p>I have created a GKE Service Account.</p>
<p>I have been trying to use it within GKE, but I get the error:</p>
<pre><code>pods "servicepod" is forbidden: error looking up service account service/serviceaccount: serviceaccount "serviceaccount" not found
</code></pre>
<p>I have followed the setup guide in this <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform#gcloud_1" rel="nofollow noreferrer">documentation</a>.</p>
<p>1.Created a GCP Service Account called "serviceaccount"</p>
<p>2.I created, and downloaded the JSON key as key.json.</p>
<p>3.<code>kubectl create secret generic serviceaccountkey --from-file key.json -n service</code></p>
<p>4.Added the following items to my deployment:</p>
<pre><code> spec:
volumes:
- name: serviceaccountkey
secret:
secretName: serviceaccountkey
containers:
volumeMounts:
- name: serviceaccountkey
mountPath: /var/secrets/google
env:
- name: GOOGLE_APPLICATION_CREDENTIALS
value: /var/secrets/google/key.json
</code></pre>
<p>When I deploy this out, I get:
<code>pods "service-7cdbcc67b9-" is forbidden: error looking up service account service/serviceaccount: serviceaccount "serviceaccount" not found</code></p>
<p>I'm not sure what else to do to get this working, I've followed the guide and can't see anything that's been missed.</p>
<p>Any help on this would be greatly appreciated!</p>
| fuzzi | <p>One of the reasons for getting this error is that the service account was created in one namespace while you are trying to use it from another namespace.</p>
<p>We can resolve this error by creating a RoleBinding for the service account in the new namespace. If the existing service account is in the default namespace, you can use this YAML file with the new namespace set in the RoleBinding.</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: kubernetes-enforce-default
namespace: <new-namespace>
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-enforce
subjects:
- kind: ServiceAccount
name: kubernetes-enforce
namespace: kube-system
</code></pre>
<p>Refer to this <a href="https://stackoverflow.com/questions/63283438/how-can-i-create-a-service-account-for-all-namespaces-in-a-kubernetes-cluster">similar issue</a> for more information.</p>
| Goli Nikitha |
<p>In my helm template I want to have the following block:</p>
<pre><code>ports:
- containerPort: {{ .Values.ports.containerPort | default 8080}}
name: {{ .Values.ports.name | default "8080tcp02" | quote}}
protocol: {{ .Values.ports.protocol | default "TCP" | quote}}
{{- end }}
</code></pre>
<p>If the <code>ports</code> block exists in the <code>values.yaml</code> file, then take the values for <code>containerPort</code>, <code>name</code> and <code>protocol</code> from there. Otherwise, if the <code>ports</code> block is absent from <code>values.yaml</code>, take the default values. Right now I'm getting <code>nil pointer evaluating interface {}.containerPort</code> if the <code>ports</code> block is absent from <code>values.yaml</code>.</p>
<p>I also tried to do so:</p>
<pre><code>{{- with .Values.ports }}
ports:
- containerPort: {{ .containerPort | default 8080}}
name: {{ .name | default "8080tcp02"}}
protocol: {{ .protocol | default "TCP"}}
{{- end }}
</code></pre>
<p>But then, if the <code>ports</code> block is absent in <code>values.yaml</code>, it will be absent from the result as well. How can I achieve this?</p>
<p>Thanks in advance!</p>
<p>EDIT:</p>
<pre><code> ports:
{{- if .Values.ports }}
{{- with . }}
- containerPort: {{ .containerPort }}
name: {{ .name | quote }}
protocol: {{ .protocol | quote }}
{{- else }}
- containerPort: 8080
name: 8080tcp02
protocol: TCP
{{- end }}
{{- end }}
</code></pre>
<p>Now it works with if statement, but not with <code>with</code> statement.</p>
| Georgii Lvov | <p>Try if/else block:</p>
<pre><code>ports:
{{- if .Values.ports }}
- containerPort: {{ .Values.ports.containerPort | default 8080 }}
name: {{ .Values.ports.name | default "8080tcp02" }}
protocol: {{ .Values.ports.protocol | default "TCP" }}
{{- else }}
- containerPort: 8080
name: "8080tcp02"
protocol: "TCP"
{{- end }}
</code></pre>
<p>Loop through the ports:</p>
<pre><code> ports:
{{- if .Values.ports }}
{{- range $content := .Values.ports }}
- containerPort: {{ $content.containerPort | default 8080 }}
name: {{ $content.name | default "8080tcp02" }}
protocol: {{ $content.protocol | default "TCP" }}
{{- end }}
{{- else }}
- containerPort: 8080
name: "8080tcp02"
protocol: "TCP"
{{- end }}
</code></pre>
| gohm'c |
<p>I have a Django application deployed on a K8s cluster. I need to send some emails (some are scheduled, others should be sent asynchronously), and the idea was to delegate those emails to Celery.</p>
<p>So I set up a Redis server (with Sentinel) on the cluster, and deployed an instance for a Celery worker and another for Celery beat.</p>
<p>The k8s object used to deploy the Celery worker is pretty similar to the one used for the Django application. The main difference is the command introduced on the celery worker: <code>['celery', '-A', 'saleor', 'worker', '-l', 'INFO']</code></p>
<p>Scheduled emails are sent with no problem (celery worker and celery beat don't have any problems connecting to the Redis server). However, the asynchronous emails - "delegated" by the Django application - are not sent because it is not possible to connect to the Redis server (<code>ERROR celery.backends.redis Connection to Redis lost: Retry (1/20) in 1.00 second. [PID:7:uWSGIWorker1Core0]</code>)</p>
<p>Error 1:</p>
<pre><code>socket.gaierror: [Errno -5] No address associated with hostname
</code></pre>
<p>Error 2:</p>
<pre><code>redis.exceptions.ConnectionError: Error -5 connecting to redis:6379. No address associated with hostname.
</code></pre>
<p>The Redis server, Celery worker, and Celery beat are in a "redis" namespace, while the other things, including the Django app, are in the "development" namespace.</p>
<p>Here are the variables that I define:</p>
<pre><code>- name: CELERY_PASSWORD
valueFrom:
secretKeyRef:
name: redis-password
key: redis_password
- name: CELERY_BROKER_URL
value: redis://:$(CELERY_PASSWORD)@redis:6379/1
- name: CELERY_RESULT_BACKEND
value: redis://:$(CELERY_PASSWORD)@redis:6379/1
</code></pre>
<p>I also tried to define <code>CELERY_BACKEND_URL</code> (with the same value as <code>CELERY_RESULT_BACKEND</code>), but it made no difference.</p>
<p>What could be the cause for not connecting to the Redis server? Am I missing any variables? Could it be because pods are in a different namespace?</p>
<p>Thanks!</p>
| Sofia | <p><strong>Solution from @sofia that helped to fix this issue:</strong></p>
<p>You need to use the same namespace for the Redis server and for the Django application. In this particular case, change the namespace "redis" to "development" where the application is deployed.</p>
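<p>For context, the short hostname <code>redis</code> in the broker URL only resolves to a Service in the pod's own namespace, which is why splitting the application and Redis across namespaces breaks the connection. Once both run in <code>development</code>, the original environment block works unchanged (the sketch below assumes the Service is still named <code>redis</code>):</p>
<pre><code>- name: CELERY_BROKER_URL
  value: redis://:$(CELERY_PASSWORD)@redis:6379/1   # "redis" now resolves within "development"
- name: CELERY_RESULT_BACKEND
  value: redis://:$(CELERY_PASSWORD)@redis:6379/1
</code></pre>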
| Andrew Skorkin |
<p>I have a GKE cluster.</p>
<p>I used <code>kubectl apply</code> to apply the following YAML from my local machine:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: flask-app-svc
namespace: myapp
spec:
ports:
- port: 5000
targetPort: 5000
selector:
component: flask-app
</code></pre>
<p>Got applied. All Good. ✅</p>
<hr />
<p>Then I used <code>kubectl get service</code> to get back the YAML from the cluster. It returned this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
annotations:
cloud.google.com/neg: '{"ingress":true}'
cloud.google.com/neg-status: '{"network_endpoint_groups":{"5000":"k8s1-5fe0c3c1-myapp-flask-app-svc-5000-837dba94"},"zones":["asia-southeast1-a"]}'
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"flask-app-svc","namespace":"myapp"},"spec":{"ports":[{"port":5000,"targetPort":5000}],"selector":{"component":"flask-app"}}}
creationTimestamp: "2021-10-29T14:40:49Z"
name: flask-app-svc
namespace: myapp
resourceVersion: "242820340"
uid: ad80f634-5aab-4147-8f71-11ccc44fd867
spec:
clusterIP: 10.22.52.180
clusterIPs:
- 10.22.52.180
ports:
- port: 5000
protocol: TCP
targetPort: 5000
selector:
component: flask-app
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
</code></pre>
<hr />
<h3>1. What kubernetes "concept" is at play here?</h3>
<h3>2. Why are the 2 YAMLs SO DIFFERENT from each other?</h3>
<h3>3. What is happening under the hood?</h3>
<h3>4. Is this specific to GKE, or would any k8s cluster behave this way?</h3>
<h3>5. Where can I find some info/articles to learn more about this concept?</h3>
<hr />
<p>Thank you in advance.</p>
<p>I've been trying to wrap my head around this for a while. Appreciate any help you can advise and suggest here.</p>
| Rakib | <p>A <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/service" rel="nofollow noreferrer">service in GKE</a> is a way to expose the applications running in a set of pods to their intended final users. All these elements form part of a GKE cluster.
If you apply a YAML to create a service, several additional things are needed in order to make the application reachable for your users. One of the features of Kubernetes and of GKE is to automatically create, set and maintain the resources required, in this case, to create a service. All those extra settings and definitions made by GKE are recorded in the YAML you get back from the cluster.</p>
<p>If you want to know more about this concept, you can start with the <a href="https://cloud.google.com/kubernetes-engine" rel="nofollow noreferrer">Google Kubernetes Engine product page</a>, or consult the <a href="https://cloud.google.com/kubernetes-engine/docs" rel="nofollow noreferrer">GKE documentation</a> on that same page. Another good starting point is to read this <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/kubernetes-engine-overview" rel="nofollow noreferrer">GKE overview</a>.</p>
| Jesus Huesca |
<p>I'm trying to learn nginx-ingress-controller per <a href="https://devopscube.com/setup-ingress-kubernetes-nginx-controller" rel="nofollow noreferrer">https://devopscube.com/setup-ingress-kubernetes-nginx-controller</a></p>
<p>my laptop is at 192.168.12.71</p>
<p>my ingress-nginx-controller pod and service are at 192.168.1.67.</p>
<p>a. Pod</p>
<pre><code> bino@corobalap ~/k8nan/ingresnginx/nginx-ingress-controller bino01 ± kubectl --namespace ingress-nginx describe pod ingress-nginx-controller
Name: ingress-nginx-controller-78f456c879-w6pbd
Namespace: ingress-nginx
Priority: 0
Node: bino-k8-wnode1/192.168.1.67
Start Time: Tue, 19 Jul 2022 12:49:19 +0700
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
pod-template-hash=78f456c879
Annotations: <none>
Status: Running
IP: 192.168.1.67
IPs:
IP: 192.168.1.67
Controlled By: ReplicaSet/ingress-nginx-controller-78f456c879
Containers:
controller:
Container ID: containerd://8606c2dd3800502eb56dd6de2decab93f2b9567916b462a83acf8917bcb7696d
Image: k8s.gcr.io/ingress-nginx/controller:v1.1.1
Image ID: k8s.gcr.io/ingress-nginx/controller@sha256:0bc88eb15f9e7f84e8e56c14fa5735aaa488b840983f87bd79b1054190e660de
Ports: 80/TCP, 443/TCP, 8443/TCP
Host Ports: 80/TCP, 443/TCP, 8443/TCP
Args:
/nginx-ingress-controller
--publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
--election-id=ingress-controller-leader
--controller-class=k8s.io/ingress-nginx
--configmap=$(POD_NAMESPACE)/ingress-nginx-controller
--validating-webhook=:8443
--validating-webhook-certificate=/usr/local/certificates/cert
--validating-webhook-key=/usr/local/certificates/key
State: Running
Started: Tue, 19 Jul 2022 12:49:21 +0700
Ready: True
Restart Count: 0
Requests:
cpu: 100m
memory: 90Mi
Liveness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
Readiness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
POD_NAME: ingress-nginx-controller-78f456c879-w6pbd (v1:metadata.name)
POD_NAMESPACE: ingress-nginx (v1:metadata.namespace)
LD_PRELOAD: /usr/local/lib/libmimalloc.so
Mounts:
/usr/local/certificates/ from webhook-cert (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5tzs5 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
webhook-cert:
Type: Secret (a volume populated by a Secret)
SecretName: ingress-nginx-admission
Optional: false
kube-api-access-5tzs5:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
</code></pre>
<p>b. service :</p>
<pre><code> ✘ bino@corobalap ~/k8nan/ingresnginx/nginx-ingress-controller bino01 ± kubectl --namespace ingress-nginx describe service ingress-nginx-controller
Name: ingress-nginx-controller
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
Annotations: <none>
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.109.72.34
IPs: 10.109.72.34
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 31324/TCP
Endpoints: 192.168.1.67:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 31645/TCP
Endpoints: 192.168.1.67:443
Session Affinity: None
External Traffic Policy: Local
Events: <none>
</code></pre>
<p>my hello-app pod, service and ingress object are also at 192.168.1.67</p>
<p>a. pod :</p>
<pre><code> bino@corobalap ~/k8nan/ingresnginx/nginx-ingress-controller bino01 ± kubectl --namespace dev describe pod hello-app
Name: hello-app-5c554f556c-4jhhv
Namespace: dev
Priority: 0
Node: bino-k8-wnode1/192.168.1.67
Start Time: Tue, 19 Jul 2022 11:45:09 +0700
Labels: app=hello
pod-template-hash=5c554f556c
Annotations: <none>
Status: Running
IP: 10.244.1.19
IPs:
IP: 10.244.1.19
Controlled By: ReplicaSet/hello-app-5c554f556c
Containers:
hello:
Container ID: containerd://8f4c02f60c82a70db4f7d0954ee19f606493a6ee5517d0d0f7641429682d86fb
Image: gcr.io/google-samples/hello-app:2.0
Image ID: gcr.io/google-samples/hello-app@sha256:2b0febe1b9bd01739999853380b1a939e8102fd0dc5e2ff1fc6892c4557d52b9
Port: <none>
Host Port: <none>
State: Running
Started: Tue, 19 Jul 2022 11:45:16 +0700
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6zghn (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-6zghn:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
Name: hello-app-5c554f556c-c879k
Namespace: dev
Priority: 0
Node: bino-k8-wnode1/192.168.1.67
Start Time: Tue, 19 Jul 2022 11:45:09 +0700
Labels: app=hello
pod-template-hash=5c554f556c
Annotations: <none>
Status: Running
IP: 10.244.1.21
IPs:
IP: 10.244.1.21
Controlled By: ReplicaSet/hello-app-5c554f556c
Containers:
hello:
Container ID: containerd://37ff6a278c3e5398ff3c70d1b2db47bfaa421a757eefaf4f34befa83b9fd8569
Image: gcr.io/google-samples/hello-app:2.0
Image ID: gcr.io/google-samples/hello-app@sha256:2b0febe1b9bd01739999853380b1a939e8102fd0dc5e2ff1fc6892c4557d52b9
Port: <none>
Host Port: <none>
State: Running
Started: Tue, 19 Jul 2022 11:45:17 +0700
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xr2hw (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-xr2hw:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
Name: hello-app-5c554f556c-n8pf8
Namespace: dev
Priority: 0
Node: bino-k8-wnode1/192.168.1.67
Start Time: Tue, 19 Jul 2022 11:45:09 +0700
Labels: app=hello
pod-template-hash=5c554f556c
Annotations: <none>
Status: Running
IP: 10.244.1.20
IPs:
IP: 10.244.1.20
Controlled By: ReplicaSet/hello-app-5c554f556c
Containers:
hello:
Container ID: containerd://833ac8b81b454261b472cf8e2d790cdf712a2e038074acd8d53563c25c677bdd
Image: gcr.io/google-samples/hello-app:2.0
Image ID: gcr.io/google-samples/hello-app@sha256:2b0febe1b9bd01739999853380b1a939e8102fd0dc5e2ff1fc6892c4557d52b9
Port: <none>
Host Port: <none>
State: Running
Started: Tue, 19 Jul 2022 11:45:17 +0700
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2hh5w (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-2hh5w:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
</code></pre>
<p>b. service</p>
<pre><code> bino@corobalap ~/k8nan/ingresnginx/nginx-ingress-controller bino01 ± kubectl --namespace dev describe service hello-service
Name: hello-service
Namespace: dev
Labels: app=hello
Annotations: <none>
Selector: app=hello
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.97.83.176
IPs: 10.97.83.176
Port: <unset> 80/TCP
TargetPort: 8080/TCP
Endpoints: 10.244.1.19:8080,10.244.1.20:8080,10.244.1.21:8080
Session Affinity: None
Events: <none>
</code></pre>
<p>c. ingress object</p>
<pre><code> ✘ bino@corobalap ~/k8nan/ingresnginx/nginx-ingress-controller bino01 ± kubectl --namespace dev describe ingress hello-app-ingress
Name: hello-app-ingress
Labels: <none>
Namespace: dev
Address:
Ingress Class: nginx
Default backend: <default>
Rules:
Host Path Backends
---- ---- --------
kopet.kpt
/ hello-service:80 (10.244.1.19:8080,10.244.1.20:8080,10.244.1.21:8080)
Annotations: <none>
Events: <none>
</code></pre>
<p>I tried to access hello-service locally on the node where it lives, and got:</p>
<pre><code>ubuntu@bino-k8-wnode1:~$ curl http://10.244.1.19:8080
Hello, world!
Version: 2.0.0
Hostname: hello-app-5c554f556c-4jhhv
</code></pre>
<p>I set a dummy hostname 'kopet.kpt' to point to the node where all of this lives,
but when I tried to curl from my laptop, nginx says 404</p>
<pre><code> bino@corobalap ~/k8nan/ingresnginx/nginx-ingress-controller bino01 ± curl http://kopet.kpt/
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
</code></pre>
<p>Kindly tell me what to do/read to fix it</p>
<p>-bino-</p>
| Bino Oetomo | <p>Your ingress definition created rules that proxy traffic from one path to another. In your case I believe the reason it's not working is that the app is proxied to app-service:80/app but you're intending to serve traffic at the root path <code>/</code>. Please try adding this annotation to your ingress resource:</p>
<p><code>nginx.ingress.kubernetes.io/rewrite-target: /</code></p>
<p>Refer to the <a href="https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/rewrite" rel="nofollow noreferrer">rewrite examples</a> on GitHub for more information.</p>
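<p>As a rough sketch, the annotation could be added to the existing ingress like below (fields copied from the question; <code>pathType: Prefix</code> is assumed since the describe output does not show it):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-app-ingress
  namespace: dev
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: kopet.kpt
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-service
            port:
              number: 80
</code></pre>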
<p>Also, please provide a complete error message.</p>
| Venkata Satya Karthik Varun Ku |
<p>I have a multi-container pod in k8s, let's call them A and B. When stopping the pod, A must stop before B because A needs B until it's off.</p>
<p>To do that, I registered a <code>preStop</code> hook on A so A can gracefully stop before B.</p>
<p>However I'm not sure this is a good solution, because I miss some information I can't find in k8s documentation:</p>
<p>What happens when a multi-container pod is stopped?</p>
<ul>
<li>All containers <code>preStop</code> hooks are called, then when they are all over all containers receive <code>SIGTERM</code>, or</li>
<li>In parallel, all containers receive <code>preStop</code> if they have one or directly <code>SIGTERM</code> if they don't?</li>
</ul>
<p>In the second case, <code>preStop</code> is useless for what I want to do as B will be instantly killed.</p>
| Benjamin Barrois | <p>Typically, during pod deletion, the container runtime sends a TERM signal to the main process in each container.</p>
<p>According to <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination" rel="nofollow noreferrer">the official documentation</a>:</p>
<blockquote>
<ol>
<li><p>If one of the Pod's containers has defined a <code>preStop</code> <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks" rel="nofollow noreferrer">hook</a>,
the kubelet runs that hook inside of the container.</p>
</li>
<li><p>The kubelet triggers the container runtime to send a TERM signal to process 1 inside each container.</p>
</li>
</ol>
</blockquote>
<p>This numbering can be confusing - it looks as if the TERM signal will be sent only after the <code>preStop</code> hook has finished.
I decided to check the actual order with a simple example below.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: lifecycle-demo
spec:
restartPolicy: Never
volumes:
- name: config
configMap:
name: nginx-conf
containers:
- name: container-1
image: nginx
lifecycle:
preStop:
exec:
command: ["/bin/sleep","15"]
ports:
- containerPort: 80
- name: container-2
image: nginx
ports:
- containerPort: 81
volumeMounts:
- name: config
mountPath: /etc/nginx/conf.d
terminationGracePeriodSeconds: 30
</code></pre>
<p>Container-1 has a <code>preStop</code> hook with a 15-second delay.
I've connected to both containers to observe the behavior during pod deletion.</p>
<p><strong>Result</strong></p>
<p>After pod deletion:</p>
<ol>
<li><p>Container-1 worked for 15 seconds, before the connection was lost</p>
</li>
<li><p>Container-2 immediately lost connection</p>
</li>
</ol>
<p><strong>Conclusion</strong></p>
<p>If the container has a <code>preStop</code> hook, Kubernetes will try to execute it, and only then will the container receive the TERM signal. <em>The main condition in this case: the grace period has not expired.</em></p>
<p>If the container doesn't have a <code>preStop</code> hook, it will receive the TERM signal immediately after the pod deletion is requested. Thus, it will not wait for the <code>preStop</code> hook of another container to finish.</p>
<blockquote>
<p><strong>Note:</strong> The containers in the Pod receive the TERM signal at different times and in an arbitrary order. If the order of shutdowns
matters, consider using a <code>preStop</code> hook to synchronize.</p>
</blockquote>
| Andrew Skorkin |
<p>I have changed my Docker image from an Alpine base image to node:14.16-buster. While running the code I am getting an 'apk not found' error.</p>
<p>Sharing the codes snippet :</p>
<pre><code>FROM node:14.16-buster
# ========= steps for Oracle instant client installation (start) ===============
RUN apk --no-cache add libaio libnsl libc6-compat curl && \
cd /tmp && \
curl -o instantclient-basiclite.zip https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip -SL && \
unzip instantclient-basiclite.zip && \
mv instantclient*/ /usr/lib/instantclient && \
rm instantclient-basiclite.zip
</code></pre>
<p>Can you please help here, what do I need to change?</p>
| Danish | <p>The issue comes from the fact that you're changing your base image from Alpine based to Debian based.</p>
<p>Debian based Linux distributions use <code>apt</code> as their package manager (Alpine uses <code>apk</code>).</p>
<p>That is the reason why you get <code>apk not found</code>. Use <code>apt install</code>, but also keep in mind that the package names could differ and you might need to look that up. After all, <code>apt</code> is a different piece of software with its own capabilities.</p>
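<p>As a rough sketch, the first <code>RUN</code> instruction could be rewritten for Debian roughly as below; the package names are best-effort equivalents (<code>libaio1</code> and <code>libnsl2</code> on buster) and may need adjusting, <code>libc6-compat</code> has no Debian counterpart since glibc is already the native libc, and <code>unzip</code> is installed explicitly:</p>
<pre><code>RUN apt-get update && \
    apt-get install -y --no-install-recommends libaio1 libnsl2 curl unzip && \
    rm -rf /var/lib/apt/lists/* && \
    cd /tmp && \
    curl -o instantclient-basiclite.zip https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip -SL && \
    unzip instantclient-basiclite.zip && \
    mv instantclient*/ /usr/lib/instantclient && \
    rm instantclient-basiclite.zip
</code></pre>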
| theUndying |
<p>I noticed that a new cluster role - "eks:cloud-controller-manager" - appeared in our EKS cluster. We never created it. I tried to find the origin/creation of this cluster role but was not able to find it.</p>
<p>any idea what does "eks:cloud-controller-manager" cluster role does in EKS cluster?</p>
<p><code>$ kubectl get clusterrole eks:cloud-controller-manager -o yaml</code></p>
<pre><code>kind: ClusterRole
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"rbac.authorization.k8s.io/v1","kind":"ClusterRole","metadata":{"annotations":{},"name":"eks:cloud-controller-manager"},"rules":[{"apiGroups":[""],"resources":["events"],"verbs":["create","patch","update"]},{"apiGroups":[""],"resources":["nodes"],"verbs":["*"]},{"apiGroups":[""],"resources":["nodes/status"],"verbs":["patch"]},{"apiGroups":[""],"resources":["services"],"verbs":["list","patch","update","watch"]},{"apiGroups":[""],"resources":["services/status"],"verbs":["list","patch","update","watch"]},{"apiGroups":[""],"resources":["serviceaccounts"],"verbs":["create","get"]},{"apiGroups":[""],"resources":["persistentvolumes"],"verbs":["get","list","update","watch"]},{"apiGroups":[""],"resources":["endpoints"],"verbs":["create","get","list","watch","update"]},{"apiGroups":["coordination.k8s.io"],"resources":["leases"],"verbs":["create","get","list","watch","update"]},{"apiGroups":[""],"resources":["serviceaccounts/token"],"verbs":["create"]}]}
creationTimestamp: "2022-08-02T00:25:52Z"
name: eks:cloud-controller-manager
resourceVersion: "762242250"
uid: 34e568bb-20b5-4c33-8a7b-fcd081ae0a28
rules:
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- update
- apiGroups:
- ""
resources:
- nodes
verbs:
- '*'
- apiGroups:
- ""
resources:
- serviceaccounts/token
verbs:
  - create
</code></pre>
<p>I tried to find this object in our GitOps repo but could not find it.</p>
| Amit Raj | <p>This role is created by AWS when you provision the cluster. This role is for the AWS <a href="https://kubernetes.io/docs/concepts/architecture/cloud-controller/" rel="nofollow noreferrer">cloud-controller-manager</a> to integrate AWS services (eg. CLB/NLB, EBS) with Kubernetes. You will also find other roles like eks:fargate-manager to integrate with Fargate.</p>
| gohm'c |
<p>Using different tools (kubent for example) I see that I have deprecated API in my cluster. For example</p>
<pre><code>Type: Ingress Name: kibana API: networking.k8s.io/v1beta1
</code></pre>
<p>But when I open Ingress itself, I can see this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
managedFields:
- manager: Go-http-client
operation: Update
apiVersion: networking.k8s.io/v1beta1
</code></pre>
<p>So, it shows that the API of my Ingress is actually "v1", not "beta". But the "managedFields" section indeed has the "v1beta1" API. According to the official <a href="https://kubernetes.io/docs/reference/using-api/server-side-apply/" rel="nofollow noreferrer">documentation</a>, this is a server-side field that should not be edited by the user.</p>
<p>So, my question is - should/can I do anything with deprecated API in this "managedField"? Will there be any issues during upgrade to next k8s version? Because currently my GCP console shows that there will be problems.</p>
| user15824359 | <p>There will be no issue while upgrading your Kubernetes cluster to the latest version even if you have a deprecated API version in the <code>managedFields</code> section of the Ingress configuration. The reason why you still see version <strong>“v1beta1”</strong> in the UI is that there are different parts of GKE that rely on both versions (v1 and v1beta1).</p>
<p>Between the Kubernetes versions 1.19 and 1.21, both endpoints <code>networking.k8s.io/v1</code> and <code>extensions/v1beta1</code> are supported. They are functionally identical, and it is down to the given UI's preference which version is displayed, so it won’t affect the functionality of your Ingress. That said, GKE clusters created on <a href="https://cloud.google.com/kubernetes-engine/docs/deprecations/apis-1-22#ingress-v122" rel="nofollow noreferrer">versions 1.22</a> and later stop supporting extensions/v1beta1 and networking.k8s.io/v1beta1 Ingress.</p>
| Srividya |
<p>I wrote a pipeline for a Hello World web app, nothing biggy, it's a simple hello world page.
I made it so that if the tests pass, it'll deploy it to a remote kubernetes cluster.</p>
<p>My problem is that if I change the html page and try to redeploy into k8s the page remains the same (the pods aren't rerolled and the image is outdated).</p>
<p>I have the <code>autopullpolicy</code> set to always. I thought of using specific tags within the deployment yaml but I have no idea how to integrate that with my jenkins (as in how do I make jenkins set the <code>BUILD_NUMBER</code> as the tag for the image in the deployment).</p>
<p>Here is my pipeline:</p>
<pre><code>pipeline {
agent any
environment
{
user = "NAME"
repo = "prework"
imagename = "${user}/${repo}"
registryCreds = 'dockerhub'
containername = "${repo}-test"
}
stages
{
stage ("Build")
{
steps {
// Building artifact
sh '''
docker build -t ${imagename} .
docker run -p 80 --name ${containername} -dt ${imagename}
'''
}
}
stage ("Test")
{
steps {
sh '''
IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' ${containername})
STATUS=$(curl -sL -w "%{http_code} \n" $IP:80 -o /dev/null)
if [ $STATUS -ne 200 ]; then
echo "Site is not up, test failed"
exit 1
fi
echo "Site is up, test succeeded"
'''
}
}
stage ("Store Artifact")
{
steps {
echo "Storing artifact: ${imagename}:${BUILD_NUMBER}"
script {
docker.withRegistry('https://registry.hub.docker.com', 'dockerhub') {
def customImage = docker.image(imagename)
customImage.push(BUILD_NUMBER)
customImage.push("latest")
}
}
}
}
stage ("Deploy to Kubernetes")
{
steps {
echo "Deploy to k8s"
script {
kubernetesDeploy(configs: "deployment.yaml", kubeconfigId: "kubeconfig") }
}
}
}
post {
always {
echo "Pipeline has ended, deleting image and containers"
sh '''
docker stop ${containername}
docker rm ${containername} -f
'''
}
}
}
</code></pre>
<p>EDIT:
I used <code>sed</code> to replace the latest tag with the build number every time I'm running the pipeline and it works. I'm wondering if any of you have other ideas because it seems so messy right now.
Thanks.</p>
| Nutz | <p>According to the information from the <a href="https://github.com/jenkinsci/kubernetes-cd-plugin#configure-the-plugin" rel="nofollow noreferrer">Kubernetes Continuous Deploy Plugin</a> (point 6), you can add <code>enableConfigSubstitution: true</code> to the <code>kubernetesDeploy()</code> section and use <code>${BUILD_NUMBER}</code> instead of <code>latest</code> in deployment.yaml:</p>
<blockquote>
<p>By checking "Enable Variable Substitution in Config", the variables
(in the form of $VARIABLE or `${VARIABLE}) in the configuration files
will be replaced with the values from corresponding environment
variables before they are fed to the Kubernetes management API. This
allows you to dynamically update the configurations according to each
Jenkins task, for example, using the Jenkins build number as the image
tag to be pulled.</p>
</blockquote>
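<p>Putting it together, the deploy step would become something like <code>kubernetesDeploy(configs: "deployment.yaml", kubeconfigId: "kubeconfig", enableConfigSubstitution: true)</code>, and the container spec in deployment.yaml can then reference the build number. A minimal sketch (image name reused from the question, surrounding fields omitted):</p>
<pre><code>    spec:
      containers:
        - name: prework
          image: NAME/prework:${BUILD_NUMBER}   # substituted by the plugin at deploy time
</code></pre>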
| Andrew Skorkin |
<p>when I change the <code>replicas: x</code> in my .yaml file I can see GKE autopilot boots pods up/down depending on the value, but what will happen if the load on my deployment gets too big. Will it then autoscale the number of pods and nodes to handle the traffic and then reduce back to the value specified in replicas when the request load is reduced again?</p>
<p>I'm basically asking how autopilot horizontal autoscaling works,
and how do I get a minimum of 2 pod replicas that can horizontally autoscale in autopilot?</p>
| Alex Skotner | <p>GKE autopilot by default will not scale the replicas count beyond what you specified. This is the default behavior of Kubernetes in general.</p>
<p>If you want automatic autoscaling you have to use the Horizontal Pod Autoscaler (<a href="https://cloud.google.com/kubernetes-engine/docs/how-to/horizontal-pod-autoscaling" rel="nofollow noreferrer">HPA</a>), which is supported in <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview#comparison" rel="nofollow noreferrer">Autopilot</a></p>
<p>If you deploy HPA to scale up and down your workload, Autopilot will scale up and down the nodes automatically and that's transparent for you as the nodes are managed by Google.</p>
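<p>As a minimal sketch (the Deployment name <code>my-app</code> and the CPU target are assumed placeholders), an HPA that never goes below 2 replicas could look like this:</p>
<pre><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app              # assumed deployment name
  minReplicas: 2              # never fewer than 2 pods
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
</code></pre>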
| boredabdel |
<p>I have a kubernetes cluster setup at home on two bare metal machines.
I used kubespray to install both and it uses kubeadm behind the scenes.</p>
<p>The problem I encounter is that all containers within the cluster have a restartPolicy: no which makes my cluster break when I restart the main node.</p>
<p>I have to manually run "docker container start" for all containers in "kube-system" namespace to make it work after reboot.</p>
<p>Does anyone have an idea where the problem might be coming from ?</p>
| sashok_bg | <p>Docker provides <a href="https://docs.docker.com/engine/reference/run/#restart-policies---restart" rel="nofollow noreferrer">restart policies</a> to control whether your containers start automatically when they exit, or when Docker restarts. Here your containers have the <strong>restart policy - no</strong> which means this policy will never automatically start the container under any circumstance.</p>
<p>You need to change the restart policy to <strong>Always</strong> which restarts the container if it stops. If it is manually stopped, it is restarted only when Docker daemon restarts or the container itself is manually restarted.</p>
<p>You can change the restart policy of an existing container using <strong><code>docker update</code></strong>. Pass the name of the container to the command. You can find container names by running <strong><code>docker ps -a</code>.</strong></p>
<pre><code>docker update --restart=always <CONTAINER NAME>
</code></pre>
<p><strong>Restart policy details:</strong></p>
<p>Keep the following in mind when using restart policies:</p>
<ul>
<li><p>A restart policy only takes effect after a container starts successfully. In this case, starting successfully means that the container is up for at least 10 seconds and Docker has started monitoring it. This prevents a container which does not start at all from going into a restart loop.</p>
</li>
<li><p>If you manually stop a container, its restart policy is ignored until the Docker daemon restarts or the container is manually restarted. This is another attempt to prevent a restart loop.</p>
</li>
</ul>
| Srividya |
<p>I am new to Kubernetes, so there might already be solutions which I am missing here.</p>
<p><strong>Requirement</strong></p>
<p>How do I create an API endpoint in Kubernetes which I can use to spawn new deployments and services?</p>
<p><strong>Why do I require API Endpoint??</strong></p>
<p>The requirement is that a new service needs to be spawned whenever the relevant information (say the name of the new service, the port it should run on, what config and resources it uses, and so on...) is provided by an already running service (say service A).</p>
<p>So, when this information is fed to the endpoint, a service running behind that API endpoint will create a template based on the obtained information and execute the necessary commands to spawn the new services.</p>
<p><strong>If there is any better approach than this please suggest me as well.</strong></p>
| dempti | <p>For me the solution was to use a standard library, e.g. the kubernetes Python client, to interact with the Kubernetes API from my application, as suggested by @mario.</p>
<p>I will share further details on how I wrote the API using the Python client, and maybe even using Go ;)</p>
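<p>As a very rough sketch of that approach (names, image and port are placeholder values, RBAC setup and error handling are omitted), creating a deployment through the official Python client can look like this:</p>
<pre><code>from kubernetes import client, config

def spawn_deployment(name, image, port, namespace="default"):
    # inside the cluster use load_incluster_config(); locally use load_kube_config()
    config.load_incluster_config()
    apps = client.AppsV1Api()

    container = client.V1Container(
        name=name,
        image=image,
        ports=[client.V1ContainerPort(container_port=port)],
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": name}),
        spec=client.V1PodSpec(containers=[container]),
    )
    spec = client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": name}),
        template=template,
    )
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name=name),
        spec=spec,
    )
    # a similar call exists for services: CoreV1Api().create_namespaced_service(...)
    apps.create_namespaced_deployment(namespace=namespace, body=deployment)
</code></pre>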
| dempti |
<p>Currently, my Kubernetes cluster is provisioned via <code>GKE</code>.</p>
<p>I use <code>GCE Persistent Disks</code> to persist my data.</p>
<p>In <code>GCE</code>, persistent storage is provided via <code>GCE Persistent Disks</code>. Kubernetes supports adding them to <code>Pods</code> or <code>PersistenVolumes</code> or <code>StorageClasses</code> via the <code>gcePersistentDisk</code> volume/provisioner type.</p>
<p>What if I would like to transfer my cluster from <code>Google</code> to, lets say, <code>Azure</code> or <code>AWS</code>?
Then I would have to change value of volume type to <code>azureFile</code> or <code>awsElasticBlockStore</code> respectively in all occurrences in the manifest files.</p>
<p>I hoped the <code>CSI</code> drivers would solve that problem; unfortunately, they also use a different volume type for each cloud provider, for example <code>pd.csi.storage.gke.io</code> for <code>GCP</code> or <code>disk.csi.azure.com</code> for <code>Azure</code>.</p>
<p>Is there any convenient way to make the Kubernetes volumes to be cloud agnostic? In which I wouldn't have to make any changes in manifest files before K8s cluster migration.</p>
| Mikolaj | <p>You cannot have cloud-agnostic storage by using the CSI drivers or the native VolumeClaims in Kubernetes. That's because these APIs are the upstream way of provisioning storage, which each cloud provider has to integrate with to translate them into its cloud-specific API (PD for Google, EBS for AWS...)</p>
<p>The exception is a self-managed storage system that you can access via an NFS driver or a specific driver of its own. And still, that self-managed storage solution is going to be backed by cloud-provider-specific volumes, so you are just going to shift the issue to a different place.</p>
| boredabdel |
<p>Suppose, I have several Kubernetes clusters and I have some namespaces in each of them.</p>
<p>Some of these namespaces are labeled <code>project: a</code> and others <code>project: b</code>.</p>
<p>Now I want to ensure that resources in namespaces labeled <code>project: a</code> can communicate with each other, but not with something else, same for the other projects.</p>
<p>If it was just one Kubernetes cluster, then I would simply use NetworkPolicies.</p>
<p>However, I would like to connect the clusters somehow and to ensure that this restriction applies also when the resources are spread in many clusters.</p>
| tobias | <p>Network policies are bound to the local cluster and don't work across clusters for now.</p>
| boredabdel |
<p>My cluster has 3 master nodes. I have shut down 1 master node and then checked the members in the etcd database:</p>
<pre><code>[root@fat001 bin]# ETCDCTL_API=3 /opt/k8s/bin/etcdctl member list
56298c42af788da7, started, azshara-k8s02, https://172.19.104.230:2380, https://172.19.104.230:2379
5ab2d0e431f00a20, started, azshara-k8s01, https://172.19.104.231:2380, https://172.19.104.231:2379
84c70bf96ccff30f, started, azshara-k8s03, https://172.19.150.82:2380, https://172.19.150.82:2379
</code></pre>
<p>It still shows 3 nodes started. Why did etcd not refresh the node status? What should I do to update the etcd status to the latest? Is it possible to refresh the status manually? The Kubernetes version is <code>1.15.x</code>.</p>
| spark | <p>If you delete a node that was in a cluster, you should manually remove it from etcd as well, i.e. by running <strong>'etcdctl member remove 84c70bf96ccff30f'</strong>.</p>
<p>Make sure that etcd container is no longer running on the failed node, and that the node does not contain any data anymore:</p>
<pre><code>rm -rf /etc/kubernetes/manifests/etcd.yaml /var/lib/etcd/
crictl rm "$CONTAINER_ID"
</code></pre>
<p>The commands above will remove the static pod for etcd and the data directory /var/lib/etcd on the node. Of course, you can also use the <strong>kubeadm reset</strong> command as an alternative. However, it will also remove all Kubernetes-related resources and certificates from this node.</p>
| Srividya |
<p>I accidentally deleted the kube-proxy daemonset by using the command <code>kubectl delete -n kube-system daemonset kube-proxy</code>, which should run kube-proxy pods in my cluster. What is the best way to restore it?
<a href="https://i.stack.imgur.com/AChcS.png" rel="nofollow noreferrer">That's how it should look</a></p>
| Alexandr Lebedev | <p>Kubernetes allows you to <a href="https://kubernetes.io/docs/reference/setup-tools/kubeadm/generated/kubeadm_init_phase_addon_kube-proxy/" rel="nofollow noreferrer">reinstall kube-proxy</a> by running the following command, which installs the kube-proxy addon components via the API server.</p>
<pre><code>$ kubeadm init phase addon kube-proxy --kubeconfig ~/.kube/config --apiserver-advertise-address string
</code></pre>
<p>This will generate the output as</p>
<pre><code>[addons] Applied essential addon: kube-proxy
</code></pre>
<p>The <code>--apiserver-advertise-address</code> flag is the IP address the API server will advertise it is listening on. If not set, the default network interface will be used.</p>
<p>Hence kube-proxy will be reinstalled in the cluster by creating a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">DaemonSet</a> and launching the pods.</p>
| Srividya |
<p>I am trying to deploy my react app to a kubernetes cluster by serving it via an nginx server. As you can see in the below Dockerfile I am building my app and afterwards copying the build artefacts into the <code>/usr/share/nginx/html/</code> path on my nginx server.</p>
<pre><code># Stage 1 - build container
FROM node:12-alpine as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY yarn.lock ./
COPY package.json ./
RUN yarn install
COPY . ./
ENV GENERATE_SOURCEMAP=false SASS_PATH=node_modules:src
ARG env
RUN yarn run build:${env}
# Stage 2 - productive environment
FROM nginx:stable-alpine
COPY --from=build /app/build /usr/share/nginx/html/
COPY nginx.conf /etc/nginx/conf.d/default.conf
RUN apk update
RUN apk upgrade
EXPOSE 80
CMD ["nginx","-g","daemon off;"]
</code></pre>
<p>I am using the following nginx configuration. From what I understand this should instruct the nginx server to search for resources using the specified root path.</p>
<pre><code>server {
listen 80;
root /usr/share/nginx/html;
error_page 500 502 503 504 /50x.html;
location / {
index index.html index.htm;
try_files $uri $uri/ /index.html;
}
}
</code></pre>
<p>I can see that this works when running the docker container locally (all react app resources get loaded), but when I deploy it onto my kubernetes cluster and expose it via an ingress controller, I am getting the following errors for the build artifacts:</p>
<pre><code>GET https://*host*/static/css/main.0e41ac5f.chunk.css net::ERR_ABORTED 404
</code></pre>
<p>This is interesting since when I ssh into the container I can see that all the requested files still exist at the correct directory (<code>/usr/share/nginx/html/static/</code>).</p>
<p>I already tried setting the <strong>homepage</strong> value in my <strong>package.json</strong> to <code>"."</code>, but this didn't change anything.</p>
<p>My ingress configuration looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: search-dev
annotations:
kubernetes.io/ingress.class: "public-iks-k8s-nginx"
spec:
tls:
hosts:
- host
secretName: secret
rules:
- host: host
http:
paths:
- path: /
pathType: Exact
backend:
service:
name: search-frontend-dev
port:
number: 80
</code></pre>
<p>I also tried setting this annotation:</p>
<pre class="lang-yaml prettyprint-override"><code>nginx.ingress.kubernetes.io/rewrite-target: /
</code></pre>
<p>But unfortunately this didn't work either.</p>
<p>Would appreciate some help with this.</p>
| mike | <p>For all of you who are having a similar problem, set pathType to <strong>Prefix</strong>. Otherwise only requests to "/" will get routed to your service. Requests to "/static/..." were simply not routed to the service and therefore never reached my nginx server.</p>
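<p>In other words, only the path entry from the ingress in the question needs to change (sketch below, other fields unchanged):</p>
<pre><code>      paths:
      - path: /
        pathType: Prefix          # was Exact; Prefix also matches /static/...
        backend:
          service:
            name: search-frontend-dev
            port:
              number: 80
</code></pre>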
| mike |
<p>I am trying to convert my dockerised application for testing a Kafka functionality, to a Kubernetes deployment file.</p>
<p>The docker command for execution of the container which is working as expected is:</p>
<pre><code>docker run --name consumer-1 --network="host" -dt 56d57e1538d3 pizzaapp_multiconsumer1.py bash
</code></pre>
<p>However, when converting it to the below Kubernetes deployment file and executing it, I am getting a CrashLoopBackOff error on the pods.</p>
<pre><code>spec:
hostNetwork: true
containers:
- name: kafka-consumer
image: bhuvidockerhub/kafkaproject:v1.0
imagePullPolicy: IfNotPresent
args: ["pizzaapp_multiconsumer1.py", "bash"]
imagePullSecrets:
- name: regcred
</code></pre>
<p>On checking the logs of the failed pods I am seeing this error:</p>
<pre><code>Traceback (most recent call last):
File "//pizzaapp_multiconsumer1.py", line 12, in <module>
multiconsume_pizza_messages()
File "/testconsumer1.py", line 14, in multiconsume_pizza_messages
kafka_admin_client: KafkaAdminClient = KafkaAdminClient(
File "/usr/local/lib/python3.9/site-packages/kafka/admin/client.py", line 208, in __init__
self._client = KafkaClient(metrics=self._metrics,
File "/usr/local/lib/python3.9/site-packages/kafka/client_async.py", line 244, in __init__
self.config['api_version'] = self.check_version(timeout=check_timeout)
File "/usr/local/lib/python3.9/site-packages/kafka/client_async.py", line 900, in check_version
raise Errors.NoBrokersAvailable()
kafka.errors.NoBrokersAvailable: NoBrokersAvailable
</code></pre>
<p>But the broker container is already up and running</p>
<pre><code>my-cluster-with-metrics-entity-operator-7d8894b79f-99fwt 3/3 Running 181 27d
my-cluster-with-metrics-kafka-0 1/1 Running 57 19d
my-cluster-with-metrics-kafka-1 1/1 Running 5 19h
my-cluster-with-metrics-kafka-2 1/1 Running 0 27m
my-cluster-with-metrics-kafka-exporter-568968bd5c-mrg7f 1/1 Running 108 27d
</code></pre>
<p>and the corresponding services are also there</p>
<pre><code>my-cluster-with-metrics-kafka-bootstrap ClusterIP 10.98.78.168 <none> 9091/TCP,9100/TCP 27d
my-cluster-with-metrics-kafka-brokers ClusterIP None <none> 9090/TCP,9091/TCP,9100/TCP 27d
my-cluster-with-metrics-kafka-external-0 NodePort 10.110.196.75 <none> 9099:30461/TCP 27d
my-cluster-with-metrics-kafka-external-1 NodePort 10.107.225.187 <none> 9099:32310/TCP 27d
my-cluster-with-metrics-kafka-external-2 NodePort 10.103.99.151 <none> 9099:31950/TCP 27d
my-cluster-with-metrics-kafka-external-bootstrap NodePort 10.98.131.151 <none> 9099:31248/TCP 27d
</code></pre>
<p>And I have port forwarded the svc port so that the brokers can be found:</p>
<pre><code>kubectl port-forward svc/my-cluster-with-metrics-kafka-external-bootstrap 9099:9099 -n kafka
</code></pre>
<p>And post this when I run the docker command it executes, as expected.</p>
<p>But in K8s even after adding the bash in the args and trying, it still gives no brokers available.</p>
<p>Can anyone suggest what changes I should try out in the deployment file, so that it works exactly like the successful docker command run stated above?</p>
| Bhuvi | <p>If an application is deployed in K8s, then we don't need port forwarding since there is nothing to expose outside the cluster. When we run things inside K8s, we normally do not access things using localhost. Localhost refers to the pod's container itself.
Therefore, to resolve the above issue, I completely removed the localhost reference from the bootstrap server configuration and replaced it with the external bootstrap service IP [10.X.X.X:9099], then executed the K8s deployment file. Following that, the producer and consumer pods came up successfully, and this resolved the issue.</p>
| Bhuvi |
<p>I have a Kubernetes cluster and I'm deploying my app there with Helm. Everything works fine except for one aspect: the Job update. As I've read, Jobs are immutable and that's why they can't be updated, but I don't get why Helm isn't creating a new job as it does for the Pods.</p>
<p>In the end, I want my app code to be deployed as a Job that runs DB migrations. I tried to do it as a Pod, but for pods the restart policy can only be "Always" ("Never" is not supported, even though the doc says otherwise). How can I achieve this, so the migration can be updated with every deployment (new image tag) and runs once without restarting?</p>
| ghostika | <p>You can use helm hooks here.
Official Link: <a href="https://helm.sh/docs/topics/charts_hooks/" rel="nofollow noreferrer">https://helm.sh/docs/topics/charts_hooks/</a></p>
<p>Once job is completed with "helm install", helm hook should delete it. Once you perform "helm upgrade", a new job should be triggered. Application logic should handle install and upgrade scenarios.</p>
<p>Below are some concepts related to helm hooks.</p>
<h1>Types of Helm Hooks</h1>
<ul>
<li>pre-install : hooks run after templates are rendered and before any resources are created in a Kubernetes cluster</li>
<li>post-install : hooks run after all Kubernetes resources have been loaded</li>
<li>pre-delete : hooks run before any existing resources are deleted from Kubernetes</li>
<li>post-delete : hooks run after all Kubernetes resources have been deleted</li>
<li>pre-upgrade : hooks run after chart templates have been rendered and before any resources are loaded into Kubernetes</li>
<li>post-upgrade : hooks run after all Kubernetes resources have been upgraded</li>
<li>pre-rollback : hooks run after templates have been rendered and before any resources are rolled back</li>
<li>post-rollback : hooks run after all resources have been modified</li>
<li>test : hooks run when helm test subcommand is executed</li>
</ul>
<p>NOTE: One resource can implement multiple hooks:</p>
<p>Eg:</p>
<pre><code>annotations:
  "helm.sh/hook": post-install,post-upgrade
</code></pre>
<h1>How Helm Chart Hooks Are Executed</h1>
<ul>
<li>When a Helm chart containing hooks is executed, components like pods or jobs pertaining to hooks are not directly applied in a Kubernetes environment.
Instead when a hook is executed, a new pod is created corresponding to the hook.
If successfully run, they will be in "Completed" state.</li>
<li>Any resources created by a Helm hook are un-managed Kubernetes objects.
In other words, uninstalling a Helm chart using "helm uninstall" will not remove the underlying resources created by hooks.
A separate deletion policy needs to be defined in the form of annotation if those resources need to be deleted.</li>
<li>Any hook resources that must never be deleted should be annotated with "helm.sh/resource-policy: keep".</li>
</ul>
<h1>Helm Hook Annotations</h1>
<ul>
<li>"helm.sh/hook": post-install</li>
<li>"helm.sh/hook-weight": "-5" ## NOTE: This MUST be string</li>
<li>"helm.sh/hook-delete-policy": hook-succeeded</li>
<li>"helm.sh/resource-policy": keep</li>
</ul>
<h1>Hook Deletion Policies</h1>
<ul>
<li>“helm.sh/hook-delete-policy" annotation to be used.</li>
</ul>
<h2>Three different deletion policies are supported which will decide when to delete the resources:</h2>
<ul>
<li>before-hook-creation : Delete the previous resource before a new hook is launched</li>
<li>hook-succeeded : Delete the resource after the hook is successfully executed</li>
<li>hook-failed : Delete the resource if the hook failed during execution</li>
</ul>
<p>NOTE: If no hook deletion policy annotation is specified, the before-hook-creation behavior is applied by default.</p>
<h1>Hook Weights</h1>
<ul>
<li>"helm.sh/hook-weight" annotation to be used.</li>
<li>Hook weights can be positive or negative numbers but must be represented as strings.</li>
<li>When Helm starts the execution cycle of hooks of a particular Kind it will sort those hooks in ascending order.</li>
</ul>
<h2>Hook weights ensure the following:</h2>
<ul>
<li>execute in the right weight sequence</li>
<li>block each other</li>
<li>all block main K8s resource from starting</li>
</ul>
<h1>Complete Execution Flow Example</h1>
<ol>
<li>Step-1: Create post-install and post-install hook YAML files</li>
</ol>
<hr />
<p>pre-install.yaml</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: hook-preinstall
annotations:
"helm.sh/hook": "pre-install" ## Without this line, this becomes a normal K8s resource.
spec:
containers:
- name: hook1-container
image: busybox
imagePullPolicy: IfNotPresent
command: ['sh', '-c', 'echo The pre-install hook Pod is running - hook-preinstall && sleep 15']
restartPolicy: Never
terminationGracePeriodSeconds: 0
</code></pre>
<hr />
<p>post-install.yaml</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: hook-postinstall
annotations:
"helm.sh/hook": "post-install" ## Without this line, this becomes a normal K8s resource.
spec:
containers:
- name: hook2-container
image: busybox
imagePullPolicy: IfNotPresent
command: ['sh', '-c', 'echo post-install hook Pod is running - hook-postinstall && sleep 10']
restartPolicy: Never
terminationGracePeriodSeconds: 0
</code></pre>
<hr />
<ol start="2">
<li>Step-2: Install Helm Chart (Assuming other K8s resources are defined under /templates/ directory)</li>
</ol>
<hr />
<ol start="3">
<li>Get Pods:</li>
</ol>
<hr />
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
demohook-testhook-5ff88bb44b-qc4n2 1/1 Running 0 5m45s
hook-postinstall 0/1 Completed 0 5m45s
hook-preinstall 0/1 Completed 0 6m2s
$
</code></pre>
<hr />
<ol start="4">
<li>Describe Pods and notice Started & Finished time of the pods:</li>
</ol>
<hr />
<pre><code>$ kubectl describe pod demohook-testhook-5ff88bb44b-qc4n2 | grep -E 'Anno|Started:|Finished:'
$ kubectl describe pod hook-postinstall | grep -E 'Anno|Started:|Finished:'
$ kubectl describe pod hook-preinstall | grep -E 'Anno|Started:|Finished:'
</code></pre>
<hr />
| Pankaj Yadav |
<p>Very new to K3s and I'm attempting to practice by creating a deployment with 3 replicas of an nginx pod. Two of the pods were created on my worker nodes, but the pod scheduled on my master node failed with a CreateContainerError.
After digging further I found the following error: Error: failed to get sandbox container task: no running task found: task e2829c0383965aa4556c9eecf1ed72feb145211d23f714bdc0962b188572f849 not found: not found.</p>
<p>Any help would be greatly appreciated</p>
<p>After running <code>kubectl describe node</code> and checking the taints for the master node, it shows <code><none></code></p>
| Bret Beatty | <p>So all it needed was a fresh install and that seems to have solved everything. Probably should have tried that first.</p>
| Bret Beatty |
<h1>Context</h1>
<p>Currently putting online for the first time an Elixir/Phoenix app on Google Cloud and Kubernetes, (I found a tutorial that I follow a tutorial => <a href="https://cloud.google.com/community/tutorials/elixir-phoenix-on-kubernetes-google-container-engine" rel="nofollow noreferrer">run an Elixir/Phoenix app in containers using Google Kubernetes Engine</a>), I'm getting stuck at what seems to be the last step : <a href="https://cloud.google.com/community/tutorials/elixir-phoenix-on-kubernetes-google-container-engine#deploy_to_the_cluster" rel="nofollow noreferrer">Deploy to the cluster</a> due to some error I haven't found a fix for.</p>
<h2>The app</h2>
<p>The Elixir app is an <strong>umbrella app</strong> with two Phoenix apps, each on its own port (one for the admin website, the other for the general website), and three other Elixir apps.</p>
<p>There is a custom Docker setup for dev (using docker-compose), and another <strong>Dockerfile for production</strong>, which is the following (separated into two parts; I guess the first one is for the image building and the second is what runs in Kubernetes):</p>
<pre><code># prod.Dockerfile
FROM elixir:alpine
ARG app_name=prod
ARG phoenix_subdir=.
ARG build_env=prod
RUN apk add --no-cache make build-base openssl ncurses-libs libgcc libstdc++
ENV MIX_ENV=${build_env} TERM=xterm
WORKDIR /opt/app
RUN apk update \
&& apk --no-cache --update add nodejs npm \
&& mix local.rebar --force \
&& mix local.hex --force
COPY . .
RUN mix do deps.get, compile
RUN cd apps/admin/assets \
&& npm rebuild node-sass \
&& npm install \
&& ./node_modules/webpack/bin/webpack.js \
&& cd .. \
&& mix phx.digest
RUN cd apps/app/assets \
&& npm rebuild node-sass \
&& npm install \
&& ./node_modules/webpack/bin/webpack.js \
&& cd .. \
&& mix phx.digest
RUN mix release ${app_name} \
&& mv _build/${build_env}/rel/${app_name} /opt/release \
&& mv /opt/release/bin/${app_name} /opt/release/bin/start_server
FROM alpine:latest
RUN apk add make build-base --no-cache openssl ncurses-libs libgcc libstdc++
ARG hello
RUN apk update \
&& apk add --no-cache postgresql-client \
&& apk --no-cache --update add bash ca-certificates openssl-dev \
&& mkdir -p /usr/local/bin \
&& wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 \
-O /usr/local/bin/cloud_sql_proxy \
&& chmod +x /usr/local/bin/cloud_sql_proxy \
&& mkdir -p /tmp/cloudsql
ENV GCLOUD_PROJECT_ID=${project_id} \
REPLACE_OS_VARS=true
EXPOSE ${PORT}
EXPOSE 4011
WORKDIR /opt/app
COPY --from=0 /opt/release .
CMD (/usr/local/bin/cloud_sql_proxy \
-projects=${GCLOUD_PROJECT_ID} -dir=/tmp/cloudsql &); \
exec /opt/app/bin/start_server start
</code></pre>
<p>Which is called by <code>cloudbuild.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>steps:
- name: "gcr.io/cloud-builders/docker"
args: ["build", "-t", "gcr.io/hello/prod:$_TAG",
"--build-arg", "project_id=hello", ".",
"--file=./prod.Dockerfile"]
images: ["gcr.io/hello/prod:$_TAG"]
</code></pre>
<h1>The steps</h1>
<p>(re)building the image</p>
<pre class="lang-sh prettyprint-override"><code>$> gcloud builds submit --substitutions=_TAG=v1 .
</code></pre>
<p>Then create a deployment</p>
<pre class="lang-sh prettyprint-override"><code>$> kubectl run hello-web --image=gcr.io/${PROJECT_ID}/hello:v1 --port 8080
pod/hello-web created
</code></pre>
<p>Check if the deployment went well (spoiler: it doesn't)</p>
<pre class="lang-sh prettyprint-override"><code>$> kubectl get pods
NAME READY STATUS RESTARTS AGE
hello-web 0/1 CrashLoopBackOff 1 15s
</code></pre>
<p>Check the log</p>
<pre class="lang-sh prettyprint-override"><code>$> kubectl logs {POD-NAME}
</code></pre>
<p>Which display the following error:</p>
<h1>The error</h1>
<pre class="lang-sh prettyprint-override"><code>2021/08/09 23:49:15 current FDs rlimit set to 1048576, wanted limit is 8500. Nothing to do here.
2021/08/09 23:49:15 gcloud is not in the path and -instances and -projects are empty
Error loading shared library libstdc++.so.6: No such file or directory (needed by /opt/app/erts-12.0.3/bin/beam.smp)
Error loading shared library libgcc_s.so.1: No such file or directory (needed by /opt/app/erts-12.0.3/bin/beam.smp)
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: __cxa_begin_catch: symbol not found
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: _ZSt24__throw_out_of_range_fmtPKcz: symbol not found
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: _Znwm: symbol not found
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: _ZSt20__throw_length_errorPKc: symbol not found
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: __cxa_guard_release: symbol not found
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: _ZNKSt8__detail20_Prime_rehash_policy11_M_next_bktEm: symbol not found
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: __popcountdi2: symbol not found
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: _ZSt29_Rb_tree_insert_and_rebalancebPSt18_Rb_tree_node_baseS0_RS_: symbol not found
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: _ZSt17__throw_bad_allocv: symbol not found
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: _ZNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEE9_M_appendEPKcm: symbol not found
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: _ZNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEE9_M_createERmm: symbol not found
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: _ZSt18_Rb_tree_incrementPKSt18_Rb_tree_node_base: symbol not found
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: __cxa_end_catch: symbol not found
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: __cxa_guard_acquire: symbol not found
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: _ZNKSt8__detail20_Prime_rehash_policy14_M_need_rehashEmmm: symbol not found
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: _ZSt19__throw_logic_errorPKc: symbol not found
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: _ZSt18_Rb_tree_decrementPSt18_Rb_tree_node_base: symbol not found
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: _ZNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEE7reserveEm: symbol not found
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: __cxa_rethrow: symbol not found
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: _Unwind_Resume: symbol not found
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: _ZdlPvm: symbol not found
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: _ZTVN10__cxxabiv120__si_class_type_infoE: symbol not found
...
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: _ZTVN10__cxxabiv120__si_class_type_infoE: symbol not found
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: __cxa_pure_virtual: symbol not found
...
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: __cxa_pure_virtual: symbol not found
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: _ZTVN10__cxxabiv117__class_type_infoE: symbol not found
...
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: _ZTVN10__cxxabiv117__class_type_infoE: symbol not found
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: _ZTVN10__cxxabiv121__vmi_class_type_infoE: symbol not found
Error relocating /opt/app/erts-12.0.3/bin/beam.smp: __gxx_personality_v0: symbol not found
</code></pre>
<h1>What have I tried</h1>
<p>Though I know almost nothing about this, I still tried to modify the Dockerfile a little bit, without success. I also searched Google for the libgcc error, without any success.</p>
<p>That's about it, as I don't have a clue where else to look.</p>
<p>So, any advice to make it work?</p>
<h1>Other actions</h1>
<p>Delete the pod, then get the image's tags and clean it</p>
<pre class="lang-sh prettyprint-override"><code>$> kubectl delete pod hello-web
$> gcloud container images list-tags gcr.io/hello/prod
$> gcloud container images delete gcr.io/hello/prod@sha256:...
</code></pre>
<h1>Edits</h1>
<h2>Edit 1 (12/08/2021 19:04)</h2>
<ul>
<li>Update the Dockerfile with the latest version of it</li>
<li>Add another action list</li>
</ul>
<p><strong>Result : nothing changed</strong></p>
| Aridjar | <p>It looks like your question has been asked on the Elixir forums already:</p>
<p><a href="https://elixirforum.com/t/docker-run-error-loading-shared-library-libstdc-so-6-and-libgcc-s-so-1/40496" rel="nofollow noreferrer">https://elixirforum.com/t/docker-run-error-loading-shared-library-libstdc-so-6-and-libgcc-s-so-1/40496</a></p>
<blockquote>
<p>It looks like a missing runtime dependency in your final image. Try changing RUN apk add --no-cache openssl ncurses-libs to RUN apk add --no-cache openssl ncurses-libs libstdc++.</p>
</blockquote>
<p>The fix being to add <code>libstdc++</code> to your install line.</p>
<p>The reasoning for this is also outlined in the forum post:</p>
<blockquote>
<p>The beam has native runtime dependencies and OTP 24 added libc as runtime dependency to support the JIT. With that change it seems like bare alpine:3.9 no longer brings all the required runtime dependencies. You’ll need to make sure that all of those those are present in the app container.</p>
</blockquote>
<p>Best of luck!</p>
| TheQueenIsDead |
<p>I have started using KubernetesExecutor and I have set up a PV/PVC with an AWS EFS to store logs for my dags. I am also using s3 remote logging.</p>
<p>All the logging is working perfectly fine after a dag completes. However, I want to be able to see the logs of my jobs as they are running for long running ones.</p>
<p>When I exec into my scheduler pod, while an executor pod is running, I am able to see the <code>.log</code> file of the currently running job because of the shared EFS. However, when I <code>cat</code> the log file, I do not see the logs as long as the executor is still running. Once the executor finishes however, I can see the full logs both when I <code>cat</code> the file and in the airflow UI.</p>
<p>Weirdly, on the other hand, when I exec into the executor pod as it is running, and I <code>cat</code> the exact same log file in the shared EFS, I am able to see the correct logs up until that point in the job, and when I immediately <code>cat</code> from the scheduler or check the UI, I can also see the logs up until that point.</p>
<p>So it seems that when I <code>cat</code> from within the executor pod, it causes the logs to be flushed in some way, so that they are available everywhere. Why are the logs not flushed regularly?</p>
<p>Here are the config variables I am setting, note these env variables get set in my webserver/scheduler and executor pods:</p>
<pre><code># ----------------------
# For Main Airflow Pod (Webserver & Scheduler)
# ----------------------
export PYTHONPATH=$HOME
export AIRFLOW_HOME=$HOME
export PYTHONUNBUFFERED=1
# Core configs
export AIRFLOW__CORE__LOAD_EXAMPLES=False
export AIRFLOW__CORE__SQL_ALCHEMY_CONN=${AIRFLOW__CORE__SQL_ALCHEMY_CONN:-postgresql://$DB_USER:$DB_PASSWORD@$DB_HOST:5432/$DB_NAME}
export AIRFLOW__CORE__FERNET_KEY=$FERNET_KEY
export AIRFLOW__CORE__DAGS_FOLDER=$AIRFLOW_HOME/git/dags/$PROVIDER-$ENV/
# Logging configs
export AIRFLOW__LOGGING__BASE_LOG_FOLDER=$AIRFLOW_HOME/logs/
export AIRFLOW__LOGGING__REMOTE_LOGGING=True
export AIRFLOW__LOGGING__REMOTE_LOG_CONN_ID=aws_default
export AIRFLOW__LOGGING__REMOTE_BASE_LOG_FOLDER=s3://path-to-bucket/airflow_logs
export AIRFLOW__LOGGING__TASK_LOG_READER=s3.task
export AIRFLOW__LOGGING__LOGGING_CONFIG_CLASS=config.logging_config.LOGGING_CONFIG
# Webserver configs
export AIRFLOW__WEBSERVER__COOKIE_SAMESITE=None
</code></pre>
<p>My logging config looks like the one in the question <a href="https://stackoverflow.com/questions/55526759/airflow-1-10-2-not-writing-logs-to-s3">here</a></p>
<p>I thought this could be a python buffering issue so added <code>PYTHONUNBUFFERED=1</code>, but that didn't help. This is happening whether I use the <code>PythonOperator</code> or <code>BashOperator</code></p>
<p>Is it the case that K8sExecutors logs just won't be available during their runtime? Only after? Or is there some configuration I must be missing?</p>
| user6407048 | <p>I had the same issue and those are things that helped me - worth checking them on your end</p>
<ul>
<li><code>PYTHONUNBUFFERED=1</code> is not enough, but necessary to view logs in realtime. Please keep it</li>
<li>have EFS mounted in web, scheduler, and pod_template (executor).</li>
<li>Your experience with the log file being complete only after the task has finished makes me wonder if the PVC you use for logs has the ReadWriteMany accessMode</li>
<li>Are the paths you cat in different pods identical? Do they include the full task format, eg <code>efs/logs/dag_that_executes_via_KubernetesPodOperator/task1/2021-09-21T19\:00\:21.894859+00\:00/1.log</code>? Asking because, before I had EFS hooked up in every place (scheduler, web, pod_template), I could only access executor logs that do not include task name and task time</li>
<li>have EFS logs folder belong to airflow (for me uid 50000 because may have to prepare this from different place), group root, mode 755</li>
<li>do not have AIRFLOW__LOGGING__LOGGING_CONFIG_CLASS set up. Try to get things running as vanilla as possible, before introducing custom logging config</li>
</ul>
<p>If you have remote logging set up, I understand that after the task completes, the first line in the UI is going to say <code>Reading remote log from</code>, but what does the first line say for you while the task is running? <code>reading remote</code>, or does it mention usage of a local log file?</p>
<ul>
<li>If it mentions remote, this would mean that you don't have EFS hooked up in every place.</li>
<li>If it mentions a local file, I would check your EFS settings (ReadWriteMany) and the directory ownership and mode</li>
</ul>
| Jedrzej G |
<p>I am trying <strong>to configure Fluentd in a GKE cluster to log all Kubernetes events</strong>, like HPA changes, pods coming up etc.</p>
<p>Where does GKE store node-level event logs? I am looking for the source path of GKE node-level event logs.</p>
| xyphan | <p>You can find the directory (/var/log/containers) by doing ssh into the node in which your deployment is created.</p>
<p>You can follow this <a href="https://cloud.google.com/architecture/customizing-stackdriver-logs-fluentd#objectives" rel="nofollow noreferrer">guide</a> which I used to configure cluster-level fluentd which parses all the logs to cloud logging. You can filter the event logs by using the query in cloud logging.</p>
<pre><code>log_name = projects/[YOUR_PROJECT_ID]/logs/events
</code></pre>
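<p>If you build your own Fluentd configuration for this, the container logs on the node are typically picked up with a tail source roughly like the sketch below (paths follow the usual GKE node layout and are assumptions; adjust the parser to your node's actual log format):</p>
<pre><code><source>
  @type tail
  # container logs written by the runtime on the node
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_from_head true
  <parse>
    @type json
  </parse>
</source>
</code></pre>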
| Srividya |
<p><strong>Symptom:</strong></p>
<p>creating/testing database in Superset connection with this URL</p>
<pre><code>solr://solr-master:8983/solr/my-collection
</code></pre>
<p>receiving error message</p>
<pre><code>Could not load database driver: SolrEngineSpec
</code></pre>
<p><strong>Environment:</strong></p>
<p>Installed HELM Chart version: 0.6.1 on kubernetes cluster</p>
<p><strong>Approach to solve the problem</strong></p>
<p>adding sqlalchemy-solr to the bootstrapScript in values.yaml</p>
<pre><code>#!/bin/bashrm -rf /var/lib/apt/lists/* && pip install sqlalchemy-solr && pip install psycopg2-binary==2.9.1 && pip install redis==3.5.3 && \if [ ! -f ~/bootstrap ]; then echo "Running Superset with uid {{ .Values.runAsUser }}" > ~/bootstrap; fi
</code></pre>
<p><strong>Result:</strong></p>
<p>sqlalchemy-solr was curiously not installed by pip</p>
| Roland Kopetsch | <p>It was a syntax problem in the bootstrapScript. Line break must be marked with an empty row.</p>
<pre><code>source:
repoURL: 'https://apache.github.io/superset'
targetRevision: 0.6.1
helm:
parameters:
- name: bootstrapScript
value: >
#!/bin/bash
rm -rf /var/lib/apt/lists/*
pip install sqlalchemy-solr
pip install psycopg2-binary==2.9.1
pip install redis==3.5.3
if [ ! -f ~/bootstrap ]; then echo "Running Superset with uid {{ .Values.runAsUser }}" > ~/bootstrap; fi
chart: superset
</code></pre>
| Roland Kopetsch |
<p>I have a very simple program:</p>
<pre><code>package main
import (
"fmt"
"github.com/vishvananda/netlink"
)
func main() {
_, err := netlink.LinkByName("wlp164s0")
if err != nil {
fmt.Println("error finding VIP Interface, for building DHCP Link : %v", err)
return
}
fmt.Println("Worked..")
}
</code></pre>
<p>If I create a docker image and run it with "--net host", this program prints "Worked". It is able to find the interface wlp164s0.</p>
<p>If I create a k8s deployment like this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: netlink-example
labels:
app: netlink-example
spec:
replicas: 1
selector:
matchLabels:
app: netlink-example
template:
metadata:
labels:
app: netlink-example
spec:
hostNetwork: true
containers:
- name: netlink
image: suruti94/netlink:0.1
imagePullPolicy: IfNotPresent
hostNetwork: true
nodeSelector:
kubernetes.io/os: linux
</code></pre>
<p>This program prints the error indicating that it can't look up the interface, which means the "hostNetwork: true" is not taking effect. From all my research, this looks right. Any help would be appreciated. I am running this program on Ubuntu 21.04, k8s version 1.22.</p>
| Mohan Parthasarathy | <p>After some experimentation, I have come to an understanding that the docker option "--net host" is not the same as "hostNetwork: true" in k8s. I wrongly assumed they produce similar behavior.</p>
<ul>
<li>docker --net host option makes the host interfaces available in the container which is useful for some applications</li>
<li>When you deploy a pod with hostNetwork:true, it means the host network is reachable from the pod. By default when a pod is deployed (I verified this on my local machine using <a href="https://kind.sigs.k8s.io/" rel="nofollow noreferrer">Kind</a>) the host network is reachable. I can see the veth interface connected to the bridge on the host. Even with hostNetwork: false, I was able to update packages on my pod. So, not sure what to make of this setting. At this stage, I am concluding that there is no option to expose the host interface directly on the pod.</li>
</ul>
| Mohan Parthasarathy |
<p>There is an old COM-Object that we need to use (no way around it) for an application that is planned to be in a Kubernetes container.
Is there any way to achieve this and if yes how?</p>
<p>I tried to research this, but with no results as of now.</p>
| Tai Kahar | <p>Yes, it is possible to add the COM-Object to the Kubernetes windows container only. COM is a technology that allows objects to interact across process and computer boundaries as easily as within a single process. COM enables this by specifying that the only way to manipulate the data associated with an object is through an interface on the object.</p>
<p>Since <a href="https://forums.docker.com/t/windows-application-32-bit-com-dll-registration/47205/6" rel="nofollow noreferrer">COM Object Model</a> is developed by Windows, You need to add the COM Object to the Windows container and add reference to the COM object using regsvr32. After creating a Dockerfile,</p>
<ul>
<li>Copy external DLLs into the container (the COM DLLs).</li>
<li>Register the DLLs using regsvr32 in the container.</li>
<li>Perform msbuild.</li>
</ul>
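<p>A minimal Dockerfile sketch for the registration step could look like the one below. The base image tag, DLL name, and application entry point are hypothetical and depend on your setup; <code>regsvr32 /s</code> performs the silent registration inside the image:</p>
<pre><code># escape=`
FROM mcr.microsoft.com/windows/servercore:ltsc2019
# Copy the legacy COM DLL into the image (file name is hypothetical)
COPY MyLegacyCom.dll C:\com\MyLegacyCom.dll
# Register the COM object silently so the application can create instances of it
RUN regsvr32 /s C:\com\MyLegacyCom.dll
# Copy the application that consumes the COM object
COPY app C:\app
ENTRYPOINT ["C:\\app\\MyApp.exe"]
</code></pre>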
<p>Refer to the <a href="https://learn.microsoft.com/en-us/windows/win32/com/com-objects-and-interfaces" rel="nofollow noreferrer">documentation</a> for more information on COM-Objects.</p>
| Srividya |
<p>I have met a problem like this:</p>
<p>Firstly, I using helm to create a release <code>nginx</code>:</p>
<pre><code>helm upgrade --install --namespace test nginx bitnami/nginx --debug
LAST DEPLOYED: Wed Jul 22 15:17:50 2020
NAMESPACE: test
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
nginx-server-block 1 2s
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 0/1 1 0 2s
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
nginx-6bcbfcd548-kdf4x 0/1 ContainerCreating 0 1s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx LoadBalancer 10.219.6.148 <pending> 80:30811/TCP,443:31260/TCP 2s
NOTES:
Get the NGINX URL:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
Watch the status with: 'kubectl get svc --namespace test -w nginx'
export SERVICE_IP=$(kubectl get svc --namespace test nginx --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
echo "NGINX URL: http://$SERVICE_IP/"
</code></pre>
<p>K8s only create a deployment with 1 pods:</p>
<pre><code># Source: nginx/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
app.kubernetes.io/name: nginx
helm.sh/chart: nginx-6.0.2
app.kubernetes.io/instance: nginx
app.kubernetes.io/managed-by: Tiller
spec:
selector:
matchLabels:
app.kubernetes.io/name: nginx
app.kubernetes.io/instance: nginx
replicas: 1
...
</code></pre>
<p>Secondly, I use the <code>kubectl</code> command to edit the deployment and scale it up to 2 pods</p>
<pre><code>kubectl -n test edit deployment nginx
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
creationTimestamp: "2020-07-22T08:17:51Z"
generation: 1
labels:
app.kubernetes.io/instance: nginx
app.kubernetes.io/managed-by: Tiller
app.kubernetes.io/name: nginx
helm.sh/chart: nginx-6.0.2
name: nginx
namespace: test
resourceVersion: "128636260"
selfLink: /apis/extensions/v1beta1/namespaces/test/deployments/nginx
uid: d63b0f05-cbf3-11ea-99d5-42010a8a00f1
spec:
progressDeadlineSeconds: 600
replicas: 2
...
</code></pre>
<p>And I save this, then check the status to see that the deployment has scaled up to 2 pods:</p>
<pre><code>kubectl -n test get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 2/2 2 2 7m50s
</code></pre>
<p>Finally, I use helm to upgrade the release; as expected, helm should override the deployment back to 1 pod like in the first step, but now the deployment keeps the value <code>replicas: 2</code> even if you set the value (in helm's values.yaml file) to any number.
I have also used the <code>--recreate-pods</code> option of the <code>helm</code> command:</p>
<pre><code>helm upgrade --install --namespace test nginx bitnami/nginx --debug --recreate-pods
Release "nginx" has been upgraded. Happy Helming!
LAST DEPLOYED: Wed Jul 22 15:31:24 2020
NAMESPACE: test
STATUS: DEPLOYED
RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
nginx-server-block 1 13m
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 0/2 2 0 13m
==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
nginx-6bcbfcd548-b4bfs 0/1 ContainerCreating 0 1s
nginx-6bcbfcd548-bzhf2 0/1 ContainerCreating 0 1s
nginx-6bcbfcd548-kdf4x 0/1 Terminating 0 13m
nginx-6bcbfcd548-xfxbv 1/1 Terminating 0 6m16s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx LoadBalancer 10.219.6.148 34.82.120.134 80:30811/TCP,443:31260/TCP 13m
NOTES:
Get the NGINX URL:
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
Watch the status with: 'kubectl get svc --namespace test -w nginx'
export SERVICE_IP=$(kubectl get svc --namespace test nginx --template "{{ range (index .status.loadBalancer.ingress 0) }}{{.}}{{ end }}")
echo "NGINX URL: http://$SERVICE_IP/"
</code></pre>
<p>Result: after I edit <code>replicas</code> in the deployment manually, I cannot use helm to override the <code>replicas</code> value, but I can still change the image and so on; only replicas will not change.
I have run with <code>--debug</code> and helm still renders the deployment with <code>replicas: 1</code></p>
<pre><code># Source: nginx/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
app.kubernetes.io/name: nginx
helm.sh/chart: nginx-6.0.2
app.kubernetes.io/instance: nginx
app.kubernetes.io/managed-by: Tiller
spec:
selector:
matchLabels:
app.kubernetes.io/name: nginx
app.kubernetes.io/instance: nginx
replicas: 1
template:
metadata:
labels:
app.kubernetes.io/name: nginx
helm.sh/chart: nginx-6.0.2
app.kubernetes.io/instance: nginx
app.kubernetes.io/managed-by: Tiller
spec:
containers:
- name: nginx
image: docker.io/bitnami/nginx:1.19.1-debian-10-r0
imagePullPolicy: "IfNotPresent"
ports:
- name: http
containerPort: 8080
livenessProbe:
failureThreshold: 6
initialDelaySeconds: 30
tcpSocket:
port: http
timeoutSeconds: 5
readinessProbe:
initialDelaySeconds: 5
periodSeconds: 5
tcpSocket:
port: http
timeoutSeconds: 3
resources:
limits: {}
requests: {}
volumeMounts:
- name: nginx-server-block-paths
mountPath: /opt/bitnami/nginx/conf/server_blocks
volumes:
- name: nginx-server-block-paths
configMap:
name: nginx-server-block
items:
- key: server-blocks-paths.conf
path: server-blocks-paths.conf
</code></pre>
<p>But the k8s deployment will keep the values <code>replicas</code> the same like the edit manual once <code>replicas: 2</code></p>
<p>As far as I know, the output of the <code>helm</code> command is the generated k8s YAML, so why can I not use <code>helm</code> to override the specific <code>replicas</code> value in this case?</p>
<p>Tks in advance!!!</p>
<p>P/S: I just want to know what is behavior here, Tks</p>
<p>Helm version</p>
<pre><code>Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
</code></pre>
| Tho Quach | <p>Follow the offical document from <code>Helm</code>: <a href="https://helm.sh/docs/faq/#improved-upgrade-strategy-3-way-strategic-merge-patches" rel="nofollow noreferrer">Helm | Docs</a></p>
<blockquote>
<p>Helm 2 used a two-way strategic merge patch. During an upgrade, it compared the most recent chart's manifest against the proposed chart's manifest (the one supplied during helm upgrade). It compared the differences between these two charts to determine what changes needed to be applied to the resources in Kubernetes. If changes were applied to the cluster out-of-band (such as during a kubectl edit), those changes were not considered. This resulted in resources being unable to roll back to its previous state: because Helm only considered the last applied chart's manifest as its current state, if there were no changes in the chart's state, the live state was left unchanged.</p>
</blockquote>
<p>This is improved in <code>Helm v3</code>: because <code>Helm v3</code> has removed <code>Tiller</code>, your values are applied exactly to the <code>Kubernetes resources</code>, and the values in <code>Helm</code> and <code>Kubernetes</code> stay consistent.</p>
<p>==> The result is that you will not meet this problem again if you use <code>Helm version 3</code>.</p>
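<p>A quick way to check this behaviour (release, namespace and chart names as in the question; exact output will differ) is to scale out-of-band and then upgrade again with Helm 3:</p>
<pre class="lang-sh prettyprint-override"><code># scale the deployment out-of-band, as in step 2
kubectl -n test scale deployment nginx --replicas=2

# upgrade the release again; with Helm 3's three-way merge the chart value wins
helm upgrade --install --namespace test nginx bitnami/nginx

# the deployment is reconciled back to the chart's replica count
kubectl -n test get deployment nginx
</code></pre>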
| Penguin Geek |
<p>I am confused as to how the minikube single node cluster works, when I start a k8s cluster using <code>minikube start --driver=docker</code>, is the single node itself a docker container, or it is our local machine? I assume that single node is the local machine and the pods are containers inside this node.</p>
<p>Can anyone confirm this? Or perhaps correct me?</p>
<p>I have tried looking into the explanations in the minikube docs, but could not find a definitive answer.</p>
| Martian | <p>Minikube cluster is a single-node cluster, meaning that it consists of only one machine–typically a containerized or virtual machine running on our laptop or desktop and deploys a simple cluster containing only one node.</p>
<p>Nodes in Kubernetes clusters are assigned roles. Start your Minikube cluster and then run the following command:</p>
<pre><code>kubectl get nodes
</code></pre>
<p>You should see one node, named minikube, because this is a single-node cluster.</p>
<pre><code>NAME STATUS ROLES AGE VERSION
minikube Ready control-plane,master 99s v1.23.1
</code></pre>
<p>Refer to the <a href="https://www.mirantis.com/blog/the-architecture-of-a-kubernetes-cluster/" rel="nofollow noreferrer">blog</a> written by Eric Gregory for information on architecture of single-node clusters.</p>
| Srividya |
<p>I have 2 local vms on different clusters.
On the first one, I have a pod that listens on port <code>9090</code></p>
<pre><code>gateway gRPC server starting {"address": "0.0.0.0:9090"}
</code></pre>
<p>How can I expose this port on the VM to make the connection from the second VM to this pod?</p>
<p>Both vms are in the same network and they can see each other</p>
<p>Currently the pod has a SVC of type <code>ClusterIP</code></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2023-06-09T06:21:57Z"
labels:
app.kubernetes.io/name: myapp
service-type: public
name: myapp
namespace: myapp
ownerReferences:
- apiVersion: core.opni.io/v1beta1
blockOwnerDeletion: true
controller: true
kind: Gateway
name: myapp
uid: 5cf052fb-31cb-43b4-8b3c-264a4d2240ce
resourceVersion: "371786"
uid: 95a07669-fe15-40a2-9614-21d33475a54b
spec:
clusterIP: 10.43.66.183
clusterIPs:
- 10.43.66.183
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- name: grpc
port: 9090
protocol: TCP
targetPort: grpc
- name: noauth
port: 4000
protocol: TCP
targetPort: noauth
selector:
app.kubernetes.io/name: myapp
sessionAffinity: None
type: ClusterIP
status:
loadBalancer: {}
</code></pre>
<p>Is it possible to expose this app on the <code>nodeIP:9090</code> ?</p>
| Kristian Zhelyazkov | <p>ClusterIP service works within the cluster. You cannot use clusterIP to connect to virtual machines in different clusters. In this case where <strong>NodePort service</strong> comes in.</p>
<p>A NodePort is an open port on every node of your cluster. Kubernetes transparently routes incoming traffic on the NodePort to your service, even if your application is running on a different node.Refer to the <a href="https://sysdig.com/blog/kubernetes-services-clusterip-nodeport-loadbalancer/" rel="nofollow noreferrer">link</a> by Javier Martinez for more information on service types</p>
<p>Whenever a new Kubernetes cluster gets built and If you set the type field to NodePort, one of the available configuration parameters is <code>service-node-port-range</code> which defines a range of ports to use for NodePort allocation and usually defaults to <code>30000-32767</code></p>
<p>So, Nodeport service uses a port range from 30000 for which you may not use port 9090 for exposing the application.</p>
| Srividya |
<p>I have deployed my application in AKS and used an ingress to expose the services externally, but I need to restrict access to my application to some IPs. I read something about whitelisting IPs and tried adding my IP to my ingress like this:
<a href="https://i.stack.imgur.com/wlIZb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wlIZb.png" alt="enter image description here" /></a></p>
<p>but when accessing the app from my machine I got 403 Forbidden, so I guess that I'm not using the right IP address.
So what IP exactly should I put in the ingress?</p>
| firas messaoudi | <p>I fixed the problem by adding --set controller.service.externalTrafficPolicy=Local to the install command of the ingress.</p>
| firas messaoudi |
<p>I am deploying a monitoring stack from the <a href="https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack" rel="noreferrer"><code>kube-prometheus-stack</code></a> helm chart and I am trying to configure alertmanager so that it has my custom configuration for alerting in a Slack channel.</p>
<p>The configuration in the pod is loaded from <code>/etc/alertmanager/config/alertmanager.yaml</code>.
From the pod description, this file is loaded from a secret automatically generated:</p>
<pre class="lang-yaml prettyprint-override"><code>...
volumeMounts:
- mountPath: /etc/alertmanager/config
name: config-volume
...
volumes:
- name: config-volume
secret:
defaultMode: 420
secretName: alertmanager-prometheus-community-kube-alertmanager-generated
</code></pre>
<p>If I inspect the secret, it contains the default configuration found in the default values in <code>alertmanager.config</code>, which I intend to overwrite.</p>
<p>If I pass the following configuration to alertmanager to a fresh installation of the chart, it does not create the alertmanager pod:</p>
<pre class="lang-yaml prettyprint-override"><code>alertmanager:
config:
global:
resolve_timeout: 5m
route:
group_by: ['job', 'alertname', 'priority']
group_wait: 10s
group_interval: 1m
routes:
- match:
alertname: Watchdog
receiver: 'null'
- receiver: 'slack-notifications'
continue: true
receivers:
- name: 'slack-notifications'
slack-configs:
- slack_api_url: <url here>
title: '{{ .Status }} ({{ .Alerts.Firing | len }}): {{ .GroupLabels.SortedPairs.Values | join " " }}'
text: '<!channel> {{ .CommonAnnotations.summary }}'
channel: '#mychannel'
</code></pre>
<p>First of all, if I don't pass any configuration in the <code>values.yaml</code>, the alertmanager pod is successfully created.</p>
<p>How can I properly overwrite alertmanager's configuration so it mounts the correct file with my custom configuration into <code>/etc/alertmanger/config/alertmanager.yaml</code>?</p>
| everspader | <p>The alertmanager requires certain non-default arguments to overwrite the default as it appears it fails in silence. Wrong configuration leads to the pod not applying the configuration (<a href="https://github.com/prometheus-community/helm-charts/issues/1998" rel="noreferrer">https://github.com/prometheus-community/helm-charts/issues/1998</a>). What worked for me was to carefully configure the alertmanager and add a watchdog child route and the null receiver</p>
<pre><code>route:
group_by: [ '...' ]
group_wait: 30s
group_interval: 10s
repeat_interval: 10s
receiver: 'user1'
routes:
- match:
alertname: Watchdog
receiver: 'null'
receivers:
- name: 'null'
- ...
</code></pre>
| m-eriksen |
<p>I'm attempting to build a branch using Jenkins and a 'docker in the docker' container to build a container from src.</p>
<p>I define the Docker cloud instance here:</p>
<p><a href="https://i.stack.imgur.com/oTSjY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oTSjY.png" alt="enter image description here" /></a></p>
<p>Should an extra tab be available that enable the job to use the Docker cloud instance setup above?</p>
<p>The job is a multi-branch pipeline:</p>
<p><a href="https://i.stack.imgur.com/JdfTV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JdfTV.png" alt="enter image description here" /></a></p>
<p>But when I attempt to configure a job that uses the docker cloud instance, configured above, the option to build with docker is not available:</p>
<p><a href="https://i.stack.imgur.com/WXfSP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WXfSP.png" alt="enter image description here" /></a></p>
<p>The build log contains:</p>
<blockquote>
<p>time="2021-04-04T14:27:16Z" level=error msg="failed to dial gRPC:
cannot connect to the Docker daemon. Is 'docker daemon' running on
this host?: dial unix /var/run/docker.sock: connect: no such file or
directory" error during connect: Post
http://%2Fvar%2Frun%2Fdocker.sock/v1.40/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&session=vgpahcarinxfh05klhxyk02gg&shmsize=0&t=ron%2Fml-services&target=&ulimits=null&version=1:
context canceled [Pipeline] } [Pipeline] // stage [Pipeline] }
[Pipeline] // node [Pipeline] End of Pipeline [Bitbucket] Notifying
commit build result [Bitbucket] Build result notified ERROR: script
returned exit code 1 Finished: FAILURE</p>
</blockquote>
<p>which suggests the build is searching for Docker on the same host as Jenkins, but I'm attempting to build with Docker on a different host?</p>
<p>Have I configured Docker with Jenkins correctly?</p>
<p>My <code>Jenkinsfile</code> contains:</p>
<pre><code>node {
def app
stage('Clone repository') {
checkout scm
}
stage('Build image') {
app = docker.build("ron/services")
}
stage('Push image') {
docker.withRegistry('https://registry.hub.docker.com', 'git') {
app.push("${env.BUILD_NUMBER}")
app.push("latest")
}
}
}
</code></pre>
<p>Update:</p>
<p>Clicking the checkmark at <code>Expose DOCKER_HOST</code> , rebuilding contains error:</p>
<pre><code>+ docker build -t ron/services .
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
[Bitbucket] Notifying commit build result
[Bitbucket] Build result notified
ERROR: script returned exit code 1
Finished: FAILURE
</code></pre>
| blue-sky | <p>Not clear if this is what you are trying to do, but configuring Docker cloud will tell your Jenkins to launch a container on 10.241.0.198 (client), and run your jenkins job in that container. To make this work, there are a couple of things to check:</p>
<ol>
<li>ensure that jenkins user on jenkins server can access port 2371 on client, ie 'Test Connection' returns success</li>
<li>Turn on '<em>Expose DOCKER_HOST</em>' if you want to use docker in the container</li>
<li>configure ssh so that jenkins user on jenkins server can ssh to the container when it's running on the client (<em>CMD ["/usr/sbin/sshd", "-D"]</em> in Dockerfile)</li>
<li>In Docker Agent Template: configure a label; turn on 'enabled'; configure a docker image to run in the container; set Remote filesystem Root: /home/jenkins</li>
<li>In Container Settings: (very important!!) add <em>/var/run/docker.sock:/var/run/docker.sock</em> to Volumes</li>
</ol>
<p>To get your Pipeline job to run on the docker image, set the agent label to the label you provided in step 4.</p>
<p>A couple of gotchas when creating the image to run in the container:</p>
<ol>
<li>install both openssh-clients and openssh-server</li>
<li>install java</li>
<li>install any other build tools you might need, eg git</li>
<li>install docker if you want docker in docker support</li>
<li>configure sftp in /etc/ssh/sshd_config, e.g. add the block below</li>
</ol>
<pre><code># override default of no subsystems
Subsystem sftp /usr/lib/openssh/sftp-server
Match group sftp
X11Forwarding no
AllowTCPForwarding no
ForceCommand internal-sftp
</code></pre>
| user16707078 |
<p>I found two interpretations to a formula used for over-provisioning resources in <a href="https://cloud.google.com/kubernetes-engine" rel="nofollow noreferrer">GKE</a> when autoscaling.</p>
<p>According to the following two sources:</p>
<ul>
<li><p><a href="https://cloud.google.com/architecture/best-practices-for-running-cost-effective-kubernetes-applications-on-gke#autoscaler_and_over-provisioning" rel="nofollow noreferrer">Autoscaler and over-provisioning</a></p>
</li>
<li><p><a href="https://www.youtube.com/watch?v=VNAWA6NkoBs&t=403s" rel="nofollow noreferrer">Autoscaling with GKE: Clusters and nodes</a></p>
</li>
</ul>
<p>the formula:</p>
<pre><code>(1 - buffer) / (1 + traffic)
</code></pre>
<p>where:</p>
<pre><code>buffer: percentage of CPU buffer that you reserve, so your workloads do not get to 100% CPU utilization
traffic: percentage of traffic increase(expected) in the following two or three minutes
</code></pre>
<p>Will give you the value of a <em>new resource utilization target for the HPA</em> to appropriately handle the expected traffic growth while minimizing extra resources allocation.</p>
<p>So, for example, if you have the following values:</p>
<pre><code>buffer: 15%, so you would get a CPU utilization of up to 85%
traffic: 30% increase in the next two or three minutes
target utilization = (1 - 0.15) / (1 + 0.30) = 0.85 / 1.3 = 0.65384615
target utilization = 65%
</code></pre>
<p>The interpretation from those two sources would be that <strong>65% is the optimized target utilization for the HPA</strong>. Then, you <strong>get a 35% of over-provisioned resources</strong> to schedule new pods in existing nodes while the Cluster auto-scaler(and node auto-provisioner) will allocate new nodes during a peak in demand.</p>
<p>The problem is that the laboratory <a href="https://www.qwiklabs.com/focuses/15636?parent=catalog" rel="nofollow noreferrer">Understanding and Combining GKE Autoscaling Strategies</a> in the section <a href="https://www.qwiklabs.com/focuses/15636?parent=catalog#step11" rel="nofollow noreferrer">"Optimize larger loads"</a> (version: Manual Last Updated: March 19, 2021) establishes that the 65% value would be the percentage of over-provisioned resources you need to allocate in excess.</p>
<p>So, according to the first two sources:</p>
<ul>
<li>percentage of resources to overprovision: 35%</li>
</ul>
<p>But according to the laboratory "Understanding and Combining GKE Autoscaling Strategies":</p>
<ul>
<li>percentage of resources to overprovision: 65%</li>
</ul>
<p>Which one is the correct interpretation?</p>
<p>IMHO, the correct interpretation is that the value of over-provision equals 35%. The formula gives you a new resource utilization target for the HPA concerning the (new) traffic demand (and not the percentage of resources to allocate in excess).</p>
| Guillermo Ampie | <p>Yes, the first interpretation is correct. In the first interpretation, they compute over-provisioning as the unused resources over the total size of the cluster, since Horizontal Pod Autoscaling is configured to keep resource utilization to ~65%, you have a 100% - 65% = 35% unused resources which is the value of a new target resource utilization for the HPA.</p>
<p>In the second interpretation i.e., "Understanding and Combining GKE Autoscaling Strategies", they seem to consider "over-provisioning percent" as how much more computing power is added to the “needed" compute resources. In other words, you have a 3 node cluster, needed to run your workload, and you add 2 nodes on top, this makes it such that you have over-provisioned the cluster by 2/3 = 66.6666% ~= 65%.</p>
<p>The first interpretation is more intuitive and makes more sense in practical usage.</p>
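<p>For reference, a minimal sketch of an HPA that uses the computed 65% target (the workload name and replica bounds are placeholders):</p>
<pre><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 3
  maxReplicas: 30
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 65   # (1 - buffer) / (1 + traffic) from the question
</code></pre>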
| Jyothi Kiranmayi |
<p>What is the best way to enable BBR on default for my clusters?
In this <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-system-config" rel="nofollow noreferrer">link</a>, I didn't see an option for controlling the congestion control.</p>
| kfirt | <p>Google BBR can only be enabled in Linux operating systems. By default the Linux servers uses Reno and CUBIC but the latest version kernels also includes the google BBR algorithms and can be enabled manually.</p>
<p>To enable it on CentOS 8 add below lines in /etc/sysctl.conf and issue command sysctl -p</p>
<pre><code>net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
</code></pre>
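<p>Afterwards you can verify which congestion control algorithm is in use, for example:</p>
<pre><code># check the active congestion control algorithm
sysctl net.ipv4.tcp_congestion_control
# expected output: net.ipv4.tcp_congestion_control = bbr
</code></pre>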
<p>For more Linux distributions you can refer to this <a href="https://supporthost.in/how-to-install-google-bbr/" rel="nofollow noreferrer">link</a>.</p>
| Edd |
<p>I'm trying to create a pod with a 10 GB persistent disk volume, but it seems I cannot create a disk under 200 GB.</p>
<p>I can see the PV listed but the PVC is Pending. I can see that the PV is Available, so I cannot understand what is happening.</p>
<p><strong>Please find info below:</strong></p>
<pre><code>Invalid value for field 'resource.sizeGb': '10'. Disk size cannot be smaller than 200 GB., invalid
kubectl get pvc -n vault-ppd
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pv-vault-ppd-claim Pending balanced-persistent-disk 2m45s
kubectl get pv -n vault-ppd
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-vault-ppd 10Gi RWO Retain Available vault/pv-vault-ppd-claim
</code></pre>
<p>My manifest <strong>vault-ppd.yaml</strong></p>
<pre><code> kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: balanced-persistent-disk
provisioner: pd.csi.storage.gke.io
parameters:
type: pd-standard
replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer
allowedTopologies:
- matchLabelExpressions:
- key: topology.gke.io/zone
values:
- europe-west1-b
- europe-west1-c
- europe-west1-d
---
apiVersion: v1
kind: Namespace
metadata:
name: vault-ppd
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: vault-ppd
namespace: vault-ppd
labels:
app.kubernetes.io/name: vault-ppd
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-vault-ppd
spec:
storageClassName: "balanced-persistent-disk"
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
claimRef:
namespace: vault
name: pv-vault-ppd-claim
gcePersistentDisk:
pdName: gke-vault-volume
fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv-vault-ppd-claim
namespace: vault-ppd
spec:
storageClassName: "balanced-persistent-disk"
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
</code></pre>
<p>Thanks for your help, guys.</p>
| Rabah DevOps | <p>Your deployment has regional persistent disks of type <strong>pd-standard</strong> and replication-type: <strong>regional-pd</strong>, this means that volumes create a regional persistent disk. As mentioned in <a href="https://cloud.google.com/compute/docs/disks/regional-persistent-disk#restrictions" rel="nofollow noreferrer">documentation</a> the minimum capacity per disk for regional persistent disks is 200 GB . We cannot create a regional-pd with lower GB requirement for a standard disk. So now the workaround is, you can either create a PVC with a larger size or use pd-ssd instead.</p>
<p><strong>Note:</strong> To use regional persistent disks of type <strong>pd-standard</strong>, set the <strong>PersistentVolumeClaim.storage</strong> attribute to <strong>200Gi</strong> or higher. If you need a smaller persistent disk, use <strong>pd-ssd</strong> instead of <strong>pd-standard</strong>.</p>
<p>Refer <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/persistent-volumes/regional-pd#regional_persistent_disks" rel="nofollow noreferrer">Regional Persistent disks</a> for information.</p>
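<p>A sketch of the StorageClass from the question adjusted to pd-ssd, which allows the 10Gi claim to bind (everything else stays as in the question):</p>
<pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: balanced-persistent-disk
provisioner: pd.csi.storage.gke.io
parameters:
  type: pd-ssd                  # regional pd-ssd disks allow much smaller sizes than pd-standard
  replication-type: regional-pd
volumeBindingMode: WaitForFirstConsumer
</code></pre>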
| Jyothi Kiranmayi |
<p>I'm learning Kubernetes over Minikube.
My demo consists of a Flask API and a MySQL Database.
I made all the <em>.yaml</em> files but something strange happens with services of the deployments...</p>
<p>I cannot communicate with the API <strong>externally</strong> (neither with Postman, Curl, browser...)</p>
<p>By "externally" I mean "from outside the cluster" (on the same machine, ex: from the browser, postman...)</p>
<p><strong>This the Deployment+Service for the API:</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: api-dip-api-deployment
labels:
app: api-dip-api
spec:
replicas: 1
selector:
matchLabels:
app: api-dip-api
template:
metadata:
labels:
app: api-dip-api
spec:
containers:
- name: api-dip-api
image: myregistry.com
ports:
- containerPort: 5000
env:
- name: DATABASE_USER
valueFrom:
secretKeyRef:
name: api-secret
key: api-db-user
- name: DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: api-secret
key: api-db-password
- name: DATABASE_HOST
valueFrom:
configMapKeyRef:
name: api-configmap
key: api-database-url
- name: DATABASE_NAME
valueFrom:
configMapKeyRef:
name: api-configmap
key: api-database-name
- name: DATABASE_PORT
valueFrom:
configMapKeyRef:
name: api-configmap
key: api-database-port
imagePullSecrets:
- name: regcred
---
apiVersion: v1
kind: Service
metadata:
name: api-service
spec:
selector:
app: api-dip-api
ports:
- port: 5000
protocol: TCP
targetPort: 5000
nodePort: 30000
type: LoadBalancer
</code></pre>
<p><strong>Dockerfile API:</strong></p>
<pre><code>FROM python:latest
# create a dir for app
WORKDIR /app
# intall dependecies
COPY requirements.txt .
RUN pip install -r requirements.txt
# source code
COPY /app .
EXPOSE 5000
# run the application
CMD ["python", "main.py"]
</code></pre>
<p>Since i'm using Minikube the correct IP for the service is displayed with</p>
<pre><code>minikube service <service_name>
</code></pre>
<p>I already tried looking at the minikube context, as suggested in another post, but it shows:</p>
<pre><code>CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* minikube minikube minikube default
</code></pre>
<p>so it should be ok.</p>
<p>I don't know what to try now... the ports are mapped correctly I think.</p>
| ANTONELLO BARBONE | <p>I <strong>did not</strong> found any solution to my problem.
I run Kubernetes with Minikube on Vmware Fusion on my Mac with BigSur.</p>
<p>I found out that the SAME EXACT deployment works on a machine with ubuntu installed, OR on a virtual machine made with VirtualBox.</p>
<p>Actually seems that this is a known issue:</p>
<ul>
<li><a href="https://github.com/kubernetes/minikube/issues/11577" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/11577</a></li>
<li><a href="https://github.com/kubernetes/minikube/issues/11193" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/11193</a></li>
<li><a href="https://github.com/kubernetes/minikube/issues/4027" rel="nofollow noreferrer">https://github.com/kubernetes/minikube/issues/4027</a></li>
</ul>
| ANTONELLO BARBONE |
<p>I'm having an issue with volumes on Kubernetes when I'm trying to mount hostPath volumes. (i also tried with PVC, but no success)</p>
<p>Dockerfile:</p>
<pre><code>FROM node:16
WORKDIR /usr/src/app
COPY package.json /usr/src/app
RUN yarn install
COPY . /usr/src/app
EXPOSE 3000
ENTRYPOINT ["yarn", "start:dev"]
</code></pre>
<p>docker-compose.yml:</p>
<pre><code>version: '3.8'
services:
api:
container_name: api
build:
context: .
dockerfile: Dockerfile
volumes:
- .:/usr/src/app
- /usr/src/app/node_modules
ports:
- 3000:3000
restart: always
labels:
kompose.volume.type: 'hostPath'
database:
container_name: database
image: postgres:latest
ports:
- 5432:5432
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: task-management
</code></pre>
<p>api-development.yml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose -f docker-compose.yml convert
kompose.version: 1.26.1 (HEAD)
kompose.volume.type: hostPath
creationTimestamp: null
labels:
io.kompose.service: api
name: api
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: api
strategy:
type: Recreate
template:
metadata:
annotations:
kompose.cmd: kompose -f docker-compose.yml convert
kompose.version: 1.26.1 (HEAD)
kompose.volume.type: hostPath
creationTimestamp: null
labels:
io.kompose.service: api
spec:
containers:
- image: task-management_api
name: api
imagePullPolicy: Never
ports:
- containerPort: 3000
resources: {}
volumeMounts:
- mountPath: /usr/src/app
name: api-hostpath0
- mountPath: /usr/src/app/node_modules
name: api-hostpath1
restartPolicy: Always
volumes:
- hostPath:
path: /Users/handrei/workspace/devs/nest-ws/task-management
name: api-hostpath0
- hostPath:
name: api-hostpath1
status: {}
</code></pre>
<p>The error I received from the pod is the following:</p>
<p>kubectl logs api-84b56776c5-v86c7</p>
<pre><code>yarn run v1.22.17
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
error Couldn't find a package.json file in "/usr/src/app"
</code></pre>
<p>I assume something is wrong with the volumes, because applying the deployment and service without volumes works.</p>
| andrei | <blockquote>
<p>A <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer"><code>hostPath</code></a> volume mounts a file or directory from the host node's filesystem into your Pod.</p>
</blockquote>
<p>To the required <code>path</code> property, you can also specify a <code>type</code> for a <code>hostPath</code> volume.</p>
<blockquote>
<p><strong>NOTE</strong>: HostPath volumes present many security risks, and it is a best practice to avoid the use of HostPaths when possible. When a HostPath volume must be used, it should be scoped to only the required file or directory, and mounted as <strong>ReadOnly</strong>.</p>
</blockquote>
<hr />
<p>As @<a href="https://stackoverflow.com/users/10008173/david-maze" title="87,783 reputation">David Maze</a> mentioned before, it's a better idea to</p>
<blockquote>
<p>use Node locally for day-to-day development and use a self-contained image (without any volume mounts at all) in Kubernetes. (...)</p>
<p>The <code>node_modules</code> directory is empty and nothing in Kubernetes will every copy data there. You'll need to delete all of the volume declarations from your Deployment spec for this to run.</p>
</blockquote>
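<p>In practice that means dropping the volume declarations from the Deployment in the question, so the container runs from the code and node_modules baked into the image, roughly like this:</p>
<pre><code>    spec:
      containers:
      - image: task-management_api
        name: api
        imagePullPolicy: Never
        ports:
        - containerPort: 3000
      # no volumeMounts / volumes: the image built from the Dockerfile
      # already contains /usr/src/app and its node_modules
</code></pre>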
<hr />
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/" rel="nofollow noreferrer">This quide</a> will help you to translate a Docker Compose File to Kubernetes Resources.</p>
<p>See also this questions on StackOverflow:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/39651908/why-node-modules-is-empty-after-docker-build">Why <em>node_modules</em> is empty after docker build?</a></li>
<li><a href="https://stackoverflow.com/questions/43910919/kubernetes-volume-for-node-modules">Kubernetes volume for <em>node_modules</em></a></li>
</ul>
| kkopczak |
<p>My readiness probe specifies HTTPS and 8200 as the port to check my hashicorp vault pod.</p>
<pre><code> readinessProbe:
httpGet:
scheme: HTTPS
path: '/v1/sys/health?activecode=200'
port: 8200
</code></pre>
<p>Once the pod is running kubectl describe pod shows this</p>
<pre><code>Readiness probe failed: Error checking seal status: Error making API request.
URL: GET http://127.0.0.1:8200/v1/sys/seal-status
Code: 400. Raw Message:
Client sent an HTTP request to an HTTPS server.
</code></pre>
| Matthew Wimpelberg | <p>I have found <a href="https://stackoverflow.com/questions/63564594/hashicorp-vault-client-sent-an-http-request-to-an-https-server-readiness-pro">this similar problem</a>. See <a href="https://stackoverflow.com/a/63565711/16739663">the whole answer here</a>.</p>
<p>According to <a href="https://www.vaultproject.io/docs/platform/k8s/helm/configuration" rel="nofollow noreferrer">this documentation</a>:</p>
<blockquote>
<p>The http/https scheme is controlled by the <code>tlsDisable</code> value.</p>
</blockquote>
<blockquote>
<p>When set to <code>true</code>, changes URLs from <code>https</code> to <code>http</code> (such as the <code>VAULT_ADDR=http://127.0.0.1:8200</code> environment variable set on the Vault pods).</p>
</blockquote>
<p>To turn it off:</p>
<pre class="lang-yaml prettyprint-override"><code>global:
tlsDisable: false
</code></pre>
<blockquote>
<p><strong>NOTE</strong>:
Vault should always be <a href="https://www.vaultproject.io/docs/configuration/listener/tcp#tls_cert_file" rel="nofollow noreferrer">used with TLS</a> in production to provide secure communication between clients and the Vault server. It requires a certificate file and key file on each host where Vault is running.</p>
</blockquote>
<hr />
<p>See also <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">this documentation</a>. One can find there many examples of using readiness and liveness probes, f.e</p>
<pre class="lang-yaml prettyprint-override"><code> readinessProbe:
exec:
command:
- /bin/sh
- -ec
- vault status
initialDelaySeconds: 5
periodSeconds: 5
</code></pre>
| kkopczak |
<p>We have a data processing service that tries to utilize as much CPU and memory as possible. In VMs this service uses the maximum CPU and memory available and keeps running. But when we run this service on Kubernetes, it gets evicted as soon as it hits the resource limits. Is there a way to let the service hit maximum resource usage and not get evicted?</p>
| shailesh | <blockquote>
<p>The kubelet is the primary "node agent" that runs on each node. [<a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer">1</a>]</p>
<p>When you specify a <a href="https://kubernetes.io/docs/concepts/workloads/pods/" rel="nofollow noreferrer">Pod</a>, you can optionally specify how much of each resource a <a href="https://kubernetes.io/docs/concepts/containers/" rel="nofollow noreferrer">container</a> needs. [<a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/" rel="nofollow noreferrer">2</a>]</p>
</blockquote>
<p>@mmking has a point. Indeed, the kubelet requires some resources on each node, and that is the reason you are seeing evictions.</p>
<p>And again, as @mmking mentioned, unfortunately there is no way around that.</p>
<blockquote>
<p>I'd recommend setting resource limits to whatever the math comes down to (total resource minus kubelet requirements).</p>
</blockquote>
<p>I agree with the sentence above. <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits" rel="nofollow noreferrer">Here</a> you can find documentation.</p>
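<p>For example, a minimal sketch of such a container spec, assuming a node with 8 CPUs and 8Gi of memory where roughly 1 CPU and 2Gi are left for the kubelet and system daemons (the container name is hypothetical; adjust the numbers to your own nodes):</p>
<pre class="lang-yaml prettyprint-override"><code>    containers:
    - name: data-processor        # hypothetical container name
      image: data-processor:latest
      resources:
        requests:
          cpu: "7"
          memory: 6Gi
        limits:
          cpu: "7"
          memory: 6Gi
</code></pre>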
<p>References:</p>
<p>[1] <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="nofollow noreferrer"><em>Kubelet</em></a></p>
<p>[2] <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/" rel="nofollow noreferrer"><em>Manage resources containers</em></a></p>
| kkopczak |
<p>Today I found that on the host the Kubernetes (v1.21.3) folder <code>io.containerd.snapshotter.v1.overlayfs</code> takes up too much space:</p>
<pre><code>[root@k8smasterone kubernetes.io~nfs]# pwd
/var/lib/kubelet/pods/8aafe99f-53c1-4bec-8cb8-abd09af1448f/volumes/kubernetes.io~nfs
[root@k8smasterone kubernetes.io~nfs]# duc ls -Fg /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/
13.5G snapshots/ [++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++]
2.2M metadata.db [
</code></pre>
<p>It takes 13.5GB of disk space. Is it possible to shrink this folder?</p>
| Dolphin | <p>The directory <code>/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs</code> is where the various container and image layers are persisted by containerd. These layers are downloaded based on the containers running on the node. If the node starts running out of space, the kubelet has the ability to garbage collect unused images, which will reduce the size of this directory. You can also configure the size of the boot disk for the node pools if needed.</p>
<p>It is expected that this directory grows from the time a node is created. However, when node disk usage is above 85%, garbage collection will attempt to identify images that can be removed. It may not be able to remove images, though, if they are currently in use by an existing container running on the node or have been recently pulled.</p>
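<p>The thresholds the kubelet uses for image garbage collection can also be tuned. A minimal sketch of a KubeletConfiguration fragment (85 and 80 are the defaults, shown here only to illustrate the fields):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# image garbage collection starts when disk usage exceeds this percentage
imageGCHighThresholdPercent: 85
# and tries to free space until disk usage drops below this percentage
imageGCLowThresholdPercent: 80
</code></pre>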
<p>If you want to remove unused container images with just containerd, you can use the below command:</p>
<p><strong>$ <code>crictl rmi --prune</code></strong></p>
<p>Also you can use the <strong><code>$ docker image prune</code></strong> command which allows you to clean up unused images. By default, docker image prune only cleans up dangling images. A dangling image is one that is not tagged and is not referenced by any container.</p>
<p>To remove all images which are not used by existing containers, use the -a flag:</p>
<p><strong><code>$ docker image prune -a</code></strong></p>
| Jyothi Kiranmayi |
<p>I have created a deployment and I wanted to mount the host path to the container, but when I check the container I see only an empty folder.</p>
<p>Why am I getting this error? What can be the cause?</p>
<p>EDIT: I am using Windows OS.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: myservicepod6
labels:
app: servicepod
spec:
replicas: 1
selector:
matchLabels:
app: servicepod
template:
metadata:
labels:
app: servicepod
spec:
containers:
- name: php
image: php:7.2-apache
command: ["/bin/sh", "-c"]
args: ["service apache2 start; sleep infinity"]
ports:
- name: serviceport
containerPort: 80
volumeMounts:
- mountPath: /var/www/html/
name: hostvolume
volumes:
- name: hostvolume
hostPath:
path: /C/Users/utkarsh/pentesting/learnings/kubernetes/app/objectmanagement/deployments/src/*
</code></pre>
<p>EDIT FOR THE ANSWER -</p>
<p>I start minikube - <code>minikube start --mount-string="$HOME/test/src/code/file:/data"</code></p>
<p>Then I changed the deployment file like below</p>
<p>Showing only volume part</p>
<pre><code> spec:
volumes:
- name: hostvolume
hostPath:
path: /C/Users/utkarsh/pentesting/learnings/kubernetes/app/deployments/src
containers:
- name: php
image: php:7.2-apache
command: ["/bin/sh", "-c"]
args: ["service apache2 start; sleep infinity"]
ports:
- name: serviceport
containerPort: 80
volumeMounts:
- name: hostvolume
mountPath: /test/src/code/file
</code></pre>
<p>When I logged into the pod and went to the directory (/test/src/code/file), I found the directory empty.</p>
<p>Let me know what I am missing.</p>
| Cloud Learner | <p>After a detailed search and some trial and error, I found the way.</p>
<p>Only for minikube:</p>
<p>First, we need to mount the host folder onto the target directory:</p>
<p><code>minikube mount src/:/var/www/html</code></p>
<p>Then we need to define hostPath and mountPath as</p>
<p><code>/var/www/html</code></p>
<p>Because now we have mounted the folder to the html folder.</p>
<pre><code>volumes:
- name: hostvolume
hostPath:
path: /var/www/html
containers:
- name: php
image: php:7.2-apache
command: ["/bin/sh", "-c"]
args: ["service apache2 start; sleep infinity"]
workingDir: /var/www/html
ports:
- name: serviceport
containerPort: 80
volumeMounts:
- name: hostvolume
mountPath: /var/www/html
</code></pre>
| Cloud Learner |
<p>I'm using the following command, which works on yq 3, but when I try to upgrade to yq 4 it fails.</p>
<p>this works on yq3
<code>yq w -i dep.yaml 'spec.spec.image' $(MY_VAL)</code></p>
<p>On yq 4 I get an error that it doesn't know <code>w</code>. How can I make it work?
I didn't find any matching example that helps with my case.</p>
<p><a href="https://mikefarah.gitbook.io/yq/upgrading-from-v3" rel="nofollow noreferrer">https://mikefarah.gitbook.io/yq/upgrading-from-v3</a></p>
| PJEM | <p>Take a look at the section 'Updating / writing documents' of the <a href="https://mikefarah.gitbook.io/yq/upgrading-from-v3" rel="nofollow noreferrer">migration guide</a>.</p>
<p>The following command should work for your task with version 4 of <a href="https://mikefarah.gitbook.io/yq/" rel="nofollow noreferrer">yq</a>:</p>
<p><code>dep.yaml</code> before execution</p>
<pre class="lang-yaml prettyprint-override"><code>a:
b: 1
spec:
spec:
image: image_old.jpg
c:
d: 2
</code></pre>
<p><code>MY_VAL="image_new.jpg" yq -i e '.spec.spec.image = strenv(MY_VAL)' dep.yaml</code></p>
<p><code>dep.yaml</code> after execution</p>
<pre class="lang-yaml prettyprint-override"><code>a:
b: 1
spec:
spec:
image: image_new.jpg
c:
d: 2
</code></pre>
| jpseng |
<p>Is it somehow possible to seperately allow patching of resources' metadata through a role in a Kubernetes cluster?</p>
<p>I would like to solely allow patching of namespace's metadata without giving write permissions to the whole namespace object.</p>
<p>The usecase is to allow deployment pipelines to add/change annotations to the namespace without giving them full control.</p>
| roehrijn | <p>To add/change a namespace's metadata without giving write permissions to the whole namespace object, you can create an RBAC Role that restricts access to namespace resources. That way the deployment pipeline only has access to change the metadata (i.e., annotations) of the namespace, without being given full control.</p>
<p>An RBAC Role contains rules that represent a set of permissions. Permissions are purely additive (there are no "deny" rules).</p>
<p>A Role always sets permissions within a particular <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces" rel="nofollow noreferrer">namespace</a>; when you create a Role, you have to specify the namespace it belongs in.</p>
<p>Let us consider an example Role in the namespace that can be used to grant read access to resources pods and services:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: <namespace-name>
name: read-access
rules:
- apiGroups: [""]
resources: ["pods", “services”]
verbs: ["get", "watch", "list"]
</code></pre>
<p>To grant read access to all the resources in the namespace, use the special character <strong>"*"</strong> in the <strong>resources</strong> field, i.e., <strong>resources: ["*"]</strong>.</p>
<p><strong>Note :</strong> If you want to restrict resources to a specific user you can use Rolebinding. A role binding grants the permissions defined in a role to a user or set of users. It holds a list of subjects (users, groups, or service accounts), and a reference to the role being granted. A RoleBinding grants permissions within a specific namespace</p>
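<p>A minimal sketch of such a RoleBinding, assuming the pipeline runs under a hypothetical service account called <code>pipeline-sa</code>:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-access-binding
  namespace: <namespace-name>
subjects:
- kind: ServiceAccount
  name: pipeline-sa          # hypothetical service account used by the pipeline
  namespace: <namespace-name>
roleRef:
  kind: Role
  name: read-access          # the Role defined above
  apiGroup: rbac.authorization.k8s.io
</code></pre>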
<p>Refer <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#api-overview" rel="nofollow noreferrer">RBAC Role</a> for more information.</p>
| Jyothi Kiranmayi |
<p>I am using minikube to create a k8s cluster. I am trying to create a pod. The pod is created but there is an error while pulling my image. When I try to run <code>kubectl describe pod posts</code>, I get the error below, but my image is present locally.</p>
<pre><code>Failed to pull image "suresheerf/posts": rpc error: code = Unknown desc = Error response from daemon: pull access denied for suresheerf/posts, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
</code></pre>
<p>What's happening here? How to resolve it?</p>
<p>My K8s Pod yaml:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: posts
spec:
containers:
- name: posts
image: suresheerf/posts
</code></pre>
<p>Also, my terminal screen shot:<a href="https://i.stack.imgur.com/B4EUK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B4EUK.png" alt="terminal" /></a></p>
| Suresh D | <p>Some possible causes for the error are:</p>
<ul>
<li>The image or tag doesn’t exist.</li>
<li>You’ve made a typo in the image name or tag.</li>
<li>The image registry requires authentication.</li>
</ul>
<p>Check whether the image name is correct or not. Update the image name and tag correctly.</p>
<p>If you need to pull an image from a private image registry, you need to make sure that you provide Kubernetes with the credentials it will need to pull the image. You can do this by creating a Secret containing the credentials needed to access the registry. Also make sure that you've added the Secret in the appropriate namespace. You'll also need to set the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret" rel="nofollow noreferrer">imagePullSecrets</a> field on your Pod. This field tells Kubernetes which Secret it should use when authenticating to the registry.</p>
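<p>For instance, the Secret itself can be created with kubectl - a sketch; the angle-bracket values are placeholders you need to replace with your registry details:</p>
<pre><code># creates a docker-registry type Secret named "the-secret" in the current namespace
kubectl create secret docker-registry the-secret \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>
</code></pre>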
<p><strong>Example :</strong></p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: myapp
spec:
containers:
- name: posts
image: <image name>
imagePullPolicy: Never
imagePullSecrets:
- name: the-secret
</code></pre>
<p>If Minikube is unable to directly access your local Docker repository, there are several methods to resolve this issue. Refer to <a href="https://minikube.sigs.k8s.io/docs/handbook/pushing/" rel="nofollow noreferrer">pulling images</a> for information.</p>
<p>One simple workaround is to add the image to the Minikube cache using minikube cache add and change the imagePullPolicy in your yaml file to Never. This way, it will default to using the local image you cached into your Minikube. You can also reload your cache after adding to it.</p>
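<p>For example, a sketch assuming the image was built locally under the same name used in the Pod spec:</p>
<pre><code># add the locally built image to the minikube cache
minikube cache add suresheerf/posts
# reload the cache if the image is rebuilt later
minikube cache reload
</code></pre>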
| Jyothi Kiranmayi |
<p>I am learning how to use an ingress to expose my application on GKE v1.19.
I followed the tutorial on GKE docs for Service, Ingress, and BackendConfig to get to the following setup. However, my backend services still become UNHEALTHY after some time. My aim is to overwrite the default "/" health check path for the ingress controller.</p>
<p>I have the same health checks defined in my deployment.yaml file under livenessProbe and readinessProbe and they seem to work fine since the Pod enters running stage. I have also tried to curl the endpoint and it returns a 200 status.</p>
<p>I have no clue why my services are marked as unhealthy despite being accessible from the NodePort service I defined directly. Any advice or help would be appreciated. Thank you.</p>
<p>I will add my yaml files below:</p>
<p><strong>deployment.yaml</strong></p>
<pre><code>....
livenessProbe:
httpGet:
path: /api
port: 3100
initialDelaySeconds: 180
readinessProbe:
httpGet:
path: /api
port: 3100
initialDelaySeconds: 180
.....
</code></pre>
<p><strong>backendconfig.yaml</strong></p>
<pre><code>apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
name: backend-config
namespace: ns1
spec:
healthCheck:
checkIntervalSec: 30
port: 3100
type: HTTP #case-sensitive
requestPath: /api
</code></pre>
<p><strong>service.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
cloud.google.com/backend-config: '{"default": "backend-config"}'
name: service-ns1
namespace: ns1
labels:
app: service-ns1
spec:
type: NodePort
ports:
- protocol: TCP
port: 3100
targetPort: 3100
selector:
app: service-ns1
</code></pre>
<p><strong>ingress.yaml</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ns1-ingress
namespace: ns1
annotations:
kubernetes.io/ingress.global-static-ip-name: ns1-ip
networking.gke.io/managed-certificates: ns1-cert
kubernetes.io/ingress.allow-http: "false"
spec:
rules:
- http:
paths:
- path: /api/*
backend:
serviceName: service-ns1
servicePort: 3100
</code></pre>
| user782400 | <p>The ideal way to use the ‘BackendConfig’ is when the serving pods for your service contain multiple containers, if you're using the Anthos Ingress controller, or if you need control over the port used for the load balancer's health checks; in those cases you should use a BackendConfig CRD to define <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#health_checks" rel="nofollow noreferrer">health check</a> parameters. Refer to <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#health_checks" rel="nofollow noreferrer">1</a>.</p>
<p>When a backend service's health check parameters are inferred from a serving Pod's readiness probe, GKE does not keep the readiness probe and health check synchronized. Hence any changes you make to the <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#interpreted_hc" rel="nofollow noreferrer">readiness probe</a> will not be copied to the health check of the corresponding backend service on the load balancer as per <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#interpreted_hc" rel="nofollow noreferrer">2</a>.</p>
<p>In your scenario, the backend is healthy when it uses the path '/' but shows as unhealthy when it uses the path '/api', so there might be some misconfiguration in your ingress.</p>
<p>I would suggest adding the annotation ingress.kubernetes.io/rewrite-target: /api
so that the path specified in spec.path is rewritten to /api before the request is sent to the backend service.</p>
| Priya Gaikwad |
<p>I tried to set up a Kubernetes worker node on a board with the arm64 architecture.
The worker node did not change from NotReady to Ready status.</p>
<p>I checked the Conditions log using the command below:</p>
<pre><code>$ kubectl describe nodes
...
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 02 Dec 2020 14:37:46 +0900 Wed, 02 Dec 2020 14:34:35 +0900 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 02 Dec 2020 14:37:46 +0900 Wed, 02 Dec 2020 14:34:35 +0900 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 02 Dec 2020 14:37:46 +0900 Wed, 02 Dec 2020 14:34:35 +0900 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Wed, 02 Dec 2020 14:37:46 +0900 Wed, 02 Dec 2020 14:34:35 +0900 KubeletNotReady [container runtime status check may not have completed yet, runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized, missing node capacity for resources: ephemeral-storage]
...
Capacity:
cpu: 8
memory: 7770600Ki
pods: 110
Allocatable:
cpu: 8
memory: 7668200Ki
pods: 110
...
</code></pre>
<p>This worker node does not seem to have an ephemeral-storage resource, so
this log seems to be generated:</p>
<p>"[container runtime status check may not have completed yet, runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized, missing node capacity for resources: ephemeral-storage]"</p>
<p>but the root filesystem is mounted on / as follows:</p>
<pre><code>$ df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/root 23602256 6617628 15945856 30% /
devtmpfs 3634432 0 3634432 0% /dev
tmpfs 3885312 0 3885312 0% /dev/shm
tmpfs 3885312 100256 3785056 3% /run
tmpfs 3885312 0 3885312 0% /sys/fs/cgroup
tmpfs 524288 25476 498812 5% /tmp
tmpfs 524288 212 524076 1% /var/volatile
tmpfs 777060 0 777060 0% /run/user/1000
/dev/sde4 122816 49088 73728 40% /firmware
/dev/sde5 65488 608 64880 1% /bt_firmware
/dev/sde7 28144 20048 7444 73% /dsp
</code></pre>
<p>How can I detect the ephemeral-storage resource on a Kubernetes worker node?</p>
<p>=======================================================================</p>
<p>I added the full output of $ kubectl get nodes and $ kubectl describe nodes.</p>
<pre><code>$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
raas-linux Ready master 6m25s v1.19.4
robot-dd9f6aaa NotReady <none> 5m16s v1.16.2-dirty
$
$ kubectl describe nodes
Name: raas-linux
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=raas-linux
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"a6:a1:0b:43:38:29"}
flannel.alpha.coreos.com/backend-type: vxlan
flannel.alpha.coreos.com/kube-subnet-manager: true
flannel.alpha.coreos.com/public-ip: 192.168.3.106
kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 04 Dec 2020 09:54:49 +0900
Taints: node-role.kubernetes.io/master:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: raas-linux
AcquireTime: <unset>
RenewTime: Fri, 04 Dec 2020 10:00:19 +0900
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Fri, 04 Dec 2020 09:55:14 +0900 Fri, 04 Dec 2020 09:55:14 +0900 FlannelIsUp Flannel is running on this node
MemoryPressure False Fri, 04 Dec 2020 09:55:19 +0900 Fri, 04 Dec 2020 09:54:45 +0900 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 04 Dec 2020 09:55:19 +0900 Fri, 04 Dec 2020 09:54:45 +0900 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 04 Dec 2020 09:55:19 +0900 Fri, 04 Dec 2020 09:54:45 +0900 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Fri, 04 Dec 2020 09:55:19 +0900 Fri, 04 Dec 2020 09:55:19 +0900 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 192.168.3.106
Hostname: raas-linux
Capacity:
cpu: 8
ephemeral-storage: 122546800Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 8066548Ki
pods: 110
Allocatable:
cpu: 8
ephemeral-storage: 112939130694
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 7964148Ki
pods: 110
System Info:
Machine ID: 5aa3b32d7e9e409091929e7cba2d558b
System UUID: a930a228-a79a-11e5-9e9a-147517224400
Boot ID: 4e6dd5d2-bcc4-433b-8c4d-df56c33a9442
Kernel Version: 5.4.0-53-generic
OS Image: Ubuntu 18.04.5 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.10
Kubelet Version: v1.19.4
Kube-Proxy Version: v1.19.4
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (8 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system coredns-f9fd979d6-h7hd5 100m (1%) 0 (0%) 70Mi (0%) 170Mi (2%) 5m9s
kube-system coredns-f9fd979d6-hbkbl 100m (1%) 0 (0%) 70Mi (0%) 170Mi (2%) 5m9s
kube-system etcd-raas-linux 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m20s
kube-system kube-apiserver-raas-linux 250m (3%) 0 (0%) 0 (0%) 0 (0%) 5m20s
kube-system kube-controller-manager-raas-linux 200m (2%) 0 (0%) 0 (0%) 0 (0%) 5m20s
kube-system kube-flannel-ds-k8b2d 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 5m9s
kube-system kube-proxy-wgn4l 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m9s
kube-system kube-scheduler-raas-linux 100m (1%) 0 (0%) 0 (0%) 0 (0%) 5m20s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 850m (10%) 100m (1%)
memory 190Mi (2%) 390Mi (5%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 5m20s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 5m20s kubelet Node raas-linux status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 5m20s kubelet Node raas-linux status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 5m20s kubelet Node raas-linux status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 5m20s kubelet Updated Node Allocatable limit across pods
Normal Starting 5m8s kube-proxy Starting kube-proxy.
Normal NodeReady 5m kubelet Node raas-linux status is now: NodeReady
Name: robot-dd9f6aaa
Roles: <none>
Labels: beta.kubernetes.io/arch=arm64
beta.kubernetes.io/os=linux
kubernetes.io/arch=arm64
kubernetes.io/hostname=robot-dd9f6aaa
kubernetes.io/os=linux
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 04 Dec 2020 09:55:58 +0900
Taints: node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: robot-dd9f6aaa
AcquireTime: <unset>
RenewTime: Fri, 04 Dec 2020 10:00:16 +0900
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Fri, 04 Dec 2020 09:55:58 +0900 Fri, 04 Dec 2020 09:55:58 +0900 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Fri, 04 Dec 2020 09:55:58 +0900 Fri, 04 Dec 2020 09:55:58 +0900 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Fri, 04 Dec 2020 09:55:58 +0900 Fri, 04 Dec 2020 09:55:58 +0900 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Fri, 04 Dec 2020 09:55:58 +0900 Fri, 04 Dec 2020 09:55:58 +0900 KubeletNotReady [container runtime status check may not have completed yet, runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized, missing node capacity for resources: ephemeral-storage]
Addresses:
InternalIP: 192.168.3.102
Hostname: robot-dd9f6aaa
Capacity:
cpu: 8
memory: 7770620Ki
pods: 110
Allocatable:
cpu: 8
memory: 7668220Ki
pods: 110
System Info:
Machine ID: de6c58c435a543de8e13ce6a76477fa0
System UUID: de6c58c435a543de8e13ce6a76477fa0
Boot ID: d0999dd7-ab7d-4459-b0cd-9b25f5a50ae4
Kernel Version: 4.9.103-sda845-smp
OS Image: Kairos - Smart Machine Platform 1.0
Operating System: linux
Architecture: arm64
Container Runtime Version: docker://19.3.2
Kubelet Version: v1.16.2-dirty
Kube-Proxy Version: v1.16.2-dirty
PodCIDR: 10.244.1.0/24
PodCIDRs: 10.244.1.0/24
Non-terminated Pods: (2 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system kube-flannel-ds-9xc6n 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 4m21s
kube-system kube-proxy-4dk7f 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4m21s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 100m (1%) 100m (1%)
memory 50Mi (0%) 50Mi (0%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 4m22s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 4m21s kubelet Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m21s kubelet Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m21s kubelet Node robot-dd9f6aaa status is now: NodeHasSufficientPID
Normal Starting 4m10s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 4m10s kubelet Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 4m10s kubelet Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 4m10s kubelet Node robot-dd9f6aaa status is now: NodeHasSufficientPID
Normal Starting 3m59s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 3m59s kubelet Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m59s kubelet Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m59s kubelet Node robot-dd9f6aaa status is now: NodeHasSufficientPID
Normal Starting 3m48s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 3m48s kubelet Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
Normal NodeHasSufficientPID 3m48s kubelet Node robot-dd9f6aaa status is now: NodeHasSufficientPID
Normal NodeHasNoDiskPressure 3m48s kubelet Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
Normal Starting 3m37s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 3m36s kubelet Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3m36s kubelet Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 3m36s kubelet Node robot-dd9f6aaa status is now: NodeHasSufficientPID
Normal Starting 3m25s kubelet Starting kubelet.
Normal Starting 3m14s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 3m3s kubelet Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
Normal Starting 3m3s kubelet Starting kubelet.
Normal Starting 2m52s kubelet Starting kubelet.
Normal Starting 2m40s kubelet Starting kubelet.
Normal Starting 2m29s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 2m29s kubelet Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m29s kubelet Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m29s kubelet Node robot-dd9f6aaa status is now: NodeHasSufficientPID
Normal NodeHasSufficientMemory 2m18s kubelet Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m18s kubelet Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m18s kubelet Node robot-dd9f6aaa status is now: NodeHasSufficientPID
Normal Starting 2m18s kubelet Starting kubelet.
Normal Starting 2m7s kubelet Starting kubelet.
Normal Starting 115s kubelet Starting kubelet.
Normal NodeHasNoDiskPressure 104s kubelet Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
Normal Starting 104s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 104s kubelet Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
Normal Starting 93s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 93s kubelet Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 93s kubelet Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 93s kubelet Node robot-dd9f6aaa status is now: NodeHasSufficientPID
Normal Starting 82s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 82s kubelet Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
Normal Starting 71s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 70s kubelet Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 70s kubelet Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 70s kubelet Node robot-dd9f6aaa status is now: NodeHasSufficientPID
Normal Starting 59s kubelet Starting kubelet.
Normal Starting 48s kubelet Starting kubelet.
Normal Starting 37s kubelet Starting kubelet.
Normal Starting 26s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 25s kubelet Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
Normal Starting 15s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 14s kubelet Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
Normal Starting 3s kubelet Starting kubelet.
Normal NodeHasSufficientMemory 3s kubelet Node robot-dd9f6aaa status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 3s kubelet Node robot-dd9f6aaa status is now: NodeHasNoDiskPressure
</code></pre>
| dofmind | <ol>
<li>Delete the /etc/docker/daemon.json file and reboot</li>
<li>Install the CNI plugin binaries into the /opt/cni/bin directory (see the example commands after this list):
<a href="https://github.com/containernetworking/plugins/releases/download/v0.8.7/cni-plugins-linux-arm64-v0.8.7.tgz" rel="nofollow noreferrer">https://github.com/containernetworking/plugins/releases/download/v0.8.7/cni-plugins-linux-arm64-v0.8.7.tgz</a></li>
</ol>
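<p>For step 2, a minimal sketch of the commands (run as root on the arm64 worker; the URL is the release linked above):</p>
<pre><code># create the CNI plugin directory and extract the arm64 release into it
mkdir -p /opt/cni/bin
curl -L https://github.com/containernetworking/plugins/releases/download/v0.8.7/cni-plugins-linux-arm64-v0.8.7.tgz \
  | tar -xz -C /opt/cni/bin
</code></pre>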
| dofmind |
<p>I'm containerizing an existing application and I need basic auth for a single path prefix. Currently I have the following Ingress configuration:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
name: service-auth
spec:
basicAuth:
secret: service-auth
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: service
namespace: default
annotations:
kubernetes.io/ingress.class: "traefik"
cert-manager.io/issuer: "letsencrypt-prod"
traefik.ingress.kubernetes.io/frontend-entry-points: http, https
traefik.ingress.kubernetes.io/redirect-entry-point: https
spec:
tls:
- hosts:
- fqdn
secretName: fqdn-tls
rules:
- host: fqdn
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: service
port:
name: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: service-auth
namespace: default
annotations:
kubernetes.io/ingress.class: "traefik"
cert-manager.io/issuer: "letsencrypt-prod"
traefik.ingress.kubernetes.io/frontend-entry-points: http, https
traefik.ingress.kubernetes.io/redirect-entry-point: https
traefik.ingress.kubernetes.io/router.middlewares: default-service-auth@kubernetescrd
spec:
tls:
- hosts:
- fqdn
secretName: fqdn-tls
rules:
- host: fqdn
http:
paths:
- path: /admin/
pathType: Prefix
backend:
service:
name: service
port:
name: http
</code></pre>
<p>This seems to be working, but I just want to make sure - can I rely on the <code>/admin/</code> prefix to be always picked up by the second ingress or is there a chance that it will be picked up by the ingress with <code>/</code> prefix and thus displayed without basicauth?</p>
| Zv0n | <p>As you can read in <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">this documentation</a>:</p>
<p>Each path in Ingress must have the appropriate path type. Paths without an explicit <code>pathType</code> will not be validated. There are three supported path types:</p>
<blockquote>
<p><code>ImplementationSpecific</code>: With this path type, matching is up to the IngressClass. Implementations can treat this as a separate <code>pathType</code> or treat it identically to <code>Prefix</code> or <code>Exact</code> path types.</p>
<p><code>Exact</code>: Matches the URL path exactly and with case sensitivity.</p>
<p><strong><code>Prefix</code>: Matches based on a URL path prefix split by <code>/</code>. Matching is case sensitive and done on a path element by element basis. A path element refers to the list of labels in the path split by the <code>/</code> separator. A request is a match for path <em>p</em> if every <em>p</em> is an element-wise prefix of <em>p</em> of the request path.</strong></p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#examples" rel="nofollow noreferrer">Here</a> is also a link for examples in documentation.</p>
| kkopczak |
<p>I've recently started a proof-of-concept to extend our Airflow to use the KubernetesPodOperator to spin up a pod in our Kubernetes environment, which also hosts our Airflow. This all works; however, I've noticed that the logs we get contain the info for running the task instance and the task instance success, but the stdout from the container is not captured in the log files.</p>
<p>I can access this information if I set the KubernetesPodOperator to leave the pod and then I can do a kubectl logs from the container and get the stdout information.</p>
<p>Example Log Output:</p>
<pre><code>[2020-11-17 03:09:16,604] {{taskinstance.py:670}} INFO - Dependencies all met for <TaskInstance: alex_kube_test.passing-task 2020-11-17T02:50:00+00:00 [queued]>
[2020-11-17 03:09:16,632] {{taskinstance.py:670}} INFO - Dependencies all met for <TaskInstance: alex_kube_test.passing-task 2020-11-17T02:50:00+00:00 [queued]>
[2020-11-17 03:09:16,632] {{taskinstance.py:880}} INFO -
--------------------------------------------------------------------------------
[2020-11-17 03:09:16,632] {{taskinstance.py:881}} INFO - Starting attempt 2 of 3
[2020-11-17 03:09:16,632] {{taskinstance.py:882}} INFO -
--------------------------------------------------------------------------------
[2020-11-17 03:09:16,650] {{taskinstance.py:901}} INFO - Executing <Task(KubernetesPodOperator): passing-task> on 2020-11-17T02:50:00+00:00
[2020-11-17 03:09:16,652] {{standard_task_runner.py:54}} INFO - Started process 1380 to run task
[2020-11-17 03:09:16,669] {{standard_task_runner.py:77}} INFO - Running: ['airflow', 'run', 'alex_kube_test', 'passing-task', '2020-11-17T02:50:00+00:00', '--job_id', '113975', '--pool', 'default_pool', '--raw', '-sd', 'DAGS_FOLDER/alex_kube_test.py', '--cfg_path', '/tmp/tmpmgyu498h']
[2020-11-17 03:09:16,670] {{standard_task_runner.py:78}} INFO - Job 113975: Subtask passing-task
[2020-11-17 03:09:16,745] {{logging_mixin.py:112}} INFO - Running %s on host %s <TaskInstance: alex_kube_test.passing-task 2020-11-17T02:50:00+00:00 [running]> airflow-worker-686849bf86-bpq4w
[2020-11-17 03:09:16,839] {{logging_mixin.py:112}} WARNING - /usr/local/lib/python3.6/site-packages/urllib3/connection.py:395: SubjectAltNameWarning: Certificate for us-east-1-services-kubernetes-private.vevodev.com has no `subjectAltName`, falling back to check for a `commonName` for now. This feature is being removed by major browsers and deprecated by RFC 2818. (See https://github.com/urllib3/urllib3/issues/497 for details.)
SubjectAltNameWarning,
[2020-11-17 03:09:16,851] {{logging_mixin.py:112}} WARNING - /usr/local/lib/python3.6/site-packages/airflow/kubernetes/pod_launcher.py:330: DeprecationWarning: Using `airflow.contrib.kubernetes.pod.Pod` is deprecated. Please use `k8s.V1Pod`.
security_context=_extract_security_context(pod.spec.security_context)
[2020-11-17 03:09:16,851] {{logging_mixin.py:112}} WARNING - /usr/local/lib/python3.6/site-packages/airflow/kubernetes/pod_launcher.py:77: DeprecationWarning: Using `airflow.contrib.kubernetes.pod.Pod` is deprecated. Please use `k8s.V1Pod` instead.
pod = self._mutate_pod_backcompat(pod)
[2020-11-17 03:09:18,960] {{taskinstance.py:1070}} INFO - Marking task as SUCCESS.dag_id=alex_kube_test, task_id=passing-task, execution_date=20201117T025000, start_date=20201117T030916, end_date=20201117T030918
</code></pre>
<p>What the KubeCtl Logs output returns:</p>
<pre><code>uptime from procps-ng 3.3.10
</code></pre>
<p>Shouldn't this stdout be in the log if I have get_logs=True? How do I make sure that the logs capture the stdout of the container?</p>
| Alexander Yamashita | <p>I felt I had the same issue... but maybe not as you didn't mention if you were using a subdag (I'm using dag factories methodology). I was clicking on the dag task -> view logs in the UI. Since I was using a subdag for the first time I didn't realize I needed to zoom into it to view the logs.</p>
<p><img src="https://i.stack.imgur.com/HOL1j.png" alt="subdag zoom" /></p>
| Matt Peters |
<p>When I deploy IPFS-Cluster on Kubernetes, I get the following error (these are <code>ipfs-cluster</code> logs):</p>
<pre><code> error applying environment variables to configuration: error loading cluster secret from config: encoding/hex: invalid byte: U+00EF 'ï'
2022-01-04T10:23:08.103Z INFO service ipfs-cluster-service/daemon.go:47 Initializing. For verbose output run with "-l debug". Please wait...
2022-01-04T10:23:08.103Z ERROR config config/config.go:352 error reading the configuration file: open /data/ipfs-cluster/service.json: no such file or directory
error loading configurations: open /data/ipfs-cluster/service.json: no such file or directory
</code></pre>
<p>These are <code>initContainer</code> logs:</p>
<pre><code> + user=ipfs
+ mkdir -p /data/ipfs
+ chown -R ipfs /data/ipfs
+ '[' -f /data/ipfs/config ]
+ ipfs init '--profile=badgerds,server'
initializing IPFS node at /data/ipfs
generating 2048-bit RSA keypair...done
peer identity: QmUHmdhauhk7zdj5XT1zAa6BQfrJDukysb2PXsCQ62rBdS
to get started, enter:
ipfs cat /ipfs/QmS4ustL54uo8FzR9455qaxZwuMiUhyvMcX9Ba8nUH4uVv/readme
+ ipfs config Addresses.API /ip4/0.0.0.0/tcp/5001
+ ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080
+ ipfs config --json Swarm.ConnMgr.HighWater 2000
+ ipfs config --json Datastore.BloomFilterSize 1048576
+ ipfs config Datastore.StorageMax 100GB
</code></pre>
<p>These are <code>ipfs</code> container logs:</p>
<pre><code> Changing user to ipfs
ipfs version 0.4.18
Found IPFS fs-repo at /data/ipfs
Initializing daemon...
go-ipfs version: 0.4.18-aefc746
Repo version: 7
System version: amd64/linux
Golang version: go1.11.1
Error: open /data/ipfs/config: permission denied
Received interrupt signal, shutting down...
(Hit ctrl-c again to force-shutdown the daemon.)
</code></pre>
<p>The following is my kubernetes yaml file:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: ipfs-cluster
spec:
serviceName: ipfs-cluster
replicas: 3
selector:
matchLabels:
app: ipfs-cluster
template:
metadata:
labels:
app: ipfs-cluster
spec:
initContainers:
- name: configure-ipfs
image: "ipfs/go-ipfs:v0.4.18"
command: ["sh", "/custom/configure-ipfs.sh"]
volumeMounts:
- name: ipfs-storage
mountPath: /data/ipfs
- name: configure-script
mountPath: /custom/entrypoint.sh
subPath: entrypoint.sh
- name: configure-script-2
mountPath: /custom/configure-ipfs.sh
subPath: configure-ipfs.sh
containers:
- name: ipfs
image: "ipfs/go-ipfs:v0.4.18"
imagePullPolicy: IfNotPresent
env:
- name: IPFS_FD_MAX
value: "4096"
ports:
- name: swarm
protocol: TCP
containerPort: 4001
- name: swarm-udp
protocol: UDP
containerPort: 4002
- name: api
protocol: TCP
containerPort: 5001
- name: ws
protocol: TCP
containerPort: 8081
- name: http
protocol: TCP
containerPort: 8080
livenessProbe:
tcpSocket:
port: swarm
initialDelaySeconds: 30
timeoutSeconds: 5
periodSeconds: 15
volumeMounts:
- name: ipfs-storage
mountPath: /data/ipfs
- name: configure-script
mountPath: /custom
resources:
{}
- name: ipfs-cluster
image: "ipfs/ipfs-cluster:latest"
imagePullPolicy: IfNotPresent
command: ["sh", "/custom/entrypoint.sh"]
envFrom:
- configMapRef:
name: env-config
env:
- name: BOOTSTRAP_PEER_ID
valueFrom:
configMapRef:
name: env-config
key: bootstrap-peer-id
- name: BOOTSTRAP_PEER_PRIV_KEY
valueFrom:
secretKeyRef:
name: secret-config
key: bootstrap-peer-priv-key
- name: CLUSTER_SECRET
valueFrom:
secretKeyRef:
name: secret-config
key: cluster-secret
- name: CLUSTER_MONITOR_PING_INTERVAL
value: "3m"
- name: SVC_NAME
value: $(CLUSTER_SVC_NAME)
ports:
- name: api-http
containerPort: 9094
protocol: TCP
- name: proxy-http
containerPort: 9095
protocol: TCP
- name: cluster-swarm
containerPort: 9096
protocol: TCP
livenessProbe:
tcpSocket:
port: cluster-swarm
initialDelaySeconds: 5
timeoutSeconds: 5
periodSeconds: 10
volumeMounts:
- name: cluster-storage
mountPath: /data/ipfs-cluster
- name: configure-script
mountPath: /custom/entrypoint.sh
subPath: entrypoint.sh
resources:
{}
volumes:
- name: configure-script
configMap:
name: ipfs-cluster-set-bootstrap-conf
- name: configure-script-2
configMap:
name: configura-ipfs
volumeClaimTemplates:
- metadata:
name: cluster-storage
spec:
storageClassName: gp2
accessModes: ["ReadWriteOnce"]
persistentVolumeReclaimPolicy: Retain
resources:
requests:
storage: 5Gi
- metadata:
name: ipfs-storage
spec:
storageClassName: gp2
accessModes: ["ReadWriteOnce"]
persistentVolumeReclaimPolicy: Retain
resources:
requests:
storage: 200Gi
---
kind: Secret
apiVersion: v1
metadata:
name: secret-config
namespace: weex-ipfs
annotations:
kubesphere.io/creator: tom
data:
bootstrap-peer-priv-key: >-
UTBGQlUzQjNhM2RuWjFOcVFXZEZRVUZ2U1VKQlVVTjBWbVpUTTFwck9ETkxVWEZNYzJFemFGWlZaV2xKU0doUFZGRTBhRmhrZVhCeFJGVmxVbmR6Vmt4Nk9IWndZ...
cluster-secret: 7d4c019035beb7da7275ea88315c39b1dd9fdfaef017596550ffc1ad3fdb556f
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
name: env-config
namespace: weex-ipfs
annotations:
kubesphere.io/creator: tom
data:
bootstrap-peer-id: QmWgEHZEmJhuoDgFmBKZL8VtpMEqRArqahuaX66cbvyutP
---
kind: ConfigMap
apiVersion: v1
metadata:
name: ipfs-cluster-set-bootstrap-conf
namespace: weex-ipfs
annotations:
kubesphere.io/creator: tom
data:
entrypoint.sh: |2
#!/bin/sh
user=ipfs
# This is a custom entrypoint for k8s designed to connect to the bootstrap
# node running in the cluster. It has been set up using a configmap to
# allow changes on the fly.
if [ ! -f /data/ipfs-cluster/service.json ]; then
ipfs-cluster-service init
fi
PEER_HOSTNAME=`cat /proc/sys/kernel/hostname`
grep -q ".*ipfs-cluster-0.*" /proc/sys/kernel/hostname
if [ $? -eq 0 ]; then
CLUSTER_ID=${BOOTSTRAP_PEER_ID} \
CLUSTER_PRIVATEKEY=${BOOTSTRAP_PEER_PRIV_KEY} \
exec ipfs-cluster-service daemon --upgrade
else
BOOTSTRAP_ADDR=/dns4/${SVC_NAME}-0/tcp/9096/ipfs/${BOOTSTRAP_PEER_ID}
if [ -z $BOOTSTRAP_ADDR ]; then
exit 1
fi
# Only ipfs user can get here
exec ipfs-cluster-service daemon --upgrade --bootstrap $BOOTSTRAP_ADDR --leave
fi
---
kind: ConfigMap
apiVersion: v1
metadata:
name: configura-ipfs
namespace: weex-ipfs
annotations:
kubesphere.io/creator: tom
data:
configure-ipfs.sh: >-
#!/bin/sh
set -e
set -x
user=ipfs
# This is a custom entrypoint for k8s designed to run ipfs nodes in an
appropriate
# setup for production scenarios.
mkdir -p /data/ipfs && chown -R ipfs /data/ipfs
if [ -f /data/ipfs/config ]; then
if [ -f /data/ipfs/repo.lock ]; then
rm /data/ipfs/repo.lock
fi
exit 0
fi
ipfs init --profile=badgerds,server
ipfs config Addresses.API /ip4/0.0.0.0/tcp/5001
ipfs config Addresses.Gateway /ip4/0.0.0.0/tcp/8080
ipfs config --json Swarm.ConnMgr.HighWater 2000
ipfs config --json Datastore.BloomFilterSize 1048576
ipfs config Datastore.StorageMax 100GB
</code></pre>
<p>I followed the <a href="https://cluster.ipfs.io/documentation/guides/k8s/" rel="nofollow noreferrer">official steps</a> to build this.</p>
<p>Following the official steps, I used the command below to generate the cluster-secret:</p>
<pre><code>$ od -vN 32 -An -tx1 /dev/urandom | tr -d ' \n' | base64 -w 0 -
</code></pre>
<p>But I get:</p>
<pre><code> error applying environment variables to configuration: error loading cluster secret from config: encoding/hex: invalid byte: U+00EF 'ï'
</code></pre>
<p>I saw the same problem in the official <a href="https://github.com/ipfs/ipfs-cluster/issues/1005" rel="nofollow noreferrer">github issue</a>. So using the <code>openssl rand -hex</code> command is not OK either.</p>
| Jason Tom | <p>To clarify, I am posting a community wiki answer.</p>
<hr />
<p>To solve the following error:</p>
<pre><code>no such file or directory
</code></pre>
<p>you used <code>runAsUser: 0</code>.</p>
<hr />
<p>The second error:</p>
<pre><code>error applying environment variables to configuration: error loading cluster secret from config: encoding/hex: invalid byte: U+00EF 'ï'
</code></pre>
<p>was caused by using an encoding other than hex for <code>CLUSTER_SECRET</code>.</p>
<p>According to <a href="https://rossbulat.medium.com/using-ipfs-cluster-service-for-global-ipfs-data-persistence-69a260a0711c" rel="nofollow noreferrer">this page</a>:</p>
<blockquote>
<h4>The Cluster Secret Key</h4>
<p>The secret key of a cluster is a 32-bit <em><strong>hex encoded</strong></em> random string, of which <em>every cluster peer needs in their</em> <code>_service.json_</code> <em>configuration</em>.</p>
<p>A secret key can be generated and predefined in the <code>CLUSTER_SECRET</code> environment variable, and will subsequently be used upon running <code>ipfs-cluster-service init</code>.</p>
</blockquote>
<p><a href="https://github.com/ipfs/ipfs-cluster/issues/1005" rel="nofollow noreferrer">Here</a> is link to solved issue.</p>
<hr />
<p>See also:</p>
<ul>
<li><a href="https://cluster.ipfs.io/documentation/reference/configuration/" rel="nofollow noreferrer">Configuration reference</a></li>
<li><a href="https://labs.eleks.com/2019/03/ipfs-network-data-replication.html" rel="nofollow noreferrer">IPFS Tutorial</a></li>
<li>Documentation guide <a href="https://cluster.ipfs.io/documentation/guides/security/#the-cluster-secret" rel="nofollow noreferrer">Security and ports</a></li>
</ul>
| kkopczak |
<p>I'm migrating an architecture to Kubernetes and I'd like to use the HAProxy ingress controller that I'm installing with Helm, according to the documentation (version 1.3).</p>
<p>The thing is, when I'm defining path rules through an Ingress file, I can't define <em><strong>Regex</strong></em> or <em><strong>Begin</strong></em> path types as seen in the documentation here: <a href="https://haproxy-ingress.github.io/docs/configuration/keys/#path-type" rel="nofollow noreferrer">https://haproxy-ingress.github.io/docs/configuration/keys/#path-type</a>.
My configuration file:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: bo-ingress
annotations:
haproxy.org/path-rewrite: "/"
kubernetes.io/ingress.class: haproxy
spec:
rules:
- host: foo.com
http:
paths:
- path: /
        pathType: Begin
backend:
service:
name: foo-service
port:
number: 80
</code></pre>
<p>When I install my ingress Helm chart with this configuration, I get the following error message:</p>
<pre><code>Error: UPGRADE FAILED: cannot patch "bo-ingress" with kind Ingress: Ingress.extensions "bo-ingress" is invalid: spec.rules[2].http.paths[12].pathType: Unsupported value: "Begin": supported values: "Exact", "ImplementationSpecific", "Prefix"
</code></pre>
<p>Am I missing something ? Is this feature only available for Enterprise plan ?</p>
<p>Thanks, Greg</p>
| GregOs | <p>HAProxy Ingress follows “<a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress v1 spec</a>”, so any Ingress spec configuration should work as stated by the Kubernetes documentation.</p>
<p>As per the Kubernetes documentation, the supported path types are <strong>ImplementationSpecific, Exact and Prefix</strong>. Paths that do not include an explicit pathType will fail validation. The path type <strong>Begin</strong> is not supported according to the Kubernetes documentation, so use one of those <strong>3 types</strong>.</p>
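<p>For example, a minimal sketch of the same rule from the question using a supported path type:</p>
<pre><code>      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: foo-service
            port:
              number: 80
</code></pre>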
<p>For more information on kubernetes supported path types refer to the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types" rel="nofollow noreferrer">documentation</a>.</p>
| Chandra Kiran Pasumarti |
<p>I am trying to create a k8s pod with a docker container image from a private insecure registry. With the latest K8s, I get ErrImagePull as it complains of http vs https for the insecure registry.</p>
<pre><code> Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7s default-scheduler Successfully assigned imagename to xxxx
  Normal   Pulling    7s    kubelet            Pulling image "registry:5000/imagename:v1"
  Warning  Failed     6s    kubelet            Failed to pull image "registry:5000/imagename:v1": rpc error: code = Unknown desc = failed to pull and unpack image "registry:5000/imagename:v1": failed to resolve reference "registry:5000/imagename:v1": failed to do request: Head "https://registry:5000/v2/imagename/manifests/v1": http: server gave HTTP response to HTTPS client
  Warning  Failed     6s    kubelet            Error: ErrImagePull
  Normal   BackOff    6s    kubelet            Back-off pulling image "registry:5000/imagename:v1"
Warning Failed 6s kubelet Error: ImagePullBackOff
</code></pre>
<p>Before the CRI changes for K8s (i.e. <a href="https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/" rel="nofollow noreferrer">https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/</a>), this has worked for me when I used to have insecure registry configuration in /etc/docker/daemon.json, however with the new changes in K8s, I am trying to understand what is the right configuration needed here.</p>
<p>On the same node, I am able to pull the image from the insecure registry successfully with "docker pull imagename" (since I have the /etc/docker/daemon.json configuration for the insecure registry), and I have also verified with the containerd command "ctr -i pull --plain-http imagename".</p>
<p>What configuration is needed for this to work in a pod.yaml for me to pull this image via “kubectl create -f pod.yaml”. It's just a simple pod.yaml with the image, nothing fancy.</p>
<p>I saw a post on creating secret key for private registry (<a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a>), but that requires registry authentication token to create a key. I just tried using /etc/docker/daemon.json to create a regcred, but when I used it in imagePullSecrets in pod.yaml, k8s was still complaining of the same http vs https error.</p>
<p>My /etc/docker/daemon.json</p>
<pre><code>{
"insecure-registries": ["registry:5000"]
}
</code></pre>
<p>I have a new install of K8s, and containerd is the CRI.</p>
<p>Thank you for your help.</p>
| UbflPM | <p>I faced a similar problem recently about not being able to pull images from an insecure private docker registry using containerd only. I will post my solution here in case it works for your question too. Steps below show the details of how I solved it on Ubuntu Server 20.04 LTS:</p>
<pre><code>$ containerd --version
containerd containerd.io 1.6.4 212e8b6fa2f44b9c21b2798135fc6fb7c53efc16
</code></pre>
<p>An insecure private Docker registry is running at 17.5.20.23:5000.</p>
<p>The file <code>/etc/containerd/config.toml</code> gets created automatically when you install docker using <code>.deb</code> packages in ubuntu looks as follows:</p>
<pre><code># Copyright 2018-2022 Docker Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#disabled_plugins = ["cri"]
#root = "/var/lib/containerd"
#state = "/run/containerd"
#subreaper = true
#oom_score = 0
#[grpc]
# address = "/run/containerd/containerd.sock"
# uid = 0
# gid = 0
#[debug]
# address = "/run/containerd/debug.sock"
# uid = 0
# gid = 0
# level = "info"
</code></pre>
<p>In my first few attempts I was editing this file (which is created automatically) by simply adding the appropriate lines mentioned at <a href="https://stackoverflow.com/questions/65681045/adding-insecure-registry-in-containerd">Adding insecure registry in containerd</a> at the end of the file and restarting containerd. This made the file look as follows:</p>
<pre><code># Copyright 2018-2022 Docker Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#disabled_plugins = ["cri"]
#root = "/var/lib/containerd"
#state = "/run/containerd"
#subreaper = true
#oom_score = 0
#[grpc]
# address = "/run/containerd/containerd.sock"
# uid = 0
# gid = 0
#[debug]
# address = "/run/containerd/debug.sock"
# uid = 0
# gid = 0
# level = "info"
[plugins."io.containerd.grpc.v1.cri".registry]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."17.5.20.23:5000"]
endpoint = ["http://17.5.20.23:5000"]
[plugins."io.containerd.grpc.v1.cri".registry.configs]
[plugins."io.containerd.grpc.v1.cri".registry.configs."17.5.20.23:5000".tls]
insecure_skip_verify = true
</code></pre>
<p>This did not work for me. To know why, I checked the configurations with which containerd was running (after <code>/etc/containerd/config.toml</code> was edited) using:</p>
<pre><code>$ sudo containerd config dump
</code></pre>
<p>The output of the above command is shown below:</p>
<pre><code>disabled_plugins = []
imports = ["/etc/containerd/config.toml"]
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2
[cgroup]
path = ""
[debug]
address = ""
format = ""
gid = 0
level = ""
uid = 0
[grpc]
address = "/run/containerd/containerd.sock"
gid = 0
max_recv_message_size = 16777216
max_send_message_size = 16777216
tcp_address = ""
tcp_tls_ca = ""
tcp_tls_cert = ""
tcp_tls_key = ""
uid = 0
[metrics]
address = ""
grpc_histogram = false
[plugins]
[plugins."io.containerd.gc.v1.scheduler"]
deletion_threshold = 0
mutation_threshold = 100
pause_threshold = 0.02
schedule_delay = "0s"
startup_delay = "100ms"
[plugins."io.containerd.internal.v1.opt"]
path = "/opt/containerd"
[plugins."io.containerd.internal.v1.restart"]
interval = "10s"
[plugins."io.containerd.internal.v1.tracing"]
sampling_ratio = 1.0
service_name = "containerd"
[plugins."io.containerd.metadata.v1.bolt"]
content_sharing_policy = "shared"
[plugins."io.containerd.monitor.v1.cgroups"]
no_prometheus = false
[plugins."io.containerd.runtime.v1.linux"]
no_shim = false
runtime = "runc"
runtime_root = ""
shim = "containerd-shim"
shim_debug = false
[plugins."io.containerd.runtime.v2.task"]
platforms = ["linux/amd64"]
sched_core = false
[plugins."io.containerd.service.v1.diff-service"]
default = ["walking"]
[plugins."io.containerd.service.v1.tasks-service"]
rdt_config_file = ""
[plugins."io.containerd.snapshotter.v1.aufs"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.btrfs"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.devmapper"]
async_remove = false
base_image_size = ""
discard_blocks = false
fs_options = ""
fs_type = ""
pool_name = ""
root_path = ""
[plugins."io.containerd.snapshotter.v1.native"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.overlayfs"]
root_path = ""
upperdir_label = false
[plugins."io.containerd.snapshotter.v1.zfs"]
root_path = ""
[plugins."io.containerd.tracing.processor.v1.otlp"]
endpoint = ""
insecure = false
protocol = ""
[proxy_plugins]
[stream_processors]
[stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
path = "ctd-decoder"
returns = "application/vnd.oci.image.layer.v1.tar"
[stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
path = "ctd-decoder"
returns = "application/vnd.oci.image.layer.v1.tar+gzip"
[timeouts]
"io.containerd.timeout.bolt.open" = "0s"
"io.containerd.timeout.shim.cleanup" = "5s"
"io.containerd.timeout.shim.load" = "5s"
"io.containerd.timeout.shim.shutdown" = "3s"
"io.containerd.timeout.task.state" = "2s"
[ttrpc]
address = ""
gid = 0
uid = 0
</code></pre>
<p>In the above output I noticed that the configurations I was trying to add by editing the <code>/etc/containerd/config.toml</code> were actually not there. So somehow containerd was not accepting the added configurations. To fix this I decided to start from scratch by generating a full configuration file and editing it appropriately (according to instructions at <a href="https://stackoverflow.com/questions/65681045/adding-insecure-registry-in-containerd">Adding insecure registry in containerd</a>).</p>
<p>First took a backup of the current containerd configuration file:</p>
<pre><code>$ sudo su
$ cd /etc/containerd/
$ mv config.toml config_bkup.toml
</code></pre>
<p>Then generated a fresh full configuration file:</p>
<pre><code>$ containerd config default > config.toml
</code></pre>
<p>This generated a file that looked as follows:</p>
<pre><code>disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2
[cgroup]
path = ""
[debug]
address = ""
format = ""
gid = 0
level = ""
uid = 0
[grpc]
address = "/run/containerd/containerd.sock"
gid = 0
max_recv_message_size = 16777216
max_send_message_size = 16777216
tcp_address = ""
tcp_tls_ca = ""
tcp_tls_cert = ""
tcp_tls_key = ""
uid = 0
[metrics]
address = ""
grpc_histogram = false
[plugins]
[plugins."io.containerd.gc.v1.scheduler"]
deletion_threshold = 0
mutation_threshold = 100
pause_threshold = 0.02
schedule_delay = "0s"
startup_delay = "100ms"
[plugins."io.containerd.grpc.v1.cri"]
device_ownership_from_security_context = false
disable_apparmor = false
disable_cgroup = false
disable_hugetlb_controller = true
disable_proc_mount = false
disable_tcp_service = true
enable_selinux = false
enable_tls_streaming = false
enable_unprivileged_icmp = false
enable_unprivileged_ports = false
ignore_image_defined_volumes = false
max_concurrent_downloads = 3
max_container_log_line_size = 16384
netns_mounts_under_state_dir = false
restrict_oom_score_adj = false
sandbox_image = "k8s.gcr.io/pause:3.6"
selinux_category_range = 1024
stats_collect_period = 10
stream_idle_timeout = "4h0m0s"
stream_server_address = "127.0.0.1"
stream_server_port = "0"
systemd_cgroup = false
tolerate_missing_hugetlb_controller = true
unset_seccomp_profile = ""
[plugins."io.containerd.grpc.v1.cri".cni]
bin_dir = "/opt/cni/bin"
conf_dir = "/etc/cni/net.d"
conf_template = ""
ip_pref = ""
max_conf_num = 1
[plugins."io.containerd.grpc.v1.cri".containerd]
default_runtime_name = "runc"
disable_snapshot_annotations = true
discard_unpacked_layers = false
ignore_rdt_not_enabled_errors = false
no_pivot = false
snapshotter = "overlayfs"
[plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
base_runtime_spec = ""
cni_conf_dir = ""
cni_max_conf_num = 0
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = ""
runtime_path = ""
runtime_root = ""
runtime_type = ""
[plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
base_runtime_spec = ""
cni_conf_dir = ""
cni_max_conf_num = 0
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = ""
runtime_path = ""
runtime_root = ""
runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
BinaryName = ""
CriuImagePath = ""
CriuPath = ""
CriuWorkPath = ""
IoGid = 0
IoUid = 0
NoNewKeyring = false
NoPivotRoot = false
Root = ""
ShimCgroup = ""
SystemdCgroup = false
[plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
base_runtime_spec = ""
cni_conf_dir = ""
cni_max_conf_num = 0
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = ""
runtime_path = ""
runtime_root = ""
runtime_type = ""
[plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]
[plugins."io.containerd.grpc.v1.cri".image_decryption]
key_model = "node"
[plugins."io.containerd.grpc.v1.cri".registry]
config_path = ""
[plugins."io.containerd.grpc.v1.cri".registry.auths]
[plugins."io.containerd.grpc.v1.cri".registry.configs]
[plugins."io.containerd.grpc.v1.cri".registry.headers]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
tls_cert_file = ""
tls_key_file = ""
[plugins."io.containerd.internal.v1.opt"]
path = "/opt/containerd"
[plugins."io.containerd.internal.v1.restart"]
interval = "10s"
[plugins."io.containerd.internal.v1.tracing"]
sampling_ratio = 1.0
service_name = "containerd"
[plugins."io.containerd.metadata.v1.bolt"]
content_sharing_policy = "shared"
[plugins."io.containerd.monitor.v1.cgroups"]
no_prometheus = false
[plugins."io.containerd.runtime.v1.linux"]
no_shim = false
runtime = "runc"
runtime_root = ""
shim = "containerd-shim"
shim_debug = false
[plugins."io.containerd.runtime.v2.task"]
platforms = ["linux/amd64"]
sched_core = false
[plugins."io.containerd.service.v1.diff-service"]
default = ["walking"]
[plugins."io.containerd.service.v1.tasks-service"]
rdt_config_file = ""
[plugins."io.containerd.snapshotter.v1.aufs"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.btrfs"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.devmapper"]
async_remove = false
base_image_size = ""
discard_blocks = false
fs_options = ""
fs_type = ""
pool_name = ""
root_path = ""
[plugins."io.containerd.snapshotter.v1.native"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.overlayfs"]
root_path = ""
upperdir_label = false
[plugins."io.containerd.snapshotter.v1.zfs"]
root_path = ""
[plugins."io.containerd.tracing.processor.v1.otlp"]
endpoint = ""
insecure = false
protocol = ""
[proxy_plugins]
[stream_processors]
[stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
path = "ctd-decoder"
returns = "application/vnd.oci.image.layer.v1.tar"
[stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
path = "ctd-decoder"
returns = "application/vnd.oci.image.layer.v1.tar+gzip"
[timeouts]
"io.containerd.timeout.bolt.open" = "0s"
"io.containerd.timeout.shim.cleanup" = "5s"
"io.containerd.timeout.shim.load" = "5s"
"io.containerd.timeout.shim.shutdown" = "3s"
"io.containerd.timeout.task.state" = "2s"
[ttrpc]
address = ""
gid = 0
uid = 0
</code></pre>
<p>Then edited the above file to look as follows (the edited lines have been appended with the comment '# edited line'):</p>
<pre><code>disabled_plugins = []
imports = ["/etc/containerd/config.toml"] # edited line
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
temp = ""
version = 2
[cgroup]
path = ""
[debug]
address = ""
format = ""
gid = 0
level = ""
uid = 0
[grpc]
address = "/run/containerd/containerd.sock"
gid = 0
max_recv_message_size = 16777216
max_send_message_size = 16777216
tcp_address = ""
tcp_tls_ca = ""
tcp_tls_cert = ""
tcp_tls_key = ""
uid = 0
[metrics]
address = ""
grpc_histogram = false
[plugins]
[plugins."io.containerd.gc.v1.scheduler"]
deletion_threshold = 0
mutation_threshold = 100
pause_threshold = 0.02
schedule_delay = "0s"
startup_delay = "100ms"
[plugins."io.containerd.grpc.v1.cri"]
device_ownership_from_security_context = false
disable_apparmor = false
disable_cgroup = false
disable_hugetlb_controller = true
disable_proc_mount = false
disable_tcp_service = true
enable_selinux = false
enable_tls_streaming = false
enable_unprivileged_icmp = false
enable_unprivileged_ports = false
ignore_image_defined_volumes = false
max_concurrent_downloads = 3
max_container_log_line_size = 16384
netns_mounts_under_state_dir = false
restrict_oom_score_adj = false
sandbox_image = "17.5.20.23:5000/pause-amd64:3.0" #edited line
selinux_category_range = 1024
stats_collect_period = 10
stream_idle_timeout = "4h0m0s"
stream_server_address = "127.0.0.1"
stream_server_port = "0"
systemd_cgroup = false
tolerate_missing_hugetlb_controller = true
unset_seccomp_profile = ""
[plugins."io.containerd.grpc.v1.cri".cni]
bin_dir = "/opt/cni/bin"
conf_dir = "/etc/cni/net.d"
conf_template = ""
ip_pref = ""
max_conf_num = 1
[plugins."io.containerd.grpc.v1.cri".containerd]
default_runtime_name = "runc"
disable_snapshot_annotations = true
discard_unpacked_layers = false
ignore_rdt_not_enabled_errors = false
no_pivot = false
snapshotter = "overlayfs"
[plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
base_runtime_spec = ""
cni_conf_dir = ""
cni_max_conf_num = 0
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = ""
runtime_path = ""
runtime_root = ""
runtime_type = ""
[plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
base_runtime_spec = ""
cni_conf_dir = ""
cni_max_conf_num = 0
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = ""
runtime_path = ""
runtime_root = ""
runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
BinaryName = ""
CriuImagePath = ""
CriuPath = ""
CriuWorkPath = ""
IoGid = 0
IoUid = 0
NoNewKeyring = false
NoPivotRoot = false
Root = ""
ShimCgroup = ""
SystemdCgroup = true # edited line
[plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
base_runtime_spec = ""
cni_conf_dir = ""
cni_max_conf_num = 0
container_annotations = []
pod_annotations = []
privileged_without_host_devices = false
runtime_engine = ""
runtime_path = ""
runtime_root = ""
runtime_type = ""
[plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]
[plugins."io.containerd.grpc.v1.cri".image_decryption]
key_model = "node"
[plugins."io.containerd.grpc.v1.cri".registry]
config_path = ""
[plugins."io.containerd.grpc.v1.cri".registry.auths]
[plugins."io.containerd.grpc.v1.cri".registry.configs]
[plugins."io.containerd.grpc.v1.cri".registry.configs."17.5.20.23:5000"] # edited line
[plugins."io.containerd.grpc.v1.cri".registry.configs."17.5.20.23:5000".tls] # edited line
ca_file = "" # edited line
cert_file = "" # edited line
insecure_skip_verify = true # edited line
key_file = "" # edited line
[plugins."io.containerd.grpc.v1.cri".registry.headers]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."17.5.20.23:5000"] # edited line
endpoint = ["http://17.5.20.23:5000"] # edited line
[plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
tls_cert_file = ""
tls_key_file = ""
[plugins."io.containerd.internal.v1.opt"]
path = "/opt/containerd"
[plugins."io.containerd.internal.v1.restart"]
interval = "10s"
[plugins."io.containerd.internal.v1.tracing"]
sampling_ratio = 1.0
service_name = "containerd"
[plugins."io.containerd.metadata.v1.bolt"]
content_sharing_policy = "shared"
[plugins."io.containerd.monitor.v1.cgroups"]
no_prometheus = false
[plugins."io.containerd.runtime.v1.linux"]
no_shim = false
runtime = "runc"
runtime_root = ""
shim = "containerd-shim"
shim_debug = false
[plugins."io.containerd.runtime.v2.task"]
platforms = ["linux/amd64"]
sched_core = false
[plugins."io.containerd.service.v1.diff-service"]
default = ["walking"]
[plugins."io.containerd.service.v1.tasks-service"]
rdt_config_file = ""
[plugins."io.containerd.snapshotter.v1.aufs"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.btrfs"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.devmapper"]
async_remove = false
base_image_size = ""
discard_blocks = false
fs_options = ""
fs_type = ""
pool_name = ""
root_path = ""
[plugins."io.containerd.snapshotter.v1.native"]
root_path = ""
[plugins."io.containerd.snapshotter.v1.overlayfs"]
root_path = ""
upperdir_label = false
[plugins."io.containerd.snapshotter.v1.zfs"]
root_path = ""
[plugins."io.containerd.tracing.processor.v1.otlp"]
endpoint = ""
insecure = false
protocol = ""
[proxy_plugins]
[stream_processors]
[stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
path = "ctd-decoder"
returns = "application/vnd.oci.image.layer.v1.tar"
[stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
path = "ctd-decoder"
returns = "application/vnd.oci.image.layer.v1.tar+gzip"
[timeouts]
"io.containerd.timeout.bolt.open" = "0s"
"io.containerd.timeout.shim.cleanup" = "5s"
"io.containerd.timeout.shim.load" = "5s"
"io.containerd.timeout.shim.shutdown" = "3s"
"io.containerd.timeout.task.state" = "2s"
[ttrpc]
address = ""
gid = 0
uid = 0
</code></pre>
<p>Then I restarted containerd</p>
<pre><code>$ systemctl restart containerd
</code></pre>
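<p>Before pulling an image, it can be worth re-checking that the edited settings actually made it into the running configuration this time (the same check that revealed the problem earlier). A minimal sketch; the grep pattern is only an illustration:</p>
<pre><code>$ systemctl status containerd --no-pager
$ sudo containerd config dump | grep -A 4 "17.5.20.23"
</code></pre>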
<p>Finally I tried pulling an image from the private registry using <code>crictl</code> which pulled it successfully:</p>
<pre><code>$ crictl -r unix:///var/run/containerd/containerd.sock pull 17.5.20.23:5000/nginx:latest
Image is up to date for sha256:0e901e68141fd02f237cf63eb842529f8a9500636a9419e3cf4fb986b8fe3d5d
</code></pre>
| SimpleProgrammer |
<p>I'm looking for a way to quickly run/restart a Job/Pod from the command line and override the command to be executed in the created container.</p>
<p>For context, I have a Kubernetes Job that gets executed as a part of our deploy process. Sometimes that Job crashes and I need to run certain commands <em>inside the container the Job creates</em> to debug and fix the problem (subsequent Jobs then succeed).</p>
<p>The way I have done this so far is:</p>
<ul>
<li>Copy the YAML of the Job, save into a file</li>
<li>Clean up the YAML (delete Kubernetes-managed fields)</li>
<li>Change the <code>command:</code> field to <code>tail -f /dev/null</code> (so that the container stays alive)</li>
<li><code>kubectl apply -f job.yaml && kubectl get all && kubectl exec -ti pod/foobar bash</code></li>
<li>Run commands inside the container</li>
<li><code>kubectl delete job/foobar</code> when I am done</li>
</ul>
<p>This is very tedious. I am looking for a way to do something like the following</p>
<pre><code>kubectl restart job/foobar --command "tail -f /dev/null"
# or even better
kubectl run job/foobar --exec --interactive bash
</code></pre>
<hr />
<p>I cannot use the <code>run</code> command to create a Pod:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl run --image xxx -ti
</code></pre>
<p>because the Job I am trying to restart has certain <code>volumeMounts</code> and other configuration I need to reuse. So I would need something like <code>kubectl run --from-config job/foobar</code>.</p>
<hr />
<p>Is there a way to achieve this or am I stuck with juggling the YAML definition file?</p>
<hr />
<p>Edit: the Job YAML looks approx. like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: Job
metadata:
name: database-migrations
labels:
app: myapp
service: myapp-database-migrations
spec:
backoffLimit: 0
template:
metadata:
labels:
app: myapp
service: myapp-database-migrations
spec:
restartPolicy: Never
containers:
- name: migrations
image: registry.example.com/myapp:977b44c9
command:
- "bash"
- "-c"
- |
set -e -E
echo "Running database migrations..."
do-migration-stuff-here
echo "Migrations finished at $(date)"
imagePullPolicy: Always
volumeMounts:
- mountPath: /home/example/myapp/app/config/conf.yml
name: myapp-config-volume
subPath: conf.yml
- mountPath: /home/example/myapp/.env
name: myapp-config-volume
subPath: .env
volumes:
- name: myapp-config-volume
configMap:
name: myapp
imagePullSecrets:
- name: k8s-pull-project
</code></pre>
| Martin Melka | <p>The commands you suggested don't exist. Take a look at <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands" rel="noreferrer">this reference</a> where you can find all available commands.</p>
<p>Based on <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="noreferrer">that documentation</a>, the task of the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="noreferrer"><em>Job</em></a> is to create one or more Pods and keep retrying their execution until the specified number of successful completions is reached. The <em>Job</em> then tracks the successful completions. You cannot simply update the Job because these fields are not updatable. To do what you want, you should delete the current Job and create it again.</p>
<hr />
<p>I recommend keeping all your configurations in files. If you have a problem with configuring job commands, common practice says you should modify these settings in YAML and apply them to the cluster; by storing the configuration in files you also have a backup if your deployment crashes.</p>
<p>If you are interested in how to improve this task, you can try the two examples described below:</p>
<p>Firstly I've created several files:</p>
<p>example job (<code>job.yaml</code>):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: Job
metadata:
name: test1
spec:
template:
spec:
containers:
- name: test1
image: busybox
command: ["/bin/sh", "-c", "sleep 300"]
volumeMounts:
- name: foo
mountPath: "/script/foo"
volumes:
- name: foo
configMap:
name: my-conf
defaultMode: 0755
restartPolicy: OnFailure
</code></pre>
<p><code>patch-job.yaml</code> (referenced by the plugin script below):</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
template:
spec:
containers:
- name: test1
image: busybox
command: ["/bin/sh", "-c", "echo 'patching test' && sleep 500"]
</code></pre>
<p>and <code>configmap.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: my-conf
data:
test: |
#!/bin/sh
echo "skrypt test"
</code></pre>
<hr />
<ol>
<li>If you want to automate this process you can use a <a href="https://kubernetes.io/docs/tasks/extend-kubectl/kubectl-plugins/" rel="noreferrer"><code>plugin</code></a></li>
</ol>
<blockquote>
<p>A plugin is a standalone executable file, whose name begins with <code>kubectl-</code>. To install a plugin, move its executable file to anywhere on your <code>PATH</code>.</p>
<p>There is no plugin installation or pre-loading required. Plugin executables receive the inherited environment from the <code>kubectl</code> binary. A plugin determines which command path it wishes to implement based on its name.</p>
</blockquote>
<p>Here is the file that can replace your job</p>
<blockquote>
<p>A plugin determines the command path that it will implement based on its filename.</p>
</blockquote>
<p><code>kubectl-job</code>:</p>
<pre class="lang-sh prettyprint-override"><code>#!/bin/bash
kubectl patch -f job.yaml -p "$(cat patch-job.yaml)" --dry-run=client -o yaml | kubectl replace --force -f - && kubectl wait --for=condition=ready pod -l job-name=test1 && kubectl exec -it $(kubectl get pod -l job-name=test1 --no-headers -o custom-columns=":metadata.name") -- /bin/sh
</code></pre>
<p>This command uses an additional file (<code>patch-job.yaml</code>, see this <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/update-api-object-kubectl-patch/" rel="noreferrer">link</a>) - within we can put our changes for <code>job</code>.</p>
<p>Then you should change the permissions of this file and move it:</p>
<pre><code>sudo chmod +x ./kubectl-job
sudo mv ./kubectl-job /usr/local/bin
</code></pre>
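<p>To confirm that kubectl discovers the new plugin on your <code>PATH</code>, you can list the installed plugins (this is a standard kubectl subcommand):</p>
<pre><code>kubectl plugin list
</code></pre>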
<p>It's all done. Right now you can use it.</p>
<pre><code>$ kubectl job
job.batch "test1" deleted
job.batch/test1 replaced
pod/test1-bdxtm condition met
pod/test1-nh2pv condition met
/ #
</code></pre>
<p>As you can see <code>Job</code> has been replaced (deleted and created).</p>
<hr />
<ol start="2">
<li>You can also use a single-line command; here is an example:</li>
</ol>
<pre><code>kubectl get job test1 -o json | jq "del(.spec.selector)" | jq "del(.spec.template.metadata.labels)" | kubectl patch -f - --patch '{"spec": {"template": {"spec": {"containers": [{"name": "test1", "image": "busybox", "command": ["/bin/sh", "-c", "sleep 200"]}]}}}}' --dry-run=client -o yaml | kubectl replace --force -f -
</code></pre>
<p>With this command you can change your Job by entering the parameters "by hand". Here is the output:</p>
<pre><code>job.batch "test1" deleted
job.batch/test1 replaced
</code></pre>
<p>As you can see this solution works as well.</p>
| kkopczak |
<p>I'm trying to deploy <code>mongo</code> in <code>Kubernetes</code>, but before I run the <code>mongo</code> itself, it should do some prerequisites in init containers.</p>
<p>This is the list of <code>configMaps</code></p>
<pre><code>mongo-auth-env 4 16m
mongo-config 1 16m
mongo-config-env 7 16m
mongo-scripts 10 16m
</code></pre>
<p><code>StatefulSet</code> looks like:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongo
labels:
component: mongo
spec:
selector:
matchLabels:
component: mongo
serviceName: mongo
replicas: 1
template:
metadata:
labels:
component: mongo
spec:
initContainers:
- name: mongo-init
image: curlimages/curl:latest
volumeMounts:
- mountPath: /mongodb/mongodb-config.sh
name: mongo-config
subPath: mongodb-config.sh
- mountPath: /mongo/scripts
name: mongo-scripts
containers:
- name: mongo
image: bitnami/mongodb:latest
command: [ "/bin/sh", "-c" ]
args:
- /scripts/mongo-run.sh
livenessProbe:
exec:
command:
- '[ -f /data/health.check ] && exit 0 || exit 1'
failureThreshold: 300
periodSeconds: 2
timeoutSeconds: 60
ports:
- containerPort: 27017
imagePullPolicy: Always
volumeMounts:
- name: mongo-persistent-storage
mountPath: /data/db
- name: mongo-scripts
mountPath: /mongo/scripts
env:
- name: MONGO_USER_APP_NAME
valueFrom:
configMapKeyRef:
key: MONGO_USER_APP_NAME
name: mongo-auth-env
- name: MONGO_USER_APP_PASSWORD
valueFrom:
configMapKeyRef:
key: MONGO_USER_APP_PASSWORD
name: mongo-auth-env
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
configMapKeyRef:
key: MONGO_USER_ROOT_NAME
name: mongo-auth-env
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
configMapKeyRef:
key: MONGO_USER_ROOT_PASSWORD
name: mongo-auth-env
- name: MONGO_WIREDTIGER_CACHE_SIZE
valueFrom:
configMapKeyRef:
key: MONGO_WIREDTIGER_CACHE_SIZE
name: mongo-config-env
restartPolicy: Always
volumes:
- name: mongo-scripts
configMap:
name: mongo-scripts
defaultMode: 0777
- name: mongo-config
configMap:
name: mongo-config
defaultMode: 0777
volumeClaimTemplates:
- metadata:
name: mongo-persistent-storage
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
</code></pre>
<p>Pod description:</p>
<pre><code>Init Containers:
mongo-init:
Container ID: docker://9a4c20c9b67470af03ee4f60a24eabc428ecafd3875b398ac52a54fe3b2b7b96
Image: curlimages/curl:latest
Image ID: docker-pullable://curlimages/curl@sha256:d588ff348c251f8e4d1b2053125c34d719a98ff3ef20895c49684b3743995073
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Fri, 19 Nov 2021 02:04:16 +0100
Finished: Fri, 19 Nov 2021 02:04:16 +0100
Ready: False
Restart Count: 8
Environment: <none>
Mounts:
/mongo/scripts from mongo-scripts (rw)
/mongodb/mongodb-config.sh from mongo-config (rw,path="mongodb-config.sh")
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9jwkz (ro)
Containers:
mongo:
Container ID:
Image: mongo:4.2.12-bionic
Image ID:
Port: 27017/TCP
Host Port: 0/TCP
Command:
/bin/sh
-c
Args:
/scripts/mongo-run.sh
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Liveness: exec [[ -f /data/health.check ] && exit 0 || exit 1] delay=0s timeout=60s period=2s #success=1 #failure=300
Environment:
MONGO_USER_APP_NAME: <set to the key 'MONGO_USER_APP_NAME' of config map 'mongo-auth-env'> Optional: false
MONGO_USER_APP_PASSWORD: <set to the key 'MONGO_USER_APP_PASSWORD' of config map 'mongo-auth-env'> Optional: false
MONGO_INITDB_ROOT_USERNAME: <set to the key 'MONGO_USER_ROOT_NAME' of config map 'mongo-auth-env'> Optional: false
MONGO_INITDB_ROOT_PASSWORD: <set to the key 'MONGO_USER_ROOT_PASSWORD' of config map 'mongo-auth-env'> Optional: false
MONGO_WIREDTIGER_CACHE_SIZE: <set to the key 'MONGO_WIREDTIGER_CACHE_SIZE' of config map 'mongo-config-env'> Optional: false
Mounts:
/data/db from mongo-persistent-storage (rw)
/mongo/scripts from mongo-scripts (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9jwkz (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
mongo-persistent-storage:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: mongo-persistent-storage-mongo-0
ReadOnly: false
mongo-scripts:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: mongo-scripts
Optional: false
mongo-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: mongo-config
Optional: false
</code></pre>
<p>After all files/folders are mounted, there is a <code>run.sh</code> script in the <code>scripts</code> directory which should spin things up, but I can't get to that part because the init container is crashing. Is there any other way of doing this, I tried with <code>Jobs</code> but without success. Am I missing something obvious again, because I can see <code>configMaps</code> in the description, but unfortunately can't get logs because Pod is always initializing, and can't exec into the container because of the same reason? I followed some ideas from <a href="https://github.com/markpeterfejes/mongo-repl-init-container" rel="nofollow noreferrer">this repo</a>. Any input would be appreciated.</p>
| dejanmarich | <p>To clarify, I am posting a Community Wiki answer.</p>
<p>The problem here was a small mistake - an incorrect path (<code>/scripts/mongo-run.sh</code>); the correct one is <code>/mongodb/scripts/mongo-run.sh</code>.</p>
<blockquote>
<p>@dejanmarich: the correct path is /mongodb/scripts/mongo-run.sh, I just missed that, fixed now</p>
</blockquote>
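<p>For anyone hitting a similar <code>Init:CrashLoopBackOff</code>, the quickest way to see why an init container fails is to read its logs and the pod events directly; a generic sketch (replace the pod name and namespace with yours):</p>
<pre><code># logs of the failing init container (add --previous for the prior attempt)
kubectl logs mongo-0 -c mongo-init -n <namespace>
kubectl logs mongo-0 -c mongo-init -n <namespace> --previous

# events and container statuses
kubectl describe pod mongo-0 -n <namespace>
</code></pre>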
| kkopczak |
<p>I deployed the OpenVPN server in the K8S cluster and deployed the OpenVPN client on a host outside the cluster. However, when I use client access, I can only access the POD on the host where the OpenVPN server is located, but cannot access the POD on other hosts in the cluster.
The network used by the cluster is Calico. I also added the following iptables rules to the openVPN server host in the cluster:</p>
<p>I found that I did not receive the packets back when I captured traffic on tun0 on the server.</p>
| yong.zhang | <p>When the server is deployed on the host network, a FORWARD rule is missing from the iptables rules on that host.</p>
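<p>A minimal sketch of the kind of rules that are typically needed on the VPN server host; the tunnel interface <code>tun0</code>, the OpenVPN client subnet <code>10.8.0.0/24</code> (OpenVPN's default) and the pod CIDR <code>192.168.0.0/16</code> (Calico's default) are all assumptions to adjust to your environment:</p>
<pre><code># allow forwarding between the VPN tunnel and the pod network
iptables -A FORWARD -i tun0 -j ACCEPT
iptables -A FORWARD -o tun0 -j ACCEPT

# NAT VPN client traffic going to the pod network so replies return via this node
iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -d 192.168.0.0/16 -j MASQUERADE
</code></pre>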
| yong.zhang |
<p>I have created an EKS cluster using the eksctl command line tool and verified that the application is working fine.</p>
<p>But I am noticing a strange issue: when I try to access the nodes in the cluster in the web browser I see the following error</p>
<pre><code>Error loading Namespaces
Unauthorized: Verify you have access to the Kubernetes cluster
</code></pre>
<p><a href="https://i.stack.imgur.com/J6igl.png" rel="noreferrer"><img src="https://i.stack.imgur.com/J6igl.png" alt="enter image description here" /></a></p>
<p>I am able to see the nodes using <code>kubectl get nodes</code></p>
<p>I am logged in as the admin user. Any help on how to workaround this would be really great. Thanks.</p>
| opensource-developer | <p>You will need to add your IAM role/user to your cluster's aws-auth config map</p>
<p>Basic steps to follow taken from <a href="https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html" rel="noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html</a></p>
<pre class="lang-sh prettyprint-override"><code>kubectl edit -n kube-system configmap/aws-auth
</code></pre>
<pre><code># Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
mapRoles: |
- rolearn: <arn:aws:iam::111122223333:role/eksctl-my-cluster-nodegroup-standard-wo-NodeInstanceRole-1WP3NUE3O6UCF>
username: <system:node:{{EC2PrivateDNSName}}>
groups:
- <system:bootstrappers>
- <system:nodes>
mapUsers: |
- userarn: <arn:aws:iam::111122223333:user/admin>
username: <admin>
groups:
- <system:masters>
- userarn: <arn:aws:iam::111122223333:user/ops-user>
username: <ops-user>
groups:
- <system:masters>
</code></pre>
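<p>Since the cluster was created with eksctl, the same mapping can also be added without hand-editing the ConfigMap; a sketch where the cluster name, region and ARN are placeholders:</p>
<pre class="lang-sh prettyprint-override"><code>eksctl create iamidentitymapping \
  --cluster my-cluster \
  --region us-east-1 \
  --arn arn:aws:iam::111122223333:user/admin \
  --username admin \
  --group system:masters
</code></pre>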
| Carlos Perea |
<p>After setting up my Kubernetes cluster on GCP I used the command <strong>kubectl scale deployment superappip --replicas=30</strong> from the Google console to scale my deployments, but what should be added to my deployment file myip-service.yaml to do the same?</p>
| babebort | <p>The following is an example of a Deployment. It creates a ReplicaSet to bring up three nginx Pods; the <code>spec.replicas</code> field is what controls the number of Pods, so setting it in your manifest achieves the same thing as <code>kubectl scale</code>.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
</code></pre>
<p>You can read more <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment" rel="nofollow noreferrer">here</a>.</p>
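<p>As a usage sketch, save the manifest above (the file name <code>deployment.yaml</code> is just an assumption), set <code>replicas</code> to the number you want, and apply it:</p>
<pre><code># after editing replicas in the manifest
kubectl apply -f deployment.yaml
kubectl get deployment nginx-deployment
</code></pre>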
| Nikhil |
<p>I've been using <strong>Docker Desktop for Windows</strong> for a while and recently I updated to the latest version (<em>3.5.1</em>), but now I'm having problems with Kubernetes because the update bumped the <strong>client version</strong> (<em>1.21.2</em>) while the <strong>server version</strong> was not updated and remains on version (<em>1.19.7</em>).</p>
<p>How can I update the server version to avoid the conflicts that K8s faces when the versions between client and server are more than 1 version different?</p>
| Alejandro | <p>To solve this problem I decided to install Minikube; after that, all those problems were solved. Thanks everyone</p>
| Alejandro |
<p><strong>Summary in one sentence</strong></p>
<p>I want to deploy Mattermost locally on a Kubernetes cluster using Minikube. I'm using Minikube v1.23.2 and Kubernetes v1.22.2</p>
<p><strong>Steps to reproduce</strong></p>
<p>I used this tutorial and the Github documentation:</p>
<ul>
<li><a href="https://mattermost.com/blog/how-to-get-started-with-mattermost-on-kubernetes-in-just-a-few-minutes/" rel="nofollow noreferrer">https://mattermost.com/blog/how-to-get-started-with-mattermost-on-kubernetes-in-just-a-few-minutes/</a></li>
<li><a href="https://github.com/mattermost/mattermost-operator/tree/v1.15.0" rel="nofollow noreferrer">https://github.com/mattermost/mattermost-operator/tree/v1.15.0</a></li>
</ul>
<ol>
<li>To start minikube: <code>minikube start</code></li>
<li>To start ingress; <code>minikube addons enable ingress</code></li>
<li>In the Github documentation they state that you need to install Custom Resources by running: <code>kubectl apply -f ./config/crd/bases</code></li>
<li>Afterwards I followed step 4 to 9 from the first tutorial I noted above (without step 8)</li>
</ol>
<p><strong>Observed behavior</strong></p>
<p>Unfortunately I keep getting the following error in the mattermost-operator: <code>no matches for kind "Ingress" in version "networking.k8s.io/v1beta1"</code>.</p>
<p>See below for mattermost-operator logs:</p>
<pre class="lang-yaml prettyprint-override"><code>time="2021-10-13T15:56:47Z" level=info msg="[opr] Go Version: go1.16.3"
time="2021-10-13T15:56:47Z" level=info msg="[opr] Go OS/Arch: linux/amd64"
time="2021-10-13T15:56:47Z" level=info msg="[opr.controller-runtime.metrics] metrics server is starting to listen" addr="0.0.0.0:8383"
time="2021-10-13T15:56:47Z" level=info msg="[opr] Registering Components"
time="2021-10-13T15:56:47Z" level=info msg="[opr] Starting manager"
I1013 15:56:47.972667 1 leaderelection.go:243] attempting to acquire leader lease mattermost-operator/b78a986e.mattermost.com...
time="2021-10-13T15:56:47Z" level=info msg="[opr.controller-runtime.manager] starting metrics server" path=/metrics
time="2021-10-13T15:57:05Z" level=info msg="[opr.controller-runtime.manager.events] Normal" message="mattermost-operator-74bc664c46-866xd_4b3f7a90-923b-4b0a-ab01-4a1ebe955088 became leader" object="{ConfigMap mattermost-operator b78a986e.mattermost.com c7462714-0cda-4e03-9765-a2e8599fdbcf v1 11389 }" reason=LeaderElection
time="2021-10-13T15:57:05Z" level=info msg="[opr.controller-runtime.manager.events] Normal" message="mattermost-operator-74bc664c46-866xd_4b3f7a90-923b-4b0a-ab01-4a1ebe955088 became leader" object="{Lease mattermost-operator b78a986e.mattermost.com ffe4104b-efa9-42da-8d16-5a85766366e3 coordination.k8s.io/v1 11390 }" reason=LeaderElection
I1013 15:57:05.433562 1 leaderelection.go:253] successfully acquired lease mattermost-operator/b78a986e.mattermost.com
time="2021-10-13T15:57:05Z" level=info msg="[opr.controller-runtime.manager.controller.mattermost] Starting EventSource" reconciler group=installation.mattermost.com reconciler kind=Mattermost source="kind source: /, Kind="
time="2021-10-13T15:57:05Z" level=info msg="[opr.controller-runtime.manager.controller.mattermostrestoredb] Starting EventSource" reconciler group=mattermost.com reconciler kind=MattermostRestoreDB source="kind source: /, Kind="
time="2021-10-13T15:57:05Z" level=info msg="[opr.controller-runtime.manager.controller.clusterinstallation] Starting EventSource" reconciler group=mattermost.com reconciler kind=ClusterInstallation source="kind source: /, Kind="
E1013 15:57:05.460380 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"b78a986e.mattermost.com.16ada23da3ddb598", GenerateName:"", Namespace:"mattermost-operator", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"ConfigMap", Namespace:"mattermost-operator", Name:"b78a986e.mattermost.com", UID:"c7462714-0cda-4e03-9765-a2e8599fdbcf", APIVersion:"v1", ResourceVersion:"11389", FieldPath:""}, Reason:"LeaderElection", Message:"mattermost-operator-74bc664c46-866xd_4b3f7a90-923b-4b0a-ab01-4a1ebe955088 became leader", Source:v1.EventSource{Component:"mattermost-operator-74bc664c46-866xd_4b3f7a90-923b-4b0a-ab01-4a1ebe955088", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc051de5459b4cb98, ext:17770579701, loc:(*time.Location)(0x2333fe0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc051de5459b4cb98, ext:17770579701, loc:(*time.Location)(0x2333fe0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:serviceaccount:mattermost-operator:mattermost-operator" cannot create resource "events" in API group "" in the namespace "mattermost-operator"' (will not retry!)
E1013 15:57:05.469732 1 event.go:264] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"b78a986e.mattermost.com.16ada23da3fa25ac", GenerateName:"", Namespace:"mattermost-operator", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Lease", Namespace:"mattermost-operator", Name:"b78a986e.mattermost.com", UID:"ffe4104b-efa9-42da-8d16-5a85766366e3", APIVersion:"coordination.k8s.io/v1", ResourceVersion:"11390", FieldPath:""}, Reason:"LeaderElection", Message:"mattermost-operator-74bc664c46-866xd_4b3f7a90-923b-4b0a-ab01-4a1ebe955088 became leader", Source:v1.EventSource{Component:"mattermost-operator-74bc664c46-866xd_4b3f7a90-923b-4b0a-ab01-4a1ebe955088", Host:""}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc051de5459d13bac, ext:17772215701, loc:(*time.Location)(0x2333fe0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc051de5459d13bac, ext:17772215701, loc:(*time.Location)(0x2333fe0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:serviceaccount:mattermost-operator:mattermost-operator" cannot create resource "events" in API group "" in the namespace "mattermost-operator"' (will not retry!)
time="2021-10-13T15:57:05Z" level=info msg="[opr.controller-runtime.manager.controller.mattermost] Starting EventSource" reconciler group=installation.mattermost.com reconciler kind=Mattermost source="kind source: /, Kind="
time="2021-10-13T15:57:05Z" level=info msg="[opr.controller-runtime.manager.controller.mattermostrestoredb] Starting Controller" reconciler group=mattermost.com reconciler kind=MattermostRestoreDB
time="2021-10-13T15:57:05Z" level=info msg="[opr.controller-runtime.manager.controller.clusterinstallation] Starting EventSource" reconciler group=mattermost.com reconciler kind=ClusterInstallation source="kind source: /, Kind="
time="2021-10-13T15:57:05Z" level=info msg="[opr.controller-runtime.manager.controller.mattermost] Starting EventSource" reconciler group=installation.mattermost.com reconciler kind=Mattermost source="kind source: /, Kind="
time="2021-10-13T15:57:05Z" level=info msg="[opr.controller-runtime.manager.controller.clusterinstallation] Starting EventSource" reconciler group=mattermost.com reconciler kind=ClusterInstallation source="kind source: /, Kind="
time="2021-10-13T15:57:05Z" level=info msg="[opr.controller-runtime.manager.controller.mattermostrestoredb] Starting workers" reconciler group=mattermost.com reconciler kind=MattermostRestoreDB worker count=1
time="2021-10-13T15:57:05Z" level=info msg="[opr.controller-runtime.manager.controller.mattermost] Starting EventSource" reconciler group=installation.mattermost.com reconciler kind=Mattermost source="kind source: /, Kind="
time="2021-10-13T15:57:05Z" level=info msg="[opr.controller-runtime.manager.controller.clusterinstallation] Starting EventSource" reconciler group=mattermost.com reconciler kind=ClusterInstallation source="kind source: /, Kind="
time="2021-10-13T15:57:05Z" level=error msg="[opr.controller-runtime.source] if kind is a CRD, it should be installed before calling Start" error="no matches for kind \"Ingress\" in version \"networking.k8s.io/v1beta1\"" kind=Ingress.networking.k8s.io
time="2021-10-13T15:57:05Z" level=info msg="[opr.controller-runtime.manager.controller.mattermostrestoredb] Stopping workers" reconciler group=mattermost.com reconciler kind=MattermostRestoreDB
I1013 15:57:07.096906 1 request.go:655] Throttling request took 1.0466297s, request: GET:https://10.96.0.1:443/apis/events.k8s.io/v1?timeout=32s
time="2021-10-13T15:57:07Z" level=error msg="[opr.controller-runtime.source] if kind is a CRD, it should be installed before calling Start" error="no matches for kind \"Ingress\" in version \"networking.k8s.io/v1beta1\"" kind=Ingress.networking.k8s.io
time="2021-10-13T15:57:07Z" level=error msg="[opr.controller-runtime.manager] error received after stop sequence was engaged" error="no matches for kind \"Ingress\" in version \"networking.k8s.io/v1beta1\""
time="2021-10-13T15:57:07Z" level=error msg="[opr.controller-runtime.manager] error received after stop sequence was engaged" error="leader election lost"
time="2021-10-13T15:57:07Z" level=error msg="[opr] Problem running manager" error="no matches for kind \"Ingress\" in version \"networking.k8s.io/v1beta1\""
</code></pre>
<p>Really tried many variations of implementations and changed the yml files but nothing seems to work. Anybody an idea?</p>
| Lucas Scheepers | <p>The <code>networking.k8s.io/v1beta1</code> Ingress API is removed as of Kubernetes v1.22 (it had been deprecated since v1.19), according to this <a href="https://kubernetes.io/docs/reference/using-api/deprecation-guide/#ingress-v122" rel="nofollow noreferrer">link</a>, which is a problem if the Mattermost installation defines and uses v1beta1 Ingress configurations (as the operator log above suggests it does).</p>
<p>Short of trying to change the existing Mattermost installation configuration, you may have more luck downgrading Minikube to v1.21.5.</p>
<pre><code>minikube start --kubernetes-version=v1.21.5
</code></pre>
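<p>Note that an existing v1.22 profile cannot simply be downgraded in place; deleting it first and re-creating it is the usual route, after which you can verify that the beta Ingress API is served again. A sketch:</p>
<pre><code>minikube delete
minikube start --kubernetes-version=v1.21.5
# on a 1.21 cluster this should list networking.k8s.io/v1beta1
kubectl api-versions | grep networking.k8s.io
</code></pre>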
| clarj |
<p>My deployment.yaml contains the following condition:</p>
<pre><code> {{- if or $.Values.env $.Values.envSecrets }}
env:
{{- range $key, $value := $.Values.env }}
- name: {{ $key }}
value: {{ $value | quote }}
{{- end }}
{{- range $key, $secret := $.Values.envSecrets }}
- name: {{ $key }}
valueFrom:
secretKeyRef:
name: {{ $secret }}
key: {{ $key | quote }}
{{- end }}
{{- end }}
</code></pre>
<p>If I pass the key like this: <code>helm install NAME nexus/stand --set env.server.servlet.context-path=/bpm/router-app</code>, then I don't get what I expect:</p>
<pre><code>Containers:
...
Environment:
server: map[servlet:map[context-path:/bpm/router-app]]
</code></pre>
<p>How can I get around this problem and get the environment like:</p>
<pre><code> Environment:
server.servlet.context-path: /bpm/router-app
</code></pre>
| Maksim | <p>Use double backslashes.</p>
<pre><code>helm install NAME nexus/stand --set env.server\\.servlet\\.context-path=/bpm/router-app
</code></pre>
<p>That is the equivalent of:</p>
<pre class="lang-yaml prettyprint-override"><code>env:
server.servlet.context-path: /bpm/router-app
</code></pre>
<p>This is particularly useful for annotations.</p>
<p>Alternatively you should be able to use quotes and single backslashes.</p>
<pre><code>helm install NAME nexus/stand --set 'env.server\.servlet\.context-path'=/bpm/router-app
</code></pre>
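<p>To check how the value is rendered before actually installing, a dry run shows the generated env section; a sketch using the same chart reference as above:</p>
<pre><code>helm install NAME nexus/stand \
  --set 'env.server\.servlet\.context-path'=/bpm/router-app \
  --dry-run --debug
</code></pre>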
| clarj |
<p>I've installed RabbitMQ on a Kubernetes Cluster using Helm as follows:</p>
<p><code>helm repo add bitnami https://charts.bitnami.com/bitnami</code></p>
<p><code>helm install my-release bitnami/rabbitmq-cluster-operator</code></p>
<p>Then I setup a Go Client something like this running as a service on the same Kubernetes Cluster</p>
<p><code>import amqp "github.com/rabbitmq/amqp091-go" </code></p>
<p><code>conn, err = amqp.Dial("amqp://guest:guest@localhost:5672/")</code></p>
<p>But the client fails to connect. How do I figure out what the host and port should be set to for RabbitMQ on a Kubernetes Cluster?</p>
| Adrian | <p>If your Go client is running as a microservice on the same cluster, you need to use the appropriate DNS record to access RabbitMQ; <code>localhost</code> just attempts to access the Go client pod itself.</p>
<p>In the namespace where RabbitMQ is installed, you can run <code>kubectl get svc</code> and there should be a ClusterIP service running with port 5672, likely called <code>my-release</code>.</p>
<p>You can then connect to it from any other service in the cluster with <code>my-release.NAMESPACE.svc.DOMAIN</code>.</p>
<p>The Helm release notes also show how to connect to the service, along with other helpful details like the authentication username and password and external access availability.</p>
<p><code>helm get notes my-release</code></p>
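<p>To double-check the exact service name and that it resolves from inside the cluster, a quick sketch (the <code>default</code> namespace is an assumption):</p>
<pre><code>kubectl get svc -n default
# resolve the service DNS name from a throwaway pod
kubectl run dns-test --rm -it --image=busybox --restart=Never -- \
  nslookup my-release.default.svc.cluster.local
</code></pre>
<p>The Go client would then dial something like <code>amqp://USER:PASS@my-release.default.svc.cluster.local:5672/</code>, using the credentials shown in the release notes.</p>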
| clarj |
<p>Running a k8s cronjob against an endpoint. The test works like a charm locally, and even when I <code>sleep infinity</code> at the end of my entrypoint and then curl inside the container. However, once the cron kicks off I get some funky error:</p>
<pre><code>[ec2-user@ip-10-122-8-121 device-purge]$ kubectl logs appgate-device-cron-job-1618411080-29lgt -n device-purge
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to 52.61.245.214:444
</code></pre>
<p>docker-entrypoint.sh</p>
<pre><code>#! /bin/sh
export api_vs_hd=$API_VS_HD
export controller_ip=$CONTROLLER_IP
export password=$PASSWORD
export uuid=$UUID
export token=$TOKEN
# should be logged in after token export
# Test API call: list users
curl -k -H "Content-Type: application/json" \
-H "$api_vs_hd" \
-H "Authorization: Bearer $token" \
-X GET \
https://$controller_ip:444/admin/license/users
# test
# sleep infinity
</code></pre>
<p>Dockerfile</p>
<pre><code>FROM harbor/privateop9/python38:latest
# Use root user for packages installation
USER root
# Install packages
RUN yum update -y && yum upgrade -y
# Install curl
RUN yum install curl -y \
&& curl --version
# Install zip/unzip/gunzip
RUN yum install zip unzip -y \
&& yum install gzip -y
# Install wget
RUN yum install wget -y
# Install jq
RUN wget -O jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
RUN chmod +x ./jq
RUN cp jq /usr/bin
# Install aws cli
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
RUN unzip awscliv2.zip
RUN ./aws/install
## set working directory
WORKDIR /home/app
# Add user
RUN groupadd --system user && adduser --system user --no-create-home --gid user
RUN chown -R user:user /home/app && chmod -R 777 /home/app
# Make sure that your shell script file is in the same folder as your dockerfile while running the docker build command as the below command will copy the file to the /home/root/ folder for execution
# COPY . /home/root/
COPY ./docker-entrypoint.sh /home/app
RUN chmod +x docker-entrypoint.sh
# Switch to non-root user
USER user
# Run service
ENTRYPOINT ["/home/app/docker-entrypoint.sh"]
</code></pre>
<p>Cronjob.yaml</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: device-cron-job
namespace: device-purge
spec:
#Cron Time is set according to server time, ensure server time zone and set accordingly.
schedule: "*/2 * * * *" # test
jobTemplate:
spec:
template:
spec:
imagePullSecrets:
- name: appgate-cron
containers:
- name: device-cron-pod
image: harbor/privateop9/python38:device-purge
env:
- name: API_VS_HD
value: "Accept:application/vnd.appgate.peer-v13+json"
- name: CONTROLLER_IP
value: "value"
- name: UUID
value: "value"
- name: TOKEN
value: >-
curl -H "Content-Type: application/json" -H "${api_vs_hd}" --request POST
--data "{\"providerName\":\"local\",\"username\":\"admin\",\"password\":\"$password\",\"deviceId\":\"$uuid\"}"
https://$controller_ip:444/admin/login --insecure | jq -r '.token'
- name: PASSWORD
valueFrom:
secretKeyRef:
name: password
key: password
imagePullPolicy: Always
restartPolicy: OnFailure
backoffLimit: 3
</code></pre>
<p>Please help! I am running out of ideas....</p>
| kddiji | <p>The issue in my post was on the server itself, caused by a firewall with IP whitelisting set up on the AWS cloud account. After that problem was addressed by the security team on the account, I was able to get past the blocker.</p>
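<p>For anyone debugging a similar <code>SSL_ERROR_SYSCALL</code> from inside a cluster, reproducing the call from a throwaway pod helps separate image problems from network/firewall problems; a generic sketch where the controller IP and path are placeholders:</p>
<pre><code>kubectl run curl-test -n device-purge --rm -it \
  --image=curlimages/curl:latest --restart=Never --command -- \
  curl -vk https://CONTROLLER_IP:444/admin/license/users
</code></pre>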
| kddiji |
<h1>What I wanna accomplish</h1>
<p>I'm trying to connect an external HTTPS (L7) load balancer with an NGINX Ingress exposed as a zonal Network Endpoint Group (NEG). My Kubernetes cluster (in GKE) contains a couple of web application deployments that I've exposed as a ClusterIP service.</p>
<p>I know that the NGINX Ingress object can be directly exposed as a TCP load balancer. But, this is not what I want. Instead in my architecture, I want to load balance the HTTPS requests with an external HTTPS load balancer. I want this external load balancer to provide SSL/TLS termination and forward HTTP requests to my Ingress resource.</p>
<p>The ideal architecture would look like this:</p>
<p>HTTPS requests --> external HTTPS load balancer --> HTTP request --> NGINX Ingress zonal NEG --> appropriate web application</p>
<p>I'd like to add the zonal NEGs from the NGINX Ingress as the backends for the HTTPS load balancer. This is where things fall apart.</p>
<h1>What I've done</h1>
<p><strong>NGINX Ingress config</strong></p>
<p>I'm using the default NGINX Ingress config from the official kubernetes/ingress-nginx project. Specifically, this YAML file <a href="https://github.com/kubernetes/ingress-nginx/blob/master/deploy/static/provider/cloud/deploy.yaml" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/master/deploy/static/provider/cloud/deploy.yaml</a>.
Note that, I've changed the NGINX-controller Service section as follows:</p>
<ul>
<li><p>Added NEG annotation</p>
</li>
<li><p>Changed the Service type from <code>LoadBalancer</code> to <code>ClusterIP</code>.</p>
</li>
</ul>
<pre><code># Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
annotations:
# added NEG annotation
cloud.google.com/neg: '{"exposed_ports": {"80":{"name": "NGINX_NEG"}}}'
labels:
helm.sh/chart: ingress-nginx-3.30.0
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/version: 0.46.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
type: ClusterIP
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
- name: https
port: 443
protocol: TCP
targetPort: https
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/component: controller
---
</code></pre>
<p><strong>NGINX Ingress routing</strong></p>
<p>I've tested the path based routing rules for the NGINX Ingress to my web applications independently. This works when the NGINX Ingress is configured with a TCP Load Balancer. I've set up my application Deployment and Service configs the usual way.</p>
<p><strong>External HTTPS Load Balancer</strong></p>
<p>I created an external HTTPS load balancer with the following settings:</p>
<ul>
<li>Backend: added the zonal NEGs named <code>NGINX_NEG</code> as the backends. The backend is configured to accept HTTP requests on port 80. I configured the health check on the serving port via the TCP protocol. I added the firewall rules to allow incoming traffic from <code>130.211.0.0/22</code> and <code>35.191.0.0/16</code> as mentioned here <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/standalone-neg#traffic_does_not_reach_the_endpoints" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/standalone-neg#traffic_does_not_reach_the_endpoints</a></li>
</ul>
<h1>What's not working</h1>
<p>Soon after the external load balancer is set up, I can see that GCP creates a new endpoint under one of the zonal NEGs. But this shows as "Unhealthy". Requests to the external HTTPS load balancer return a 502 error.</p>
<ul>
<li><p>I'm not sure where to start debugging this configuration in GCP logging. I've enabled logging for the health check but nothing shows up in the logs.</p>
</li>
<li><p>I configured the health check on the <code>/healthz</code> path of the NGINX Ingress controller. That didn't seem to work either.</p>
</li>
</ul>
<p>Any tips on how to get this to work will be much appreciated. Thanks!</p>
<p>Edit 1: As requested, I ran the <code>kubectl get svcneg -o yaml --namespace=<namespace></code>, here's the output</p>
<pre><code>apiVersion: networking.gke.io/v1beta1
kind: ServiceNetworkEndpointGroup
metadata:
creationTimestamp: "2021-05-07T19:04:01Z"
finalizers:
- networking.gke.io/neg-finalizer
generation: 418
labels:
networking.gke.io/managed-by: neg-controller
networking.gke.io/service-name: ingress-nginx-controller
networking.gke.io/service-port: "80"
name: NGINX_NEG
namespace: ingress-nginx
ownerReferences:
- apiVersion: v1
blockOwnerDeletion: false
controller: true
kind: Service
name: ingress-nginx-controller
uid: <unique ID>
resourceVersion: "2922506"
selfLink: /apis/networking.gke.io/v1beta1/namespaces/ingress-nginx/servicenetworkendpointgroups/NGINX_NEG
uid: <unique ID>
spec: {}
status:
conditions:
- lastTransitionTime: "2021-05-07T19:04:08Z"
message: ""
reason: NegInitializationSuccessful
status: "True"
type: Initialized
- lastTransitionTime: "2021-05-07T19:04:10Z"
message: ""
reason: NegSyncSuccessful
status: "True"
type: Synced
lastSyncTime: "2021-05-10T15:02:06Z"
networkEndpointGroups:
- id: <id1>
networkEndpointType: GCE_VM_IP_PORT
selfLink: https://www.googleapis.com/compute/v1/projects/<project>/zones/us-central1-a/networkEndpointGroups/NGINX_NEG
- id: <id2>
networkEndpointType: GCE_VM_IP_PORT
selfLink: https://www.googleapis.com/compute/v1/projects/<project>/zones/us-central1-b/networkEndpointGroups/NGINX_NEG
- id: <id3>
networkEndpointType: GCE_VM_IP_PORT
selfLink: https://www.googleapis.com/compute/v1/projects/<project>/zones/us-central1-f/networkEndpointGroups/NGINX_NEG
</code></pre>
| zerodark | <p>As per my understanding, your issue is - “when an external load balancer is set up, GCP creates a new endpoint under one of the zonal NEGs and it shows "Unhealthy" and requests to the external HTTPS load balancer which return a 502 error”.</p>
<p>Essentially, the Service's annotation, cloud.google.com/neg: '{"ingress": true}', enables container-native load balancing. After creating the Ingress, an HTTP(S) load balancer is created in the project, and NEGs are created in each zone in which the cluster runs. The endpoints in the NEG and the endpoints of the Service are kept in sync.
Refer to the link [1].</p>
<p>New endpoints generally become reachable after attaching them to the load balancer, provided that they respond to health checks. You might encounter 502 errors or rejected connections if traffic cannot reach the endpoints.</p>
<p>One of your endpoints in zonal NEG is showing unhealthy so please confirm the status of other endpoints and how many endpoints are spread across the zones in the backend.
If all backends are unhealthy, then your firewall, Ingress, or service might be misconfigured.</p>
<p>You can run the following command to check whether your endpoints are healthy (see link [2]):</p>
<pre><code>gcloud compute network-endpoint-groups list-network-endpoints NAME \
    --zone=ZONE
</code></pre>
<p>To troubleshoot traffic that is not reaching the endpoints, verify that health check firewall rules allow incoming TCP traffic to your endpoints from the 130.211.0.0/22 and 35.191.0.0/16 ranges. But as you mentioned, you have configured this rule correctly. Please refer to link [3] for the health check configuration.</p>
<p>Run a curl command against your LB IP to check for responses:</p>
<pre><code>curl http://[LB_IP]
</code></pre>
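<p>You can also ask the load balancer directly how it sees the NEG backends; the health state is reported per endpoint (the backend service name is a placeholder):</p>
<pre><code>gcloud compute backend-services get-health BACKEND_SERVICE_NAME --global
</code></pre>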
<p>[1] <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-xlb" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-xlb</a></p>
<p>[2] <a href="https://cloud.google.com/load-balancing/docs/negs/zonal-neg-concepts#troubleshooting" rel="nofollow noreferrer">https://cloud.google.com/load-balancing/docs/negs/zonal-neg-concepts#troubleshooting</a></p>
<p>[3] <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#health_checks" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#health_checks</a></p>
| Anant Swaraj |
<p>After deploying your pods, how one can identify that all the pods are up and running? I have listed down few options which I think could be correct but wanted to understand what is the standard way to identify the successful deployment.</p>
<ol>
<li>Connect to application via its interface and use it to identify if all the pods (cluster) are up (maybe good for stateful applications). For stateless applications pod is up should be enough.</li>
<li>Expose a Restful API service which monitors the deployment and responds accordingly.</li>
<li>Use <code>Kubectl</code> to connect to pods and get the status of pods and containers running.</li>
</ol>
<p>I think number 1 is the right way but wanted to understand community view on it.</p>
| Manish Khandelwal | <p>All your approaches sound reasonable and will do the job, but why not just use the tools that Kubernetes gives us exactly for this purpose? ;)</p>
<p>There are two main health check used by Kubernetes:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-command" rel="nofollow noreferrer">Liveness probe</a>- to know if container is running and working without issues (not hanged, not in deadlock state)</li>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-readiness-probes" rel="nofollow noreferrer">Readiness probe</a> - to know if container is able to accept more requests</li>
</ul>
<p>Worth to note there is also "<a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-startup-probes" rel="nofollow noreferrer">Startup probe</a>" which is responsible for protecting slow starting containers with difficult to estimate start time.</p>
<p><strong>Liveness:</strong></p>
<p>As mentioned earlier, the main goal of the liveness probe is to ensure that the container is not dead. If it is dead, Kubernetes removes the Pod and starts a new one.</p>
<p><strong>Readiness:</strong></p>
<p>The main goal of the readiness probe is to check if the container is able to handle additional traffic. In some cases, the container may be working but unable to accept traffic. You define readiness probes the same way as liveness probes, but the goal of this probe is to check if the application is able to answer several queries in a row within a reasonable time. If not, Kubernetes stops sending traffic to the pod until it passes the readiness probe.</p>
<p><strong>Implementation:</strong></p>
<p>You have a few ways to implement probes:</p>
<ul>
<li>run a command every specified period of time and check if it was done correctly - the return code is 0 (in <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-command" rel="nofollow noreferrer">this example</a>, the command <code>cat /tmp/healthy</code> is running every few seconds).</li>
<li>send an HTTP GET request to the container every specified period of time and check if it returns a success code (in <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-liveness-http-request" rel="nofollow noreferrer">this example</a>, Kubernetes is sending an HTTP request to the endpoint <code>/healthz</code> defined in the container).</li>
<li>attempt to open a TCP socket in the container every specified period of time and make sure that connection is established (in <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-a-tcp-liveness-probe" rel="nofollow noreferrer">this example</a>, Kubernetes is connecting to container on port 8080).</li>
</ul>
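<p>As a minimal sketch, a container combining an HTTP liveness probe with a TCP readiness probe could look like this (the image name, the <code>/healthz</code> path and port <code>8080</code> are assumptions about your application):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: probes-demo
spec:
  containers:
  - name: app
    image: my-app:latest      # assumed image name
    ports:
    - containerPort: 8080
    livenessProbe:            # restart the container if this keeps failing
      httpGet:
        path: /healthz        # assumed health endpoint
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:           # stop routing traffic to the Pod if this fails
      tcpSocket:
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
</code></pre>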
<p>For both probes you can define a few arguments:</p>
<blockquote>
<ul>
<li><code>initialDelaySeconds</code>: Number of seconds after the container has started before liveness or readiness probes are initiated. Defaults to 0 seconds. Minimum value is 0.</li>
<li><code>periodSeconds</code>: How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1.</li>
<li><code>timeoutSeconds</code>: Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1.</li>
<li><code>successThreshold</code>: Minimum consecutive successes for the probe to be considered successful after having failed. Defaults to 1. Must be 1 for liveness and startup Probes. Minimum value is 1.</li>
<li><code>failureThreshold</code>: When a probe fails, Kubernetes will try <code>failureThreshold</code> times before giving up. Giving up in case of liveness probe means restarting the container. In case of readiness probe the Pod will be marked Unready. Defaults to 3. Minimum value is 1.</li>
</ul>
</blockquote>
<p>Combining these two health checks will make sure that the application has been deployed and is working correctly: the liveness probe ensures that the pod is restarted when the container in it stops working, and the readiness probe ensures that traffic does not reach a pod with a not-ready or overloaded container. The proper functioning of the probes requires an appropriate choice of implementation method and argument values, most often found by trial and error. Check out this documentation:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">Configure Liveness, Readiness and Startup Probes - Kubernetes documentation</a></li>
<li><a href="https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-setting-up-health-checks-with-readiness-and-liveness-probes" rel="nofollow noreferrer">Kubernetes best practices: Setting up health checks with readiness and liveness probes - Google Cloud</a></li>
</ul>
| Mikolaj S. |
<p>It's my first time using Google Pub/Sub in production. We are using the Python client and deploying projects with Docker and k8s. The code snippet is basically the sample code provided by Google.</p>
<pre class="lang-py prettyprint-override"><code> os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = config["GOOGLE_APPLICATION_CREDENTIALS"]
project_id = "abc"
topic_id = "asd"
self._publisher = pubsub_v1.PublisherClient()
self._topic_path = self._publisher.topic_path(project_id, topic_id)
log_info_json = json.dumps({"id": 1})
publish_future = self._publisher.publish(self._topic_path, log_info_json.encode("utf-8"))
publish_future.result(timeout=5)
</code></pre>
<p>The code works fine locally; however, the k8s cluster has network constraints, and I think this is why we keep getting <code>TimeoutError</code> in production. I have already made <code>*.googleapis.com</code> and <code>*.google.com</code> accessible, but the error persists. I have also accessed the pod and ensured that all the URLs in the <code>GOOGLE_APPLICATION_CREDENTIALS</code> json file are reachable.</p>
| DYX | <p>@DYX, As you mentioned in the comment, this issue was caused by an unstable network, and when using the client library the publish call just hangs there forever.</p>
<p>This issue was resolved using <a href="https://cloud.google.com/pubsub/docs/reference/rest" rel="nofollow noreferrer">API call</a> instead of the client library.</p>
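<p>A rough sketch of such a REST call with <code>curl</code> (the project and topic IDs are taken from the question; obtaining the OAuth token via <code>gcloud</code> is an assumption, as inside a Pod you would mint a token from the service account credentials instead):</p>
<pre><code>ACCESS_TOKEN="$(gcloud auth application-default print-access-token)"
curl -s --max-time 5 -X POST \
  -H "Authorization: Bearer ${ACCESS_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"data": "'"$(echo -n '{"id": 1}' | base64)"'"}]}' \
  "https://pubsub.googleapis.com/v1/projects/abc/topics/asd:publish"
</code></pre>
<p>The explicit <code>--max-time</code> also makes the call fail fast instead of hanging when the network is flaky.</p>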
<p>If the issue persists, if you have a support plan you can create a new GCP <a href="https://cloud.google.com/support/" rel="nofollow noreferrer">support case</a>. Otherwise, you can open a new issue on the <a href="https://cloud.google.com/support/docs/issue-trackers" rel="nofollow noreferrer">issue tracker</a> describing your issue.</p>
<p>Posting the answer as community wiki for the benefit of the community that might encounter this use case in the future.</p>
<p>Feel free to edit this answer for additional information.</p>
| Prajna Rai T |
<p>EDIT:
It was a config error; I was setting the wrong Key Vault name :/</p>
<p>As said in title I'm facing an issue with secret creation using SecretProviderClass.</p>
<p>I've created my AKS cluster and my Key Vault (and filled it) on Azure, then proceeded to follow <a href="https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-driver" rel="nofollow noreferrer">those</a> steps using a user-assigned managed identity,</p>
<p>but no <code>secret</code> resource gets created and the pods get stuck on creation with a mount failure.</p>
<p>those are the steps I followed</p>
<pre><code>az extension add --name aks-preview
az extension update --name aks-preview
az aks enable-addons --addons azure-keyvault-secrets-provider -g $RESOURCE_GROUP -n $AKS_CLUSTER
az aks update -g $RESOURCE_GROUP -n $AKS_CLUSTER --enable-managed-identity --disable-secret-rotation
$AKS_ID = (az aks show -g $RESOURCE_GROUP -n $AKS_CLUSTER --query identityProfile.kubeletidentity.clientId -o tsv)
az keyvault set-policy -n $AZUREKEYVAULT --secret-permissions get --spn $AKS_ID
</code></pre>
<p>the SecretProviderClass manifest I'm using</p>
<pre><code>apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
name: azure-kvname
spec:
provider: azure
secretObjects:
- secretName: akvsecrets
type: Opaque
data:
- objectName: AzureSignalRConnectionString
key: AzureSignalRConnectionString
- objectName: BlobStorageConnectionString
key: BlobStorageConnectionString
- objectName: SqlRegistryConnectionString
key: SqlRegistryConnectionString
- objectName: TokenSymmetricKey
key: TokenSymmetricKey
parameters:
useVMManagedIdentity: "true"
userAssignedIdentityID: XXX # VMSS UserAssignedIdentity
keyvaultName: "sampleaks001" # the name of the KeyVault
objects: |
array:
- |
objectName: AzureSignalRConnectionString
objectType: secret
- |
objectName: BlobStorageConnectionString
objectType: secret
- |
objectName: SqlRegistryConnectionString
objectType: secret
- |
objectName: TokenSymmetricKey
objectType: secret
resourceGroup: sample # [REQUIRED for version < 0.0.4] the resource group of the KeyVault
subscriptionId: XXXX # [REQUIRED for version < 0.0.4] the subscription ID of the KeyVault
tenantId: XXX # the tenant ID of the KeyVault
</code></pre>
<p>and the deploy manifest</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: trm-api-test
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: trm-api-test
template:
metadata:
labels:
app: trm-api-test
spec:
nodeSelector:
"kubernetes.io/os": linux
containers:
- name: trm-api-test
image: nginx
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 250m
memory: 256Mi
ports:
- containerPort: 80
env:
- name: AzureSignalRConnectionString
valueFrom:
secretKeyRef:
name: akvsecrets
key: AzureSignalRConnectionString
- name: TokenSymmetricKey
valueFrom:
secretKeyRef:
name: akvsecrets
key: TokenSymmetricKey
- name: BlobStorageConnectionString
valueFrom:
secretKeyRef:
name: akvsecrets
key: BlobStorageConnectionString
- name: SqlRegistryConnectionString
valueFrom:
secretKeyRef:
name: akvsecrets
key: SqlRegistryConnectionString
volumeMounts:
- name: secrets-store-inline
mountPath: "/mnt/secrets-store"
readOnly: true
volumes:
- name: secrets-store-inline
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: "azure-kvname"
---
apiVersion: v1
kind: Service
metadata:
name: trm-api-service-test
namespace: default
spec:
type: ClusterIP
selector:
app: trm-api-test
ports:
- port: 80
targetPort: 80
protocol: TCP
</code></pre>
<p>I'm sure I'm missing something, but can't understand what.
Thanks in advance!</p>
| Michele Ietri | <p>You are using the clientId, but it should be the objectId from the kubelet identity:</p>
<pre><code>export KUBE_ID=$(az aks show -g <resource group> -n <aks cluster name> --query identityProfile.kubeletidentity.objectId -o tsv)
export AKV_ID=$(az keyvault show -g <resource group> -n <akv name> --query id -o tsv)
az role assignment create --assignee $KUBE_ID --role "Key Vault Secrets Officer" --scope $AKV_ID
</code></pre>
<p>This is a working SecretProviderClass I am using (adjusted to your config):</p>
<pre><code>apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
name: azure-kvname
spec:
provider: azure
secretObjects:
- data:
- objectName: AzureSignalRConnectionString
key: AzureSignalRConnectionString
- objectName: BlobStorageConnectionString
key: BlobStorageConnectionString
- objectName: SqlRegistryConnectionString
key: SqlRegistryConnectionString
- objectName: TokenSymmetricKey
key: TokenSymmetricKey
secretName: akvsecrets
type: Opaque
parameters:
usePodIdentity: "false"
useVMManagedIdentity: "true"
userAssignedIdentityID: XXX # Kubelet Client Id ( Nodepool Managed Idendity )
keyvaultName: "sampleaks001" # the name of the KeyVault
tenantId: XXX # the tenant ID of the KeyVault
objects: |
array:
- |
objectName: AzureSignalRConnectionString
objectAlias: AzureSignalRConnectionString
objectType: secret
- |
objectName: BlobStorageConnectionString
objectAlias: BlobStorageConnectionString
objectType: secret
- |
objectName: SqlRegistryConnectionString
objectAlias: SqlRegistryConnectionString
objectType: secret
- |
objectName: TokenSymmetricKey
objectAlias: TokenSymmetricKey
objectType: secret
</code></pre>
<p>You can also check documentation <a href="https://azure.github.io/secrets-store-csi-driver-provider-azure/configurations/sync-with-k8s-secrets/" rel="nofollow noreferrer">here</a> as you will find better examples as on the Azure Docs.</p>
| Philip Welz |
<p>I manage 3 Pods through a Deployment and connect through a NodePort Service.<br />
I wonder which pod the service load-balances to whenever I connect from outside.<br />
It's hard to check with Pod logs; can I find out through events or a kubectl command?</p>
| H.jenny | <p>I am not sure if this is exactly what you're looking for, but you can use <a href="https://istio.io/latest/docs/concepts/observability/#distributed-traces" rel="nofollow noreferrer">Istio</a> to generate detailed telemetry for all service communications.</p>
<p>You may be particularly interested in <a href="https://istio.io/latest/docs/concepts/observability/#distributed-traces" rel="nofollow noreferrer">Distributed tracing</a>:</p>
<blockquote>
<p>Istio generates distributed trace spans for each service, providing operators with a detailed understanding of call flows and service dependencies within a mesh.</p>
</blockquote>
<p>By using distributed tracing, you are able to monitor every requests as they flow through a mesh.<br />
More information about Distributed Tracing with Istio can be found in the <a href="https://istio.io/latest/faq/distributed-tracing/" rel="nofollow noreferrer">FAQ on Distributed Tracing</a> documentation.<br />
Istio supports multiple tracing backends (e.g. <a href="https://www.jaegertracing.io/" rel="nofollow noreferrer">Jaeger</a>).</p>
<p>Jaeger is a distributed tracing system similar to <a href="https://zipkin.io/" rel="nofollow noreferrer">OpenZipkin</a> and as we can find in the <a href="https://www.jaegertracing.io/" rel="nofollow noreferrer">jaegertracing documentation</a>:</p>
<blockquote>
<p>It is used for monitoring and troubleshooting microservices-based distributed systems, including:</p>
<ul>
<li>Distributed context propagation</li>
<li>Distributed transaction monitoring</li>
<li>Root cause analysis</li>
<li>Service dependency analysis</li>
<li>Performance / latency optimization</li>
</ul>
</blockquote>
<p>Of course, you don't need to install Istio to use Jaeger, but you'll have to instrument your application so that trace data from different parts of the stack are sent to Jaeger.</p>
<hr />
<p>I'll show you how you can use Jaeger to monitor a sample request.</p>
<p>Suppose I have an <code>app-1</code> <code>Deployment</code> with three <code>Pods</code> exposed using the <code>NodePort</code> service.</p>
<pre><code>$ kubectl get pod,deploy,svc
NAME READY STATUS RESTARTS AGE IP
app-1-7ddf4f77c6-g682z 2/2 Running 0 25m 10.60.1.11
app-1-7ddf4f77c6-smlcr 2/2 Running 0 25m 10.60.0.7
app-1-7ddf4f77c6-zn7kh 2/2 Running 0 25m 10.60.2.5
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/app-1 3/3 3 3 21m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/app-1 NodePort 10.64.0.88 <none> 80:30881/TCP 25m
</code></pre>
<p>Additionally, I deployed <code>jaeger</code> (with <code>istio</code>):</p>
<pre><code>$ kubectl get deploy -n istio-system | grep jaeger
jaeger 1/1 1 1 67m
</code></pre>
<p>To check if Jaeger is working as expected, I will try to connect to this <code>app-1</code> application from outside the cluster (using the <code>NodePort</code> service):</p>
<pre><code>$ curl <PUBLIC_IP>:30881
app-1
</code></pre>
<p>Let's find this trace with Jaeger:</p>
<p><a href="https://i.stack.imgur.com/TdFlR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TdFlR.png" alt="enter image description here" /></a></p>
<p>As you can see, we can easily find out which Pod has received our request.</p>
| matt_j |
<p>I've hit a wall and I'm hoping the SO community can advise on where to go next. I've set up a 6-node Kubernetes cluster with Calico as the networking service. I've only got two pods, the DNS debugging pods from Kubernetes and a MySQL pod. Well, and the kube-system pods.
Anyways, I've been at this all day. I've started from scratch 3 times and I keep hitting a wall when it comes to DNS. I've been trying to sort through why I can't access my pods externally. Here are my configs.</p>
<p><strong>mysql.yaml</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: mysql
namespace: new_namespace
spec:
type: ExternalName
externalName: mysql.new_namespace.svc.cluster.local
ports:
- port: 3306
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
namespace: new_namespace
spec:
selector:
matchLabels:
app: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
spec:
containers:
- image: mysql:5.6
name: mysql
env:
# Use secret in real usage
- name: MYSQL_ROOT_PASSWORD
value: *******
securityContext:
runAsUser: 0
allowPrivilegeEscalation: false
ports:
- name: mysql
containerPort: 3306
protocol: TCP
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: gluster-claim
</code></pre>
<p>Along with others, I've been primarily following <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/" rel="nofollow noreferrer">this guide</a>, but have been unsuccessful in determining my issue. DNS issues just... make no sense to me.</p>
<pre><code>$ kubectl exec -i -t -nnew_namespace dnsutils -- nslookup mysql
Server: 192.168.128.10
Address: 192.168.128.10#53
*** Can't find mysql.new_namespace.svc.cluster.local: No answer
</code></pre>
<p>It seems like things <em>should</em> be working...</p>
<pre><code>$ kubectl exec -i -t -nnew_namespace dnsutils -- nslookup kubernetes.default
Server: 192.168.128.10
Address: 192.168.128.10#53
Name: kubernetes.default.svc.cluster.local
Address: 192.168.128.1
</code></pre>
<pre><code>$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-58497c65d5-mcmb4 1/1 Running 0 31m
kube-system calico-node-27ln4 1/1 Running 0 36m
kube-system calico-node-dngfs 1/1 Running 6 (39m ago) 45m
kube-system calico-node-nq6bz 1/1 Running 0 38m
kube-system calico-node-p6jwz 1/1 Running 0 35m
kube-system calico-node-p8fzr 1/1 Running 0 35m
kube-system calico-node-wlzr9 1/1 Running 0 35m
kube-system calico-typha-68857595fc-kgnvx 1/1 Running 0 45m
kube-system calico-typha-68857595fc-n4hhq 1/1 Running 0 45m
kube-system calico-typha-68857595fc-vjgkc 1/1 Running 0 45m
kube-system coredns-78fcd69978-25bxh 1/1 Running 0 26m
kube-system coredns-78fcd69978-cfl52 1/1 Running 0 26m
kube-system etcd-new_namespace-master 1/1 Running 3 49m
kube-system kube-apiserver-new_namespace-master 1/1 Running 0 49m
kube-system kube-controller-manager-new_namespace-master 1/1 Running 0 31m
kube-system kube-proxy-4zx4m 1/1 Running 0 35m
kube-system kube-proxy-hhvh7 1/1 Running 0 38m
kube-system kube-proxy-m8sph 1/1 Running 0 35m
kube-system kube-proxy-qrfx7 1/1 Running 0 49m
kube-system kube-proxy-tkb4m 1/1 Running 0 35m
kube-system kube-proxy-vct78 1/1 Running 0 36m
kube-system kube-scheduler-new_namespace-master 1/1 Running 3 49m
new_namespace dnsutils 1/1 Running 0 30m
new_namespace mysql-554fd8859d-hb7lp 1/1 Running 0 4m5s
</code></pre>
<pre><code>$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 192.168.128.1 <none> 443/TCP 51m
kube-system calico-typha ClusterIP 192.168.239.47 <none> 5473/TCP 47m
kube-system kube-dns ClusterIP 192.168.128.10 <none> 53/UDP,53/TCP,9153/TCP 51m
new_namespace gluster-cluster ClusterIP 192.168.180.197 <none> 1/TCP 30m
new_namespace mysql ExternalName <none> mysql.new_namespace.svc.cluster.local <none> 31m
</code></pre>
<pre><code>$ kubectl get endpoints --all-namespaces
NAMESPACE NAME ENDPOINTS AGE
default kubernetes 10.1.0.125:6443 52m
kube-system calico-typha 10.1.0.126:5473,10.1.0.127:5473,10.1.0.128:5473 48m
kube-system kube-dns 192.168.13.1:53,192.168.97.65:53,192.168.13.1:53 + 3 more... 52m
new_namespace gluster-cluster 10.1.0.125:1,10.1.0.126:1,10.1.0.127:1 + 3 more... 31m
</code></pre>
<pre><code>$ kubectl describe endpoints kube-dns --namespace=kube-system
Name: kube-dns
Namespace: kube-system
Labels: k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=CoreDNS
Annotations: endpoints.kubernetes.io/last-change-trigger-time: 2021-09-06T15:30:06Z
Subsets:
Addresses: 192.168.13.1,192.168.97.65
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
dns-tcp 53 TCP
dns 53 UDP
metrics 9153 TCP
Events: <none>
</code></pre>
<p>And the logs...don't really mean anything to me. It looks like things are working though? Yet I still can't access mysql..</p>
<pre><code>$ kubectl logs --namespace=kube-system -l k8s-app=kube-dns new_namespace-master: Mon Sep 6 16:01:47 2021
[INFO] 192.168.119.1:52410 - 18128 "A IN mysql.new_namespace.svc.cluster.local. udp 48 false 512" NOERROR qr,aa,rd 97 0.00009327s
[INFO] 192.168.119.1:41837 - 46102 "A IN mysql.new_namespace.new_namespace.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000240183s
[INFO] 192.168.119.1:42485 - 36923 "A IN mysql.new_namespace.new_namespace.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000097762s
[INFO] 192.168.119.1:54354 - 34171 "A IN mysql.new_namespace.new_namespace.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000155781s
[INFO] 192.168.119.1:36491 - 48004 "A IN mysql.new_namespace.svc.cluster.local. udp 48 false 512" NOERROR qr,aa,rd 141 0.000075232s
[INFO] 192.168.119.1:58078 - 26522 "A IN mysql.new_namespace.new_namespace.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000096242s
[INFO] 192.168.119.1:59389 - 51728 "AAAA IN mysql.new_namespace.svc.cluster.local. udp 48 false 512" NOERROR qr,aa,rd 141 0.000110561s
[INFO] 192.168.119.1:39553 - 24302 "A IN mysql.new_namespace.new_namespace.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000114412s
[INFO] 192.168.119.1:60340 - 28351 "A IN mysql.new_namespace.svc.cluster.local. udp 48 false 512" NOERROR qr,aa,rd 141 0.000175322s
[INFO] 192.168.119.1:36494 - 12725 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000149332s
[INFO] 192.168.119.1:45875 - 25210 "AAAA IN mysql. udp 23 false 512" NXDOMAIN qr,rd,ra,ad 98 0.000629398s
[INFO] 192.168.119.1:37467 - 44662 "A IN mysql.new_namespace.svc.cluster.local. udp 48 false 512" NOERROR qr,aa,rd 97 0.000115082s
[INFO] 192.168.119.1:37792 - 59085 "AAAA IN mysql. udp 23 false 512" NXDOMAIN qr,aa,rd,ra 98 0.000043841s
[INFO] 192.168.119.1:47263 - 56267 "AAAA IN mysql.new_namespace.svc.cluster.local. udp 48 false 512" NOERROR qr,aa,rd 141 0.000241662s
[INFO] 192.168.119.1:47070 - 59276 "A IN mysql.new_namespace.svc.cluster.local. udp 48 false 512" NOERROR qr,aa,rd 141 0.000144522s
[INFO] 192.168.119.1:46812 - 32557 "A IN mysql.new_namespace.svc.cluster.local. udp 48 false 512" NOERROR qr,aa,rd 141 0.00008474s
[INFO] 192.168.119.1:57113 - 14895 "AAAA IN mysql.new_namespace.svc.cluster.local. udp 48 false 512" NOERROR qr,aa,rd 141 0.000052781s
[INFO] 192.168.119.1:51403 - 18192 "AAAA IN mysql.new_namespace.svc.cluster.local. udp 48 false 512" NOERROR qr,aa,rd 141 0.000166912s
[INFO] 192.168.119.1:52077 - 43229 "A IN kubernetes.default.new_namespace.svc.cluster.local. udp 61 false 512" NXDOMAIN qr,aa,rd 154 0.000199803s
[INFO] 192.168.119.1:60907 - 16052 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000082662s
</code></pre>
<p>Oh and this was the init command I used:</p>
<pre><code>sudo kubeadm reset ; sudo kubeadm init --pod-network-cidr=192.168.0.0/17 --service-cidr=192.168.128.0/17 --apiserver-advertise-address 10.1.0.125 --control-plane-endpoint 10.1.0.125
</code></pre>
| The Kaese | <p>Turns out I just hadn't installed Calico correctly. I started from scratch one more time and installed calico using <a href="https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises" rel="nofollow noreferrer">this guide</a> (NOT THE QUICKSTART), and everything's working. I'm not really sure how I came across finding out calico was installed incorrectly. I think it was because I had two calico controllers in two different namespaces and I was very confused as to why. Anyways, reran the same init command, followed the guide, set up mysql and I have access!</p>
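<p>For reference, the manifest-based install from that guide (Kubernetes API datastore option) boils down to roughly the following sketch; the manifest URL may change between Calico releases, and the pod CIDR in the manifest must match the <code>--pod-network-cidr</code> passed to <code>kubeadm init</code> (<code>192.168.0.0/17</code> here):</p>
<pre><code>curl https://docs.projectcalico.org/manifests/calico.yaml -O
# if you use a non-default pod CIDR, uncomment CALICO_IPV4POOL_CIDR in calico.yaml
# and set it to the same value as --pod-network-cidr
kubectl apply -f calico.yaml
</code></pre>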
| The Kaese |
<p>I would like to create a yaml file once the k8s pod is up. In my previous attempt, I just uploaded the yaml file somewhere and used wget to download it.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: test
spec:
containers:
- name: p-test
image: p-test:latest
command:
- sh
- '-c'
- >-
wget https://ppt.cc/aId -O labels.yml
image: test/alpine-utils
</code></pre>
<p>In order to make it more explicit, I tried to use a heredoc to embed the content of <code>labels.yml</code> into the k8s pod manifest, like:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: test
spec:
containers:
- name: p-test
image: p-test:latest
command:
- "/bin/bash"
- '-c'
- >
cat << LABEL > labels.yml
key: value
LABEL
</code></pre>
<p>But it doesn't work, please suggest how to modify it, thanks.</p>
| good5dog5 | <p>Instead of playing with a <code>heredoc</code> in the pod definition, it's much better and more convenient to define your <code>yaml</code> file in <a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="nofollow noreferrer">a ConfigMap</a> and refer to it in your pod definition (mount it as a volume and <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="nofollow noreferrer">use <code>subPath</code></a>), like in this example (I changed the <code>p-test</code> image to the <code>nginx</code> image):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: my-configmap
data:
labels.yaml: |-
key: value
---
apiVersion: v1
kind: Pod
metadata:
name: test
spec:
containers:
- name: p-test
image: nginx:latest
volumeMounts:
- name: my-configmap-volume
mountPath: /tmp/labels.yaml
subPath: labels.yaml
volumes:
- name: my-configmap-volume
configMap:
name: my-configmap
</code></pre>
<p>Then on the pod you will find your <code>labels.yaml</code> at <code>/tmp/labels.yaml</code>.</p>
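<p>You can verify the result inside the running pod, for example:</p>
<pre><code>$ kubectl exec test -- cat /tmp/labels.yaml
key: value
</code></pre>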
| Mikolaj S. |
<p>The k8s documentation says that kubeadm upgrade does not touch your workloads, only components internal to Kubernetes, but I don't understand what the status of the Pods is during this time.</p>
| hlwo jiv | <p>There are different upgrade strategies, but I assume you want to upgrade your cluster with zero downtime.<br />
In this case, the upgrade procedure at a high level is the following:</p>
<ol>
<li>Upgrade control plane nodes - should be executed one node at a time.</li>
<li>Upgrade worker nodes - should be executed one node at a time or a few nodes at a time, without compromising the minimum required capacity for running your workloads.</li>
</ol>
<p>It's important to prepare the node for maintenance by marking it <code>'unschedulable'</code> and evicting the workloads (moving the workloads to other nodes):</p>
<pre><code>$ kubectl drain <node-to-drain> --ignore-daemonsets
</code></pre>
<p><strong>NOTE:</strong> If there are Pods not managed by a <code>ReplicationController</code>, <code>ReplicaSet</code>, <code>Job</code>, <code>DaemonSet</code> or <code>StatefulSet</code> , the drain operation will be refused, unless you use the <code>--force</code> option.</p>
<p>As you can see in the <a href="https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/" rel="nofollow noreferrer">Safely Drain a Node</a> documentation:</p>
<blockquote>
<p>You can use kubectl drain to safely evict all of your pods from a node before you perform maintenance on the node (e.g. kernel upgrade, hardware maintenance, etc.). Safe evictions allow the pod's containers to gracefully terminate and will respect the PodDisruptionBudgets you have specified.</p>
</blockquote>
<p>If you finished the upgrade procedure on this node, you need to bring the node back online by running:</p>
<pre><code>$ kubectl uncordon <node name>
</code></pre>
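<p>Putting the steps together, a typical worker node upgrade looks roughly like the sketch below (Debian/Ubuntu package commands shown; <code>1.21.x-00</code> is only an example version, and the official docs additionally wrap the installs with <code>apt-mark hold/unhold</code>):</p>
<pre><code># from a machine with kubectl access: move workloads off the node
kubectl drain <node-name> --ignore-daemonsets

# on the worker node itself: upgrade kubeadm, then the node config, then kubelet/kubectl
apt-get update && apt-get install -y kubeadm=1.21.x-00
kubeadm upgrade node
apt-get install -y kubelet=1.21.x-00 kubectl=1.21.x-00
systemctl daemon-reload && systemctl restart kubelet

# from a machine with kubectl access: make the node schedulable again
kubectl uncordon <node-name>
</code></pre>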
<p>To sum up: <code>kubectl drain</code> changes the status of the <code>Pods</code> (the workload moves to another node).
Unlike <code>kubectl drain</code>, <code>kubeadm upgrade</code> does not touch/affect your workloads; it only modifies components internal to Kubernetes.</p>
<p>Using "kube-scheduler" as an example, we can see what exactly happens to the control plane components when we run the <code>kubeadm upgrade apply</code> command:</p>
<pre><code>[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-03-07-15-42-15/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
</code></pre>
| matt_j |
<p>I have added a few custom cipher suites at the gateway like this:</p>
<pre><code>tls:
mode: MUTUAL
credentialName: sds
minProtocolVersion: TLSV1_2
maxProtocolVersion: TLSV1_3
cipherSuites: [ECDHE-ECDSA-AES256-GCM-SHA384|ECDHE-ECDSA-AES128-GCM-SHA256|ECDHE-RSA-AES256-GCM-SHA384|ECDHE-RSA-AES128-GCM-SHA256|ECDHE-ECDSA-AES256-CBC-SHA384|ECDHE-ECDSA-AES128-CBC-SHA256|ECDHE-RSA-AES256-CBC-SHA384|ECDHE-RSA-AES128-CBC-SHA256]
</code></pre>
<p>Is there a way to validate that these cipher suites have actually been added? Are they ordered in the same way as we have specified?</p>
| Jim | <p>Just in case you are still wondering the correct format is:</p>
<pre><code>tls:
mode: MUTUAL
credentialName: sds
minProtocolVersion: TLSV1_2
maxProtocolVersion: TLSV1_3
cipherSuites:
- ECDHE-ECDSA-AES256-GCM-SHA384
- ECDHE-ECDSA-AES128-GCM-SHA256
- ECDHE-RSA-AES256-GCM-SHA384
- ECDHE-RSA-AES128-GCM-SHA256
- ECDHE-ECDSA-AES256-CBC-SHA384
- ECDHE-ECDSA-AES128-CBC-SHA256
- ECDHE-RSA-AES256-CBC-SHA384
- ECDHE-RSA-AES128-CBC-SHA256
</code></pre>
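<p>To answer the validation part of the question: you can inspect the Envoy configuration the gateway actually received, or probe the gateway with <code>openssl</code>. The pod name, namespace and hostname below are placeholders, and because the gateway uses MUTUAL TLS the handshake may still fail later on the client certificate, but the cipher negotiation happens first:</p>
<pre><code># dump the ingress gateway listener config and look for the configured ciphers
istioctl proxy-config listener istio-ingressgateway-xxxxx -n istio-system -o json | grep -i cipher

# try one specific cipher suite against the gateway
openssl s_client -connect my-gateway.example.com:443 -tls1_2 -cipher ECDHE-RSA-AES256-GCM-SHA384
</code></pre>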
| Inamati |
<p>I have a folder containing multiple values.yaml files and I would like to pass all the yaml files in that folder as an argument to helm install.</p>
<p>It is possible to use like <code>helm install example . -f values/values1.yaml -f values/values2.yaml</code></p>
<p>But there are more than 10 files in values folder Is it possible to simply pass a folder as an argument</p>
<p>I already tried <code>helm install example . -f values/*</code> And this does not work.</p>
| Encycode | <p>This is not possible, as <code>-f</code> expects a file or URL (<code>specify values in a YAML file or a URL (can specify multiple)</code>) and Helm has no option to take a directory.</p>
<p>Maybe you should reduce your values.yaml files to a base values file and one environment-specific values file:</p>
<pre><code>helm install example . -f values.yaml -f env/values_dev.yaml
</code></pre>
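<p>If you really need to pass a whole directory, a shell workaround (a sketch that assumes the file names contain no spaces) is to build the <code>-f</code> flags dynamically:</p>
<pre><code>helm install example . $(for f in values/*.yaml; do printf -- '-f %s ' "$f"; done)
</code></pre>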
| Philip Welz |
<p>I'm developing <a href="https://github.com/msrumon/microservice-architecture" rel="nofollow noreferrer">this dummy project</a> and trying to make it work locally via <code>Skaffold</code>.</p>
<p>There are 3 services in my project (running on ports <code>3001</code>, <code>3002</code> and <code>3003</code> respectively), wired via <code>NATS server</code>.</p>
<p>The problem is: I get different kinds of errors each time I run <code>skaffold debug</code>, and one/more service(s) don't work.</p>
<p>At times, I don't get any errors, and all services work as expected. The followings are some of the errors:</p>
<pre><code>Waited for <...>s due to client-side throttling, not priority and fairness,
request: GET:https://kubernetes.docker.internal:6443/api/v1/namespaces/default/pods?labelSelector=app%!D(MISSING). <...>%!C(MISSING)app.kubernetes.io%!F(MISSING)managed-by%!D(MISSING)skaffold%!C(MISSING)skaffold.dev%!F(MISSING)run-id%!D(MISSING)<...>` (from `request.go:668`)
- `0/1 nodes are available: 1 Insufficient cpu.` (from deployments)
- `UnhandledPromiseRejectionWarning: NatsError: CONNECTION_REFUSED` (from apps)
- `UnhandledPromiseRejectionWarning: Error: getaddrinfo EAI_AGAIN nats-service` (from apps)
</code></pre>
<p>I'm at a loss and can't help myself anymore. I hope someone here will be able to help me out.</p>
<p>Thanks in advance.</p>
<p>PS: Below is my machine's config, in case it's my machine's fault.</p>
<pre><code>Processor: AMD Ryzen 7 1700 (8C/16T)
Memory: 2 x 8GB DDR4 2667MHz
Graphics: AMD Radeon RX 590 (8GB)
OS: Windows 10 Pro 21H1
</code></pre>
<pre class="lang-sh prettyprint-override"><code>$ docker version
Client:
Version: 19.03.12
API version: 1.40
Go version: go1.13.12
Git commit: 0ed913b8-
Built: 07/28/2020 16:36:03
OS/Arch: windows/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 20.10.8
API version: 1.41 (minimum version 1.12)
Go version: go1.16.6
Git commit: 75249d8
Built: Fri Jul 30 19:52:10 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.9
GitCommit: e25210fe30a0a703442421b0f60afac609f950a3
runc:
Version: 1.0.1
GitCommit: v1.0.1-0-g4144b63
docker-init:
Version: 0.19.0
GitCommit: de40ad0
</code></pre>
<pre class="lang-sh prettyprint-override"><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.2", GitCommit:"8b5a19147530eaac9476b0ab82980b4088bbc1b2", GitTreeState:"clean", BuildDate:"2021-09-15T21:38:50Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.4", GitCommit:"3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae", GitTreeState:"clean", BuildDate:"2021-08-11T18:10:22Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>I use WSL2 (Debian) and <code>docker-desktop</code> is the context of Kubernetes.</p>
| msrumon | <p>The main reason for issues like this one is that you are setting <a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#if-you-specify-a-cpu-limit-but-do-not-specify-a-cpu-request" rel="nofollow noreferrer">only a CPU limit (without setting a CPU request), so Kubernetes automatically assigns a CPU request which is equal to the CPU limit</a>:</p>
<blockquote>
<p>If you specify a CPU limit for a Container but do not specify a CPU request, Kubernetes automatically assigns a CPU request that matches the limit. Similarly, if a Container specifies its own memory limit, but does not specify a memory request, Kubernetes automatically assigns a memory request that matches the limit.</p>
</blockquote>
<p>So as the requests are equal to the limits, your node can't meet these requirements (you have 16 CPUs available; to start all services you need 24 CPUs) - that's why you are getting the <code>0/1 nodes are available: 1 Insufficient cpu</code> error message.</p>
<p><em>How to fix it?</em></p>
<ul>
<li>Set a <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits" rel="nofollow noreferrer">CPU request</a> which is different from the <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits" rel="nofollow noreferrer">CPU limit</a></li>
<li>Delete CPU limits</li>
</ul>
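<p>For example, the first option could look like this in the container spec of one of your deployments (the values below are only illustrative):</p>
<pre><code>resources:
  requests:
    cpu: 250m
    memory: 256Mi
  limits:
    cpu: "1"
    memory: 512Mi
</code></pre>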
<p><em>But...</em></p>
<p>You wrote that:</p>
<blockquote>
<p>Should I also try setting up the <code>requests</code> key and set the lower limit too? Or what about completely omitting it?
I tried that one, and still same issue.</p>
</blockquote>
<p>So if you deleted all CPU limits from all deployments and you still have errors related to insufficient CPU, it clearly shows that your app is too resource-hungry. I'd suggest optimizing the application in terms of resource utilization. Another option is to increase the node's resources.</p>
| Mikolaj S. |
<p>I'm trying to build a docker container with the following command:</p>
<pre><code>sudo docker build docker_calculadora/
</code></pre>
<p>but when it's building, at step 9 the following error appears:</p>
<p>Step 9/27 : RUN set -ex; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; done; gpg --batch --export $GPG_KEYS > /etc/apt/trusted.gpg.d/mariadb.gpg; command -v gpgconf > /dev/null && gpgconf --kill all || :; rm -r "$GNUPGHOME"; apt-key list
---> Running in a80677ab986c</p>
<ul>
<li>mktemp -d</li>
<li>export GNUPGHOME=/tmp/tmp.TiWBSXwFOS</li>
<li>gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys 177F4010FE56CA3336300305F1656F24C74CD1D8
gpg: keybox '/tmp/tmp.TiWBSXwFOS/pubring.kbx' created
gpg: keyserver receive failed: No name
The command '/bin/sh -c set -ex; export GNUPGHOME="$(mktemp -d)"; for key in $GPG_KEYS; do gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; done; gpg --batch --export $GPG_KEYS > /etc/apt/trusted.gpg.d/mariadb.gpg; command -v gpgconf > /dev/null && gpgconf --kill all || :; rm -r "$GNUPGHOME"; apt-key list' returned a non-zero code: 2</li>
</ul>
<p>My DockerFile:</p>
<pre><code># vim:set ft=dockerfile:
FROM ubuntu:focal
# add our user and group first to make sure their IDs get assigned consistently, regardless of whatever dependencies get added
RUN groupadd -r mysql && useradd -r -g mysql mysql
# https://bugs.debian.org/830696 (apt uses gpgv by default in newer releases, rather than gpg)
RUN set -ex; \
apt-get update; \
if ! which gpg; then \
apt-get install -y --no-install-recommends gnupg; \
fi; \
if ! gpg --version | grep -q '^gpg (GnuPG) 1\.'; then \
# Ubuntu includes "gnupg" (not "gnupg2", but still 2.x), but not dirmngr, and gnupg 2.x requires dirmngr
# so, if we're not running gnupg 1.x, explicitly install dirmngr too
apt-get install -y --no-install-recommends dirmngr; \
fi; \
rm -rf /var/lib/apt/lists/*
# add gosu for easy step-down from root
# https://github.com/tianon/gosu/releases
ENV GOSU_VERSION 1.12
RUN set -eux; \
savedAptMark="$(apt-mark showmanual)"; \
apt-get update; \
apt-get install -y --no-install-recommends ca-certificates wget; \
rm -rf /var/lib/apt/lists/*; \
dpkgArch="$(dpkg --print-architecture | awk -F- '{ print $NF }')"; \
wget -O /usr/local/bin/gosu "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch"; \
wget -O /usr/local/bin/gosu.asc "https://github.com/tianon/gosu/releases/download/$GOSU_VERSION/gosu-$dpkgArch.asc"; \
export GNUPGHOME="$(mktemp -d)"; \
gpg --batch --keyserver hkps://keys.openpgp.org --recv-keys B42F6819007F00F88E364FD4036A9C25BF357DD4; \
gpg --batch --verify /usr/local/bin/gosu.asc /usr/local/bin/gosu; \
gpgconf --kill all; \
rm -rf "$GNUPGHOME" /usr/local/bin/gosu.asc; \
apt-mark auto '.*' > /dev/null; \
[ -z "$savedAptMark" ] || apt-mark manual $savedAptMark > /dev/null; \
apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false; \
chmod +x /usr/local/bin/gosu; \
gosu --version; \
gosu nobody true
RUN mkdir /docker-entrypoint-initdb.d
# install "pwgen" for randomizing passwords
# install "tzdata" for /usr/share/zoneinfo/
# install "xz-utils" for .sql.xz docker-entrypoint-initdb.d files
RUN set -ex; \
apt-get update; \
apt-get install -y --no-install-recommends \
pwgen \
tzdata \
xz-utils \
; \
rm -rf /var/lib/apt/lists/*
ENV GPG_KEYS \
# pub rsa4096 2016-03-30 [SC]
# 177F 4010 FE56 CA33 3630 0305 F165 6F24 C74C D1D8
# uid [ unknown] MariaDB Signing Key <[email protected]>
# sub rsa4096 2016-03-30 [E]
177F4010FE56CA3336300305F1656F24C74CD1D8
RUN set -ex; \
export GNUPGHOME="$(mktemp -d)"; \
for key in $GPG_KEYS; do \
gpg --batch --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; \
done; \
gpg --batch --export $GPG_KEYS > /etc/apt/trusted.gpg.d/mariadb.gpg; \
command -v gpgconf > /dev/null && gpgconf --kill all || :; \
rm -r "$GNUPGHOME"; \
apt-key list
# bashbrew-architectures: amd64 arm64v8 ppc64le
ENV MARIADB_MAJOR 10.5
ENV MARIADB_VERSION 1:10.5.8+maria~focal
# release-status:Stable
# (https://downloads.mariadb.org/mariadb/+releases/)
RUN set -e;\
echo "deb http://ftp.osuosl.org/pub/mariadb/repo/$MARIADB_MAJOR/ubuntu focal main" > /etc/apt/sources.list.d/mariadb.list; \
{ \
echo 'Package: *'; \
echo 'Pin: release o=MariaDB'; \
echo 'Pin-Priority: 999'; \
} > /etc/apt/preferences.d/mariadb
# add repository pinning to make sure dependencies from this MariaDB repo are preferred over Debian dependencies
# libmariadbclient18 : Depends: libmysqlclient18 (= 5.5.42+maria-1~wheezy) but 5.5.43-0+deb7u1 is to be installed
# the "/var/lib/mysql" stuff here is because the mysql-server postinst doesn't have an explicit way to disable the mysql_install_db codepath besides having a database already "configured" (ie, stuff in /var/lib/mysql/mysql)
# also, we set debconf keys to make APT a little quieter
RUN set -ex; \
{ \
echo "mariadb-server-$MARIADB_MAJOR" mysql-server/root_password password 'unused'; \
echo "mariadb-server-$MARIADB_MAJOR" mysql-server/root_password_again password 'unused'; \
} | debconf-set-selections; \
apt-get update; \
apt-get install -y \
"mariadb-server=$MARIADB_VERSION" \
# mariadb-backup is installed at the same time so that `mysql-common` is only installed once from just mariadb repos
mariadb-backup \
socat \
; \
rm -rf /var/lib/apt/lists/*; \
# purge and re-create /var/lib/mysql with appropriate ownership
rm -rf /var/lib/mysql; \
mkdir -p /var/lib/mysql /var/run/mysqld; \
chown -R mysql:mysql /var/lib/mysql /var/run/mysqld; \
# ensure that /var/run/mysqld (used for socket and lock files) is writable regardless of the UID our mysqld instance ends up having at runtime
chmod 777 /var/run/mysqld; \
# comment out a few problematic configuration values
find /etc/mysql/ -name '*.cnf' -print0 \
| xargs -0 grep -lZE '^(bind-address|log|user\s)' \
| xargs -rt -0 sed -Ei 's/^(bind-address|log|user\s)/#&/'; \
# don't reverse lookup hostnames, they are usually another container
echo '[mysqld]\nskip-host-cache\nskip-name-resolve' > /etc/mysql/conf.d/docker.cnf
VOLUME /var/lib/mysql
COPY docker-entrypoint.sh /usr/local/bin/
ENTRYPOINT ["docker-entrypoint.sh"]
EXPOSE 3306
RUN apt-get update
#RUN apt-get install -y software-properties-common
#RUN apt-get update
RUN apt-get install -y apache2 curl nano php libapache2-mod-php php7.4-mysql
EXPOSE 80
COPY calculadora.html /var/www/html/
COPY calculadora.php /var/www/html/
COPY success.html /var/www/html/
COPY start.sh /
COPY 50-server.cnf /etc/mysql/mariadb.conf.d/
RUN chmod 777 /start.sh
CMD ["/start.sh"]
'''
</code></pre>
| Nat | <p>The error is because some of the GPG keyservers referenced in the MariaDB Dockerfile are down. You just need to update the keyserver addresses.</p>
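<p>One common fix (assuming <code>keyserver.ubuntu.com</code> is reachable from your build environment) is to point the failing <code>RUN</code> step at a live keyserver instead of the retired <code>ha.pool.sks-keyservers.net</code> pool:</p>
<pre><code>RUN set -ex; \
	export GNUPGHOME="$(mktemp -d)"; \
	for key in $GPG_KEYS; do \
		gpg --batch --keyserver hkps://keyserver.ubuntu.com --recv-keys "$key"; \
	done; \
	gpg --batch --export $GPG_KEYS > /etc/apt/trusted.gpg.d/mariadb.gpg; \
	command -v gpgconf > /dev/null && gpgconf --kill all || :; \
	rm -r "$GNUPGHOME"; \
	apt-key list
</code></pre>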
| Nat |
<p>I have an application that will start Jobs in different K8S clusters/cloud providers depending on different criteria, including the current availability of the cluster.
In other words, my application will loop over acceptable K8S clusters, and if a cluster is currently full, the job can be launched in a different cluster.</p>
<p>My problem is how to determine whether the Job will fit in the cluster or not.
My first idea was to loop over all nodes and see how many have enough resources (Mem/CPU) to run my job.</p>
<p>While doable, it seems there is no native API to get the used resources for one specific node. I can code that, but it gives me the feeling this is not the right solution, as it seems to be redoing part of the kube-scheduler, and I am sure there are lots of edge cases to be taken into account.</p>
<p>Maybe I am overthinking this and this is totally the right path, but I wanted to check if there isn't a cleaner solution.</p>
| Djoby | <p>You can use <a href="https://github.com/kubernetes/kube-state-metrics" rel="nofollow noreferrer">kube-state-metrics</a>, which exposes a metrics API that you can query.</p>
<p>With that you have <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/docs/node-metrics.md" rel="nofollow noreferrer">node metrics</a> like <code>kube_node_status_allocatable</code> or <code>kube_node_status_capacity</code> and <a href="https://github.com/kubernetes/kube-state-metrics/blob/master/docs/pod-metrics.md" rel="nofollow noreferrer">pod metrics</a> like <code>kube_pod_container_resource_requests</code> or <code>kube_pod_container_resource_limits</code>.</p>
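<p>Assuming these metrics are scraped by Prometheus, a rough query for the schedulable CPU headroom per node (allocatable minus requested) could look like the sketch below; whether a Job "fits" then comes down to comparing its own requests against that headroom:</p>
<pre><code>sum by (node) (kube_node_status_allocatable{resource="cpu"})
  - sum by (node) (kube_pod_container_resource_requests{resource="cpu"})
</code></pre>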
| Philip Welz |
<p>In K8s, every cluster has a set of nodes; some are master nodes and others are worker nodes.
How can we know if a node is a master or a worker?</p>
| Arjun | <p>In general, the easiest way to check if node is master or worker is to check if it has label <code>node-role.kubernetes.io/control-plane</code> (<a href="https://v1-20.docs.kubernetes.io/docs/setup/release/notes/#urgent-upgrade-notes" rel="noreferrer">or before Kubernetes <code>v1.20</code></a>: <code>node-role.kubernetes.io/master</code>):</p>
<p>Since Kubernetes <code>v1.20</code>:</p>
<pre><code>kubectl get nodes -l 'node-role.kubernetes.io/control-plane'
</code></pre>
<p>Before Kubernetes <code>v1.20</code>:</p>
<pre><code>kubectl get nodes -l 'node-role.kubernetes.io/master'
</code></pre>
<p>To get workers we can use negation for above expressions (since Kubernetes <code>v1.20</code>):</p>
<pre><code>kubectl get nodes -l '!node-role.kubernetes.io/control-plane'
</code></pre>
<p>Before Kubernetes <code>v1.20</code>:</p>
<pre><code>kubectl get nodes -l '!node-role.kubernetes.io/master'
</code></pre>
<p>Another approach is to use command <code>kubectl cluster-info</code> which will print IP address of the <code>control-plane</code>:</p>
<pre><code>Kubernetes control plane is running at https://{ip-address-of-the-control-plane}:8443
</code></pre>
<p>Keep in mind that for some cloud provided solutions it may work totally different. For example, <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture" rel="noreferrer">in GKE, nodes don't have any roles by default and IP address returned by <code>kubectl cluster-info</code> is address of the API Server</a>, not listed in <code>kubectl get nodes</code> command so always remember to double-check docs provided by your Kubernetes cluster provider.</p>
| Mikolaj S. |
<p>I have some average yaml file defining some average role resource, all yaml should reflect my resource's desired state.</p>
<p>To get new average role into cluster I usually run <code>kubectl apply -f my-new-role.yaml</code>
but now I see this (recommended!?) alternative <code>kubectl auth reconcile -f my-new-role.yaml</code></p>
<p>Ok, there may be RBAC relationships, i.e. Bindings, but shouldn't an <strong>apply</strong> do the same thing?</p>
<p>Is there ever a case where one would update (cluster) roles but not want their related (cluster) bindings updated?</p>
| siwasaki | <p>The <code>kubectl auth reconcile</code> command-line utility has been added in Kubernetes <code>v1.8</code>.<br />
Properly applying RBAC permissions is a complex task because you need to compute logical covers operations between rule sets.</p>
<p>As you can see in the <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.8.md#cli-changes" rel="noreferrer">CHANGELOG-1.8.md</a>:</p>
<blockquote>
<p>Added RBAC reconcile commands with kubectl auth reconcile -f FILE. When passed a file which contains RBAC roles, rolebindings, clusterroles, or clusterrolebindings, this command computes covers and adds the missing rules. The logic required to properly apply RBAC permissions is more complicated than a JSON merge because you have to compute logical covers operations between rule sets. This means that we cannot use kubectl apply to update RBAC roles without risking breaking old clients, such as controllers.</p>
</blockquote>
<p>The <code>kubectl auth reconcile</code> command will ignore any resources that are not <code>Role</code>, <code>RoleBinding</code>, <code>ClusterRole</code>, and <code>ClusterRoleBinding</code> objects, so you can safely run reconcile on the full set of manifests (see: <a href="https://github.com/argoproj/argo-cd/issues/523#issuecomment-417911606" rel="noreferrer">Use 'kubectl auth reconcile' before 'kubectl apply'</a>)</p>
<hr />
<p>I've created an example to demonstrate how useful the <code>kubectl auth reconcile</code> command is.</p>
<p>I have a simple <code>secret-reader</code> <code>RoleBinding</code> and I want to change a binding's <code>roleRef</code> (I want to change the <code>Role</code> that this binding refers to):<br />
<strong>NOTE:</strong> A binding to a different role is a fundamentally different binding (see: <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#clusterrolebinding-example" rel="noreferrer">A binding to a different role is a fundamentally different binding</a>).</p>
<pre><code># BEFORE
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: secret-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: secret-reader
subjects:
- kind: ServiceAccount
name: service-account-1
namespace: default
# AFTER
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: secret-admin
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: secret-creator
subjects:
- kind: ServiceAccount
name: service-account-1
namespace: default
</code></pre>
<p>As we know, <code>roleRef</code> is immutable, so it is not possible to update this <code>secret-admin</code> <code>RoleBinding</code> using <code>kubectl apply</code>:</p>
<pre><code>$ kubectl apply -f secret-admin.yml
The RoleBinding "secret-admin" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"Role", Name:"secret-creator"}: cannot change roleRef
</code></pre>
<p>Instead, we can use <code>kubectl auth reconcile</code>. If a <code>RoleBinding</code> is updated to a new <code>roleRef</code>, the <code>kubectl auth reconcile</code> command handles a delete/recreate related objects for us.</p>
<pre><code>$ kubectl auth reconcile -f secret-admin.yml
rolebinding.rbac.authorization.k8s.io/secret-admin reconciled
reconciliation required recreate
</code></pre>
<p>Additionally, you can use the <code>--remove-extra-permissions</code> and <code>--remove-extra-subjects</code> options.</p>
<p>Finally, we can check if everything has been successfully updated:</p>
<pre><code>$ kubectl describe rolebinding secret-admin
Name: secret-admin
Labels: <none>
Annotations: <none>
Role:
Kind: Role
Name: secret-creator
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount service-account-1 default
</code></pre>
| matt_j |
<p>I'm passing the following string through values.yaml:</p>
<pre><code>urls: http://example.com http://example2.com http://example3.com
</code></pre>
<p>Is there a way to create a list from this, so I can then do something like:</p>
<pre><code>{{ range $urls }}
{{ . }}
{{ end }}
</code></pre>
<p>The problem is I'm passing the urls var in a dynamic fashion, and I also can't avoid using a single string for that (ArgoCD ApplicationSet won't let me pass a list).</p>
| Andres Julia | <p>Basically all you need is just add this line in your template <code>yaml</code>:</p>
<pre><code>{{- $urls := splitList " " .Values.urls }}
</code></pre>
<p>It will import <code>urls</code> string from <code>values.yaml</code> <a href="http://masterminds.github.io/sprig/string_slice.html" rel="nofollow noreferrer">as the list</a> so you will be able run your code which you posted in your question.</p>
<p><strong>Simple example based on <a href="https://helm.sh/docs/chart_template_guide/control_structures/" rel="nofollow noreferrer">helm docs</a></strong>:</p>
<ol>
<li><p>Let's get simple chart used in <a href="https://helm.sh/docs/chart_template_guide/getting_started/" rel="nofollow noreferrer">helm docs</a> and prepare it:</p>
<pre><code>helm create mychart
rm -rf mychart/templates/*
</code></pre>
</li>
<li><p>Edit <code>values.yaml</code> and insert <code>urls</code> string:</p>
<pre><code>urls: http://example.com http://example2.com http://example3.com
</code></pre>
</li>
<li><p>Create ConfigMap in <code>templates</code> folder (name it <code>configmap.yaml</code>)</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-configmap
data:
{{- $urls := splitList " " .Values.urls }}
urls: |-
{{- range $urls }}
- {{ . }}
{{- end }}
</code></pre>
<p>As can see, I'm using your loop (with "- " to avoid creating empty lines).</p>
</li>
<li><p>Install chart and check it:</p>
<pre><code>helm install example ./mychart/
helm get manifest example
</code></pre>
<p>Output:</p>
<pre><code>---
# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: example-configmap
data:
urls: |-
- http://example.com
- http://example2.com
- http://example3.com
</code></pre>
</li>
</ol>
| Mikolaj S. |