<p>I want to add an additional scrape config into Prometheus. I have followed the below method.</p>
<p><a href="https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/additional-scrape-config.md" rel="nofollow noreferrer">https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/additional-scrape-config.md</a></p>
<p>First, I created a file prometheus-additional.yaml and added the new config:</p>
<pre><code>- job_name: "prometheus"
static_configs:
- targets: ["localhost:9090"]
</code></pre>
<p>Secondly, I created a secret out of it:</p>
<pre><code>kubectl create secret generic additional-scrape-configs --from-file=prometheus-additional.yaml --dry-run -oyaml > additional-scrape-configs.yaml
</code></pre>
<p>Then I created the secret using the below command:</p>
<pre><code>kubectl apply -f additional-scrape-configs.yaml -n monitoring
</code></pre>
<p>Then in the above link it says</p>
<p><strong>"Finally, reference this additional configuration in your prometheus.yaml CRD."</strong></p>
<pre><code>apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
name: prometheus
labels:
prometheus: prometheus
spec:
replicas: 2
serviceAccountName: prometheus
serviceMonitorSelector:
matchLabels:
team: frontend
additionalScrapeConfigs:
name: additional-scrape-configs
key: prometheus-additional.yaml
</code></pre>
<p>Where can I find the above? Do I need to create a new CRD? Can't I update the existing running deployment?</p>
| <p>This is somewhat wrong in the documentation; you have to use additionalScrapeConfigs<strong>Secret</strong>:</p>
<pre><code> additionalScrapeConfigsSecret:
enabled: true
name: additional-scrape-configs
key: prometheus-additional.yaml
</code></pre>
<p>Otherwise you get the error <code>cannot unmarshal !!map into []yaml.MapSlice</code>.</p>
<p>Here is a better documentation:
<a href="https://github.com/prometheus-community/helm-charts/blob/8b45bdbdabd9b54766c4beb3c562b766b268a034/charts/kube-prometheus-stack/values.yaml#L2691" rel="nofollow noreferrer">https://github.com/prometheus-community/helm-charts/blob/8b45bdbdabd9b54766c4beb3c562b766b268a034/charts/kube-prometheus-stack/values.yaml#L2691</a></p>
<p>According to this, you could also add scrape configs without packaging them into a secret, like this:</p>
<pre><code>additionalScrapeConfigs: |
- job_name: "prometheus"
static_configs:
- targets: ["localhost:9090"]
</code></pre>
|
<p>It was a working setup and no manual changes were made.</p>
<p>When we try to deploy an application on AKS, it fails to pull an image from the ACR.</p>
<p>As per the <code>kubectl describe po</code> output:</p>
<p>Failed to pull image "xyz.azurecr.io/xyz:-beta-68": [rpc error: code = Unknown desc = Error response from daemon: Get https://xyz.azurecr.io/v2/: dial tcp: lookup rxyz.azurecr.io on [::1]:53: read udp [::1]:46256->[::1]:53: read: connection refused, rpc error: code = Unknown desc = Error response from daemon: Get https://xyz.azurecr.io/v2/: dial tcp: lookup xyz.azurecr.io on [::1]:53: read udp [::1]:46112->[::1]:53: read: connection refused, rpc error: code = Unknown desc = Error response from daemon: Get https://xyz.azurecr.io/v2/: dial tcp: lookup xyz.azurecr.io on [::1]:53: read udp [::1]:36677->[::1]:53: read: connection refused]</p>
<p>While troubleshooting I realised that a few nodes have the DNS entry in /etc/resolv.conf, and on those the image pull works fine, while a few nodes don't have the DNS entry in /etc/resolv.conf and on those the image pull fails.</p>
<p>And if I manually add the DNS entry to /etc/resolv.conf on the nodes that don't have it, the changes are reverted to the initial state within a few minutes.</p>
<p>Is there a procedure to edit /etc/resolv.conf or otherwise fix the image pull issue?</p>
| <p>There is a bug in Ubuntu that impacts AKS (globally).
You can follow the link below to see the status:
<a href="https://status.azure.com/en-us/status" rel="nofollow noreferrer">https://status.azure.com/en-us/status</a>
In addition, there is a thread here whose suggestions you can follow to work around this issue:
<a href="https://learn.microsoft.com/en-us/answers/questions/987231/error-connecting-aks-with-acr.html" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/answers/questions/987231/error-connecting-aks-with-acr.html</a></p>
|
<p>Every deepcopy generated file that is produced by <code>make</code> with kubebuilder produces a file with a <code>// +build !ignore_autogenerated</code> build tag directive at the top.</p>
<pre><code>//go:build !ignore_autogenerated
// +build !ignore_autogenerated
/*
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
// Code generated by controller-gen. DO NOT EDIT.
</code></pre>
<p>Why is this specific build tag directive added to these generated files? What's its purpose?</p>
| <p>It's used by <code>controller-gen</code> to identify files it generated; it will only overwrite those.</p>
<p>E.g. edit a generated <code>zz_generated.deepcopy.go</code> and run <code>make generate</code> => the file is overwritten.</p>
<p>Now edit the file again, but also remove the two lines with the build constraints (the <code>go:build</code> line is for Go >= 1.17, the <code>+build</code> line for older versions, IIRC) and run <code>make generate</code> again => your changes to the file are not overwritten this time.</p>
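<p>As a side effect of the constraint, you can also exclude the generated files from a build by setting the tag yourself. A small sketch (assuming a standard kubebuilder module layout):</p>
<pre><code># Excludes every file guarded by !ignore_autogenerated (i.e. the generated deepcopy files)
go build -tags ignore_autogenerated ./...

# The default build includes them as usual
go build ./...
</code></pre>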
|
<p>I'm trying to set up a K3s cluster. When I had a single-master-and-agent setup, cert-manager had no issues. Now I'm trying a 2-master setup with embedded etcd. I opened TCP ports <code>6443</code> and <code>2379-2380</code> for both VMs and did the following:</p>
<pre class="lang-none prettyprint-override"><code>VM1: curl -sfL https://get.k3s.io | sh -s server --token TOKEN --cluster-init
VM2: curl -sfL https://get.k3s.io | sh -s server --token TOKEN --server https://MASTER_IP:6443
</code></pre>
<pre class="lang-none prettyprint-override"><code># k3s kubectl get nodes
NAME STATUS ROLES AGE VERSION
VM1 Ready control-plane,etcd,master 130m v1.22.7+k3s1
VM2 Ready control-plane,etcd,master 128m v1.22.7+k3s1
</code></pre>
<p>Installing cert-manager works fine:</p>
<pre class="lang-none prettyprint-override"><code># k3s kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml
# k3s kubectl get pods --namespace cert-manager
NAME READY STATUS
cert-manager-b4d6fd99b-c6fpc 1/1 Running
cert-manager-cainjector-74bfccdfdf-gtmrd 1/1 Running
cert-manager-webhook-65b766b5f8-brb76 1/1 Running
</code></pre>
<p>My manifest has the following definition:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: [email protected]
privateKeySecretRef:
name: letsencrypt-account-key
solvers:
- selector: {}
http01:
ingress: {}
</code></pre>
<p>Which results in the following error:</p>
<pre class="lang-none prettyprint-override"><code># k3s kubectl apply -f manifest.yaml
Error from server (InternalError): error when creating "manifest.yaml": Internal error occurred: failed calling webhook "webhook.cert-manager.io": failed to call webhook: Post "https://cert-manager-webhook.cert-manager.svc:443/mutate?timeout=10s": context deadline exceeded
</code></pre>
<p>I tried disabling both firewalls, waiting a day, reset and re-setup, but the error persists. Google hasn't been much help either. The little info I can find goes over my head for the most part and no tutorial seems to do any extra steps.</p>
| <p>A good starting point for troubleshooting issues with the webhook can be found in the <a href="https://cert-manager.io/docs/concepts/webhook/#known-problems-and-solutions" rel="nofollow noreferrer">docs</a>, e.g. there is a section for <a href="https://cert-manager.io/docs/concepts/webhook/#known-problems-and-solutions" rel="nofollow noreferrer">problems on GKE private clusters</a>.</p>
<p>In my case, however, this didn't really solve the problem. For me the issue was that, while playing around with <code>cert-manager</code>, I happened to install and uninstall it multiple times. It turned out that just removing the namespace, e.g. <code>kubectl delete namespace cert-manager</code>, didn't remove the webhooks and other non-obvious resources.</p>
<p>Following the official guide for <a href="https://cert-manager.io/v1.2-docs/installation/uninstall/kubernetes/" rel="nofollow noreferrer">uninstalling cert-manager</a> and applying the manifests again solved the issue.</p>
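<p>For the manifest-based install used in the question, a clean removal and reinstall looks roughly like the sketch below (the version should match whatever you originally applied):</p>
<pre><code># Remove cert-manager together with its cluster-scoped resources (webhooks, CRDs, ...)
kubectl delete -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml

# Check that no leftovers remain before reinstalling
kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations | grep cert-manager
kubectl get namespace cert-manager

# Reinstall
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml
</code></pre>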
|
<p>I created a private CloudSQL instance for a Postgres DB.
I followed the documentation at <a href="https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine" rel="nofollow noreferrer">https://cloud.google.com/sql/docs/postgres/connect-kubernetes-engine</a>, applied the YAML changes and restarted the deployment.
I added the below to my existing deployment YAML.
I was wondering if there is any way to test/check whether the connection to the DB was successful.
EDIT - I didn't encounter any errors when the pod restarted, so that's the main reason I am asking this question.</p>
<pre><code>spec:
containers:
- name: <YOUR-APPLICATION-NAME>
# ... other container configuration
env:
- name: DB_USER
valueFrom:
secretKeyRef:
name: <YOUR-DB-SECRET>
key: username
- name: DB_PASS
valueFrom:
secretKeyRef:
name: <YOUR-DB-SECRET>
key: password
- name: DB_NAME
valueFrom:
secretKeyRef:
name: <YOUR-DB-SECRET>
key: database
- name: DB_HOST
valueFrom:
secretKeyRef:
name: <YOUR-PRIVATE-IP-SECRET>
key: db_host
</code></pre>
| <p>If you just want to check the connection, you can run an Ubuntu or BusyBox container to check the connection to the Postgres database.</p>
<pre><code>kubectl run -i --tty busybox --image=busybox --restart=Never -- sh
$> # ls
</code></pre>
<p>Once you are in the container, you can try the connection from the container to the Postgres database, or run any network commands to check further.</p>
<p>Command to check the <strong>Postgres SQL</strong> connection (requires a <code>psql</code> client in the container image):</p>
<pre><code>psql -h <REMOTE HOST> -p <REMOTE PORT> -U <DB_USER> <DB_NAME>
</code></pre>
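<p>Since the stock <code>busybox</code> image does not ship <code>psql</code>, it can be easier to start a throwaway pod from an image that already includes the client. A minimal sketch, where the image tag is arbitrary and the placeholders come from the secrets referenced in the deployment above:</p>
<pre><code># Runs psql once against the private IP and removes the pod afterwards
kubectl run -it --rm pg-check --image=postgres:14 --restart=Never \
  --env="PGPASSWORD=<DB_PASS>" -- \
  psql -h <DB_HOST> -U <DB_USER> <DB_NAME> -c "SELECT 1;"
</code></pre>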
|
<p>I'm running Kubernetes on localhost; the pod is running and I can access the service when I use port forwarding:</p>
<pre><code>kubectl port-forward svc/my-service 8080:8080
</code></pre>
<p>I can GET/POST etc. to the service on localhost.</p>
<p>I'm trying to use an Ingress to access it; here is the YAML file:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: my-ingress
spec:
ingressClassName: nginx
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: my-service
port:
number: 8080
</code></pre>
<p>I've also installed the ingress controller. But it isn't working as expected. Anything wrong with this?</p>
<p>EDIT: the service that I'm trying to connect to with the Ingress:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-service
labels:
app: my-service
spec:
replicas: 1
selector:
matchLabels:
app: my-service
template:
metadata:
labels:
app: my-service
spec:
containers:
- image: test/my-service:0.0.1-SNAPSHOT
name: my-service
ports:
- containerPort: 8080
... other spring boot override properties
---
apiVersion: v1
kind: Service
metadata:
name: my-service
labels:
app: my-service
spec:
type: ClusterIP
selector:
app: my-service
ports:
- name: 8080-8080
port: 8080
protocol: TCP
targetPort: 8080
</code></pre>
<p>The service is working by itself, though.</p>
<p>EDIT:</p>
<p>It worked when I used <strong>https</strong> instead of <strong>http</strong></p>
| <p>Is the Ingress resource in the same namespace as the service? Can you share the manifest of the service? Also, what do the logs of the NGINX ingress controller show, and what sort of error do you face when hitting the endpoint in the browser?</p>
<p>The Ingress's YAML file looks OK to me, BTW.</p>
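<p>A few generic checks that usually narrow this down; this is only a sketch and assumes a default ingress-nginx installation in the <code>ingress-nginx</code> namespace:</p>
<pre><code># Was the Ingress admitted by the controller and did it get an address?
kubectl get ingress my-ingress -o wide
kubectl describe ingress my-ingress

# Does the Service actually have pod endpoints behind it?
kubectl get endpoints my-service

# Watch the controller logs while you hit the URL
kubectl logs -n ingress-nginx deploy/ingress-nginx-controller -f
</code></pre>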
|
<p>I've been trying to deploy a workflow in Argo with Kubernetes and I'm getting this error</p>
<p><a href="https://i.stack.imgur.com/BGPAk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BGPAk.png" alt="![Kubernetes Argo Error" /></a></p>
<p>Can someone help me to know the root of the issue?</p>
<p>I’ve tried several things but I’ve been unsuccessful.</p>
| <p>The way Argo solves that problem is by using compression on the stored entity, but the real question is whether you have to have all 3MB worth of that data at once, or if it is merely more convenient for you and they could be decomposed into separate objects with relationships between each other. The kubernetes API is not a blob storage, and shouldn't be treated as one.</p>
<ul>
<li>The "error": "Request entity too large: limit is 3145728" is probably
the default response from kubernetes handler for objects larger than
3MB, as you can see <a href="https://github.com/kubernetes/kubernetes/blob/db1990f48b92d603f469c1c89e2ad36da1b74846/test/integration/master/synthetic_master_test.go#L315" rel="nofollow noreferrer">here at L305</a> of the source code:</li>
</ul>
<pre><code>expectedMsgFor1MB := `etcdserver: request is too large`
expectedMsgFor2MB := `rpc error: code = ResourceExhausted desc = trying to send message larger than max`
expectedMsgFor3MB := `Request entity too large: limit is 3145728`
expectedMsgForLargeAnnotation := `metadata.annotations: Too long: must have at most 262144 bytes`
</code></pre>
<ul>
<li><a href="https://github.com/etcd-io/etcd/issues/9925" rel="nofollow noreferrer">etcd</a> does indeed have a 1.5MB limit for processing a request, and the etcd documentation suggests trying the <code>--max-request-bytes</code> flag, but it would have no effect on a GKE cluster because you don't have such permission on the master node.</li>
</ul>
<p>But even if you did, it would not be ideal because usually this error means that you are <a href="https://github.com/kubeflow/pipelines/issues/3134#issuecomment-591278230" rel="nofollow noreferrer">consuming the objects</a> instead of referencing them which would degrade your performance.</p>
<p>I highly recommend that you consider instead these options:</p>
<p><strong>- Determine whether your object includes references that aren't used</strong></p>
<p><strong>- Break up your resource</strong></p>
<p><strong>- Consider a volume mount instead</strong></p>
<p>There's a request for <a href="https://github.com/kubernetes/kubernetes/issues/88709" rel="nofollow noreferrer">a new API Resource</a>: File (or BinaryData) that could apply to your case. It's very fresh, but it's good to keep an eye on.</p>
<p>Partial source for this answer: <a href="https://stackoverflow.com/a/60492986/12153576" rel="nofollow noreferrer">https://stackoverflow.com/a/60492986/12153576</a></p>
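<p>To get a feel for how close a given workflow object is to that limit, you can measure its serialized size; a rough sketch (the workflow name is a placeholder):</p>
<pre><code>kubectl get workflow <workflow-name> -o json | wc -c
</code></pre>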
|
<p>How do I run a new pipeline with its own isolated Dockerfile in Airflow on Kubernetes?</p>
<p>I've been using Dagster, where I can run new pipelines on their own Dockerfile, but I can't figure out how to do this in Airflow.</p>
| <p>If you want to run a docker container task on Kubernetes using Airflow, regardless of the executor you are using and how you deployed the Airflow server, you can use the <a href="https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/operators.html" rel="nofollow noreferrer">KubernetesPodOperator</a>.</p>
<p>You can specify the docker image by providing the argument <code>image</code>; you can also override the image entrypoint and provide extra args (<code>cmds</code> and <code>arguments</code>). And you can configure your pod as you need (labels, volumes, secrets, configMaps, ...).</p>
|
<p>I am configuring jenkins + jenkins agents in kubernetes using this guide:</p>
<p><a href="https://akomljen.com/set-up-a-jenkins-ci-cd-pipeline-with-kubernetes/" rel="nofollow noreferrer">https://akomljen.com/set-up-a-jenkins-ci-cd-pipeline-with-kubernetes/</a></p>
<p>which gives the below example of a jenkins pipeline using multiple/different containers for different stages:</p>
<pre><code>def label = "worker-${UUID.randomUUID().toString()}"
podTemplate(label: label, containers: [
containerTemplate(name: 'gradle', image: 'gradle:4.5.1-jdk9', command: 'cat', ttyEnabled: true),
containerTemplate(name: 'docker', image: 'docker', command: 'cat', ttyEnabled: true),
containerTemplate(name: 'kubectl', image: 'lachlanevenson/k8s-kubectl:v1.8.8', command: 'cat', ttyEnabled: true),
containerTemplate(name: 'helm', image: 'lachlanevenson/k8s-helm:latest', command: 'cat', ttyEnabled: true)
],
volumes: [
hostPathVolume(mountPath: '/home/gradle/.gradle', hostPath: '/tmp/jenkins/.gradle'),
hostPathVolume(mountPath: '/var/run/docker.sock', hostPath: '/var/run/docker.sock')
]) {
node(label) {
def myRepo = checkout scm
def gitCommit = myRepo.GIT_COMMIT
def gitBranch = myRepo.GIT_BRANCH
def shortGitCommit = "${gitCommit[0..10]}"
def previousGitCommit = sh(script: "git rev-parse ${gitCommit}~", returnStdout: true)
stage('Test') {
try {
container('gradle') {
sh """
pwd
echo "GIT_BRANCH=${gitBranch}" >> /etc/environment
echo "GIT_COMMIT=${gitCommit}" >> /etc/environment
gradle test
"""
}
}
catch (exc) {
println "Failed to test - ${currentBuild.fullDisplayName}"
throw(exc)
}
}
stage('Build') {
container('gradle') {
sh "gradle build"
}
}
stage('Create Docker images') {
container('docker') {
withCredentials([[$class: 'UsernamePasswordMultiBinding',
credentialsId: 'dockerhub',
usernameVariable: 'DOCKER_HUB_USER',
passwordVariable: 'DOCKER_HUB_PASSWORD']]) {
sh """
docker login -u ${DOCKER_HUB_USER} -p ${DOCKER_HUB_PASSWORD}
docker build -t namespace/my-image:${gitCommit} .
docker push namespace/my-image:${gitCommit}
"""
}
}
}
stage('Run kubectl') {
container('kubectl') {
sh "kubectl get pods"
}
}
stage('Run helm') {
container('helm') {
sh "helm list"
}
}
}
}
</code></pre>
<p>But why would you bother with this level of granularity? E.g. why not just have one container that has all you need (jnlp, helm, kubectl, java, etc.) and use that for all your stages?</p>
<p>I know from a purist perspective it's good to keep containers/images as small as possible, but if that's the only argument I would rather have one container and not have to bother my end users (developers writing Jenkinsfiles) with picking the right container - they should not have to worry about stuff at this level; instead they just need to be able to get an agent and that's it.</p>
<p>Or am I missing some <strong>functional</strong> reason for this multiple container setup?</p>
| <p>Using one single image to handle all processes is functionally feasible, but it adds a burden to your <strong>operations</strong>.</p>
<p>We don't always find an image that fulfills all our needs, i.e. the desired tools at the desired versions. Most likely, you are going to build one.</p>
<p>To achieve this, you need to build docker images for different architectures (amd/arm) and maintain/use a docker registry to store your built image; this process can be time consuming as your image gets more complicated. <strong>More importantly, it is very likely that some of your tools 'favour' a particular Linux distro, and you will find it difficult and <em>not always functionally OK.</em></strong></p>
<p>Imagine you need to use a newer version of a docker image in one of your pipeline's steps: you will have to repeat the whole process of building and uploading the image. With separate containers, you only need to change the image version in your pipeline, which minimises your operational effort.</p>
|
<p>I want to create a private Kubernetes registry from this tutorial: <a href="https://www.linuxtechi.com/setup-private-docker-registry-kubernetes/" rel="nofollow noreferrer">https://www.linuxtechi.com/setup-private-docker-registry-kubernetes/</a></p>
<p>I implemented this:</p>
<pre><code>Generate Self-Signed Certificate
cd /opt
sudo mkdir certs
cd certs
sudo touch registry.key
cd /opt
sudo openssl req -newkey rsa:4096 -nodes -sha256 -keyout \
./certs/registry.key -x509 -days 365 -out ./certs/registry.crt
ls -l certs/
Create registry folder
cd /opt
mkdir registry
</code></pre>
<p>Copy-paste <code>private-registry.yaml</code> into /opt/registry</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: private-repository-k8s
labels:
app: private-repository-k8s
spec:
replicas: 1
selector:
matchLabels:
app: private-repository-k8s
template:
metadata:
labels:
app: private-repository-k8s
spec:
volumes:
- name: certs-vol
hostPath:
path: /opt/certs
type: Directory
- name: registry-vol
hostPath:
path: /opt/registry
type: Directory
containers:
- image: registry:2
name: private-repository-k8s
imagePullPolicy: IfNotPresent
env:
- name: REGISTRY_HTTP_TLS_CERTIFICATE
value: "/certs/registry.crt"
- name: REGISTRY_HTTP_TLS_KEY
value: "/certs/registry.key"
ports:
- containerPort: 5000
volumeMounts:
- name: certs-vol
mountPath: /certs
- name: registry-vol
mountPath: /var/lib/registry
kubernetes@kubernetes1:/opt/registry$ kubectl create -f private-registry.yaml
deployment.apps/private-repository-k8s created
kubernetes@kubernetes1:/opt/registry$ kubectl get deployments private-repository-k8s
NAME READY UP-TO-DATE AVAILABLE AGE
private-repository-k8s 0/1 1 0 12s
kubernetes@kubernetes1:/opt/registry$
</code></pre>
<p>I have the following questions:</p>
<ol>
<li><p>I have a control plane and 2 work nodes. Is it possible to have a folder located only on the control plane under <code>/opt/registry</code> and deploy images on all work nodes without using shared folders?</p>
</li>
<li><p>As alternative more resilient solution I want to have a control plane and 2 work nodes. Is it possible to have a folder located on all work nodes and on the control plane under <code>/opt/registry</code> and deploy images on all work nodes without using manually created shared folders? I want Kubernetes to manage repository replication on all nodes. i.e data into <code>/opt/registry</code> to be synchronized automatically by Kubernetes.</p>
</li>
<li><p>Do you know how I can debug this configuration? As you can see pod is not starting.</p>
</li>
</ol>
<p>EDIT: Log file:</p>
<pre><code>kubernetes@kubernetes1:/opt/registry$ kubectl logs private-repository-k8s-6ddbcd9c45-s6dfq
Error from server (BadRequest): container "private-repository-k8s" in pod "private-repository-k8s-6ddbcd9c45-s6dfq" is waiting to start: ContainerCreating
kubernetes@kubernetes1:/opt/registry$
</code></pre>
<p><strong>Attempt 2:</strong></p>
<p>I tried this configuration deployed from control plane:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv1
spec:
capacity:
storage: 256Mi # specify your own size
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Retain
local:
path: /opt/registry # can be any path
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions: # specify the node label which maps to your control-plane node.
- key: kubernetes1
operator: In
values:
- controlplane-1
accessModes:
- ReadWriteOnce # only 1 node will read/write on the path.
# - ReadWriteMany # multiple nodes will read/write on the path
</code></pre>
<p>Note: the control plane hostname is <code>kubernetes1</code>, so I changed the value in the above configuration accordingly. I get this:</p>
<pre><code>kubernetes@kubernetes1:~$ cd /opt/registry
kubernetes@kubernetes1:/opt/registry$ kubectl create -f private-registry1.yaml
persistentvolume/pv1 created
kubernetes@kubernetes1:/opt/registry$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default private-repository-k8s-6ddbcd9c45-s6dfq 0/1 ContainerCreating 0 2d1h
kube-system calico-kube-controllers-58dbc876ff-dgs77 1/1 Running 4 (125m ago) 2d13h
kube-system calico-node-czmzc 1/1 Running 4 (125m ago) 2d13h
kube-system calico-node-q4lxz 1/1 Running 4 (125m ago) 2d13h
kube-system coredns-565d847f94-k94z2 1/1 Running 4 (125m ago) 2d13h
kube-system coredns-565d847f94-nt27m 1/1 Running 4 (125m ago) 2d13h
kube-system etcd-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kube-system kube-apiserver-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kube-system kube-controller-manager-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kube-system kube-proxy-97djs 1/1 Running 5 (125m ago) 2d13h
kube-system kube-proxy-d8bzs 1/1 Running 4 (125m ago) 2d13h
kube-system kube-scheduler-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kubernetes@kubernetes1:/opt/registry$ kubectl logs private-repository-k8s-6ddbcd9c45-s6dfq
Error from server (BadRequest): container "private-repository-k8s" in pod "private-repository-k8s-6ddbcd9c45-s6dfq" is waiting to start: ContainerCreating
</code></pre>
<p>Unfortunately again the image is not created.</p>
| <p>For 1st question, you can try creating <code>PersistentVolume</code> with node affinity set to specific controlplane node and tie it with the deployment via <code>PersistentVolumeClaim</code>.Here's an example:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv1
spec:
capacity:
storage: 256Mi # specify your own size
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Retain
local:
path: /opt/registry # can be any path
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions: # specify the node label which maps to your control-plane node.
- key: kubernetes.io/hostname
operator: In
values:
- controlplane-1
accessModes:
- ReadWriteOnce # only 1 node will read/write on the path.
# - ReadWriteMany # multiple nodes will read/write on the path
</code></pre>
<hr />
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv1-claim
spec: # should match specs added in the PersistenVolume
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 256Mi
</code></pre>
<hr />
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: private-repository-k8s
labels:
app: private-repository-k8s
spec:
replicas: 1
selector:
matchLabels:
app: private-repository-k8s
template:
metadata:
labels:
app: private-repository-k8s
spec:
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: pv1-claim # specify the PVC that you've created. PVC and Deployment must be in same namespace.
containers:
- image: registry:2
name: private-repository-k8s
imagePullPolicy: IfNotPresent
env:
- name: REGISTRY_HTTP_TLS_CERTIFICATE
value: "/certs/registry.crt"
- name: REGISTRY_HTTP_TLS_KEY
value: "/certs/registry.key"
ports:
- containerPort: 5000
volumeMounts:
- name: task-pv-storage
mountPath: /opt/registry
</code></pre>
<p>For question # 2, can you share the logs of your pod?</p>
|
<p>I am trying to configure TLS with edge termination on OpenShift. I am passing the TLS certificates and private key in values.yaml and referring to them in the route.yaml file. When I execute the Helm chart, the creation of the route fails due to improper indentation and newlines introduced while copying the certificate from values.yaml to route.yaml.</p>
<p>Below is the values.yaml file from which I am referring to the certificate in the route.yaml file. What is a better way to do this? How can I pass the TLS cert and private key from values.yaml with proper indentation?</p>
<p>Values.yaml</p>
<pre><code>route:
Enabled: true
annotations:
haproxy.router.openshift.io/cookie_name: SESSION_XLD
haproxy.router.openshift.io/disable_cookies: "false"
haproxy.router.openshift.io/rewrite-target: /
path: /
hosts:
- www.example.com
tls:
key:
-----BEGIN CERTIFICATE-----
[...]
-----END CERTIFICATE-----
certificate:
-----BEGIN CERTIFICATE-----
[...]
-----END CERTIFICATE-----
caCertificate:
-----BEGIN CERTIFICATE-----
[...]
-----END CERTIFICATE-----
insecureEdgeTerminationPolicy: Redirect
</code></pre>
<p>route.yaml</p>
<pre><code>{{- if $.Values.route.tls }}
tls:
termination: edge
{{- with $.Values.route.tls }}
key: |
{{ .key }}
certificate: |
{{ .certificate }}
caCertificate: |
{{ .caCertificate }}
insecureEdgeTerminationPolicy: {{ .insecureEdgeTerminationPolicy }}
{{- end }}
{{- end }}
</code></pre>
| <p>Add <code>|</code> before the certificate as in @Daein's answer, but also add <strong>quote</strong> in the route YAML: <code>certificate: {{ .certificate | quote }}</code></p>
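<p>For reference, another common way to handle multi-line values such as certificates in Helm templates is to indent them explicitly. This is only a sketch of the tls block written that way; it assumes the values in values.yaml are defined as block scalars (<code>key: |-</code> etc.) and that the indentation width matches where the block sits in your route template:</p>
<pre><code>{{- with .Values.route.tls }}
tls:
  termination: edge
  key: |
{{ .key | indent 4 }}
  certificate: |
{{ .certificate | indent 4 }}
  caCertificate: |
{{ .caCertificate | indent 4 }}
  insecureEdgeTerminationPolicy: {{ .insecureEdgeTerminationPolicy }}
{{- end }}
</code></pre>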
|
<p>Sorry, I'm quite a noob at Go, so I hope this isn't a stupid question. I know pointers in a general sense but I'm struggling with Go semantics.</p>
<p>I can't get this to work:</p>
<pre><code>func DeleteOldCronJob(c client.Client, ctx context.Context, namespace string, name string) error {
cronJob := batchv1beta1.CronJob{}
key := client.ObjectKey{
Namespace: namespace,
Name: name,
}
return DeleteOldAny(c, ctx, name, key, &cronJob)
}
func DeleteOldAny(c client.Client, ctx context.Context, name string, key client.ObjectKey, resource interface{}) error {
err := c.Get(ctx, key, resource)
if err == nil {
err := c.Delete(ctx, resource)
if err != nil {
return err
}
} else {
return err
}
return nil
}
</code></pre>
<p>I get an error:</p>
<pre><code>interface {} does not implement "k8s.io/apimachinery/pkg/runtime".Object (missing DeepCopyObject method)
</code></pre>
<p>The point is so that I can reuse DeleteOldAny on multiple different types, to make my codebase more compact (otherwise I could just copy+paste DeleteOldCronJob and change the type).
As far as I have read, pointers to interfaces in Go are usually wrong. Also, <a href="https://pkg.go.dev/k8s.io/api/batch/v1beta1?utm_source=gopls#CronJob" rel="nofollow noreferrer">the k8s type I'm importing</a> is just a struct. So, since it's a struct and not an interface, I thought I should pass resource as a pointer, like:</p>
<pre><code> err := c.Get(ctx, key, &resource)
</code></pre>
<p>But that gives me another error:</p>
<pre><code>*interface {} is pointer to interface, not interface
</code></pre>
<p>So I'm a bit stuck. Am I doomed to copy+paste the same function for each type or is it a simple syntax mistake I'm making?</p>
| <pre><code>func (client.Reader).Get(ctx context.Context, key types.NamespacedName, obj client.Object) error
Get retrieves an obj for the given object key from the Kubernetes Cluster. obj must be a struct pointer so that obj can be updated with the response returned by the Server.
</code></pre>
<p>So the idea is right: take a modular approach and stop repeating the code.</p>
<p>But the implementation is wrong: the obj is the resource you are trying to fetch from the cluster, and it should be passed to Get as a pointer to the concrete struct.</p>
<pre><code>err := c.Get(ctx, key, &resource)
</code></pre>
<p>Here <strong>resource</strong> should be the concrete struct (not an <code>interface{}</code>), since <strong>Get, Delete, etc.</strong> expect a pointer to the respective object to be passed.</p>
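<p>One way to keep the helper generic while satisfying the compiler is to type the parameter as <code>client.Object</code> (the interface shown in the signature above) instead of <code>interface{}</code>; pointers to generated API types such as <code>*batchv1beta1.CronJob</code> already implement it. This is only a sketch of that approach, not the answer's exact code:</p>
<pre><code>// Sketch: resource is typed as client.Object, which pointers to generated
// API types (e.g. *batchv1beta1.CronJob) satisfy.
package cleanup

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"
)

func DeleteOldAny(c client.Client, ctx context.Context, key client.ObjectKey, resource client.Object) error {
	// Fetch the object first; if it does not exist, Get returns an error.
	if err := c.Get(ctx, key, resource); err != nil {
		return err
	}
	// Delete the object we just fetched.
	return c.Delete(ctx, resource)
}

// Usage keeps the caller's shape, e.g.:
//   err := DeleteOldAny(c, ctx, key, &batchv1beta1.CronJob{})
</code></pre>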
|
<p>I am using the kubernetes operator to create a custom resource in the cluster, the CR has the <code>Status</code> field populated, but when the object gets created the <code>Status</code> field is empty.</p>
<p>This is how I am creating the CR:</p>
<pre><code>reconcile.Create(ctx, &object)
</code></pre>
<p>This is what I am trying to accomplish with k8s operator:</p>
<p><a href="https://i.stack.imgur.com/IiurP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IiurP.png" alt="enter image description here" /></a></p>
| <p>The architecture of Kubernetes API and resources follows a pattern.</p>
<ol>
<li><p>Clients may create resources by specifying a <em>desired state</em> (this is the <code>spec:</code> part of a resource). This is a "create" request sent to the API Server.</p>
</li>
<li><p>Controllers subscribe/watch for changes to resources and, while doing actions in a <em>reconciliation loop</em>, they might update the Status of the resource (this is the <code>status:</code> part of the resource).</p>
</li>
</ol>
<p>For an example of how a controller is implemented and updates the status, see the <a href="https://book.kubebuilder.io/cronjob-tutorial/controller-implementation.html#2-list-all-active-jobs-and-update-the-status" rel="nofollow noreferrer">Kubebuilder book: Implementing a Controller - Update the Status</a>.</p>
<p>The client in the example is a "controller runtime client":</p>
<pre><code>"sigs.k8s.io/controller-runtime/pkg/client"
</code></pre>
<p>Example code, where the <em>reconciler</em> updates the <code>status</code> sub-resource:</p>
<pre><code>if err := r.Status().Update(ctx, &cronJob); err != nil {
log.Error(err, "unable to update CronJob status")
return ctrl.Result{}, err
}
</code></pre>
|
<p>Prometheus has metrics such as <code>container_cpu_usage_seconds_total</code>. However, they are only grouped by pod. How can I group them by deployment/cronjobs/etc?</p>
| <p>I was able to handle this with the following query:</p>
<p><code>((label_replace((rate(container_cpu_usage_seconds_total{image!=""}[2m]) * on(pod) group_left(owner_name) (sum without (instance) (kube_pod_owner))), "replicaset", "$1", "owner_name", "(.*)")) * on(replicaset) group_left(owner_name) (sum without (instance) (kube_replicaset_owner{})))</code></p>
<p>Here is the explanation:</p>
<ul>
<li>Join <code>container_cpu_usage_seconds_total</code> with <code>kube_pod_owner</code> on <code>pod</code></li>
<li>Copy over the <code>owner_name</code> from <code>kube_pod_owner</code></li>
<li>Use <code>label_replace</code> to rename <code>kube_pod_owner</code>'s <code>owner_name</code> to <code>replicaset</code></li>
<li>Join that with kube_replicaset_owner on <code>replicaset</code></li>
<li>Copy over the <code>owner_name</code> from <code>kube_replicaset_owner</code> (this value is your deployment etc)</li>
</ul>
<p>The <code>without (instance)</code> are used to remove the <code>instance</code> field from the joined sets. Because there can be multiple instances for a single deployment, this can cause issues.</p>
<p>Lastly, the <code>rate</code> function is called on <code>container_cpu_usage_seconds_total</code> directly at the innermost area because otherwise Prometheus complains about <code>parse error: ranges only allowed for vector selectors</code>. Placing it in the innermost area is a workaround.</p>
|
<p>I have a single values.yaml file with the following two containers:</p>
<pre><code>...
nginx:
image:
repository: _ADDRESS_
tag: stable
pullPolicy: IfNotPresent
flask:
image:
repository: _ADDRESS_
tag: stable
pullPolicy: IfNotPresent
...
</code></pre>
<p>Is it best practice to duplicate the following code manually within the deployment file for each container and just change the variables, or is there a more standard way of dealing with this? It feels as if it limits the abstraction, so I was just curious whether I am doing this wrong.</p>
<pre><code>- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 80
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
readinessProbe:
httpGet:
path: /
port: http
resources:
{{- toYaml .Values.resources | nindent 12 }}
</code></pre>
<p>Thank you for your time. If there are any details I can add to simplify my question, please don't hesitate to mention them and I will edit the post.</p>
| <blockquote>
<p>Is it best practice to duplicate the following code manually within the deployment file for each container</p>
</blockquote>
<p>Yes.</p>
<p>In principle it's possible to write a generic Helm template that produces a Deployment YAML specification. But then you'll run into a problem where the Flask application listens on port 5000 but the Nginx server uses port 80, and the Flask application has a dedicated <code>/health</code> endpoint but the Nginx server should just probe <code>/</code>, and so on, and you'd need to make these differences visible in the <code>values.yaml</code>. This has two problems: you're exposing fixed details of the application as configuration, and in effect you're republishing the entire Kubernetes YAML structure as Helm values.</p>
<p>For some things that really do get repeated over and over, you could use a helper template; for example</p>
<pre class="lang-yaml prettyprint-override"><code>{{- define "container.common" -}}
securityContext: {{- toYaml .Values.securityContext | nindent 2 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
resources: {{- toYaml .Values.resources | nindent 2 }}
{{- end -}}
- name: {{ .Chart.Name }}
{{ include "container.common" . | indent 2 }}
ports: { ... }
livenessProbe: { ... }
readinessProbe: { ... }
</code></pre>
<p>But for details like <code>ports:</code> and the probes, again, these are fixed properties of the image (the Flask application will <em>always</em> listen on port 5000 and the end user will never need to configure it) and it's appropriate to write it in the <code>templates/flask-deployment.yaml</code> file, even if it looks very similar to settings in other files.</p>
|
<p>I want to create a private Kubernetes registry from this tutorial: <a href="https://www.linuxtechi.com/setup-private-docker-registry-kubernetes/" rel="nofollow noreferrer">https://www.linuxtechi.com/setup-private-docker-registry-kubernetes/</a></p>
<p>I implemented this:</p>
<pre><code>Generate Self-Signed Certificate
cd /opt
sudo mkdir certs
cd certs
sudo touch registry.key
cd /opt
sudo openssl req -newkey rsa:4096 -nodes -sha256 -keyout \
./certs/registry.key -x509 -days 365 -out ./certs/registry.crt
ls -l certs/
Create registry folder
cd /opt
mkdir registry
</code></pre>
<p>Copy-paste <code>private-registry.yaml</code> into /opt/registry</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: private-repository-k8s
labels:
app: private-repository-k8s
spec:
replicas: 1
selector:
matchLabels:
app: private-repository-k8s
template:
metadata:
labels:
app: private-repository-k8s
spec:
volumes:
- name: certs-vol
hostPath:
path: /opt/certs
type: Directory
- name: registry-vol
hostPath:
path: /opt/registry
type: Directory
containers:
- image: registry:2
name: private-repository-k8s
imagePullPolicy: IfNotPresent
env:
- name: REGISTRY_HTTP_TLS_CERTIFICATE
value: "/certs/registry.crt"
- name: REGISTRY_HTTP_TLS_KEY
value: "/certs/registry.key"
ports:
- containerPort: 5000
volumeMounts:
- name: certs-vol
mountPath: /certs
- name: registry-vol
mountPath: /var/lib/registry
kubernetes@kubernetes1:/opt/registry$ kubectl create -f private-registry.yaml
deployment.apps/private-repository-k8s created
kubernetes@kubernetes1:/opt/registry$ kubectl get deployments private-repository-k8s
NAME READY UP-TO-DATE AVAILABLE AGE
private-repository-k8s 0/1 1 0 12s
kubernetes@kubernetes1:/opt/registry$
</code></pre>
<p>I have the following questions:</p>
<ol>
<li><p>I have a control plane and 2 work nodes. Is it possible to have a folder located only on the control plane under <code>/opt/registry</code> and deploy images on all work nodes without using shared folders?</p>
</li>
<li><p>As alternative more resilient solution I want to have a control plane and 2 work nodes. Is it possible to have a folder located on all work nodes and on the control plane under <code>/opt/registry</code> and deploy images on all work nodes without using manually created shared folders? I want Kubernetes to manage repository replication on all nodes. i.e data into <code>/opt/registry</code> to be synchronized automatically by Kubernetes.</p>
</li>
<li><p>Do you know how I can debug this configuration? As you can see pod is not starting.</p>
</li>
</ol>
<p>EDIT: Log file:</p>
<pre><code>kubernetes@kubernetes1:/opt/registry$ kubectl logs private-repository-k8s-6ddbcd9c45-s6dfq
Error from server (BadRequest): container "private-repository-k8s" in pod "private-repository-k8s-6ddbcd9c45-s6dfq" is waiting to start: ContainerCreating
kubernetes@kubernetes1:/opt/registry$
</code></pre>
<p><strong>Attempt 2:</strong></p>
<p>I tried this configuration deployed from control plane:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv1
spec:
capacity:
storage: 256Mi # specify your own size
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Retain
local:
path: /opt/registry # can be any path
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions: # specify the node label which maps to your control-plane node.
- key: kubernetes1
operator: In
values:
- controlplane-1
accessModes:
- ReadWriteOnce # only 1 node will read/write on the path.
# - ReadWriteMany # multiple nodes will read/write on the path
</code></pre>
<p>Note: the control plane hostname is <code>kubernetes1</code>, so I changed the value in the above configuration accordingly. I get this:</p>
<pre><code>kubernetes@kubernetes1:~$ cd /opt/registry
kubernetes@kubernetes1:/opt/registry$ kubectl create -f private-registry1.yaml
persistentvolume/pv1 created
kubernetes@kubernetes1:/opt/registry$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default private-repository-k8s-6ddbcd9c45-s6dfq 0/1 ContainerCreating 0 2d1h
kube-system calico-kube-controllers-58dbc876ff-dgs77 1/1 Running 4 (125m ago) 2d13h
kube-system calico-node-czmzc 1/1 Running 4 (125m ago) 2d13h
kube-system calico-node-q4lxz 1/1 Running 4 (125m ago) 2d13h
kube-system coredns-565d847f94-k94z2 1/1 Running 4 (125m ago) 2d13h
kube-system coredns-565d847f94-nt27m 1/1 Running 4 (125m ago) 2d13h
kube-system etcd-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kube-system kube-apiserver-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kube-system kube-controller-manager-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kube-system kube-proxy-97djs 1/1 Running 5 (125m ago) 2d13h
kube-system kube-proxy-d8bzs 1/1 Running 4 (125m ago) 2d13h
kube-system kube-scheduler-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kubernetes@kubernetes1:/opt/registry$ kubectl logs private-repository-k8s-6ddbcd9c45-s6dfq
Error from server (BadRequest): container "private-repository-k8s" in pod "private-repository-k8s-6ddbcd9c45-s6dfq" is waiting to start: ContainerCreating
</code></pre>
<p>Unfortunately again the image is not created.</p>
| <p>You can try with following file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: private-repository-k8s
labels:
app: private-repository-k8s
spec:
replicas: 1
selector:
matchLabels:
app: private-repository-k8s
template:
metadata:
labels:
app: private-repository-k8s
spec:
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: pv1-claim # specify the PVC that you've created. PVC and Deployment must be in same namespace.
containers:
- image: registry:2
name: private-repository-k8s
imagePullPolicy: IfNotPresent
env:
- name: REGISTRY_HTTP_TLS_CERTIFICATE
value: "/certs/registry.crt"
- name: REGISTRY_HTTP_TLS_KEY
value: "/certs/registry.key"
ports:
- containerPort: 5000
volumeMounts:
- name: task-pv-storage
mountPath: /opt/registry
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv1-claim
spec: # should match specs added in the PersistenVolume
accessModes:
- ReadWriteMany
volumeMode: Filesystem
resources:
requests:
storage: 256Mi
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv1
spec:
capacity:
storage: 256Mi # specify your own size
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Retain
local:
path: /opt/registry # can be any path
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions: # specify the node label which maps to your control-plane node.
- key: kubernetes.io/hostname
operator: In
values:
- kubernetes1
accessModes:
- ReadWriteMany
</code></pre>
|
| <p>If we have a role change in the team, I read that the EKS creator can NOT be transferred. Can we instead rename the creator's IAM user name via the AWS CLI? Will that break EKS?</p>
<p>I can only find ways to add a new user using the configmap, but this configmap doesn't have the creator/root user in it.</p>
<pre><code>$ kubectl edit configmap aws-auth --namespace kube-system
</code></pre>
| <p>There is no way to transfer the creator of an EKS cluster to another IAM user. The only way to do this would be to delete the cluster and recreate it with the new IAM user as the creator.</p>
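<p>Independent of that, you can grant equivalent cluster-admin access to another IAM user through the <code>aws-auth</code> ConfigMap mentioned in the question, so the team is not dependent on the creator's identity. A rough sketch (the account ID and user name are placeholders):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/new-admin
      username: new-admin
      groups:
        - system:masters
</code></pre>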
|
<p>I use this manifest configuration to deploy a registry into a 3-node Kubernetes cluster:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv1
namespace: registry-space
spec:
capacity:
storage: 5Gi # specify your own size
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Retain
local:
path: /opt/registry # can be any path
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- kubernetes2
accessModes:
- ReadWriteMany # only 1 node will read/write on the path.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv1-claim
namespace: registry-space
spec: # should match specs added in the PersistenVolume
accessModes:
- ReadWriteMany
volumeMode: Filesystem
resources:
requests:
storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: private-repository-k8s
namespace: registry-space
labels:
app: private-repository-k8s
spec:
replicas: 1
selector:
matchLabels:
app: private-repository-k8s
template:
metadata:
labels:
app: private-repository-k8s
spec:
volumes:
- name: certs-vol
hostPath:
path: /opt/certs
type: Directory
- name: task-pv-storage
persistentVolumeClaim:
claimName: pv1-claim # specify the PVC that you've created. PVC and Deployment must be in same namespace.
containers:
- image: registry:2
name: private-repository-k8s
imagePullPolicy: IfNotPresent
env:
- name: REGISTRY_HTTP_TLS_CERTIFICATE
value: "/opt/certs/registry.crt"
- name: REGISTRY_HTTP_TLS_KEY
value: "/opt/certs/registry.key"
ports:
- containerPort: 5000
volumeMounts:
- name: certs-vol
mountPath: /opt/certs
- name: task-pv-storage
mountPath: /opt/registry
</code></pre>
<p>I manually created directories on every node under <code>/opt/certs</code> and <code>/opt/registry</code>.</p>
<p>But when I try to deploy the manifest without hardcoded <code>nodeSelectorTerms</code> on the control plane, I get an error:</p>
<pre><code>kubernetes@kubernetes1:/opt/registry$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-58dbc876ff-fsjd5 1/1 Running 1 (74m ago) 84m
kube-system calico-node-5brzt 1/1 Running 1 (73m ago) 84m
kube-system calico-node-nph9n 1/1 Running 1 (76m ago) 84m
kube-system calico-node-pcd74 1/1 Running 1 (74m ago) 84m
kube-system calico-node-ph2ht 1/1 Running 1 (76m ago) 84m
kube-system coredns-565d847f94-7pswp 1/1 Running 1 (74m ago) 105m
kube-system coredns-565d847f94-tlrfr 1/1 Running 1 (74m ago) 105m
kube-system etcd-kubernetes1 1/1 Running 2 (74m ago) 105m
kube-system kube-apiserver-kubernetes1 1/1 Running 2 (74m ago) 105m
kube-system kube-controller-manager-kubernetes1 1/1 Running 2 (74m ago) 105m
kube-system kube-proxy-4slm4 1/1 Running 1 (76m ago) 86m
kube-system kube-proxy-4tnx2 1/1 Running 2 (74m ago) 105m
kube-system kube-proxy-9dgsj 1/1 Running 1 (73m ago) 85m
kube-system kube-proxy-cgr44 1/1 Running 1 (76m ago) 86m
kube-system kube-scheduler-kubernetes1 1/1 Running 2 (74m ago) 105m
registry-space private-repository-k8s-6d5d954b4f-xkmj5 0/1 Pending 0 4m55s
kubernetes@kubernetes1:/opt/registry$
</code></pre>
<p>Do you know how I can let Kubernetes decide where to deploy the pod?</p>
| <p>It seems like your node has taints, hence pods are not getting scheduled. Can you try using this command to remove the taints from your node?</p>
<pre><code>kubectl taint nodes <node-name> node-role.kubernetes.io/master-
</code></pre>
<p>or</p>
<pre><code>kubectl taint nodes --all node-role.kubernetes.io/master-
</code></pre>
<p>To get the node name use <code>kubectl get nodes</code></p>
<p>The user was able to get the pod scheduled after running the command below:</p>
<pre><code>kubectl taint nodes kubernetes1 node-role.kubernetes.io/control-plane:NoSchedule-
</code></pre>
<p>Now the pod is failing due to CrashLoopBackOff, which implies the pod has been scheduled.</p>
<p>Can you please check if this pod is getting scheduled and running properly?</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: nginx1
namespace: test
spec:
containers:
- name: webserver
image: nginx:alpine
ports:
- containerPort: 80
resources:
requests:
memory: "64Mi"
cpu: "200m"
limits:
memory: "128Mi"
cpu: "350m"
</code></pre>
|
<p>I have deployed an AWS ALB Controller and I create listeners with ingress resources in a EKS cluster.</p>
<p>The steps I followed are the following:</p>
<ul>
<li>I had an ingress for a service named <code>first-test-api</code> and all was fine</li>
<li>I deployed a new Helm release [<code>first</code>], just renaming the chart from <code>test-api</code> to <code>main-api</code>. So now it is <code>first-main-api</code>.</li>
<li>Nothing seems to break in terms of k8s resources, but...</li>
<li>the <code>test-api.mydomain.com</code> listener in the AWS ALB is stuck to the old service</li>
</ul>
<p>Has anyone encountered such a thing before?</p>
<p>I could delete the listener manually, but I don't want to. I'd like to know what is happening and why it didn't happen automatically :)</p>
<p>EDIT:</p>
<p>The ingress had an ALB annotation that enabled the deletion protection.</p>
| <p>I will provide some generic advice on things I would look at, but it might be better to detail a small example.</p>
<p>Yes, ALB controller should automatically manage changes on the backend.</p>
<p>I would suggest ignoring the helm chart and looking into the actual objects:</p>
<ul>
<li><code>kubectl get ing -n <namespace></code> shows the ingress you are expecting?</li>
<li><code>kubectl get ing -n <ns> <name of ingress> -o yaml</code> points to the correct/new service?</li>
<li><code>kubectl get svc -n <ns> <name of new svc></code> shows the new service?</li>
<li><code>kubectl get endpoints -n <ns> <name of new svc></code> shows the pod you are expecting?</li>
</ul>
<p>And then gut feeling.</p>
<ol>
<li>Check that the labels in your new service are different from the labels in the old service if you expect the two services to serve different things.</li>
<li>Get the logs of the ALB controller. You will see registering/deregistering activity, and sometimes errors, especially if the role of the node/service account doesn't have the proper IAM permissions.</li>
</ol>
<p>Happy to modify the answer if you expand the question with more details.</p>
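<p>Since the edit mentions that deletion protection was enabled via an annotation, this is roughly what such an annotation looks like on the Ingress (a sketch; deletion protection stops the controller from deleting or replacing the ALB it manages until it is disabled):</p>
<pre><code>metadata:
  annotations:
    alb.ingress.kubernetes.io/load-balancer-attributes: deletion_protection.enabled=true
</code></pre>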
|
<p>Environment: CentOS 7.0.
I want to build a K8s cluster that has 3 nodes, one of which is the master. Every pod's status is Running, but there is a pod named test-claim that is Pending, and the PVC is in Pending status:
<a href="https://i.stack.imgur.com/w6aE2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/w6aE2.png" alt="enter image description here" /></a></p>
<p>Then I edited /etc/kubernetes/manifests/kube-apiserver.yaml and added a row:</p>
<pre><code> - --feature-gates=RemoveSelfLink=false
</code></pre>
<p>Then this problem appeared:</p>
<pre><code>The connection to the server master:6443 was refused - did you specify the right host or port?
</code></pre>
<p>Is there any problem I am missing? Why did the kubelet crash? Could somebody answer this question? Thanks a lot in advance.</p>
<p>This is the content of kube-apiserver.yaml; can you find any syntactic error?</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
annotations:
kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.199.13:6443
creationTimestamp: null
labels:
component: kube-apiserver
tier: control-plane
name: kube-apiserver
namespace: kube-system
spec:
containers:
- command:
- kube-apiserver
- --advertise-address=192.168.199.13
- --allow-privileged=true
- --authorization-mode=Node,RBAC
- --client-ca-file=/etc/kubernetes/pki/ca.crt
- --enable-admission-plugins=NodeRestriction
- --enable-bootstrap-token-auth=true
- --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
- --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
- --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
- --etcd-servers=https://127.0.0.1:2379
- --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
- --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
- --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
- --requestheader-allowed-names=front-proxy-client
- --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
- --requestheader-extra-headers-prefix=X-Remote-Extra-
- --requestheader-group-headers=X-Remote-Group
- --requestheader-username-headers=X-Remote-User
- --secure-port=6443
- --service-account-issuer=https://kubernetes.default.svc.cluster.local
- --service-account-key-file=/etc/kubernetes/pki/sa.pub
- --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
- --service-cluster-ip-range=10.1.0.0/16
- --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
- --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
- --feature-gates=RemoveSelfLink=false
image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.24.3
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 8
httpGet:
host: 192.168.199.13
path: /livez
port: 6443
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
name: kube-apiserver
readinessProbe:
failureThreshold: 3
httpGet:
host: 192.168.199.13
path: /readyz
port: 6443
scheme: HTTPS
periodSeconds: 1
timeoutSeconds: 15
resources:
requests:
cpu: 250m
startupProbe:
failureThreshold: 24
httpGet:
host: 192.168.199.13
path: /livez
port: 6443
scheme: HTTPS
initialDelaySeconds: 10
periodSeconds: 10
timeoutSeconds: 15
volumeMounts:
- mountPath: /etc/ssl/certs
name: ca-certs
readOnly: true
- mountPath: /etc/pki
name: etc-pki
readOnly: true
- mountPath: /etc/kubernetes/pki
name: k8s-certs
readOnly: true
hostNetwork: true
priorityClassName: system-node-critical
securityContext:
seccompProfile:
type: RuntimeDefault
volumes:
- hostPath:
path: /etc/ssl/certs
type: DirectoryOrCreate
name: ca-certs
- hostPath:
path: /etc/pki
type: DirectoryOrCreate
name: etc-pki
- hostPath:
path: /etc/kubernetes/pki
type: DirectoryOrCreate
name: k8s-certs
status: {}
</code></pre>
| <p>I finally found the solution. You don't need to edit the /etc/kubernetes/manifests/kube-apiserver.yaml file to add the row <code>- --feature-gates=RemoveSelfLink=false</code>; that did not help me. The solution is:</p>
<p>1. Pull the image:</p>
<pre><code>docker pull vbouchaud/nfs-client-provisioner
</code></pre>
<p>Status: Downloaded newer image for vbouchaud/nfs-client-provisioner:latest
docker.io/vbouchaud/nfs-client-provisioner:latest</p>
<p>2. Edit your deployment.yaml file (<code>vi deployment.yaml</code>) and change the image from quay.io/external_storage/nfs-client-provisioner:latest to docker.io/vbouchaud/nfs-client-provisioner:latest:
<a href="https://i.stack.imgur.com/MpGGP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MpGGP.png" alt="enter image description here" /></a></p>
<p>3. Apply it:</p>
<pre><code>kubectl apply -f deployment.yaml
</code></pre>
<p>Finally, the PVC's state would change from Pending to Bound like this:
<a href="https://i.stack.imgur.com/IKSPg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IKSPg.png" alt="enter image description here" /></a></p>
|
<p>In my project, we let developers update a repo containing all of the kubernetes manifests. The repo uses kustomize. I've decided to add a validation / lint step to our CI to catch mistakes early.</p>
<p>To do so, I'm trying to run <code>kustomize build</code> on everything in the repo. Where I'm running into trouble is our use of ksops. In this scenario, it's not important to actually decode the secrets. I don't want to install the appropriate key on the CI server or allow it to be pulled. What I'd really like to do is skip all the ksops stuff. I'm looking for something like this (doesn't seems to exist)</p>
<pre><code>kustomize build --ignore-kind=ksops ./apps/myapp/production
</code></pre>
<p>If I don't skip the ksops stuff, I get this:</p>
<blockquote>
<p>trouble decrypting file Error getting data key: 0 successful groups required, got 0Error: failure in plugin configured via /tmp/kust-plugin-config-24824323; exit status 1: exit status 1</p>
</blockquote>
<p>I noticed that someone else thought this was important too. <a href="https://github.com/argyle-engineering/ksops" rel="nofollow noreferrer">They made a patched version of ksops that can handle my scenario.</a> I'm hoping to do this with the unpatched stuff. Reason: because the folks that come after me will wonder what this is all about.</p>
<hr />
<p>Update: For reference, I'm doing this in Docker.</p>
<p>Trying out larsks' solution, here's the code I tried:</p>
<p>Dockerfile</p>
<pre><code>FROM alpine
RUN apk add bash curl git
RUN curl -s https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh | bash \
&& mv kustomize /usr/bin/kustomize \
&& kustomize version
ENV XDG_CONFIG_HOME=/root/.config
RUN mkdir -p /root/.config/kustomize/plugin
RUN mkdir -p /root/.config/kustomize/plugin/viaduct.ai/v1/ksops \
&& ln -s /bin/true /root/.config/kustomize/plugin/viaduct.ai/v1/ksops/ksops
ENV KUSTOMIZE_PLUGIN_HOME=/root/.config/kustomize/plugin
WORKDIR /code
COPY . /code
RUN ./validate.sh
</code></pre>
<p>validate.sh</p>
<pre><code>#! /bin/bash
set -e
for i in `find . -name kustomization* -type f | grep -v \/base`; do
d=`dirname $i`
echo "$d"
kustomize build --enable-alpha-plugins "$d"
done
</code></pre>
| <p>The solution is to create a dummy filter for processing ksops resources. For example, something like this:</p>
<pre><code>mkdir -p fakeplugins/viaduct.ai/v1/ksops
ln -s /bin/true fakeplugins/viaduct.ai/v1/ksops/ksops
export KUSTOMIZE_PLUGIN_HOME=$PWD/fakeplugins
kustomize build --enable-alpha-plugins
</code></pre>
<p>This will cause <code>kustomize</code> to call <code>/bin/true</code> when it encounters ksops-encrypted resources. You won't have secrets in your output, but it will generate all other resources.</p>
<p>(The above has been tested with kustomize 4.5.5)</p>
<hr />
<p>The reason your code is failing is because you're using a Busybox-based Docker image. Busybox is a multi-call binary; it figures out what applet to run based on the name with which it was called. So while on a normal system, we can run <code>ln -s /bin/true /path/to/ksops</code> and then run <code>/path/to/ksops</code>, this won't work in a Busybox environment: it sees that it's being called as <code>ksops</code> and doesn't know what to do.</p>
<p>Fortunately, that's an easy problem to solve:</p>
<pre><code>FROM alpine
RUN apk add bash curl git
RUN curl -s https://raw.githubusercontent.com/kubernetes-sigs/kustomize/master/hack/install_kustomize.sh | bash \
&& mv kustomize /usr/bin/kustomize \
&& kustomize version
RUN mkdir -p /root/fakeplugins/viaduct.ai/v1/ksops \
&& printf "#!/bin/sh\nexit 0\n" > /root/fakeplugins/viaduct.ai/v1/ksops/ksops \
&& chmod 755 /root/fakeplugins/viaduct.ai/v1/ksops/ksops
ENV KUSTOMIZE_PLUGIN_HOME=/root/fakeplugins
COPY validate.sh /bin/validate-overlays
WORKDIR /code
</code></pre>
<p>And now, given a layout like this:</p>
<pre><code>.
├── Dockerfile
├── example
│ ├── base
│ │ ├── deployment.yaml
│ │ ├── kustomization.yaml
│ │ └── pvc.yaml
│ └── overlay
│ ├── deployment_patch.yaml
│ ├── kustomization.yaml
│ ├── pg-password.enc.yaml
│ └── secret-generator.yaml
└── validate.sh
</code></pre>
<p>I can run from the top directory:</p>
<pre><code>docker run --rm -v $PWD:/code my-kustomize-image validate-overlays
</code></pre>
<hr />
<p>NB: I've slightly modified <code>validate.sh</code> to do the filtering in <code>find</code> rather than piping the output to <code>grep -v</code> :</p>
<pre><code>#!/bin/bash
set -e
find . -name base -prune -o -name kustomization.yaml -print |
while read -r overlay; do
overlay="${overlay%/*}"
echo "$overlay"
kustomize build --enable-alpha-plugins "$overlay"
done
</code></pre>
|
<p>I have a Mac with Apple Silicon (M1) and I have minikube installed. The installation was done following <a href="https://medium.com/@seohee.sophie.kwon/how-to-run-a-minikube-on-apple-silicon-m1-8373c248d669" rel="nofollow noreferrer">https://medium.com/@seohee.sophie.kwon/how-to-run-a-minikube-on-apple-silicon-m1-8373c248d669</a> by executing:</p>
<pre><code>curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-darwin-arm64
sudo install minikube-darwin-arm64 /usr/local/bin/minikube
</code></pre>
<p>How do I remove minikube?</p>
| <p>Have you tried following any online material to delete Minikube? Test if this works for you and let me know if you face any issues.</p>
<p>Try using the below command :</p>
<pre><code>minikube stop; minikube delete &&
docker stop $(docker ps -aq) &&
rm -rf ~/.kube ~/.minikube &&
sudo rm -rf /usr/local/bin/localkube /usr/local/bin/minikube &&
launchctl stop '*kubelet*.mount' &&
launchctl stop localkube.service &&
launchctl disable localkube.service &&
sudo rm -rf /etc/kubernetes/ &&
docker system prune -af --volumes
</code></pre>
<p>Reference used: <a href="https://gist.github.com/rahulkumar-aws/65e6fbe16cc71012cef997957a1530a3" rel="noreferrer">Delete minikube on Mac</a></p>
|
<p>I have deployed a simple app - NGINX - and a LoadBalancer service in Kubernetes.
I can see that the pods are running, as well as the service, but calling the LoadBalancer external IP gives a server error - site can't be reached. Any suggestions please?</p>
<p><strong>app.yaml</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
selector:
matchLabels:
app: nginx
replicas: 2 # tells deployment to run 2 pods matching the template
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.14.2
ports:
- containerPort: 80
</code></pre>
<p><strong>Service.Yaml:</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
type: LoadBalancer
ports:
- port: 80
selector:
app: nginx
</code></pre>
<p><a href="https://i.stack.imgur.com/W2nVy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W2nVy.png" alt="enter image description here" /></a>
P.S. - Attached is the outcome from the terminal.</p>
| <p>If you are using Minikube to access the service, then you might need to run one extra command (see the sketch below). But if this is on a cloud provider, then you have an error in your service file.</p>
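<p>For the Minikube case, that extra command is usually one of these (the service name is taken from the question):</p>
<pre class="lang-bash prettyprint-override"><code># print/open a URL that routes to the service
minikube service nginx-service --url

# or, in a separate terminal, let LoadBalancer services get a reachable external IP
minikube tunnel
</code></pre>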
<p>Please ensure that you indent with two spaces in the yaml file - the indentation of your yaml file is off, as you have only added one space in places. Also, you made a mistake in the last line of the <code>service.yaml</code> file.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-service
spec:
type: LoadBalancer
ports:
- port: 80
selector:
app: nginx
</code></pre>
|
<p>I am trying to run my private docker image along with the <strong>docker-dind</strong> container to be able to run docker commands from the private image in Kubernetes.
My only issue is that the <code>docker run</code> command does not read the docker secrets, so it fails and requires <code>docker login</code> to be run first. How can I pass the credentials to the docker run command?</p>
<p>Here the piece of my Kubernetes deployment:</p>
<pre><code> containers:
- name: docker-private
image: docker:20.10
command: ['docker', 'run', '-p', '80:8000', 'private/image:latest' ]
resources:
requests:
cpu: 10m
memory: 256Mi
env:
- name: DOCKER_HOST
value: tcp://localhost:2375
envFrom:
- secretRef:
name: docker-secret-keys
- name: dind-daemon
image: docker:20.10-dind
command: ["dockerd", "--host", "tcp://127.0.0.1:2375"]
resources:
requests:
cpu: 20m
memory: 512Mi
securityContext:
privileged: true
volumeMounts:
- name: docker-graph-storage
mountPath: /var/lib/docker
</code></pre>
<p><strong>EDIT</strong>
I do have my certificate as a Kubernetes secret that I try to mount to the running docker, but so far without any success :(</p>
<pre><code>apiVersion: v1
data:
.dockerconfigjson: eyJhXXXXXXdoihfc9w8fwpeojfOFwhfoiuwehfo8wfhoi2ehfioewNlcm5hbWUiOiJlbGRhcmVudGas4hti45ytg45hgiVsZGFXXXXXXyQGVudG9yLmlvIiwiYXV0aCI6IlpXeGtZWEpsYm5SdmNqb3dObVl4WmpjM1lTMDVPRFZrTFRRNU5HRXRZVEUzTXkwMk5UYzBObVF4T0RjeFpUWT0ifX19XXXXXXXXXXX
kind: Secret
metadata:
name: staging-docker-keys
namespace: staging
resourceVersion: "6383"
uid: a7yduyd-xxxx-xxxx-xxxx-ae2ede3e4ed
type: kubernetes.io/dockerconfigjson
</code></pre>
<p>The final goal is for the "inner docker" (that runs private/image:latest) to be able to run any docker command without needing to log in before each command.</p>
| <p>docker:dind will create the CA, server and client certs in <code>/certs</code>.
Just create an <code>emptyDir</code> volume to share the certs.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: myapp
labels:
name: myapp
spec:
volumes:
- name: docker-tls-certdir
emptyDir: {}
containers:
- name: docker-private
image: docker:20.10
command: ['docker', 'run', '-p', '80:8000', 'nginx' ]
env:
- name: DOCKER_HOST
value: tcp://localhost:2375
volumeMounts:
- name: docker-tls-certdir
mountPath: /certs
- name: dind-daemon
image: docker:20.10-dind
command: ["dockerd", "--host", "tcp://127.0.0.1:2375"]
securityContext:
privileged: true
volumeMounts:
- name: docker-tls-certdir
mountPath: /certs
</code></pre>
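<p>Once the pod is up, you can sanity-check that the client container reaches the dind daemon, e.g. (pod and container names taken from the manifest above):</p>
<pre class="lang-bash prettyprint-override"><code>kubectl exec -it myapp -c docker-private -- docker info
</code></pre>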
|
<p>I tried to <a href="https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/deployment/resource-providers/standalone/docker/#further-customization" rel="nofollow noreferrer">override the container entry point</a> of a Flink application in a Dockerfile, but it looks like that the <a href="https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-main/" rel="nofollow noreferrer">Apache Flink kubernetes operator</a> ignores it.</p>
<p>The Dockerfile is the following:</p>
<pre><code>FROM flink:1.14.2-scala_2.12-java11
ENV FLINK_HOME=/opt/flink
COPY custom-docker-entrypoint.sh /
RUN chmod a+x /custom-docker-entrypoint.sh
COPY --chown=flink:flink --from=build /target/*.jar /opt/flink/flink-web-upload/
ENTRYPOINT ["/custom-docker-entrypoint.sh"]
EXPOSE 6123 8081
CMD ["help"]
</code></pre>
<p>The definition of the FlinkDeployment uses the new image:</p>
<pre><code>apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
name: flink-example
namespace: default
spec:
image: "flink-example:0.1.0"
#...
</code></pre>
<p>In the description of the pod</p>
<pre><code>kubectl describe pod flink-example
</code></pre>
<p>I see the following output:</p>
<pre><code>Containers:
flink-main-container:
Command:
/docker-entrypoint.sh
</code></pre>
<p>I also tried to define the <code>custom-docker-entrypoint.sh</code> in the main container's <strong>command</strong>:</p>
<pre><code>apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
name: flink-example
namespace: default
spec:
flinkVersion: v1_14
image: "flink-example:0.1.0"
podTemplate:
apiVersion: v1
kind: Pod
metadata:
name: pod-template
spec:
containers:
- name: flink-main-container
# command: [ 'sh','-c','/custom-docker-entrypoint.sh' ]
command: [ "/custom-docker-entrypoint.sh" ]
</code></pre>
<p>Thank you.</p>
| <p>You can overwrite it via:</p>
<pre class="lang-yaml prettyprint-override"><code> flinkConfiguration:
kubernetes.entry.path: "/custom-docker-entrypoint.sh"
</code></pre>
<p>The Operator (by default) uses Flink's native Kubernetes integration. See: <a href="https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#kubernetes-entry-path" rel="nofollow noreferrer">https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#kubernetes-entry-path</a></p>
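<p>Applied to the FlinkDeployment from the question, that would look roughly like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
  name: flink-example
  namespace: default
spec:
  flinkVersion: v1_14
  image: "flink-example:0.1.0"
  flinkConfiguration:
    kubernetes.entry.path: "/custom-docker-entrypoint.sh"
  # ...
</code></pre>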
|
<p>I have got the following template for a job:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: "gpujob"
spec:
completions: 1
backoffLimit: 0
ttlSecondsAfterFinished: 600000
template:
metadata:
name: batch
spec:
volumes:
- name: data
persistentVolumeClaim:
claimName: "test"
containers:
- name: myhub
image: smat-jupyterlab
env:
- name: JUPYTERHUB_COOKIE_SECRET
value: "sdadasdasda"
resources:
requests:
memory: 500Gi
limits:
nvidia.com/gpu: 1
command: ["/bin/bash", "/usr/local/bin/jobscript.sh", smat-job]
volumeMounts:
- name: data
mountPath: /data
restartPolicy: Never
nodeSelector:
dso-node-role: "inference"
</code></pre>
<p>As you can see, I claim a lot of memory for the job. My question is: does the failed pod free the claimed resources as soon as it is in a failed state? Due to regulations, I have to keep pods in the cluster for one week, otherwise I would just set a very low <code>ttlSecondsAfterFinished</code>. I read a lot of contradictory claims in articles, but found nothing in the official docs.</p>
<p><a href="https://i.stack.imgur.com/mxRRB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mxRRB.png" alt="enter image description here" /></a></p>
<p><strong>TL;DR: Does a failed Pod free the resources it claimed from the cluster? If not, what is a good way to do it?</strong></p>
| <p>Yes, a failed or completed job will produce a container in <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-state-terminated" rel="nofollow noreferrer">Terminated</a> state, and therefore the resources allocated to it are freed.</p>
<p>You can easily confirm this by using the command:</p>
<pre><code>kubectl top pod
</code></pre>
<p>You should not see any pod associated with the failed job consuming resources.</p>
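<p>The output will look roughly like this (pod names are just an example) - the pod from the failed job simply no longer shows up:</p>
<pre><code>NAME                          CPU(cores)   MEMORY(bytes)
some-other-workload-7d9f4xk   12m          180Mi
</code></pre>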
|
<p>Does anyone know what I am doing wrong with my Kubernetes secret yaml and why it's not able to successfully create one programmatically?</p>
<p>I am trying to programmatically create a secret in Kubernetes cluster with credentials to pull an image from a private registry but it is failing with the following:</p>
<pre><code>"Secret "secrettest" is invalid: data[.dockerconfigjson]: Invalid value: "<secret contents redacted>": invalid character 'e' looking for beginning of value"
</code></pre>
<p>This is the yaml I tried to use to create the secret with. It is yaml output from a secret previously created in my kubernetes cluster using the command line except without a few unnecessary properties. So I know this is valid yaml:</p>
<pre><code>apiVersion: v1
data:
.dockerconfigjson: eyJhdXRocyI6eyJoZWxsb3dvcmxkLmF6dXJlY3IuaW8iOnsidXNlcm5hbWUiOiJoZWxsbyIsInBhc3N3b3JkIjoid29ybGQiLCJhdXRoIjoiYUdWc2JHODZkMjl5YkdRPSJ9fX0=
kind: Secret
metadata:
name: secrettest
namespace: default
type: kubernetes.io/dockerconfigjson
</code></pre>
<p>This is the decoded value of the &quot;.dockerconfigjson&quot; property, which seems to be throwing the error, though I'm not sure why, given that the value is supposed to be encoded per the documentation:</p>
<pre><code>{"auths":{"helloworld.azurecr.io":{"username":"hello","password":"world","auth":"aGVsbG86d29ybGQ="}}}
</code></pre>
<p>According to the documentation, my yaml is valid, so I'm not sure what the issue is:
<a href="https://i.stack.imgur.com/Cdl2H.png" rel="nofollow noreferrer">Customize secret yaml</a></p>
<p><strong>Note: I tried creating the Secret using the Kubernetes client and "PatchNamespacedSecretWithHttpMessagesAsync" in C#</strong></p>
<p>Referenced documentaion: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a></p>
| <p>I found my issue. I was trying to create the Secret object using</p>
<pre><code>Yaml.LoadAllFromString()
</code></pre>
<p>which was double encoding my <code>.dockerconfigjson</code> value. The weird part was that if the value wasn't encoded, it would fail. So I had to just manually create the Secret object instead of reading it from a yaml file.</p>
|
<p>I am facing a major problem in installing Kubeflow locally on my Windows 10 Machine.</p>
<p>Machine Specs - OS: Windows 10, RAM: 16GB</p>
<p>Approaches Tried To Install</p>
<ol>
<li><p>Microk8s - Not Successful</p>
<p>I cannot install MicroK8s properly, due to a “MicroK8s not found” error, along with the instance crashing.</p>
</li>
<li><p>MiniKube + Vagrant with VirtualBox - Partially Successful</p>
<p>“Vagrant up” is <strong>very</strong> slow. And sometimes, when I manage to open the Kubeflow console locally, it crashes after running a couple of experiments. Errors arise from time to time and it is hard to pinpoint why they occur.</p>
</li>
<li><p>Kind - Not Successful</p>
<p>Out-dated docs and tutorials. Old commands don’t work and the manifests have been moved to another repo.</p>
</li>
<li><p>K3s - Not Successful</p>
<p>Cannot install the manifests as they have been moved to another repo, and there are no updated docs describing how to install them.</p>
</li>
</ol>
<p>Resources Referred:</p>
<ol>
<li><a href="https://kirenz.github.io/codelabs/codelabs/kubeflow-install/#4" rel="nofollow noreferrer">https://kirenz.github.io/codelabs/codelabs/kubeflow-install/#4</a></li>
<li><a href="https://www.kubeflow.org/docs/components/pipelines/installation/localcluster-deployment/" rel="nofollow noreferrer">https://www.kubeflow.org/docs/components/pipelines/installation/localcluster-deployment/</a></li>
<li><a href="https://github.com/kubeflow/manifests" rel="nofollow noreferrer">https://github.com/kubeflow/manifests</a></li>
<li>And official doc of all the Approaches taken</li>
</ol>
<p>Digging deeper, what I found was that some guides say that <strong>Kubeflow 1.5.0 is not compatible with version 1.22 and onwards</strong>. And as of now there are no older Kubernetes releases (lower than 1.22) on the official site. Is this the root cause of the issues that I am facing?</p>
<p>Is there any other way to install and set up Kubeflow locally for Windows? It is hard to find a guide/tutorial or a video which is not outdated.</p>
| <p>The current <a href="https://www.kubeflow.org/docs/started/installing-kubeflow/" rel="nofollow noreferrer">https://www.kubeflow.org/docs/started/installing-kubeflow/</a> page suggests using a package.</p>
<p>None of the packages are expressly for Windows. The "Charmed Kubeflow" looked promising for an install on a Windows machine, so I went with it. Here are the steps I figured out after <em>much</em> trial and error.</p>
<ol>
<li><p>Enable Hyper V for Windows. (If you have Windows Home, see <a href="https://www.makeuseof.com/install-hyper-v-windows-11-home/." rel="nofollow noreferrer">https://www.makeuseof.com/install-hyper-v-windows-11-home/.</a>)</p>
</li>
<li><p><a href="https://multipass.run/install" rel="nofollow noreferrer">https://multipass.run/install</a>. Choose Hyper V over Virtual Box if you can. If you cannot, then finish the install while ignoring any VirtualBox related error, and then do Command prompt, <code>multipass set local.driver=hyperv</code>. (You may need to restart your computer here.)</p>
</li>
<li><p>Command prompt: <code>multipass shell</code></p>
</li>
<li><p>Shell: <code>exit</code></p>
</li>
<li><p>Command prompt: <code>multipass stop</code></p>
</li>
<li><p>In the Windows program "Hyper-V Manager", select the VM. Settings:</p>
<ul>
<li>Memory, RAM: 4096, Enable Dynamic Memory.</li>
<li>Processor, Number of Virtual Processors = 2.</li>
<li>SCSI Controller, Hard Drive, Virtual hard disk, Edit, Action: Expand, New size 50 GB.</li>
</ul>
</li>
<li><p>Command prompt:</p>
<pre class="lang-bash prettyprint-override"><code>multipass start
multipass shell
</code></pre>
</li>
<li><p>Shell: (These steps are mostly from <a href="https://charmed-kubeflow.io/docs/quickstart" rel="nofollow noreferrer">https://charmed-kubeflow.io/docs/quickstart</a>.)</p>
<pre class="lang-bash prettyprint-override"><code>sudo snap install microk8s --classic --channel=1.21/stable
sudo usermod -a -G microk8s $USER
newgrp microk8s
sudo chown -f -R $USER ~/.kube
microk8s enable dns
microk8s enable storage
microk8s enable ingress
microk8s enable metallb:10.64.140.43-10.64.140.49
microk8s enable dashboard
</code></pre>
</li>
<li><p>Check the status until those items are enabled. Shell: <code>microk8s status --wait-ready</code></p>
</li>
<li><p>Shell:</p>
<p><em>(Note: When I tried to enable <code>istio</code> before doing <code>juju bootstrap microk8s</code>, the juju bootstrap command consistently failed with the following error regardless of how much memory I allocated to the VM: <code>failed to bootstrap model: creating controller stack: creating statefulset for controller: timed out waiting for controller pod: unschedulable: 0/1 nodes are available: 1 Insufficient memory.</code>)</em></p>
<pre class="lang-bash prettyprint-override"><code>sudo snap install juju --classic
juju bootstrap microk8s
microk8s enable istio
juju add-model kubeflow
juju deploy kubeflow-lite --trust
juju config dex-auth public-url=http://10.64.140.43.nip.io
juju config oidc-gatekeeper public-url=http://10.64.140.43.nip.io
juju config dex-auth static-username=admin
juju config dex-auth static-password=admin
watch -c juju status --color
</code></pre>
</li>
<li><p>To access the Kubernetes dashboard: The Charmed Kubeflow Quickstart instructions for this will not work as-is for your Windows web browser. Try this:</p>
<ul>
<li>Shell: <code>microk8s dashboard-proxy</code>. (This will keep running to serve the dashboard until you Ctrl-C cancel, close the window, or shutdown the VM.)</li>
<li>Command prompt: <code>multipass list</code></li>
<li>Windows web browser: https://<ip address for the VM from multipass list>:<port number from dashboard-proxy>, copy-paste the token from dashboard-proxy</li>
</ul>
</li>
<li><p>To access the Kubeflow dashboard:</p>
<ul>
<li>Shell: <code>microk8s kubectl port-forward -n istio-system service/istio-ingressgateway 8080:80 --address=0.0.0.0</code>. (This will keep running to serve the dashboard until you Ctrl-C cancel, close the window, or shutdown the VM.)</li>
<li>Command prompt: <code>multipass list</code></li>
<li>Windows web browser: https://<ip address for the VM from multipass list>:8080</li>
</ul>
</li>
</ol>
|
<p>I have deployed the application via ArgoCD successfully, and I can access it via its ingress url</p>
<p>The applicaiton uses the image name with latest tag, such as</p>
<pre><code>image: <private_registry>/app_1_service:latest
</code></pre>
<p>I also maintain other tags alongside the <code>latest</code> tag, such as <code>image:&lt;commit_id&gt; or image:&lt;1.0.xxx&gt;</code></p>
<p>Now, when developers update the code and commit changes, a GitLab pipeline automatically runs, builds a new image, overwrites the <code>latest</code> tag (plus the other tags) and pushes to the private docker registry.</p>
<p>So what's the next step in ArgoCD?</p>
<p>How does ArgoCD know the application has changed and needs to be redeployed, and that <code>image:latest</code> needs to be pulled again?</p>
| <p>You can use <a href="https://argocd-image-updater.readthedocs.io/en/stable/" rel="noreferrer">ArgoCD image-updater</a></p>
<p>But before using the image-updater, you need to install it and set the appropriate permissions:</p>
<pre class="lang-bash prettyprint-override"><code>helm repo add argo https://argoproj.github.io/argo-helm
helm install argocd-image-updater argo/argocd-image-updater
</code></pre>
<p>Once the image-updater is up and running, you need to set a few <strong>annotations on the ArgoCD application</strong>, as the updater works with different strategies:</p>
<blockquote>
<p><strong>semver</strong> - Update to the latest version of an image considering semantic versioning constraints
<strong>latest</strong> - Update to the most recently built image found in a registry<br />
<strong>digest</strong> - Update to the latest version of a given version (tag), using the tag's SHA digest<br />
<strong>name</strong> - Sorts tags alphabetically and update to the one with the highest cardinality</p>
</blockquote>
<p>The <code>latest</code> strategy works well with tags that match some regex, while <code>digest</code> is more suited to a testing environment.</p>
<p><a href="https://argocd-image-updater.readthedocs.io/en/stable/basics/update-strategies/" rel="noreferrer">update-strategies</a></p>
<p>You can also pull private images from GitLab.</p>
<p>Here is the working example with helm-release</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
annotations:
argocd-image-updater.argoproj.io/image-alias.allow-tags: 'regexp:^1.3.0-SNAPSHOT.[0-9]+$'
argocd-image-updater.argoproj.io/image-alias.force-update: 'true'
argocd-image-updater.argoproj.io/image-alias.pull-secret: 'pullsecret:develop-namespace/develop-app-gitlab-secrets'
argocd-image-updater.argoproj.io/image-alias.update-strategy: latest
argocd-image-updater.argoproj.io/image-list: >-
image-alias=registry.gitlab.com/myorg/my-test-image
finalizers:
- resources-finalizer.argocd.argoproj.io
labels:
app.kubernetes.io/instance: develop-platform
name: develop-app
namespace: argocd
spec:
destination:
namespace: develop-app
server: 'https://kubernetes.default.svc'
project: develop-app-west6-b
source:
helm:
releaseName: develop-app
valueFiles:
- develop-platform/values.yaml
path: helm-chart/helm-chart
repoURL: 'https://gitlab.com/my-org/develop-app.git'
targetRevision: staging
syncPolicy:
automated:
prune: true
selfHeal: true
</code></pre>
<p>If you need <code>digest</code> or just plain <code>latest</code>, then remove this:</p>
<pre><code> argocd-image-updater.argoproj.io/image-alias.allow-tags: 'regexp:^1.3.0-SNAPSHOT.[0-9]+$'
</code></pre>
<p>This works based on a <a href="https://regexr.com/" rel="noreferrer">regex</a>, so in your case (<code>&lt;1.0.xxx&gt;</code>) it can be <code>'regexp:^1.0.[0-9]+$'</code>.</p>
<p>If everything is configured properly and the image updater is working fine, then you should see image-updater logs like this:</p>
<pre><code>time="2022-04-27T15:18:36Z" level=info msg="Successfully updated image 'registry.gitlab.com/test-image:0.3.0-SNAPSHOT.115' to 'registry.gitlab.com/test-image:0.3.0-SNAPSHOT.118'
</code></pre>
|
<p>I am trying to install Falco on my home lab (K3s v1.23.6+k3s1 on Pop!_OS v20.04). For work I need to create a response engine, for that I have chosen FalcoSidekick and OpenFAAS - before taking this to EKS I want to test it locally.</p>
<p>I am using the standard Helm chart:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl create ns falco
helm upgrade --install falco \
--set falco.jsonOutput=true --set auditLog.enabled=true \
--set image.repository=falcosecurity/falco-no-driver \
--set falcosidekick.enabled=true \
--set falcosidekick.config.openfaas.functionname="falco-pod-delete" \
--namespace falco \
falcosecurity/falco
</code></pre>
<p>The issue occurs because the falco-driver-load InitContainer is unable to download the prebuilt driver for my OS (from <a href="https://download.falco.org/?prefix=driver/2.0.0%2Bdriver/x86_64/" rel="nofollow noreferrer">here</a>), so it tries to build the driver and fails.</p>
<pre class="lang-bash prettyprint-override"><code>* Looking for a falco module locally (kernel 5.17.5-76051705-generic)
* Trying to download a prebuilt falco module from https://download.falco.org/driver/2.0.0%2Bdriver/x86_64/falco_pop_5.17.5-76051705-generic_202204271406.ko
curl: (22) The requested URL returned error: 404
Unable to find a prebuilt falco module
* Trying to dkms install falco module with GCC /usr/bin/gcc
warning: the compiler differs from the one used to build the kernel
The kernel was built by: gcc (Ubuntu 11.2.0-19ubuntu1) 11.2.0
You are using: gcc (Debian 5.5.0-12) 5.5.0 20171010
CC [M] /var/lib/dkms/falco/2.0.0+driver/build/main.o
gcc: error: unrecognized command line option '-mharden-sls=all'
gcc: error: unrecognized command line option '-Wimplicit-fallthrough=5'
make[2]: *** [scripts/Makefile.build:288: /var/lib/dkms/falco/2.0.0+driver/build/main.o] Error 1
</code></pre>
<p>I was wondering if I would be able to use the Ubuntu driver, and if so how can I override this?</p>
| <p>I needed to install the kernel headers for my system, as well as increase some fs.inotify sysctl parameters.</p>
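<p>On a Pop!_OS/Ubuntu node that boils down to something like the following (a sketch; the exact sysctl values are an assumption and may need tuning for your setup):</p>
<pre class="lang-bash prettyprint-override"><code># install headers matching the running kernel so the Falco driver can be built
sudo apt-get update
sudo apt-get install -y linux-headers-$(uname -r)

# raise the inotify limits
sudo sysctl -w fs.inotify.max_user_instances=512
sudo sysctl -w fs.inotify.max_user_watches=524288
</code></pre>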
|
<p>In my firm, our Kubernetes cluster (AKS) was recently updated to 1.22+. So I had to change the manifest of our ingress yaml file, which was using networking.k8s.io/v1beta1, to be compliant with the new apiVersion networking.k8s.io/v1.</p>
<p>This is the earlier manifest for the ingress file :</p>
<pre><code>{{- if .Values.ingress.enabled -}}
{{- $fullName := include "amroingress.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
{{- include "amroingress.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
#{{- range .paths }}
#- path: {{ . }}
# backend:
# serviceName: {{ $fullName }}
# servicePort: {{ $svcPort }}
#{{- end }}
- path: /callista/?(.*)
backend:
serviceName: amro-amroingress
servicePort: 8080
{{- end }}
{{- end }}
</code></pre>
<p>and after my changes it looks like this:</p>
<pre><code>{{- if .Values.ingress.enabled -}}
{{- $fullName := include "amroingress.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
apiVersion: networking.k8s.io/v1
{{- end }}
kind: Ingress
metadata:
name: {{ include "amroingress.fullname" . }}
labels:
{{- include "amroingress.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: /callista/?(.*)
pathType: Prefix
backend:
service:
name: amro-amroingres
port:
number: 8080
{{- end }}
{{- end }}
</code></pre>
<p>But after I made the changes and tried to deploy using helm, I received this error:
<code>Error: UPGRADE FAILED: current release manifest contains removed kubernetes api(s) for this kubernetes version and it is therefore unable to build the kubernetes objects for performing the diff. error from kubernetes: unable to recognize "": no matches for kind "Ingress" in version "networking.k8s.io/v1beta1"</code></p>
<p>I am not sure why this error occurs even though the ingress manifest has changed and I have been stuck at this for a few days now. I am new to kubernetes and ingress in general, any help will be massively appreciated.</p>
| <p>The API resources on the control plane are upgraded, but the ones in the Helm-stored release manifest (kept within a Secret resource) are still the old ones.</p>
<p>Here is the resolution:</p>
<pre><code>$ helm plugin install https://github.com/helm/helm-mapkubeapis
$ helm mapkubeapis my-release-name --namespace ns
</code></pre>
<p>After this run a <code>helm upgrade</code> again.</p>
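<p>If you want to check which apiVersions the stored release manifest still references (before or after running the plugin), something like this works, using the same release and namespace placeholders:</p>
<pre><code>$ helm get manifest my-release-name --namespace ns | grep -B1 "kind: Ingress"
</code></pre>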
|
<p>Is it possible to manage routes in an Nginx server dynamically - add and remove them - without stopping it from working, something like a circuit breaker (turn on / turn off)? (The closest analogy I've found is Istio in Kubernetes, which allows that.) (If this functionality is available in some other software, please mention it below.)</p>
| <p>Yes, <strong>Nginx</strong> supports the circuit breaker option, but only in Nginx <strong>Plus</strong>.</p>
<p>Ref article : <a href="https://www.nginx.com/blog/microservices-reference-architecture-nginx-circuit-breaker-pattern/" rel="nofollow noreferrer">https://www.nginx.com/blog/microservices-reference-architecture-nginx-circuit-breaker-pattern/</a></p>
<p>Apart from that, many other service meshes support it, as you mentioned: Istio, Linkerd, etc.</p>
<p>I would recommend checking out the Traefik proxy, which also supports it and is open source: <a href="https://doc.traefik.io/traefik/middlewares/http/circuitbreaker/" rel="nofollow noreferrer">https://doc.traefik.io/traefik/middlewares/http/circuitbreaker/</a></p>
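<p>For reference, on Kubernetes a Traefik circuit breaker is just a Middleware resource attached to a route; a minimal sketch (the name and the expression threshold are only examples):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: my-circuit-breaker
spec:
  circuitBreaker:
    # open the circuit when more than 30% of requests hit network errors
    expression: NetworkErrorRatio() &gt; 0.30
</code></pre>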
<p>If you are also familiar with <strong>Kong</strong>, you can extend it by installing a circuit breaker plugin.</p>
<p>Plugin example: <a href="https://github.com/dream11/kong-circuit-breaker" rel="nofollow noreferrer">https://github.com/dream11/kong-circuit-breaker</a></p>
<p>Check out my article on how to extend Kong with custom plugins: <a href="https://faun.pub/building-kong-custom-docker-image-add-a-customized-kong-plugin-2157a381d7fd" rel="nofollow noreferrer">https://faun.pub/building-kong-custom-docker-image-add-a-customized-kong-plugin-2157a381d7fd</a></p>
|
<p>Assuming I have 2 separate k3d clusters (namely: <code>vault</code> and <code>dev</code>),
is there a way to have a distinct URL for each cluster (preferably with https), for example <code>vault.cluster.internal</code> and <code>dev.cluster.internal</code>,
and allow apps deployed in <code>dev.cluster.internal</code> to look up something or interact with apps in <code>vault.cluster.internal</code>?</p>
<p>The cluster definitions are as follows:
<code>dev.yaml</code>:</p>
<pre><code>apiVersion: k3d.io/v1alpha4
kind: Simple
metadata:
name: dev
servers: 1
agents: 3
network: k3d-cluster
kubeAPI:
host: "dev.cluster.internal"
hostIP: "127.0.0.1"
image: rancher/k3s:v1.24.3-k3s1
ports:
- port: 3000:3000
nodeFilters:
- loadbalancer
options:
k3d:
wait: true
timeout: "60s"
k3s:
extraArgs:
- arg: --tls-san=dev.cluster.internal
nodeFilters:
- server:*
- arg: --disable=metrics-server
nodeFilters:
- server:*
- arg: --disable=traefik
nodeFilters:
- server:*
kubeconfig:
updateDefaultKubeconfig: true
switchCurrentContext: false
</code></pre>
<p>and the <code>vault.yaml</code>:</p>
<pre><code>apiVersion: k3d.io/v1alpha4
kind: Simple
metadata:
name: vault
servers: 1
agents: 3
network: k3d-cluster
kubeAPI:
host: "vault.cluster.internal"
hostIP: "127.0.0.1"
image: rancher/k3s:v1.24.3-k3s1
ports:
- port: 8200:8200
nodeFilters:
- loadbalancer
options:
k3d:
wait: true
timeout: "60s"
k3s:
extraArgs:
- arg: --tls-san=vault.cluster.internal
nodeFilters:
- server:*
- arg: --disable=metrics-server
nodeFilters:
- server:*
- arg: --disable=traefik
nodeFilters:
- server:*
kubeconfig:
updateDefaultKubeconfig: true
switchCurrentContext: false
</code></pre>
<ul>
<li>Can this be done without using service mesh?</li>
<li>Can I update the coredns in the clusters to allow resolving the other cluster host names, and how?</li>
<li>Can this be done with docker network configurations, and how?</li>
</ul>
<p>This is basically to simulate real world clusters (but for local development)</p>
| <p>I found 3 solutions for the problem.</p>
<p>The first solution is to add <code>HostAliases</code> section to the <code>dev</code> cluster definition, and make it point to the external IP of the <code>vault</code> cluster loadbalancer:</p>
<p>for example:
you can run the following command on the <code>vault</code> cluster after initializing it</p>
<pre><code>$ kubectl --context k3d-vault --namespace vault get services
NAME TYPE CLUSTER-IP EXTERNAL-IP ...
...
vault LoadBalancer 10.43.34.131 172.24.0.3 ...
^^^^^^^^^^
...
</code></pre>
<p><code>dev.yaml</code> would be</p>
<pre><code>#...
ports:
- port: 3000:3000
nodeFilters:
- loadbalancer
hostAliases:
- ip: 172.24.0.3
hostnames:
- vault.cluster.internal
#...
</code></pre>
<pre class="lang-bash prettyprint-override"><code># (alternatively, this can be automated using the following command without editing `dev.yaml` file)
$ KMS_IP=$(kubectl --context k3d-vault --namespace vault get services | grep LoadBalancer | awk -F " " '{ print $4 }')
$ k3d cluster create --config dev.yaml --host-alias $KMS_IP:vault.cluster.internal
</code></pre>
<p>This solution allows resolving the hostname (as you would expect in a production cluster)...</p>
<p>The second solution works similarly but using <code>docker network inspect k3d-cluster</code> (where <code>k3d-cluster</code> is the docker network name in cluster definition)</p>
<p>Similarly, run <code>docker network inspect k3d-cluster</code> and note down the IP of the loadbalancer subnet defined by docker:</p>
<pre><code>...
"cad3f3XXXXXX": {
"Name": "k3d-vault-serverlb",
"EndpointID": "47d5XXXX"
"MacAddress": "02:42:ac:18:00:04",
"IPv4Address": "172.24.0.4/16", #<<< This IP can be used in dev cluster HostAliases
"IPv6Address": ""
}
...
</code></pre>
<p>The last solution is simpler but less flexible.
It uses <code>host.k3d.internal</code> as the name for the other cluster (allowing it to be resolved), but you have to take care of port mapping, as all of the clusters would resolve the same URL for the services (which isn't ideal, but easy enough for testing multi-cluster communication/bugs/etc.).
In other words, configure the <code>dev</code> cluster's <code>VAULT_ADDR</code> to be <code>host.k3d.internal:8200</code> instead of <code>vault.cluster.internal:8200</code>.
This is not flexible with TLS/HTTPS (AFAIK).</p>
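<p>In a pod spec on the <code>dev</code> cluster that amounts to something like this (a sketch; the container is a placeholder and the http scheme is an assumption):</p>
<pre class="lang-yaml prettyprint-override"><code>containers:
  - name: my-app
    image: my-app:latest
    env:
      - name: VAULT_ADDR
        value: "http://host.k3d.internal:8200"
</code></pre>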
|
<p>I am experimenting with and learning Rancher/Kubernetes. In a very short eBook I read this:</p>
<p>"In a Kubernetes Cluster, it can be desirable to have persistent storage available for applications to use. As we do not have a Kubernetes Cloud Provider enabled in this cluster, we will be deploying the nfs-server-provisioner which will run an NFS server inside of our Kubernetes cluster for persistent storage.</p>
<p><strong>This is not a production-ready solution by any means, but helps to illustrate the persistent storage constructs.</strong>"</p>
<p>I configured nfs-server-provisioner in Rancher and everything works as expected. But here the question.</p>
<p>For my "production" homelab, I prepared 5x Bare Metal Server and installed Rancher on top of a Kubernates, also I created an RKE2 Cluster with</p>
<ul>
<li>etcd Node</li>
<li>control plane Node</li>
<li>worker Node 1</li>
<li>worker Node 2</li>
</ul>
<p>I don't use AWS, Azure or any cloud solutions.</p>
<p>What would be a "production ready solution" for my stack? And why exactly is "nfs-server-provisioner" not a "production ready solution"?</p>
| <p>Without seeing the full text this is only a guess, but based on that quote only using nfs-server-provider isn't providing "true" and reliable persistence.</p>
<p>nfs-server-provider launches the NFS server within the cluster, which means also its data is within the kubernetes' storage system. There's no real persistence there: instead the persistence, availability and security of NFS based persistent volumes depend on how the nfs-server-provider stores the data. You definitely lose production-readiness if served NFS data is stored by the provider in a way that is not highly available - say, on a local hostpath on each node. Then again if nfs-server-provider <em>is</em> using a reliable storage class, why not cut the overhead and use <em>that</em> storage class directly for all persistent volumes? This might be what your quoted text refers to.</p>
<p>I'd also like to note (at least at the side), that using NFS as a storage class when the NFS server resides inside the same cluster might potentially mean asking for trouble. If provisioning of nfs-server-provider on nodes fails for some reason (even a trivial one, like not being able to fetch the image) you'd lose access to NFS based persistent volumes, sending all pods relying on NFS volumes on crashloops. This is however true of Longhorn, OpenEBS and other cluster-residing storage classes, too.</p>
<p>Making a production ready setup would require you to at least configure the nfs-server-provider itself to use a production-grade storage backend or use a highly available external NFS.</p>
<p>Also note that for production grade you should have at least two control plane and three etcd nodes instead of just one and one (never use an even number of etcd nodes!). One node can run multiple nodes, so with your equipment I'd probably go for two nodes running both control plane and etcd, two "pure" worker nodes and one node doing all three. The last isn't exactly recommended, but in a homelab environment would give you more workers when testing with pod replicas.</p>
|
<p>I'm trying to deploy a Flink stream processor to a Kubernetes cluster with the help of the official <a href="https://nightlies.apache.org/flink/flink-kubernetes-operator-docs-stable/" rel="nofollow noreferrer">Flink kubernetes operator</a>.
The Flink app also uses Minio as its state backend. Everything worked fine until I tried to provide the credentials from Hashicorp Vault in the following way:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: flink.apache.org/v1beta1
kind: FlinkDeployment
metadata:
name: flink-app
namespace: default
spec:
serviceAccount: sa-example
podTemplate:
apiVersion: v1
kind: Pod
metadata:
name: pod-template
spec:
serviceAccountName: default:sa-example
containers:
- name: flink-main-container
# ....
flinkVersion: v1_14
flinkConfiguration:
presto.s3.endpoint: https://s3-example-api.dev.net
high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
high-availability.storageDir: s3p://example-flink/example-1/high-availability/
high-availability.cluster-id: example-1
high-availability.namespace: example
high-availability.service-account: default:sa-example
# presto.s3.access-key: *
# presto.s3.secret-key: *
presto.s3.path-style-access: "true"
web.upload.dir: /opt/flink
jobManager:
podTemplate:
apiVersion: v1
kind: Pod
metadata:
name: job-manager-pod-template
annotations:
vault.hashicorp.com/namespace: "/example/dev"
vault.hashicorp.com/agent-inject: "true"
vault.hashicorp.com/agent-init-first: "true"
vault.hashicorp.com/agent-inject-secret-appsecrets.yaml: "example/Minio"
vault.hashicorp.com/role: "example-serviceaccount"
vault.hashicorp.com/auth-path: auth/example
vault.hashicorp.com/agent-inject-template-appsecrets.yaml: |
{{- with secret "example/Minio" -}}
presto.s3.access-key: {{.Data.data.accessKey}}
presto.s3.secret-key: {{.Data.data.secretKey}}
{{- end }}
</code></pre>
<p>When I comment out the <code>presto.s3.access-key</code> and <code>presto.s3.secret-key</code> config values in the flinkConfiguration, replace them with the above-listed <code>Hashicorp Vault</code> annotations, and try to provide them programmatically at runtime:</p>
<pre class="lang-scala prettyprint-override"><code>val configuration: Configuration = getSecretsFromFile("/vault/secrets/appsecrets.yaml")
val env = org.apache.flink.streaming.api.environment.StreamExecutionEnvironment.getExecutionEnvironment(configuration)
</code></pre>
<p>I receive the following error message:</p>
<p><em>java.io.IOException: com.amazonaws.SdkClientException: Unable to load AWS credentials from any provider in the chain: [EnvironmentVariableCredentialsProvider: Unable to load AWS credentials from environment variables (AWS_ACCESS_KEY_ID (or AWS_ACCESS_KEY) and AWS_SECRET_KEY (or AWS_SECRET_ACCESS_KEY)), SystemPropertiesCredentialsProvider: Unable to load AWS credentials from Java system properties (aws.accessKeyId and aws.secretKey), WebIdentityTokenCredentialsProvider: You must specify a value for roleArn and roleSessionName, com.amazonaws.auth.profile.ProfileCredentialsProvider@5331f738: profile file cannot be null, com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper@bc0353f: Failed to connect to service endpoint: ]
at com.facebook.presto.hive.s3.PrestoS3FileSystem$PrestoS3OutputStream.uploadObject(PrestoS3FileSystem.java:1278) ~[flink-s3-fs-presto-1.14.2.jar:1.14.2]
at com.facebook.presto.hive.s3.PrestoS3FileSystem$PrestoS3OutputStream.close(PrestoS3FileSystem.java:1226) ~[flink-s3-fs-presto-1.14.2.jar:1.14.2]
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72) ~[flink-s3-fs-presto-1.14.2.jar:1.14.2]
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101) ~[flink-s3-fs-presto-1.14.2.jar:1.14.2]
at org.apache.flink.fs.s3presto.common.HadoopDataOutputStream.close(HadoopDataOutputStream.java:52) ~[flink-s3-fs-presto-1.14.2.jar:1.14.2]
at org.apache.flink.runtime.blob.FileSystemBlobStore.put(FileSystemBlobStore.java:80) ~[flink-dist_2.12-1.14.2.jar:1.14.2]
at org.apache.flink.runtime.blob.FileSystemBlobStore.put(FileSystemBlobStore.java:72) ~[flink-dist_2.12-1.14.2.jar:1.14.2]
at org.apache.flink.runtime.blob.BlobUtils.moveTempFileToStore(BlobUtils.java:385) ~[flink-dist_2.12-1.14.2.jar:1.14.2]
at org.apache.flink.runtime.blob.BlobServer.moveTempFileToStore(BlobServer.java:680) ~[flink-dist_2.12-1.14.2.jar:1.14.2]
at org.apache.flink.runtime.blob.BlobServerConnection.put(BlobServerConnection.java:350) [flink-dist_2.12-1.14.2.jar:1.14.2]
at org.apache.flink.runtime.blob.BlobServerConnection.run(BlobServerConnection.java:110) [flink-dist_2.12-1.14.2.jar:1.14.2]</em></p>
<p>I initially also tried to append the secrets to flink-config.yaml in the docker-entrypoint.sh based on this <a href="https://nightlies.apache.org/flink/flink-docs-release-1.15/docs/deployment/filesystems/s3/#configure-access-credentials" rel="nofollow noreferrer">documentation - Configure Access Credentials</a>:</p>
<pre><code>if [ -f '/vault/secrets/appsecrets.yaml' ]; then
(echo && cat '/vault/secrets/appsecrets.yaml') >> $FLINK_HOME/conf/flink-conf.yaml
fi
</code></pre>
<p>The question is how to provide the S3 credentials at runtime, since the Flink operator mounts the <strong>flink-config.yaml</strong> from a config map and it is read-only (flink-conf.yaml: <strong>Read-only file system</strong>).</p>
<p>Thank you</p>
| <p>There is no support for this from the Kubernetes operator. In fact, this is not a limitation of the Flink Kubernetes operator itself; it is due to the lack of support in Flink's native Kubernetes integration. There is a separate story for this on the Kubernetes operator side - <a href="https://issues.apache.org/jira/browse/FLINK-27491" rel="nofollow noreferrer">FLINK-27491</a>.</p>
<p>As a workaround, you can set up an init container and update the config map from the init container using the Kubernetes API, after reading the values from Vault. The updated config map will then have the secrets filled in by the init container, and those will be visible to the job manager and all of its task managers. The whole Flink cluster only starts after the init container has updated the config map, so the change will be visible to the Flink cluster.</p>
<p>A simple example to update the config map from the init container can be found <a href="https://stackoverflow.com/questions/52046908/are-kubernetes-configmaps-writable#answer-71662405">here</a>. In this example, the config map is updated with a simple CURL command. In theory, you can use any lightweight client to update the config map like <a href="https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/" rel="nofollow noreferrer">this</a>.</p>
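<p>A very rough sketch of such an init container in the job manager's pod template is below. Everything in it is an assumption rather than something the operator provides: the ConfigMap name (<code>flink-config-flink-app</code>, following Flink's <code>flink-config-&lt;cluster-id&gt;</code> convention), the image (anything that ships <code>kubectl</code> and <code>jq</code>), the secrets mount, and the pod's service account being allowed to get/replace ConfigMaps:</p>
<pre class="lang-yaml prettyprint-override"><code>jobManager:
  podTemplate:
    spec:
      initContainers:
        - name: inject-s3-credentials
          image: my-registry/kubectl-jq:latest   # placeholder: any image with kubectl + jq
          command:
            - /bin/sh
            - -c
            - |
              # appsecrets.yaml was rendered by the Vault agent init container
              # (vault.hashicorp.com/agent-init-first: "true" makes it run first)
              kubectl get configmap flink-config-flink-app -o json \
                | jq --rawfile extra /vault/secrets/appsecrets.yaml \
                     '.data["flink-conf.yaml"] += "\n" + $extra' \
                | kubectl replace -f -
          volumeMounts:
            - name: vault-secrets      # assumption: however the rendered secrets are exposed
              mountPath: /vault/secrets
</code></pre>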
<p>A side note: if possible, I would suggest using an AWS IAM role rather than plain IAM secrets, as a role is more secure than static credentials.</p>
|
<p>I have a standalone Kubernetes cluster:</p>
<pre><code>plane node - hostname kubernetes1 - 192.168.1.126
work node - hostname kubernetes2 - 192.168.1.138
</code></pre>
<p>I deployed this private repository:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv1
spec:
capacity:
storage: 5Gi # specify your own size
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Retain
local:
path: /opt/registry # can be any path
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- kubernetes2
accessModes:
- ReadWriteMany # only 1 node will read/write on the path.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv1-claim
spec: # should match specs added in the PersistenVolume
accessModes:
- ReadWriteMany
volumeMode: Filesystem
resources:
requests:
storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: private-repository-k8s
labels:
app: private-repository-k8s
spec:
replicas: 1
selector:
matchLabels:
app: private-repository-k8s
template:
metadata:
labels:
app: private-repository-k8s
spec:
volumes:
- name: certs-vol
hostPath:
path: /opt/certs
type: Directory
- name: task-pv-storage
persistentVolumeClaim:
claimName: pv1-claim # specify the PVC that you've created. PVC and Deployment must be in same namespace.
containers:
- image: registry:2
name: private-repository-k8s
imagePullPolicy: IfNotPresent
env:
- name: REGISTRY_HTTP_TLS_CERTIFICATE
value: "/opt/certs/registry.crt"
- name: REGISTRY_HTTP_TLS_KEY
value: "/opt/certs/registry.key"
ports:
- containerPort: 5000
volumeMounts:
- name: certs-vol
mountPath: /opt/certs
- name: task-pv-storage
mountPath: /opt/registry
</code></pre>
<p>Service is deployed on the work node:</p>
<pre><code>kubernetes@kubernetes1:/opt/registry$ kubectl get pods
NAME READY STATUS RESTARTS AGE
private-repository-k8s-6d5d954b4f-ldwd6 1/1 Running 0 153m
kubernetes@kubernetes1:/opt/registry$
</code></pre>
<p>I tried to create ingress access because I want to access this pod from the outside:</p>
<pre><code>kubectl create namespace test
</code></pre>
<p><em><strong>service controller:</strong></em></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
namespace: test
name: private-repository-service-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- host: kubernetes2
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: private-repository-service
port:
number: 5000
</code></pre>
<p><em><strong>service-load-balancer:</strong></em></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
namespace: test
name: private-repository-service
spec:
#type: NodePort
selector:
app: private-repository-k8s
ports:
# By default and for convenience, the `targetPort` is set to the same value as the `port` field.
- port: 5000
targetPort: 5000
# Optional field
# By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
#nodePort: 30007
</code></pre>
<p>When I run <code>curl 192.168.1.138:5000</code> there is no response.
Do you know where I might be wrong?</p>
<p><em><strong>EDIT:</strong></em></p>
<pre><code>kubernetes@kubernetes1:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d20h
private-registry-service ClusterIP 10.103.148.234 <none> 5000/TCP 6h34m
kubernetes@kubernetes1:~$
</code></pre>
| <p>Looks like you don't have an ingress-controller of any kind and just want to access the registry directly. In that case you need to create a service of type <code>NodePort</code>.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
namespace: test
name: private-repository-service
spec:
type: NodePort # Added
selector:
app: private-repository-k8s
ports:
- port: 5000
targetPort: 5000
nodePort: 30123 # Added
</code></pre>
<p>This will bind the service port 5000 to the host's port 30123.
If you run <code>kubectl get svc</code>, this will then give you a slightly different output.</p>
<pre class="lang-bash prettyprint-override"><code>kubernetes@kubernetes1:~$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3d20h
private-registry-service   NodePort    10.103.148.234   <none>        5000:30123/TCP   6h34m
</code></pre>
<p>Notice the mapping <code>5000:30123</code>. Now you can send a request to the registry on that port: <code>curl 192.168.1.138:30123</code>. You can also omit the <code>nodePort</code> field; kubernetes will then choose a random one in the range between 30000 and 32767 for you. It will be displayed in the <code>kubectl get svc</code> output as shown above. The <code>Ingress</code> is not needed and can be removed.</p>
<p>If you want to use an <code>Ingress</code> as you provided you need to use an ingress-controller, like nginx or traefik, see also <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">kubernetes docs</a> on that topic.</p>
<blockquote>
<p>[...] An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.</p>
</blockquote>
<p><strong>EDIT</strong></p>
<p>There are a lot of ingress-controllers out there; they all have their advantages and disadvantages. For a beginner, nginx might be a good choice, see the <a href="https://docs.nginx.com/" rel="nofollow noreferrer">docs</a>.</p>
<p>To install it, run these commands (from the <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/" rel="nofollow noreferrer">install docs page</a>)</p>
<pre class="lang-bash prettyprint-override"><code>$ helm repo add nginx-stable https://helm.nginx.com/stable
$ helm repo update
$ helm install my-release nginx-stable/nginx-ingress
</code></pre>
<p>where <code>my-release</code> is an arbitrary name; you can choose whatever you want. This will create an nginx pod in the namespace you installed the chart into. It will also create an nginx-ingress service of type <code>LoadBalancer</code>, like this:</p>
<pre><code>kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP
my-release-nginx-ingress LoadBalancer 10.109.27.49 <pending> 80:30536/TCP,443:31694/TCP 111s
</code></pre>
<p>As you can see, the <code>EXTERNAL-IP</code> is in <code>&lt;pending&gt;</code> state. In a public cloud environment like AWS, a load-balancer resource like an ELB is created and its public IP is assigned to the service as <code>EXTERNAL-IP</code>. In your on-premise setup it will stay in <code>&lt;pending&gt;</code> status. But as you can see, two random node ports are mapped to the http/https ports, like with the <code>nodePort</code> setup above. Here it's 80 -&gt; 30536 and 443 -&gt; 31694; for you it will be something similar.</p>
<p>Now you can apply your manifests as above. You'll get a service of type <code>ClusterIP</code>. Also create an <code>Ingress</code> like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: private-repository-service
spec:
ingressClassName: nginx
rules:
- host: example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: private-repository-service
port:
number: 5000
</code></pre>
<p>Now you can run curl against that host <code>curl -HHost:example.com "http://192.168.1.138:30536/"</code> (for me it's port 30536, will be a different one for you) and you will get an answer from the registry. Works the same with https using the other port.</p>
<p>Note that I installed everything in the same namespace. In reality you should have a dedicated ingress namespace.</p>
<p>I would also highly recommend learning the basics of kubernetes, e.g. via a Udemy course or a YouTube tutorial series. If you want to know more about ingress-controllers in an on-premise setup, check out my other <a href="https://stackoverflow.com/questions/71747959/what-is-a-kubernetes-loadbalancer-on-prem/71753345#71753345">answer on that topic</a>.</p>
|
<p>I'm trying to use the AWS S3 SDK for Java to connect to a bucket from a Kubernetes pod running a Spring Boot application. In order to get external access I had to create a service as follows:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: s3
namespace: production
spec:
type: ExternalName
externalName: nyc3.digitaloceanspaces.com
</code></pre>
<p>And then I modified my configuration in <code>application.properties</code> specifying the endpoint:</p>
<pre><code>cloud.aws.endpoint=s3
cloud.aws.credentials.accessKey=ASD
cloud.aws.credentials.secretKey=123
cloud.aws.credentials.instanceProfile=true
cloud.aws.credentials.useDefaultAwsCredentialsChain=true
</code></pre>
<p>Because the SDK builds the host name for the bucket as <code>bucket.s3...</code> I modified my client to use "path style" access with this configuration:</p>
<pre><code>@Bean(name = "amazonS3")
public AmazonS3Client amazonS3Client(AWSCredentialsProvider credentialsProvider,
RegionProvider regionProvider) {
EndpointConfiguration endpointConfiguration = new EndpointConfiguration(
endpoint, regionProvider.getRegion().getName());
return (AmazonS3Client) AmazonS3ClientBuilder.standard()
.withCredentials(credentialsProvider)
.withEndpointConfiguration(endpointConfiguration)
.withPathStyleAccessEnabled(true)
.build();
}
</code></pre>
<p>But when I try to perform any bucket operation I get the following error regarding the name mismatch with the SSL certificate:</p>
<pre><code>javax.net.ssl.SSLPeerUnverifiedException: Certificate for <s3> doesn't match any of the subject alternative names: [*.nyc3.digitaloceanspaces.com, nyc3.digitaloceanspaces.com]
</code></pre>
<p>How can I avoid this certificate error?</p>
| <p>I was having a similar issue. I believe the AmazonS3Client API doesn't resolve the k8s service name; I had to use the real host name directly instead of the K8s service name.</p>
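<p>In the setup from the question, that means pointing the endpoint straight at the Spaces host instead of at the <code>ExternalName</code> service, roughly:</p>
<pre><code>cloud.aws.endpoint=nyc3.digitaloceanspaces.com
</code></pre>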
|
<p>I'm struggling to use Helm variables within my entry script for my container when deploying to AKS. Running locally works perfectly fine, as I'm specifying them as docker -e arguments. How do I pass arguments, either specified as helm variables and/or overridden when issuing the helm install command?</p>
<p>Entry script start.sh</p>
<pre><code>#!/bin/bash
GH_OWNER=$GH_OWNER
GH_REPOSITORY=$GH_REPOSITORY
GH_TOKEN=$GH_TOKEN
echo "variables"
echo $GH_TOKEN
echo $GH_OWNER
echo $GH_REPOSITORY
echo ${GH_TOKEN}
echo ${GH_OWNER}
echo ${GH_REPOSITORY}
env
</code></pre>
<p>Docker file</p>
<pre><code># base image
FROM ubuntu:20.04
#input GitHub runner version argument
ARG RUNNER_VERSION
ENV DEBIAN_FRONTEND=noninteractive
# update the base packages + add a non-sudo user
RUN apt-get update -y && apt-get upgrade -y && useradd -m docker
# install the packages and dependencies along with jq so we can parse JSON (add additional packages as necessary)
RUN apt-get install -y --no-install-recommends \
curl nodejs wget unzip vim git azure-cli jq build-essential libssl-dev libffi-dev python3 python3-venv python3-dev python3-pip
# cd into the user directory, download and unzip the github actions runner
RUN cd /home/docker && mkdir actions-runner && cd actions-runner \
&& curl -O -L https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz \
&& tar xzf ./actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz
# install some additional dependencies
RUN chown -R docker ~docker && /home/docker/actions-runner/bin/installdependencies.sh
# add over the start.sh script
ADD scripts/start.sh start.sh
# make the script executable
RUN chmod +x start.sh
# set the user to "docker" so all subsequent commands are run as the docker user
USER docker
# set the entrypoint to the start.sh script
ENTRYPOINT ["/start.sh"]
</code></pre>
<p>Helm values</p>
<pre><code>replicaCount: 1
image:
repository: somecreg.azurecr.io/ghrunner
pullPolicy: Always
# tag: latest
imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
env:
GH_TOKEN: "SET"
GH_OWNER: "SET"
GH_REPOSITORY: "SET"
serviceAccount:
create: true
annotations: {}
name: ""
podAnnotations: {}
podSecurityContext: {}
securityContext: {}
service:
type: ClusterIP
port: 80
ingress:
enabled: false
className: ""
annotations: {}
hosts:
- host: chart-example.local
paths:
- path: /
pathType: ImplementationSpecific
tls: []
resources: {}
autoscaling:
enabled: false
minReplicas: 1
maxReplicas: 100
targetCPUUtilizationPercentage: 80
# targetMemoryUtilizationPercentage: 80
nodeSelector: {}
tolerations: []
affinity: {}
</code></pre>
<p>Deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "hostedrunner.fullname" . }}
labels:
{{- include "hostedrunner.labels" . | nindent 4 }}
spec:
{{- if not .Values.autoscaling.enabled }}
replicas: {{ .Values.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "hostedrunner.selectorLabels" . | nindent 6 }}
template:
metadata:
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
labels:
{{- include "hostedrunner.selectorLabels" . | nindent 8 }}
spec:
{{- with .Values.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "hostedrunner.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: {{ .Chart.Name }}
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: 80
protocol: TCP
# livenessProbe:
# httpGet:
# path: /
# port: http
# readinessProbe:
# httpGet:
# path: /
# port: http
resources:
{{- toYaml .Values.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
</code></pre>
<p>Console output for helm install
<a href="https://i.stack.imgur.com/bCJGv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bCJGv.png" alt="enter image description here" /></a></p>
<p>Helm command (tried both with set and set-string and values to get substituted correctly)</p>
<pre><code>helm install --set-string env.GH_TOKEN="$env:pat" --set-string env.GH_OWNER="SomeOwner" --set-string env.GH_REPOSITORY="aks-hostedrunner" $deploymentName .helm/ --debug
</code></pre>
<p>I thought the helm variables might be passed as environment variables, but that's not the case. Any input is greatly appreciated</p>
| <p>You can update your deployment template with the following env block:</p>
<pre><code>env:
{{- range $key, $val := .Values.env }}
- name: {{ $key }}
value: {{ $val }}
{{- end }}
</code></pre>
<p>This will add the env block to your deployment, and your <strong>shell</strong> script, when it runs inside the container, will be able to access those environment variables (see the rendered sketch below).</p>
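<p>For instance, with the <code>--set-string</code> overrides from your <code>helm install</code> command, that block should render to roughly the following (a sketch; you may want to pipe the value through <code>quote</code>, i.e. <code>value: {{ $val | quote }}</code>, so every value stays a string):</p>
<pre><code>env:
  - name: GH_OWNER
    value: SomeOwner
  - name: GH_REPOSITORY
    value: aks-hostedrunner
  - name: GH_TOKEN
    value: <your PAT value>
</code></pre>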
<p>Deployment <strong>env</strong> example</p>
<pre><code>containers:
- name: envar-demo-container
image: <Your Docker image>
env:
- name: DEMO_GREETING
value: "Hello from the environment"
- name: DEMO_FAREWELL
value: "Such a sweet sorrow"
</code></pre>
<p>Ref : <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#define-an-environment-variable-for-a-container" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#define-an-environment-variable-for-a-container</a></p>
<p>If you implement the above, those variables will be set as environment variables and the shell script inside the container will be able to access them.</p>
<p>You can also use a Kubernetes ConfigMap or Secret to set values at the <strong>env</strong> level.</p>
|
<p>I have setup VirtualService and ServiceEntry for a few services outside my kubernetes cluster.<br />
My apps can access them without any issue.<br />
I was wondering if it would be somehow possible to perform a port-forward to these services via <code>kubectl</code>, similar to how you would port-forward into a pod or a service.</p>
<p>This works:<br />
<code>kubectl port-forward service/my-service 8080:80</code></p>
<p>This Doesn't:<br />
<code>kubectl port-forward vs/my-virtual-service 6379:6379</code></p>
<p>I get:</p>
<blockquote>
<p>error: no kind "VirtualService" is registered for version
"networking.istio.io/v1beta1" in scheme
"k8s.io/client-go/kubernetes/scheme/register.go:72"</p>
</blockquote>
| <p>Port forwarding can be done to the istio-ingressgateway service, but not to a virtual service. More details on Istio's services and virtual services are in the <a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/" rel="nofollow noreferrer">official documentation</a>.</p>
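<p>For example, a minimal sketch of forwarding the gateway's HTTP port locally, assuming a default <code>istio-system</code> install (port names and numbers may differ in your setup):</p>
<pre><code>kubectl port-forward -n istio-system svc/istio-ingressgateway 8080:80
</code></pre>
<p>Requests to <code>localhost:8080</code> then go through the gateway, so the usual Gateway/VirtualService routing (e.g. matching Host headers) still applies.</p>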
<p>Another option is to change the istio-ingressgateway service to NodePort:</p>
<pre><code>kubectl edit svc istio-ingressgateway -n istio-system
</code></pre>
|
<p>I'm trying to spread my <code>ingress-nginx-controller</code> pods such that:</p>
<ul>
<li>Each availability zone has the same # of pods (+- 1).</li>
<li>Pods prefer Nodes that currently run the least pods.</li>
</ul>
<p>Following other questions here, I have set up Pod Topology Spread Constraints in my pod deployment:</p>
<pre><code> replicas: 4
topologySpreadConstraints:
- labelSelector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
maxSkew: 1
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: DoNotSchedule
- labelSelector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
maxSkew: 1
topologyKey: kubernetes.io/hostname
whenUnsatisfiable: DoNotSchedule
</code></pre>
<p>I currently have 2 Nodes, each in a different availability zone:</p>
<pre><code>$ kubectl get nodes --label-columns=topology.kubernetes.io/zone,kubernetes.io/hostname
NAME STATUS ROLES AGE VERSION ZONE HOSTNAME
ip-{{node1}}.compute.internal Ready node 136m v1.20.2 us-west-2a ip-{{node1}}.compute.internal
ip-{{node2}}.compute.internal Ready node 20h v1.20.2 us-west-2b ip-{{node2}}.compute.internal
</code></pre>
<p>After running <code>kubectl rollout restart</code> for that deployment, I get 3 pods in one Node, and 1 pod in the other, which has a skew of <code>2 > 1</code>:</p>
<pre><code>$ kubectl describe pod ingress-nginx-controller -n ingress-nginx | grep 'Node:'
Node: ip-{{node1}}.compute.internal/{{node1}}
Node: ip-{{node2}}.compute.internal/{{node2}}
Node: ip-{{node1}}.compute.internal/{{node1}}
Node: ip-{{node1}}.compute.internal/{{node1}}
</code></pre>
<p>Why is my constraint not respected? How can I debug the pod scheduler?</p>
<p>My kubectl version:</p>
<pre><code>$ kubectl version
Client Version: version.Info{Major:"1", Minor:"21+", GitVersion:"v1.21.0-beta.0.607+269d62d895c297", GitCommit:"269d62d895c29743931bfaaec6e8d37ced43c35f", GitTreeState:"clean", BuildDate:"2021-03-05T22:28:02Z", GoVersion:"go1.16", Compiler:"gc", Platform:"darwin/arm64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.2", GitCommit:"faecb196815e248d3ecfb03c680a4507229c2a56", GitTreeState:"clean", BuildDate:"2021-01-13T13:20:00Z", GoVersion:"go1.15.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| <p>In Kubernetes 1.25, you can now use the alpha feature <code>matchLabelKeys</code> to resolve this issue. Because there is an automatically generated <code>pod-template-hash</code> for each version of a deployment added to each pod, that and your app specific label can provide the hashed value to prevent mis-scheduling.</p>
<pre><code>topologySpreadConstraints:
- maxSkew: 1
topologyKey: kubernetes.io/hostname
whenUnsatisfiable: DoNotSchedule
matchLabelKeys:
- app
- pod-template-hash
</code></pre>
<blockquote>
<p>matchLabelKeys is a list of pod label keys to select the pods over
which spreading will be calculated. The keys are used to lookup values
from the pod labels, those key-value labels are ANDed with
labelSelector to select the group of existing pods over which
spreading will be calculated for the incoming pod. Keys that don't
exist in the pod labels will be ignored. A null or empty list means
only match against the labelSelector.</p>
<p>With matchLabelKeys, users don't need to update the pod.spec between
different revisions. The controller/operator just needs to set
different values to the same label key for different revisions. The
scheduler will assume the values automatically based on
matchLabelKeys. For example, if users use Deployment, they can use the
label keyed with pod-template-hash, which is added automatically by
the Deployment controller, to distinguish between different revisions
in a single Deployment.</p>
<p>Note: The matchLabelKeys field is an alpha field added in 1.25. You
have to enable the MatchLabelKeysInPodTopologySpread feature gate in
order to use it.</p>
</blockquote>
<p>From the <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/" rel="nofollow noreferrer">docs</a></p>
|
<p>i have a helm chart having multiple StatefulSet templates.<br />
i want one of the template to be deployed only during <code>helm install</code><br />
and it should not teminate when doing <code>helm upgrade</code>.</p>
<pre><code>{{- if .Release.IsInstall -}}
apiVersion: apps/v1
kind: StatefulSet
metadata:
.....
.....
{{- end -}}
</code></pre>
<p>as you can see i want this <code>sts</code> to be deployed only during installation and also<br />
i want it to be unaffected(not terminated) when during <code>helm upgrade</code></p>
| <p>After going through the Helm documentation, I found that it provides a solution using Helm hooks:</p>
<pre><code>"helm.sh/hook": pre-install
"helm.sh/hook-weight": "-10"
</code></pre>
<p><a href="https://helm.sh/docs/topics/charts_hooks/#:%7E:text=Description-,pre%2Dinstall,-Executes%20after%20templates" rel="nofollow noreferrer">https://helm.sh/docs/topics/charts_hooks/#:~:text=Description-,pre%2Dinstall,-Executes%20after%20templates</a></p>
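<p>A minimal sketch of where those annotations go in the StatefulSet template (the name is a placeholder):</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-sts   # placeholder name
  annotations:
    "helm.sh/hook": pre-install
    "helm.sh/hook-weight": "-10"
spec:
  ...
</code></pre>
<p>Since hook resources are not tracked as part of the release, a later <code>helm upgrade</code> should leave the StatefulSet untouched, which also makes the <code>.Release.IsInstall</code> guard unnecessary.</p>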
|
<p>I have deployed Jupyterhub on Kubernetes following this guide <a href="https://zero-to-jupyterhub.readthedocs.io/en/latest/" rel="nofollow noreferrer">link</a>, I have setup nbgrader and ngshare on jupyterhub using this guide <a href="https://nbgrader.readthedocs.io/en/stable/configuration/jupyterhub_config.html" rel="nofollow noreferrer">link</a>, I have a Learning management system(LMS) similar to moodle, I want to view the list of assignments both for instructors and students I can do that by using the rest API of Jupyternotebook like this</p>
<pre><code>import requests
import json
api_url = 'http://xx.xxx.xxx.xx/user/kevin/api/contents/release'
payload = {'token': 'XXXXXXXXXXXXXXXXXXXXXXXXXX'}
r = requests.get(api_url,params = payload)
r.raise_for_status()
users = r.json()
print(json.dumps(users, indent = 1))
</code></pre>
<p>now I want to grade all submitted assignments using the nbgrader command <code>nbgrader autograde "Assignment1"</code>, I can do that by logging into instructor notebook server and going to terminal and running the same command but I want to run this command on the notebook server terminal using Jupyter Notebook server rest API, so that instructor clicks on grade button on the LMS frontend which sends a request to LMS backend and which sends a rest API request(which has the above command) to jupyter notebook , which runs the command on terminal and returns the response to LMS backend. I cannot find anything similar on the Jupyter Notebook API <a href="https://jupyter-server.readthedocs.io/en/latest/developers/rest-api.html" rel="nofollow noreferrer"> documentation</a> there is endpoint to start a terminal but not how to run commands on it.</p>
| <p>An easier way to invoke the terminal from Jupyter notebooks is to use the <code>%%bash</code> magic function and treat the Jupyter cell as a terminal:</p>
<pre><code>%%bash
head xyz.txt
pip install keras
git add model.h5.dvc data.dvc metrics.json
git commit -m "Second model, trained with 2000 images"
</code></pre>
<p>For more information refer to this <a href="https://www.dominodatalab.com/blog/lesser-known-ways-of-using-notebooks" rel="nofollow noreferrer">Advance Jupyter notebook Tricks.</a></p>
<p><strong>Check this <a href="https://stackoverflow.com/questions/54475896/interact-with-jupyter-notebooks-via-api">Link</a> to Interact with Jupyter Notebooks via API</strong></p>
|
<p>I am trying to configure <a href="https://github.com/bitnami-labs/sealed-secrets" rel="nofollow noreferrer">Bitnami SealedSecrets</a> with <a href="https://argoproj.github.io/cd/" rel="nofollow noreferrer">ArgoCD</a> and <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/kustomization/" rel="nofollow noreferrer">Kustomize</a>.</p>
<p>I have managed to encrypt the secrets using the kubeseal CLI, these are already deployed on the Kubernetes cluster as Sealed secrets and can be unsealed by the Sealed Secret Controller running on the cluster. The unsealed Secrets contain the expected values. I have defined the secrets using Kustomize Secret Generators - as described in this tutorial: <a href="https://faun.pub/sealing-secrets-with-kustomize-51d1b79105d8" rel="nofollow noreferrer">Sealing Secrets with Kustomize</a>. This is also working fine, since ArgoCD recognizes that there should be Secrets generated.</p>
<p>However, ArgoCD expects the secrets to be empty, as they are defined as empty in the Secret Generator part of my kustomization.yaml for the application:</p>
<pre class="lang-yaml prettyprint-override"><code>secretGenerator:
- name: secret1
type: Opaque
- name: secret2
type: Opaque
- name: secret3
type: Opaque
...
</code></pre>
<p>Since ArgoCD expects the secrets to be empty, they are detected to be "out of sync" after the Sealed Secrets Controller unseals and decrypts the secrets:</p>
<p><a href="https://i.stack.imgur.com/VO9kc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VO9kc.png" alt="SecretOutOfSync" /></a></p>
<p>Since ArgoCD thinks that the secrets should be empty, these are replaced by empty secrets. Then the Sealed Secrets Operator updates the Secrets once again and populates the data fields with the decrypted data - leading to an endless loop of ArgoCD synchronization.</p>
<p>The secrets are marked to be managed by Bitnami Sealed Secrets using the <code>sealedsecrets.bitnami.com/managed: "true"</code> annotation. So they are being updated by the Sealed Secrets controller.</p>
<p>How could I change the manifest to make sure that the unsealed secrets are recognized as "in sync" and ArgoCD doesn't keep on syncing beceause of the "OutOfSync" status of the unsealed secrets? (Which seems to be caused by the decrypted data in the unsealed secrets - as shown in the diff on the screenshot above.)</p>
| <p>It is possible to ignore some differences.</p>
<p>This can be defined in the ArgoCD Application manifest:</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: my-app
...
spec:
project: my-project
...
syncPolicy:
...
syncOptions:
...
- RespectIgnoreDifferences=true
...
ignoreDifferences:
- kind: Secret
jsonPointers:
- /data
</code></pre>
<p>The <code>ignoreDifferences</code> specification tells ArgoCD to ignore the differences in the specified path. (In our case everything under <code>/data</code> for secrets).</p>
<p>It is also important to avoid applying the changes. This can be defined using the <code>RespectIgnoreDifferences</code> syncOption.</p>
<p>After adding the ignoreDifferences entry for the Secret kind and setting RespectIgnoreDifferences to true, the Sync Status of the application is shown as "Synced" and the endless loop of syncing has stopped.</p>
|
<p>I have kubernetes clusters with prometheus and grafana for monitoring and I am trying to build a dashboard panel that would display the number of pods that have been restarted in the period I am looking at.</p>
<p>Atm I have this query that fills a vector with 1 if the pod's creation time is in the range (meaning it has been restarted during this period) and -1 otherwise.</p>
<p><code>-sgn((time() - kube_pod_created{cluster="$cluster"}) - $__range_s)</code></p>
<p><a href="https://i.stack.imgur.com/j8BjL.png" rel="nofollow noreferrer">what this looks like</a></p>
<p>Is there a way to count the number of positive values in this vector and display it? Like in this example just have a box with red 1 inside.
Or maybe there is a better way to accomplish what I am trying.</p>
| <p>To display pod restarts, Prometheus provides the counter metric <code>kube_pod_container_status_restarts_total</code> (exposed by kube-state-metrics), which records container restarts.</p>
<p>To calculate the restarts:</p>
<ul>
<li>If you want to see all pods then,
<code>sum(increase(kube_pod_container_status_restarts_total{namespace="My-Namespace"}[5m])) by(pod)</code></li>
<li>or If you want Particular Pod then use,
<code>sum(increase(kube_pod_container_status_restarts_total{namespace="My-Namespace", pod="My-Pod"}[5m]))</code></li>
<li>or to show by container wise use
<code>sum(increase(kube_pod_container_status_restarts_total{namespace="My-Namespace", pod="My-Pod"}[5m])) by(container)</code></li>
</ul>
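<p>To get the single number asked for (how many pods restarted during the dashboard's time range), a sketch for a Grafana stat panel could be the following, assuming kube-state-metrics exposes the metric with a <code>cluster</code> label matching your <code>$cluster</code> variable and using Grafana's <code>$__range</code> variable:</p>
<pre><code>count(sum by (pod) (increase(kube_pod_container_status_restarts_total{cluster="$cluster"}[$__range])) > 0)
</code></pre>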
|
<p>I want to trigger remote Jenkins job from my container(k8s).</p>
<p>Currently, I'm using:</p>
<p><code>curl -k -X POST -u $USER:$JENKINS_TOKEN "${JENKINS_URL}/job/{$JOB_NAME}/buildWithParameters?token=12345"</code></p>
<p>But this information($USER,$JENKINS_TOKEN) is displayed in ArgoUI, is there any secure/other way to save credentials for remote trigger?</p>
| <p>You can try one of the following.</p>
<p>Save the password in a file called <code>password-file</code> and read from that</p>
<pre><code>curl -k -X POST -u $USER:$(cat password-file) "${JENKINS_URL}/job/{$JOB_NAME}/buildWithParameters?token=12345"
</code></pre>
<p>Accept credentials from the STDIN.</p>
<pre><code>curl -k -X POST "${JENKINS_URL}/job/{$JOB_NAME}/buildWithParameters?token=12345" -K- <<< "--user $USER:$JENKINS_TOKEN"
</code></pre>
<p>You can also try using <code>--netrc-file</code> option with curl where you can store the username and password in a file itself.</p>
<p>file</p>
<pre><code>machine JENKINS_HOST login USERNAME password PASSWORD
</code></pre>
<p>Curl Command</p>
<pre><code>curl -k -X POST --netrc-file my-password-file "${JENKINS_URL}/job/{$JOB_NAME}/buildWithParameters?token=12345"
</code></pre>
|
<p>I am trying to write a cron job which takes an excel file from SFTP server and upload the data on mongo DB.</p>
<p>Now, suppose there are errors in cronjob like sftp connection failure due to some credentials issue or path from where the file needs to be picked up from is not present, then the control is going inside catch in the below code snippet but instead of marking the cron as Error, it is showing as completed on kubernetes.</p>
<p>I have attached a basic sample code below of the cron job to get an idea about what I am trying to say.</p>
<pre><code>exports.fsmOverallPerformanceData = async (req, res) => {
let sftp = new Client;
const fileName = "FSM_Performance_Data.xlsx"
const remote = "/home/SI_MARCOM_TOPS/SI_JHDSFTP/SND/"
const remotePath = remote + fileName
const localePath = "./fsmperformance.csv";
sftp.connect(config.sftpSetting, 'once').then(() => {
sftp.fastGet(remotePath, localePath, {}).then(() => {
}).catch((err) => {
console.log(err, 'FSM Performance fastGet method error'); // this is getting printed
})
}).catch((err) => {
console.log(err, 'SFTP Connect method error'); // this is getting printed
});
setTimeout(() => {
process.exit();
}, 300000);
}
</code></pre>
<p>Thanks in advance for any suggestions or help.</p>
| <p>You must ensure that the process exits with a non-zero exit code, otherwise Kubernetes considers the run successful. Your <code>catch</code> blocks could call <code>process.exit(1)</code> (or re-throw the error).</p>
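<p>A minimal sketch based on the snippet in the question, exiting non-zero on the error paths so Kubernetes marks the Job/CronJob run as failed:</p>
<pre><code>sftp.connect(config.sftpSetting, 'once').then(() => {
  sftp.fastGet(remotePath, localePath, {}).then(() => {
    // ...process the file and upload to MongoDB...
    process.exit(0); // success
  }).catch((err) => {
    console.log(err, 'FSM Performance fastGet method error');
    process.exit(1); // non-zero exit => run is marked as failed
  });
}).catch((err) => {
  console.log(err, 'SFTP Connect method error');
  process.exit(1);
});
</code></pre>
<p>With the error paths exiting immediately, the fixed five-minute <code>setTimeout</code> is no longer needed for them.</p>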
|
<p>I'm using minikube on WSL2.
I deployed a simple flask app image and write a LoadBalancer to expose the service.</p>
<p>My question is, how do I modify service manifest to get the same result as expose?</p>
<p>Below are more details.</p>
<p>flask app deployment yaml.</p>
<p>rss.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: esg
spec:
selector:
matchLabels:
app: rss
replicas: 3
template:
metadata:
labels:
app: rss
spec:
containers:
- name: rss
image: "idioluck/00esg_rss:v01"
ports:
- containerPort: 5000
</code></pre>
<p>service yaml (I tried nodeport and loadbalanacer either.)</p>
<p>rss_lb.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: esg-lb
spec:
type: NodePort # LoadBalancer
selector:
app: rss
ports:
- protocol: TCP
port: 8080
targetPort: 5000
</code></pre>
<p>kubectl command is</p>
<pre><code>sjw@DESKTOP-MFPNHRC:~/esg_kube/kubesvc/rss$ kubectl apply -f rss.yaml
deployment.apps/esg created
sjw@DESKTOP-MFPNHRC:~/esg_kube/kubesvc/rss$ kubectl apply -f rss_lb.yaml
service/esg-lb created
sjw@DESKTOP-MFPNHRC:~/esg_kube/kubesvc/rss$ kubectl get pods
NAME READY STATUS RESTARTS AGE
esg-757f659b4-4vndc 1/1 Running 0 13s
esg-757f659b4-4wd2w 1/1 Running 0 13s
esg-757f659b4-sf5q6 1/1 Running 0 13s
sjw@DESKTOP-MFPNHRC:~/esg_kube/kubesvc/rss$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/esg-757f659b4-4vndc 1/1 Running 0 16s
pod/esg-757f659b4-4wd2w 1/1 Running 0 16s
pod/esg-757f659b4-sf5q6 1/1 Running 0 16s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/esg-lb LoadBalancer 10.101.221.26 <pending> 8080:31308/TCP 8s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/esg 3/3 3 3 16s
NAME DESIRED CURRENT READY AGE
replicaset.apps/esg-757f659b4 3 3 3 16s
</code></pre>
<p>Exteranl ip is pending.
so i delete loabdbalancer and use expose</p>
<pre><code>sjw@DESKTOP-MFPNHRC:~/esg_kube/kubesvc/rss$ kubectl expose deployment esg --type=LoadBalancer --port=8080
service/esg exposed
sjw@DESKTOP-MFPNHRC:~/esg_kube/kubesvc/rss$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
esg LoadBalancer 10.99.208.98 127.0.0.1 8080:30929/TCP 46s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 23h
</code></pre>
<p>The service has been successfully exposed. And the service is a load balancer.</p>
| <p>Your <code>LoadBalancer</code> type service is showing <code>Pending</code> as status because it is waiting for you to provision an external Load Balancer like AWS's Elastic Load Balancer or GCP's Load Balancer. LoadBalancer type services are usually used together with managed Kubernetes service e.g EKS, GKE etc.</p>
<p>On the other hand, you're able to expose your service because it already has <code>clusterIP</code> assigned to it.</p>
<p>If you want to use LB in Minikube, this official <a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/#loadbalancer-access" rel="nofollow noreferrer">doc</a> may help you. Otherwise, you can use <code>NodePort</code> type service directly to expose your flask app.</p>
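<p>For reference, a minimal sketch of that doc's approach (the tunnel has to stay running in a separate terminal):</p>
<pre><code># in a separate terminal, keep this running
minikube tunnel
# the LoadBalancer service should now get an EXTERNAL-IP instead of <pending>
kubectl get svc esg-lb
</code></pre>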
|
<p>I have been able to generate the following output with plugin of kubectl in following way :</p>
<pre><code>./kubectl get namespaces | awk '{ print $1 }' | while read x; do ./kubectl resource-capacity --sort cpu.util --namespace $x --util --pod-count --pods; done
Error getting Pod Metrics: pods.metrics.k8s.io is forbidden: User "u-567" cannot list resource "pods" in API group "metrics.k8s.io" in the namespace "NAME"
For this to work, metrics-server needs to be running in your cluster
NODE POD CPU REQUESTS CPU LIMITS CPU UTIL MEMORY REQUESTS MEMORY LIMITS MEMORY UTIL POD COUNT
* * 16550m (16%) 45060m (45%) 0Mi (0%) 38174Mi (9%) 33492Mi (8%) 0Mi (0%) 16/1540
ip-10-11-7-77.eu-west-1.compute.internal * 6150m (76%) 14160m (177%) 139m (1%) 16784Mi (53%) 9632Mi (30%) 3126Mi (9%) 5/110
ip-10-11-7-77.eu-west-1.compute.internal anomaly-detector-water-transformer-default-msxhv-deploymenrnv85 1035m (12%) 3040m (38%) 46m (0%) 2138Mi (6%) 2376Mi (7%) 300Mi (0%)
ip-10-11-7-77.eu-west-1.compute.internal anomaly-detector-blades-transformer-default-gmvb2-deploymezzt5r 1035m (12%) 3040m (38%) 45m (0%) 2138Mi (6%) 2376Mi (7%) 292Mi (0%)
ip-10-11-7-77.eu-west-1.compute.internal leadec-uk-inference-m1-predictor-default-zvxkb-deployment-8mscd 1035m (12%) 3040m (38%) 43m (0%) 2138Mi (6%) 2376Mi (7%) 277Mi (0%)
ip-10-11-7-77.eu-west-1.compute.internal datascience-0 2010m (25%) 2000m (25%) 8m (0%) 8232Mi (26%) 128Mi (0%) 2259Mi (7%)
ip-10-11-7-77.eu-west-1.compute.internal anomaly-detector-leadec-predictor-default-tqjsg-deployment4bqdc 1035m (12%) 3040m (38%) 0Mi (0%) 2138Mi (6%) 2376Mi (7%) 0Mi (0%)
ip-10-11-7-134.eu-west-1.compute.internal * 2070m (25%) 6080m (76%) 88m (1%) 4276Mi (13%) 4752Mi (15%) 597Mi (1%) 2/110
ip-10-11-7-134.eu-west-1.compute.internal leadec-uk-inference-m1-transformer-default-84tk6-deploymeng4qjf 1035m (12%) 3040m (38%) 45m (0%) 2138Mi (6%) 2376Mi (7%) 298Mi (0%)
ip-10-11-7-134.eu-west-1.compute.internal leadec-uk-synthetic-f65535db1d5e1078fca102b9506391d8-deplop74ww 1035m (12%) 3040m (38%) 44m (0%) 2138Mi (6%) 2376Mi (7%) 299Mi (0%)
ip-10-11-7-123.eu-west-1.compute.internal * 1035m (12%) 3040m (38%) 55m (0%) 2138Mi (6%) 2376Mi (7%) 317Mi (1%) 1/110
ip-10-11-7-123.eu-west-1.compute.internal leadec-uk-synthetic-58a5440150937e2198bfdd610a109b95-deplomfn97 1035m (12%) 3040m (38%) 55m (0%) 2138Mi (6%) 2376Mi (7%) 317Mi (1%)
ip-10-11-7-190.eu-west-1.compute.internal * 1085m (13%) 3540m (44%) 50m (0%) 2148Mi (6%) 2476Mi (7%) 327Mi (1%) 2/110
ip-10-11-7-190.eu-west-1.compute.internal leadec-uk-synthetic-year3-m4-predictor-default-ttdkp-deplovvtj6 1035m (12%) 3040m (38%) 47m (0%) 2138Mi (6%) 2376Mi (7%) 267Mi (0%)
ip-10-11-7-190.eu-west-1.compute.internal random-experiment-random-568ff69c58-j7lx4 50m (0%) 500m (6%) 3m (0%) 10Mi (0%) 100Mi (0%) 60Mi (0%)
ip-10-11-7-44.eu-west-1.compute.internal * 1035m (12%) 3040m (38%) 46m (0%) 2138Mi (6%) 2376Mi (7%) 268Mi (0%) 1/110
ip-10-11-7-44.eu-west-1.compute.internal leadec-uk-synthetic-year4-m4-predictor-default-7d9cb-deplofc5kf 1035m (12%) 3040m (38%) 46m (0%) 2138Mi (6%) 2376Mi (7%) 268Mi (0%)
ip-10-11-7-57.eu-west-1.compute.internal * 1035m (12%) 3040m (38%) 46m (0%) 2138Mi (6%) 2376Mi (7%) 290Mi (0%) 1/110
ip-10-11-7-57.eu-west-1.compute.internal anomaly-detector-leadec-transformer-default-7j42s-deploymeqhxtd 1035m (12%) 3040m (38%) 46m (0%) 2138Mi (6%) 2376Mi (7%) 290Mi (0%)
ip-10-11-7-113.eu-west-1.compute.internal * 1035m (25%) 3040m (76%) 43m (1%) 2138Mi (13%) 2376Mi (15%) 298Mi (1%) 1/110
ip-10-11-7-113.eu-west-1.compute.internal leadec-uk-synthetic-m4-transformer-default-9d4pz-deploymenhtfsv 1035m (25%) 3040m (76%) 43m (1%) 2138Mi (13%) 2376Mi (15%) 298Mi (1%)
ip-10-11-7-168.eu-west-1.compute.internal * 0Mi (0%) 0Mi (0%) 0Mi (0%) 0Mi (0%) 0Mi (0%) 0Mi (0%) 0/110
</code></pre>
<p>I want to append namespace in each loop as column, how can i do the same.</p>
<p>Thanks in advance</p>
| <p>The answer is adjusted as per the comments by OP.</p>
<p>The command to print the pod resources by node:</p>
<pre><code>kubectl describe node | perl -0777 -wnE '@pods = /Name:\s+([^ ]+)\n.*?Non-terminated\s+Pods:\s+\([0-9]+\s+in\s+total\)\n(.*?)\nAllocated resources:/gs;say for @pods'
</code></pre>
<p>First step: write the output for each node to its respective file.</p>
<pre><code>kubectl describe node | perl -0777 -wnE '@pods = /Name:\s+([^ ]+)\n.*?Non-terminated\s+Pods:\s+\([0-9]+\s+in\s+total\)\n(.*?)\nAllocated resources:/gs;say for @pods'|awk -v OFS=',' '/^[^ ]+/{node=$0;next} {print $0 > node ".csv"}'
</code></pre>
<p>The above command would create the files like below:</p>
<pre><code>ls -lrt *.csv
-rw-rw-r-- 1 p.. p.. 2800 Sep 8 12:43 dev-kube-worker-3.csv
-rw-rw-r-- 1 p.. p.. 2782 Sep 8 12:43 dev-kube-worker-2.csv
-rw-rw-r-- 1 p.. p.. 2800 Sep 8 12:43 dev-kube-worker-1.csv
-rw-rw-r-- 1 p.. p.. 1551 Sep 8 12:43 dev-kube-controller-1.csv
</code></pre>
<p><strong>Final modification:</strong> however, the content of these files would not yet be in CSV format, so to convert it to CSV:</p>
<pre><code>kubectl describe node | perl -0777 -wnE '@pods = /Name:\s+([^ ]+)\n.*?Non-terminated\s+Pods:\s+\([0-9]+\s+in\s+total\)\n(.*?)\nAllocated resources:/gs;say for @pods'|awk -v OFS=',' '/^[^ ]+/{node=$0;next} {$1=$1;print $0 > node ".csv"}'
</code></pre>
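<p>If you would rather keep your original per-namespace loop and simply tag each output line with the namespace as an extra column, a minimal sketch would be:</p>
<pre><code>./kubectl get namespaces --no-headers | awk '{print $1}' | while read x; do
  ./kubectl resource-capacity --sort cpu.util --namespace "$x" --util --pod-count --pods \
    | awk -v ns="$x" -v OFS=',' '{print ns, $0}'
done
</code></pre>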
|
<p>So i have this project that i already deployed in GKE and i am trying to make the CI/CD from github action. So i added the workflow file which contains</p>
<pre><code>name: Build and Deploy to GKE
on:
push:
branches:
- main
env:
PROJECT_ID: ${{ secrets.GKE_PROJECT }}
GKE_CLUSTER: ${{ secrets.GKE_CLUSTER }} # Add your cluster name here.
GKE_ZONE: ${{ secrets.GKE_ZONE }} # Add your cluster zone here.
DEPLOYMENT_NAME: ems-app # Add your deployment name here.
IMAGE: ciputra-ems-backend
jobs:
setup-build-publish-deploy:
name: Setup, Build, Publish, and Deploy
runs-on: ubuntu-latest
environment: production
steps:
- name: Checkout
uses: actions/checkout@v2
# Setup gcloud CLI
- uses: google-github-actions/setup-gcloud@94337306dda8180d967a56932ceb4ddcf01edae7
with:
service_account_key: ${{ secrets.GKE_SA_KEY }}
project_id: ${{ secrets.GKE_PROJECT }}
# Configure Docker to use the gcloud command-line tool as a credential
# helper for authentication
- run: |-
gcloud --quiet auth configure-docker
# Get the GKE credentials so we can deploy to the cluster
- uses: google-github-actions/get-gke-credentials@fb08709ba27618c31c09e014e1d8364b02e5042e
with:
cluster_name: ${{ env.GKE_CLUSTER }}
location: ${{ env.GKE_ZONE }}
credentials: ${{ secrets.GKE_SA_KEY }}
# Build the Docker image
- name: Build
run: |-
docker build \
--tag "gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA" \
--build-arg GITHUB_SHA="$GITHUB_SHA" \
--build-arg GITHUB_REF="$GITHUB_REF" \
.
# Push the Docker image to Google Container Registry
- name: Publish
run: |-
docker push "gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA"
# Set up kustomize
- name: Set up Kustomize
run: |-
curl -sfLo kustomize https://github.com/kubernetes-sigs/kustomize/releases/download/v3.1.0/kustomize_3.1.0_linux_amd64
chmod u+x ./kustomize
# Deploy the Docker image to the GKE cluster
- name: Deploy
run: |-
./kustomize edit set image LOCATION-docker.pkg.dev/PROJECT_ID/REPOSITORY/IMAGE:TAG=$GAR_LOCATION-docker.pkg.dev/$PROJECT_ID/$REPOSITORY/$IMAGE:$GITHUB_SHA
./kustomize build . | kubectl apply -k ./
kubectl rollout status deployment/$DEPLOYMENT_NAME
kubectl get services -o wide
</code></pre>
<p>but when the workflow gets to the deploy part, it shows an error</p>
<pre><code>The Service "ems-app-service" is invalid: metadata.resourceVersion: Invalid value: "": must be specified for an update
</code></pre>
<p>Now i have searched that this is actually not true because the resourceVersion is supposed to change for every update so i just removed it</p>
<p>Here is my kustomization.yaml</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- service.yaml
- deployment.yaml
</code></pre>
<p>my deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
generation: 1
labels:
app: ems-app
name: ems-app
namespace: default
spec:
progressDeadlineSeconds: 600
replicas: 3
revisionHistoryLimit: 10
selector:
matchLabels:
app: ems-app
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
labels:
app: ems-app
spec:
containers:
- image: gcr.io/ciputra-nusantara/ems@sha256:70c34c5122039cb7fa877fa440fc4f98b4f037e06c2e0b4be549c4c992bcc86c
imagePullPolicy: IfNotPresent
name: ems-sha256-1
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
</code></pre>
<p>and my service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
cloud.google.com/neg: '{"ingress":true}'
finalizers:
- service.kubernetes.io/load-balancer-cleanup
labels:
app: ems-app
name: ems-app-service
namespace: default
spec:
clusterIP: 10.88.10.114
clusterIPs:
- 10.88.10.114
externalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- nodePort: 30261
port: 80
protocol: TCP
targetPort: 80
selector:
app: ems-app
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: 34.143.255.159
</code></pre>
| <p>As the title of this question is more Kubernetes related than GCP related, I will answer since I had this same problem using AWS EKS.</p>
<p><code>metadata.resourceVersion: Invalid value: 0x0: must be specified for an update</code> is an error that may appear when using <code>kubectl apply</code>.</p>
<p><code>Kubectl apply</code> makes a <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/" rel="noreferrer">three-way-merge</a> between your local file, the live kubernetes object manifest and the annotation <code>kubectl.kubernetes.io/last-applied-configuration</code> in that live object manifest.</p>
<p>So, for some reason, the value <code>resourceVersion</code> managed to be written in your <code>last-applied-configuration</code>, probably because of someone exporting the live manifests to a file, modifying it, and applying it back again.</p>
<p>When you try to apply your new local file that doesn't have that value -and should not have it-, but the value is present in the <code>last-applied-configuration</code>, it thinks the field should be removed from the live manifest and specifically sends it in the subsequent <code>patch</code> operation as <code>resourceVersion: null</code>, which should get rid of it. But it won't work: the local file breaks the rules (for reasons beyond my knowledge as of now) and becomes invalid.</p>
<p>As <a href="https://feichashao.com/kubectl-apply-fail/" rel="noreferrer">feichashao</a> mentions, the way to solve it is to delete the <code>last-applied-configuration</code> annotation and apply your local file again.</p>
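<p>A minimal sketch of removing it for the service in the question (the trailing dash deletes the annotation):</p>
<pre><code>kubectl -n default annotate service ems-app-service kubectl.kubernetes.io/last-applied-configuration-
kubectl apply -k ./
</code></pre>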
<p>Once you have solved it, your <code>kubectl apply</code> output will look like:</p>
<pre><code>Warning: resource <your_resource> is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
</code></pre>
<p>And your live manifests will be updated.</p>
|
<p>I have a deployment with a pod that has his configuration defined through a lot of environment variables. Now I want to add a sidecar container that requires exactly the same environment variables as the already defined container. Instead of just copy/pasting all the variables I'd like to stick to the DRY principle.</p>
<p>The current definition looks something like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
spec:
template:
spec:
containers:
- name: container1
env:
- name: MYSQL_HOST
value: {{ template "mariadb.primary.fullname" .Subcharts.mariadb }}
- name: MYSQL_DATABASE
value: {{ .Values.mariadb.auth.database | quote }}
- name: MYSQL_USER
valueFrom:
secretKeyRef:
name: {{ .Values.secretName | default (printf "%s-%s" .Release.Name "db") }}
key: {{ .Values.usernameKey | default "db-username" }}
- name: MYSQL_PASSWORD
valueFrom:
secretKeyRef:
name: {{ .Values.secretName | default (printf "%s-%s" .Release.Name "db") }}
key: {{ .Values.passwordKey | default "db-password" }}
</code></pre>
<p>The recommended way to reuse environment variables is through a ConfigMap.
Now I can create this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: config
data:
MYSQL_HOST: {{ template "mariadb.primary.fullname" .Subcharts.mariadb }}
MYSQL_DATABASE: {{ .Values.mariadb.auth.database | quote }}
---
apiVersion: apps/v1
kind: Deployment
spec:
template:
spec:
containers:
- name: container1
envFrom:
- configMapRef:
name: config
- name: container2
envFrom:
- configMapRef:
name: config
</code></pre>
<p>But now I would still need to repeat myself for <code>MYSQL_USER</code> and <code>MYSQL_PASSWORD</code> (and for the 20 other env variables that are like this).</p>
<p>Is it possible to reuse the existing secrets <code>MYSQL_USER</code> and <code>MYSQL_PASSWORD</code> as environment variables without creating duplicates on <code>container1</code> and <code>container2</code>?</p>
| <p>You can create a <a href="https://docs.helm.sh/docs/chart_template_guide/named_templates/" rel="nofollow noreferrer">named template</a> to do this. If you're only going to be using this template within a single Kubernetes object (multiple containers within the same Deployment) you could put it in that object's <code>deployment.yml</code> file, or into the shared <code>_helpers.tpl</code> file.</p>
<pre class="lang-yaml prettyprint-override"><code>{{- define "env.mysql" -}}
- name: MYSQL_HOST
value: {{ template "mariadb.primary.fullname" .Subcharts.mariadb }}
- name: MYSQL_DATABASE
value: {{ .Values.mariadb.auth.database | quote }}
- name: ET
value: cetera
{{- end -}}
</code></pre>
<p>Then wherever you need to use that block of variables, you can <code>include</code> it, and <code>indent</code> it to the correct level.</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
template:
spec:
containers:
- name: container1
env:
{{- include "env.mysql" . | indent 10 }}
- name: OTHER_ENVIRONMENT_VARIABLES
value: are okay too
</code></pre>
<p>Having <code>define</code>d the template once, you can <code>include</code> it as many times as you need to.</p>
|
<p>k8s <a href="https://kubernetes.io/docs/concepts/containers/images/" rel="nofollow noreferrer">documentation</a> about <code>IfNotPresent</code> image pull policy states:</p>
<blockquote>
<p>the image is pulled only if it is not already present locally.</p>
</blockquote>
<p>I'm not sure what "locally" actually means. Does it mean "locally in a k8s cluster" or "locally on a node where pod is created"? If I have 2 pods with same images but different image pull policies, does pulling the image for the 1st pod affects another pod?</p>
| <p>It simply means: <strong>locally on the worker node on which the Pod runs</strong>.</p>
<blockquote>
<p>If I have 2 pods with same images but different image pull policies,
does pulling the image for the 1st pod affects another pod?</p>
</blockquote>
<p>If it's the same image there won't be any issue either way: once it has been pulled for the first Pod it is cached on that node, and the image content is the same each time.</p>
<p>However, the policy works for each deployment separately, so if it's set to <strong>Always</strong>, the image will be <strong>pulled</strong> every time a container for that specific deployment starts.</p>
<p>If an image with the same <strong>TAG</strong> and URI as mentioned in the YAML is already present <strong>locally</strong> on the node and the policy is <strong>IfNotPresent</strong>, the local copy will be used, since the image is already available.</p>
|
<p>The <a href="https://kubernetes.io/docs/reference/using-api/server-side-apply/" rel="nofollow noreferrer">page</a> on server-side apply in the Kubernetes docs suggests that it can be enabled or disabled (e.g., the docs say, "If you have Server Side Apply enabled ...").</p>
<p>I have a GKE cluster and I would like to check if server-side apply is enabled. How can I do this?</p>
| <p>You can try creating any object like <strong>namespace</strong> or so and try checking the YAML output using the command you will get an idea if SSA is enabled or not.</p>
<p><strong>Command</strong> :</p>
<p><code>kubectl create ns test-ssa</code></p>
<p>Get the created namespace</p>
<pre><code>kubectl get ns test-ssa -o yaml
</code></pre>
<p>If there is <strong>managedFields</strong> existing in output SSA is working.</p>
<p><strong>Server-side-apply</strong> i think introduced around <strong>K8s</strong> version <strong>1.14</strong> and now it's in <strong>GA</strong> with k8s version <strong>1.22</strong>. Wiht GKE i have noticed it's already been part of it alpha or beta.</p>
<p>If you are using the <strong>HELM</strong> on your <strong>GKE</strong> you might have noticed the <strong>Service Side Apply</strong>.</p>
|
<p>I am looking for an open-source WAF solution that could be deployed in Kubernetes. I've looked a ModSecurity but it seems like good rules cost money and it also requires lots of tuning.</p>
| <blockquote>
<p>it seems like good rules cost money and it also requires lots of tuning</p>
</blockquote>
<p>How can you define the "good rules"?</p>
<p>There are "good rules" in my opinion, called <a href="https://coreruleset.org" rel="noreferrer">CoreRuleSet</a>, it's absolutely free. And I don't think you need to tune it too much. CRS has a good <a href="https://github.com/coreruleset/coreruleset" rel="noreferrer">community</a>, you can ask anything. Its development is very active.</p>
<p>I think you should take a look.</p>
<p>Besides ModSecurity, there is a new competitor called <a href="https://coraza.io/" rel="noreferrer">Coraza</a>. It also uses the SecLang format for its configuration, and it's 100% compatible with CRS.</p>
|
<p><strong>The application:</strong><br />
I'm building a statistics app which allows users to login, upload data files, and do some statistics calculations with the data. The plots generated would be displayed to them on the browser and the plots would need to be saved to disk too (to avoid having to re-generate them the next time the user wants to view it).</p>
<p><strong>Files anticipated:</strong></p>
<ol>
<li>The data files the user uploads (<code>xls</code> or <code>csv</code> files).</li>
<li>The generated plots (<code>png</code> files).</li>
<li>User preferences/settings of how they use the app.</li>
<li>Log files of actions taken by the user, errors encountered, etc. (logger files generated by <code>Python</code>, <code>Julia</code> or <code>R</code>)</li>
</ol>
<p><strong>My initial assumptions:</strong></p>
<ol>
<li>and 2. I've seen <a href="https://docs.docker.com/storage/volumes/" rel="nofollow noreferrer">volumes</a>, but how will the user files be stored there? Do I just create a folder for a user and store all their data there? Any security issues you see?</li>
<li>I assume these would get stored in an SQL or no-SQL database).</li>
<li>Aren't logs normally sent to a separate Docker container that's specifically meant for storing logs? So rather than use a standard logger, would it be better to send the log message to a container or database, using <code>RabbitMQ</code>?</li>
</ol>
<p>Would there be any sample project on GitHub or GitLab that I could refer?<br />
If Docker wasn't used; if the app was just deployed on a server, the user files would simply be stored in folders on the server, correct? I believe it is "frowned upon" to store files as BLOBs <a href="https://stackoverflow.com/questions/29971148/mongodb-storing-user-files">in a database</a>.</p>
| <p>If at all possible, you should store all of the data in a database. If neither the input files nor the rendered charts will be too large (their size can be reasonably measured in kilobytes, say) then you could store it in a binary-object column in the database. If you can do this, then your application will not need any volumes or other persistent storage (the database will) and this will make it much easier to scale and update your application.</p>
<p>If you're running this in a cloud environment, using a hosted storage system like AWS S3 also makes sense, checks the same boxes, and avoids the minor ugliness of storing unstructured binary data in a structured data store.</p>
<p>Volumes are potentially an option, but become tricky. You tagged this questions with both <a href="/questions/tagged/docker" class="post-tag" title="show questions tagged 'docker'" rel="tag" aria-labelledby="docker-container">docker</a> and <a href="/questions/tagged/kubernetes" class="post-tag" title="show questions tagged 'kubernetes'" rel="tag" aria-labelledby="kubernetes-container">kubernetes</a>. Both have ways to allocate storage and mount them into containers. As far as your application code is concerned, the mounted volume is just a filesystem path and it can read and write files there normally. There are potential problems with filesystem permissions in both environments, and if you run multiple copies of your application (especially easy in Kubernetes) there are risks of the replicas trying to access the same files at the same time. The storage types that are easier to get in Kubernetes can't be used on multiple nodes at the same time, which limits the utility of trying to share a single volume.</p>
<p>(More specifically in Kubernetes, if you must use local storage, I'd recommend a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="nofollow noreferrer">StatefulSet</a> to manage this. This will automatically create the storage and attach it to the Pods if correctly configured. You probably shouldn't manually create a PersistentVolumeClaim; you very likely shouldn't manually create a PersistentVolume; you almost definitely shouldn't use <code>hostPath:</code> type storage.)</p>
<hr />
<p>For logs, you should configure your application to write logs normally to its stdout. <code>docker logs</code> or <code>kubectl logs</code> will be able to retrieve the logs, and most log-management systems can be straightforwardly configured to collect the Docker or Kubernetes container logs. Even if you're running your log collector in a container (or Kubernetes DaemonSet) it will often have access to the host's or node's log directory. You can also configure plain Docker to send the logs somewhere else, but only if the application sends the logs to stdout.</p>
<p>If the per-action log messages need to be visible to the end user then you'll somehow need to collect them within the application. This isn't specific to the container technology, you'd have to do this even if you were running the application directly on a server.</p>
|
<p>There is this example project on <a href="https://github.com/asynkron/protoactor-grains-tutorial" rel="nofollow noreferrer">GitHub</a> that I'm trying to deploy on local Kubernetes cluster (k3d). The developers of Proto.Actor described the k8s deployment pretty much here in the <a href="https://proto.actor/docs/cluster/getting-started-kubernetes/" rel="nofollow noreferrer">official docs</a>. The problem is that the documentation is deploying on Azure Kubernetes Service whereas I want to deploy on local k8s cluster (k3d).</p>
<p>As much as I understand the steps, it's as following:</p>
<ol>
<li>Build docker images for both projects in the solution [I was able to do that step]</li>
</ol>
<pre><code>docker build -f ./ProtoClusterTutorial/Dockerfile . -t proto-cluster-tutorial:1.0.0`
docker build -f ./SmartBulbSimulatorApp/Dockerfile . -t smart-bulb-simulator-app:1.0.0`
</code></pre>
<ol start="2">
<li>Push the docker images into a repository</li>
</ol>
<p>Push the docker images where? Local k3d repository? Docker Hub? GitHub Container Registry?</p>
<p>Next question, the file <code>values.yaml</code> in the Helm chart directory consists of a <code>repository</code> field (<a href="https://github.com/asynkron/protoactor-grains-tutorial/blob/main/chart-tutorial/values.yaml#L5" rel="nofollow noreferrer">here</a>). If I push the docker image to ghcr or Docker hub, I'll just put the image link there, but what if I have to use the k3d local repository? What link should I use in that case?</p>
<p>The next question is how does <code>kubectl get pods</code> know that it has to display the k3d cluster pods and not the Docker Desktop Kubernetes which I have enabled?</p>
<p>I would be grateful if you briefly list the steps that I have to accomplish using k3d, Helm chart and kubectl.</p>
| <p>It doesn't matter where you push your images to, as long as it's a valid implementation of the <a href="https://github.com/opencontainers/distribution-spec" rel="nofollow noreferrer">OCI Distribution Spec</a> (a valid container registry). All the registry options you've listed would work, just pick the one that fits your needs.</p>
<p>Regarding the <code>values.yaml</code> file, the <code>repository</code> field is the url to the repository, depending on which container registry you decide to use (<code>docker.io</code> for Docker Hub, <code>ghcr.io</code> for Github Container Registry, etc.) Please check the docs of the container registry you choose for specific instructions of setting up repositories, building, pushing and pulling.</p>
<p><code>kubectl</code> gets it's configuration from a config file, which can contain multiple clusters. The k3d install script is most likely adding the new cluster as an entry to the config file and setting it as the new context for kubectl.</p>
<p>Back to your problem. A simpler solution might be to import the images in k3d manually as noted <a href="https://stackoverflow.com/a/72120733/13415624">in this answer</a>. I haven't used k3d myself so I can't guarantee this method will work, but it seems like a much simpler approach that can save you a lot of headache.</p>
<p>In case, however, you want to get your hands dirty and learn more about container repositories, helm and k8s, here's an example scenario with a repository hosted on <code>localhost:5000</code> and I strongly encourage you to check the relevant <code>docker/helm/kubernetes</code> docs for each step</p>
<ol>
<li>Login to your registry</li>
</ol>
<pre><code>docker login localhost:5000
</code></pre>
<ol start="2">
<li>Build the images</li>
</ol>
<pre><code># Note how the image tag includes the repository url where they'll be pushed to
docker build -f ./ProtoClusterTutorial/Dockerfile . -t localhost:5000/proto-cluster-tutorial:1.0.0
docker build -f ./SmartBulbSimulatorApp/Dockerfile . -t localhost:5000/smart-bulb-simulator-app:1.0.0
</code></pre>
<ol start="3">
<li>Push the images</li>
</ol>
<pre><code>docker push localhost:5000/proto-cluster-tutorial:1.0.0
docker push localhost:5000/smart-bulb-simulator-app:1.0.0
</code></pre>
<ol start="4">
<li>Edit the <code>values.yaml</code></li>
</ol>
<pre><code> image:
repository: localhost:5000/proto-cluster-tutorial
pullPolicy: IfNotPresent
tag: "1.0.0"
</code></pre>
<ol start="5">
<li>Run <code>helm install</code> with the modified <code>values.yaml</code> file</li>
</ol>
<p>One thing I've noticed is that guide's helm chart does not include a field for <code>imagePullSecrets</code> since they are using Azure Container Registry and hosting the cluster on Azure which handles the authentication automatically. This means that private repositories will not work with the chart in your scenario and you'll have to edit the helm chart and subsequently the <code>values.yaml</code> to make it work. You can read more about <code>imagePullSecrets</code> <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">here</a></p>
|
<p>I am trying to apply kubernetes code that was given for me. I am getting an error:</p>
<pre><code>Error from server (BadRequest): error when creating "infra/ecr-creds/rendered.yml": Secret in version "v1" cannot be handled as a Secret: illegal base64 data at input byte 0
</code></pre>
<p>From what I understand, the error is coming from this:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: aws-ecr-creds-secret
namespace: whatever
labels:
app.kubernetes.io/name: aws-multi-ecr-credentials
helm.sh/chart: aws-multi-ecr-credentials-1.4.3
app.kubernetes.io/instance: aws-ecr-creds-novisign
app.kubernetes.io/version: "1.4.3"
app.kubernetes.io/managed-by: Helm
type: Opaque
data:
AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
</code></pre>
<p>I tried to set the <code>AWS_ACCESS_KEY_ID</code> variable using:</p>
<pre><code>export AWS_ACCESS_KEY_ID=$(echo "...code..." | base64)
</code></pre>
<p>but it doesn't work. What is the proper way to do it?</p>
| <p>You should use <code>stringData</code>:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: aws-ecr-creds-secret
namespace: whatever
labels:
app.kubernetes.io/name: aws-multi-ecr-credentials
helm.sh/chart: aws-multi-ecr-credentials-1.4.3
app.kubernetes.io/instance: aws-ecr-creds-novisign
app.kubernetes.io/version: "1.4.3"
app.kubernetes.io/managed-by: Helm
type: Opaque
stringData:
AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
</code></pre>
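<p>With <code>stringData</code> the exported values must be the plain strings (no <code>base64</code>), and the <code>${...}</code> placeholders still have to be substituted by whatever renders the manifest. A minimal sketch, assuming you render it yourself with <code>envsubst</code> and the file is called <code>secret.yaml</code>:</p>
<pre><code>export AWS_ACCESS_KEY_ID='your-plain-access-key'
export AWS_SECRET_ACCESS_KEY='your-plain-secret-key'
envsubst < secret.yaml | kubectl apply -n whatever -f -
</code></pre>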
<p>Link references:<br />
<a href="https://hub.docker.com/repository/docker/cuongquocvn/aws-cli-kubectl" rel="nofollow noreferrer">https://hub.docker.com/repository/docker/cuongquocvn/aws-cli-kubectl</a></p>
|
<p>I have a cluster with many namespaces.
I'm trying to log data from a specific namespace in my Openshift cluster but it is logging the data from all the namespaces. I tried to follow the documentation of the Openshift regarding logging, but there is no mention of scoping the log data.</p>
<p>I followed this documentation:
<a href="https://docs.openshift.com/container-platform/4.7/logging/cluster-logging.html" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/4.7/logging/cluster-logging.html</a></p>
<p>I'm using <code>fluentd</code> as the log collector.</p>
| <p>As Cluster Logging on OpenShift, you can transfer logs in namespaces or Pods matched label you select.</p>
<p>The sample CR like <code>Forward logs in my-project namespace to Elasticserach which is deployed by Cluster Logging</code> could be as follows:</p>
<pre><code>apiVersion: "logging.openshift.io/v1"
kind: ClusterLogForwarder
metadata:
name: instance
namespace: openshift-logging
spec:
inputs:
- name: my-app-logs
application:
namespaces:
- my-project
pipelines:
- name: my-app
inputRefs:
- my-app-logs
outputRefs:
- default
</code></pre>
<p>You can customize the <code>inputs</code> field as you want. It can also select specific Pods using a <code>matchLabels</code> expression. *2</p>
<p>The <code>default</code> output means sending logs to the <code>default</code> Elasticsearch managed by Cluster Logging.</p>
<p>*1: <a href="https://docs.openshift.com/container-platform/4.11/logging/cluster-logging-external.html" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/4.11/logging/cluster-logging-external.html</a></p>
<p>*2: <a href="https://docs.openshift.com/container-platform/4.7/logging/cluster-logging-external.html#cluster-logging-collector-log-forward-logs-from-application-pods_cluster-logging-external" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/4.7/logging/cluster-logging-external.html#cluster-logging-collector-log-forward-logs-from-application-pods_cluster-logging-external</a></p>
|
<h2>Summary:</h2>
<p>I'm trying to use KNative eventing to expose a simple web application via a Kafka topic. The server should be able to handle multiple requests concurrently, but, unfortunately, it seems to handle them sequentially when I send them via Kafka. When making simple HTTP requests directly to the service, though, the concurrency is working fine.</p>
<h2>Setup:</h2>
<p>The setup only uses a <code>KafkaSource</code> which points to my KNative <code>Service</code>, and is using a Kafka instance deployed using the <code>bitnami/kafka</code> helm chart.</p>
<p>The version I'm using is <code>v1.7.1</code> for KNative serving and eventing, and <code>v1.7.0</code> for the Kafka eventing integration (from <code>knative-sandbox/eventing-kafka</code>).</p>
<h2>Code:</h2>
<p>The service I'm trying to deploy is a python FastAPI application that, upon receiving a request (with an ID of sorts), logs the received request, sleeps for 5 seconds, then returns a dummy message:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
from fastapi import FastAPI
from pydantic import BaseModel
import logging
logging.basicConfig(
format="%(asctime)s %(levelname)-8s %(message)s",
level=logging.DEBUG, datefmt="%Y-%m-%d %H:%M:%S",
)
app = FastAPI()
class Item(BaseModel):
id: str
@app.post("/")
async def root(item: Item):
logging.debug(f"Request received with ID: {item.id}")
await asyncio.sleep(5)
logging.debug(f"Request complete for ID: {item.id}")
return {"message": "Hello World"}
</code></pre>
<p>The app is served using uvicorn:</p>
<pre class="lang-bash prettyprint-override"><code>FROM python:3.9-slim
RUN pip install fastapi uvicorn
ADD main.py .
ENTRYPOINT uvicorn --host 0.0.0.0 --port 8877 main:app
</code></pre>
<p>The service deployment spec shows that I'm setting a <code>containerConcurrency</code> value that's greater than <code>1</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: concurrency-test
spec:
template:
metadata:
annotations:
autoscaling.knative.dev/class: "kpa.autoscaling.knative.dev"
autoscaling.knative.dev/metric: "concurrency"
autoscaling.knative.dev/target: "5"
spec:
containerConcurrency: 5
containers:
- name: app
image: dev.local/concurrency-test:latest
imagePullPolicy: IfNotPresent
ports:
- containerPort: 8877
---
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
name: concurrency-test
spec:
consumerGroup: concurrency-test-group
bootstrapServers:
- kafka.default.svc.cluster.local:9092
topics:
- concurrency-test-requests
sink:
ref:
apiVersion: serving.knative.dev/v1
kind: Service
name: concurrency-test
</code></pre>
<blockquote>
<p>Note: I also tried with <code>spec.consumers: 2</code> in the <code>KafkaSource</code> but the behavior was the same.</p>
</blockquote>
<h2>Logs:</h2>
<p>When sending two concurrent requests to the service directly with HTTP, the logs look like this (both requests finish within 6 seconds, so concurrency is in effect):</p>
<pre><code>2022-09-12 02:14:36 DEBUG Request received with ID: abc
2022-09-12 02:14:37 DEBUG Request received with ID: def
2022-09-12 02:14:41 DEBUG Request complete for ID: abc
INFO: 10.42.0.7:0 - "POST / HTTP/1.1" 200 OK
2022-09-12 02:14:42 DEBUG Request complete for ID: def
INFO: 10.42.0.7:0 - "POST / HTTP/1.1" 200 OK
</code></pre>
<p>When sending requests via Kafka, though, the logs look like this (the requests are being processed one after the other):</p>
<pre><code>2022-09-12 02:14:55 DEBUG Request received with ID: 111
2022-09-12 02:15:00 DEBUG Request complete for ID: 111
INFO: 10.42.0.7:0 - "POST / HTTP/1.1" 200 OK
2022-09-12 02:15:00 DEBUG Request received with ID: 222
2022-09-12 02:15:05 DEBUG Request complete for ID: 222
INFO: 10.42.0.7:0 - "POST / HTTP/1.1" 200 OK
</code></pre>
<p>Please let me know if this sequential request handling is the expected behavior when using eventing with just a <code>KafkaSource</code>, and I hope there are ways for enabling concurrency in this setup.</p>
| <p>Kafka provides ordering within a partition (the implementation is a distributed log). You may need to increase the number of partitions on your Kafka topic to achieve higher parallelism; you may also be able to use the <a href="https://github.com/knative-sandbox/eventing-kafka/blob/main/pkg/apis/sources/v1beta1/kafka_types.go#L63" rel="nofollow noreferrer"><code>spec.consumers</code> value</a> to increase the throughput (untested).</p>
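<p>As a sketch, you could add partitions with the Kafka CLI that ships in the bitnami/kafka image (the pod name <code>kafka-0</code> and the partition count of 4 are assumptions for illustration; the topic and bootstrap address are taken from your manifests):</p>
<pre class="lang-bash prettyprint-override"><code>kubectl exec -it kafka-0 -- /opt/bitnami/kafka/bin/kafka-topics.sh \
  --bootstrap-server kafka.default.svc.cluster.local:9092 \
  --alter --topic concurrency-test-requests --partitions 4
</code></pre>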
<p>I'd also encourage filing an issue <a href="https://github.com/knative-sandbox/eventing-kafka/issues/new" rel="nofollow noreferrer">in the <code>eventing-kafka</code> repo</a> with your problem and any additional knobs if there is other behavior you're looking for.</p>
|
<p>I have started to learn GitOps ArgoCD. I have one basic doubt. I am unable to test ArgoCD because I do not have any Cluster. It will be so kind of you if you can clear my doubts.</p>
<ol>
<li>As an example, currently I am running my deployment with the <code>test:1</code> Docker image. Then, using Jenkins, I push <code>test:2</code> and put <code>test:2</code> in place of <code>test:1</code>; ArgoCD detects the change and applies the new image in the cluster.
But what if I had instead used <code>test:latest</code>, and then used Jenkins to push a new image with the same name <code>test:latest</code>? What will happen now? Will ArgoCD deploy the image (the name and tag of the new and previous image are the same)?</li>
</ol>
| <p>If you need automation, you can consider <a href="https://argocd-image-updater.readthedocs.io/en/latest/" rel="nofollow noreferrer"><strong>Argo CD Image Updater</strong></a>, which does include in its <a href="https://argocd-image-updater.readthedocs.io/en/latest/basics/update-strategies/" rel="nofollow noreferrer">update strategies</a>:</p>
<p><code>latest/newest-build</code> - Update to the most recently built image found in a registry</p>
<blockquote>
<p>It is important to understand, that this strategy will consider the build date of the image, and not the date of when the image was tagged or pushed to the registry.</p>
<p>If you are tagging the same image with multiple tags, these tags will have the same build date.<br />
In this case, Argo CD Image Updater will sort the tag names lexically descending and pick the last tag name of that list.</p>
<p>For example, consider an image that was tagged with the <code>f33bacd</code>, <code>dev</code> and <code>latest</code> tags.<br />
You might want to have the <code>f33bacd</code> tag set for your application, but Image Updater will pick the <code>latest</code> tag name.</p>
</blockquote>
<pre class="lang-yaml prettyprint-override"><code>argocd-image-updater.argoproj.io/image-list: myimage=some/image
argocd-image-updater.argoproj.io/myimage.update-strategy: latest
</code></pre>
|
<p>I am new to the storage concepts in Kubernetes.
I need some common persistent storage in the Kubernetes cluster that applications outside the cluster, in our on-prem environment, can also write to.</p>
<p>So my question is: can we have persistent storage (a file system) in a Kubernetes cluster that is shared among different pods, while applications outside the Kubernetes cluster can also write to it? If yes, what is the proper architecture for it, and how can I access that persistent storage from outside of the cluster?</p>
<p>If it's not possible, is there a better way to achieve my need for a common database file system shared between pods in the cluster and applications outside the cluster?</p>
| <p>Having a filesystem shared inside the cluster between multiple pods is doable with any persistent volume marked as ReadWriteMany, such as <a href="https://kubernetes.io/docs/concepts/storage/volumes/#nfs" rel="nofollow noreferrer">NFS</a>. For NFS, however, you will need a Kubernetes "addon", specific to your infrastructure, that manages the creation and deletion of the volumes.</p>
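<p>A minimal static sketch (the NFS server address and export path are placeholders you would replace with your own) could look like this:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.0.0.10        # placeholder: your NFS server
    path: /exports/shared    # placeholder: your export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi
</code></pre>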
<p>I don't know how it will react if it is modified from outside the cluster, but if what you need is just a database shared between the cluster and an outside application, then it may be easier to run a regular database on a machine outside the cluster.</p>
<p>In fact, you mostly want a distributed database on a Kubernetes cluster for high availability, not performance, and most implementations seem to favor local storage with synchronization implemented inside the application (leader election and so on) over shared volumes.</p>
<p>If you want performances, you may take a look at <a href="https://en.wikipedia.org/wiki/Shard_(database_architecture)" rel="nofollow noreferrer">sharding</a> your database.</p>
|
<p>Recently we faced an issue in our AKS cluster where node memory usage grew because the pods' memory request was high (request 2Gi, limit 2Gi), which increased the node count. So, in order to reduce the node count, we reduced the memory request to 256Mi and set the limit to 2Gi. After this we noticed some strange behaviour in our cluster.</p>
<ol>
<li>There is a big difference between the request and limit percentages of our resources.<br />
More clearly, the limit values show 602% and 478% of the node's actual capacity, far above the request percentages.
Is it normal, or is it harmful, to keep this difference between requests and limits?</li>
</ol>
<blockquote>
<pre><code> Resource Requests Limits
-------- -------- ------
cpu 1895m (99%) 11450m (602%)
memory 3971Mi (86%) 21830Mi (478%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
attachable-volumes-azure-disk 0 0
</code></pre>
</blockquote>
<ol start="2">
<li>We noticed that our node's memory consumption shows more than 100%, which is strange behaviour: how can a node consume more memory than it actually has?</li>
</ol>
<blockquote>
<pre><code>NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
aks-nodepoolx-xxxxxxxx-vmss00000x 151m 7% 5318Mi 116%
</code></pre>
</blockquote>
<pre><code>NAME READY STATUS RESTARTS AGE
mymobile-mobile-xxxxx-ddvd6 2/2 Running 0 151m
myappsvc-xxxxxxxxxx-2t6gz 2/2 Running 0 5h3m
myappsvc-xxxxxxxxxx-4xnsh 0/2 Evicted 0 4h38m
myappsvc-xxxxxxxxxx-5b5mb 0/2 Evicted 0 4h28m
myappsvc-xxxxxxxxxx-5f52g 0/2 Evicted 0 4h19m
myappsvc-xxxxxxxxxx-5f8rz 0/2 Evicted 0 4h31m
myappsvc-xxxxxxxxxx-66lc9 0/2 Evicted 0 4h26m
myappsvc-xxxxxxxxxx-8cnfb 0/2 Evicted 0 4h27m
myappsvc-xxxxxxxxxx-b9f9h 0/2 Evicted 0 4h20m
myappsvc-xxxxxxxxxx-dfx9m 0/2 Evicted 0 4h30m
myappsvc-xxxxxxxxxx-fpwg9 0/2 Evicted 0 4h25m
myappsvc-xxxxxxxxxx-kclt8 0/2 Evicted 0 4h22m
myappsvc-xxxxxxxxxx-kzmxw 0/2 Evicted 0 4h33m
myappsvc-xxxxxxxxxx-lrrnr 2/2 Running 0 4h18m
myappsvc-xxxxxxxxxx-lx4bn 0/2 Evicted 0 4h32m
myappsvc-xxxxxxxxxx-nsc8t 0/2 Evicted 0 4h29m
myappsvc-xxxxxxxxxx-qmlrj 0/2 Evicted 0 4h24m
myappsvc-xxxxxxxxxx-qr75w 0/2 Evicted 0 4h27m
myappsvc-xxxxxxxxxx-tf8bn 0/2 Evicted 0 4h20m
myappsvc-xxxxxxxxxx-vfcdv 0/2 Evicted 0 4h23m
myappsvc-xxxxxxxxxx-vltgw 0/2 Evicted 0 4h31m
myappsvc-xxxxxxxxxx-xhqtb 0/2 Evicted 0 4h22m
</code></pre>
| <p>Here are lots of talks about <strong>removing CPU limits</strong> on K8S</p>
<ul>
<li><a href="https://home.robusta.dev/blog/stop-using-cpu-limits/" rel="noreferrer">For the love of god, stop using CPU limits on Kubernetes </a></li>
<li><a href="https://community.ibm.com/community/user/aiops/blogs/dina-henderson/2022/06/29/kubernetes-cpu-throttling-the-silent-killer-of-res" rel="noreferrer">Kubernetes CPU Throttling: The Silent Killer of Response Time</a></li>
<li><a href="https://erickhun.com/posts/kubernetes-faster-services-no-cpu-limits/" rel="noreferrer">Kubernetes: Make your services faster by removing CPU limits</a></li>
<li><a href="https://medium.com/omio-engineering/cpu-limits-and-aggressive-throttling-in-kubernetes-c5b20bd8a718" rel="noreferrer">CPU limits and aggressive throttling in Kubernetes</a></li>
</ul>
<hr />
<p>Best practices for <strong>CPU limits and requests</strong> on Kubernetes</p>
<ul>
<li>Use CPU requests for everything and make sure they are accurate</li>
<li>Do <strong>NOT</strong> use CPU limits.</li>
</ul>
<p>Best practices for <strong>Memory limits and requests</strong> on Kubernetes</p>
<ul>
<li>Use memory limits and memory requests</li>
<li>Set memory limit= memory request</li>
</ul>
<hr />
<p>Checking the <strong>throttling rate of your pods</strong></p>
<p>Just log in to the pod and run <code>cat /sys/fs/cgroup/cpu,cpuacct/kubepods/{PODID}/{CONTAINERID}/cpu.stat</code>.</p>
<ul>
<li><code>nr_periods</code> — Total schedule period</li>
<li><code>nr_throttled</code> — Total throttled period out of nr_periods</li>
<li><code>throttled_time</code> — Total throttled time in ns</li>
</ul>
|
<p>I'm using Terraform for deploying <strong>cert-manager</strong> and <strong>ambassador</strong>.</p>
<p>I'm trying to understand how to use <strong>nodeSelector</strong> in a Terraform deployment and assign the Helm charts I'm using for both services to a specific node group I have (using a label key and value for the assignment).</p>
<pre><code>resource "helm_release" "cert_manager" {
namespace = var.cert_manager_namespace
name = "cert-manager"
repository = "https://charts.jetstack.io"
chart = "cert-manager"
version = var.cert_manager_release_version
create_namespace = true
count = var.enable
set {
name = "controller."
}
set {
name = "controller.nodeselector"
value = ""
}
set {
name = "installCRDs" # Should only happen on the first attempt
value = "true"
}
set {
name = "securityContext.enabled"
value = "true"
}
</code></pre>
<p>The example above is my attempt to assign it.
Any ideas?</p>
<p>Thanks!!</p>
| <p>If Your nodeSelector location in values.yaml looks like this:</p>
<pre><code>controller:
nodeSelector: {}
</code></pre>
<p>You should be setting it up this way:</p>
<pre><code>set {
name = "controller.nodeSelector.dedicated"
value = "workloads"
}
</code></pre>
<p>Where <strong>dedicated</strong> is the key and <strong>workloads</strong> is the value.</p>
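<p>Alternatively (a sketch assuming the same <code>controller.nodeSelector</code> layout in values.yaml), you can pass the whole structure through the <code>values</code> argument with <code>yamlencode</code> instead of individual <code>set</code> blocks:</p>
<pre><code>resource "helm_release" "cert_manager" {
  # ... other arguments as above ...
  values = [
    yamlencode({
      controller = {
        nodeSelector = {
          dedicated = "workloads"
        }
      }
    })
  ]
}
</code></pre>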
|
<p>I am new to Kubernetes and stuck on this issue. I was trying to renew a Let's Encrypt SSL certificate, but when I try to get the certificate by running the following command</p>
<pre><code>kubectl get certificate
</code></pre>
<p>System throwing this exception</p>
<pre><code>Error from server: conversion webhook for cert-manager.io/v1alpha2, Kind=Certificate failed: Post https://cert-manager-webhook.default.svc:443/convert?timeout=30s: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "cert-manager-webhook-ca")
</code></pre>
<p>I have checked the pods also</p>
<p><a href="https://i.stack.imgur.com/R94Wn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/R94Wn.png" alt="enter image description here" /></a></p>
<p>The "cert-manager-webhook" is in running state. When I check logs of this pod, I get the following response</p>
<p><a href="https://i.stack.imgur.com/BF0AR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BF0AR.png" alt="enter image description here" /></a></p>
<p>I have also tried to apply cluster-issuer after deleting it but face same issue</p>
<pre><code>kubectl apply -f cluster-issuer.yaml
</code></pre>
<p><a href="https://i.stack.imgur.com/hzNQa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hzNQa.png" alt="enter image description here" /></a></p>
<p>I have also done R&D on this but could not find any suitable solution. What's the issue here? Can someone please help me with this? Thanks.</p>
| <p>If you are using a webhook, check whether you have injected the CA. If not, you can do it with the <code>cert-manager.io/inject-ca-from</code> annotation:</p>
<pre><code>apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
...
annotations:
cert-manager.io/inject-ca-from: "<namespace>/<certificate_name>"
</code></pre>
|
<p>I'm running a mongoDB (5.0.12) instance as a kubernetes pod. Suddenly the pod is failing and I need some help to understand the logs:</p>
<pre><code>{"t":{"$date":"2022-09-13T18:39:51.104+00:00"},"s":"E", "c":"STORAGE", "id":22435, "ctx":"AuthorizationManager-1","msg":"WiredTiger error","attr":{"error":1,"message":"[1663094391:104664][1:0x7fc5224cc700], file:index-9--3195476868760592993.wt, WT_SESSION.open_cursor: __posix_open_file, 808: /data/db/index-9--3195476868760592993.wt: handle-open: open: Operation not permitted"}}
{"t":{"$date":"2022-09-13T18:39:51.104+00:00"},"s":"F", "c":"STORAGE", "id":50882, "ctx":"AuthorizationManager-1","msg":"Failed to open WiredTiger cursor. This may be due to data corruption","attr":{"uri":"table:index-9--3195476868760592993","config":"overwrite=false","error":{"code":8,"codeName":"UnknownError","errmsg":"1: Operation not permitted"},"message":"Please read the documentation for starting MongoDB with --repair here: http://dochub.mongodb.org/core/repair"}}
{"t":{"$date":"2022-09-13T18:39:51.104+00:00"},"s":"F", "c":"-", "id":23091, "ctx":"AuthorizationManager-1","msg":"Fatal assertion","attr":{"msgid":50882,"file":"src/mongo/db/storage/wiredtiger/wiredtiger_session_cache.cpp","line":109}}
{"t":{"$date":"2022-09-13T18:39:51.104+00:00"},"s":"F", "c":"-", "id":23092, "ctx":"AuthorizationManager-1","msg":"\n\n***aborting after fassert() failure\n\n"}
</code></pre>
<p>So why do I get <code>Operation not permitted</code>? I already ran <code>mongod --repair</code>, but the error still occurs.</p>
<p>This is how the pod is deployed:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mongodb
spec:
replicas: 1
selector:
matchLabels:
app: mongodb
strategy:
type: Recreate
template:
metadata:
labels:
app: mongodb
spec:
hostname: mongodb
# securityContext:
# runAsUser: 999
# runAsGroup: 3000
# fsGroup: 2000
volumes:
- name: data
persistentVolumeClaim:
claimName: data
containers:
- name: mongodb
image: mongo:5.0.12
args: ["--auth", "--dbpath", "/data/db"]
imagePullPolicy: IfNotPresent
ports:
- containerPort: 27017
volumeMounts:
- mountPath: /data/db
name: data
# securityContext:
# allowPrivilegeEscalation: false
</code></pre>
<p><em>Update</em></p>
<p>The PVC:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
</code></pre>
| <p>You can try checking the permissions for that file before execution:</p>
<pre><code>ls -l
</code></pre>
<p>Then, using <strong>chmod</strong>, you can try changing the permissions and executing it again.</p>
<p>OR</p>
<p>You can refer here, this might help you:</p>
<blockquote>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/security-context/</a></p>
</blockquote>
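<p>If the files under <code>/data/db</code> ended up owned by root, one hedged option is to set the Pod <code>securityContext</code> you already have commented out so the volume is writable by the mongodb user (UID/GID 999 in the official image; verify this for your image tag):</p>
<pre><code>spec:
  securityContext:
    fsGroup: 999        # assumption: GID of the mongodb user in the official image
  containers:
  - name: mongodb
    image: mongo:5.0.12
    securityContext:
      runAsUser: 999    # assumption: UID of the mongodb user
      allowPrivilegeEscalation: false
</code></pre>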
|
<p>I cannot really understand the purpose and usage of topologyKey in pod affinity. The documentations says:</p>
<blockquote>
<p><strong>topologyKey</strong> is the key of node labels. If two Nodes are labelled with this key and have identical values for that label, the scheduler treats both Nodes as being in the same topology. The scheduler tries to place a balanced number of Pods into each topology domain.</p>
</blockquote>
<p>An example usage is as follows:</p>
<pre><code>kind: Pod
metadata:
name: with-pod-affinity
spec:
affinity:
podAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: security
operator: In
values:
- S1
topologyKey: topology.kubernetes.io/zone
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: security
operator: In
values:
- S2
topologyKey: topology.kubernetes.io/zone
containers:
- name: with-pod-affinity
image: k8s.gcr.io/pause:2.0
</code></pre>
<p>So where does <strong>topology.kubernetes.io/zone</strong> come from? How can I know what value should I provide for this <strong>topologyKey</strong> field in my yaml file, and what happens if I just put a random string here? Should I label my node and use the key of this label in topologyKey field?</p>
<p>Thank you.</p>
| <p>Required as part of a <strong>affinity.podAffinity</strong> or <strong>affinity.podAntiAffinity</strong> <em>spec</em> section, the <strong>topologyKey</strong> field is used by the scheduler to determine the domain for Pod placement.</p>
<p>The topologyKey <em>domain</em> determines where the Pods being scheduled are placed relative to the Pods identified by the <strong>...labelSelector.matchExpressions</strong> section.</p>
<p>With <strong>podAffinity</strong>, a Pod <em>will</em> be scheduled in the same domain as the Pods that match the expression.</p>
<p>Two common label options are <strong>topology.kubernetes.io/zone</strong> and <strong>kubernetes.io/hostname</strong>. Others can be found in the Kubernetes <a href="https://kubernetes.io/docs/reference/labels-annotations-taints/" rel="noreferrer">Well-Known Labels, Annotations and Taints</a> documentation.</p>
<ul>
<li><strong>topology.kubernetes.io/zone</strong>: Pods will be scheduled <em>in the same zone</em> as a Pod that matches the expression.</li>
<li><strong>kubernetes.io/hostname</strong>: Pods will be scheduled <em>on the same hostname</em> as a Pod that matches the expression.</li>
</ul>
<p>For <strong>podAntiAffinity</strong>, the opposite is true: Pods <em>will not</em> be scheduled in the same domain as the Pods that match the expression.</p>
<p>The Kubernetes <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity" rel="noreferrer"><strong>Assigning Pods to Nodes</strong> documentation (Inter-pod affinity and anti-affinity section)</a> provides additional explanation.</p>
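<p>To find out which topology labels (and values) your nodes already carry, and therefore what you can safely use as a <strong>topologyKey</strong>, you can inspect the node labels, for example:</p>
<pre><code># show every label on every node
kubectl get nodes --show-labels

# show the zone and hostname labels as columns
kubectl get nodes -L topology.kubernetes.io/zone -L kubernetes.io/hostname
</code></pre>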
|
<p>We have upgraded our AKS to 1.24.3, and since we have, we are having an issue with containers refusing connection.</p>
<p>There have been no changes to the deployed microservices as part of the AKS upgrade, and the issue is occurring at random intervals.</p>
<p>From what I can see the container is returning the error - The client closed the connection.</p>
<p>What I cannot seem to be able to trace is, the connections, within AKS, and the issue is across all services.</p>
<p>Has anyone experienced anything similar and is able to provide any advice?</p>
| <p>I hit a similar issue upgrading from 1.23.5 to 1.24.3; the problem was a configuration mismatch between the Kubernetes load balancer health probe path and the ingress-nginx probe endpoints.</p>
<p>Adding this annotation to my ingress-nginx Helm install command corrected the problem: <code>--set controller.service.annotations."service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path"=/healthz</code></p>
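<p>For reference, a sketch of the full command (release name, chart repository, and namespace are assumptions for illustration; note the backslash-escaped dots so Helm treats the annotation name as a single key):</p>
<pre><code>helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
</code></pre>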
|
<p>I'm trying to get an ingress controller working in Minikube and am following the steps in the K8s documentation <a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="noreferrer">here</a>, but am seeing a different result in that the IP address for the ingress controller is different than that for Minikube (the example seems to indicate they should be the same):</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
example-ingress hello-world.info 10.0.2.15 80 12m
$ minikube ip
192.168.99.101
</code></pre>
<p>When I try to connect to the Minikube IP address (using the address directly vs. adding it to my local hosts file), I'm getting a "Not found" response from NGINX:</p>
<pre class="lang-sh prettyprint-override"><code>$ curl http://`minikube ip`/
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>openresty/1.15.8.1</center>
</body>
</html>
</code></pre>
<p>When I try to connect to the IP address associated with the ingress controller, it just hangs.</p>
<p>Should I expect the addresses to be the same as the K8s doc indicates?</p>
<p>Some additional information:</p>
<pre><code>$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
minikube Ready master 2d23h v1.16.0 10.0.2.15 <none> Buildroot 2018.05.3 4.15.0 docker://18.9.9
$ kubectl get ingresses example-ingress -o yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{"nginx.ingress.kubernetes.io/rewrite-target":"/$1"},"name":"example-ingress","namespace":"default"},"spec":{"rules":[{"host":"hello-world.info","http":{"paths":[{"backend":{"serviceName":"web","servicePort":8080},"path":"/"}]}}]}}
nginx.ingress.kubernetes.io/rewrite-target: /$1
creationTimestamp: "2019-10-28T15:36:57Z"
generation: 1
name: example-ingress
namespace: default
resourceVersion: "25609"
selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/example-ingress
uid: 5e96c378-fbb1-4e8f-9738-3693cbce7d9b
spec:
rules:
- host: hello-world.info
http:
paths:
- backend:
serviceName: web
servicePort: 8080
path: /
status:
loadBalancer:
ingress:
- ip: 10.0.2.15
</code></pre>
| <p>Here’s what worked for me:</p>
<ol>
<li><p>minikube start</p>
</li>
<li><p>minikube addons enable ingress</p>
</li>
<li><p>minikube addons enable ingress-dns</p>
</li>
<li><p>Wait until the ingress-nginx-controller-XXXX pod is up and running, using <code>kubectl get pods -n ingress-nginx</code></p>
</li>
<li><p>Create an ingress using the K8s <a href="https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/service/networking/example-ingress.yaml" rel="noreferrer">example yaml file</a></p>
</li>
<li><p>Update the service section to point to the NodePort Service that you already created</p>
</li>
<li><p>Append
<code>127.0.0.1 hello-world.info</code>
to your /etc/hosts file on MacOS (<strong>NOTE:
Do NOT use the Minikube IP</strong>)</p>
</li>
<li><p>Run <code>minikube tunnel</code> ( Keep the window open. After you entered the password there will be no more messages, and the cursor just blinks)</p>
</li>
<li><p>Hit <code>hello-world.info</code> (or whatever host you configured in the yaml file) in a browser and it should work; a terminal check is shown after this list</p>
</li>
</ol>
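<p>To verify from the terminal instead of a browser (assuming the hosts entry from step 7), a quick check could be:</p>
<pre class="lang-sh prettyprint-override"><code>curl http://hello-world.info/
# or, without touching /etc/hosts:
curl --resolve hello-world.info:80:127.0.0.1 http://hello-world.info/
</code></pre>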
|
<p>I have <code>Chart.yaml</code> as:</p>
<pre><code>dependencies:
- name: mysql
version: "5.0.9"
repository: "https://charts.bitnami.com/bitnami"
alias: a
- name: mysql
version: "5.0.9"
repository: "https://charts.bitnami.com/bitnami"
alias: b
</code></pre>
<p>and <code>values.yaml</code> as</p>
<pre><code>mysql:
somename: Overriden
somename2: NotOverriden
a:
somename: A
b:
somename: B
</code></pre>
<p>but Helm is only reading the values from <code>a:</code> and <code>b:</code>. I would expect the values from <code>mysql:</code> to be applied to both <code>a:</code> and <code>b:</code> and overridden where needed.</p>
<p>Is this possible at all, or is there some other way?</p>
| <p>You could use yaml anchors and aliases.</p>
<pre><code>mysql: &mysql
  somename: Overriden
  somename2: NotOverriden
a:
  <<: *mysql
  somename: A
b:
  <<: *mysql
  somename: B
</code></pre>
|
<p>I know, for example, that you can get the lastUpdateTime of a Deployment with <strong>kubectl</strong>:</p>
<pre><code>kubectl get deploy <deployment-name> -o jsonpath={.status.conditions[1].lastUpdateTime}
</code></pre>
<p>Or via <strong>client-go</strong>:</p>
<pre><code>func deploymentCheck(namespace string, clientset *kubernetes.Clientset) {
// get the deployments in the namespace
deployments, err := clientset.AppsV1().Deployments(namespace).List(context.TODO(), metav1.ListOptions{})
if errors.IsNotFound(err) {
log.Fatal("\nNo deployments in the namespace", err)
} else if err != nil {
log.Fatal("\nFailed to fetch deployments in the namespace", err)
}
var dptNames []string
for _, dpt := range deployments.Items {
dptNames = append(dptNames, dpt.Name)
}
// check the last update time of the deployments
for _, dpt := range deployments.Items {
lastUpdateTime := dpt.Status.Conditions[1].LastUpdateTime
dptAge := time.Since(lastUpdateTime.Time)
fmt.Printf("\nDeployment %v age: %v", dpt.Name, dptAge)
}
}
</code></pre>
<p>The equivalent of <code>lastUpdateTime := dpt.Status.Conditions[1].LastUpdateTime</code> for a StatefulSet doesn't seem to exist.</p>
<p>So, how can I get the lastUpdateTime of a StatefulSet?</p>
| <p>I noticed that the only things that change after someone edits a given resource are the resource's lastAppliedConfiguration, Generation and ObservedGeneration. So, I stored them in lists:</p>
<pre><code> for _, deployment := range deployments.Items {
deploymentNames = append(deploymentNames, deployment.Name)
lastAppliedConfig := deployment.GetAnnotations()["kubectl.kubernetes.io/last-applied-configuration"]
lastAppliedConfigs = append(lastAppliedConfigs, lastAppliedConfig)
generations = append(generations, deployment.Generation)
observedGenerations = append(observedGenerations, deployment.Status.ObservedGeneration)
}
</code></pre>
<p>Here's the full function:</p>
<pre><code>func DeploymentCheck(namespace string, clientset *kubernetes.Clientset) ([]string, []string, []int64, []int64) {
var deploymentNames []string
var lastAppliedConfigs []string
var generations []int64
var observedGenerations []int64
deployments, err := clientset.AppsV1().Deployments(namespace).List(context.TODO(), metav1.ListOptions{})
if errors.IsNotFound(err) {
log.Print("No deployments in the namespace", err)
} else if err != nil {
log.Print("Failed to fetch deployments in the namespace", err)
}
for _, deployment := range deployments.Items {
deploymentNames = append(deploymentNames, deployment.Name)
lastAppliedConfig := deployment.GetAnnotations()["kubectl.kubernetes.io/last-applied-configuration"]
lastAppliedConfigs = append(lastAppliedConfigs, lastAppliedConfig)
generations = append(generations, deployment.Generation)
observedGenerations = append(observedGenerations, deployment.Status.ObservedGeneration)
}
return deploymentNames, lastAppliedConfigs, generations, observedGenerations
}
</code></pre>
<p>I use all this information to instantiate a struct called Namespace, which contains all major resources a k8s namespace can have.</p>
<p>Then, after a given time I check the same namespace again and check if its resources had any changes:</p>
<pre><code>if !reflect.DeepEqual(namespace.stsLastAppliedConfig, namespaceCopy.stsLastAppliedConfig) {
...
}
else if !reflect.DeepEqual(namespace.stsGeneration, namespaceCopy.stsGeneration) {
...
}
else if !reflect.DeepEqual(namespace.stsObservedGeneration, namespaceCopy.stsObservedGeneration) {
...
}
</code></pre>
<p>So, the only workaround I found was to compare the resource's configuration, including StatefulSets', after a given time. Apparently, for some resources you cannot get any information about their lastUpdateTime.</p>
<p>I also found out that lastUpdateTime is actually not reliable, as it treats minor cluster changes as changes to the resource. For example, if a cluster rotates and kills all pods, the lastUpdateTime of a Deployment will be updated. That's not what I wanted. I wanted to detect user changes to resources, like when someone applies an edited yaml file or runs <code>kubectl edit</code>.</p>
<p>@hypperster , I hope it helps.</p>
|
<p>I need to read file from smb:// share within Knative function. Is it possible?</p>
<p>I tried to mount the share with Volumes (PersistentVolume/PersistentVolumeClaim), but it seems Knative Services don't support this Volume functionality (supported volume types: <a href="https://github.com/knative/docs/blob/51b878d8ebb494e48cfa5f4214defe92e85eb491/docs/serving/spec/knative-api-specification-1.0.md#volume" rel="nofollow noreferrer">https://github.com/knative/docs/blob/51b878d8ebb494e48cfa5f4214defe92e85eb491/docs/serving/spec/knative-api-specification-1.0.md#volume</a>).</p>
<p>Error description:</p>
<pre><code>for: "service.yaml":
error when patching "service.yaml": admission webhook "validation.webhook.serving.knative.dev" denied the request:
validation failed:
Persistent volume claim support is disabled, but found persistent volume claim pvc-smb:
Persistent volume write support is disabled, but found persistent volume claim pvc-smb that is not read-only:
must not set the field(s): spec.template.spec.volumes[0].persistentVolumeClaim
</code></pre>
<p>I also executed mount command within pod container. But also received an error:</p>
<pre><code>mount -t cifs "//IP/SHARE" "/data" -o "username=debian"
Unable to apply new capability set.
</code></pre>
| <p>There is a <a href="https://github.com/knative/serving/blob/main/config/core/configmaps/features.yaml#L173" rel="nofollow noreferrer">feature flag <code>kubernetes.podspec-persistent-volume-claim</code> (and <code>kubernetes.podspec-persistent-volume-write</code>)</a> which an administrator can use to enable mounting persistent volumes, including SMB shares.</p>
<p>These are disabled by default because volume mounts can impose scalability, performance, and correctness challenges for applications, and may restrict portability across clusters. There are some guard rails like this in Knative, with a goal of defaulting to more portable and less host-linked configuration.</p>
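<p>As a sketch, assuming a default Knative Serving install where these flags live in the <code>config-features</code> ConfigMap in the <code>knative-serving</code> namespace (double-check the flag names against your Knative version), an administrator could enable them like this:</p>
<pre><code>kubectl patch configmap config-features -n knative-serving --type merge \
  -p '{"data":{"kubernetes.podspec-persistent-volume-claim":"enabled","kubernetes.podspec-persistent-volume-write":"enabled"}}'
</code></pre>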
|
<p>**Unable to connect to the server: getting credentials: decoding stdout: no kind "ExecCredential" is registered for version "client.authentication.k8s.io/v1alpha1" in scheme "pkg/client/auth/exec/exec.go:62"
**</p>
<pre><code>2022-09-16 16:35:00 [ℹ] eksctl version 0.111.0
2022-09-16 16:35:00 [ℹ] using region ap-south-1
2022-09-16 16:35:00 [ℹ] skipping ap-south-1c from selection because it doesn't support the following instance type(s): t2.micro
2022-09-16 16:35:00 [ℹ] setting availability zones to [ap-south-1a ap-south-1b]
2022-09-16 16:35:00 [ℹ] subnets for ap-south-1a - public:192.168.0.0/19 private:192.168.64.0/19
2022-09-16 16:35:00 [ℹ] subnets for ap-south-1b - public:192.168.32.0/19 private:192.168.96.0/19
2022-09-16 16:35:00 [ℹ] nodegroup "ng-1" will use "" [AmazonLinux2/1.23]
2022-09-16 16:35:00 [ℹ] using Kubernetes version 1.23
2022-09-16 16:35:00 [ℹ] creating EKS cluster "basic-cluster" in "ap-south-1" region with managed nodes
2022-09-16 16:35:00 [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2022-09-16 16:35:00 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=ap-south-1 --cluster=basic-cluster'
2022-09-16 16:35:00 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "basic-cluster" in "ap-south-1"
2022-09-16 16:35:00 [ℹ] CloudWatch logging will not be enabled for cluster "basic-cluster" in "ap-south-1"
2022-09-16 16:35:00 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=ap-south-1 --cluster=basic-cluster'
2022-09-16 16:35:00 [ℹ]
2 sequential tasks: { create cluster control plane "basic-cluster",
2 sequential sub-tasks: {
wait for control plane to become ready,
create managed nodegroup "ng-1",
}
}
2022-09-16 16:35:00 [ℹ] building cluster stack "eksctl-basic-cluster-cluster"
2022-09-16 16:35:00 [ℹ] deploying stack "eksctl-basic-cluster-cluster"
2022-09-16 16:35:30 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-cluster"
2022-09-16 16:36:01 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-cluster"
2022-09-16 16:37:01 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-cluster"
2022-09-16 16:38:01 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-cluster"
2022-09-16 16:39:01 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-cluster"
2022-09-16 16:40:01 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-cluster"
2022-09-16 16:41:02 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-cluster"
2022-09-16 16:42:02 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-cluster"
2022-09-16 16:43:02 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-cluster"
2022-09-16 16:44:02 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-cluster"
2022-09-16 16:45:02 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-cluster"
2022-09-16 16:46:03 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-cluster"
2022-09-16 16:48:05 [ℹ] building managed nodegroup stack "eksctl-basic-cluster-nodegroup-ng-1"
2022-09-16 16:48:05 [ℹ] deploying stack "eksctl-basic-cluster-nodegroup-ng-1"
2022-09-16 16:48:05 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-nodegroup-ng-1"
2022-09-16 16:48:36 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-nodegroup-ng-1"
2022-09-16 16:49:22 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-nodegroup-ng-1"
2022-09-16 16:49:53 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-nodegroup-ng-1"
2022-09-16 16:51:15 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-nodegroup-ng-1"
2022-09-16 16:52:09 [ℹ] waiting for CloudFormation stack "eksctl-basic-cluster-nodegroup-ng-1"
2022-09-16 16:52:09 [ℹ] waiting for the control plane availability...
2022-09-16 16:52:09 [✔] saved kubeconfig as "/home/santhosh_puvaneswaran/.kube/config"
2022-09-16 16:52:09 [ℹ] no tasks
2022-09-16 16:52:09 [✔] all EKS cluster resources for "basic-cluster" have been created
2022-09-16 16:52:09 [ℹ] nodegroup "ng-1" has 3 node(s)
2022-09-16 16:52:09 [ℹ] node "ip-192-168-15-31.ap-south-1.compute.internal" is ready
2022-09-16 16:52:09 [ℹ] node "ip-192-168-35-216.ap-south-1.compute.internal" is ready
2022-09-16 16:52:09 [ℹ] node "ip-192-168-36-191.ap-south-1.compute.internal" is ready
2022-09-16 16:52:09 [ℹ] waiting for at least 3 node(s) to become ready in "ng-1"
2022-09-16 16:52:09 [ℹ] nodegroup "ng-1" has 3 node(s)
2022-09-16 16:52:09 [ℹ] node "ip-192-168-15-31.ap-south-1.compute.internal" is ready
2022-09-16 16:52:09 [ℹ] node "ip-192-168-35-216.ap-south-1.compute.internal" is ready
2022-09-16 16:52:09 [ℹ] node "ip-192-168-36-191.ap-south-1.compute.internal" is ready
*2022-09-16 16:52:10 [✖] unable to use kubectl with the EKS cluster (check 'kubectl version'): WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Unable to connect to the server: getting credentials: decoding stdout: no kind "ExecCredential" is registered for version "client.authentication.k8s.io/v1alpha1" in scheme "pkg/client/auth/exec/exec.go:62"*
2022-09-16 16:52:10 [ℹ] cluster should be functional despite missing (or misconfigured) client binaries
2022-09-16 16:52:10 [✔] EKS cluster "basic-cluster" in "ap-south-1" region is ready
santhosh_puvaneswaran@it002072:
</code></pre>
<p>I don't know why I am getting this error again and again, <a href="https://i.stack.imgur.com/2ttCF.png" rel="noreferrer"><img src="https://i.stack.imgur.com/2ttCF.png" alt="enter image description here" /></a></p>
<p>I can create and delete clusters, but I can't work with them!</p>
| <p>You need to update your AWS CLI to >2.7.25 or the latest (recommended), ensure your CLI is pointing to the right region, then try <code>eksctl utils write-kubeconfig --cluster=<name></code>. Open the kubeconfig file and check <code>client.authentication.k8s.io/v1alpha1</code> has changed to <code>client.authentication.k8s.io/v1beta1</code>.</p>
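<p>For example, a quick sequence to check and regenerate the kubeconfig (cluster name and region taken from your output):</p>
<pre><code>aws --version   # should report 2.7.25 or newer
eksctl utils write-kubeconfig --cluster=basic-cluster --region=ap-south-1
grep "client.authentication.k8s.io" ~/.kube/config   # expect v1beta1
</code></pre>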
|
<p>I manage my DNS using GoDaddy and I was hoping to have Lets Encrypt certificates generated for my kubernetes deployment. However, trying to generate a certificate generates the error</p>
<pre><code>I0728 17:31:12.123952 1 dns.go:88] cert-manager/controller/challenges/Present "msg"="presenting DNS01 challenge for domain" "dnsName"="XXXX" "domain"="XXX" "resource_kind"="Challenge" "resource_name"="letsencrypt-staging-bflxn-153714257-3821133841" "resource_namespace"="default" "resource_version"="v1" "type"="DNS-01"
E0728 17:31:12.129511 1 controller.go:163] cert-manager/controller/challenges "msg"="re-queuing item due to error processing" "error"="godaddy.acme.mycompany.com is forbidden: User \"system:serviceaccount:cert-manager:cert-manager\" cannot create resource \"godaddy\" in API group \"acme.mycompany.com\" at the cluster scope" "key"="default/letsencrypt-staging-bflxn-153714257-3821133841"
</code></pre>
<p>At the core of this problem, I believe, is what the <code>groupName</code> and <code>solver</code> should be for my <code>ClusterIssuer</code>.</p>
<p><code>secret.yml</code></p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: godaddy-api-key
namespace: cert-manager
type: Opaque
stringData:
token: GO_DADDY_KEY:GO_DADDY_SECRET
</code></pre>
<p><code>issuer.yml</code></p>
<pre><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
spec:
acme:
server: https://acme-staging-v02.api.letsencrypt.org/directory
email: XXXX
privateKeySecretRef:
name: letsencrypt-staging
solvers:
- selector:
dnsNames:
- '*.company.com'
dns01:
webhook:
config:
apiKeySecretRef:
name: godaddy-api-key
key: token
production: true
ttl: 600
groupName: acme.mycompany.com
solverName: godaddy
</code></pre>
<p><em>NB: I've tried different permutations of groupName including using a unique domain with no success</em></p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: letsencrypt-staging
spec:
secretName: letsencrypt-staging
renewBefore: 240h
dnsNames:
- "*.company.com"
issuerRef:
name: letsencrypt-staging
kind: ClusterIssuer
</code></pre>
<p>But the certificate is never generated</p>
<pre><code>$ k get certificate letsencrypt-staging
NAME READY SECRET AGE
letsencrypt-staging False letsencrypt-staging 8m27s
</code></pre>
<p>I'm using this webhook <a href="https://github.com/snowdrop/godaddy-webhook" rel="nofollow noreferrer">https://github.com/snowdrop/godaddy-webhook</a></p>
| <p>I also encountered this issue. I got it fixed by adding a missing <code>ClusterRole</code> and <code>ClusterRolebinding</code>. This is the manifest that fixed the issue for me.</p>
<pre class="lang-yaml prettyprint-override"><code>---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: dns-challenge-missing-role
rules:
- apiGroups: ["acme.mycompany.com"] # "" indicates the core API group
resources: ["godaddy"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: dns-challenge-missing-role-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: dns-challenge-missing-role
subjects:
- kind: ServiceAccount
name: cert-manager
namespace: cert-manager
</code></pre>
|
<p>I use this manifest configuration to deploy a registry into 3 mode Kubernetes cluster:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv1
namespace: registry-space
spec:
capacity:
storage: 5Gi # specify your own size
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Retain
local:
path: /opt/registry # can be any path
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- kubernetes2
accessModes:
- ReadWriteMany # only 1 node will read/write on the path.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv1-claim
namespace: registry-space
spec: # should match specs added in the PersistenVolume
accessModes:
- ReadWriteMany
volumeMode: Filesystem
resources:
requests:
storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: private-repository-k8s
namespace: registry-space
labels:
app: private-repository-k8s
spec:
replicas: 1
selector:
matchLabels:
app: private-repository-k8s
template:
metadata:
labels:
app: private-repository-k8s
spec:
volumes:
- name: certs-vol
hostPath:
path: /opt/certs
type: Directory
- name: task-pv-storage
persistentVolumeClaim:
claimName: pv1-claim # specify the PVC that you've created. PVC and Deployment must be in same namespace.
containers:
- image: registry:2
name: private-repository-k8s
imagePullPolicy: IfNotPresent
env:
- name: REGISTRY_HTTP_TLS_CERTIFICATE
value: "/opt/certs/registry.crt"
- name: REGISTRY_HTTP_TLS_KEY
value: "/opt/certs/registry.key"
ports:
- containerPort: 5000
volumeMounts:
- name: certs-vol
mountPath: /opt/certs
- name: task-pv-storage
mountPath: /opt/registry
</code></pre>
<p>I manually created directories on every node under <code>/opt/certs</code> and <code>/opt/registry</code>.</p>
<p>But when I try to deploy the manifest without the hardcoded <code>nodeSelectorTerms</code> on the control plane, I get an error:</p>
<pre><code>kubernetes@kubernetes1:/opt/registry$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-58dbc876ff-fsjd5 1/1 Running 1 (74m ago) 84m
kube-system calico-node-5brzt 1/1 Running 1 (73m ago) 84m
kube-system calico-node-nph9n 1/1 Running 1 (76m ago) 84m
kube-system calico-node-pcd74 1/1 Running 1 (74m ago) 84m
kube-system calico-node-ph2ht 1/1 Running 1 (76m ago) 84m
kube-system coredns-565d847f94-7pswp 1/1 Running 1 (74m ago) 105m
kube-system coredns-565d847f94-tlrfr 1/1 Running 1 (74m ago) 105m
kube-system etcd-kubernetes1 1/1 Running 2 (74m ago) 105m
kube-system kube-apiserver-kubernetes1 1/1 Running 2 (74m ago) 105m
kube-system kube-controller-manager-kubernetes1 1/1 Running 2 (74m ago) 105m
kube-system kube-proxy-4slm4 1/1 Running 1 (76m ago) 86m
kube-system kube-proxy-4tnx2 1/1 Running 2 (74m ago) 105m
kube-system kube-proxy-9dgsj 1/1 Running 1 (73m ago) 85m
kube-system kube-proxy-cgr44 1/1 Running 1 (76m ago) 86m
kube-system kube-scheduler-kubernetes1 1/1 Running 2 (74m ago) 105m
registry-space private-repository-k8s-6d5d954b4f-xkmj5 0/1 Pending 0 4m55s
kubernetes@kubernetes1:/opt/registry$
</code></pre>
<p>Do you know how I can let Kubernetes decide where to deploy the pod?</p>
| <p>Let's try the following (disregard the paths you currently have and use the ones in the example for now; you can change them later). We can adapt it to your needs once dynamic provisioning is working. At the very bottom there is a mysql image as an example; use busybox, or leave it as it is, to get a better understanding:</p>
<ol>
<li>NFS Server install. Create NFS Share on File Server (Usually master node)</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>
#Include prerequisites
sudo apt update -y # Run updates prior to installing
sudo apt install nfs-kernel-server # Install NFS Server
sudo systemctl enable nfs-server # Set nfs-server to load on startups
sudo systemctl status nfs-server # Check its status
# check server status
root@worker03:/home/brucelee# sudo systemctl status nfs-server
● nfs-server.service - NFS server and services
Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor preset: enabled)
Active: active (exited) since Fri 2021-08-13 04:25:50 UTC; 18s ago
Process: 2731 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Process: 2732 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
Main PID: 2732 (code=exited, status=0/SUCCESS)
Aug 13 04:25:49 linux03 systemd[1]: Starting NFS server and services...
Aug 13 04:25:50 linux03 systemd[1]: Finished NFS server and services.
# Prepare an empty folder
sudo su # enter root
nfsShare=/nfs-share
mkdir $nfsShare # create folder if it doesn't exist
chown nobody: $nfsShare
chmod -R 777 $nfsShare # not recommended for production
# Edit the nfs server share configs
vim /etc/exports
# add these lines
/nfs-share x.x.x.x/24(rw,sync,no_subtree_check,no_root_squash,no_all_squash,insecure)
# Export directory and make it available
sudo exportfs -rav
# Verify nfs shares
sudo exportfs -v
# Enable ingress for subnet
sudo ufw allow from x.x.x.x/24 to any port nfs
# Check firewall status - inactive firewall is fine for testing
root@worker03:/home/brucelee# sudo ufw status
Status: inactive
</code></pre>
<ol start="2">
<li>NFS Client install (Worker nodes)</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code># Install prerequisites
sudo apt update -y
sudo apt install nfs-common
# Mount the nfs share
remoteShare=server.ip.here:/nfs-share
localMount=/mnt/testmount
sudo mkdir -p $localMount
sudo mount $remoteShare $localMount
# Unmount
sudo umount $localMount
</code></pre>
<ol start="3">
<li>Dinamic provisioning and Storage class defaulted</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code># Pull the source code
workingDirectory=~/nfs-dynamic-provisioner
mkdir $workingDirectory && cd $workingDirectory
git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
cd nfs-subdir-external-provisioner/deploy
# Deploying the service accounts, accepting defaults
k create -f rbac.yaml
# Editing storage class
vim class.yaml
##############################################
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: managed-nfs-ssd # set this value
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME'
parameters:
archiveOnDelete: "true" # value of true means retaining data upon pod terminations
allowVolumeExpansion: "true" # this attribute doesn't exist by default
##############################################
# Deploying storage class
k create -f class.yaml
# Sample output
stoic@masternode:~/nfs-dynamic-provisioner/nfs-subdir-external-provisioner/deploy$ k get storageclasses.storage.k8s.io
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
managed-nfs-ssd k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 33s
nfs-class kubernetes.io/nfs Retain Immediate true 193d
nfs-client (default) cluster.local/nfs-subdir-external-provisioner Delete Immediate true 12d
# Example of patching an applied object
kubectl patch storageclass managed-nfs-ssd -p '{"allowVolumeExpansion":true}'
kubectl patch storageclass managed-nfs-ssd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' # Set storage class as default
# Editing deployment of dynamic nfs provisioning service pod
vim deployment.yaml
##############################################
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-client-provisioner
labels:
app: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: default
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: nfs-client-provisioner
template:
metadata:
labels:
app: nfs-client-provisioner
spec:
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-client-provisioner
image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: k8s-sigs.io/nfs-subdir-external-provisioner
- name: NFS_SERVER
value: X.X.X.X # change this value
- name: NFS_PATH
value: /nfs-share # change this value
volumes:
- name: nfs-client-root
nfs:
server: 192.168.100.93 # change this value
path: /nfs-share # change this value
##############################################
# Creating nfs provisioning service pod
k create -f deployment.yaml
# Troubleshooting: example where the deployment was pending variables to be created by rbac.yaml
stoic@masternode: $ k describe deployments.apps nfs-client-provisioner
Name: nfs-client-provisioner
Namespace: default
CreationTimestamp: Sat, 14 Aug 2021 00:09:24 +0000
Labels: app=nfs-client-provisioner
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=nfs-client-provisioner
Replicas: 1 desired | 0 updated | 0 total | 0 available | 1 unavailable
StrategyType: Recreate
MinReadySeconds: 0
Pod Template:
Labels: app=nfs-client-provisioner
Service Account: nfs-client-provisioner
Containers:
nfs-client-provisioner:
Image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
Port: <none>
Host Port: <none>
Environment:
PROVISIONER_NAME: k8s-sigs.io/nfs-subdir-external-provisioner
NFS_SERVER: X.X.X.X
NFS_PATH: /nfs-share
Mounts:
/persistentvolumes from nfs-client-root (rw)
Volumes:
nfs-client-root:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: X.X.X.X
Path: /nfs-share
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Progressing True NewReplicaSetCreated
Available False MinimumReplicasUnavailable
ReplicaFailure True FailedCreate
OldReplicaSets: <none>
NewReplicaSet: nfs-client-provisioner-7768c6dfb4 (0/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 3m47s deployment-controller Scaled up replica set nfs-client-provisioner-7768c6dfb4 to 1
# Get the default nfs storage class
echo $(kubectl get sc -o=jsonpath='{range .items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")]}{@.metadata.name}{"\n"}{end}')
</code></pre>
<ol start="4">
<li>PersistentVolumeClaim (Notice the storageClassName it is the one defined on the previous step)</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-persistentvolume-claim
namespace: default
spec:
storageClassName: nfs-client
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
</code></pre>
<ol start="5">
<li>PersistentVolume</li>
</ol>
<p>It is created dinamically ! confirm if it is here with the correct values running this command:</p>
<blockquote>
<p>kubectl get pv -A</p>
</blockquote>
<ol start="6">
<li>Deployment</li>
</ol>
<p>On your Deployment you need two things: volumeMounts (for each container) and volumes (for all containers).
Notice that volumeMounts->name=data and volumes->name=data must match, and claimName is my-persistentvolume-claim, which is the same as your PVC.</p>
<pre class="lang-yaml prettyprint-override"><code> ...
spec:
containers:
- name: mysql
image: mysql:8.0.30
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
volumes:
- name: data
persistentVolumeClaim:
claimName: my-persistentvolume-claim
</code></pre>
|
<p>I want to run patching of statefulsets for a specific use case from a Pod via a cronjob. To do so I created the following plan with a custom service account, role and rolebinding to permit the Pod access to the apps api group with the patch verb but I keep running into the following error:</p>
<pre><code>Error from server (Forbidden): statefulsets.apps "test-statefulset" is forbidden: User "system:serviceaccount:test-namespace:test-serviceaccount" cannot get resource "statefulsets" in API group "apps" in the namespace "test-namespace"
</code></pre>
<p>my k8s plan:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
labels:
env: test
name: test-serviceaccount
namespace: test-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
env: test
name: test-role
namespace: test-namespace
rules:
- apiGroups:
- apps/v1
resourceNames:
- test-statefulset
resources:
- statefulsets
verbs:
- patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
name: test-binding
namespace: test-namespace
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: test-role
subjects:
- kind: ServiceAccount
name: test-serviceaccount
namespace: test-namespace
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
labels:
name:test-job
namespace: test-namespace
spec:
concurrencyPolicy: Forbid
failedJobsHistoryLimit: 3
jobTemplate:
metadata:
labels:
env: test
spec:
activeDeadlineSeconds: 900
backoffLimit: 1
parallelism: 1
template:
metadata:
labels:
env: test
spec:
containers:
- args:
- kubectl -n test-namespace patch statefulset test-statefulset -p '{"spec":{"replicas":0}}'
- kubectl -n test-namespace patch statefulset test-statefulset -p '{"spec":{"replicas":1}}'
command:
- /bin/sh
- -c
image: bitnami/kubectl
restartPolicy: Never
serviceAccountName: test-serviceaccount
schedule: '*/5 * * * *'
startingDeadlineSeconds: 300
successfulJobsHistoryLimit: 3
suspend: false
</code></pre>
<p>So far to debug:</p>
<ol>
<li><p>I have checked if the pod and serviceaccount association worked as expected and it looks like it did. I see the name of secret mounted on the Pod the cronjob starts is correct.</p>
</li>
<li><p>Used a simpler role where apiGroups was "" i.e. all core groups and tried to "get pods" from that pod, same error</p>
</li>
</ol>
<p>role description:</p>
<pre><code>Name: test-role
Labels: env=test
Annotations: <none>
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
statefulsets.apps/v1 [] [test-statefulset] [patch]
</code></pre>
<p>rolebinding description:</p>
<pre><code>Name: test-binding
Labels: env=test
Annotations: <none>
Role:
Kind: Role
Name: test-role
Subjects:
Kind Name Namespace
---- ---- ---------
ServiceAccount test-serviceaccount test-namespace
</code></pre>
| <p>StatefulSets need two verbs in the Role to apply a patch: <code>get</code> and <code>patch</code>. <code>patch</code> alone won't work.</p>
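<p>A sketch of the corrected rule (note that <code>apiGroups</code> takes the group name <code>apps</code>, without the version):</p>
<pre><code>rules:
- apiGroups:
  - apps            # group name only, no version suffix
  resourceNames:
  - test-statefulset
  resources:
  - statefulsets
  verbs:
  - get
  - patch
</code></pre>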
|
<p>I am looking to create a CRD, which has some of the Specs of an existing k8s object. Is there a way of importing the schema and validation checks of the existing spec instead of manually repeating it again?</p>
<p>For reference, I am registering the CRD with the API like this - <a href="https://gist.github.com/tallclair/2491c8034f62629b224260fb8a1854d9#file-dynamic_crds-go-L56" rel="nofollow noreferrer">https://gist.github.com/tallclair/2491c8034f62629b224260fb8a1854d9#file-dynamic_crds-go-L56</a>
And I would like to add a PodSpec into this CRD type.</p>
| <p>CRD are managed by a controller specific to that CRD.</p>
<p>Validation of an object concerning the CRD is achieved through a service that takes a call from the API, in this case validation would work along these lines, <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/" rel="nofollow noreferrer">admission controller validating webhook</a></p>
<p>More generally, your CRD does not need to concern itself with podspec per se. The CRD is just some declarative representation of the resource you want your controller to manage.</p>
<p>Extending the k8s api mostly works something like this;</p>
<ol>
<li>think up some bundled functionality you would like to represent declaratively in one schema (the CRD)</li>
<li>create a controller that handles your CRD</li>
<li>add some validation to make sure the API will reject objects that will confuse the controller you made, and hook it up to the API by way of the <a href="https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#write-an-admission-webhook-server" rel="nofollow noreferrer">Dynamic Admission Control</a> (a minimal sketch follows this list)</li>
<li>your controller manages the resources required to fulfil the functionality described</li>
</ol>
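<p>For step 3, a minimal <code>ValidatingWebhookConfiguration</code> sketch might look like this (the group, resource and service names here are placeholders, not anything from the question):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: mycrd-validation
webhooks:
- name: validate.mycrd.example.com
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  rules:
  - apiGroups: ["example.com"]
    apiVersions: ["v1alpha1"]
    resources: ["mycrds"]
    operations: ["CREATE", "UPDATE"]
  clientConfig:
    service:
      namespace: mycrd-system
      name: mycrd-webhook
      path: /validate
    caBundle: "..."   # placeholder: base64-encoded CA that signs the webhook server's certificate
</code></pre>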
<p>I'm sure you <em>could</em> use a podspec in your CRD, but I wouldn't. Generally that's an abstraction better left to the controller managing that specific resource.</p>
|
<p>I'm trying to start minikube but it gives me this error</p>
<pre><code>this vM is having trouble accessing https://k8s.gcr.io
To pull new external images, you may need to configure a proxy:
https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
</code></pre>
<p>I tried docker and HyperV and VirtualBox same error kubectl is working fine but whenever I tried to pull a namespace like Kubernetes-dashboard I get errimagepull</p>
| <h3>Option 1:</h3>
<ul>
<li>Try to set up a proxy</li>
<li><a href="https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/</a></li>
</ul>
<pre class="lang-bash prettyprint-override"><code>### Linux
# Set your proxy
export HTTP_PROXY=http://<proxy hostname:port>
export HTTPS_PROXY=https://<proxy hostname:port>
### Windows
set HTTP_PROXY=http://<proxy hostname:port>
set HTTPS_PROXY=https://<proxy hostname:port>
# Start minikube
minikube start
</code></pre>
<hr />
<h3>Option 2:</h3>
<pre class="lang-bash prettyprint-override"><code>minikube start --image-repository=auto
</code></pre>
<hr />
<h3>Option 3:</h3>
<pre class="lang-bash prettyprint-override"><code># Remove old content (minikube context)
minikube delete
# Start minikube with docker driver in case you have been using something else
minikube start --driver=docker
</code></pre>
<hr />
<h3>Option 4. (If you are in china)</h3>
<pre class="lang-bash prettyprint-override"><code>$ minikube start \
--driver=docker \
##
## Try with or without this flag if it's not working for you
## [optional] --image-mirror-country
--image-mirror-country=cn \
--registry-mirror=https://registry.docker-cn.com \
--image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers
</code></pre>
|
<p>I would like to migrate an application from one GKE cluster to another, and I'm wondering how to accomplish this while avoiding any downtime for this process.</p>
<p>The application is an HTTP web backend.</p>
<p>Usually how I'd usually handle this in a non GCP/K8S context is have a load balancer in front of the application, setup a new web backend and then just update the appropriate IP address in the load balancer to point from the old IP to the new IP. This would essentially have 0 downtime while also allowing for a seemless rollback if anything goes wrong.</p>
<p>I do not see why this should not work for this context as well however I'm not 100% sure. And if there is a more robust or alternative way to do this (GCP/GKE friendly way), I'd like to investigate that.</p>
<p><strong>So to summarize my question,</strong> does GCP/GKE support this type of migration functionality? If not, is there any implications I need to be aware of with my usual load balancer approach mentioned above?</p>
<hr />
<p>The reason for migrating is the current k8s cluster is running quite an old version (1.18) and if doing an GKE version upgrade to something more recent like 1.22, I suspect a lot of incompatibilities as well risk.</p>
| <p><strong>Few pain points you'll run into:</strong></p>
<ol>
<li>As someone used to DIY Kubernetes, I hate GKE's managed Ingress certs, because they make it very hard to pre-provision HTTPS certs on the new cluster. (GKE's de facto method of provisioning HTTPS certs is to update DNS to point to the LB and then wait 10-60 minutes. That means if you cut over to a new cluster, the new cluster's HTTPS cert, supplied by a managedcertificate Custom Resource, won't be ready in advance.)</li>
<li>It is possible to pre-provision HTTPS certs using an ACME-DNS challenge on GCP, but it's poorly documented and a god-awful UX (user experience): there's no GUI, and the CLI API is terrible.
You can do it via <code>gcloud services enable certificatemanager.googleapis.com</code>, but I'd highly recommend against
that certificate-manager service that went GA in June 2022. The UX is painful.</li>
<li>GKE's official docs are pretty bad when it comes to this scenario</li>
</ol>
<p><strong>You basically want to do 2 things:</strong></p>
<ol>
<li>Follow this how to guide for a zero downtime HTTPS cutover from cluster1 to cluster2 by leveraging Lets Encrypt Free Cert<br />
<a href="https://gist.github.com/neoakris/4aafeac7628995da8dd423f1702c975b" rel="nofollow noreferrer">https://gist.github.com/neoakris/4aafeac7628995da8dd423f1702c975b</a><br />
(I know link only answers are bad, but it's github (great uptime) and it's way too long and nuanced to post here.)</li>
<li>Use Velero to migrate workloads from cluster1 to cluster2 (it can migrate CRDs, CRs, generic yaml objects, and PV/PVCs; a rough CLI sketch follows this list). One thing of note is that Velero works best when you migrate between clusters of the same version; if you go from a really old version to a really new version you could encounter issues where Kubernetes yaml APIs got removed in the new version. Going from an old version to a new version can be done, but it's best left to an experienced hand. For happy-path results, migrating between clusters of the same version is best.</li>
</ol>
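<p>A rough sketch of that Velero flow (the namespace and backup names are placeholders, and both clusters are assumed to point at the same object-storage bucket):</p>
<pre class="lang-bash prettyprint-override"><code># On cluster1: back up the namespaces you care about
velero backup create app-backup --include-namespaces my-app

# On cluster2 (with Velero installed against the same bucket): restore it
velero restore create --from-backup app-backup
</code></pre>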
|
<p>I'm using <a href="https://external-secrets.io/v0.6.0-rc1/" rel="nofollow noreferrer">External Secrets</a> to sync my secrets from azure. And now I need a programmatic way to trigger the sync. With <code>kubectl</code> the command is</p>
<pre class="lang-bash prettyprint-override"><code>kubectl annotate es my-es force-sync=$(date +%s) --overwrite
</code></pre>
<p>So, I try to use k8s js sdk to do this. I can success fully get the External Secret</p>
<pre class="lang-js prettyprint-override"><code>await crdApi.getNamespacedCustomObject("external-secrets.io", "v1beta1", "default", "externalsecrets", "my-es")
</code></pre>
<p>However, when I try to update it with <code>patchNamespacedCustomObject</code>, it always tells me <code>"the body of the request was in an unknown format - accepted media types include: application/json-patch+json, application/merge-patch+json, application/apply-patch+yaml"</code></p>
<p>Here's my code</p>
<pre class="lang-js prettyprint-override"><code>const kc = new k8s.KubeConfig();
kc.loadFromString(kubeConfig);
const crdApi = kc.makeApiClient(k8s.CustomObjectsApi);
let patch = [{
"op": "replace",
"path": "/metadata/annotations",
"value": {
"force-sync": "1663315075"
}
}];
await crdApi.patchNamespacedCustomObject("external-secrets.io", "v1beta1", "default", "externalsecrets", "my-es", patch);
</code></pre>
<p>I am referring their patch example <a href="https://github.com/kubernetes-client/javascript/blob/master/examples/patch-example.js" rel="nofollow noreferrer">here</a></p>
| <pre class="lang-js prettyprint-override"><code>const options = {
    "headers": {
        "Content-type": k8s.PatchUtils.PATCH_FORMAT_JSON_PATCH
    }
};
</code></pre>
<p>is still required, so that the patch body is sent with a <code>Content-Type</code> header the API server accepts (e.g. <code>application/json-patch+json</code> for a JSON Patch array, as listed in the error message).</p>
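<p>Putting it together, the call might look something like this (mirroring the linked patch example; the exact number of optional parameters before <code>options</code> can vary by client version, so treat this as a sketch):</p>
<pre class="lang-js prettyprint-override"><code>const options = {
    "headers": { "Content-type": k8s.PatchUtils.PATCH_FORMAT_JSON_PATCH }
};

await crdApi.patchNamespacedCustomObject(
    "external-secrets.io", "v1beta1", "default", "externalsecrets", "my-es",
    patch,
    undefined, // dryRun
    undefined, // fieldManager
    undefined, // force
    options
);
</code></pre>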
|
<p>I'm trying to get KNative to be able to create services on my Multipass VM with MacOS as the host OS and I am using MicroK8S. I have DNS enabled and I am using metallb as my ingress controller. I have also changed Multipass to use hyperkit instead of VirtualBox. I don't know what's not been configured or missconfigured. The error I get when I try to create a new service is pasted below:</p>
<p>ubuntu@uncommon-javelin:~/sandbox/sessions/serverless_k8s/yaml$ kn service create nginx --image nginx --port 80
Error: Internal error occurred: failed calling webhook "webhook.serving.knative.dev": failed to call webhook: Post "https://webhook.knative-serving.svc:443/defaulting?timeout=10s": dial tcp 10.152.183.167:443: connect: connection refused
Run 'kn --help' for usage</p>
<p>When I ping that IP, it times out. So it seems like that IP address is either locked down or doesn't exist. Port 443 is configured in my ingress-service.yaml file</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ingress
namespace: ingress
spec:
selector:
name: nginx-ingress-microk8s
type: LoadBalancer
ports:
- name: http
protocol: TCP
port: 80
targetPort: 80
- name: https
protocol: TCP
port: 443
targetPort: 443
</code></pre>
<p>And here is what I have configured for metallb address pool</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: test-service
annotations:
metallb.univers.tf/address-pool: custom-addresspool
spec:
selector:
name: nginx
type: LoadBalancer
ports:
- protocol: TCP
port: 80
targetPort: 80
</code></pre>
<p>And here's another address-pool.yaml I have configured for my cluster, I'm pretty sure that I have either something networking misconfigured or I'm missing a configuration somewhere.</p>
<pre><code>apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: custom-addresspool
namespace: metallb-system
spec:
addresses:
- 192.168.1.1-192.168.1.100
</code></pre>
| <p>Knative uses validating admission webhooks to ensure that the resources in the cluster are valid. It seems like the Knative webhooks are not running on your cluster, but the validatingwebhookconfiguration has been created, as has the service in front of the webhook (the IP in the error message is a ClusterIP of a Kubernetes service on your cluster).</p>
<p>I'd look at the webhook pods in the <code>knative-serving</code> namespace for more details.</p>
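<p>For example (assuming a standard install where the webhook runs as a deployment named <code>webhook</code> in <code>knative-serving</code> — adjust the names to your install):</p>
<pre class="lang-bash prettyprint-override"><code>kubectl get pods -n knative-serving
kubectl describe deployment webhook -n knative-serving
kubectl logs -n knative-serving deployment/webhook
</code></pre>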
|
<p>I'm trying to understand what happens when a container is configured with a CPU request and without a limit, and it tries to use more CPU than requested while the node is fully utilized, but there is another node with available resources.</p>
<p>Will k8s keep the container throttled in its current node or will it be moved to another node with available resources? do we know how/when k8s decides to move the container when its throttled in such a case?</p>
<p>I would appreciate any extra resources to read on this matter, as I couldn't find anything that go into details for this specific scenario.</p>
| <p><strong>Q1) what happens when a container is configured with a CPU request and without a limit ?</strong></p>
<p><strong>ANS:</strong></p>
<p><strong>If you do not specify a CPU limit</strong></p>
<p>If you do not specify a CPU limit for a Container, then one of these situations applies:</p>
<p>The Container has no upper bound on the CPU resources it can use. The Container could use all of the CPU resources available on the Node where it is running.</p>
<p>The Container is running in a namespace that has a default CPU limit, and the Container is automatically assigned the default limit. Cluster administrators can use a LimitRange to specify a default value for the CPU limit.</p>
<p><strong>If you specify a CPU limit but do not specify a CPU request</strong></p>
<p>If you specify a CPU limit for a Container but do not specify a CPU request, Kubernetes automatically assigns a CPU request that matches the limit. Similarly, if a Container specifies its own memory limit, but does not specify a memory request, Kubernetes automatically assigns a memory request that matches the limit.</p>
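<p>As a minimal illustration of the scenario in the question (a CPU request with no limit; names and image are illustrative):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: cpu-request-only
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "500m"
      # no limits: the container may burst up to whatever CPU is free on its node
</code></pre>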
<p><strong>Q2) it tries to use more CPU than requested while the node is fully utilized, but there is another node with available resources?</strong></p>
<p><strong>ANS:</strong></p>
<p>The <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-scheduler/#:%7E:text=kube%2Dscheduler-,kube%2Dscheduler,-Synopsis" rel="nofollow noreferrer">Kubernetes scheduler</a> is a control plane process which assigns Pods to Nodes. The scheduler determines which Nodes are valid placements for each Pod in the scheduling queue according to constraints and available resources. The scheduler then ranks each valid Node and binds the Pod to a suitable Node. Multiple different schedulers may be used within a cluster; kube-scheduler is the reference implementation. See scheduling for more information about scheduling and the kube-scheduler component.</p>
<p><strong>Scheduling, Preemption and Eviction</strong></p>
<p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/#:%7E:text=Preemption%20and%20Eviction-,Scheduling%2C%20Preemption%20and%20Eviction,-In%20Kubernetes%2C%20scheduling" rel="nofollow noreferrer">In Kubernetes</a>, scheduling refers to making sure that Pods are matched to Nodes so that the kubelet can run them. Preemption is the process of terminating Pods with lower Priority so that Pods with higher Priority can schedule on Nodes. Eviction is the process of terminating one or more Pods on Nodes.</p>
<p><strong>Q3) Will k8s keep the container throttled in its current node or will it be moved to another node with available resources?</strong></p>
<p><strong>ANS:</strong></p>
<p><strong>Pod Disruption</strong></p>
<p>Pod disruption is the process by which Pods on Nodes are terminated either voluntarily or involuntarily.</p>
<p>Voluntary disruptions are started intentionally by application owners or cluster administrators. Involuntary disruptions are unintentional and can be triggered by unavoidable issues like Nodes running out of resources, or by accidental deletions.</p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/pods/disruptions/#voluntary-and-involuntary-disruptions" rel="nofollow noreferrer">Voluntary and involuntary disruptions</a>
Pods do not disappear until someone (a person or a controller) destroys them, or there is an unavoidable hardware or system software error.</p>
<p>We call these unavoidable cases involuntary disruptions to an application.</p>
<p><strong>Examples are:</strong></p>
<ul>
<li>a hardware failure of the physical machine backing the node</li>
<li>cluster administrator deletes VM (instance) by mistake</li>
<li>cloud provider or hypervisor failure makes VM disappear</li>
<li>a kernel panic</li>
<li>the node disappears from the cluster due to cluster network partition</li>
<li>eviction of a pod due to the node being <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/node-pressure-eviction/" rel="nofollow noreferrer">out-of-resources</a>.</li>
</ul>
<p><strong>Suggestion:</strong></p>
<p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/" rel="nofollow noreferrer">Taints and tolerations</a> work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints.</p>
<p><strong>Command:</strong></p>
<pre><code>kubectl taint nodes node1 key1=value1:NoSchedule
</code></pre>
<p><strong><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-based-evictions" rel="nofollow noreferrer">Example</a>:</strong></p>
<pre><code>kubectl taint nodes node1 node.kubernetes.io/disk-pressure:NoSchedule
</code></pre>
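<p>The matching toleration for the first taint above (<code>key1=value1:NoSchedule</code>), placed in the Pod spec, would look something like:</p>
<pre class="lang-yaml prettyprint-override"><code>tolerations:
- key: "key1"
  operator: "Equal"
  value: "value1"
  effect: "NoSchedule"
</code></pre>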
|
<p>I am new to Argo and following the Quickstart templates and would like to deploy the HTTP template as a workflow.</p>
<p>I create my cluster as so:</p>
<pre class="lang-bash prettyprint-override"><code>minikube start --driver=docker --cpus='2' --memory='8g'
kubectl create ns argo
kubectl apply -n argo -f https://raw.githubusercontent.com/argoproj/argo-workflows/master/manifests/quick-start-postgres.yaml
</code></pre>
<p>I then apply the HTTP template <code>http_template.yaml</code> from the docs:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: http-template-
spec:
entrypoint: main
templates:
- name: main
steps:
- - name: get-google-homepage
template: http
arguments:
parameters: [ { name: url, value: "https://www.google.com" } ]
- name: http
inputs:
parameters:
- name: url
http:
timeoutSeconds: 20 # Default 30
url: "{{inputs.parameters.url}}"
method: "GET" # Default GET
headers:
- name: "x-header-name"
value: "test-value"
# Template will succeed if evaluated to true, otherwise will fail
# Available variables:
# request.body: string, the request body
# request.headers: map[string][]string, the request headers
# response.url: string, the request url
# response.method: string, the request method
# response.statusCode: int, the response status code
# response.body: string, the response body
# response.headers: map[string][]string, the response headers
successCondition: "response.body contains \"google\"" # available since v3.3
body: "test body" # Change request body
</code></pre>
<p><code>argo submit -n argo http_template.yaml --watch</code></p>
<p>However I get the the following error:</p>
<pre><code>Name: http-template-564qp
Namespace: argo
ServiceAccount: unset (will run with the default ServiceAccount)
Status: Error
Message: failed to get token volumes: service account argo/default does not have any secrets
</code></pre>
<p>I'm not clear on why this doesn't work given it's straight from the Quickstart documentation. Help would be appreciated.</p>
| <p>Argo Workflows does not yet work with Kubernetes v1.24+. See this issue:</p>
<p><a href="https://github.com/argoproj/argo-workflows/issues/8320" rel="nofollow noreferrer">https://github.com/argoproj/argo-workflows/issues/8320</a></p>
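<p>If the failure is specifically the missing ServiceAccount token Secret (Kubernetes 1.24+ no longer auto-creates these for ServiceAccounts), one possible workaround — not an official fix — is to create the token Secret yourself:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
  name: default.service-account-token
  namespace: argo
  annotations:
    kubernetes.io/service-account.name: default
type: kubernetes.io/service-account-token
</code></pre>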
|
<p>I have a schedule:
<code>schedule: "0 10,14,18 * * *"</code>, I'd like to run this job in 10:00 am, 2:00pm, 6:00pm.</p>
<p>Since I located at UTC+8 timezone, this cronjob looks like not run as I expected.</p>
<p>Anyway to add a config for timezone?</p>
| <p>In Kubernetes v1.25, you can enable the <code>CronJobTimeZone</code> feature gate and set timezone by setting <code>spec.timeZone</code> to a valid time zone name.</p>
<p>For example, setting <code>spec.timeZone: "Australia/Melbourne"</code> makes Kubernetes interpret the schedule relative to the <code>UTC+10 (STD)</code> timezone.</p>
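<p>For the schedule in the question, a sketch might look like this (the job name, image and command are placeholders):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: my-job
spec:
  timeZone: "Asia/Shanghai"          # UTC+8, matching the question
  schedule: "0 10,14,18 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: my-job
            image: busybox
            command: ["date"]
          restartPolicy: OnFailure
</code></pre>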
<p>You can refer to the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#time-zones" rel="noreferrer">official documentation</a> for more detailed info.</p>
|
<p>Is it possible to expose TCP Openshift Service using routes with SNI?
I am trying to have a TCP service in Openshift be exposed to outside the cluster with SNI. This article goes through some general idea of it. <a href="https://blog.zhaw.ch/icclab/openshift-custom-router-with-tcpsni-support/" rel="nofollow noreferrer">https://blog.zhaw.ch/icclab/openshift-custom-router-with-tcpsni-support/</a></p>
<p>Unfortunately, I am not able to get this working at the moment and the article is dated so I am not sure if it is the best resource to use at the moment. Does anyone have any other resources?</p>
| <p>A NodePort Service is the way to go when an application needs to expose a raw TCP protocol outside an OpenShift cluster.</p>
<p>Here is the limitation of the Ingress Controller (Router) *1</p>
<blockquote>
<p>This is limited to HTTP, HTTPS using SNI, and TLS using SNI, which is
sufficient for web applications and services that work over TLS with
SNI.</p>
</blockquote>
<p>*1: <a href="https://docs.openshift.com/container-platform/4.11/networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-ingress-controller.html#nw-using-ingress-and-routes_configuring-ingress-cluster-traffic-ingress-controller" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/4.11/networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-ingress-controller.html#nw-using-ingress-and-routes_configuring-ingress-cluster-traffic-ingress-controller</a></p>
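<p>A minimal NodePort Service sketch (names and ports here are illustrative):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: my-tcp-service
spec:
  type: NodePort
  selector:
    app: my-tcp-app
  ports:
  - protocol: TCP
    port: 9000
    targetPort: 9000
    nodePort: 30900   # must be in the cluster's NodePort range (default 30000-32767)
</code></pre>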
|
<p>I'm trying to configure RBAC for argocd, I saw a lot of examples like this one below:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-rbac-cm
namespace: argocd
data:
policy.default: role:readonly
policy.csv: |
p, role:org-admin, applications, *, */*, allow
p, role:org-admin, clusters, get, *, allow
p, role:org-admin, repositories, get, *, allow
p, role:org-admin, repositories, create, *, allow
p, role:org-admin, repositories, update, *, allow
p, role:org-admin, repositories, delete, *, allow
p, role:org-admin, logs, get, *, allow
p, role:org-admin, exec, create, */*, allow
g, your-github-org:your-team, role:org-admin
</code></pre>
<p>In the first column you either have the <code>g</code> or <code>p</code> letters.<br />
Tried searching for the definition on <a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/rbac/#:%7E:text=The%20RBAC%20feature%20enables%20restriction,or%20more%20local%20users%20setup." rel="nofollow noreferrer">ArgoCD Documentation</a>, but either they don't mention it or I just failed to see it.</p>
<p>So what does this <code>p</code> and <code>g</code> stand for?</p>
| <p>I would guess p = permission (policy) and g = group:
a <code>p</code> line defines what permissions a role has, and
a <code>g</code> line defines which role a group (or user) is assigned to.</p>
|
<p>I launched an EKS cluster using terraform <a href="https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest" rel="nofollow noreferrer">module</a></p>
<p>My template looks something like this:</p>
<pre><code>module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "17.20.0"
cluster_name = "${var.cluster_name}"
cluster_version = var.cluster_version
subnets = ["${var.public_subnet_1}",
"${var.public_subnet_2}","${var.public_subnet_3}"]
vpc_id = var.vpc_id
cluster_security_group_id = "${var.master_sg_id}"
worker_security_group_id = "${var.master_sg_id}"
workers_additional_policies =[aws_iam_policy.siera_alb_ingress_controller_policy.arn]
workers_role_name = "${var.cluster_name}-${var.environment}-${var.aws_region}-worker-role"
map_roles = [
{
rolearn = "arn:aws:iam::${var.account_no}:role/${var.cluster_name}-${var.environment}-${var.aws_region}-worker-role"
username = "system:node:{{EC2PrivateDNSName}}"
groups = ["system:bootstrappers","system:nodes"]
},
{
rolearn = "arn:aws:sts::${var.account_no}:assumed-role/${var.assumed_role_1}"
username = "admin"
groups = ["system:masters","system:nodes","system:bootstrappers"]
},
{
rolearn = "arn:aws:sts::${var.account_no}:assumed-role/${var.assumed_role_2}"
username = "admin"
groups = ["system:masters","system:nodes","system:bootstrappers"]
}
]
tags = {
Purpose = "${var.project}"
Environment = "${var.environment}"
}
worker_groups_launch_template = [
{
name = "${var.cluster_name}-lt"
key_name = "${var.node_key}"
additional_userdata = <<EOT
"echo dummy"
EOT
instance_type = "${var.node_size}"
asg_min_size = 3
asg_desired_capacity = 3
asg_max_size = 5
autoscaling_enabled = true
asg_force_delete = true
public_ip = true
enable_monitoring = false
root_volume_size = 80
suspended_processes = ["AZRebalance"]
tags = [
{
"key" = "k8s.io/cluster-autoscaler/enabled"
"propagate_at_launch" = "false"
"value" = "true"
},
{
"key" = "k8s.io/cluster-autoscaler/${var.cluster_name}"
"propagate_at_launch" = "false"
"value" = "true"
}
]
}
]
manage_aws_auth = false
}
</code></pre>
<p>As you can see I'm trying to add aws-auth configmap using map_roles.</p>
<p>After launching the cluster when I run <code>kubectl describe configmap -n kube-system aws-auth</code></p>
<p>It gives this error: <code>Error from server (NotFound): configmaps "aws-auth" not found</code></p>
<p>What am I missing? Please help</p>
| <p>Terraform needs to connect to the created cluster and use the cluster credentials to create the <code>aws-auth</code> configmap. Add this to the file that is creating the EKS cluster.</p>
<pre><code>data "aws_eks_cluster" "default" {
name = module.eks.cluster_id
}
data "aws_eks_cluster_auth" "default" {
name = module.eks.cluster_id
}
provider "kubernetes" {
host = data.aws_eks_cluster.default.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.default.certificate_authority[0].data)
token = data.aws_eks_cluster_auth.default.token
}
</code></pre>
<p>From HashiCorp <a href="https://github.com/hashicorp/terraform-provider-kubernetes/blob/main/_examples/eks/kubernetes-config/main.tf" rel="nofollow noreferrer">github repo</a></p>
|
<p>I need to install this NGINX Ingress Controller Git release <a href="https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.22.0" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.22.0</a> in my Kubernetes cluster. Can anyone share the steps on how to do it?</p>
<p>I did some research, but could not find any useful article.</p>
<p><strong>Additional information</strong></p>
<ul>
<li>I'm managing the cluster using helm. So is there a way to set it up using helm?</li>
<li>Is any other approach recommended?</li>
</ul>
| <p>I understand you want to install (and manage) the nginx-ingress controller using Helm; you can use the link and the command below.
<a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/</a></p>
<pre><code>helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace
</code></pre>
<p>If you want to install a particular version of the ingress controller, you can export the chart's values locally, change the tag, and then run the helm install above with the customised values file, as shown below.
<a href="https://helm.sh/docs/helm/helm_show_values/" rel="nofollow noreferrer">https://helm.sh/docs/helm/helm_show_values/</a></p>
<pre><code>helm show values [CHART] [flags] > values.change.yaml
helm install chart -f values.change.yaml -n <namespace>
</code></pre>
|
<p>I'm looking to use the Kubernetes python client to delete a deployment, but then block and wait until all of the associated pods are deleted as well. A lot of the examples I'm finding recommend using the watch function something like follows.</p>
<pre><code>try:
# try to delete if exists
AppsV1Api(api_client).delete_namespaced_deployment(namespace="default", name="mypod")
except Exception:
# handle exception
# wait for all pods associated with deployment to be deleted.
for e in w.stream(
v1.list_namespaced_pod, namespace="default",
label_selector='mylabel=my-value",
timeout_seconds=300):
pod_name = e['object'].metadata.name
print("pod_name", pod_name)
if e['type'] == 'DELETED':
w.stop()
break
</code></pre>
<p>However, I see two problems with this.</p>
<ol>
<li>If the pod is already gone (or if some other process deletes all pods before execution reaches the watch stream), then the watch will find no events and the for loop will get stuck until the timeout expires. Watch does not seem to generate activity if there are no events.</li>
<li>Upon seeing events in the event stream for the pod activity, how do know all the pods got deleted? Seems fragile to count them.</li>
</ol>
<p>I'm basically looking to replace the <code>kubectl delete --wait</code> functionality with a python script.</p>
<p>Thanks for any insights into this.</p>
| <p>Maybe you can try it this way and handle exceptions based on your requirements. Note that <code>delete_namespaced_deployment</code> lives on <code>AppsV1Api</code>, and the loop keeps retrying the delete until the API returns 404, i.e. the deployment and its dependents are gone:</p>
<pre><code>import time

from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
apps_v1 = client.AppsV1Api()

def delete_deployment():
    """Delete the deployment and wait until it is gone (similar to kubectl delete --wait)."""
    while True:
        try:
            # Foreground propagation deletes dependents (ReplicaSets/Pods) before the Deployment itself
            apps_v1.delete_namespaced_deployment(
                name="deployment_name",
                namespace="deployment_namespace",
                body=client.V1DeleteOptions(propagation_policy="Foreground", grace_period_seconds=5),
            )
        except ApiException as e:
            if e.status == 404:  # the deployment no longer exists
                break
            raise
        time.sleep(2)  # avoid hammering the API server while the deletion completes
    print("Deployment 'deployment_name' has been deleted.")
</code></pre>
|
<p><a href="https://i.stack.imgur.com/OyFnO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OyFnO.png" alt="diagram of connections between components in a k8s cluster" /></a></p>
<p>One Kubernetes cluster contains several components, for example, kubelet, etcd, api-server, etc.</p>
<p>We need to set up many certificates and keys when setting up a cluster, then they can carry these certificates to communicate with each other.</p>
<pre><code>> kubectl describe pod kube-apiserver-controlplane -n=kube-system
Name: kube-apiserver-controlplane
Namespace: kube-system
...
Controlled By: Node/controlplane
Containers:
kube-apiserver:
Container ID: docker://6974d026de0b2fadb3d2628d0df971ddc4c3d772665b2cd960a1d0e385f97a5d
Image: k8s.gcr.io/kube-apiserver:v1.20.0
Image ID: docker-pullable://k8s.gcr.io/kube-apiserver@sha256:8b8125d7a6e4225b08f04f65ca947b27d0cc86380bf09fab890cc80408230114
Command:
kube-apiserver
...
--client-ca-file=/etc/kubernetes/pki/ca.crt
--etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
--etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
--etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
--kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
--tls-cert-file=/etc/kubernetes/pki/apiserver.crt
--tls-private-key-file=/etc/kubernetes/pki/apiserver.key
...
...
</code></pre>
<p>I know the concept of Certificate Authority(CA), Certificate, mTLS, and the mechanism of HTTPs.</p>
<p>I know the zero-trust model.</p>
<p><strong>But I don't understand what security risk these Kubernetes certificates try to solve.</strong></p>
<p>If the hacker can get root access to our node, they can view everything including the certificate, this complexity seems not to give us more security.</p>
<h2>Questions</h2>
<ul>
<li><p>What security risks are the most important (common) for a Kubernetes cluster?</p>
</li>
<li><p>If I don't use certificates to protect internal communication within a cluster, how will the hacker attack me?</p>
<p>As far as I know, the communication between AWS load balancer and EC2 is HTTP instead of HTTPs, it seems fine. Why are certificates not required between the load balancer and EC2, but required within a Kubernetes cluster?</p>
</li>
</ul>
| <p>Trying to answer your multiple questions from my view:</p>
<blockquote>
<p>But I don't understand what problem these Kubernetes certificates try to solve.</p>
</blockquote>
<ol>
<li><p>According to <a href="https://en.wikipedia.org/wiki/Zero_trust_security_model" rel="nofollow noreferrer">Zero trust security model</a>, the main concept behind the zero trust security model is "<strong>never trust, always verify</strong>”.</p>
<p>Kubernetes expects that all API communication in the cluster is encrypted by default with TLS, which always checks the <strong>identity and integrity</strong> of communication between different components.</p>
<p>One common scenario is referred to as a <a href="https://en.wikipedia.org/wiki/Man-in-the-middle_attack" rel="nofollow noreferrer">man-in-the-middle attack</a>.</p>
</li>
</ol>
<blockquote>
<p>What security risks are the most important (common) for a Kubernetes
cluster?</p>
</blockquote>
<ol start="2">
<li><p>I think the most important security concerns for a Kubernetes cluster are <code>TLS for all traffic</code>, <code>Authentication</code> and <code>Authorization</code>.</p>
<p>You could also refer to the official docs: <a href="https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster" rel="nofollow noreferrer">securing your cluster</a></p>
</li>
</ol>
<blockquote>
<p>If I don't use certificates to protect internal communication within a cluster, how will the hacker attack me?</p>
</blockquote>
<ol start="3">
<li>Better not to do that. Without TLS and certificate-based authentication, anyone who can reach the in-cluster endpoints can read or tamper with the traffic between components (e.g. via a man-in-the-middle attack).</li>
</ol>
<blockquote>
<p>Why are certificates not required between the load balancer and EC2, but required within a Kubernetes cluster?</p>
</blockquote>
<ol start="4">
<li><p>That's two different concepts.</p>
<p>AWS load balancer and EC2 are infrastructure-layer(Iaas) components, which are only responsible for transmitting data stream. You can enforce security options on your own.</p>
<p>On the other hand, components within a Kubernetes cluster are application-layer ones, which should include secure in-cluster communication by default.</p>
</li>
</ol>
|
<p>I need to install this NGINX Ingress Controller Git release <a href="https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.22.0" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.22.0</a> in my Kubernetes cluster. Can anyone share the steps on how to do it?</p>
<p>I did some research, but could not find any useful article.</p>
<p><strong>Additional information</strong></p>
<ul>
<li>I'm managing the cluster using helm. So is there a way to set it up using helm?</li>
<li>Is any other approach recommended?</li>
</ul>
| <p>You can display all available helm chart versions using:</p>
<pre class="lang-bash prettyprint-override"><code>helm search repo ingress-nginx --versions
</code></pre>
<pre class="lang-bash prettyprint-override"><code>NAME CHART VERSION APP VERSION DESCRIPTION
ingress-nginx/ingress-nginx 4.2.1 1.3.0 Ingress controller for Kubernetes using NGINX a...
ingress-nginx/ingress-nginx 4.2.0 1.3.0 Ingress controller for Kubernetes using NGINX a...
ingress-nginx/ingress-nginx 4.1.4 1.2.1 Ingress controller for Kubernetes using NGINX a...
ingress-nginx/ingress-nginx 4.1.3 1.2.1 Ingress controller for Kubernetes using NGINX a...
....
</code></pre>
<p>Then, choose the version you want <code>CHART VERSION</code>, here 4.2.0</p>
<pre class="lang-bash prettyprint-override"><code>helm install ingress-nginx ingress-nginx/ingress-nginx \
--namespace $NAMESPACE \
--version 4.2.0
</code></pre>
|
<p>Is there any standard way to deploy a Flask app with gunicorn and nginx on a Kubernetes cluster?
I am trying to do it with a Dockerfile like the one below:</p>
<pre><code>FROM nginx:latest as base-image
RUN apt update
RUN apt -y install python3 python3-pip
RUN apt -y install build-essential
RUN mkdir /app
WORKDIR /app
COPY src src/
COPY src/.nginx/config /etc/nginx/conf.d/default.conf
COPY requirements.txt ./
RUN pip3 install -r requirements.txt
RUN pip3 install -e ./src
RUN pwd
RUN ls -l /app/src
# RUN pytest
EXPOSE 80
WORKDIR /app/src
CMD ["nginx", "-g", "daemon off;"]
ENTRYPOINT ["gunicorn" , "--bind","0.0.0.0:8000", "wsgi:app"]
</code></pre>
<p>Is the solution is to run two containers for each of gunicorn and nginx inside one pod of kubernetes?</p>
| <blockquote>
<p>Is the solution is to run two containers for each of gunicorn and nginx inside one pod of kubernetes?</p>
</blockquote>
<p>Yes. In Kubernetes or when simply running Docker on your local machine, it is always better to compose multiple containers rather than trying to stuff everything into a single container.</p>
<p>This part of your your Dockerfile:</p>
<pre><code>CMD ["nginx", "-g", "daemon off;"]
ENTRYPOINT ["gunicorn" , "--bind","0.0.0.0:8000", "wsgi:app"]
</code></pre>
<p>Isn't doing what you seem to expect. These two directives operate in concert:</p>
<ul>
<li>If defined, <code>ENTRYPOINT</code> is the command run by the container when it starts up.</li>
<li>The value of <code>CMD</code> is provided as an argument to the <code>ENTRYPOINT</code> script.</li>
</ul>
<p>You can read more about <code>ENTRYPOINT</code> in the <a href="https://docs.docker.com/engine/reference/builder/#entrypoint" rel="nofollow noreferrer">official documentation</a>.</p>
<p>A better design would be to use the official Python image for your Python app:</p>
<pre><code>FROM python:3.10
WORKDIR /app
COPY src src/
COPY requirements.txt ./
RUN pip3 install -r requirements.txt
RUN pip3 install -e ./src
WORKDIR /app/src
CMD ["gunicorn" , "--bind","0.0.0.0:8000", "wsgi:app"]
</code></pre>
<p>And then use the official <a href="https://hub.docker.com/_/nginx" rel="nofollow noreferrer">nginx image</a> to run the nginx service.</p>
<hr />
<p>An example deployment that uses two containers, one for your Python app and one for Nginx, might look like:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: flask-example
name: flask-example
spec:
selector:
matchLabels:
app: flask-example
template:
metadata:
labels:
app: flask-example
spec:
containers:
- image: quay.io/larsks/example:flaskapp
name: app
- image: docker.io/nginx:mainline
name: nginx
ports:
- containerPort: 80
name: http
volumeMounts:
- mountPath: /etc/nginx/conf.d
name: nginx-config
volumes:
- configMap:
name: nginx-config-c65tttk45k
name: nginx-config
</code></pre>
<p>In the above deployment, we're mounting the configuration for nginx from a ConfigMap.</p>
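<p>That ConfigMap is essentially a <code>default.conf</code> that proxies to the app container over localhost (containers in the same Pod share a network namespace). A minimal sketch, assuming gunicorn listens on port 8000 and not necessarily identical to the linked example:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  default.conf: |
    server {
      listen 80;
      location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      }
    }
</code></pre>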
<p>You can find a complete deployable example of the above <a href="https://github.com/larsks/so-example-73785638/" rel="nofollow noreferrer">here</a>.</p>
|
<p>I have already tried finding resources and articles online for how to create alerts using Grafana 8 UI about the CPU and/or memory usage of my kubernetes cluster pods, but I couldn't find anything, neither on youtube, google, discord, stackoverflow nor reddit.</p>
<p>Does anyone know any guide on how to do that?</p>
<p>The goal is to literally create an alert rule that will send a slack message when the CPU or Memory usage of my kubernetes cluster pods pass over X%. The slack app to receive the grafana message is working, but I have no idea how would be the grafana query.</p>
<p>PS.: I am using Prometheus and node-exporter.</p>
<p>You can try this query for creating an alert when CPU usage is above a threshold (let's say 85%); the threshold itself is set in the Grafana alert condition.</p>
<p><code>sum(rate(container_cpu_usage_seconds_total{namespace="$namespace", pod="$pod", container!="POD", container!="", pod!=""}[1m])) by (pod) / sum(kube_pod_container_resource_limits{namespace="$namespace", pod="$pod", resource="cpu"}) by (pod) * 100</code></p>
<p>You can check CPU utilization of all pods in the cluster by running:</p>
<p><code>sum(rate(container_cpu_usage_seconds_total{container_name!="POD",pod_name!=""}[5m]))</code></p>
<p>If you want to check CPU usage of each running pod you can use using:</p>
<p><code>sum(rate(container_cpu_usage_seconds_total{container_name!="POD",pod_name!=""}[5m])) by (pod_name).</code></p>
<p>To see actual CPU usage, look at metrics like <code>container_cpu_usage_seconds_total (per container CPU usage)</code> or maybe even <code>process_cpu_seconds_total (per process CPU usage).</code></p>
<p>You can create an alert rule in Grafana by following the steps provided in the <a href="https://grafana.com/docs/grafana/latest/alerting/alerting-rules/create-grafana-managed-rule/#add-grafana-managed-rule" rel="nofollow noreferrer">documentation</a>, and refer to this <a href="https://stackoverflow.com/questions/61361263/grafana-for-kubernettes-shows-cpu-usage-higher-than-100">link</a> for more information.</p>
|
<p>I'm using the <code>kubectl rollout</code> command to update my deployment. But since my project is a NodeJS project, <code>npm run start</code> will take some time (a few seconds) before the application is actually running. However, Kubernetes will drop the old pods immediately after <code>npm run start</code> is executed.</p>
<p>For example,</p>
<pre><code>kubectl logs -f my-app
> my app start
> nest start
</code></pre>
<p>The Kubernetes will drop the old pods now. However, it will take another 10 seconds until</p>
<pre><code>Application is running on: http://[::1]:5274
</code></pre>
<p>which means my service is actually up.</p>
<p>I'd like to know whether there is a way to modify this, e.g. waiting some more time before Kubernetes drops the old pods.</p>
<p>My docker file:</p>
<pre><code>FROM node:14 AS builder
# Create app directory
WORKDIR /app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
COPY prisma ./prisma/
COPY protos ./protos/
COPY tsconfig.build.json ./
COPY tsconfig.json ./
# Install app dependencies
RUN npm install
RUN export NODE_OPTIONS=--max_old_space_size=16384
RUN npm run build
COPY . .
# FROM node:14
# COPY --from=builder /app/node_modules ./node_modules
# COPY --from=builder /app/package*.json ./
# COPY --from=builder /app/dist ./dist
# COPY --from=builder /app/protos ./protos
EXPOSE 5273
CMD ["npm", "run", "start"]
</code></pre>
<p>Spec for my kubernetes yaml file:</p>
<pre><code>spec:
replicas: 4
selector:
matchLabels:
app: app
template:
metadata:
labels:
app: app
spec:
containers:
- name: app
image: image
imagePullPolicy: Always
resources:
limits:
memory: "8Gi"
cpu: "10"
requests:
memory: "8Gi"
cpu: "10"
livenessProbe:
httpGet:
path: /api/Health
port: 5274
initialDelaySeconds: 180
periodSeconds: 80
timeoutSeconds: 20
failureThreshold: 2
ports:
- containerPort: 5274
- containerPort: 5900
</code></pre>
| <p>Use a startup probe on your container. <a href="https://docs.openshift.com/container-platform/4.11/applications/application-health.html" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/4.11/applications/application-health.html</a> . Pods don't count as "ready" until all of their containers have passed their startup (and readiness) checks.</p>
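<p>For the deployment in the question, a startup probe sketch that reuses the existing health endpoint (added alongside the <code>livenessProbe</code> in the container spec) might look like this; the timings are only a starting point:</p>
<pre class="lang-yaml prettyprint-override"><code>        startupProbe:
          httpGet:
            path: /api/Health
            port: 5274
          periodSeconds: 5
          failureThreshold: 24   # allows up to ~2 minutes for npm/nest to start
        readinessProbe:
          httpGet:
            path: /api/Health
            port: 5274
          periodSeconds: 10
</code></pre>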
<p>And during a rollout the deployment controller counts non-ready pods as "unavailable" for things like the "maxUnavailable" setting of the deployment. Thus it won't keep shutting down working pods until new pods are ready for traffic. (<a href="https://docs.openshift.com/container-platform/4.11/applications/deployments/deployment-strategies.html" rel="nofollow noreferrer">https://docs.openshift.com/container-platform/4.11/applications/deployments/deployment-strategies.html</a>)</p>
<p>As an additional benefit, services won't route traffic to non-ready pods, so they won't receive any traffic until the containers have passed their startup probes.</p>
|
<p>I am using k8 to host my grpc service.</p>
<p>Sometimes, I am getting the following error (few milliseconds into my request):</p>
<pre><code>rpc error: code = Unavailable desc = closing transport due to: connection error: desc = "error reading from server: read tcp <ipaddr>:52220-><internal ip addr>:8070: read: connection reset by peer", received prior goaway: code: NO_ERROR
</code></pre>
<p>May I ask how will this occur? Could it be that the k8's network is down?</p>
| <p>This happened because, on shutdown, a gRPC server sends an initial GOAWAY frame to support graceful shutdown. In other words, the server you were talking to shut down (or was restarted) while your request was in flight.</p>
|
<p>I read through the karpenter document at <a href="https://karpenter.sh/v0.16.1/getting-started/getting-started-with-terraform/#install-karpenter-helm-chart" rel="nofollow noreferrer">https://karpenter.sh/v0.16.1/getting-started/getting-started-with-terraform/#install-karpenter-helm-chart</a>. I followed instructions step by step. I got errors at the end.</p>
<p>kubectl logs -f -n karpenter -l app.kubernetes.io/name=karpenter -c controller</p>
<p>DEBUG controller.provisioning Relaxing soft constraints for pod since it previously failed to schedule, removing: spec.topologySpreadConstraints = {"maxSkew":1,"topologyKey":"topology.kubernetes.io/zone","whenUnsatisfiable":"ScheduleAnyway","labelSelector":{"matchLabels":{"app.kubernetes.io/instance":"karpenter","app.kubernetes.io/name":"karpenter"}}} {"commit": "b157d45", "pod": "karpenter/karpenter-5755bb5b54-rh65t"}
2022-09-10T00:13:13.122Z</p>
<p>ERROR controller.provisioning Could not schedule pod, incompatible with provisioner "default", incompatible requirements, key karpenter.sh/provisioner-name, karpenter.sh/provisioner-name DoesNotExist not in karpenter.sh/provisioner-name In [default] {"commit": "b157d45", "pod": "karpenter/karpenter-5755bb5b54-rh65t"}</p>
<p>Below is the source code:</p>
<pre><code>cat main.tf
terraform {
required_version = "~> 1.0"
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 4.0"
}
helm = {
source = "hashicorp/helm"
version = "~> 2.5"
}
kubectl = {
source = "gavinbunney/kubectl"
version = "~> 1.14"
}
}
}
provider "aws" {
region = "us-east-1"
}
locals {
cluster_name = "karpenter-demo"
# Used to determine correct partition (i.e. - `aws`, `aws-gov`, `aws-cn`, etc.)
partition = data.aws_partition.current.partition
}
data "aws_partition" "current" {}
module "vpc" {
# https://registry.terraform.io/modules/terraform-aws-modules/vpc/aws/latest
source = "terraform-aws-modules/vpc/aws"
version = "3.14.4"
name = local.cluster_name
cidr = "10.0.0.0/16"
azs = ["us-east-1a", "us-east-1b", "us-east-1c"]
private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
public_subnets = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
enable_nat_gateway = true
single_nat_gateway = true
one_nat_gateway_per_az = false
public_subnet_tags = {
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
"kubernetes.io/role/elb" = 1
}
private_subnet_tags = {
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = 1
}
}
module "eks" {
# https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest
source = "terraform-aws-modules/eks/aws"
version = "18.29.0"
cluster_name = local.cluster_name
cluster_version = "1.22"
vpc_id = module.vpc.vpc_id
subnet_ids = module.vpc.private_subnets
# Required for Karpenter role below
enable_irsa = true
node_security_group_additional_rules = {
ingress_nodes_karpenter_port = {
description = "Cluster API to Node group for Karpenter webhook"
protocol = "tcp"
from_port = 8443
to_port = 8443
type = "ingress"
source_cluster_security_group = true
}
}
node_security_group_tags = {
# NOTE - if creating multiple security groups with this module, only tag the
# security group that Karpenter should utilize with the following tag
# (i.e. - at most, only one security group should have this tag in your account)
"karpenter.sh/discovery/${local.cluster_name}" = local.cluster_name
}
# Only need one node to get Karpenter up and running.
# This ensures core services such as VPC CNI, CoreDNS, etc. are up and running
# so that Karpenter can be deployed and start managing compute capacity as required
eks_managed_node_groups = {
initial = {
instance_types = ["m5.large"]
# Not required nor used - avoid tagging two security groups with same tag as well
create_security_group = false
min_size = 1
max_size = 1
desired_size = 1
iam_role_additional_policies = [
"arn:${local.partition}:iam::aws:policy/AmazonSSMManagedInstanceCore", # Required by Karpenter
"arn:${local.partition}:iam::aws:policy/AmazonEKSWorkerNodePolicy",
"arn:${local.partition}:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly", #for access to ECR images
"arn:${local.partition}:iam::aws:policy/CloudWatchAgentServerPolicy"
]
tags = {
# This will tag the launch template created for use by Karpenter
"karpenter.sh/discovery/${local.cluster_name}" = local.cluster_name
}
}
}
}
#The EKS module creates an IAM role for the EKS managed node group nodes. We’ll use that for Karpenter.
#We need to create an instance profile we can reference.
#Karpenter can use this instance profile to launch new EC2 instances and those instances will be able to connect to your cluster.
resource "aws_iam_instance_profile" "karpenter" {
name = "KarpenterNodeInstanceProfile-${local.cluster_name}"
role = module.eks.eks_managed_node_groups["initial"].iam_role_name
}
#Create the KarpenterController IAM Role
#Karpenter requires permissions like launching instances, which means it needs an IAM role that grants it access. The config
#below will create an AWS IAM Role, attach a policy, and authorize the Service Account to assume the role using IRSA. We will
#create the ServiceAccount and connect it to this role during the Helm chart install.
module "karpenter_irsa" {
source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
version = "5.3.3"
role_name = "karpenter-controller-${local.cluster_name}"
attach_karpenter_controller_policy = true
karpenter_tag_key = "karpenter.sh/discovery/${local.cluster_name}"
karpenter_controller_cluster_id = module.eks.cluster_id
karpenter_controller_node_iam_role_arns = [
module.eks.eks_managed_node_groups["initial"].iam_role_arn
]
oidc_providers = {
ex = {
provider_arn = module.eks.oidc_provider_arn
namespace_service_accounts = ["karpenter:karpenter"]
}
}
}
#Install Karpenter Helm Chart
#Use helm to deploy Karpenter to the cluster. We are going to use the helm_release Terraform resource to do the deploy and pass in the
#cluster details and IAM role Karpenter needs to assume.
provider "helm" {
kubernetes {
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
args = ["eks", "get-token", "--cluster-name", local.cluster_name]
}
}
}
resource "helm_release" "karpenter" {
namespace = "karpenter"
create_namespace = true
name = "karpenter"
repository = "https://charts.karpenter.sh"
chart = "karpenter"
version = "v0.16.1"
set {
name = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
value = module.karpenter_irsa.iam_role_arn
}
set {
name = "clusterName"
value = module.eks.cluster_id
}
set {
name = "clusterEndpoint"
value = module.eks.cluster_endpoint
}
set {
name = "aws.defaultInstanceProfile"
value = aws_iam_instance_profile.karpenter.name
}
}
#Provisioner
#Create a default provisioner using the command below. This provisioner configures instances to connect to your cluster’s endpoint and
#discovers resources like subnets and security groups using the cluster’s name.
#This provisioner will create capacity as long as the sum of all created capacity is less than the specified limit.
provider "kubectl" {
apply_retry_count = 5
host = module.eks.cluster_endpoint
cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
load_config_file = false
exec {
api_version = "client.authentication.k8s.io/v1beta1"
command = "aws"
args = ["eks", "get-token", "--cluster-name", module.eks.cluster_id]
}
}
resource "kubectl_manifest" "karpenter_provisioner" {
yaml_body = <<-YAML
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
name: default
spec:
requirements:
- key: karpenter.sh/capacity-type
operator: In
values: ["spot"]
limits:
resources:
cpu: 1000
provider:
subnetSelector:
Name: "*private*"
securityGroupSelector:
karpenter.sh/discovery/${module.eks.cluster_id}: ${module.eks.cluster_id}
tags:
karpenter.sh/discovery/${module.eks.cluster_id}: ${module.eks.cluster_id}
ttlSecondsAfterEmpty: 30
YAML
depends_on = [
helm_release.karpenter
]
}
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
name: inflate
spec:
replicas: 0
selector:
matchLabels:
app: inflate
template:
metadata:
labels:
app: inflate
spec:
terminationGracePeriodSeconds: 0
containers:
- name: inflate
image: public.ecr.aws/eks-distro/kubernetes/pause:3.2
resources:
requests:
cpu: 1
EOF
</code></pre>
<p>kubectl scale deployment inflate --replicas 5</p>
<p>kubectl logs -f -n karpenter -l app.kubernetes.io/name=karpenter -c controller</p>
<p>DEBUG controller.provisioning Relaxing soft constraints for pod since it previously failed to schedule, removing: spec.topologySpreadConstraints = {"maxSkew":1,"topologyKey":"topology.kubernetes.io/zone","whenUnsatisfiable":"ScheduleAnyway","labelSelector":{"matchLabels":{"app.kubernetes.io/instance":"karpenter","app.kubernetes.io/name":"karpenter"}}} {"commit": "b157d45", "pod": "karpenter/karpenter-5755bb5b54-rh65t"}
2022-09-10T00:13:13.122Z</p>
<p>ERROR controller.provisioning Could not schedule pod, incompatible with provisioner "default", incompatible requirements, key karpenter.sh/provisioner-name, karpenter.sh/provisioner-name DoesNotExist not in karpenter.sh/provisioner-name In [default] {"commit": "b157d45", "pod": "karpenter/karpenter-5755bb5b54-rh65t"}</p>
| <p>I believe this is due to the pod topology spread constraints defined in the Karpenter deployment here:</p>
<p><a href="https://github.com/aws/karpenter/blob/main/charts/karpenter/values.yaml#L73-L77" rel="nofollow noreferrer">https://github.com/aws/karpenter/blob/main/charts/karpenter/values.yaml#L73-L77</a></p>
<p>, you can read further on what pod topologySpreadConstraints does here:</p>
<p><a href="https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/scheduling-eviction/topology-spread-constraints/</a></p>
<p>If you increase the <code>desired_size</code> to 2, which matches the default deployment replica count above, that should resolve the error.</p>
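<p>For the configuration in the question, that would mean bumping the managed node group sizes, for example:</p>
<pre><code>  eks_managed_node_groups = {
    initial = {
      instance_types        = ["m5.large"]
      create_security_group = false

      min_size     = 1
      max_size     = 2
      desired_size = 2
      # ... rest unchanged
    }
  }
</code></pre>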
|
<p>Occasionally, Pgadmin gives me the 500 error in a browser. After reloading the page, the issue disappears for a while and then comes back again. Here's the log I see while getting the error:</p>
<pre><code> [2022-01-21 14:35:21 +0000] [93] [ERROR] Error handling request /authenticate/login
Traceback (most recent call last):
File "/venv/lib/python3.8/site-packages/gunicorn/workers/gthread.py", line 271, in handle
keepalive = self.handle_request(req, conn)
File "/venv/lib/python3.8/site-packages/gunicorn/workers/gthread.py", line 323, in handle_request
respiter = self.wsgi(environ, resp.start_response)
File "/venv/lib/python3.8/site-packages/flask/app.py", line 2464, in __call__
return self.wsgi_app(environ, start_response)
File "/pgadmin4/pgAdmin4.py", line 77, in __call__
return self.app(environ, start_response)
File "/venv/lib/python3.8/site-packages/werkzeug/middleware/proxy_fix.py", line 169, in __call__
return self.app(environ, start_response)
File "/venv/lib/python3.8/site-packages/flask_socketio/__init__.py", line 43, in __call__
return super(_SocketIOMiddleware, self).__call__(environ,
File "/venv/lib/python3.8/site-packages/engineio/middleware.py", line 74, in __call__
return self.wsgi_app(environ, start_response)
File "/venv/lib/python3.8/site-packages/flask/app.py", line 2450, in wsgi_app
response = self.handle_exception(e)
File "/venv/lib/python3.8/site-packages/flask/app.py", line 1867, in handle_exception
reraise(exc_type, exc_value, tb)
File "/venv/lib/python3.8/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/venv/lib/python3.8/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/venv/lib/python3.8/site-packages/flask/app.py", line 1953, in full_dispatch_request
return self.finalize_request(rv)
File "/venv/lib/python3.8/site-packages/flask/app.py", line 1970, in finalize_request
response = self.process_response(response)
File "/venv/lib/python3.8/site-packages/flask/app.py", line 2269, in process_response
self.session_interface.save_session(self, ctx.session, response)
File "/pgadmin4/pgadmin/utils/session.py", line 307, in save_session
self.manager.put(session)
File "/pgadmin4/pgadmin/utils/session.py", line 166, in put
self.parent.put(session)
File "/pgadmin4/pgadmin/utils/session.py", line 270, in put
dump(
_pickle.PicklingError: Can't pickle <class 'wtforms.form.Meta'>: attribute lookup Meta on wtforms.form failed
</code></pre>
<p>The issue appeared after enabling Oauth2 authentication. I've tried using different version but no luck.</p>
<p>Pgadmin is running in Kubernetes.</p>
| <p>Please try setting
<code>PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION = False</code> in the configuration file appropriate to your operating system, as mentioned <a href="https://www.pgadmin.org/docs/pgadmin4/6.13/config_py.html" rel="nofollow noreferrer">here</a>.</p>
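<p>Since pgAdmin is running in Kubernetes, and the <code>dpage/pgadmin4</code> image maps <code>PGADMIN_CONFIG_*</code> environment variables to config settings, you could set it on the container instead, e.g.:</p>
<pre class="lang-yaml prettyprint-override"><code>env:
- name: PGADMIN_CONFIG_ENHANCED_COOKIE_PROTECTION
  value: "False"   # value is inserted as a Python literal into the config
</code></pre>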
|
<p>I am trying to enable <code>https</code> using the following documentation <strong>[Emissary ingress 2.2.2]</strong>
<a href="https://www.getambassador.io/docs/emissary/latest/howtos/tls-termination/" rel="nofollow noreferrer">https://www.getambassador.io/docs/emissary/latest/howtos/tls-termination/</a></p>
<p>I followed these steps to enable https:</p>
<p>i) Create a self-signed certificate</p>
<pre><code>openssl req -x509 -newkey rsa:4096 -keyout key.pem -out cert.pem -subj '/CN=ambassador-cert' -nodes
</code></pre>
<p>ii) Store the certificate and key in a Kubernetes Secret</p>
<pre><code>kubectl create secret tls tls-cert --cert=cert.pem --key=key.pem -n test-namespace
</code></pre>
<p>iii) Tell Emissary-ingress to use this secret for TLS termination</p>
<pre><code>apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
name: wildcard-host
spec:
hostname: "*"
acmeProvider:
authority: none
tlsSecret:
name: tls-cert
selector:
matchLabels:
hostname: wildcard-host
</code></pre>
<p>iv) Applied this manifest</p>
<pre><code>kubectl apply -f wildcard-host.yaml -n test-namespace
</code></pre>
<p>I verified Emissary-ingress service is listening on 443 and forwarding to port 8443.</p>
<p>Also I have mapped node service with it.</p>
<pre><code>apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
name: node-service-mapping
namespace: test-namespace
spec:
hostname: "*"
prefix: /node-service
service: node-service
</code></pre>
<p>But when I send a request to the backend service with curl:</p>
<pre><code>curl -Lk https://{{AMBASSADOR_IP}}/node-service
</code></pre>
<p>I am getting the following error (screenshot attached):</p>
<pre><code> % Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number
</code></pre>
<p><a href="https://i.stack.imgur.com/0Luus.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0Luus.png" alt="enter image description here" /></a></p>
<p>Just for information, I am using the following Kubernetes cluster version:</p>
<pre><code>kubectl version --short
</code></pre>
<p><strong>Client Version: v1.19.0</strong></p>
<p><strong>Server Version: v1.21.7</strong></p>
<p>Appropriate Listener definitions for both http and https:</p>
<pre><code>apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
name: http-listener
spec:
port: 8080
protocol: HTTPS # NOT A TYPO
securityModel: XFP
hostBinding:
namespace:
from: SELF
---
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
name: https-listener
spec:
port: 8443
protocol: HTTPS
securityModel: XFP
hostBinding:
namespace:
from: SELF
</code></pre>
<p>I followed this document for adding listeners: <a href="https://www.getambassador.io/docs/emissary/latest/howtos/configure-communications/#listeners" rel="nofollow noreferrer">https://www.getambassador.io/docs/emissary/latest/howtos/configure-communications/#listeners</a></p>
<p><strong>I also tried using the original certificate, but that didn't work either.</strong></p>
<p>What am I doing wrong here? Why am I getting this error and unable to reach the service over https? How can I resolve it, and how should I debug the issue? I have looked for solutions to this error on Stack Overflow and other sites, but haven't found a proper one yet. This question is related to mine: <a href="https://stackoverflow.com/questions/68410482/how-can-i-use-ambassador-emissary-ingress-for-tls">How can I use Ambassador Emissary -ingress for TLS?</a>, but I didn't find an answer there.</p>
<p>I would appreciate it if anyone could provide a solution. Thanks in advance!</p>
| <p>I've been struggling with exactly the same problem for a few evenings: I followed the official docs, but got stuck on the "wrong version number" error when trying to access the Kubernetes dashboard over HTTPS with curl.</p>
<p>What solved the problem for me was explicitly setting the TLS certificate for a host. The "wildcard" Host block described in the docs doesn't seem to work on its own, so, apart from the wildcard Host, I also set another one specific to the Kubernetes dashboard endpoint:</p>
<pre><code>---
apiVersion: getambassador.io/v3alpha1
kind: Host
metadata:
name: "kubernetes-dashboard-dns"
# namespace: "kubernetes-dashboard"
annotations:
external-dns.ambassador-service: emissary-ingress.emissary
spec:
hostname: "dashboard.mydomain.tld"
tlsSecret:
name: tls-cert
</code></pre>
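<p>With a hostname-specific Host in place, you can check TLS termination from outside the cluster. This is only a sketch: the IP below is a placeholder for your load-balancer address, and <code>--resolve</code> makes curl send the right SNI/Host header while connecting to that IP directly:</p>
<pre><code># Replace the hostname and IP with your own values
curl -vk --resolve dashboard.mydomain.tld:443:203.0.113.10 \
  https://dashboard.mydomain.tld/
</code></pre>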
|
<p>I just started to use Rancher, so please correct me if I use any wrong terminology.</p>
<p>Earlier I was using minikube on a MacBook, which provides easy SSH access via <code>minikube ssh</code> for troubleshooting. As a newbie to Rancher Desktop, I want to SSH into the Rancher Desktop node in a similar way.</p>
<p>I googled for this, but unfortunately I didn't get any fruitful answer. Thanks in advance.</p>
| <p>On recent versions (1.3 on) you can use the <code>rdctl</code> utility, which ships with Rancher Desktop, and run <code>rdctl shell COMMAND</code> or <code>rdctl shell</code> to ssh into the VM.</p>
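<p>For example (assuming Rancher Desktop 1.3+ and that <code>rdctl</code> is on your PATH):</p>
<pre><code># Open an interactive shell inside the Rancher Desktop VM
rdctl shell

# Or run a single command in the VM without an interactive session
rdctl shell cat /etc/os-release
</code></pre>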
|
<p>I have an application with a React front end and a Node back end. In the React public folder we have a meta.json that holds the version number; every time we run npm run build, the version number in that file is updated. We also store the version number in the database, and if the two don't match, the website automatically loads the new version. We use this method to make sure the website always displays the latest release.</p>
<p>We are in the process of shifting to Kubernetes, and the problem I now have is with our Dockerfile for React, which contains the following steps:</p>
<pre><code>FROM node:12.18.3 AS build
ENV CI=false
ENV WDS_SOCKET_PORT=0
WORKDIR /app
COPY ["package.json", "package-lock.json", "./"]
RUN npm install --production
COPY . .
RUN npm run build:development
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
COPY --from=build /app/nginx-custom.conf /etc/nginx/conf.d/default.conf
</code></pre>
<p>We use this Dockerfile in Azure Pipelines to build an image, push it to Azure Container Registry, and then run kubectl rollout restart so the deployment in AKS pulls the new image. After npm run build inside the Dockerfile, my meta.json file has the updated version. I want to commit and push that changed file back to the Azure repo, so that the next pipeline run starts from the updated version number.</p>
<p>I have done a POC on this, but I was not able to find any easy-to-follow steps.
I came across this repo <a href="https://github.com/ShadowApex/docker-git-push" rel="nofollow noreferrer">https://github.com/ShadowApex/docker-git-push</a>, but it is not clear how to use it properly. Any help would be greatly appreciated.</p>
| <p>Don't add <strong>Git</strong> into the <strong>Docker</strong> image; that only adds extra layers to the docker image.</p>
<p>Once your image build is <strong>completed</strong>, copy the <strong>JSON</strong> file out of the docker image and push it from the <strong>CI</strong> machine to Git (or to a bucket, wherever you want to manage it).</p>
<p>The command you can use (where <code>image_name</code> is the image you just built) is:</p>
<pre><code>docker create --name container_name image_name
</code></pre>
<p><code>docker create</code> will <strong>create</strong> a <strong>new</strong> container without running it.</p>
<p>The docker container create (or shorthand: docker create) command creates a new container from the specified image, without starting it.</p>
<blockquote>
<p>When creating a container, the docker daemon creates a writeable
container layer over the specified image and prepares it for running
the specified command. The container ID is then printed to STDOUT.
This is similar to docker run -d except the container is never started.</p>
</blockquote>
<p>Once the container filesystem exists, run a command to copy the file from the container to the <strong>CI</strong> <strong>machine</strong>, simple as that.</p>
<p><strong>Docker copy command</strong></p>
<pre><code>docker cp container_name:/app/build/meta.json .
</code></pre>
<p>Now that you have the file on the CI machine, you can push it to Git or upload it to a bucket, wherever you need it.</p>
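<p>As a rough sketch for Azure Pipelines (not a drop-in solution): the step below assumes your checkout step has <code>persistCredentials: true</code> and that the build service account has Contribute rights on the repo. Note that with the multi-stage Dockerfile in the question, the final image only contains the build output under <code>/usr/share/nginx/html</code>, so that is where meta.json is copied from; registry name, paths and branch are placeholders.</p>
<pre><code># Hypothetical Azure Pipelines step
- script: |
    # Create (but do not start) a container from the image that was just built
    docker create --name meta-extract myregistry.azurecr.io/frontend:$(Build.BuildId)
    # Copy the freshly generated meta.json out of the image filesystem
    docker cp meta-extract:/usr/share/nginx/html/meta.json ./public/meta.json
    docker rm meta-extract
    # Commit the updated file back to the Azure repo; [skip ci] avoids a build loop
    git config user.email "build@example.com"
    git config user.name "Azure Pipeline"
    git add public/meta.json
    git commit -m "Update meta.json version [skip ci]" || echo "No changes to commit"
    git push origin HEAD:$(Build.SourceBranchName)
  displayName: Push updated meta.json back to the repo
</code></pre>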
|
<p>I need to build a Go project with Google's ko after checking it out from Git, and then push the image to a private Artifactory registry that requires credentials. How do I define these steps in cloudbuild.yaml, i.e. the steps that take the source path and the private repository path? And how do I supply the Artifactory credentials through the script?</p>
| <p>The error you are getting, <code>failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "bash"</code>, is due to the base image:</p>
<blockquote>
<p>By default, the ko command uses a secure and lean base image from the
Distroless collection of images (the gcr.io/distroless/static:nonroot
image), which doesn’t contain a shell or other executables in order to
reduce the attack surface of the container.</p>
</blockquote>
<p>You can first create the <strong>ko</strong> builder docker image and use it as the builder image in Cloud Build. We will push and save this <strong>ko</strong> docker image to <strong>GCR</strong>.</p>
<p>Github : <a href="https://github.com/GoogleCloudPlatform/cloud-builders-community/tree/master/ko" rel="nofollow noreferrer">https://github.com/GoogleCloudPlatform/cloud-builders-community/tree/master/ko</a></p>
<p>That repository contains a <code>cloudbuild.yaml</code> to build the image on <strong>Cloud Build</strong>, or you can run <code>docker build -t</code> locally to build the docker image.</p>
<p>Once the ko builder image is built and pushed to <strong>GCR</strong>, we can write the <code>cloudbuild.yaml</code> that builds the application.</p>
<p>Try this <code>cloudbuild.yaml</code> for example:</p>
<pre><code>steps:
- name: gcr.io/$PROJECT_ID/ko
entrypoint: /bin/sh
env:
- 'KO_DOCKER_REPO=gcr.io/$PROJECT_ID'
args:
- -c
- |
echo $(/ko publish --preserve-import-paths ./cmd/ko) > ./ko_container.txt || exit 1
</code></pre>
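<p>The question also asks how to push to a private Artifactory registry with credentials. <code>ko</code> reads registry credentials from a Docker config file, so one hedged approach (registry host, repo path, binary path and secret names below are placeholders) is to keep a ready-made <code>config.json</code> in Secret Manager and write it out before publishing:</p>
<pre><code>steps:
  - name: gcr.io/$PROJECT_ID/ko
    entrypoint: /bin/sh
    secretEnv: ['DOCKER_CONFIG_JSON']
    env:
      - 'KO_DOCKER_REPO=artifactory.example.com/docker-local/my-app'
    args:
      - -c
      - |
        # Write the Docker config so ko can authenticate against Artifactory
        mkdir -p $$HOME/.docker
        echo "$$DOCKER_CONFIG_JSON" > $$HOME/.docker/config.json
        /ko publish --preserve-import-paths ./cmd/app
availableSecrets:
  secretManager:
    - versionName: projects/$PROJECT_ID/secrets/artifactory-docker-config/versions/latest
      env: 'DOCKER_CONFIG_JSON'
</code></pre>
<p>The secret itself is just a normal <code>~/.docker/config.json</code> with an <code>auths</code> entry for your Artifactory host.</p>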
|
<p>I use ingress-nginx with its Helm chart. I used to have the problem that when I uploaded a file (50MB) I would get the error "413 Request Entity Too Large" from nginx.</p>
<p>So I changed the proxy-body-size value in my values.yaml file to 150m, so I should now be able to upload my file.
But now I get the error "413 Request Entity Too Large openresty/1.13.6.2".
I checked the nginx.conf file on the ingress controller and the value for client_max_body_size is correctly set to 150m.</p>
<p>After some research I found out that OpenResty is essentially nginx bundled with the Lua module.
Does anybody know how I can apply this setting to OpenResty as well, or what parameter I am missing?</p>
<p>My current config is the following:</p>
<p>values.yml:</p>
<pre class="lang-yaml prettyprint-override"><code>ingress-nginx:
defaultBackend:
nodeSelector:
beta.kubernetes.io/os: linux
controller:
replicaCount: 2
resources:
requests:
cpu: 1
memory: 4Gi
limits:
cpu: 2
memory: 7Gi
autoscaling:
enabled: true
minReplicas: 2
maxReplicas: 10
targetCPUUtilizationPercentage: 90
targetMemoryUtilizationPercentage: 90
ingressClassResource:
name: nginx
controllerValue: "k8s.io/nginx"
nodeSelector:
beta.kubernetes.io/os: linux
admissionWebhooks:
enabled: false
patch:
nodeSelector:
beta.kubernetes.io/os: linux
extraArgs:
ingress-class: "nginx"
config:
proxy-buffer-size: "16k"
proxy-body-size: "150m"
client-body-buffer-size: "128k"
large-client-header-buffers: "4 32k"
ssl-redirect: "false"
use-forwarded-headers: "true"
compute-full-forwarded-for: "true"
use-proxy-protocol: "false"
</code></pre>
<p>ingress.yml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress
namespace: namespacename
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/proxy-buffer-size: "128k"
nginx.ingress.kubernetes.io/proxy-buffers-number: "8"
nginx.ingress.kubernetes.io/client-body-buffer-size: "128k"
nginx.ingress.kubernetes.io/proxy-body-size: "150m"
spec:
tls:
- hosts:
- hostname
rules:
- host: hostname
http:
paths:
- path: /assets/static/
pathType: ImplementationSpecific
backend:
service:
name: servicename
port:
number: 8080
</code></pre>
| <p>It turns out the application which had the error had another reverse proxy in front of it (which uses Lua and OpenResty for OAuth registration).
The body-size limit needed to be raised there too. After that the file upload worked.</p>
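<p>For anyone hitting the same thing: the directive behind proxy-body-size is plain nginx's <code>client_max_body_size</code>, and OpenResty honours it as well, so the fix on the intermediate proxy is typically something like the snippet below (a sketch; where exactly it goes depends on how that proxy is configured):</p>
<pre><code># In the http, server or location block of the OpenResty/nginx proxy in front of the app
client_max_body_size 150m;
</code></pre>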
|