prompt | response
---|---
<p>My ingress cannot route to the endpoint.</p>
<p>I provided everything. The nginx ingress controller works properly, and I added the hostname bago.com as the load balancer IP, but it doesn't work.</p>
<p>Here is my ingress's YAML file:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: minimal-ingress
spec:
rules:
- host: bago.com
- http:
paths:
- path: /web1/
pathType: Exact
backend:
service:
name: web1-clusterip
port:
number: 8081
- path: /web2/
pathType: Exact
backend:
service:
name: web2-clusterip
port:
number: 8082
</code></pre>
<p>The svc and ingress are running:</p>
<pre><code>bahaddin@bahaddin-ThinkPad-E15-Gen-2:~/projects/personal/exposer/k8s-ingress$ k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.12.0.1 <none> 443/TCP 154m
web1-clusterip ClusterIP 10.12.6.102 <none> 8081/TCP 145m
web2-clusertip ClusterIP 10.12.9.22 <none> 8082/TCP 149m
bahaddin@bahaddin-ThinkPad-E15-Gen-2:~/projects/personal/exposer/k8s-ingress$ k get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
minimal-ingress <none> bago.com 34.102.241.199 80 121m
bahaddin@bahaddin-ThinkPad-E15-Gen-2:~/projects/personal/exposer/k8s-ingress$
</code></pre>
<p>Here is my API Java code:</p>
<pre><code>@RequestMapping("/web1")
@RestController
public class Controller {
@GetMapping("/hello")
public String foo() {
return "hello from web1 ms";
}
}
</code></pre>
<p><code>server.port=8081</code> is set at the container level.</p>
<p>My service:</p>
<pre><code>Web1-service.
apiVersion: v1
kind: Service
metadata:
name: web1-clusterip
spec:
ports:
- protocol: "TCP"
port: 8081
selector:
app: web1-dp
type: ClusterIP
</code></pre>
<p>But when I type in the browser</p>
<pre><code>http://bago.com/web1/hello
http://bago.com/web1/hello
</code></pre>
<p>I got a 404 not found error<br />
<strong>Screenshot</strong><br />
<img src="https://i.stack.imgur.com/WXRg5.png" alt="enter image description here" /></p>
| <p>You have an issue in your Ingress manifest: you have two items in the <code>rules</code> list, while you want a single one. Additionally, you are missing the ingress class.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: minimal-ingress
spec:
rules:
# these are individual list items
- host: bago.com
- http: ...
</code></pre>
<p>You have to change the manifest so that you have a single item. You should also add the ingress class.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: minimal-ingress
spec:
# this should be the ingress class, i.e. nginx
ingressClassName: my-ingress-class
rules:
# this list has only 1 item, which is an object.
# note the dash (-)
- host: bago.com
http: ...
</code></pre>
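<p>For reference, a sketch of the full corrected manifest, reusing the host, services and ports from the question and assuming the NGINX ingress controller registers the class name <code>nginx</code> (check with <code>kubectl get ingressclass</code>):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  ingressClassName: nginx      # assumption: class registered by the nginx controller
  rules:
  - host: bago.com             # host and http are now part of the same list item
    http:
      paths:
      - path: /web1/
        pathType: Exact
        backend:
          service:
            name: web1-clusterip
            port:
              number: 8081
      - path: /web2/
        pathType: Exact
        backend:
          service:
            name: web2-clusterip
            port:
              number: 8082
</code></pre>
<p>Note that with <code>pathType: Exact</code> only the literal paths <code>/web1/</code> and <code>/web2/</code> will match; if URLs like <code>/web1/hello</code> should also route to the backend, <code>pathType: Prefix</code> is probably what you want.</p>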
|
<p>I'm working on an Apple Silicon M1 Max and I'm trying to access a service through the web browser, but <code>minikube service nginx</code> opens a blank browser tab with some address (for example <code>http://192.168.49.2:31542/</code>) which fails to load. In the log below you can see that the output of <code>minikube service list</code> has no URL in the <code>URL</code> column. The command <code>minikube service nginx --url</code> does not return any URL either.</p>
<pre><code>$ minikube start --driver=docker
😄 minikube v1.25.2 on Darwin 12.3 (arm64)
▪ MINIKUBE_ACTIVE_DOCKERD=minikube
✨ Using the docker driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
💾 Downloading Kubernetes v1.23.3 preload ...
> preloaded-images-k8s-v17-v1...: 419.07 MiB / 419.07 MiB 100.00% 22.15 Mi
🔥 Creating docker container (CPUs=2, Memory=7903MB) ...
🐳 Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
▪ kubelet.housekeeping-interval=5m
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔎 Verifying Kubernetes components...
▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟 Enabled addons: storage-provisioner, default-storageclass
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
$ kubectl create deployment nginx --image=nginx && kubectl create service nodeport nginx --tcp=80:80
deployment.apps/nginx created
service/nginx created
$ kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 12m
nginx NodePort 10.104.191.210 <none> 80:31542/TCP 12m
$ minikube service list
|-------------|------------|--------------|-----|
| NAMESPACE | NAME | TARGET PORT | URL |
|-------------|------------|--------------|-----|
| default | kubernetes | No node port |
| default | nginx | 80-80/80 | |
| kube-system | kube-dns | No node port |
|-------------|------------|--------------|-----|
$ minikube service nginx --url
🏃 Starting tunnel for service nginx.
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
</code></pre>
<p>Any ideas why there is no service URL and why I am not able to access the service from the web browser?</p>
| <p>The network is limited if using the Docker driver on Darwin, Windows, or WSL, and the Node IP is not reachable directly.</p>
<p>Run service tunnel</p>
<pre><code>minikube service nginx
</code></pre>
<p>Check the SSH tunnel in another terminal:</p>
<pre><code>$ ps -ef | grep docker@127.0.0.1
ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -N docker@127.0.0.1 -p 55972 -i /Users/FOO/.minikube/machines/minikube/id_rsa -L TUNNEL_PORT:CLUSTER_IP:TARGET_PORT
</code></pre>
<p>Open in your browser</p>
<pre><code>http://127.0.0.1:TUNNEL_PORT
</code></pre>
<p>ref:
<a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-service-with-tunnel" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-service-with-tunnel</a></p>
|
<p>I'm using <a href="https://k8slens.dev/" rel="nofollow noreferrer">Lens Kubernetes</a>. I searched and tried a lot but didn't find any way to switch between namespaces. Is there any option to do so?
To clarify: in the VS Code extension you can switch between namespaces easily by right-clicking on a listed namespace and then selecting the <code>use Namespace</code> option. <a href="https://i.stack.imgur.com/exqS3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/exqS3.png" alt="switch to ns in vs code" /></a></p>
<p>Also, in kubectl, using this command:</p>
<pre><code>kubectl config set-context --current --namespace=my-namespace
</code></pre>
<p>But how can we do it in Lens?</p>
| <p>If I remember correctly there is a dropdown on the top right corner which you can use to select the namespace from.</p>
<p>Here is a screenshot from a blog that shows this dropdown:</p>
<p><a href="https://i.stack.imgur.com/nbtNH.png" rel="noreferrer"><img src="https://i.stack.imgur.com/nbtNH.png" alt="Select Namespace" /></a></p>
|
<p>I'm trying to use patching in Kustomize to modify Kubernetes resources and I'm wondering if there is a neat way to update every item of a list.</p>
<p>Here is the yaml that I'd like to customize:</p>
<pre><code>apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
annotations:
name: validating-webhook-configuration
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
service:
name: webhook-service
namespace: rabbitmq-system
path: /validate-rabbitmq-com-v1beta1-binding
- admissionReviewVersions:
- v1
clientConfig:
service:
name: webhook-service
namespace: rabbitmq-system
path: /validate-rabbitmq-com-v1beta1-exchange
- admissionReviewVersions:
- v1
clientConfig:
service:
name: webhook-service
namespace: rabbitmq-system
path: /validate-rabbitmq-com-v1beta1-federation
</code></pre>
<p>And here is the end effect that I want to achieve (see <code>NEWVALUE</code>):</p>
<pre><code>apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
annotations:
name: validating-webhook-configuration
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
NEWVALUE: value
service:
name: webhook-service
namespace: rabbitmq-system
path: /validate-rabbitmq-com-v1beta1-binding
- admissionReviewVersions:
- v1
clientConfig:
NEWVALUE: value
service:
name: webhook-service
namespace: rabbitmq-system
path: /validate-rabbitmq-com-v1beta1-exchange
- admissionReviewVersions:
- v1
clientConfig:
NEWVALUE: value
service:
name: webhook-service
namespace: rabbitmq-system
path: /validate-rabbitmq-com-v1beta1-federation
</code></pre>
<p>I'm fully aware of a JSON 6902 patch, but it only allows me to modify one index at a time:</p>
<pre><code>- op: add
path: /webhooks/0/clientConfig/NEWVALUE
value: value
</code></pre>
<p>Is there a way to modify every element in a fashion similar to a for loop or do I have to specify each index manually?</p>
| <p>According to what is discussed <a href="https://github.com/kubernetes-sigs/kustomize/issues/4561" rel="nofollow noreferrer">here</a>, they don't support such a case yet. When they add it, you can do something like this:</p>
<pre><code>kind: Kustomization
apiVersion: kustomize.config.k8s.io/v1beta1
resources:
- validating-webhook.yaml
- configmap.yaml
replacements:
- source:
kind: ConfigMap
name: cofignmap
fieldPath: data.NEWVALUE
targets:
- select:
name: validating-webhook-configuration
kind: ValidatingWebhookConfiguration
fieldPaths:
- webhooks.*.clientConfig.NEWVALUE
options:
create: true
</code></pre>
<p>configMap:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: cofignmap
data:
NEWVALUE: value
</code></pre>
<p>You can read about replacements in the <a href="https://github.com/kubernetes-sigs/cli-experimental/blob/master/site/content/en/references/kustomize/kustomization/replacements/_index.md" rel="nofollow noreferrer">docs</a>.</p>
|
<p>After upgrading from Ubuntu 20.04 LTS to Ubuntu 22.04 LTS, I am currently facing issues with my k3s cluster.</p>
<p>For instance, the logs of the <code>local-path-provisioner</code> pod:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl logs -n kube-system local-path-provisioner-6c79684f77-l4cqp
time="2022-04-28T03:27:00Z" level=fatal msg="Error starting daemon: Cannot start Provisioner: failed to get Kubernetes server version: Get \"https://10.43.0.1:443/version?timeout=32s\": dial tcp 10.43.0.1:443: i/o timeout"
</code></pre>
<p>I've tried the following actions:</p>
<ul>
<li>Disabling ipv6, as described <a href="https://cwesystems.com/?p=231" rel="nofollow noreferrer">here</a></li>
<li>Disabling <code>ufw</code> firewall</li>
<li>Use legacy iptables</li>
<li>Adding rules for internal traffic to iptables, like so:</li>
</ul>
<pre class="lang-sh prettyprint-override"><code>$ sudo iptables -A INPUT -s 10.42.0.0/16 -d <host_ip> -j ACCEPT
</code></pre>
<p>Still, <code>coredns</code>, <code>local-path-provisioner</code> and <code>metrics-server</code> deployments won't start. When listing pods, here's the output:</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
cilium-64r4c 1/1 Running 2 (18m ago) 174m
cilium-d8grw 1/1 Running 2 (18m ago) 174m
cilium-g4gmf 1/1 Running 2 (18m ago) 174m
cilium-h5j4h 1/1 Running 2 (18m ago) 174m
cilium-n62nv 1/1 Running 2 (18m ago) 174m
cilium-operator-76cff99967-6fgkv 1/1 Running 2 (18m ago) 174m
cilium-operator-76cff99967-pbr4l 1/1 Running 2 (18m ago) 174m
cilium-w4n6d 1/1 Running 2 (18m ago) 174m
cilium-wgm7l 1/1 Running 2 (18m ago) 174m
cilium-zqb6w 1/1 Running 2 (18m ago) 174m
coredns-d76bd69b-bhgnl 0/1 CrashLoopBackOff 44 (3m27s ago) 177m
hubble-relay-67f64789c7-cjzz9 0/1 CrashLoopBackOff 63 (4m15s ago) 174m
hubble-ui-794cd44b77-9vgbl 3/3 Running 6 (18m ago) 174m
local-path-provisioner-6c79684f77-l4cqp 0/1 CrashLoopBackOff 35 (3m53s ago) 177m
metrics-server-7cd5fcb6b7-v74rc 0/1 CrashLoopBackOff 42 (3m35s ago) 177m
</code></pre>
<p>Any help is appreciated! thanks</p>
| <p>Since you're using Cilium, I think you might be running into this issue: <a href="https://github.com/cilium/cilium/issues/10645" rel="nofollow noreferrer">https://github.com/cilium/cilium/issues/10645</a></p>
<p>The workaround is to ensure <code>net.ipv4.conf.lxc*.rp_filter</code> is set to 0:</p>
<pre><code>echo 'net.ipv4.conf.lxc*.rp_filter = 0' | sudo tee -a /etc/sysctl.d/90-override.conf
sudo systemctl start systemd-sysctl
</code></pre>
|
<p>So I am trying to instrument a FastAPI Python server with OpenTelemetry. I installed the needed dependencies through Poetry:</p>
<pre><code>opentelemetry-api = "^1.11.1"
opentelemetry-distro = {extras = ["otlp"], version = "^0.31b0"}
opentelemetry-instrumentation-fastapi = "^0.31b0"
</code></pre>
<p>When running the server locally, with <code>opentelemetry-instrument --traces_exporter console uvicorn src.main:app --host 0.0.0.0 --port 5000</code> I can see the traces printed out to my console whenever I call any of my endpoints.</p>
<p>The main issue I face is that when running the app in k8s I see no logs in the collector.</p>
<p>I have added cert-manager with <code>kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml</code> (needed by the OTel Operator) and the OTel Operator itself with <code>kubectl apply -f https://github.com/open-telemetry/opentelemetry-operator/releases/latest/download/opentelemetry-operator.yaml</code>.</p>
<p>Then, I added a collector with the following config:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
name: otel
spec:
config: |
receivers:
otlp:
protocols:
grpc:
http:
processors:
exporters:
logging:
service:
pipelines:
traces:
receivers: [otlp]
processors: []
exporters: [logging]
</code></pre>
<p>And finally, an Instrumentation CR to enable auto-instrumentation:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
name: my-instrumentation
spec:
exporter:
endpoint: http://otel-collector:4317
propagators:
- tracecontext
- baggage
sampler:
type: parentbased_traceidratio
argument: "0.25"
</code></pre>
<p>My app's deployment contains:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: kompose convert -f docker-compose-run-local-with-aws.yml -c
kompose.version: 1.26.1 (HEAD)
creationTimestamp: null
labels:
io.kompose.service: api
name: api
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: api
strategy:
type: Recreate
template:
metadata:
annotations:
kompose.cmd: kompose convert -f docker-compose-run-local-with-aws.yml -c
kompose.version: 1.26.1 (HEAD)
sidecar.opentelemetry.io/inject: "true"
instrumentation.opentelemetry.io/inject-python: "true"
creationTimestamp: null
labels:
io.kompose.network/backend: "true"
io.kompose.service: api
app: api
spec:
containers:
- env:
- name: APP_ENV
value: docker
- name: RELEASE_VERSION
value: "1.0.0"
- name: OTEL_RESOURCE_ATTRIBUTES
value: "service.name=fastapiApp"
- name: OTEL_LOG_LEVEL
value: "debug"
- name: OTEL_TRACES_EXPORTER
value: otlp_proto_http
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://otel-collector:4317
image: my_org/api:1.0.0
name: api
ports:
- containerPort: 5000
resources: {}
imagePullPolicy: IfNotPresent
restartPolicy: Always
</code></pre>
<p>What am I missing? I have double checked everything a thousand times and cannot figure out what might be wrong</p>
| <p>You have incorrectly configured the exporter endpoint setting <code>OTEL_EXPORTER_OTLP_ENDPOINT</code>. The endpoint value for the OTLP over HTTP exporter should use port 4318; port 4317 is for OTLP/gRPC exporters.</p>
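<p>As a concrete sketch of the fix in the Deployment's <code>env</code> section (keeping the collector Service name <code>otel-collector</code> from the manifests above):</p>
<pre class="lang-yaml prettyprint-override"><code>        - name: OTEL_TRACES_EXPORTER
          value: otlp_proto_http
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: http://otel-collector:4318   # OTLP/HTTP port
</code></pre>
<p>Alternatively, keep port 4317 and switch the exporter to <code>otlp_proto_grpc</code> so the protocol and the port match.</p>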
|
<p>I'm trying to use a ytt overlay to replace the <code>objectName</code> in my secret class following <a href="https://carvel.dev/ytt/#gist:https://gist.github.com/cppforlife/7633c2ed0560e5c8005e05c8448a74d2" rel="nofollow noreferrer">this gist example replacing only part of a multi-line string</a>, but it ends up appending a new item instead of replacing the existing one. How can I get it to work for this case?</p>
<h4>Input Files</h4>
<p>db_secret.yaml</p>
<pre><code>kind: SecretProviderClass
metadata:
namespace: default
name: db_credentials
spec:
provider: aws
parameters:
objects: |
- objectName: TO_BE_REPLACED_BY_YTT
objectType: "secretsmanager"
jmesPath:
- path: username
objectAlias: dbusername
- path: password
objectAlias: dbpassword
</code></pre>
<p>overlay.yaml</p>
<pre><code>#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:yaml", "yaml")
#@ load("@ytt:data", "data")
#@ def updates():
- objectName: #@ data.values.db_secret_name
#@ end
#@overlay/match by=overlay.subset({"kind": "SecretProviderClass", "metadata": {"name": "db_credentials"}})
---
spec:
provider: aws
parameters:
#@overlay/replace via=lambda a,_: yaml.encode(overlay.apply(yaml.decode(a), updates()))
objects:
</code></pre>
<p>values-staging.yaml</p>
<pre><code>db_secret_name: db-secret-staging
</code></pre>
<h4>ytt output:</h4>
<pre><code>apiVersion: secrets-store.csi.x-k8s.io/v1alpha1
kind: SecretProviderClass
metadata:
namespace: default
name: db_credentials
spec:
provider: aws
parameters:
objects: |
- objectName: TO_BE_REPLACED_BY_YTT
objectType: secretsmanager
jmesPath:
- path: username
objectAlias: dbusername
- path: password
objectAlias: dbpassword
- objectName: db-secret-staging
</code></pre>
| <p>It's key to notice that the YAML value being overlayed is itself an array. It's an array of one item, but an array nonetheless.</p>
<p>In order to reach the map that has the <code>objectName</code> map item, you need to first match its parent: that containing array item.</p>
<p>The most durable way to do that is by selecting for the array item that has a map that contains the <code>objectName</code> key. You can say that like this:</p>
<pre><code> #@ def updates():
+ #@overlay/match by=lambda idx,left,right: "objectName" in left
- objectName: #@ data.values.db_secret_name
#@ end
</code></pre>
<p>can be read: "in the array value being overlay'ed (aka 'left'), find the array item whose value has a map whose keys include the string "objectName"... merge the value of <em>this</em> array item (i.e. the map in the array item within this overlay) into <em>that</em> matched map."</p>
<p><em>(Playground: <a href="https://carvel.dev/ytt/#gist:https://gist.github.com/pivotaljohn/9593f971ac5962055ff38c5eeaf1df11" rel="nofollow noreferrer">https://carvel.dev/ytt/#gist:https://gist.github.com/pivotaljohn/9593f971ac5962055ff38c5eeaf1df11</a>)</em></p>
<p>When working with overlays, it can be helpful to visualize the tree of values. There are some nice examples in the docs: <a href="https://carvel.dev/ytt/docs/v0.40.0/yaml-primer/" rel="nofollow noreferrer">https://carvel.dev/ytt/docs/v0.40.0/yaml-primer/</a></p>
<p>Also, there was a recent-ish vlog post that has been reported to help folks level up using <code>ytt</code> overlays: <a href="https://carvel.dev/blog/primer-on-ytt-overlays/" rel="nofollow noreferrer">https://carvel.dev/blog/primer-on-ytt-overlays/</a></p>
|
<p>I'm on an ec2 instance trying to get my cluster created. I have kubectl already installed and here are my services and workloads yaml files</p>
<p>services.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: stockapi-webapp
spec:
selector:
app: stockapi
ports:
- name: http
port: 80
type: LoadBalancer
</code></pre>
<p>workloads.yaml</p>
<pre><code>apiVersion: v1
kind: Deployment
metadata:
name: stockapi
spec:
selector:
matchLabels:
app: stockapi
replicas: 1
template: # template for the pods
metadata:
labels:
app: stockapi
spec:
containers:
- name: stock-api
image: public.ecr.aws/u1c1h9j4/stock-api:latest
</code></pre>
<p>When I try to run</p>
<pre><code>kubectl apply -f workloads.yaml
</code></pre>
<p>I get this as an error</p>
<pre><code>The connection to the server localhost:8080 was refused - did you specify the right host or port?
</code></pre>
<p>I also tried changing the port in my services.yaml to 8080 and that didn't fix it either</p>
| <p>This error comes when the <code>~/.kube/config</code> file is missing or not configured correctly on the client where you run the <code>kubectl</code> command.</p>
<p>kubectl reads the cluster info, including which host and port to connect to, from the <code>~/.kube/config</code> file.</p>
<p>If you are using EKS, here's how you can create the <code>config</code> file:
<a href="https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html" rel="noreferrer">aws eks create kubeconfig file</a></p>
|
<p>I am trying to configure FluxCD in Kubernetes to send notifications to Microsfot Teams for reconciliation events.</p>
<p>I have followed the FluxCD "<a href="https://fluxcd.io/docs/guides/notifications/" rel="nofollow noreferrer">Setup Notifications</a>" instructions. Everything is deployed as expected.</p>
<p><a href="https://i.stack.imgur.com/39Bq5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/39Bq5.png" alt="enter image description here" /></a></p>
<p>I am not receiving any alerts in Teams as expected when I edit a config (e.g. pod replicaCount) and run "flux reconcile ...". This is the error I am seeing in the Notification Controller</p>
<p><a href="https://i.stack.imgur.com/ZKiHH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZKiHH.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/bjKjx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bjKjx.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/fKaq3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fKaq3.png" alt="enter image description here" /></a></p>
<p>Here is the secret with the Microsoft Teams channel URL</p>
<p><a href="https://i.stack.imgur.com/GMM1h.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GMM1h.png" alt="enter image description here" /></a></p>
<p>Does anyone have any ideas to please share with me today. Thank you</p>
| <p>So, firstly, I had not created an incoming webhook in the Microsoft Teams UI: <a href="https://fluxcd.io/docs/components/notification/provider/#ms-teams" rel="nofollow noreferrer">https://fluxcd.io/docs/components/notification/provider/#ms-teams</a></p>
<p>Secondly, I asked the Flux maintainers as @Nalum suggested, and there was a follow-on issue with encoding the Teams URL correctly. I had a trailing newline character which made the URL invalid, as reported by the Provider. I was receiving the following error:</p>
<p><a href="https://i.stack.imgur.com/1cRKK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1cRKK.png" alt="enter image description here" /></a></p>
<p>To resolve this I had to use the -n switch on the echo command when base64'ing the Teams URL for the Secret in Kubernetes.</p>
<pre><code>echo -n '<teams url>' | base64
</code></pre>
<p>My final Kubernetes config was as follows:
<a href="https://i.stack.imgur.com/VarHX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VarHX.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/61fmu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/61fmu.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/aiTCw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aiTCw.png" alt="enter image description here" /></a></p>
<p>A full breakdown of the investigation can be seen here: <a href="https://github.com/fluxcd/flux2/discussions/2719" rel="nofollow noreferrer">https://github.com/fluxcd/flux2/discussions/2719</a></p>
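<p>For reference, a minimal sketch of the manifests behind those screenshots (names and namespace are assumptions; the Secret must hold the incoming webhook URL under the <code>address</code> key, without a trailing newline as described above):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
  name: msteams-webhook-url
  namespace: flux-system
stringData:
  address: <teams url>          # the incoming webhook URL from Teams
---
apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Provider
metadata:
  name: msteams
  namespace: flux-system
spec:
  type: msteams
  secretRef:
    name: msteams-webhook-url
---
apiVersion: notification.toolkit.fluxcd.io/v1beta1
kind: Alert
metadata:
  name: msteams-alerts
  namespace: flux-system
spec:
  providerRef:
    name: msteams
  eventSeverity: info
  eventSources:
  - kind: Kustomization
    name: '*'
  - kind: HelmRelease
    name: '*'
</code></pre>
<p>Using <code>stringData</code> sidesteps the manual base64 step entirely, which avoids the trailing-newline pitfall in the first place.</p>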
|
<p>I want to set up an IPv6 address for a service on the GKE cluster. The main reason I want to do that is that I am setting up a Google Managed Certificate and connecting the service to a domain name, and the certificate requires both type A and type AAAA records to be configured. I reserved an IPv6 address on the VPC network, but there is no way to assign it. I even tried editing the YAML to support the IPv6 family, but it just shows the error</p>
<pre><code>The Service "made-up-name" is invalid: spec.ipFamilies[1]: Invalid value: []string(nil): ipfamily IPv6 is not configured on cluster
</code></pre>
<p>Here is my YAML file as of now</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-stream-server-depl
spec:
replicas: 1
selector:
matchLabels:
app: my-stream-server
template:
metadata:
labels:
app: my-stream-server
spec:
containers:
- name: my-stream-server
image: gcr.io/reddo-346118/my-stream-server
---
apiVersion: v1
kind: Service
metadata:
name: my-stream-server-srv
spec:
ipFamilyPolicy: PreferDualStack
ipFamilies:
- IPv4
- IPv6
selector:
app: my-stream-server
ports:
- name: http
protocol: TCP
port: 8000
targetPort: 8000
- name: rtmp
protocol: TCP
port: 1935
targetPort: 1935
---
kind: Service
apiVersion: v1
metadata:
name: my-stream-server-rtmp
spec:
type: LoadBalancer
externalTrafficPolicy: Cluster
ports:
- name: rtmp
port: 1935
targetPort: 1935
protocol: TCP
selector:
app: my-stream-server
</code></pre>
| <p>GKE does not currently support IPv6 for pods or services. You can, however, assign an IPv6 address to an external HTTP(S) load balancer. You won't be able to do this for <em>Service</em> of type <em>LoadBalancer</em>. You'll need to create an Ingress resource instead as Ingress creates an HTTP(S) load balancer which does support IPv6.</p>
<p>Support for dual stack on GKE is currently targeted for late 2Q 2022.</p>
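<p>A rough sketch of that approach (the certificate name, domain and Ingress name are assumptions): expose the HTTP port through an Ingress with a Google-managed certificate, and configure the IPv6 frontend on the resulting HTTP(S) load balancer:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: my-stream-server-cert        # assumed name
spec:
  domains:
  - stream.example.com               # assumed domain
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-stream-server-ingress
  annotations:
    networking.gke.io/managed-certificates: my-stream-server-cert
spec:
  defaultBackend:
    service:
      name: my-stream-server-srv     # must be reachable by the GCLB, e.g. type NodePort
      port:
        number: 8000
</code></pre>
<p>Keep in mind the HTTP(S) load balancer only carries HTTP(S) traffic, so the RTMP port 1935 would still need the separate L4 LoadBalancer Service.</p>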
|
<p>I would like to validate deployments based on custom logic before they are scaled.
I created an admission webhook to do that, but unfortunately the scale operation is not detected by the webhook.</p>
<pre><code>apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
name: deployment-validator
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
service:
name: example-name
namespace: example-namespace
path: /validate-deployment
port: 9443
failurePolicy: Ignore
matchPolicy: Equivalent
name: validation.deploy.example-domain.com
namespaceSelector: {}
objectSelector: {}
rules:
- apiGroups:
- apps
apiVersions:
- v1
operations:
- '*'
resources:
- deployment
scope: '*'
sideEffects: None
timeoutSeconds: 10
</code></pre>
<p>If I CREATE or UPDATE the deployment, the action is detected by the webhook server, and the same goes for PATCH (kubectl patch ...).
Unfortunately, if I use kubectl scale ..., the webhook server does not detect the action, and I'm unable to validate the request.</p>
<p>How can I resolve this issue?</p>
| <p>When you run <code>kubectl scale</code> you are not actually patching the <code>Deployment</code> resource, but you are editing a subresource named <code>Scale</code> instead.</p>
<p>This is the API doc entry of the scale call: <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#replace-scale-deployment-v1-apps" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.24/#replace-scale-deployment-v1-apps</a></p>
<pre><code>PUT /apis/apps/v1/namespaces/{namespace}/deployments/{name}/scale
</code></pre>
<p>Also, I think you need the plural name for your resource.
So you might have to change the rule in your admission controller like this:</p>
<pre><code> rules:
- apiGroups:
- apps
apiVersions:
- v1
operations:
- '*'
resources:
- deployments/scale
scope: '*'
</code></pre>
<p>and that should work.</p>
|
<p>I have an API in a container and when I create a cluster the api works fine, but the pods constantly restart and there isn't any specific reason why in the logs.</p>
<p>I'm on an m5.Large EC2 instance and using k3d. I was following a demo so after installing k3d I ran this command</p>
<pre><code>k3d cluster create test -p "80:80@loadbalancer"
</code></pre>
<p>Here is my deployment file</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: stock-api
labels:
app: stock-api
spec:
replicas: 1
selector:
matchLabels:
app: stock-api
template:
metadata:
labels:
app: stock-api
spec:
containers:
- name: stock-api
image: mpriv32/stock-api:latest
envFrom:
- secretRef:
name: api-credentials
</code></pre>
<p>Service file</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: stock-api
labels:
app: stock-api
spec:
type: ClusterIP
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: stock-api
</code></pre>
<p>My third file is just my secrets file which I just removed the values for this post</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: api-credentials
stringData:
ACCESS_KEY:
SECRET_KEY:
EMAIL_PASSWORD:
EMAIL_USER:
API_KEY:
</code></pre>
<p>I applied all of the files and the API works fine, but then my pods constantly restart.</p>
<p>First I ran this command and the reason I got stock information is because I had a print statement in my application to test the api response and forgot to remove it.</p>
<pre><code>kubectl logs -p stock-api-7f5c45776b-gc67c
{'Company': {'S': 'AAPL'}, 'DailyPrice': {'S': '166.02'}}
</code></pre>
<p>Getting the logs didn't help, so then I ran describe and got this output</p>
<pre><code> Normal Scheduled 16m default-scheduler Successfully assigned default/stock-api-7f5c45776b-gc67c to k3d-test-server-0
Normal Pulled 15m kubelet Successfully pulled image "mpriv32/stock-api:latest" in 16.509616605s
Normal Pulled 15m kubelet Successfully pulled image "mpriv32/stock-api:latest" in 696.527075ms
Normal Pulled 15m kubelet Successfully pulled image "mpriv32/stock-api:latest" in 734.334806ms
Normal Pulled 15m kubelet Successfully pulled image "mpriv32/stock-api:latest" in 823.429206ms
Normal Started 15m (x4 over 15m) kubelet Started container stock-api
Normal Pulling 14m (x5 over 16m) kubelet Pulling image "mpriv32/stock-api:latest"
Normal Pulled 14m kubelet Successfully pulled image "mpriv32/stock-api:latest" in 698.883126ms
Normal Created 14m (x5 over 15m) kubelet Created container stock-api
Warning BackOff 62s (x67 over 15m) kubelet Back-off restarting failed container
</code></pre>
<p>It constantly keeps "backoff restarting"</p>
<pre><code>When I run `describe po` I get this
Name: stock-api-7f5c45776b-gc67c
Namespace: default
Priority: 0
Node: k3d-test-server-0/172.18.0.2
Start Time: Mon, 23 May 2022 06:44:42 +0000
Labels: app=stock-api
pod-template-hash=7f5c45776b
Annotations: <none>
Status: Running
IP: 10.42.0.9
IPs:
IP: 10.42.0.9
Controlled By: ReplicaSet/stock-api-7f5c45776b
Containers:
stock-api:
Container ID: containerd://846d4c5c282274453c4b2ad8b834f20d2c673468ca18386d7404b07915b81a9c
Image: mpriv32/stock-api:latest
Image ID: docker.io/mpriv32/stock-api@sha256:98189cdf972ed61af79505b58aba2a0166fd012f5be4e0f012b2dffa0ea3dd5f
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 23 May 2022 08:23:16 +0000
Finished: Mon, 23 May 2022 08:23:17 +0000
Ready: False
Restart Count: 24
Environment Variables from:
api-credentials Secret Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-czkv9 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-czkv9:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 3m8s (x457 over 102m) kubelet Back-off restarting failed container
</code></pre>
<p>Dockerfile for my application</p>
<pre><code>FROM python:3.8
COPY app.py .
RUN pip install requests python-dotenv
ARG API_KEY
ARG EMAIL_USER
ARG EMAIL_PASSWORD
ENV API_KEY $API_KEY
ENV EMAIL_USER $EMAIL_USER
ENV EMAIL_PASSWORD $EMAIL_PASSWORD
COPY database.py .
RUN pip install boto3
ARG ACCESS_KEY
ARG SECRET_KEY
ENV ACCESS_KEY $ACCESS_KEY
ENV SECRET_KEY $SECRET_KEY
CMD ["python", "database.py"]
EXPOSE 80
</code></pre>
<p>app.py</p>
<pre><code>from datetime import datetime
import smtplib
import os
import requests
today_date = {datetime.today().strftime("%Y-%m-%d")}
url = (
'https://api.polygon.io/v1/open-close/{stock}/2022-05-10?adjusted=true&apiKey={API_key}'
)
if os.path.isfile('.env'):
from dotenv import load_dotenv
load_dotenv()
def __send_email(stock_data: str) -> None:
gmail_user = os.getenv('EMAIL_USER')
gmail_password = os.getenv('EMAIL_PASSWORD')
mail_from = gmail_user
mail_to = gmail_user
mail_subject = f'Your stock update for {datetime.today().strftime("%m/%d/%Y")}'
mail_message = f'Subject: {mail_subject}\n\n{stock_data}'
server = smtplib.SMTP('smtp.gmail.com', 587)
server.starttls()
server.login(gmail_user, gmail_password)
server.sendmail(mail_from, mail_to, mail_message)
server.close()
class api_data:
response = requests.get(url.format(
stock=os.getenv('stock'),
API_key=os.getenv('API_key')
))
data = response.json()
ticker = data['symbol']
daily_high = data['high']
daily_low = data['low']
daily_close = data['close']
def __email_body():
stock_data = (
f"""For the stock: {api_data.ticker}
\nToday's high was {api_data.daily_high}
\nToday's low was $ {api_data.daily_low}
\n{api_data.ticker} closed at $ {api_data.daily_close}"""
)
__send_email(stock_data)
__email_body()
</code></pre>
<p>database.py</p>
<pre><code>import boto3
import app
import os
#Pulling stock name and daily price from API
stock_name = str(app.api_data.ticker)
stock_price = str(app.api_data.daily_close)
if os.path.isfile('.env'):
from dotenv import load_dotenv
load_dotenv()
ACCESS_KEY = os.getenv('ACCESS_KEY')
SECRET_KEY = os.getenv('SECRET_KEY')
#DynamoDB table name
table_name = "stock_tracker"
dynamodb_client = boto3.client('dynamodb', region_name='us-west-2', aws_access_key_id=ACCESS_KEY, aws_secret_access_key=SECRET_KEY)
stock_info ={
'Company': {'S': stock_name}
,'DailyPrice': {'S': stock_price}
}
print(stock_info)
if __name__ == "__main__":
dynamodb_client.put_item(TableName = table_name, Item = stock_info )
</code></pre>
| <p>First of all, the line below doesn't appear to be a proper log entry:</p>
<pre><code>{'Company': {'S': 'AAPL'}, 'DailyPrice': {'S': '166.02'}}
</code></pre>
<p>You need a timestamp, a log level and the other standard fields in each log entry. This looks like a code issue: plug in a standard logging library (log4j in the Java world, or Python's built-in <code>logging</code> module here) and use log levels like debug/warn/error to emit proper log events. This would help you troubleshoot the issue.</p>
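<p>As a minimal sketch of what that could look like if dropped into the <code>database.py</code> from the question (the variable names <code>stock_info</code>, <code>dynamodb_client</code> and <code>table_name</code> come from that script):</p>
<pre><code>import logging

# timestamp, level and logger name make entries easy to read in `kubectl logs`
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
logger = logging.getLogger("stock-api")

logger.info("stock info fetched: %s", stock_info)        # instead of print(stock_info)
try:
    dynamodb_client.put_item(TableName=table_name, Item=stock_info)
except Exception:
    logger.exception("failed to write item to DynamoDB")  # logs the full traceback
</code></pre>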
|
<p>Today one of the coredns pods ran into an issue. I checked the coredns pod and its log shows this:</p>
<pre><code>.:53
2022-05-23T08:41:36.664Z [INFO] CoreDNS-1.3.1
2022-05-23T08:41:36.665Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2022-05-23T08:41:36.665Z [INFO] plugin/reload: Running configuration MD5 = 8646128cd34ade07719a0787cce6943e
2022-05-23T09:40:08.490Z [ERROR] plugin/errors: 2 oap. A: dial udp 100.100.2.136:53: i/o timeout
</code></pre>
<p>Currently coredns has 2 pods, and one of them has this issue. The DNS pod IP is <code>172.30.112.19</code>, so why did the DNS pod try to connect to <code>100.100.2.136</code>? Why did this happen, and what should I do to make it work?</p>
| <p><code>why did the DNS pod try to connect to 100.100.2.136?</code></p>
<p>When CoreDNS gets a request for a name outside the cluster domain (e.g. <a href="http://www.google.com" rel="nofollow noreferrer">www.google.com</a>), it forwards the request to an upstream nameserver (likely 100.100.2.136:53 in your case). You can check the CoreDNS ConfigMap for <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/#coredns-configmap-options" rel="nofollow noreferrer">forward . /etc/resolv.conf</a>.</p>
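<p>For reference, the default Corefile in the coredns ConfigMap usually looks roughly like this (trimmed); the <code>forward</code> line is what decides which upstream resolver is used:</p>
<pre><code>.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
    loadbalance
}
</code></pre>
<p>With <code>forward . /etc/resolv.conf</code> the upstream is whatever nameserver the node's resolv.conf lists, which is typically where an address like 100.100.2.136 comes from.</p>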
<p><code>why did this happen? what should I do to make it work?</code></p>
<p>If the coreDNS pod on the other node is functioning correctly, there could be a discrepancy in the nameserver and/or the network settings of the affected node. It is also possible that the CNI agent on the node has malfunctioned and messed up the IP tables. You can try cordoning this node and moving the pod to another node.</p>
|
<p>I would like to validate that deployments which have pod and node affinities (and anti-affinities) are configured according to internal guidelines.</p>
<p>Is there a way to get deployments (or Pods) using kubectl and limit the result to objects that have such an affinity configured?</p>
<p>I have played around with the jsonpath output, but was unsuccessful so far.</p>
| <p>Hope you are enjoying your Kubernetes journey!</p>
<p>If you need to use affinities (especially with <code>preferredDuringSchedulingIgnoredDuringExecution</code>, explanation below) and just want to "find" deployments that actually have affinities, you can use this:</p>
<pre><code>❯ k get deploy -o custom-columns=NAME:".metadata.name",AFFINITIES:".spec.template.spec.affinity"
NAME AFFINITIES
nginx-deployment <none>
nginx-deployment-vanilla <none>
nginx-deployment-with-affinities map[nodeAffinity:map[preferredDuringSchedulingIgnoredDuringExecution:[map[preference:map[matchExpressions:[map[key:test-affinities1 operator:In values:[test1]]]] weight:1]] requiredDuringSchedulingIgnoredDuringExecution:map[nodeSelectorTerms:[map[matchExpressions:[map[key:test-affinities operator:In values:[test]]]]]]]]
</code></pre>
<p>Every <code><none></code> pattern indicates that there is no affinity in the deployment.</p>
<p>However, with affinities, if you want to get only the deployments that have affinities without the deployments that don't have affinities, use this:</p>
<pre><code>❯ k get deploy -o custom-columns=NAME:".metadata.name",AFFINITIES:".spec.template.spec.affinity" | grep -v "<none>"
NAME AFFINITIES
nginx-deployment-with-affinities map[nodeAffinity:map[preferredDuringSchedulingIgnoredDuringExecution:[map[preference:map[matchExpressions:[map[key:test-affinities1 operator:In values:[test1]]]] weight:1]] requiredDuringSchedulingIgnoredDuringExecution:map[nodeSelectorTerms:[map[matchExpressions:[map[key:test-affinities operator:In values:[test]]]]]]]]
</code></pre>
<p>And if you just want the names of the deployments that have affinities, consider using this little script:</p>
<pre><code>❯ k get deploy -o custom-columns=NAME:".metadata.name",AFFINITIES:".spec.template.spec.affinity" --no-headers | grep -v "<none>" | awk '{print $1}'
nginx-deployment-with-affinities
</code></pre>
<p>But do not forget that <code>nodeSelector</code> is the simplest way to constrain Pods to nodes with specific labels (more info here: <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity</a>). Also remember that (according to the same link) the <code>requiredDuringSchedulingIgnoredDuringExecution</code> type of node affinity functions like nodeSelector, but with a more expressive syntax!
So if you don't need <code>preferredDuringSchedulingIgnoredDuringExecution</code> when dealing with affinities, consider using nodeSelector!</p>
<p>After reading the above link, if you want to deal with nodeSelector you can use the same mechanic I used before:</p>
<pre><code>❯ k get deploy -o custom-columns=NAME:".metadata.name",NODE_SELECTOR:".spec.template.spec.nodeSelector"
NAME NODE_SELECTOR
nginx-deployment map[test-affinities:test]
nginx-deployment-vanilla <none>
</code></pre>
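<p>If <code>jq</code> is available, the same filtering can also be expressed in one go (shown here as an alternative, not a replacement for the commands above):</p>
<pre><code>❯ kubectl get deployments -A -o json \
  | jq -r '.items[] | select(.spec.template.spec.affinity != null) | "\(.metadata.namespace)/\(.metadata.name)"'
</code></pre>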
|
<p>Right now, I can add my IP using</p>
<pre><code>gcloud container clusters update core-cluster --zone=asia-southeast1-a --enable-master-authorized-networks --master-authorized-networks w.x.y.z/32
</code></pre>
<p>but it overrides all the existing authorized networks that were already there.</p>
<p>Is there any way to append the new IP to the existing list of authorized networks?</p>
| <p>You could automate what @Gari Singh said using gcloud, jq and tr. See below for doing it with CLI:</p>
<pre class="lang-bash prettyprint-override"><code>NEW_CIDR=8.8.4.4/32
export CLUSTER=test-psp
OLD_CIDR=$(gcloud container clusters describe $CLUSTER --format json | jq -r '.masterAuthorizedNetworksConfig.cidrBlocks[] | .cidrBlock' | tr '\n' ',')
echo "The existing master authorized networks were $OLD_CIDR"
gcloud container clusters update $CLUSTER --master-authorized-networks "$OLD_CIDR$NEW_CIDR" --enable-master-authorized-networks
</code></pre>
|
<p>I have an API in a container and when I create a cluster the api works fine, but the pods constantly restart and there isn't any specific reason why in the logs.</p>
<p>I'm on an m5.Large EC2 instance and using k3d. I was following a demo so after installing k3d I ran this command</p>
<pre><code>k3d cluster create test -p "80:80@loadbalancer"
</code></pre>
<p>Here is my deployment file</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: stock-api
labels:
app: stock-api
spec:
replicas: 1
selector:
matchLabels:
app: stock-api
template:
metadata:
labels:
app: stock-api
spec:
containers:
- name: stock-api
image: mpriv32/stock-api:latest
envFrom:
- secretRef:
name: api-credentials
</code></pre>
<p>Service file</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: stock-api
labels:
app: stock-api
spec:
type: ClusterIP
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: stock-api
</code></pre>
<p>My third file is just my secrets file which I just removed the values for this post</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: api-credentials
stringData:
ACCESS_KEY:
SECRET_KEY:
EMAIL_PASSWORD:
EMAIL_USER:
API_KEY:
</code></pre>
<p>I applied all of the files and the API works fine, but then my pods constantly restart.</p>
<p>First I ran this command and the reason I got stock information is because I had a print statement in my application to test the api response and forgot to remove it.</p>
<pre><code>kubectl logs -p stock-api-7f5c45776b-gc67c
{'Company': {'S': 'AAPL'}, 'DailyPrice': {'S': '166.02'}}
</code></pre>
<p>Getting the logs didn't help, so then I ran describe and got this output</p>
<pre><code> Normal Scheduled 16m default-scheduler Successfully assigned default/stock-api-7f5c45776b-gc67c to k3d-test-server-0
Normal Pulled 15m kubelet Successfully pulled image "mpriv32/stock-api:latest" in 16.509616605s
Normal Pulled 15m kubelet Successfully pulled image "mpriv32/stock-api:latest" in 696.527075ms
Normal Pulled 15m kubelet Successfully pulled image "mpriv32/stock-api:latest" in 734.334806ms
Normal Pulled 15m kubelet Successfully pulled image "mpriv32/stock-api:latest" in 823.429206ms
Normal Started 15m (x4 over 15m) kubelet Started container stock-api
Normal Pulling 14m (x5 over 16m) kubelet Pulling image "mpriv32/stock-api:latest"
Normal Pulled 14m kubelet Successfully pulled image "mpriv32/stock-api:latest" in 698.883126ms
Normal Created 14m (x5 over 15m) kubelet Created container stock-api
Warning BackOff 62s (x67 over 15m) kubelet Back-off restarting failed container
</code></pre>
<p>It constantly keeps "backoff restarting"</p>
<pre><code>When I run `describe po` I get this
Name: stock-api-7f5c45776b-gc67c
Namespace: default
Priority: 0
Node: k3d-test-server-0/172.18.0.2
Start Time: Mon, 23 May 2022 06:44:42 +0000
Labels: app=stock-api
pod-template-hash=7f5c45776b
Annotations: <none>
Status: Running
IP: 10.42.0.9
IPs:
IP: 10.42.0.9
Controlled By: ReplicaSet/stock-api-7f5c45776b
Containers:
stock-api:
Container ID: containerd://846d4c5c282274453c4b2ad8b834f20d2c673468ca18386d7404b07915b81a9c
Image: mpriv32/stock-api:latest
Image ID: docker.io/mpriv32/stock-api@sha256:98189cdf972ed61af79505b58aba2a0166fd012f5be4e0f012b2dffa0ea3dd5f
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 23 May 2022 08:23:16 +0000
Finished: Mon, 23 May 2022 08:23:17 +0000
Ready: False
Restart Count: 24
Environment Variables from:
api-credentials Secret Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-czkv9 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-czkv9:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 3m8s (x457 over 102m) kubelet Back-off restarting failed container
</code></pre>
<p>Dockerfile for my application</p>
<pre><code>FROM python:3.8
COPY app.py .
RUN pip install requests python-dotenv
ARG API_KEY
ARG EMAIL_USER
ARG EMAIL_PASSWORD
ENV API_KEY $API_KEY
ENV EMAIL_USER $EMAIL_USER
ENV EMAIL_PASSWORD $EMAIL_PASSWORD
COPY database.py .
RUN pip install boto3
ARG ACCESS_KEY
ARG SECRET_KEY
ENV ACCESS_KEY $ACCESS_KEY
ENV SECRET_KEY $SECRET_KEY
CMD ["python", "database.py"]
EXPOSE 80
</code></pre>
<p>app.py</p>
<pre><code>from datetime import datetime
import smtplib
import os
import requests
today_date = {datetime.today().strftime("%Y-%m-%d")}
url = (
'https://api.polygon.io/v1/open-close/{stock}/2022-05-10?adjusted=true&apiKey={API_key}'
)
if os.path.isfile('.env'):
from dotenv import load_dotenv
load_dotenv()
def __send_email(stock_data: str) -> None:
gmail_user = os.getenv('EMAIL_USER')
gmail_password = os.getenv('EMAIL_PASSWORD')
mail_from = gmail_user
mail_to = gmail_user
mail_subject = f'Your stock update for {datetime.today().strftime("%m/%d/%Y")}'
mail_message = f'Subject: {mail_subject}\n\n{stock_data}'
server = smtplib.SMTP('smtp.gmail.com', 587)
server.starttls()
server.login(gmail_user, gmail_password)
server.sendmail(mail_from, mail_to, mail_message)
server.close()
class api_data:
response = requests.get(url.format(
stock=os.getenv('stock'),
API_key=os.getenv('API_key')
))
data = response.json()
ticker = data['symbol']
daily_high = data['high']
daily_low = data['low']
daily_close = data['close']
def __email_body():
stock_data = (
f"""For the stock: {api_data.ticker}
\nToday's high was {api_data.daily_high}
\nToday's low was $ {api_data.daily_low}
\n{api_data.ticker} closed at $ {api_data.daily_close}"""
)
__send_email(stock_data)
__email_body()
</code></pre>
<p>database.py</p>
<pre><code>import boto3
import app
import os
#Pulling stock name and daily price from API
stock_name = str(app.api_data.ticker)
stock_price = str(app.api_data.daily_close)
if os.path.isfile('.env'):
from dotenv import load_dotenv
load_dotenv()
ACCESS_KEY = os.getenv('ACCESS_KEY')
SECRET_KEY = os.getenv('SECRET_KEY')
#DynamoDB table name
table_name = "stock_tracker"
dynamodb_client = boto3.client('dynamodb', region_name='us-west-2', aws_access_key_id=ACCESS_KEY, aws_secret_access_key=SECRET_KEY)
stock_info ={
'Company': {'S': stock_name}
,'DailyPrice': {'S': stock_price}
}
print(stock_info)
if __name__ == "__main__":
dynamodb_client.put_item(TableName = table_name, Item = stock_info )
</code></pre>
| <p>Both your Python files are executing batch actions.</p>
<p>The <code>app.py</code> file sends an email, the <code>database.py</code> stores something in a database.</p>
<p>That is not how Pods are supposed to work. Pods are meant to be persistent, or in other words to run continuously. Examples of workloads that are meant to run inside a pod are webservers or DBMSes.</p>
<p>For the two applications you showed us, you might want to use a <code>Job</code> instead (a Job runs your container once and then exits), or perhaps a <code>CronJob</code> if you need your scripts to run repeatedly on a schedule.</p>
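<p>As a sketch of that idea for the scripts above (the schedule and the object name are assumptions; it reuses the same image and <code>api-credentials</code> Secret), a <code>CronJob</code> could look like this:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: stock-report
spec:
  schedule: "0 22 * * 1-5"            # e.g. weekday evenings, adjust as needed
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure    # Jobs cannot use the default Always policy
          containers:
          - name: stock-api
            image: mpriv32/stock-api:latest
            envFrom:
            - secretRef:
                name: api-credentials
</code></pre>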
|
<p>Here is a section of my network policy:</p>
<pre><code>egress:
- to:
- ipBlock:
cidr: 10.0.0.0/32
- ipBlock:
cidr: 10.0.0.1/32
ports:
- protocol: TCP
port: 5978
</code></pre>
<p>I have a doubt whether these IPs will be allowed to connect to the same port.</p>
<p>Basically, if we want to have a different port for a different IP, what can we do?</p>
| <p>I assume the snippet you shared is a portion of a NetworkPolicy.</p>
<p>If so, you can have multiple entries in the <code>egress</code> field, like this for example:</p>
<pre><code>egress:
- to:
- ipBlock:
cidr: 10.0.0.1/32
ports:
- protocol: TCP
port: 5978
- to:
- ipBlock:
cidr: 10.0.0.2/32
ports:
- protocol: TCP
port: 1234
</code></pre>
|
<p>I have a pod with a single binary, and that application has multiple threads. Is it possible in Kubernetes to assign a specific thread to a specific CPU core inside the pod? I am aware of a way to limit the pod to specific cores, but my requirement is to manage the thread-to-core mapping inside the pod. Thank you.</p>
| <p>The lowest level at which you can set up <strong>CPU policies</strong> is the container level. We can divide a Kubernetes cluster into namespaces, and if you create a Pod within a namespace that has a default CPU limit, and any container in that Pod does not specify its own CPU limit, then the control plane assigns the default CPU limit to that container. Visit the <a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/" rel="nofollow noreferrer">official Kubernetes documentation</a> for more details.</p>
<p>From the inside of the Container, the only possibility could be <a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/performance_tuning_guide/sec-tuna-cpu-tuning" rel="nofollow noreferrer">RedHat Tuna</a>. Tuna commands can target individual CPUs.</p>
<p>The last possibility is the <a href="https://kubernetes.io/docs/tasks/administer-cluster/cpu-management-policies/#static-policy" rel="nofollow noreferrer"><strong>Static CPU policy</strong></a> which allows containers in Guaranteed pods with integer CPU requests access to exclusive CPUs on the node.</p>
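<p>For the static policy to hand out exclusive cores, the Pod has to be in the Guaranteed QoS class with integer CPU requests, along these lines (a sketch; the image is a placeholder):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: pinned-app
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # placeholder
    resources:
      requests:
        cpu: "2"            # whole CPUs only
        memory: 1Gi
      limits:
        cpu: "2"            # must equal the request for Guaranteed QoS
        memory: 1Gi
</code></pre>
<p>With the kubelet running with <code>--cpu-manager-policy=static</code>, such a container gets two exclusive cores; pinning individual threads to those cores still has to happen inside the container itself.</p>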
<p>Finally, the following <a href="https://stackoverflow.com/questions/53276398/kubernetes-cpu-multithreading">question</a> is useful for you, regarding multi threads and CPU assignments.</p>
|
<p>How can I make every container run as non-root in Kubernetes?</p>
<p>Containers that do not specify a user, as in this example, and also do not specify a SecurityContext in the corresponding deployment, should still be able to be executed in the cluster - but without running as root. What options do you have here?</p>
<pre><code>FROM debian:jessie
RUN apt-get update && apt-get install -y \
git \
python \
vim
CMD ["echo", "hello world"]
</code></pre>
<pre><code>---
apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-world
namespace: mynamespace
spec:
replicas: 1
selector:
matchLabels:
app: hello-world
template:
metadata:
labels:
app: hello-world
spec:
containers:
- image: hello-world
name: hello-world
</code></pre>
| <p>You can add a Pod Security Policy to your cluster; there is an option (below) you can add to prevent any deployment from running without specifying a non-root user:</p>
<pre><code>spec:
runAsUser:
rule: MustRunAsNonRoot
</code></pre>
<p>For more info about Pod Security Policy, please see this link:
<a href="https://kubernetes.io/docs/concepts/security/pod-security-policy/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/security/pod-security-policy/</a></p>
|
<p>I'm trying to plan out a deployment for an application and am wondering if it makes sense to have multiple containers in one pod vs putting them in separate pods. I expect one of the containers to potentially operate near its allocated memory limit. My understanding is that this presents the risk of this container getting OOMKilled. If that's the case, would it restart the entire pod (so the other container in the pod is restarted as well) or will it only restart the OOMKilled container?</p>
| <p>No, only the specific container.</p>
<p>For the whole pod to be recreated there needs to be a change in the Pod's <code>ownerObject</code> (typically a <code>ReplicaSet</code>) or a scheduling decision by <code>kube-scheduler</code>.</p>
|
<p>I tried to define a PVC:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: master-pvc
spec:
accessModes:
- ReadWriteOnce
- ReadOnlyMany
resources:
requests:
storage: 1Gi
</code></pre>
<p>The nodes that need readonly access should define:</p>
<pre class="lang-yaml prettyprint-override"><code>volumeMounts:
- name: mastervol
mountPath: /data/master/
readOnly: true
</code></pre>
<p>The master that needs to read and write, should define:</p>
<pre class="lang-yaml prettyprint-override"><code>volumeMounts:
- name: mastervol
mountPath: /data/master/
</code></pre>
<p>Is this a valid way to achieve what I want?</p>
| <p>It is not a node that requests a <code>volumeMount</code>, but a container inside a Pod (many Pods may run on the same node).</p>
<p>A PVC is 1-1 bound to a PV using a single access mode (even it supports many) at a time (see also <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">access modes</a>). So, you <strong>cannot</strong> have a PVC operating at the same time as <code>ReadWriteOnce</code> (for some Pods) and <code>ReadOnlyMany</code> (for some other Pods).</p>
<p>If you want to support 1 writer and many readers, one solution is to let them use a <code>ReadWriteOnce</code> volume and ensure that all Pods run on the same node the volume is mounted (e.g by using <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity" rel="nofollow noreferrer">node affinity</a> constraints).</p>
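<p>As a sketch of the co-location idea (the label is an assumption), the reader Pods could use pod affinity against the writer's label so they are always scheduled onto the node where the volume is mounted:</p>
<pre><code>affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: master                    # assumed label on the writer Pod
      topologyKey: kubernetes.io/hostname
</code></pre>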
|
<p>I have a service that lives inside my AKS cluster that requires both internal and external communication to authenticate against.</p>
<p>The current flow is this</p>
<p>The GUI application communicates to the ingress by FQDN, traffic gets handed off to Middleware, and that Middleware talks to another Middleware within the same Namespace and authenticates.</p>
<p>After the first set of authentication is performed, the GUI application then needs to communicate with the second middleware.</p>
<p>To accomplish this, I thought I would set up an external IP on the service so I can communicate with the pods within the namespace, but I am unable to resolve this externally. Within the Cluster, I can telnet and hit the Pod / IP just fine which tells me this is a network issue, I am just unsure as to where.</p>
<p>These are the steps I have performed so far:</p>
<pre><code># creating IP from prefix
az network public-ip create \
--name mw-dev-ip \
--resource-group AKS-Dev-Node \
--allocation-method Static \
--sku Basic \
--version IPv4
# Get IP address
az network public-ip show \
-n mw-dev-ip \
-g AKS-Dev-Node \
--query "ipAddress" \
-o tsv
# Enter your details below.
PIP_RESOURCE_GROUP=AKS-Dev-Node
AKS_RESOURCE_GROUP=AKS-Dev
AKS_CLUSTER_NAME=AKS-Dev-Cluster
#
CLIENT_ID=$(az aks show --resource-group $AKS_RESOURCE_GROUP --name $AKS_CLUSTER_NAME --query "servicePrincipalProfile.clientId" --output tsv)
SUB_ID=$(az account show --query "id" --output tsv)
az role assignment create \
--assignee $CLIENT_ID \
--role "Network Contributor" \
--scope /subscriptions/$SUB_ID/resourceGroups/$PIP_RESOURCE_GROUP
</code></pre>
<p>Doing these steps above created the Load balancer with a public IP address that I was then able to add to my Service manifest as such</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/azure-load-balancer-resource-group: AKS-Dev-Node
name: {{ template "mw.fullname" . }}
labels:
app: {{ template "mw.name" . }}
chart: {{ template "mw.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
loadBalancerIP: {{ .Values.mw_host }}
type: {{ .Values.service.type }}
ports:
- port: {{ .Values.service.port }}
targetPort: {{ .Values.service.port }}
protocol: TCP
name: {{ .Values.service.name }}
- port: {{ .Values.ping.port }}
targetPort: {{ .Values.ping.port }}
protocol: TCP
name: {{ .Values.ping.name }}
selector:
app: {{ template "mw.name" . }}
release: {{ .Release.Name }}
</code></pre>
<p>I am unsure what step I am missing though to hit this from the outside. I checked my NSG and all traffic is open, same for the firewall in front of it. Within the cluster I can telnet and hit it just fine as well as if I port forward the pod endpoint directly.</p>
<p>I can also telnet against the Endpoint IP of the pod(s) without issue as well; it's just routing this external IP to the endpoints, to hit the service from the outside, that fails.</p>
<p>Any helpful suggestions would be greatly appreciated.</p>
| <p>Never mind, the problem was that I only updated the NSG for the AKS Node Resource Group, I did not update the AKS Resource Group as well. Once I exposed the ports to the Cluster itself, traffic ran through just fine.</p>
|
<p>I have <strong>sample.yaml</strong> file that looks like the following:</p>
<pre><code> a:
b:
- "x"
- "y"
- "z"
</code></pre>
<p>and I have another file called <strong>toadd.yaml</strong> that contains the following</p>
<pre><code>- "first to add"
- "second to add"
</code></pre>
<p>I want to modify <strong>sample.yaml</strong> file so that it looks like:</p>
<pre><code> a:
b:
- "x
- "y"
- "z"
- "first to add"
- "second to add"
</code></pre>
<p>Also, I don't want redundant naming! So if "x" is already present in <strong>toadd.yaml</strong>, then I don't want it to be added two times in <strong>sample.yaml/a.b</strong>.</p>
<p>Please note that I have tried the following:</p>
<pre><code>while read line; do
yq '.a.b += ['$line']' sample.yaml
done <toadd.yaml
</code></pre>
<p>and I fell on:</p>
<pre><code>Error: Bad expression, could not find matching `]`
</code></pre>
| <p>If the files are relatively smaller, you could just directly load the second file on to the first. See <a href="https://mikefarah.gitbook.io/yq/operators/multiply-merge#merge-two-files-together" rel="nofollow noreferrer">Merging two files together</a></p>
<pre class="lang-none prettyprint-override"><code>yq '.a.b += load("toadd.yaml")' sample.yaml
</code></pre>
<p>Tested on <a href="https://github.com/mikefarah/yq/" rel="nofollow noreferrer">mikefarah/yq</a> version 4.25.1</p>
<p>To solve the redundancy requirement, do a <code>unique</code> operation before forming the array again.</p>
<pre class="lang-none prettyprint-override"><code>yq 'load("toadd.yaml") as $data | .a.b |= ( . + $data | unique )' sample.yaml
</code></pre>
<p>which can be further simplified to just</p>
<pre class="lang-none prettyprint-override"><code>yq '.a.b |= ( . + load("toadd.yaml") | unique )' sample.yaml
</code></pre>
|
<p>I am referring to this link (<a href="https://github.com/vegasbrianc/prometheus" rel="nofollow noreferrer">https://github.com/vegasbrianc/prometheus</a>) to install Prometheus in Docker version into a Ubuntu server (server-1)</p>
<p>I also have a Kubernetes cluster (bare-metal) running in another Ubuntu server (server-2). In this Kubernetes cluster, I have some sample apps being deployed.</p>
<p>I know normally people will deploy Prometheus to the same Kubernetes cluster.</p>
<p>But now, I want to use this Prometheus in server-1 to monitor the apps in Kubernetes cluster in server-2. How to achieve this? Is there any setting/ configuration I have to do to achieve the monitoring?</p>
| <p>Assuming both servers can reach each other, you will need to configure the Prometheus on server-1 to <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config" rel="nofollow noreferrer">scrape the targets</a> running on server-2, which in turn will have to expose their <code>/metrics</code> path in a way that server-1 can access (exposing it via an ingress is the most popular choice).</p>
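<p>As a rough sketch (the job name and the ingress hostname below are placeholders for whatever you expose the apps on), the scrape job on server-1's <code>prometheus.yml</code> would look something like this:</p>
<pre><code>scrape_configs:
  - job_name: 'apps-on-server-2'
    metrics_path: /metrics
    static_configs:
      - targets: ['apps.server-2.example.com:80']
</code></pre>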
|
<p>I am using the Deployment kind to define a pod.
The pod definition contains two containers and one sidecar.</p>
<p>In the Kubernetes events, the containers are started sequentially.
How can we start both containers in parallel?</p>
<p>Container 1 (6s) + Container 2 (5s) + NEG ready (2s) = 13s</p>
<p>If we start the containers in parallel, we can minimize the total pod spin-up time.</p>
| <p><code>In kubernetes events, Execution of container start is in sequential way. How we can start both containers parallely.</code></p>
<p>Although containers are started in <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/kuberuntime/kuberuntime_manager.go#L935" rel="nofollow noreferrer">sequence</a>, the kubelet does <strong>not</strong> wait for a container to enter the running state before starting the next one. The time it takes the kubelet to loop through them is only a fraction of a second. It is the startup and readiness time your containers need that defines how fast the pod can start receiving requests.</p>
|
<p>I am trying to mount a remote path inside a Docker container image using sshfs; the container is going to run on a Kubernetes cluster.</p>
<p><strong>Problem</strong>
The issue is that there is no error, but after the server starts, nginx does not respond. If I comment out the <code>CMD</code> part, it works.</p>
<p>The nginx pod status is <code>CrashLoopBackOff</code>, but when not running sshfs it works.</p>
<p><strong>Dockerfile-nginx</strong></p>
<pre><code>FROM nginx:1.20.2-alpine
RUN apk update && apk upgrade && apk add sshfs
COPY nginx/key.pem /home/files
RUN echo "user_allow_other" | tee -a /etc/fuse.conf
RUN mkdir -p /mnt/shared
RUN chmod 600 /home/files/key.pem
CMD sh -c "sshfs [email protected]:/home /mnt/shared -o debug -o allow_other -o idmap=user,uid=$(id -u),gid=$(id -g) -o ssh_command='ssh -i /home/files/key.pem -o StrictHostKeyChecking=no'"
</code></pre>
<p><strong>Kubernetes Pod</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: kube_nginx:latest
imagePullPolicy: Never
ports:
- containerPort: 80
securityContext:
privileged: true
---
apiVersion: v1
kind: Service
metadata:
name: nginx
spec:
selector:
app: nginx
ports:
- protocol: TCP
port: 80
targetPort: 80
type: LoadBalancer
</code></pre>
<p>Thank you in advance</p>
| <p>I resolved the issue.</p>
<p>If anyone is facing this issue, here is the entire code.</p>
<p><strong>Dockerfile-nginx</strong></p>
<pre><code>FROM nginx:1.20.2-alpine
RUN apk update && apk upgrade && apk add sshfs
COPY nginx/key.pem /home/files
RUN mkdir -p /mnt/shared
RUN chmod 600 /home/files/key.pem
CMD sh -c "sshfs [email protected]:/home /mnt/shared -o IdentityFile=/home/files/key.pem -o StrictHostKeyChecking=no && nginx -g 'daemon off;'"
</code></pre>
<p><strong>Kubernetes Pod</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
hostPID: true
containers:
- name: nginx
image: kube_nginx:latest
imagePullPolicy: Never
ports:
- containerPort: 80
securityContext:
privileged: true
---
apiVersion: v1
kind: Service
metadata:
name: nginx
spec:
selector:
app: nginx
ports:
- protocol: TCP
port: 80
targetPort: 80
type: LoadBalancer
</code></pre>
|
<p>Using Kubernetes v1.16 on AWS I am facing a weird issue while trying to reduce the time it takes to start a pod on a newly spawned node.</p>
<p>By default, a node AMI does not contain any pre-cached docker images, so when a pod is scheduled onto it, its first job is to pull the docker image.</p>
<p>Pulling large docker image can take a while, so the pod takes a long time to run.</p>
<p>Recently I came up with the idea of pre-pulling my large docker image right into the AMI, so that when a pod is scheduled onto it, it won't have to download it. It turns out a lot of people have been doing this for a while, known as "baking" the AMI:</p>
<ul>
<li><p><a href="https://runnable.com/blog/how-we-pre-bake-docker-images-to-reduce-infrastructure-spin-up-time" rel="noreferrer">https://runnable.com/blog/how-we-pre-bake-docker-images-to-reduce-infrastructure-spin-up-time</a></p></li>
<li><p><a href="https://github.com/kubernetes-incubator/kube-aws/issues/505" rel="noreferrer">https://github.com/kubernetes-incubator/kube-aws/issues/505</a></p></li>
<li><p><a href="https://github.com/kubernetes/kops/issues/3378" rel="noreferrer">https://github.com/kubernetes/kops/issues/3378</a></p></li>
</ul>
<p>My issue is that when I generate an AMI with my large image baked into it and use this AMI, everything works as expected and the docker image is not downloaded since it is already present, so the pod starts in about 1 second, but the pod itself runs 1000 times slower than if the docker image was not pre-pulled into the AMI.</p>
<p>What I am doing:</p>
<ul>
<li>start a base working AMI on an EC2 instance</li>
<li>ssh onto it</li>
<li>run docker pull myreposiroty/myimage</li>
<li>right click on the EC2 instance from AWS console & generate an AMI</li>
</ul>
<p>If I don't pre-pull my docker image, everything runs normally. Only when I pre-pull it, using the newly generated AMI, does the pod start within a second but the container run slower than ever before.</p>
<p>My docker image uses GPU resources and is based on the tensorflow/tensorflow:1.14.0-gpu-py3 image. It seems to be related to the combined use of nvidia-docker & tensorflow on GPU-enabled EC2.</p>
<p>If anyone has an idea where this extreme runtime latency comes from, it would be much appreciated.</p>
<p><strong>EDIT #1</strong></p>
<p>Since then I have switched to Packer to build my AMI.
Here is my template file:</p>
<pre><code>{
"builders": [
{
"type": "amazon-ebs",
"access_key": "{{user `aws_access_key`}}",
"secret_key": "{{user `aws_secret_key`}}",
"ami_name": "compute-{{user `environment_name`}}-{{timestamp}}",
"region": "{{user `region`}}",
"instance_type": "{{user `instance`}}",
"ssh_username": "admin",
"source_ami_filter": {
"filters": {
"virtualization-type": "hvm",
"name": "debian-stretch-hvm-x86_64-gp2-*",
"root-device-type": "ebs"
},
"owners":"379101102735",
"most_recent": true
}
}
],
"provisioners": [
{
"execute_command": "sudo env {{ .Vars }} {{ .Path }}",
"scripts": [
"ami/setup_vm.sh"
],
"type": "shell",
"environment_vars": [
"ENVIRONMENT_NAME={{user `environment_name`}}",
"AWS_ACCOUNT_ID={{user `aws_account_id`}}",
"AWS_REGION={{user `region`}}",
"AWS_ACCESS_KEY_ID={{user `aws_access_key`}}",
"AWS_SECRET_ACCESS_KEY={{user `aws_secret_key`}}",
"DOCKER_IMAGE_NAME={{user `docker_image_name`}}"
]
}
],
"post-processors": [
{
"type": "manifest",
"output": "ami/manifest.json",
"strip_path": true
}
],
"variables": {
"aws_access_key": "{{env `AWS_ACCESS_KEY_ID`}}",
"aws_secret_key": "{{env `AWS_SECRET_ACCESS_KEY`}}",
"environment_name": "",
"region": "eu-west-1",
"instance": "g4dn.xlarge",
"aws_account_id":"",
"docker_image_name":""
}
}
</code></pre>
<p>and here is the associated script to configure the AMI for Docker & Nvidia Docker:</p>
<pre><code>#!/bin/bash
cd /tmp
export DEBIAN_FRONTEND=noninteractive
export APT_LISTCHANGES_FRONTEND=noninteractive
# docker
apt-get update
apt-get install -y apt-transport-https ca-certificates curl gnupg2 software-properties-common
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian stretch stable"
apt-get update
apt-get install -y docker-ce
usermod -a -G docker $USER
# graphical drivers
apt-get install -y software-properties-common
wget http://us.download.nvidia.com/XFree86/Linux-x86_64/440.64/NVIDIA-Linux-x86_64-440.64.run
bash NVIDIA-Linux-x86_64-440.64.run -sZ
# nvidia-docker
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | apt-key add -
curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | tee /etc/apt/sources.list.d/nvidia-docker.list
apt-get update
apt-get install -y nvidia-container-toolkit
apt-get install -y nvidia-docker2
cat > /etc/docker/daemon.json <<EOL
{
"default-runtime": "nvidia",
"runtimes": {
"nvidia": {
"path": "/usr/bin/nvidia-container-runtime",
"runtimeArgs": []
}
}
}
EOL
systemctl restart docker
# enable nvidia-persistenced service
cat > /etc/systemd/system/nvidia-persistenced.service <<EOL
[Unit]
Description=NVIDIA Persistence Daemon
Wants=syslog.target
[Service]
Type=forking
PIDFile=/var/run/nvidia-persistenced/nvidia-persistenced.pid
Restart=always
ExecStart=/usr/bin/nvidia-persistenced --verbose
ExecStopPost=/bin/rm -rf /var/run/nvidia-persistenced
[Install]
WantedBy=multi-user.target
EOL
systemctl enable nvidia-persistenced
# prepull docker
apt-get install -y python3-pip
pip3 install awscli --upgrade
aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
docker pull $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$DOCKER_IMAGE_NAME:$ENVIRONMENT_NAME
# Clean up
apt-get -y autoremove
apt-get -y clean
</code></pre>
<p>Anyway, as soon as I put this line:</p>
<pre><code>docker pull $AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$DOCKER_IMAGE_NAME:$ENVIRONMENT_NAME
</code></pre>
<p>I am facing the same weird issue: when pods are scheduled on nodes booted from this AMI, it says "image already present on machine", so it doesn't pull it again, but then the container is slow as hell when using TensorFlow, e.g. tf.Session() takes something like a minute to run.</p>
<p><strong>EDIT #2</strong></p>
<p>Adding extra information regarding what is executed on the pod:</p>
<p>Dockerfile</p>
<pre><code>FROM tensorflow/tensorflow:1.14.0-gpu-py3
CMD ["python", "main.py"]
</code></pre>
<p>main.py</p>
<pre><code>import tensorflow as tf
config = tf.ConfigProto(allow_soft_placement=True)
config.gpu_options.allow_growth = True
sess = tf.Session(graph=tf.Graph(), config=config)
</code></pre>
<p>With only those lines, the TF Session initialization takes up to 1 minute when the image is pre-pulled vs. 1 second when the image is not.</p>
| <p>This is most likely due to the new instance not having "fully downloaded" all parts of the disk. <a href="https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-initialize.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-initialize.html</a> has details on this.</p>
<blockquote>
<p>For volumes that were created from snapshots, the storage blocks must be pulled down from Amazon S3 and written to the volume before you can access them. This preliminary action takes time and can cause a significant increase in the latency of I/O operations the first time each block is accessed. Volume performance is achieved after all blocks have been downloaded and written to the volume.</p>
</blockquote>
<p>When the image is fully downloaded from S3, I assume everything will return to normal speed.</p>
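<p>If you don't want to wait for that to happen lazily, the page above also describes initializing the volume by reading every block once, e.g. with <code>fio</code> or <code>dd</code> (a sketch — the device name is a placeholder and may be e.g. <code>/dev/nvme0n1</code> on newer instance types):</p>
<pre><code>sudo fio --filename=/dev/xvda --rw=read --bs=1M --iodepth=32 \
  --ioengine=libaio --direct=1 --name=volume-initialize
</code></pre>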
|
<p>I have created a database and an accompanying user for it, but it appears I can't do backups with that user, and neither can I add the backup role to the user.
Having checked the documentation, I added another user, this time at the admin database level (<code>use admin</code>), and gave it the backup role.</p>
<p>However, when I attempt to do a backup I get the error <em>Failed: error dumping metadata: error creating directory for metadata file /var/backups/...: mkdir /var/backups/...: permission denied</em></p>
<p>Steps
1.</p>
<pre><code> `kubectl -n <namespace> exec -it <pod_name> -- sh`
</code></pre>
<p>(mongo is running in kubernetes)</p>
<p>2.</p>
<pre><code>`use admin`
</code></pre>
<p>(switch to admin user)</p>
<p>3.</p>
<pre><code>db.createUser( {user: "backupuser", pwd: "abc123", roles: ["root", "userAdminAnyDatabase", "dbAdminAnyDatabase", "readWriteAnyDatabase","backup"], mechanisms:["SCRAM-SHA-256"]})
</code></pre>
<ol start="4">
<li></li>
</ol>
<pre><code> `db.getUsers({ filter: { mechanisms: "SCRAM-SHA-256" } })`
</code></pre>
<p>(Verify if user exists)</p>
<p>5.</p>
<pre><code> mongodump -u backupuser -p abc123 --authenticationDatabase admin -d TESTDB --out /var/backups/dump-25-05-22 --gzip
</code></pre>
<p>Is it even possible to amend permissions for this user in such a case, or should I be looking somewhere else?
Inside the container it seems I can't do any permission updates (for the group), but the user already has all permissions on /var/backups:</p>
<pre><code>ls -la
total 8
drwxr-xr-x 2 root root 4096 Feb 18 2021 .
drwxr-xr-x 1 root root 4096 Feb 18 2021 ..
</code></pre>
<p>I am not convinced either that I should be going even this far. The backup should execute out of the box as per the user I added.</p>
<p>What exactly am I missing ?</p>
| <p>There is nothing to do on the MongoDB side. The user that is running the <code>mongodump</code> command doesn't have the required filesystem permission on the output directory. To check if that's the case, you can try this out: <code>sudo chmod 777 -R /var/backups/</code> before running <code>mongodump</code>.</p>
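<p>Alternatively, you can sidestep the filesystem permission entirely by dumping to a directory the current user can already write to, for example:</p>
<pre><code>mongodump -u backupuser -p abc123 --authenticationDatabase admin -d TESTDB --out /tmp/dump-25-05-22 --gzip
</code></pre>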
|
<p>I am able to get the jobs under the namespace using kubectl command:</p>
<pre><code>kubectl get jobs --namespace xxx
</code></pre>
<p>This is giving me the jobs information</p>
<p><a href="https://i.stack.imgur.com/X0j22.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/X0j22.png" alt="enter image description here" /></a></p>
<p>I would like to print the count of jobs using kubectl command. How to do that?</p>
| <p><code>go-template</code> way, no pipes or installation needed. Just good old kubectl:</p>
<pre><code>kubectl get jobs --namespace xxx -o go-template='{{printf "%d\n" (len .items)}}'
</code></pre>
<p><code>len</code> is a built-in function in <code>go-template</code> that returns the number of elements of its argument, e.g. <code>.items</code>.</p>
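<p>If you don't mind a pipe after all, an equivalent plain-shell alternative is:</p>
<pre><code>kubectl get jobs --namespace xxx --no-headers | wc -l
</code></pre>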
|
<p>I'm new to Kubernetes and trying to point all requests to the domain to another local service.</p>
<p>Both applications are running in the same cluster, under different namespaces.</p>
<p>Example domains
<code>a.domain.com</code> hosting first app
<code>b.domain.com</code> hosting the second app</p>
<p>When I do a <code>curl</code> request from the first app to the second app (<code>b.domain.com</code>). it travels through the internet to the second app.</p>
<p>Usually what I could do is in <code>/etc/hosts</code> point <code>b.domain.com</code> to localhost.</p>
<p>What do we do in this case in Kubernetes?</p>
<p>I was looking into Network Policies but I'm not sure if it correct approach.</p>
<p>Also, as I understand it, we could just call <code>service-name.namespace:port</code> from the first app, but I would like to keep the full URL.</p>
<p>Let me know if you need more details to help me solve this.</p>
| <p>The way to do it is by using the <a href="https://gateway-api.sigs.k8s.io" rel="nofollow noreferrer">Kubernetes Gateway API</a>. Now, it is true that you can deploy your own implementation since this is an Open Source project, but there are a lot of solutions already using it and it would be much easier to learn how to implement those instead.</p>
<p>For what you want, <a href="https://istio.io" rel="nofollow noreferrer">Istio</a> would fit your needs. If your cluster is hosted in a Cloud environment, you can take a look at <a href="https://cloud.google.com/anthos" rel="nofollow noreferrer">Anthos</a>, which is the managed version of Istio.</p>
<p>Finally, take a look at the blog <a href="https://cloud.google.com/blog/products/networking/welcome-to-the-service-mesh-era-introducing-a-new-istio-blog-post-series" rel="nofollow noreferrer">Welcome to the service mesh era</a>, since the traffic management between services is one of the elements of the service mesh paradigm, among others like monitoring, logging, etc.</p>
|
<p>Both Logs/ Exec commands are throwing tls error:</p>
<pre><code>$ kubectl logs <POD-NAME>
Error from server: Get "https://<NODE-PRIVATE-IP>:10250/containerLogs/<NAMESPACE>/<POD-NAME>/<DEPLOYMENT-NAME>": remote error: tls: internal error
</code></pre>
<pre><code>$ kubectl exec -it <POD-NAME> -- sh
Error from server: error dialing backend: remote error: tls: internal error
</code></pre>
| <p>Check if the hostname type setting in your subnet and launch template configuration is using resource name; if that is the case, switch to using IP name instead. I think this is caused by some weird pattern matching going on in the AWS EKS control plane (as of v1.22), where it will not issue a certificate for a node if that node's hostname doesn't match its requirements. You can test this quickly by adding another node group to your cluster with the nodes' hostnames set to IP name.</p>
|
<p>I am migrating <strong>Certificate</strong> from <code>certmanager.k8s.io/v1alpha1</code> to <code>cert-manager.io/v1</code>, however, I am getting this error</p>
<blockquote>
<p>error validating data: ValidationError(Certificate.spec): unknown field "acme" in io.cert-manager.v1.Certificate.spec</p>
</blockquote>
<p>My manifest</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: myapp-com-tls
namespace: default
spec:
secretName: myapp-com-tls
issuerRef:
name: letsencrypt-myapp-issuer
commonName: '*.myapp.com'
dnsNames:
- myapp.com
acme:
config:
- dns01:
provider: google-dns
domains:
- '*.myapp.com'
- myapp.com
</code></pre>
<p>I know that there is no more <code>acme</code>, but how to migrate to a newer version?</p>
| <p>The <code>cert-manager.io/v1</code> API version separates the roles of certificate <a href="https://cert-manager.io/docs/concepts/issuer/" rel="nofollow noreferrer">issuers</a> and certificates.</p>
<p>Basically, you need to <a href="https://cert-manager.io/docs/configuration/" rel="nofollow noreferrer">configure</a> a certificate issuer among the supported ones, like <a href="https://cert-manager.io/docs/configuration/acme/" rel="nofollow noreferrer">ACME</a>.</p>
<p>This issuer can be later used to obtain a certificate.</p>
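<p>As a rough sketch of how the old <code>acme</code>/<code>dns01</code> block maps onto the new API (the ACME server, email, and Cloud DNS project/secret names below are placeholders you would replace with your own):</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-myapp-issuer
  namespace: default
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: [email protected]               # placeholder
    privateKeySecretRef:
      name: letsencrypt-myapp-issuer-key
    solvers:
    - dns01:
        cloudDNS:
          project: my-gcp-project          # placeholder
          serviceAccountSecretRef:
            name: clouddns-dns01-sa        # placeholder
            key: key.json
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: myapp-com-tls
  namespace: default
spec:
  secretName: myapp-com-tls
  issuerRef:
    name: letsencrypt-myapp-issuer
    kind: Issuer
  commonName: '*.myapp.com'
  dnsNames:
  - '*.myapp.com'
  - myapp.com
</code></pre>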
<p>Please consider reading this tutorial about obtaining a certificate from ACME with DNS validation in the <a href="https://cert-manager.io/docs/tutorials/acme/dns-validation/#issuing-an-acme-certificate-using-dns-validation" rel="nofollow noreferrer">cert-manager.io documentation</a>.</p>
|
<p>I'm running a Kubernetes cluster in a public cloud (Azure/AWS/Google Cloud), and I have some non-HTTP services I'd like to expose for users.</p>
<p>For HTTP services, I'd typically use an Ingress resource to expose that service publicly through an addressable DNS entry.</p>
<p>For non-HTTP, TCP-based services (e.g, a database such as PostgreSQL) how should I expose these for public consumption?</p>
<p>I considered using <code>NodePort</code> services, but this requires the nodes themselves to be publicly accessible (relying on <code>kube-proxy</code> to route to the appropriate node). I'd prefer to avoid this if possible.</p>
<p><code>LoadBalancer</code> services seem like another option, though I don't want to create a dedicated cloud load balancer for <em>each</em> TCP service I want to expose.</p>
<p>I'm aware that the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="noreferrer">NGINX Ingress controller supports exposing TCP and UDP services</a>, but that seems to require a static definition of the services you'd like to expose. For my use case, these services are being dynamically created and destroyed, so it's not possible to define these service mappings upfront in a static <code>ConfigMap</code>.</p>
| <p>A couple of years late to reply, but here is what I did for a similar case.</p>
<p>If you don't want to use <code>LoadBalancer</code> at all, then the other option is <code>NodePort</code> as you mention. To make it externally addressable, you can have the pod associate a static IP to its node when it comes up. For example in AWS EC2 you can have an elastic IP (or static external IP in GCP) and associate it when the postgresql pod comes up in its <code>PodSpec</code> using initContainers or a separate container:</p>
<pre><code> initContainers:
- name: eip
image: docker.io/amazon/aws-cli:2.7.1
command:
- /bin/bash
- -xec
- |
INSTANCE_ID="$(curl http://169.254.169.254/latest/meta-data/instance-id)"
aws ec2 associate-address --allocation-id "$EIP_ALLOCATION_ID" --instance-id "$INSTANCE_ID"
env:
- name: EIP_ALLOCATION_ID
value: <your elastic IP allocation ID>
- name: AWS_REGION
value: <region>
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: aws-secret
key: accessKey
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: aws-secret
key: secretKey
</code></pre>
<p>Here the AWS access key and secret key are assumed to be installed in <code>aws-secret</code>. The above example uses environment variables, but to be more secure you can instead mount them from a volume, read them in the script body, and unset them after use.</p>
<p>To alleviate the security concerns related to ports, one option can be to add only a node or two to the security group exposing the port, and use nodeSelector for the pod to stick it to only those nodes. Alternatively you can use "aws ec2 modify-instance-attribute" in the container body above to add the security group to just the node running the pod. Below is a more complete example for AWS EC2 that handles a node having multiple network interfaces:</p>
<pre><code> containers:
- name: eip-sg
image: docker.io/amazon/aws-cli:2.7.1
command:
- /bin/bash
- -xec
- |
INSTANCE_ID=$(curl http://169.254.169.254/latest/meta-data/instance-id)
PRIVATE_IP="$(curl http://169.254.169.254/latest/meta-data/local-ipv4)"
for iface in $(curl http://169.254.169.254/latest/meta-data/network/interfaces/macs/); do
if curl "http://169.254.169.254/latest/meta-data/network/interfaces/macs/$iface/local-ipv4s" | grep -q "$PRIVATE_IP"; then
INTERFACE_FOR_IP="$(curl http://169.254.169.254/latest/meta-data/network/interfaces/macs/$iface/interface-id)"
fi
done
if [ -z "$INTERFACE_FOR_IP" ]; then
aws ec2 associate-address --allocation-id "$EIP_ALLOCATION_ID" --instance-id "$INSTANCE_ID"
else
aws ec2 associate-address --allocation-id "$EIP_ALLOCATION_ID" --network-interface-id "$INTERFACE_FOR_IP" --private-ip-address "$PRIVATE_IP"
fi
aws ec2 modify-instance-attribute --instance-id "$INSTANCE_ID" --groups $FULL_SECURITY_GROUPS
tail -f -s 10 /dev/null
lifecycle:
preStop:
exec:
command:
- /bin/bash
- -ec
- |
INSTANCE_ID=$(curl http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 modify-instance-attribute --instance-id "$INSTANCE_ID" --groups $DEFAULT_SECURITY_GROUPS
env:
- name: EIP_ALLOCATION_ID
value: <your elastic IP allocation ID>
- name: DEFAULT_SECURITY_GROUPS
value: "<sg1> <sg2>"
- name: FULL_SECURITY_GROUPS
value: "<sg1> <sg2> <sg3>"
- name: AWS_REGION
value: <region>
- name: AWS_ACCESS_KEY_ID
valueFrom:
secretKeyRef:
name: aws-secret
key: accessKey
- name: AWS_SECRET_ACCESS_KEY
valueFrom:
secretKeyRef:
name: aws-secret
key: secretKey
</code></pre>
<p>You may also need to set the deployment <code>strategy</code> to be <code>Recreate</code> instead of the default of <code>RollingUpdate</code> otherwise the <code>lifecycle.preStop</code> may get invoked after the container <code>command</code> in case the deployment is restarted using <code>kubectl rollout restart deployment ...</code>.</p>
|
<p>In my terraform config files I create a Kubernetes cluster on GKE and when created, set up a Kubernetes provider to access said cluster and perform various actions such as setting up namespaces.</p>
<p>The problem is that some new namespaces were created in the cluster without Terraform, and now my attempts to import these namespaces into my state seem to fail due to an inability to connect to the cluster, which I believe is due to the following (taken from Terraform's official documentation of the import command):</p>
<blockquote>
<p>The only limitation Terraform has when reading the configuration files is that the import provider configurations must not depend on non-variable inputs. For example, a provider configuration cannot depend on a data source.</p>
</blockquote>
<p>The command I used to import the namespaces is pretty straightforward:</p>
<p><code>terraform import kubernetes_namespace.my_new_namespace my_new_namespace</code></p>
<p>I also tried using the <code>-provider=""</code> and <code>-config=""</code> flags, but to no avail.</p>
<p>My Kubernetes provider configuration is this:</p>
<pre><code>provider "kubernetes" {
version = "~> 1.8"
host = module.gke.endpoint
token = data.google_client_config.current.access_token
cluster_ca_certificate = base64decode(module.gke.cluster_ca_certificate)
}
</code></pre>
<p>An example for a namespace resource I am trying to import is this:</p>
<pre><code>resource "kubernetes_namespace" "my_new_namespace" {
metadata {
name = "my_new_namespace"
}
}
</code></pre>
<p>The import command results in the following:</p>
<blockquote>
<p>Error: Get <a href="http://localhost/api/v1/namespaces/my_new_namespace" rel="noreferrer">http://localhost/api/v1/namespaces/my_new_namespace</a>: dial tcp [::1]:80: connect: connection refused</p>
</blockquote>
<p>It's obvious it's doomed to fail since it's trying to reach <code>localhost</code> instead of the actual cluster IP and configurations.</p>
<p>Is there any workaround for this use case?</p>
<p>Thanks in advance.</p>
| <p>The issue lies with the dynamic data in the provider configuration; the <code>import</code> command doesn't have access to it.</p>
<p>For the process of importing, you have to hardcode the provider values.</p>
<p>Change this:</p>
<pre><code>provider "kubernetes" {
version = "~> 1.8"
host = module.gke.endpoint
token = data.google_client_config.current.access_token
cluster_ca_certificate = base64decode(module.gke.cluster_ca_certificate)
}
</code></pre>
<p>to:</p>
<pre><code>provider "kubernetes" {
version = "~> 1.8"
host = "https://<ip-of-cluster>"
token = "<token>"
cluster_ca_certificate = base64decode(<cert>)
load_config_file = false
}
</code></pre>
<ul>
<li>The token can be retrieved from <code>gcloud auth print-access-token</code>.</li>
<li>The IP and cert can be retrieved by inspecting the created container resource using <code>terraform state show module.gke.google_container_cluster.your_cluster_resource_name_here</code></li>
</ul>
<p>For provider version 2+ you have to drop <code>load_config_file</code>.</p>
<p>Once in place, import and revert the changes on the provider.</p>
|
<p>In Google Cloud Platform, my goal is to have one cluster with a message queue and a pod consuming from it in another cluster, using MCS (Multi Cluster Service). When trying this out with only <em>one</em> cluster it went fairly smoothly. I used the container name with the port number as the endpoint to connect to the redpanda message queue like this:</p>
<img src="https://i.stack.imgur.com/OS5ER.png" height="250">
<p>Now I want to do this between two clusters, but I'm having trouble configuring stuff right. This is my setup:</p>
<p><a href="https://i.stack.imgur.com/GaQPr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GaQPr.png" alt="enter image description here" /></a></p>
<p>I followed <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/multi-cluster-services" rel="nofollow noreferrer">this guide</a> to set the clusters up which seemed to work (hard to tell, but no errors), and the redpanda application inside the pod is configured to be on <code>localhost:9092</code>. Unfortunately, I'm getting a Connection Error when running the consumer on <code>my-service-export.my-ns.svc.clusterset.local:9092</code>.</p>
<p>Is it correct to expose the pod with the message queue on its localhost?</p>
<p>Are there ways I can debug or test the connection between pods easier?</p>
| <p>Ok, got it working. I obviously misread the setup at one point and had to re-do some stuff to get it working.</p>
<p>Also, the <code>my-service-export</code> should probably have the same name as the service you want to export, in my case <code>redpanda</code>.</p>
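<p>For reference, the export object for that service ends up looking roughly like this (GKE's MCS API group, with the name matching the service being exported):</p>
<pre><code>kind: ServiceExport
apiVersion: net.gke.io/v1
metadata:
  namespace: my-ns
  name: redpanda
</code></pre>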
<p>A helpful tool for checking the connection without spinning up a consumer is the simple <code>dnsutils</code> image. Use this pod manifest and change the namespace to <code>my-ns</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: dnsutils
namespace: my-ns # <----
spec:
containers:
- name: dnsutils
image: k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.3
command:
- sleep
- "3600"
imagePullPolicy: IfNotPresent
restartPolicy: Always
</code></pre>
<p>Then spin it up with <code>apply</code>, <code>exec</code> into it, then run <code>host my-service-export.my-ns.svc.clusterset.local</code>. If you get an IP back you are probably good.</p>
|
<p>I am trying to deploy a docker image on a kubernetes cluster.</p>
<p>What I want to achieve on the cluster is the same output as I achieve when I run this command locally (The output will be some generated files)</p>
<pre><code>sudo docker run \
--env ACCEPT_EULA="I_ACCEPT_THE_EULA" \
--volume /my-folder:/opt/data \
--volume /my-folder:/opt/g2 \
test/installer:3.0.0
</code></pre>
<p>What I have created for the deployment on the kubernetes side is this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test
namespace: default
spec:
template:
metadata:
name: test
labels:
app: test
spec:
volumes:
- name: nfs-volume
nfs:
# URL for the NFS server
server: SERVER_HOSTNAME
path: PATH
containers:
- name: test-container
image: DOCKER_IMAGE
resources:
requests:
memory: "1Gi"
cpu: "1000m"
limits:
memory: "2Gi"
cpu: "2000m"
env:
- name: ACCEPT_EULA
value: "I_ACCEPT_THE_EULA"
volumeMounts:
- name: nfs-volume
mountPath: /var/nfs
replicas: 1
selector:
matchLabels:
app: test
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
</code></pre>
<p>The problem that I have is with these two arguments; I cannot understand how to perform the related actions on the Kubernetes side. Any suggestions?</p>
<pre><code>--volume /my-folder:/opt/data
--volume /my-folder:/opt/g2
</code></pre>
<p>Currently I get errors like:
cp: cannot create directory '/opt/test': Permission denied</p>
| <p>try this:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test
namespace: default
spec:
template:
metadata:
name: test
labels:
app: test
spec:
      volumes:
        - name: nfs-volume
          nfs:
            # URL for the NFS server
            server: SERVER_HOSTNAME
            path: PATH
        - name: my-folder
          hostPath:
            path: /my-folder
containers:
- name: test-container
image: test/installer:3.0.0
resources:
requests:
memory: "1Gi"
cpu: "1000m"
limits:
memory: "2Gi"
cpu: "2000m"
env:
- name: ACCEPT_EULA
value: "I_ACCEPT_THE_EULA"
volumeMounts:
- name: nfs-volume
mountPath: /var/nfs
- name: my-folder
mountPath: /opt/data
- name: my-folder
mountPath: /opt/g2
replicas: 1
selector:
matchLabels:
app: test
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 0
</code></pre>
|
<p>When I deploy the Bitnami PostgreSQL Helm Chart (chart version 10.9.4, appVersion: 11.13.0), the passwords for any user are not updated or changed after the first installation.</p>
<p>Let's say that for the first installation I use this command:</p>
<pre><code>helm install postgresql --set postgresqlUsername=rpuser --set postgresqlPassword=rppass --set
postgresqlDatabase=reportportal --set postgresqlPostgresPassword=password2 -f
./reportportal/postgresql/values.yaml ./charts/postgresql
</code></pre>
<p>Deleting the Helm release also deletes the stateful set. After that, if I try to install PostgreSQL the same way but with different password values, these won't be updated and will keep the previous ones from the first installation.</p>
<p>Is there something I'm missing regarding where the users' passwords are stored? Do I have to remove the PV and PVC, or do they have nothing to do with this? (I know I can change the passwords using psql commands, but I'm failing to understand why this happens)</p>
| <p>The database password and all of the other database data is stored in the Kubernetes PersistentVolume. Helm won't delete the PersistentVolumeClaim by default, so even if you <code>helm uninstall && helm install</code> the chart, it will still use the old database data and the old login credentials.</p>
<p><a href="https://docs.helm.sh/docs/helm/helm_uninstall/" rel="nofollow noreferrer"><code>helm uninstall</code></a> doesn't have an option to delete the PVC. This matches the standard behavior of a Kubernetes StatefulSet (there is an <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-retention" rel="nofollow noreferrer">alpha option to automatically delete the PVC</a> but it needs to be enabled at the cluster level and also requires modifying the StatefulSet object). When you uninstall the chart, you also need to manually delete the PVC:</p>
<pre class="lang-bash prettyprint-override"><code>helm uninstall postgresql
kubectl delete pvc postgresql-primary-data-0
helm install postgresql ... ./charts/postgresql
</code></pre>
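<p>If you are unsure of the exact PVC name, list the claims first, or (assuming the usual Bitnami chart labels are present) target them by label:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl get pvc
kubectl delete pvc -l app.kubernetes.io/instance=postgresql
</code></pre>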
|
<p>Recently some of the microservice AMQP applications were disconnected, according to the Solace event logs. However, the microservice AMQP applications did not detect any "CONNECTION_CLOSE" event, and they did not trigger a DISCONNECT action.</p>
<p>Is there any documentation of the reasons and an explanation of what causes them? How can I troubleshoot to find more information?</p>
<ul>
<li>K8s : Using Microk8s (3 Nodes)</li>
<li>Microservices Application : Using NodeJS (AMQP-PROMISE)</li>
<li>Solace : Using Docker-Compose (Version 9.12.1.17) - Outside the K8s Cluster</li>
</ul>
<pre><code>2022-05-21T01:00:10.139+00:00 <local3.notice> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_UNBIND: default #amqp/client/cc91953864d36d09/food-input-adaptor/d874f7c9-9620-2243-b12f-3fe039a4f1eb Client (59) #amqp/client/cc91953864d36d09/food-in
put-adaptor/d874f7c9-9620-2243-b12f-3fe039a4f1eb username testAccount Unbind to Flow Id (78), ForwardingMode(StoreAndForward), final statistics - flow(255, 0, 0, 0, 0, 0, 0, 0, 2997, 0), isActive(No), Reason(Client disconnected)
2022-05-21T01:00:10.141+00:00 <local3.info> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_DISCONNECT: default #amqp/client/cc91953864d36d09/food-input-adaptor/d874f7c9-9620-2243-b12f-3fe039a4f1eb Client (59) #amqp/client/cc91953864d36d09/food-
input-adaptor/d874f7c9-9620-2243-b12f-3fe039a4f1eb username testAccount WebSessionId (N/A) reason(Peer TCP Reset) final statistics - dp(104, 2955, 2951, 2997, 3055, 5952, 6915, 118733, 3991887, 1496290, 3998802, 1615023, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0) conn(0, 0, 192.168.105.161:60776, CLOSD, 0, 0, 0) zip(0, 0, 0, 0, 0.00, 0.00, 0, 0, 0, 0, 0, 0, 0, 0) web(0, 0, 0, 0, 0, 0, 0), SslVersion(), SslCipher()
2022-05-21T01:00:10.613+00:00 <local3.info> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_DISCONNECT: default #amqp/client/26253499ce1fbce5/a2c7ad9b-540d-4116-85cf-e8dfe8d43d71 Client (4) #amqp/client/26253499ce1fbce5/a2c7ad9b-540d-4116-85cf
-e8dfe8d43d71 username testAccount WebSessionId (N/A) reason(Peer TCP Reset) final statistics - dp(2, 2, 0, 0, 2, 2, 223, 401, 0, 0, 223, 401, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) conn(0, 0, 192.168.105.161:38101, CLOSD, 0, 0, 0) zip(0, 0, 0, 0, 0.
00, 0.00, 0, 0, 0, 0, 0, 0, 0, 0) web(0, 0, 0, 0, 0, 0, 0), SslVersion(), SslCipher()
2022-05-21T01:00:13.141+00:00 <local3.info> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_CLOSE_FLOW: default #amqp/client/cc91953864d36d09/food-input-adaptor/d874f7c9-9620-2243-b12f-3fe039a4f1eb Client (59) #amqp/client/cc91953864d36d09/food-
input-adaptor/d874f7c9-9620-2243-b12f-3fe039a4f1eb username testAccount Pub flow session flow name 8b608e84651042bbaa485cdea5fd00ef (7), transacted session id -1, publisher id 6, last message id 4573, window size 247, final statistics - flow
(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2951)
2022-05-21T01:01:10.158+00:00 <local3.notice> 031cdc6fee4f event: VPN: VPN_AD_QENDPT_DELETE: default - Message VPN (0) Topic Endpoint food-input-adaptor/d874f7c9-9620-2243-b12f-3fe039a4f1eb/food-input-adaptor deleted, final statistics - sp
ool(145051, 145023, 145051, 0, 0, 0, 1202584, 2997) bind(1, 1, 0, 0, 0)
2022-05-21T01:31:46.979+00:00 <local3.notice> 031cdc6fee4f event: SYSTEM: SYSTEM_AUTHENTICATION_SESSION_OPENED: - - SEMP session 192.168.7.1-48 internal authentication opened for user localAccount (admin)
2022-05-21T01:35:13.360+00:00 <local3.notice> 031cdc6fee4f event: SYSTEM: SYSTEM_AUTHENTICATION_SESSION_OPENED: - - CLI session pts/0 [10572] internal authentication opened for user appuser (appuser)
</code></pre>
<p>After a while</p>
<pre><code>2022-05-21T01:51:36.127+00:00 <local3.notice> 031cdc6fee4f event: VPN: VPN_AD_QENDPT_CREATE: default - Message VPN (0) Topic Endpoint food-input-adaptor/703ee98b-f552-a94f-bb00-7b59cf576ae4/food-input-adaptor created
2022-05-21T01:51:36.130+00:00 <local3.notice> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_BIND_SUCCESS: default #amqp/client/4b2080ad03c99e16/food-input-adaptor/703ee98b-f552-a94f-bb00-7b59cf576ae4 Client (87) #amqp/client/4b2080ad03c99e16/
food-input-adaptor/703ee98b-f552-a94f-bb00-7b59cf576ae4 username testAccount Bind to Non-Durable Topic Endpoint food-input-adaptor/703ee98b-f552-a94f-bb00-7b59cf576ae4/food-input-adaptor Topic(caas/food/input), AccessType(Exclusive), Quota(5000M
B), MaxMessageSize(10000000B), AllOthersPermission(No-Access), RespectTTL(No), RejectMsgToSenderOnDiscard(No), ReplayFrom(N/A), GrantedPermission(Read|Consume|Modify-Topic|Delete), FlowType(Consumer-Redelivery), FlowId(85), ForwardingMod
e(StoreAndForward), MaxRedelivery(0), TransactedSessionId(-1) completed
2022-05-21T01:51:39.692+00:00 <local3.notice> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_UNBIND: default #amqp/client/4b2080ad03c99e16/food-input-adaptor/703ee98b-f552-a94f-bb00-7b59cf576ae4 Client (87) #amqp/client/4b2080ad03c99e16/food-in
put-adaptor/703ee98b-f552-a94f-bb00-7b59cf576ae4 username testAccount Unbind to Flow Id (85), ForwardingMode(StoreAndForward), final statistics - flow(1, 0, 0, 0, 0, 0, 0, 0, 0, 0), isActive(No), Reason(Client disconnected)
2022-05-21T01:51:39.693+00:00 <local3.info> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_DISCONNECT: default #amqp/client/4b2080ad03c99e16/food-input-adaptor/703ee98b-f552-a94f-bb00-7b59cf576ae4 Client (87) #amqp/client/4b2080ad03c99e16/food-
input-adaptor/703ee98b-f552-a94f-bb00-7b59cf576ae4 username testAccount WebSessionId (N/A) reason(Peer TCP Reset) final statistics - dp(5, 4, 0, 0, 5, 4, 417, 693, 0, 0, 417, 693, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) conn(0, 0, 192.168.7.1:63510, C
LOSD, 0, 0, 0) zip(0, 0, 0, 0, 0.00, 0.00, 0, 0, 0, 0, 0, 0, 0, 0) web(0, 0, 0, 0, 0, 0, 0), SslVersion(), SslCipher()
2022-05-21T01:51:42.693+00:00 <local3.info> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_CLOSE_FLOW: default #amqp/client/4b2080ad03c99e16/food-input-adaptor/703ee98b-f552-a94f-bb00-7b59cf576ae4 Client (87) #amqp/client/4b2080ad03c99e16/food-
input-adaptor/703ee98b-f552-a94f-bb00-7b59cf576ae4 username testAccount Pub flow session flow name 9ad69d02d736443489115e34529e6e68 (22), transacted session id -1, publisher id 21, last message id 1544, window size 247, final statistics - fl
ow(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)
2022-05-21T01:51:42.818+00:00 <local3.notice> 031cdc6fee4f event: SYSTEM: SYSTEM_AUTHENTICATION_SESSION_CLOSED: - - CLI session pts/0 [10572] closed for user appuser (appuser)
2022-05-21T01:52:39.712+00:00 <local3.notice> 031cdc6fee4f event: VPN: VPN_AD_QENDPT_DELETE: default - Message VPN (0) Topic Endpoint food-input-adaptor/703ee98b-f552-a94f-bb00-7b59cf576ae4/food-input-adaptor deleted, final statistics - sp
ool(0, 0, 0, 0, 0, 0, 0, 0) bind(1, 1, 0, 0, 0)
2022-05-21T01:53:54.892+00:00 <local3.info> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_CONNECT: default #amqp/client/eb13bf695e4cdc3a Client (86) #amqp/client/eb13bf695e4cdc3a username testAccount OriginalClientUsername(testAccount) WebSessionId
(N/A) connected to 172.22.0.2:5672 from 192.168.7.1:63572 version(Unknown) platform(Unknown) SslVersion() SslCipher() APIuser(Unknown) authScheme(Basic) authorizationGroup() clientProfile(default) ACLProfile(default) SSLDowngradedToPlain
Text(No) SSLNegotiatedTo() SslRevocation(Not Checked) Capabilities() SslValidityNotAfter(-)
2022-05-21T01:53:54.919+00:00 <local3.notice> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_NAME_CHANGE: default #amqp/client/eb13bf695e4cdc3a/food-input-adaptor/c296075d-66ae-5843-9a49-118437114e88 Client (86) #amqp/client/eb13bf695e4cdc3a/f
pl-input-adaptor/c296075d-66ae-5843-9a49-118437114e88 username testAccount changed name from #amqp/client/eb13bf695e4cdc3a
2022-05-21T01:53:54.964+00:00 <local3.info> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_OPEN_FLOW: default #amqp/client/eb13bf695e4cdc3a/food-input-adaptor/c296075d-66ae-5843-9a49-118437114e88 Client (86) #amqp/client/eb13bf695e4cdc3a/food-i
nput-adaptor/c296075d-66ae-5843-9a49-118437114e88 username testAccount Pub flow session flow name f18e140a02a9453ab6eb03425ee7d3f9 (23), transacted session id -1, publisher id 22, last message id 1291, window size 247
2022-05-21T01:53:54.979+00:00 <local3.notice> 031cdc6fee4f event: VPN: VPN_AD_QENDPT_CREATE: default - Message VPN (0) Topic Endpoint food-input-adaptor/c296075d-66ae-5843-9a49-118437114e88/food-input-adaptor created
2022-05-21T01:53:54.983+00:00 <local3.notice> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_BIND_SUCCESS: default #amqp/client/eb13bf695e4cdc3a/food-input-adaptor/c296075d-66ae-5843-9a49-118437114e88 Client (86) #amqp/client/eb13bf695e4cdc3a/
food-input-adaptor/c296075d-66ae-5843-9a49-118437114e88 username testAccount Bind to Non-Durable Topic Endpoint food-input-adaptor/c296075d-66ae-5843-9a49-118437114e88/food-input-adaptor Topic(caas/food/input), AccessType(Exclusive), Quota(5000M
B), MaxMessageSize(10000000B), AllOthersPermission(No-Access), RespectTTL(No), RejectMsgToSenderOnDiscard(No), ReplayFrom(N/A), GrantedPermission(Read|Consume|Modify-Topic|Delete), FlowType(Consumer-Redelivery), FlowId(86), ForwardingMod
e(StoreAndForward), MaxRedelivery(0), TransactedSessionId(-1) completed
2022-05-21T01:53:56.502+00:00 <local3.notice> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_UNBIND: default #amqp/client/eb13bf695e4cdc3a/food-input-adaptor/c296075d-66ae-5843-9a49-118437114e88 Client (86) #amqp/client/eb13bf695e4cdc3a/food-in
put-adaptor/c296075d-66ae-5843-9a49-118437114e88 username testAccount Unbind to Flow Id (86), ForwardingMode(StoreAndForward), final statistics - flow(1, 0, 0, 0, 0, 0, 0, 0, 0, 0), isActive(No), Reason(Client disconnected)
2022-05-21T01:53:56.503+00:00 <local3.info> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_DISCONNECT: default #amqp/client/eb13bf695e4cdc3a/food-input-adaptor/c296075d-66ae-5843-9a49-118437114e88 Client (86) #amqp/client/eb13bf695e4cdc3a/food-
input-adaptor/c296075d-66ae-5843-9a49-118437114e88 username testAccount WebSessionId (N/A) reason(Peer TCP Reset) final statistics - dp(5, 4, 0, 0, 5, 4, 417, 693, 0, 0, 417, 693, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) conn(0, 0, 192.168.7.1:63572, C
LOSD, 0, 0, 0) zip(0, 0, 0, 0, 0.00, 0.00, 0, 0, 0, 0, 0, 0, 0, 0) web(0, 0, 0, 0, 0, 0, 0), SslVersion(), SslCipher()
2022-05-21T01:53:59.503+00:00 <local3.info> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_CLOSE_FLOW: default #amqp/client/eb13bf695e4cdc3a/food-input-adaptor/c296075d-66ae-5843-9a49-118437114e88 Client (86) #amqp/client/eb13bf695e4cdc3a/food-
input-adaptor/c296075d-66ae-5843-9a49-118437114e88 username testAccount Pub flow session flow name f18e140a02a9453ab6eb03425ee7d3f9 (23), transacted session id -1, publisher id 22, last message id 1291, window size 247, final statistics - fl
ow(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)
2022-05-21T01:54:56.521+00:00 <local3.notice> 031cdc6fee4f event: VPN: VPN_AD_QENDPT_DELETE: default - Message VPN (0) Topic Endpoint food-input-adaptor/c296075d-66ae-5843-9a49-118437114e88/food-input-adaptor deleted, final statistics - sp
ool(0, 0, 0, 0, 0, 0, 0, 0) bind(1, 1, 0, 0, 0)
2022-05-21T01:55:15.625+00:00 <local3.info> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_CONNECT: default #amqp/client/5d9a55a247c9c1bf Client (68) #amqp/client/5d9a55a247c9c1bf username testAccount OriginalClientUsername(testAccount) WebSessionId
(N/A) connected to 172.22.0.2:5672 from 192.168.7.1:63584 version(Unknown) platform(Unknown) SslVersion() SslCipher() APIuser(Unknown) authScheme(Basic) authorizationGroup() clientProfile(default) ACLProfile(default) SSLDowngradedToPlain
Text(No) SSLNegotiatedTo() SslRevocation(Not Checked) Capabilities() SslValidityNotAfter(-)
2022-05-21T01:55:15.649+00:00 <local3.notice> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_NAME_CHANGE: default #amqp/client/5d9a55a247c9c1bf/food-input-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80 Client (68) #amqp/client/5d9a55a247c9c1bf/f
pl-input-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80 username testAccount changed name from #amqp/client/5d9a55a247c9c1bf
2022-05-21T01:55:15.696+00:00 <local3.info> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_OPEN_FLOW: default #amqp/client/5d9a55a247c9c1bf/food-input-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80 Client (68) #amqp/client/5d9a55a247c9c1bf/food-i
nput-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80 username testAccount Pub flow session flow name e216e8c8310d499083f1684609ef573f (24), transacted session id -1, publisher id 23, last message id 1542, window size 247
2022-05-21T01:55:15.711+00:00 <local3.notice> 031cdc6fee4f event: VPN: VPN_AD_QENDPT_CREATE: default - Message VPN (0) Topic Endpoint food-input-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80/food-input-adaptor created
2022-05-21T01:55:15.713+00:00 <local3.notice> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_BIND_SUCCESS: default #amqp/client/5d9a55a247c9c1bf/food-input-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80 Client (68) #amqp/client/5d9a55a247c9c1bf/
food-input-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80 username testAccount Bind to Non-Durable Topic Endpoint food-input-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80/food-input-adaptor Topic(caas/food/input), AccessType(Exclusive), Quota(5000M
B), MaxMessageSize(10000000B), AllOthersPermission(No-Access), RespectTTL(No), RejectMsgToSenderOnDiscard(No), ReplayFrom(N/A), GrantedPermission(Read|Consume|Modify-Topic|Delete), FlowType(Consumer-Redelivery), FlowId(87), ForwardingMod
e(StoreAndForward), MaxRedelivery(0), TransactedSessionId(-1) completed
2022-05-21T01:55:16.569+00:00 <local3.notice> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_UNBIND: default #amqp/client/5d9a55a247c9c1bf/food-input-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80 Client (68) #amqp/client/5d9a55a247c9c1bf/food-in
put-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80 username testAccount Unbind to Flow Id (87), ForwardingMode(StoreAndForward), final statistics - flow(1, 0, 0, 0, 0, 0, 0, 0, 0, 0), isActive(No), Reason(Client disconnected)
2022-05-21T01:55:16.570+00:00 <local3.info> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_DISCONNECT: default #amqp/client/5d9a55a247c9c1bf/food-input-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80 Client (68) #amqp/client/5d9a55a247c9c1bf/food-
---Press any key to continue, or `q' to quit---
input-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80 username testAccount WebSessionId (N/A) reason(Peer TCP Reset) final statistics - dp(5, 4, 0, 0, 5, 4, 417, 693, 0, 0, 417, 693, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) conn(0, 0, 192.168.7.1:63584, C
LOSD, 0, 0, 0) zip(0, 0, 0, 0, 0.00, 0.00, 0, 0, 0, 0, 0, 0, 0, 0) web(0, 0, 0, 0, 0, 0, 0), SslVersion(), SslCipher()
2022-05-21T01:55:19.570+00:00 <local3.info> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_CLOSE_FLOW: default #amqp/client/5d9a55a247c9c1bf/food-input-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80 Client (68) #amqp/client/5d9a55a247c9c1bf/food-
input-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80 username testAccount Pub flow session flow name e216e8c8310d499083f1684609ef573f (24), transacted session id -1, publisher id 23, last message id 1542, window size 247, final statistics - fl
ow(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)
2022-05-21T01:56:16.589+00:00 <local3.notice> 031cdc6fee4f event: VPN: VPN_AD_QENDPT_DELETE: default - Message VPN (0) Topic Endpoint food-input-adaptor/44920b4d-7a3e-f844-96e3-e67cebc3ac80/food-input-adaptor deleted, final statistics - sp
ool(0, 0, 0, 0, 0, 0, 0, 0) bind(1, 1, 0, 0, 0)
2022-05-21T01:56:27.372+00:00 <local3.info> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_CONNECT: default #amqp/client/b45f6c87b0e76616 Client (67) #amqp/client/b45f6c87b0e76616 username testAccount OriginalClientUsername(testAccount) WebSessionId
(N/A) connected to 172.22.0.2:5672 from 192.168.7.1:63599 version(Unknown) platform(Unknown) SslVersion() SslCipher() APIuser(Unknown) authScheme(Basic) authorizationGroup() clientProfile(default) ACLProfile(default) SSLDowngradedToPlain
Text(No) SSLNegotiatedTo() SslRevocation(Not Checked) Capabilities() SslValidityNotAfter(-)
2022-05-21T01:56:27.397+00:00 <local3.notice> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_NAME_CHANGE: default #amqp/client/b45f6c87b0e76616/food-input-adaptor/bc685049-76db-7345-9945-8cc07b6035ae Client (67) #amqp/client/b45f6c87b0e76616/f
pl-input-adaptor/bc685049-76db-7345-9945-8cc07b6035ae username testAccount changed name from #amqp/client/b45f6c87b0e76616
2022-05-21T01:56:27.441+00:00 <local3.info> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_OPEN_FLOW: default #amqp/client/b45f6c87b0e76616/food-input-adaptor/bc685049-76db-7345-9945-8cc07b6035ae Client (67) #amqp/client/b45f6c87b0e76616/food-i
nput-adaptor/bc685049-76db-7345-9945-8cc07b6035ae username testAccount Pub flow session flow name 04c9bfb86bac4f3dba9dba89b4724cde (25), transacted session id -1, publisher id 24, last message id 1290, window size 247
2022-05-21T01:56:27.454+00:00 <local3.notice> 031cdc6fee4f event: VPN: VPN_AD_QENDPT_CREATE: default - Message VPN (0) Topic Endpoint food-input-adaptor/bc685049-76db-7345-9945-8cc07b6035ae/food-input-adaptor created
2022-05-21T01:56:27.456+00:00 <local3.notice> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_BIND_SUCCESS: default #amqp/client/b45f6c87b0e76616/food-input-adaptor/bc685049-76db-7345-9945-8cc07b6035ae Client (67) #amqp/client/b45f6c87b0e76616/
food-input-adaptor/bc685049-76db-7345-9945-8cc07b6035ae username testAccount Bind to Non-Durable Topic Endpoint food-input-adaptor/bc685049-76db-7345-9945-8cc07b6035ae/food-input-adaptor Topic(caas/food/input), AccessType(Exclusive), Quota(5000M
B), MaxMessageSize(10000000B), AllOthersPermission(No-Access), RespectTTL(No), RejectMsgToSenderOnDiscard(No), ReplayFrom(N/A), GrantedPermission(Read|Consume|Modify-Topic|Delete), FlowType(Consumer-Redelivery), FlowId(88), ForwardingMod
e(StoreAndForward), MaxRedelivery(0), TransactedSessionId(-1) completed
2022-05-21T01:56:32.434+00:00 <local3.notice> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_UNBIND: default #amqp/client/b45f6c87b0e76616/food-input-adaptor/bc685049-76db-7345-9945-8cc07b6035ae Client (67) #amqp/client/b45f6c87b0e76616/food-in
put-adaptor/bc685049-76db-7345-9945-8cc07b6035ae username testAccount Unbind to Flow Id (88), ForwardingMode(StoreAndForward), final statistics - flow(1, 0, 0, 0, 0, 0, 0, 0, 0, 0), isActive(No), Reason(Client disconnected)
2022-05-21T01:56:32.435+00:00 <local3.info> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_DISCONNECT: default #amqp/client/b45f6c87b0e76616/food-input-adaptor/bc685049-76db-7345-9945-8cc07b6035ae Client (67) #amqp/client/b45f6c87b0e76616/food-
input-adaptor/bc685049-76db-7345-9945-8cc07b6035ae username testAccount WebSessionId (N/A) reason(Client Disconnect Received) final statistics - dp(7, 4, 0, 0, 7, 4, 493, 693, 0, 0, 493, 693, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) conn(0, 0, 192.168.
8.1:63599, ESTAB, 0, 0, 0) zip(0, 0, 0, 0, 0.00, 0.00, 0, 0, 0, 0, 0, 0, 0, 0) web(0, 0, 0, 0, 0, 0, 0), SslVersion(), SslCipher()
2022-05-21T01:56:35.436+00:00 <local3.info> 031cdc6fee4f event: CLIENT: CLIENT_CLIENT_CLOSE_FLOW: default #amqp/client/b45f6c87b0e76616/food-input-adaptor/bc685049-76db-7345-9945-8cc07b6035ae Client (67) #amqp/client/b45f6c87b0e76616/food-
input-adaptor/bc685049-76db-7345-9945-8cc07b6035ae username testAccount Pub flow session flow name 04c9bfb86bac4f3dba9dba89b4724cde (25), transacted session id -1, publisher id 24, last message id 1290, window size 247, final statistics - fl
ow(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0)
2022-05-21T01:57:32.453+00:00 <local3.notice> 031cdc6fee4f event: VPN: VPN_AD_QENDPT_DELETE: default - Message VPN (0) Topic Endpoint food-input-adaptor/bc685049-76db-7345-9945-8cc07b6035ae/food-input-adaptor deleted, final statistics - sp
ool(0, 0, 0, 0, 0, 0, 0, 0) bind(1, 1, 0, 0, 0)
</code></pre>
| <p>See the discussion on this in the Solace Developer Community at <a href="https://solace.community/discussion/1342/reason-for-solace-client-client-disconnect" rel="nofollow noreferrer">https://solace.community/discussion/1342/reason-for-solace-client-client-disconnect</a></p>
|
<p>Is there any Kubernetes API or kubectl command to delete older docker images that are lying on the node?</p>
<p>I know we can delete them with <code>docker rmi <image></code>, but I want to do it remotely through an API.</p>
<p>Any alternative?</p>
| <p>The <code>kubelet</code> removes unused images automatically when the docker disk fullness reaches a configurable threshold.</p>
<p>See <a href="https://kubernetes.io/docs/concepts/architecture/garbage-collection/#containers-images" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/architecture/garbage-collection/#containers-images</a> for details on how that works.</p>
<p>There is no API-led way to forcefully delete an image.</p>
<p>If you really need to manually clean up an image from nodes, you could run a container that connects to the docker daemon and runs <code>docker rmi <image></code> there, but it smells like an antipattern to me.</p>
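<p>If you do go down that route anyway, a minimal sketch could look like the pod below (assuming the node still uses the Docker runtime; the node name and image name are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: image-cleanup
spec:
  nodeName: my-node                      # node to clean up (placeholder)
  restartPolicy: Never
  containers:
  - name: cleanup
    image: docker:20.10-cli
    command: ["docker", "rmi", "my-registry/my-image:old-tag"]   # placeholder image
    volumeMounts:
    - name: docker-sock
      mountPath: /var/run/docker.sock
  volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
</code></pre>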
|
<p>I built a new cluster with Terraform for AWS EKS, with a single node group containing a single node.</p>
<p>This cluster is using <code>1.22</code> and I can't seem to get anything to work correctly.</p>
<p>Istio itself installs fine; I have installed versions <code>1.12.1</code>, <code>1.13.2</code>, <code>1.13.3</code> & <code>1.13.4</code>, and all seem to have the same issue with auto-injecting the sidecar.</p>
<p><code>Error creating: Internal error occurred: failed calling webhook "namespace.sidecar-injector.istio.io": failed to call webhook: Post "https://istiod.istio-system.svc:443/inject?timeout=10s": context deadline exceeded</code></p>
<p>But there are also other issues with the cluster, even without using Istio. My application image is pulled and the pod starts fine, but it cannot connect to the database. This is a DB external to the cluster - no other build (running on Azure) has any issues connecting to it.</p>
<p>I am not sure if this is why the application can't connect to the external DB, but could the sidecar issue have something to do with <code>BoundServiceAccountTokenVolume</code>?</p>
<p>There is a warning about it being enabled on all clusters from 1.21 - a little odd, as I have another application with Istio running on another cluster with 1.21 on AWS EKS!</p>
<p>I also have this application running with istio without any issues in Azure on 1.22</p>
| <p>I seem to have fixed it :)</p>
<p>It seems to be a port issue with the security groups. I was letting terraform build its own group.</p>
<p>When I opened all the ports up in the 'inbound' section it seemed to work.</p>
<p>I then closed them all again and only opened 80 and 443 - which again stopped Istio from auto-injecting its sidecar</p>
<p>My app was requesting to talk to Istio on port <code>15017</code>, so I opened just that port, alongside ports 80 and 443.</p>
<p>Once that port was opened, my app started to work and got the sidecar from Istio without any issue.</p>
<p>So it seems like the security group stops pod-to-pod communication... unless I have completely messed up my Terraform build in some way.</p>
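<p>If you want to codify the fix rather than open the port by hand, a rough Terraform sketch of the extra rule (the security group references are assumptions - use whatever IDs your EKS module exposes):</p>
<pre><code>resource "aws_security_group_rule" "istiod_webhook" {
  description              = "Allow the EKS control plane to reach istiod's sidecar-injection webhook"
  type                     = "ingress"
  protocol                 = "tcp"
  from_port                = 15017
  to_port                  = 15017
  security_group_id        = module.eks.node_security_group_id    # assumed output name
  source_security_group_id = module.eks.cluster_security_group_id # assumed output name
}
</code></pre>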
|
<p>Looking for a solution to deploy multinode Vespa to Kubernetes. I've referred to the docs, but there is no info about deploying multinode to K8s.</p>
| <p>Please take a look at <a href="https://github.com/vespa-engine/sample-apps/tree/master/examples/operations/basic-search-on-gke" rel="nofollow noreferrer">https://github.com/vespa-engine/sample-apps/tree/master/examples/operations/basic-search-on-gke</a> for a small multinode example - and feel free to submit improvements to both documentation and sample apps, it is a bit slim on Kubernetes. Kristian, Vespa Team</p>
|
<p>Have a simple program as shown below</p>
<pre><code>import pyspark
builder = (
pyspark.sql.SparkSession.builder.appName("MyApp")
.config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
.config(
"spark.sql.catalog.spark_catalog",
"org.apache.spark.sql.delta.catalog.DeltaCatalog",
)
)
spark = builder.getOrCreate()
spark._jsc.hadoopConfiguration().set(
"fs.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFileSystem"
)
spark._jsc.hadoopConfiguration().set(
"fs.AbstractFileSystem.gs.impl", "com.google.cloud.hadoop.fs.gcs.GoogleHadoopFS"
)
df = spark.read.format("delta").load(
"gs://org/delta/bronze/mongodb/registration/audits"
)
print(df.show())
</code></pre>
<p>This is packaged into a container using the below Dockerfile</p>
<pre><code>FROM varunmallya/spark-pi:3.2.1
USER root
ADD gcs-connector-hadoop2-latest.jar $SPARK_HOME/jars
WORKDIR /app
COPY main.py .
</code></pre>
<p>This app is then deployed as a SparkApplication on k8s using the <a href="https://github.com/GoogleCloudPlatform/spark-on-k8s-operator/blob/master/docs/user-guide.md#specifying-deployment-mode" rel="nofollow noreferrer">spark-on-k8s</a> operator</p>
<p>I expected to see 20 rows of data but instead got this exception</p>
<pre><code>java.lang.ClassCastException: cannot assign instance of java.lang.invoke.SerializedLambda to field org.apache.spark.sql.catalyst.expressions.ScalaUDF.f of type scala.Function1 in instance of org.apache.spark.sql.catalyst.expressions.ScalaUDF
</code></pre>
<p>However, when I run this in a local Jupyter notebook I can see the desired output. I have added the necessary package - <em>io.delta:delta-core_2.12:1.2.0</em> - via the CRD and have also ensured the <em>gcs-connector-hadoop2-latest.jar</em> is made available.</p>
<p>What could the issue be?</p>
| <p>Could you try the following <code>Dockerfile</code>:</p>
<pre><code>FROM datamechanics/spark:3.1.1-hadoop-3.2.0-java-8-scala-2.12-python-3.8-dm17
USER root
WORKDIR /app
COPY main.py .
</code></pre>
<p>And then try deploying the <code>SparkApplication</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
name: sparky-pi
namespace: spark
spec:
type: Python
mode: cluster
pythonVersion: "3"
image: <YOUR_IMAGE_GOES_HERE>:latest
mainApplicationFile: local:///app/main.py
sparkVersion: "3.1.1"
restartPolicy:
type: OnFailure
onFailureRetries: 3
onFailureRetryInterval: 10
onSubmissionFailureRetries: 5
onSubmissionFailureRetryInterval: 20
driver:
cores: 1
coreLimit: "1200m"
memory: "512m"
labels:
version: 3.1.1
serviceAccount: spark
executor:
serviceAccount: spark
cores: 1
instances: 1
memory: "512m"
labels:
version: 3.1.1
</code></pre>
<p>I ran this on my Kubernetes cluster and was able to get:</p>
<p><a href="https://i.stack.imgur.com/Iq5pE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iq5pE.png" alt="" /></a></p>
<p>I think here the base image <code>datamechanics/spark:3.1.1-hadoop-3.2.0-java-8-scala-2.12-python-3.8-dm17</code> is key. Props to the folks who put it together!</p>
<p>Source: <a href="https://towardsdatascience.com/optimized-docker-images-for-apache-spark-now-public-on-dockerhub-1f9f8fed1665" rel="nofollow noreferrer">https://towardsdatascience.com/optimized-docker-images-for-apache-spark-now-public-on-dockerhub-1f9f8fed1665</a></p>
|
<p>I'm trying to use a <code>headless service</code> with an <code>endpoint</code> to forward traffic from within my cluster to my local development machine. I want to listen on port 80 on the service and call port 5002 on the endpoint. I have it setup as so:</p>
<p>Headless Service (listening on port 80 with a targetPort of 5002):
<a href="https://i.stack.imgur.com/D07hz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D07hz.png" alt="kubectl describe service web" /></a></p>
<p>Endpoint (pointing to my development computer on port 5002):
<a href="https://i.stack.imgur.com/C5frJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C5frJ.png" alt="kubectl describe endpoints web" /></a></p>
<p>When I try to <code>curl http://web:80</code> from any pod in my cluster on port 80 it times out. If I <code>curl http://web:5002</code> it successfully goes through and hits my development machine. Shouldn't the <code>targetPort</code> make the request to <code>web:80</code> go to my endpoint on port <code>5002</code>?</p>
<p><code>curl web:80</code>
<a href="https://i.stack.imgur.com/sP7z6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sP7z6.png" alt="curl web" /></a></p>
<p><code>curl web:5002</code>
<a href="https://i.stack.imgur.com/GWjQO.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GWjQO.png" alt="curl web:5002" /></a></p>
<p>Some additional info:</p>
<ul>
<li>My cluster and dev machine are in the same local network</li>
<li>I'm using K3S on the cluster</li>
<li>I'm just trying to emulate what Bridge For Kubernetes does</li>
</ul>
<p>Here is the manifest yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: web
namespace: default
spec:
clusterIP: None
ports:
- name: web
port: 80
targetPort: 5002
---
apiVersion: v1
kind: Endpoints
metadata:
name: web
namespace: default
subsets:
- addresses:
- ip: $HOST_IP
ports:
- name: web
port: 5002
protocol: TCP
</code></pre>
| <p>I managed to get it to work by removing <code>clusterIP: None</code>. That makes sense in hindsight: with a headless service there is no cluster IP for kube-proxy to translate, so the service name resolves straight to the endpoint address and the <code>port</code> -> <code>targetPort</code> mapping is never applied. With a regular <code>ClusterIP</code> service the remapping works. My manifest now looks like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: web
spec:
type: ClusterIP
ports:
- name: web
port: 80
targetPort: 5002
---
apiVersion: v1
kind: Endpoints
metadata:
name: web
subsets:
- addresses:
- ip: $HOST_IP
ports:
- name: web
port: 5002
</code></pre>
|
<p>We are using this chart:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v2
appVersion: 1.0
version: 0.0.1
description: Helm chart for setting up Kafka platform (Kafka, Zookeeper, Kafka Rest, Kafka Schema Registry, Kafka Connect, KSQL Server)
name: kafka-platform
dependencies:
- name: cp-helm-charts
version: 0.6.0
repository: "https://confluentinc.github.io/cp-helm-charts"
</code></pre>
<p>We need to create a new local template that does not exist in the public chart, and this new template should access built-in variables as defined in the public chart, for example <a href="https://github.com/confluentinc/cp-helm-charts/blob/master/charts/cp-kafka/Chart.yaml#L4" rel="nofollow noreferrer">.Chart.Name</a>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
appVersion: "1.0"
description: A Helm chart for Confluent Kafka on Kubernetes
name: cp-kafka
version: 0.1.0
</code></pre>
<p>How could we access "cp-kafka" from a local .tpl named template or a regular template? All I manage to access right now is either an error like "at <.Chart.Name>: can't evaluate field Chart in type int" or the value of the local chart, "kafka-platform".</p>
<p>I've searched for this question/answer but although I see things that look alike, I did not find one that fully fits or answers this exact question.</p>
| <p>You can use the <code>.Subcharts</code> map.</p>
<pre><code>{{ $sub := index .Subcharts "cp-kafka" }}
</code></pre>
<p>This will give you all built in variables and even the <code>.Values</code> from the values file.</p>
<p>For example, I create a chart and a subchart:</p>
<pre><code>helm create foo
helm create foo/charts/bar
rm -rf foo/templates/*
rm -rf foo/charts/bar/templates/*
</code></pre>
<p>In bar values I put this:</p>
<pre class="lang-yaml prettyprint-override"><code>some:
values: from
this: subchart
</code></pre>
<p>In foo templates I put this configmap:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: ConfigMap
apiVersion: v1
metadata:
name: example
data:
bar-chart: |
{{- .Subcharts.bar | toYaml | nindent 4 }}
</code></pre>
<p>Now when I render the chart with <code>helm template foo</code> I get the below output:</p>
<pre class="lang-yaml prettyprint-override"><code>---
# Source: foo/templates/cm.yaml
kind: ConfigMap
apiVersion: v1
metadata:
name: example
data:
bar-chart: |
Capabilities:
APIVersions:
- v1
- admissionregistration.k8s.io/v1
- admissionregistration.k8s.io/v1beta1
- internal.apiserver.k8s.io/v1alpha1
- apps/v1
- apps/v1beta1
- apps/v1beta2
- authentication.k8s.io/v1
- authentication.k8s.io/v1beta1
- authorization.k8s.io/v1
- authorization.k8s.io/v1beta1
- autoscaling/v1
- autoscaling/v2
- autoscaling/v2beta1
- autoscaling/v2beta2
- batch/v1
- batch/v1beta1
- certificates.k8s.io/v1
- certificates.k8s.io/v1beta1
- coordination.k8s.io/v1beta1
- coordination.k8s.io/v1
- discovery.k8s.io/v1
- discovery.k8s.io/v1beta1
- events.k8s.io/v1
- events.k8s.io/v1beta1
- extensions/v1beta1
- flowcontrol.apiserver.k8s.io/v1alpha1
- flowcontrol.apiserver.k8s.io/v1beta1
- flowcontrol.apiserver.k8s.io/v1beta2
- networking.k8s.io/v1
- networking.k8s.io/v1beta1
- node.k8s.io/v1
- node.k8s.io/v1alpha1
- node.k8s.io/v1beta1
- policy/v1
- policy/v1beta1
- rbac.authorization.k8s.io/v1
- rbac.authorization.k8s.io/v1beta1
- rbac.authorization.k8s.io/v1alpha1
- scheduling.k8s.io/v1alpha1
- scheduling.k8s.io/v1beta1
- scheduling.k8s.io/v1
- storage.k8s.io/v1beta1
- storage.k8s.io/v1
- storage.k8s.io/v1alpha1
- apiextensions.k8s.io/v1beta1
- apiextensions.k8s.io/v1
HelmVersion:
git_commit: 6e3701edea09e5d55a8ca2aae03a68917630e91b
git_tree_state: clean
go_version: go1.17.5
version: v3.8.2
KubeVersion:
Major: "1"
Minor: "23"
Version: v1.23.0
Chart:
IsRoot: false
apiVersion: v2
appVersion: 1.16.0
description: A Helm chart for Kubernetes
name: bar
type: application
version: 0.1.0
Files:
.helmignore: IyBQYXR0ZXJucyB0byBpZ25vcmUgd2hlbiBidWlsZGluZyBwYWNrYWdlcy4KIyBUaGlzIHN1cHBvcnRzIHNoZWxsIGdsb2IgbWF0Y2hpbmcsIHJlbGF0aXZlIHBhdGggbWF0Y2hpbmcsIGFuZAojIG5lZ2F0aW9uIChwcmVmaXhlZCB3aXRoICEpLiBPbmx5IG9uZSBwYXR0ZXJuIHBlciBsaW5lLgouRFNfU3RvcmUKIyBDb21tb24gVkNTIGRpcnMKLmdpdC8KLmdpdGlnbm9yZQouYnpyLwouYnpyaWdub3JlCi5oZy8KLmhnaWdub3JlCi5zdm4vCiMgQ29tbW9uIGJhY2t1cCBmaWxlcwoqLnN3cAoqLmJhawoqLnRtcAoqLm9yaWcKKn4KIyBWYXJpb3VzIElERXMKLnByb2plY3QKLmlkZWEvCioudG1wcm9qCi52c2NvZGUvCg==
Release:
IsInstall: true
IsUpgrade: false
Name: release-name
Namespace: default
Revision: 1
Service: Helm
Subcharts: {}
Values:
global: {}
some:
this: subchart
values: from
</code></pre>
|
<p>I am trying to integrate Vault and <em>GitLab</em>.</p>
<p>The <em>Vault</em> side is OK, and I am trying to add Vault to our <em>gitlab-ci.yaml</em>, but I am confused about something.</p>
<p>Where should the <em>Vault</em> calls go in the <em>yaml</em>?</p>
<p>We use <em>gitlab ee</em> (community).</p>
<p>Our <em>yaml</em>:</p>
<pre><code>.kaniko-build:
stage: build
before_script:
- mkdir -p /kaniko/.docker
- |
cat <<EOF > /kaniko/.docker/config.json
{
"auths":{
"${CI_REGISTRY}":{
"auth":"$(printf "%s:%s" "${CI_REGISTRY_USER}" "${CI_REGISTRY_PASSWORD}" | base64 | tr -d '\n')"
},
"https://index.docker.io/v1/":{
"auth":"$(printf "%s:%s" "${DOCKERHUB_USERNAME}" "${DOCKERHUB_PASSWORD}" | base64 | tr -d '\n')"
}
}
}
EOF
- cat /kaniko/.docker/config.json
script:
- >-
/kaniko/executor
--context "${CI_PROJECT_DIR}"
--dockerfile "${DOCKERFILE_PATH}"
--destination "${CI_REGISTRY_IMAGE}:${CI_PIPELINE_IID}"
--destination "${CI_REGISTRY_IMAGE}:latest"
--cache
- echo $(date) $(date)
image:
name: gcr.io/kaniko-project/executor:v1.8.0-debug
entrypoint: [""]
test-build:
extends: .kaniko-build
when: manual
variables:
DOCKERFILE_PATH: "devops/test/Dockerfile"
rules:
- if: $CI_COMMIT_BRANCH
exists:
- devops/test/Dockerfile
interruptible: true
</code></pre>
| <p>If you've not already done so, you first need to configure vault for jwt authentication.</p>
<pre><code>vault auth enable -path=jwt/gitlab jwt
</code></pre>
<p>Then configure the new jwt auth with a token validation endpoint that references your gitlab instance.</p>
<pre><code>vault write auth/jwt/gitlab/config \
jwks_url="https://gitlab.example.com/-/jwks" \
bound_issuer="gitlab.example.com"
</code></pre>
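<p>You will also need a role that maps the CI job's JWT claims to a Vault policy (the role name, policy and bound claims below are placeholders):</p>
<pre><code>vault write auth/jwt/gitlab/role/SOME_ROLE_NAME \
    role_type="jwt" \
    policies="my-ci-policy" \
    user_claim="user_email" \
    bound_claims='{"project_id": "42"}' \
    token_explicit_max_ttl=60
</code></pre>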
<p>Now in your gitlab-ci.yml, login to Vault. Note that <code>VAULT_ADDR</code> must point at your Vault server, not at GitLab.</p>
<pre><code>- export VAULT_ADDR="https://vault.example.com"
- export VAULT_TOKEN="$(vault write -field=token auth/jwt/gitlab/login role=SOME_ROLE_NAME jwt=$CI_JOB_JWT)"
</code></pre>
<p>Next in your gitlab-ci.yml, retrieve the secret.</p>
<pre><code>- export EXAMPLE_SECRET="$(vault kv get -field=EXAMPLE_SECRET_KEY kv-v2/example/secret/path)"
</code></pre>
<p>This is all covered in more detail in the official GitLab docs <a href="https://docs.gitlab.com/ee/ci/examples/authenticating-with-hashicorp-vault/" rel="nofollow noreferrer">here</a></p>
|
<p>I am using the outboundTrafficPolicy.mode ALLOW_ANY global option in Istio, but HTTPS requests are failing with a server certificate error:</p>
<pre><code>* ALPN, server accepted to use h2
* Server certificate:
* subject: CN=*.execute-api.<my-region>.amazonaws.com
* start date: Jul 22 00:00:00 2021 GMT
* expire date: Aug 20 23:59:59 2022 GMT
* subjectAltName does not match www.google.com
* SSL: no alternative certificate subject name matches target host name 'www.google.com'
* Closing connection 0
* TLSv1.2 (OUT), TLS alert, close notify (256):
curl: (60) SSL: no alternative certificate subject name matches target host name 'www.google.com'
More details here: https://curl.se/docs/sslcerts.html
</code></pre>
<p>Shouldn't it pass all outbound traffic through, HTTP or HTTPS? Is there another configuration I'm missing here?</p>
<p>PS: I am using Istio with ingress-nginx with the <code>traffic.sidecar.istio.io/includeInboundPorts: ""</code> annotation, which bypasses Envoy at the cluster's entrance. The test was made from another pod inside the service mesh.</p>
<p><strong>Istio configuration</strong>: istioctl install --set profile=minimal --set meshConfig.outboundTrafficPolicy.mode=ALLOW_ANY --set meshConfig.enableTracing=true --set revision=canary</p>
<p><strong>Ingress-Nginx configuration</strong>:</p>
<pre><code>
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
name: nginx-4
labels:
app.kubernetes.io/component: controller
annotations:
ingressclass.kubernetes.io/is-default-class: 'false'
spec:
controller: "k8s.io/ingress-nginx"
---
apiVersion: v1
kind: Namespace
metadata:
name: ingress-nginx-4
labels:
app.kubernetes.io/name: ingress-nginx-4
app.kubernetes.io/instance: ingress-nginx-4
istio.io/rev: canary
---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
helm.sh/chart: ingress-nginx-3.33.0
app.kubernetes.io/name: ingress-nginx-4
app.kubernetes.io/instance: ingress-nginx-4
app.kubernetes.io/version: 0.47.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx
namespace: ingress-nginx-4
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
labels:
helm.sh/chart: ingress-nginx-3.33.0
app.kubernetes.io/name: ingress-nginx-4
app.kubernetes.io/instance: ingress-nginx-4
app.kubernetes.io/version: 0.47.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx-controller
namespace: ingress-nginx-4
data:
proxy-real-ip-cidr: <my_cluster_range>
use-forwarded-headers: "true"
enable-real-ip: "false"
use-proxy-protocol: "false"
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
helm.sh/chart: ingress-nginx-3.33.0
app.kubernetes.io/name: ingress-nginx-4
app.kubernetes.io/instance: ingress-nginx-4
app.kubernetes.io/version: 0.47.0
app.kubernetes.io/managed-by: Helm
name: ingress-nginx-4
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx-4
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
helm.sh/chart: ingress-nginx-3.33.0
app.kubernetes.io/name: ingress-nginx-4
app.kubernetes.io/instance: ingress-nginx-4
app.kubernetes.io/version: 0.47.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx
namespace: ingress-nginx-4
rules:
- apiGroups:
- ''
resources:
- namespaces
verbs:
- get
- apiGroups:
- ''
resources:
- configmaps
- pods
- secrets
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- extensions
- networking.k8s.io # k8s 1.14+
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- extensions
- networking.k8s.io # k8s 1.14+
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io # k8s 1.14+
resources:
- ingressclasses
verbs:
- get
- list
- watch
- apiGroups:
- ''
resources:
- configmaps
resourceNames:
- ingress-controller-leader-nginx
verbs:
- get
- update
- apiGroups:
- ''
resources:
- configmaps
verbs:
- create
- update
- apiGroups:
- ''
resources:
- events
verbs:
- create
- patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
helm.sh/chart: ingress-nginx-3.33.0
app.kubernetes.io/name: ingress-nginx-4
app.kubernetes.io/instance: ingress-nginx-4
app.kubernetes.io/version: 0.47.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
name: ingress-nginx
namespace: ingress-nginx-4
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx-4
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
labels:
helm.sh/chart: ingress-nginx-3.33.0
app.kubernetes.io/name: ingress-nginx-4
app.kubernetes.io/instance: ingress-nginx-4
app.kubernetes.io/version: 0.47.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
app: ingress-nginx-4
name: ingress-nginx-controller-admission
namespace: ingress-nginx-4
spec:
type: ClusterIP
ports:
- name: https-webhook
port: 443
targetPort: webhook
selector:
app.kubernetes.io/name: ingress-nginx-4
app.kubernetes.io/instance: ingress-nginx-4
app.kubernetes.io/component: controller
app: ingress-nginx-4
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
annotations:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '60'
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: 'true'
labels:
helm.sh/chart: ingress-nginx-3.33.0
app.kubernetes.io/name: ingress-nginx-4
app.kubernetes.io/instance: ingress-nginx-4
app.kubernetes.io/version: 0.47.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
app: ingress-nginx-4
service: ingress-nginx-4
name: ingress-nginx-controller
namespace: ingress-nginx-4
spec:
type: LoadBalancer
externalTrafficPolicy: Local
ports:
- name: http
port: 80
nodePort: 30008
protocol: TCP
targetPort: http
- name: https
port: 443
nodePort: 30009
protocol: TCP
targetPort: http
selector:
app.kubernetes.io/name: ingress-nginx-4
app.kubernetes.io/instance: ingress-nginx-4
app.kubernetes.io/component: controller
app: ingress-nginx-4
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
helm.sh/chart: ingress-nginx-3.33.0
app.kubernetes.io/name: ingress-nginx-4
app.kubernetes.io/instance: ingress-nginx-4
app.kubernetes.io/version: 0.47.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: controller
app: ingress-nginx-4
version: v1
name: ingress-nginx-controller-2
namespace: ingress-nginx-4
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx-4
app.kubernetes.io/instance: ingress-nginx-4
app.kubernetes.io/component: controller
app: ingress-nginx-4
version: v1
revisionHistoryLimit: 10
minReadySeconds: 0
template:
metadata:
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "10254"
sidecar.istio.io/inject: "true"
traffic.sidecar.istio.io/includeInboundPorts: ""
labels:
app.kubernetes.io/name: ingress-nginx-4
app.kubernetes.io/instance: ingress-nginx-4
app.kubernetes.io/component: controller
app: ingress-nginx-4
version: v1
spec:
dnsPolicy: ClusterFirst
containers:
- name: controller
image: k8s.gcr.io/ingress-nginx/controller:v0.46.0@sha256:52f0058bed0a17ab0fb35628ba97e8d52b5d32299fbc03cc0f6c7b9ff036b61a
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
args:
- /nginx-ingress-controller
- --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
- --election-id=ingress-controller-leader
- --ingress-class=nginx-4
- --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
runAsUser: 101
allowPrivilegeEscalation: true
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: LD_PRELOAD
value: /usr/local/lib/libmimalloc.so
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
ports:
- name: http
containerPort: 80
protocol: TCP
- name: https
containerPort: 80
protocol: TCP
- name: tohttps
containerPort: 2443
protocol: TCP
- name: webhook
containerPort: 8443
protocol: TCP
volumeMounts:
- name: webhook-cert
mountPath: /usr/local/certificates/
readOnly: true
resources:
requests:
cpu: 100m
memory: 90Mi
nodeSelector:
kubernetes.io/os: linux
serviceAccountName: ingress-nginx
terminationGracePeriodSeconds: 300
volumes:
- name: webhook-cert
secret:
secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
labels:
helm.sh/chart: ingress-nginx-3.33.0
app.kubernetes.io/name: ingress-nginx-4
app.kubernetes.io/instance: ingress-nginx-4
app.kubernetes.io/version: 0.47.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
name: ingress-nginx-admission-4
webhooks:
- name: validate.nginx.ingress.kubernetes.io
matchPolicy: Equivalent
rules:
- apiGroups:
- networking.k8s.io
apiVersions:
- v1beta1
operations:
- CREATE
- UPDATE
resources:
- ingresses
failurePolicy: Fail
sideEffects: None
admissionReviewVersions:
- v1
- v1beta1
clientConfig:
service:
namespace: ingress-nginx-4
name: ingress-nginx-controller-admission
path: /networking/v1beta1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: ingress-nginx-admission
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-3.33.0
app.kubernetes.io/name: ingress-nginx-4
app.kubernetes.io/instance: ingress-nginx-4
app.kubernetes.io/version: 0.47.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx-4
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: ingress-nginx-admission-4
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-3.33.0
app.kubernetes.io/name: ingress-nginx-4
app.kubernetes.io/instance: ingress-nginx-4
app.kubernetes.io/version: 0.47.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx-4
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: ingress-nginx-admission
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-3.33.0
app.kubernetes.io/name: ingress-nginx-4
app.kubernetes.io/instance: ingress-nginx-4
app.kubernetes.io/version: 0.47.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx-4
rules:
- apiGroups:
- ''
resources:
- secrets
verbs:
- get
- create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: ingress-nginx-admission
annotations:
helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-3.33.0
app.kubernetes.io/name: ingress-nginx-4
app.kubernetes.io/instance: ingress-nginx-4
app.kubernetes.io/version: 0.47.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx-4
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx-4
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: ingress-nginx-admission-create
annotations:
helm.sh/hook: pre-install,pre-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-3.33.0
app.kubernetes.io/name: ingress-nginx-4
app.kubernetes.io/instance: ingress-nginx-4
app.kubernetes.io/version: 0.47.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx-4
spec:
template:
metadata:
name: ingress-nginx-admission-create
labels:
helm.sh/chart: ingress-nginx-3.33.0
app.kubernetes.io/name: ingress-nginx-4
app.kubernetes.io/instance: ingress-nginx-4
app.kubernetes.io/version: 0.47.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
spec:
containers:
- name: create
image: docker.io/jettech/kube-webhook-certgen:v1.5.1
imagePullPolicy: IfNotPresent
args:
- create
- --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
- --namespace=$(POD_NAMESPACE)
- --secret-name=ingress-nginx-admission
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
restartPolicy: OnFailure
serviceAccountName: ingress-nginx-admission
securityContext:
runAsNonRoot: true
runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
name: ingress-nginx-admission-patch
annotations:
helm.sh/hook: post-install,post-upgrade
helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
labels:
helm.sh/chart: ingress-nginx-3.33.0
app.kubernetes.io/name: ingress-nginx-4
app.kubernetes.io/instance: ingress-nginx-4
app.kubernetes.io/version: 0.47.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
namespace: ingress-nginx-4
spec:
template:
metadata:
name: ingress-nginx-admission-patch
labels:
helm.sh/chart: ingress-nginx-3.33.0
app.kubernetes.io/name: ingress-nginx-4
app.kubernetes.io/instance: ingress-nginx-4
app.kubernetes.io/version: 0.47.0
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/component: admission-webhook
spec:
containers:
- name: patch
image: docker.io/jettech/kube-webhook-certgen:v1.5.1
imagePullPolicy: IfNotPresent
args:
- patch
- --webhook-name=ingress-nginx-admission-4
- --namespace=$(POD_NAMESPACE)
- --patch-mutating=false
- --secret-name=ingress-nginx-admission
- --patch-failure-policy=Fail
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
restartPolicy: OnFailure
serviceAccountName: ingress-nginx-admission
securityContext:
runAsNonRoot: true
runAsUser: 2000
</code></pre>
| <p>The problem was caused by what is probably a bug in Istio: Services in the cluster that exposed port 443 (https) but did not name that port "https" seemed to be getting in the way of external HTTPS requests.
To solve the issue I identified these services with <code>istioctl proxy-config routes <my-pod> --name 443 -o json</code> and added the "https" port name to each of them.
I also deleted a ServiceEntry left over from a previous Istio version that was impacting routing.</p>
<p>Istio issue comment that helped me identify the problem: <a href="https://github.com/istio/istio/issues/14264#issuecomment-496774533" rel="nofollow noreferrer">https://github.com/istio/istio/issues/14264#issuecomment-496774533</a></p>
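<p>For reference, the per-Service fix is just naming the port so Istio's protocol selection treats it as HTTPS/TLS; a minimal sketch (the service name here is hypothetical):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: some-backend
spec:
  ports:
    - name: https      # the missing piece - Istio infers the protocol from the port name
      port: 443
      targetPort: 443
</code></pre>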
|
<p>I have a terraform-managed EKS cluster. It used to have 2 nodes on it. I doubled the number of nodes (4).</p>
<p>I have a kubernetes_deployment resource that automatically deploys a fixed number of pods to the cluster. It was set to 20 when I had 2 nodes, and seemed evenly distributed with 10 each. I doubled that number to 40.</p>
<p>All of the new pods for the kubernetes deployment are being scheduled on the first 2 (original) nodes. Now the two original nodes have 20 pods each, while the 2 new nodes have 0 pods. The new nodes are up and ready to go, but I cannot get kubernetes to schedule the new pods on those new nodes.</p>
<p>I am unsure where to even begin searching, as I am fairly new to k8s and ops in general.</p>
<p>A few beginner questions that may be related:</p>
<ol>
<li><p>I'm reading about pod affinity, and it seems like I could tell k8s to have a pod ANTI affinity with itself within a deployment. However, I am having trouble setting up the anti-affinity rules. I see that the <a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/deployment#preferred_during_scheduling_ignored_during_execution" rel="nofollow noreferrer">kubernetes_deployment</a> resource has a scheduling argument, but I can't seem to get the syntax right.</p>
</li>
<li><p>Naively it seems that the issue may be that the deployment somehow isn't aware of the new nodes. If that is the case, how could I reboot the entire deployment (without taking down the already-running pods)?</p>
</li>
<li><p>Is there a cluster level scheduler that I need to set? I was under the impression that the default does round robin, which doesn't seem to be happening at the node level.</p>
</li>
</ol>
<p>EDIT:
The EKS terraform module <a href="https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/11.0.0/submodules/node_groups" rel="nofollow noreferrer">node_groups submodule</a> has fields for desired/min/max_capacity. To increase my worker nodes, I just increased those numbers. The change is reflected in the aws eks console.</p>
| <p>Check a couple of things:</p>
<ol>
<li>Do your nodes show up correctly in the output of <code>kubectl get nodes -o wide</code> and do they have a state of ready?</li>
<li>Instead of pod affinity, look into <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/" rel="nofollow noreferrer">pod topology spread constraints</a>. Required pod anti-affinity will not work once you have more pods than nodes. A sketch is shown after this list.</li>
</ol>
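<p>A minimal sketch of a spread constraint on the deployment's pod template (the label value is an assumption - match your own pod labels):</p>
<pre><code>spec:
  template:
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname   # spread evenly across nodes
          whenUnsatisfiable: ScheduleAnyway     # use DoNotSchedule to make it a hard requirement
          labelSelector:
            matchLabels:
              app: my-app                       # must match the pod template labels
</code></pre>
<p>Also note that the scheduler only places <em>new</em> pods; pods that are already running will not be rebalanced onto the new nodes unless they are recreated, for example with <code>kubectl rollout restart deployment/<name></code>.</p>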
|
<p>I'm going through a tutorial on running Jenkins on a Kubernetes cluster. The tutorial uses minikube, but my existing cluster is running on EKS. When I apply my jenkins.yaml file, the pod it creates gets this error:</p>
<pre><code> Normal Scheduled 27m default-scheduler Successfully assigned default/jenkins-799666d8db-ft642 to ip-192-168-84-126.us-west-2.compute.internal
Warning Failed 24m (x12 over 27m) kubelet Error: ErrImageNeverPull
Warning ErrImageNeverPull 114s (x116 over 27m) kubelet Container image "myjenkins:latest" is not present with pull policy of Never
</code></pre>
<p>This was from describing the pod ^</p>
<p>Here's my jenkins.yaml file that I'm using to try to run jenkins on my cluster</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: jenkins
namespace: default
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: jenkins
namespace: default
rules:
- apiGroups: [""]
resources: ["pods","services"]
verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: ["apps"]
resources: ["deployments"]
verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create","delete","get","list","patch","update","watch"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get","list","watch"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["create","delete","get","list","patch","update","watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: jenkins
namespace: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: jenkins
subjects:
- kind: ServiceAccount
name: jenkins
---
# Allows jenkins to create persistent volumes
# This cluster role binding allows anyone in the "manager" group to read secrets in any namespace.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: jenkins-crb
subjects:
- kind: ServiceAccount
namespace: default
name: jenkins
roleRef:
kind: ClusterRole
name: jenkinsclusterrole
apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
# "namespace" omitted since ClusterRoles are not namespaced
name: jenkinsclusterrole
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["create","delete","get","list","patch","update","watch"]
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: jenkins
namespace: default
spec:
selector:
matchLabels:
app: jenkins
replicas: 1
template:
metadata:
labels:
app: jenkins
spec:
containers:
- name: jenkins
image: myjenkins:latest
env:
- name: JAVA_OPTS
value: -Djenkins.install.runSetupWizard=false
ports:
- name: http-port
containerPort: 8080
- name: jnlp-port
containerPort: 50000
volumeMounts:
- name: jenkins-home
mountPath: /var/jenkins_home
- name: docker-sock-volume
mountPath: "/var/run/docker.sock"
imagePullPolicy: Never
volumes:
# This allows jenkins to use the docker daemon on the host, for running builds
# see https://stackoverflow.com/questions/27879713/is-it-ok-to-run-docker-from-inside-docker
- name: docker-sock-volume
hostPath:
path: /var/run/docker.sock
- name: jenkins-home
hostPath:
path: /mnt/jenkins-store
serviceAccountName: jenkins
---
apiVersion: v1
kind: Service
metadata:
name: jenkins
namespace: default
spec:
type: NodePort
ports:
- name: ui
port: 8080
targetPort: 8080
nodePort: 31000
- name: jnlp
port: 50000
targetPort: 50000
selector:
app: jenkins
</code></pre>
<p>Edit:</p>
<p>So far I tried removing <code>imagePullPolicy: Never</code> and tried it again and got a different error</p>
<pre><code>Warning Failed 17s (x2 over 32s) kubelet Failed to pull image "myjenkins:latest": rpc error: code = Unknown desc = Error response from daemon: pull access denied for myjenkins, repository does not exist or may re
quire 'docker login': denied: requested access to the resource is denied
</code></pre>
<p>I tried running docker login and logging in and I'm still getting this same error ^. I tried changing imagePullPolicy: Never to Always and received the same error</p>
<p>After changing the image to jenkins/jenkins:lts it's still crashing and when I describe, this is what it says</p>
<pre><code> Normal Scheduled 4m37s default-scheduler Successfully assigned default/jenkins-776574886b-x2l8p to ip-192-168-77-17.us-west-2.compute.internal
Normal Pulled 4m26s kubelet Successfully pulled image "jenkins/jenkins:lts" in 11.07948886s
Normal Pulled 4m22s kubelet Successfully pulled image "jenkins/jenkins:lts" in 908.246481ms
Normal Pulled 4m7s kubelet Successfully pulled image "jenkins/jenkins:lts" in 885.936781ms
Normal Created 3m39s (x4 over 4m23s) kubelet Created container jenkins
Normal Started 3m39s (x4 over 4m23s) kubelet Started container jenkins
Normal Pulled 3m39s kubelet Successfully pulled image "jenkins/jenkins:lts" in 895.651242ms
Warning BackOff 3m3s (x8 over 4m20s) kubelet Back-off restarting failed container
</code></pre>
<p>When I try to run "kubectl logs" on that pod I even get an error for that, which I've never received before when getting logs</p>
<pre><code>touch: cannot touch '/var/jenkins_home/copy_reference_file.log': Permission denied
Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?
</code></pre>
<p>Also had to change my volume mount for Jenkins and it worked!</p>
<p>I found another resource online saying to change my Jenkins volume mount to the following to fix the permissions issue, and my container works now:</p>
<pre><code>volumeMounts:
        - mountPath: /var
          name: jenkins-volume
          subPath: jenkins_home
</code></pre>
| <p>As you already did, removing <code>imagePullPolicy: Never</code> would solve your first problem. Your second problem comes from the fact that you are trying to pull an image called <code>myjenkins:latest</code>, which doesn't exist. What you most likely want is <a href="https://hub.docker.com/r/jenkins/jenkins/" rel="nofollow noreferrer">this image</a>.</p>
<p>Change</p>
<pre><code>image: myjenkins:latest
</code></pre>
<p>to</p>
<pre><code>image: jenkins/jenkins:lts
</code></pre>
|
<p>I have a vault setup in k8s with k8s auth enabled to allow vault agent to read secrets and export them as an environment variables to a k8s pod using K8s service account. everything is working fine if I’m using a single k8s namespace.</p>
<p>I am not able to use a service account from namespace A in namespace B, even after attaching it via a RoleBinding in namespace B.</p>
<p>Step 1 - I created a service account called vault-ro in the default namespace and configured it in the Vault k8s auth role. Everything works well for any k8s pod in the default namespace; they are able to read secrets from Vault.</p>
<pre><code>---
apiVersion: v1
kind: ServiceAccount
metadata:
name: vault-ro
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: role-tokenreview-binding ##This Role!
namespace: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: vault-ro
namespace: default
</code></pre>
<p>Now I want to enable namespace B to use the same Vault role and k8s service account to read secrets from Vault, so I created a RoleBinding as follows in namespace B.</p>
<p>Role binding in namespace B:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: role-tokenreview-binding-dev
namespace: B
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: vault-ro
namespace: default
</code></pre>
<p>Expected behaviour: I should be able to spin up a k8s pod with the vault-ro service account, and it should be able to read the secret from Vault the same way it does in the default namespace. But when I try that, I get this error:</p>
<p><strong>Error from server (Forbidden): error when creating "test-app-nonprod.yaml": pods "test-app" is forbidden: error looking up service account B/vault-ro: serviceaccount "vault-ro" not found</strong></p>
<p>Why is it not able to reference the service account vault-ro from the default namespace, and why is it still trying to find it in namespace B? Is it something to do with Vault? I tried my best to find an answer everywhere; all the documents say the above should work!</p>
<p>appreciate any help!</p>
| <p>Your error message is saying that your pod can't find service account <code>vault-ro</code> in the <code>B</code> namespace.</p>
<blockquote>
<p>error looking up service account B/vault-ro</p>
</blockquote>
<p>Are you setting a <code>pod.spec.serviceAccountName</code> entry in your yaml? If so, the service account must exist in the same namespace as the pod is running. Both Pods and ServiceAccounts are namespaced.</p>
<p>I can't give you a good link to the docs where this is stated (that the service account must exist in the pod's namespace - it's implied in a few places), but I learnt this through experience. To check whether an object is namespaced or not you can use <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#not-all-objects-are-in-a-namespace" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#not-all-objects-are-in-a-namespace</a>.</p>
<p>I would create another service account in namespace <code>B</code> and do the rolebinding again.</p>
<p>As an aside, you have a mix of API versions in your RBAC yaml, so it's worth tidying that too before v1beta1 stops being served. Also, you are doing a ClusterRoleBinding in the first half and a RoleBinding in the second, which isn't necessary.</p>
<p>I'd use this manifest and change it for each serviceaccount as indicated:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: role-tokenreview-binding
namespace: default #*** change 1
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: vault-ro
namespace: default #*** change 2
</code></pre>
<p>Create a serviceaccount in each namespace you need to deploy vault-accessing pods in, and update the namespaces marked for each namespace you create an sa in.</p>
|
<p>I have my own cluster with a control plane and a worker. I am trying to connect to PostgreSQL in the cluster using <code>psql -h <ingress-url> -p 5432 -U postgres -W</code>, but it raises an error:</p>
<pre><code>psql: error: server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
</code></pre>
<p>But <code>curl <ingress-url></code> results in this on the PostgreSQL side:</p>
<pre><code>2022-05-27 04:00:50.406 UTC [208] LOG: invalid length of startup packet
</code></pre>
<p>That means my request reached the PostgreSQL server, so why can't I connect to it?</p>
<p>Here are my resources:</p>
<p>Ingress:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: preflight
labels:
helm.sh/chart: "preflight"
helm.sh/version: "0.1.0"
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
ingressClassName: nginx
rules:
- http:
paths:
- path: /
pathType: Exact
backend:
service:
name: preflight
port:
number: 5432
</code></pre>
<p>Deployment:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: preflight
labels:
helm.sh/chart: "preflight"
helm.sh/version: "0.1.0"
spec:
replicas: 1
selector:
matchLabels:
helm.sh/chart: "preflight"
helm.sh/version: "0.1.0"
template:
metadata:
labels:
helm.sh/chart: "preflight"
helm.sh/version: "0.1.0"
spec:
containers:
- name: preflight
image: postgres:14
env:
- name: POSTGRES_PASSWORD
valueFrom:
configMapKeyRef:
name: preflight
key: postgresPassword
ports:
- containerPort: 5432
</code></pre>
<p>Service:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: preflight
labels:
helm.sh/chart: "preflight"
helm.sh/version: "0.1.0"
spec:
type: ClusterIP
selector:
helm.sh/chart: "preflight"
helm.sh/version: "0.1.0"
ports:
- port: 5432
targetPort: 5432
</code></pre>
<p>The ConfigMap value is <code>POSTGRES_PASSWORD=postgres</code>.</p>
| <p>Nginx ingress controller is an HTTP proxy. You are trying to route PGSQL traffic over HTTP, and that simply can't work.</p>
<p>What you need to do is expose a TCP service through nginx ingress controller. See <a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="nofollow noreferrer">this page</a>.</p>
<p>In a nutshell, you need to create a configmap like this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: tcp-services
namespace: <namespace where you deployed ingress controller>
data:
5432: "default/preflight:5432"
</code></pre>
<p>then ensure your nginx ingress controller starts with the <code>--tcp-services-configmap=tcp-services</code> flag.</p>
<p>Finally, ensure the nginx ingress controller Service (the one with type == <code>LoadBalancer</code>) exposes port 5432:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: ingress-nginx
...
spec:
type: LoadBalancer
ports:
...
- name: pgsql
port: 5432
targetPort: 5432
protocol: TCP
...
</code></pre>
<p>Please note that your provider's Load Balancer should support TCP backends (most should do, but worth mentioning).</p>
|
<p>I want to build a Python script that accesses a Kubernetes worker node, lists the network interfaces, captures the traffic of the chosen interface (with tcpdump for example) and then stores the pcap file somewhere on the master node.</p>
<p>I want to know if it is possible to access a worker node from my master node without SSH (maybe with a direct k8s-apiserver call?).
If SSH is the only way to access the worker node, how can I make a connection without entering the worker's password (for authentication)?</p>
<p>Or maybe there is another way to do this ?</p>
| <p>One way to connect to nodes when SSH isn't an option is to start a privileged container that mounts your node's filesystem and disables PID namespace isolation.</p>
<p>Say I have some node "infra1"</p>
<pre><code>$> kubectl get nodes
infra1 Ready infra 728d v1.21.6
</code></pre>
<p>I can get in using:</p>
<pre><code>$ kubectl debug node/infra1 -it --image=busybox
Creating debugging pod node-debugger-infra1-s46g6 with container debugger on node infra1.
If you don't see a command prompt, try pressing enter.
/ #
/ # chroot /host
root@infra1:/#
root@infra1:/# crictl ps | head -2
CONTAINER IMAGE CREATED STATE
NAME ATTEMPT POD ID
a36863efe2a2e 3fb5cabb64693 4 minutes ago Running
</code></pre>
<p>The <code>/host</code> being a "volume", sharing my host filesystem with that debug container. Using chroot, you're now working from your node runtime.</p>
<pre><code>$ k get pods
NAME READY STATUS RESTARTS AGE
node-debugger-infra1-g5nwg 0/1 Error 0 71s
node-debugger-infra1-s46g6 1/1 Running 0 55s
</code></pre>
<p>In practice, this is done creating a Pod, such as the following:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
annotations:
kubernetes.io/psp: hostaccess
name: node-debugger-infra1-s46g6
namespace: default
spec:
containers:
- image: busybox
...
volumeMounts:
- mountPath: /host
name: host-root
...
dnsPolicy: ClusterFirst
enableServiceLinks: true
hostIPC: true
hostNetwork: true
hostPID: true
nodeName: infra1
nodeSelector:
kubernetes.io/os: linux
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Never
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- operator: Exists
volumes:
- hostPath:
path: /
type: ""
name: host-root
...
</code></pre>
<hr />
<p>Answering your follow-up question regarding the caliXXX interfaces: those are specific to the Calico SDN, although the same remarks may apply to other implementations - there's no easy way to resolve pod IPs from them.</p>
<p>We can, however, inspect a pod's configuration to figure out which interface it uses:</p>
<pre><code># crictl pods
...
b693b9ff6487c 3 hours ago Ready controller-6b78bff7d9-5b7vr metallb-system 2 (default)
# crictl inspectp b693b9ff6487c | jq '.info.cniResult'
{
"Interfaces": {
"cali488d4aeb0e6": {
"IPConfigs": null,
"Mac": "",
"Sandbox": ""
},
"eth0": {
"IPConfigs": [
{
"IP": "10.233.105.55",
"Gateway": ""
}
],
"Mac": "",
"Sandbox": ""
},
"lo": {
"IPConfigs": [
{
"IP": "127.0.0.1",
"Gateway": ""
},
{
"IP": "::1",
"Gateway": ""
}
],
"Mac": "00:00:00:00:00:00",
"Sandbox": "/var/run/netns/cni-55322514-e37a-2913-022a-9f7488df8ca5"
}
},
"DNS": [
{},
{}
],
"Routes": null
}
</code></pre>
<p>Then, to resolve the interface name for a given IP, we could do something like:</p>
<pre><code># MATCH_IP="10\.233\.105\.55"
# crictl pods | awk '/Ready/{print $1}' | while read pod
do
crictl inspectp $pod | grep $MATCH_IP >/dev/null 2>&1 || continue
echo found pod $pod
crictl inspectp $pod \
| jq '.info.cniResult.Interfaces | with_entries(select(.key|match("cali"))) | to_entries[] | .key'
break
done
found pod b693b9ff6487c
"cali488d4aeb0e6"
</code></pre>
|
<p>Can anyone tell me how to list all namespaces in k8s using Go? I have been referencing this link but couldn't find anything that can list all namespaces.</p>
<p>Link: <a href="https://pkg.go.dev/github.com/gruntwork-io/terratest/modules/k8s" rel="nofollow noreferrer">https://pkg.go.dev/github.com/gruntwork-io/terratest/modules/k8s</a></p>
<p>I don't see any <code>ListNamespaces</code> functions for the k8s package in Go.</p>
| <p>Try <a href="https://github.com/kubernetes/client-go/tree/master/examples" rel="nofollow noreferrer">kubernetes/client-go</a>; you can do something like <code>clientset.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})</code> (namespaces are cluster-scoped, so <code>Namespaces()</code> takes no argument). Your <code>clientset</code> may be instantiated from inside the cluster (in-cluster config) or from outside (a kubeconfig).</p>
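<p>A minimal, self-contained sketch using a kubeconfig on disk (the path and error handling here are illustrative):</p>
<pre><code>package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Build a client config from ~/.kube/config; use rest.InClusterConfig() when running inside a pod.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Namespaces are cluster-scoped, so Namespaces() takes no namespace argument.
	nsList, err := clientset.CoreV1().Namespaces().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, ns := range nsList.Items {
		fmt.Println(ns.Name)
	}
}
</code></pre>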
|
<p>I'm planning to deploy more than 30 apps in 5 namespaces. I will be using an existing AWS EKS 1.21 cluster. All the apps will require external access because they are used by clients. I don't want to use multiple load balancers, both to ease management and to avoid extra cost on the AWS side (because an ELB is charged based on hourly usage too).</p>
<p>What I'm trying to do it basically ;</p>
<pre><code> apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: random-ingress
spec:
rules:
- host: randomhost-in-namespace1.com
http:
paths:
- path: /
backend:
serviceName: randomhost-in-namespace1 (in first namespace)
servicePort: 80
- host: randomhost-in-namespace2.com
http:
paths:
- path: /
backend:
serviceName: randomhost-in-namespace2 (in second namespace)
servicePort: 80
- host: randomhost-in-namespace3.com
http:
paths:
- path: /
backend:
serviceName: randomhost-in-namespace3 (in third namespace)
servicePort: 80
</code></pre>
<p>Something like this.</p>
<p>Is it possible to cover all these apps in all these namespaces with a single ingress load balancer? I didn't find any clear information about this.</p>
<p>Any help will be highly appreciated.
Thank you.</p>
| <p>Using the <a href="https://docs.aws.amazon.com/eks/latest/userguide/aws-load-balancer-controller.html" rel="nofollow noreferrer">AWS Load Balancer Controller</a> instead of the NGINX ingress controller, you can have <strong>1x ALB</strong>, re-used by each namespace.</p>
<p>Define an Ingress.yaml file per namespace and annotate them all with the same <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/ingress/annotations/#ingressgroup" rel="nofollow noreferrer">alb group.name</a>.</p>
<blockquote>
<p>If group.name specified, all Ingresses with this IngressClass will
belong to the same IngressGroup specified and result in a single ALB.</p>
</blockquote>
<p>the AWS LB Controller will then create 1x ALB, it desired rules, listeners to TG's and register the right EC2 nodes etc.</p>
<p>this can be something like this:</p>
<p><strong>Ingress-namespace1.yaml</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: random-ingress
namespace: namespace1
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/group.name: my-group
alb.ingress.kubernetes.io/scheme: internet-facing
spec:
rules:
- host: randomhost-in-namespace1.com
http:
paths:
- path: /
backend:
serviceName: randomhost-in-namespace1 (in first namespace)
servicePort: 80
</code></pre>
<p><strong>Ingress-namespace2.yaml</strong></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: random-ingress
namespace: namespace2
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/group.name: my-group
alb.ingress.kubernetes.io/scheme: internet-facing
spec:
rules:
- host: randomhost-in-namespace2.com
http:
paths:
- path: /
backend:
serviceName: randomhost-in-namespace2 (in second namespace)
servicePort: 80
</code></pre>
<p>where both files contain the <strong>same group.name</strong> and <strong>differ</strong> only in their namespace and host rule.</p>
<p>You can also follow the AWS LBC logs to see whether everything has been created successfully (they should contain no errors):</p>
<pre><code>kubectl logs deploy/aws-load-balancer-controller -n kube-system --follow
</code></pre>
|
<p>I was deploying an integration solution using Integration Studio. After deploying it to my Kubernetes cluster, the service appeared fine in my services list in the Publisher.</p>
<p><strong>Problem</strong>: after creating an API from that service, the endpoint ends up being <strong>localhost:{port}</strong>. Of course this endpoint doesn't work when going through the gateway, since it must be <strong>{service IP of the integration}:{port}</strong>.</p>
<p>Do I have to manually change the endpoint every time? Is there something I can do to make this process easier?</p>
<p>Here's my metadata.yaml file:</p>
<pre><code>serviceUrl: "https://{MI_HOST}:{MI_PORT}/mock"
</code></pre>
<p>Thank you</p>
| <p>If you need to parameterize the serviceUrl in the metadata file, you must inject the parameterized values as environment variables in the MI instance.</p>
<p>Refer to point 5 in step 4 here - <a href="https://apim.docs.wso2.com/en/latest/tutorials/develop-an-integration-with-a-managed-api/#step-4-configure-the-micro-integrator" rel="nofollow noreferrer">https://apim.docs.wso2.com/en/latest/tutorials/develop-an-integration-with-a-managed-api/#step-4-configure-the-micro-integrator</a></p>
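<p>In practice that means setting <code>MI_HOST</code> and <code>MI_PORT</code> on the Micro Integrator container so the placeholders in <code>metadata.yaml</code> resolve to something the gateway can reach; a rough sketch (the service name and port are assumptions for your setup):</p>
<pre><code>        env:
          - name: MI_HOST
            value: "my-integration-svc.my-namespace.svc.cluster.local"   # the k8s Service of the MI deployment (assumed name)
          - name: MI_PORT
            value: "8253"   # adjust to the port your integration is exposed on
</code></pre>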
|
<p>What does this error mean? I have a simple airflow setup where I ran the <a href="https://airflow.apache.org/docs/helm-chart/stable/index.html" rel="nofollow noreferrer">airflow helm chart</a> on a local kind/kubernetes cluster with the <code>CeleryKubernetesExecutor</code></p>
<pre><code>│ scheduler [2022-05-25 06:55:40,086] {dagrun.py:648} WARNING - Failed to get task '<TaskInstance: tutorial.generated-task-13 manual__2022 │
│ scheduler [2022-05-25 06:55:40,086] {dagrun.py:648} WARNING - Failed to get task '<TaskInstance: tutorial.generated-task-17 manual__2022 │
│ scheduler [2022-05-25 06:55:40,207] {scheduler_job.py:1347} WARNING - Failing (6) jobs without heartbeat after 2022-05-25 06:50:40.19719 │
│ scheduler [2022-05-25 06:55:40,207] {scheduler_job.py:1355} ERROR - Detected zombie job: {'full_filepath': '/opt/airflow/dags/repo/dags/ │
│ scheduler [2022-05-25 06:55:40,208] {scheduler_job.py:753} ERROR - Exception when executing SchedulerJob._run_scheduler_loop │
│ scheduler Traceback (most recent call last): │
│ scheduler File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 736, in _execute │
│ scheduler self._run_scheduler_loop() │
│ scheduler File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 836, in _run_scheduler_loop │
│ scheduler next_event = timers.run(blocking=False) │
│ scheduler File "/usr/local/lib/python3.7/sched.py", line 151, in run │
│ scheduler action(*argument, **kwargs) │
│ scheduler File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/event_scheduler.py", line 36, in repeat │
│ scheduler action(*args, **kwargs) │
│ scheduler File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py", line 71, in wrapper │
│ scheduler return func(*args, session=session, **kwargs) │
│ scheduler File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 1356, in _find_zombies │
│ scheduler self.executor.send_callback(request) │
│ scheduler AttributeError: 'CeleryKubernetesExecutor' object has no attribute 'send_callback' │
│ scheduler [2022-05-25 06:55:40,218] {kubernetes_executor.py:807} INFO - Shutting down Kubernetes executor │
│ scheduler [2022-05-25 06:55:41,237] {process_utils.py:129} INFO - Sending Signals.SIGTERM to group 64. PIDs of all processes in the grou │
│ scheduler [2022-05-25 06:55:41,237] {process_utils.py:80} INFO - Sending the signal Signals.SIGTERM to group 64 │
│ scheduler [2022-05-25 06:55:41,409] {process_utils.py:75} INFO - Process psutil.Process(pid=64, status='terminated', exitcode=0, started │
│ scheduler [2022-05-25 06:55:41,410] {scheduler_job.py:765} INFO - Exited execute loop │
│ scheduler Traceback (most recent call last): │
│ scheduler File "/home/airflow/.local/bin/airflow", line 8, in <module> │
│ scheduler sys.exit(main()) │
│ scheduler File "/home/airflow/.local/lib/python3.7/site-packages/airflow/__main__.py", line 38, in main │
│ scheduler args.func(args) │
│ scheduler File "/home/airflow/.local/lib/python3.7/site-packages/airflow/cli/cli_parser.py", line 51, in command │
│ scheduler return func(*args, **kwargs) │
│ scheduler File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/cli.py", line 99, in wrapper │
│ scheduler return f(*args, **kwargs) │
│ scheduler File "/home/airflow/.local/lib/python3.7/site-packages/airflow/cli/commands/scheduler_command.py", line 75, in scheduler │
│ scheduler _run_scheduler_job(args=args) │
│ scheduler File "/home/airflow/.local/lib/python3.7/site-packages/airflow/cli/commands/scheduler_command.py", line 46, in _run_schedule │
│ scheduler job.run() │
│ scheduler File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/base_job.py", line 244, in run │
│ scheduler self._execute() │
│ scheduler File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 736, in _execute │
│ scheduler self._run_scheduler_loop() │
│ scheduler File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 836, in _run_scheduler_loop │
│ scheduler next_event = timers.run(blocking=False) │
│ scheduler File "/usr/local/lib/python3.7/sched.py", line 151, in run │
│ scheduler action(*argument, **kwargs) │
│ scheduler File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/event_scheduler.py", line 36, in repeat │
│ scheduler action(*args, **kwargs) │
│ scheduler File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py", line 71, in wrapper │
│ scheduler return func(*args, session=session, **kwargs) │
│ scheduler File "/home/airflow/.local/lib/python3.7/site-packages/airflow/jobs/scheduler_job.py", line 1356, in _find_zombies │
│ scheduler self.executor.send_callback(request) │
│ scheduler AttributeError: 'CeleryKubernetesExecutor' object has no attribute 'send_callback' │
</code></pre>
<p>Is it erroring because I updated my DAG definition and some tasks are now zombie/invalid? I suspect "<em><strong>AttributeError: 'CeleryKubernetesExecutor' object has no attribute 'send_callback'</strong></em>" is what is causing the crash, but what's wrong?</p>
| <p>This seems to be an Airflow <a href="https://github.com/apache/airflow/issues/23610" rel="nofollow noreferrer">issue</a> in version 2.3.0.</p>
<p>It is fixed in <a href="https://airflow.apache.org/docs/apache-airflow/2.3.1/release_notes.html#airflow-2-3-1-2022-05-25" rel="nofollow noreferrer">2.3.1</a>.</p>
|
<p>Currently it takes quite a long time before the pod can be terminated after a kubectl delete command. I have the feeling that it could be because of the sleep command.</p>
<p>How can I make the container stop faster?
What best practices should I use here?</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
...
spec:
template:
spec:
containers:
- image: alpine
..
command:
- /bin/sh
- -c
- |
trap : TERM INT
while true; do
# some code to check something
sleep 10
done
</code></pre>
<p>Is my approach with "trap: TERM INT" correct? At the moment I don't see any positive effect...</p>
<p>When I terminate the pod it takes several seconds for the command to come back.</p>
<pre><code>kubectl delete pod my-pod
</code></pre>
| <p>Adding <code>terminationGracePeriodSeconds</code> to your spec will do:</p>
<pre><code>...
spec:
template:
spec:
terminationGracePeriodSeconds: 10 # <-- default is 30, can go as low as 0 to send SIGTERM immediately.
containers:
- image: alpine
</code></pre>
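<p>Separately, a note on the trap in the question: <code>trap : TERM INT</code> only makes the shell ignore the signal, and a foreground <code>sleep 10</code> keeps running until it finishes, so the loop never exits on SIGTERM and the pod ends up waiting for SIGKILL. A sketch of a loop that exits promptly on SIGTERM, assuming the default <code>/bin/sh</code> of the alpine image:</p>
<pre><code>#!/bin/sh
# exit as soon as SIGTERM/SIGINT arrives
trap 'exit 0' TERM INT

while true; do
  # some code to check something
  sleep 10 &   # run sleep in the background ...
  wait $!      # ... so the trapped signal interrupts `wait` instead of waiting for sleep to finish
done
</code></pre>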
|
<p>I tried to run a Docker container that I built on a mounted network volume under Parallels. I am adding this here to document the issue, as it is different from <a href="https://stackoverflow.com/questions/56784492/permissionerror-errno-13-permission-denied-manage-py">PermissionError: [Errno 13] Permission denied: '/manage.py'</a>: this is not an issue with /var/run/docker.sock at all, but a little-known issue with the permissions on network shares and how this intersects with your container image after building.</p>
<p>Steps to reproduce the issue, this is from the Flask demo app which I was modifying to run under Kubernetes.</p>
<p>Pertinent excerpt from Dockerfile:</p>
<pre><code>WORKDIR /flaskr
COPY ./flaskr .
</code></pre>
<p>Build the image normally, then run the image with:</p>
<pre><code>docker run -p 5001:5000 flaskr-k8s:0.1.0
</code></pre>
<p>The result:</p>
<pre><code> PermissionError
[Errno 13] Permission denied: '/flaskr/app.py'
at /usr/local/lib/python3.9/os.py:597 in _execvpe
593│ argrest = (args,)
594│ env = environ
595│
596│ if path.dirname(file):
→ 597│ exec_func(file, *argrest)
598│ return
599│ saved_exc = None
600│ path_list = get_exec_path(env)
601│ if name != 'nt':
</code></pre>
| <p>The issue lies with the COPY step in the Dockerfile and how Parallels manages permissions on network shares. Even if volumes are shared with 'ignore ownership', Parallels will make the files appear to be owned by the currently logged-in user (see: <a href="http://download.parallels.com/doc/pcs/html/Parallels_Cloud_Server_Users_Guide/35697.htm" rel="nofollow noreferrer">http://download.parallels.com/doc/pcs/html/Parallels_Cloud_Server_Users_Guide/35697.htm</a>). If you build with 'sudo', or run as a different user either when the container is launched or during the build process, the ownership will not be correct and you will get a PermissionError as above. See <a href="https://stackoverflow.com/questions/26500270/understanding-user-file-ownership-in-docker-how-to-avoid-changing-permissions-o">Understanding user file ownership in docker: how to avoid changing permissions of linked volumes</a> for more information on how Docker manages permissions on volumes.</p>
<p>The solution I used was to change ownership of the files after copying them to the Docker image -- in this case to user 1000 as it is the user that will be running the containers by default on my Kubernetes nodes:</p>
<pre><code># Dockerfile extract
...
COPY ./app /app/updatr
# do stuff
# fix permissions
RUN chown -R 1000:1000 /app
# lastly
USER 1000
</code></pre>
<p>There are other solutions, please see: <a href="https://blog.gougousis.net/file-permissions-the-painful-side-of-docker/" rel="nofollow noreferrer">https://blog.gougousis.net/file-permissions-the-painful-side-of-docker/</a>, <a href="https://vsupalov.com/docker-shared-permissions/" rel="nofollow noreferrer">https://vsupalov.com/docker-shared-permissions/</a> and <a href="https://stackoverflow.com/questions/24549746/switching-users-inside-docker-image-to-a-non-root-user">Switching users inside Docker image to a non-root user</a> for excellent discussions.</p>
|
<p>I am creating a ClusterIssuer and a Certificate. However, there is <em><strong>no</strong></em> <code>tls.crt</code> on the secret! What am I doing wrong?</p>
<p>The ClusterIssuer looks like it is running fine, but the secret's keys do not include the crt:</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-myapp-clusterissuer
namespace: cert-manager
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: [email protected]
privateKeySecretRef:
name: wildcard-myapp-com
solvers:
- dns01:
cloudDNS:
serviceAccountSecretRef:
name: clouddns-service-account
key: dns-service-account.json
project: app
selector:
dnsNames:
- '*.myapp.com'
- myapp.com
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: myapp-com-tls
namespace: cert-manager
spec:
secretName: myapp-com-tls
issuerRef:
name: letsencrypt-myapp-issuer
kind: ClusterIssuer
commonName: '*.myapp.com'
dnsNames:
- 'myapp.com'
- '*.myapp.com'
</code></pre>
<p><a href="https://i.stack.imgur.com/s65zp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/s65zp.png" alt="enter image description here" /></a>
<a href="https://i.stack.imgur.com/5ZlNZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5ZlNZ.png" alt="enter image description here" /></a></p>
| <p>With the information provided it is very hard to troubleshoot this; you could be hitting this <a href="https://github.com/cert-manager/cert-manager/issues/2111" rel="nofollow noreferrer">bug</a>.</p>
<p>You can start troubleshooting this kind of issues by following this procedure:</p>
<ol>
<li>Get the certificate request name:</li>
</ol>
<pre><code>kubectl -n <namespace> describe certificate myapp-com-tls
...
Created new CertificateRequest resource "myapp-com-tls-xxxxxxx"
</code></pre>
<ol start="2">
<li>The request will generate an order, get the order name with the command:</li>
</ol>
<pre><code>kubectl -n <namespace> describe certificaterequests myapp-com-tls-xxxxxxx
…
Created Order resource <namespace>/myapp-com-tls-xxxxxxx-xxxxx
</code></pre>
<ol start="3">
<li>The order will generate a challenge resource, get that with:</li>
</ol>
<pre><code>kubectl -n <namespace> describe order myapp-com-tls-xxxxxxx-xxxxx
…
Created Challenge resource "myapp-com-tls-xxxxxxx-xxxxx-xxxxx" for domain "yourdomain.com"
</code></pre>
<ol start="4">
<li>Finally, with the challenge name, you can get the status of the validation for you certificate:</li>
</ol>
<pre><code>kubectl -n <namespace> describe challenges myapp-com-tls-xxxxxxx-xxxxx-xxxxx
...
Reason: Successfully authorized domain
...
Normal Started 2m45s cert-manager Challenge scheduled for processing
Normal Presented 2m45s cert-manager Presented challenge using http-01 challenge mechanism
Normal DomainVerified 2m22s cert-manager Domain "yourdomain.com" verified with "http-01" validation
</code></pre>
<p>If the status of the challenge is other than <code>DomainVerified</code>, then something went wrong while requesting the certificate from let's encrypt and will see a reason in the output.</p>
|
<p>I created the below StatefulSet on microk8s:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgresql13
spec:
selector:
matchLabels:
app: postgresql13
serviceName: postgresql13
replicas: 1
template:
metadata:
labels:
app: postgresql13
spec:
containers:
- name: postgresql13
image: postgres:13
imagePullPolicy: Always
ports:
- containerPort: 5432
name: sql-tcp
volumeMounts:
- name: postgresql13
mountPath: /data
env:
- name: POSTGRES_PASSWORD
value: testpassword
- name: PGDATA
value: /data/pgdata
volumeClaimTemplates:
- metadata:
name: postgresql13
spec:
storageClassName: "microk8s-hostpath"
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Ki
</code></pre>
<p>In the <code>volumeClaimTemplates</code> I gave it only 1Ki (this is one KB, right?).
But the DB started normally, and when I run <code>kubectl exec postgresql13-0 -- df -h</code> on the pod I get this:</p>
<pre><code>Filesystem Size Used Avail Use% Mounted on
overlay 73G 11G 59G 15% /
tmpfs 64M 0 64M 0% /dev
/dev/mapper/ubuntu--vg-ubuntu--lv 73G 11G 59G 15% /data
shm 64M 16K 64M 1% /dev/shm
tmpfs 7.7G 12K 7.7G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 3.9G 0 3.9G 0% /proc/acpi
tmpfs 3.9G 0 3.9G 0% /proc/scsi
tmpfs 3.9G 0 3.9G 0% /sys/firmware
</code></pre>
<p>Isn't it supposed to not use more than what the PVC has?
I intentionally set the storage class <code>AllowVolumeExpansion: False</code>.</p>
<p>What am I missing here?</p>
| <p><a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#allow-volume-expansion" rel="nofollow noreferrer">allowVolumeExpansion</a> and the storage size do not apply to <code>hostPath</code>. The actual size will be the size of the host volume where the host path resides.</p>
|
<p>Is there a way to run <code>kubectl diff -f .</code> in that way so that output shows only file names that would be changed with <code>kubectl apply -f .</code> and not every single diff line...?</p>
| <p>You can use something like the command below. Here <code>KUBECTL_EXTERNAL_DIFF</code> is an env variable that lets you use a custom <code>diff</code> program with custom flags. So basically, here we are telling <code>kubectl</code> to use <code>diff -qr</code> for displaying the difference. By default, kubectl uses <code>diff -u -N</code>.</p>
<pre><code>KUBECTL_EXTERNAL_DIFF='diff -qr' kubectl diff -f .
</code></pre>
|
<p>I am trying to run a kubectl exec command on a pod, but it fails saying <em>'No such file or directory'</em></p>
<p>I can run the command if I log in to the terminal of the pod through bash.
Also, this problem occurs only for a few commands.
I found that there is a PATH variable difference:</p>
<ol>
<li><p>When I do <code>kubectl exec $POD -- printenv</code>, then
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin</p>
</li>
<li><p>When I run <code>printenv</code> from the terminal of the pod, then
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/opt/abc/scripts:/opt/abc/bin:/opt/admin/bin:/opt/abc/bin:/root/bin</p>
</li>
</li>
</ol>
<p>I am guessing this is causing the commands to fail when run through kubectl exec.</p>
<p>Any ideas to overcome this are welcome; can we somehow pass the PATH env variable to the pod when using kubectl exec?</p>
| <p>You can try executing <code>bash -c "<command>"</code></p>
<pre><code>$ kubectl exec <pod> -- bash -c "<cmd>"
</code></pre>
<p>It is likely PATH is being modified by some shell initialization files</p>
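<p>If the extra PATH entries come from login-shell initialization files (e.g. /etc/profile or ~/.bash_profile), forcing a login shell may also help; a sketch, assuming bash exists in the image:</p>
<pre><code>$ kubectl exec <pod> -- bash -lc "<cmd>"   # -l makes bash read the login startup files before running the command
</code></pre>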
|
<p>I am running Apache Kafka on Kubernetes via Strimzi operator.
I am trying to install Kafka Connect with mysql debezium connector.</p>
<p>This is the Connector configuration file:</p>
<pre><code>apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
name: my-connect-cluster
annotations:
strimzi.io/use-connector-resources: "true"
spec:
version: 3.1.0
replicas: 1
bootstrapServers: <bootstrap-server>
config:
group.id: connect-cluster
offset.storage.topic: connect-cluster-offsets
config.storage.topic: connect-cluster-configs
status.storage.topic: connect-cluster-status
config.storage.replication.factor: -1
offset.storage.replication.factor: -1
status.storage.replication.factor: -1
build:
output:
type: docker
image: <my-repo-in-ecr>/my-connect-cluster:latest
pushSecret: ecr-secret
plugins:
- name: debezium-mysql-connector
artifacts:
- type: tgz
url: https://repo1.maven.org/maven2/io/debezium/debezium-connector-mysql/1.9.0.Final/debezium-connector-mysql-1.9.0.Final-plugin.tar.gz
</code></pre>
<p>I created the ecr-secret in this way:</p>
<pre><code>kubectl create secret docker-registry ecr-secret \
--docker-server=${AWS_ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com \
--docker-username=AWS \
--docker-password=$(aws ecr get-login-password) \
--namespace=default
</code></pre>
<p>The error I get is the following:</p>
<blockquote>
<p>error checking push permissions -- make sure you entered the correct
tag name, and that you are authenticated correctly, and try again:
checking push permission for
"/my-connect-cluster:latest":
POST https:/ ... Stream closed EOF for
default/my-connect-cluster-connect-build
(my-connect-cluster-connect-build)</p>
</blockquote>
<p>I am not sure what permission I should check. I already tried to use a configuration of the AWS CLI with a role with admin privileges just to debug, but I got the same error. Any guesses?</p>
| <p>I thought some role was missing from the node in the EKS cluster but that is not the case since the only thing needed to authenticate is the information contained in the secret. <br> <br></p>
<p>The error was actually in the secret creation: two details are very relevant:</p>
<ol>
<li>the --region flag in the aws ecr get-login-password command was missing and therefore a different password was generated.</li>
<li>the https:// is needed in front of the docker-server</li>
</ol>
<p>Below the right command for the secret generation.</p>
<pre><code>kubectl create secret docker-registry ecr-secret \
--docker-server=https://${AWS_ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com \
--docker-username=AWS \
--docker-password=$(aws ecr get-login-password --region eu-central-1) \
--namespace=default
</code></pre>
|
<p>I'm getting a low disk space warning on a server where my microk8s and applications are installed. When I run the microk8s ctr image ls command, multiple images appear for an application. Does the "docker image prune -f" command in Docker have an equivalent in microk8s? Or is there another way to do this?</p>
| <p>If you want to delete all custom-added images from the built-in registry, you can do this:</p>
<pre><code># get all images that start with localhost:32000, output the results into image_ls file
sudo microk8s ctr images ls name~='localhost:32000' | awk {'print $1'} > image_ls
# loop over file, remove each image
cat image_ls | while read line || [[ -n $line ]];
do
microk8s ctr images rm $line
done;
</code></pre>
<p>Put it into a .sh file and run the script</p>
|
<p>I want to test the queuing and preemption features of Kubernetes (v1.21.0). I run Kubernetes using Minikube with a pod limit of 10. I have a script that creates two priority classes: 'low-priority' and 'high-priority'.</p>
<p><a href="https://i.stack.imgur.com/P5dpe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/P5dpe.png" alt="Priority Classes" /></a></p>
<p>I then have a script that creates 10 low priority jobs, waits 20 seconds, and then creates a high priority one. In this scenario, one of the low priority ones is correctly terminated so that the high priority job can be executed.</p>
<p><a href="https://i.stack.imgur.com/dGfBF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dGfBF.png" alt="Preemption Working" /></a></p>
<p>I then have another script that does the same thing, but in a namespace with resource quotas:</p>
<pre><code>kubectl create namespace limited-cpu
cat <<EOF | kubectl apply -n limited-cpu -f -
apiVersion: v1
kind: ResourceQuota
metadata:
name: limit-max-cpu
spec:
hard:
requests.cpu: "1000m"
EOF
</code></pre>
<p>In this scenario, the low-priority jobs request 333m of CPU and the high-priority one 500m. The expected behavior is for Kubernetes to run three low-priority jobs at the same time, then to stop two of them when the high-priority one is submitted.</p>
<p>But it does not happen. Worse: even when the low-priority jobs end, other low-priority jobs are scheduled before the high-priority one.</p>
<p><a href="https://i.stack.imgur.com/uQHY1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uQHY1.png" alt="Preemption and queuing not working with resource quotas" /></a></p>
<p>Here are the two jobs definitions:</p>
<pre><code>for i in $(seq -w 1 10) ;
do
cat <<EOF | kubectl apply -n limited-cpu -f -
apiVersion: batch/v1
kind: Job
metadata:
name: low-priority-$i
spec:
template:
spec:
containers:
- name: low-priority-$i
image: busybox
command: ["sleep", "60s"]
resources:
requests:
memory: "64Mi"
cpu: "333m"
restartPolicy: Never
priorityClassName: "low-priority"
EOF
done
sleep 20s
cat <<EOF | kubectl apply -n limited-cpu -f -
apiVersion: batch/v1
kind: Job
metadata:
name: high-priority-1
spec:
template:
spec:
containers:
- name: high-priority-1
image: busybox
command: ["sleep", "30s"]
resources:
requests:
memory: "128Mi"
cpu: "500m"
restartPolicy: Never
priorityClassName: "high-priority"
EOF
</code></pre>
<p>Even the <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#preemption" rel="nofollow noreferrer">Kubernetes Documentation</a> agrees that it should be working.</p>
<p>EDIT:
Here are the Priority Classes definitions:</p>
<pre><code>cat <<EOF | kubectl apply -f -
apiVersion: scheduling.k8s.io/v1
description: Low-priority Priority Class
kind: PriorityClass
metadata:
name: low-priority
value: 1000000
EOF
cat <<EOF | kubectl apply -f -
apiVersion: scheduling.k8s.io/v1
description: High-priority Priority Class
kind: PriorityClass
metadata:
name: high-priority
value: 99999999
EOF
</code></pre>
| <p>Fair question, fair assumption.</p>
<p>I've run into a similar situation and was also disappointed to see that k8s does not evict low-priority pods in favor of high-priority ones.</p>
<p>A couple of consultations with k8s experts revealed that indeed, when framed into namespace quotas, k8s is not expected to be as aggressive as in a pure "pods x nodes" setup.</p>
<p>The official documentation you point to also seems to describe everything only in the context of "pods x nodes".</p>
|
<p>I created the below StatefulSet on microk8s:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgresql13
spec:
selector:
matchLabels:
app: postgresql13
serviceName: postgresql13
replicas: 1
template:
metadata:
labels:
app: postgresql13
spec:
containers:
- name: postgresql13
image: postgres:13
imagePullPolicy: Always
ports:
- containerPort: 5432
name: sql-tcp
volumeMounts:
- name: postgresql13
mountPath: /data
env:
- name: POSTGRES_PASSWORD
value: testpassword
- name: PGDATA
value: /data/pgdata
volumeClaimTemplates:
- metadata:
name: postgresql13
spec:
storageClassName: "microk8s-hostpath"
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Ki
</code></pre>
<p>In the <code>volumeClaimTemplates</code> I gave it only 1Ki (this is one KB, right?).
But the DB started normally, and when I run <code>kubectl exec postgresql13-0 -- df -h</code> on the pod I get this:</p>
<pre><code>Filesystem Size Used Avail Use% Mounted on
overlay 73G 11G 59G 15% /
tmpfs 64M 0 64M 0% /dev
/dev/mapper/ubuntu--vg-ubuntu--lv 73G 11G 59G 15% /data
shm 64M 16K 64M 1% /dev/shm
tmpfs 7.7G 12K 7.7G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 3.9G 0 3.9G 0% /proc/acpi
tmpfs 3.9G 0 3.9G 0% /proc/scsi
tmpfs 3.9G 0 3.9G 0% /sys/firmware
</code></pre>
<p>Isn't it supposed to not use more than what the PVC has?
I intentionally set the storage class <code>AllowVolumeExpansion: False</code>.</p>
<p>What am I missing here?</p>
| <blockquote>
<p>Isn't it supposed to not use more than what the PVC has?</p>
</blockquote>
<p>This is a misunderstanding. What you specify in a <strong>resource request</strong> is the resources your application <em>needs at least</em>. You might get more. You typically use <strong>resource limits</strong> to set hard limits.</p>
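<p>For CPU and memory the distinction is easy to see in a manifest; a minimal sketch, not tied to your StatefulSet:</p>
<pre><code>resources:
  requests:         # what the scheduler reserves for the container, at minimum
    memory: "64Mi"
    cpu: "250m"
  limits:           # hard ceiling enforced at runtime
    memory: "128Mi"
    cpu: "500m"
</code></pre>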
|
<p>I am trying to migrate <code>cert-manager</code> to API v1. I was able to migrate the Issuer to a ClusterIssuer (the first part of the YAML). However, I am dealing with a breaking change: there is no longer an <code>acme</code> field on kind Certificate.</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-myapp-issuer
namespace: cert-manager
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: [email protected]
privateKeySecretRef:
name: wildcard-myapp-com
solvers:
- dns01:
cloudDNS:
serviceAccountSecretRef:
name: clouddns-service-account
key: key.json
project: project-id
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: myapp-com-tls
namespace: default
spec:
secretName: myapp-com-tls
issuerRef:
name: letsencrypt-myapp-issuer
commonName: '*.myapp.com'
dnsNames:
- myapp.com
acme:
config:
- dns01:
provider: google-dns
domains:
- '*.myapp.com'
- myapp.com
</code></pre>
<p>When I run kubectl apply, I get the error:</p>
<blockquote>
<p>error validating data: ValidationError(Certificate.spec): unknown field "acme" in io.cert-manager.v1.Certificate.spec</p>
</blockquote>
<p>How can I migrate to the new version of cert-manager?</p>
| <p>As part of v0.8, a new format for configuring ACME Certificate resources was introduced. Notably, challenge solver configuration has moved from the Certificate resource (under <code>certificate.spec.acme</code>) and now resides on your Issuer resource, under <code>issuer.spec.acme.solvers</code>.</p>
<p>So the resulting manifests should be as follows:</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-myapp-issuer
namespace: cert-manager
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: [email protected]
privateKeySecretRef:
name: wildcard-myapp-com
solvers:
- selector:
dnsNames:
- '*.myapp.com'
- myapp.com
dns01:
cloudDNS:
serviceAccountSecretRef:
name: clouddns-service-account
key: key.json
project: project-id
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: myapp-com-tls
namespace: default
spec:
secretName: myapp-com-tls
issuerRef:
name: letsencrypt-myapp-issuer
commonName: '*.myapp.com'
dnsNames:
- myapp.com
</code></pre>
|
<p>First post on Stack Overflow, hope I will get some help :)</p>
<p>I've been struggling for a while trying to get this to work.
So basically I have a Kubernetes cluster in Oracle Cloud.
I'm using the nginx ingress controller + cert-manager for certificates.</p>
<p>When I deployed the nginx ingress controller, Oracle Cloud provided me with a Load Balancer.</p>
<p>So we come to my problem. By default the Load Balancer got listeners in the cloud using the TCP protocol (here is a pic):
<a href="https://i.stack.imgur.com/18IhS.png" rel="nofollow noreferrer">TCP Listener</a></p>
<p>And in the Kubernetes service (nginx ingress controller) the protocol is TCP as well.</p>
<p>When this setup is used, everything works fine.</p>
<p>I am also using SNI (multiple services on one endpoint).</p>
<p>But Oracle Cloud provides a WAF that can be used only with HTTP listeners (one more pic):
<a href="https://i.stack.imgur.com/fpcGR.png" rel="nofollow noreferrer">Http listener</a>.
So if I intend to use the WAF, I need to set HTTP listeners on the Load Balancer.</p>
<p>This is the nginx ingress service pic where you can see the protocol is TCP:
<a href="https://i.stack.imgur.com/DRHMf.png" rel="nofollow noreferrer">TCP nginx ingress service</a></p>
<p>When I change the listeners to HTTP, my domain shows the error "ERR_SSL_PROTOCOL_ERROR" when I try to open it in a browser.</p>
<p>I tried many things, including changing the configuration of the nginx ingress controller, adding annotations, etc.:</p>
<pre><code>nginx.ingress.kubernetes.io/backend-protocol: HTTP
nginx.ingress.kubernetes.io/ssl-passthrough: 'true'
nginx.ingress.kubernetes.io/ssl-redirect: 'true'
</code></pre>
<p>But nothing works; I always get that "ERR_SSL_PROTOCOL_ERROR".</p>
<p>So I don't know if this is even possible. By that I mean: can HTTP listeners point to a TCP backend (the nginx ingress service)?</p>
<p>Any advice would be appreciated.</p>
<p>Thank you in advance.</p>
| <p>You can change OCI <strong>TCP</strong> listeners to <strong>HTTP</strong> listeners, with the backend served as HTTPS or via SSL passthrough.</p>
<p>You can first create a <strong>self-signed SSL certificate</strong> if you are attaching it at the <strong>LB</strong> level.</p>
<p>If you are using cert-manager and saving the certificate into a secret, that's fine; you can follow that approach as well.</p>
<p>You can go to the Load Balancer, manually change the protocol of the listeners to <strong>HTTP</strong> instead of <strong>TCP</strong>, and attach the certificate to the <strong>LB</strong> on port <strong>443</strong> if you are following that method (as I did).</p>
<p>Next</p>
<p>Go to the <strong>Nginx ingress controller svc</strong>, edit it, and change the target port for <strong>443</strong> to <strong>http</strong>.</p>
<p><a href="https://i.stack.imgur.com/GT0EM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GT0EM.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/PJRtv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PJRtv.png" alt="enter image description here" /></a></p>
<p>Now, inside your app's ingress configuration, you can use the <strong>annotations</strong> below; it will work without the <code>400 Bad request</code> or <code>HTTP requested on HTTPS port</code> errors:</p>
<pre><code>nginx.ingress.kubernetes.io/backend-protocol: HTTPS
nginx.ingress.kubernetes.io/ssl-passthrough: 'true'
nginx.ingress.kubernetes.io/ssl-redirect: 'true'
</code></pre>
|
<p>I can create a secret using a single line command like so:</p>
<pre><code>kubectl -n myns create secret generic my-secret --from-literal=type=super --from-literal=url=example.com --from-file=samples=sample.txt
</code></pre>
<p>But it's missing a label when this is created. I want to keep it as a single line command.</p>
<p>Can I pass in a label parameter, or do I need to do a second call to modify the created secret?</p>
| <p>There isn't an option in the <code>kubectl create secret</code> command to add a label.</p>
<p>You will need to run a second command to add the label:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl label secret my-secret -n myns "foo=bar"
</code></pre>
<p>But you could technically do it on one line like:</p>
<pre class="lang-bash prettyprint-override"><code>kubectl create secret generic my-secret ... && kubectl label secret my-secret "foo=bar"
</code></pre>
<p>Or do as <a href="https://stackoverflow.com/a/72467037/2446208">@larsks suggests</a> which is much better for job security.</p>
|
<p>For <code>kubectl describe</code> I can abbreviate a few classes of resources, for example:</p>
<pre><code>po/xxx -> pods/xxx
rs/xxx -> replicasets/xxx
</code></pre>
<p>Where can I find the full list?</p>
<p>I'm trying to find the abbreviation for deployments.</p>
| <p>To get a full list of your resources, including their <em>shortname</em>, use:</p>
<pre><code>kubectl api-resources
</code></pre>
<p>e.g. Deployment has the shorthname <code>deploy</code>.</p>
<hr />
<p>Example output from <code>kubectl api-resources</code>:</p>
<pre><code>NAME SHORTNAMES APIVERSION NAMESPACED KIND
daemonsets ds apps/v1 true DaemonSet
deployments deploy apps/v1 true Deployment
replicasets rs apps/v1 true ReplicaSet
statefulsets sts apps/v1 true StatefulSet
...
</code></pre>
|
<p>I am trying to understand the difference between labels and annotations.</p>
<p>The standard documentation says that annotations capture "non-identifying information",
while selectors can be applied on labels. Labels are used to organise objects in a Kubernetes cluster.</p>
<p>If this is the case, then why does Istio use pod annotations instead of labels for various different settings: <a href="https://istio.io/latest/docs/reference/config/annotations/" rel="nofollow noreferrer">https://istio.io/latest/docs/reference/config/annotations/</a></p>
<p>Isn't a label a good approach?</p>
<p>Just trying to understand what advantages annotations provide, if the Istio developers chose to use annotations.</p>
| <p>Extending Burak's answer:</p>
<blockquote>
<p>Kubernetes labels and annotations are both ways of adding metadata to
Kubernetes objects. The similarities end there, however. Kubernetes
labels allow you to identify, select and operate on Kubernetes
objects. Annotations are non-identifying metadata and do none of these
things.</p>
</blockquote>
<p>Labels are mostly attached to resources like Pods, ReplicaSets, etc. They are also used to route traffic, for example matching a Deployment's Pods to a Service, and so on.</p>
<p>Labels are stored in the etcd database, so you can search by them.</p>
<p>Annotations are mostly for storing metadata and configuration, if any.</p>
<p>Metadata like: owner details, the last Helm release if using Helm, sidecar injection.</p>
<p>You could store owner details in labels, but K8s uses <strong>labels</strong> for traffic routing from a <strong>Service to a Deployment's Pods</strong>, and the labels must be the <strong>same</strong> on both resources (Deployment & Service) to route traffic.</p>
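<p>For example, a minimal sketch of how a label selector ties a Service to a Deployment's Pods, with an annotation carrying non-identifying metadata (all names here are made up):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: web              # must match the Pod template labels below
  template:
    metadata:
      labels:
        app: web            # identifying label, used for selection/routing
      annotations:
        owner: "team-payments"   # non-identifying metadata only
    spec:
      containers:
        - name: web
          image: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                # routes traffic to Pods carrying this label
  ports:
    - port: 80
</code></pre>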
<p>What would you do in that case to match labels between resources? Use the same service-owner name inside every Deployment & Service, when you are running multiple distributed services managed by different teams and service owners?</p>
<p>If you notice, some of Istio's annotations are just for storing metadata, like <code>install.operator.istio.io/chart-owner</code> and <code>install.operator.istio.io/owner-generation</code>.</p>
<p>Read more at : <a href="https://istio.io/latest/docs/reference/config/annotations/" rel="nofollow noreferrer">https://istio.io/latest/docs/reference/config/annotations/</a></p>
<p>You should also check once syntax of both <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set" rel="nofollow noreferrer">label</a> and <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/#syntax-and-character-set" rel="nofollow noreferrer">annotation</a>.</p>
|
<p>I'm just trying to run a simple batch job and getting this error "MountVolume.SetUp failed for volume "kube-api-access-cvwdt" : object "default"/"kube-root-ca.crt" not registered"</p>
<p>Here are my logs from running describe pod on that pod:</p>
<pre><code> Normal Scheduled 59s default-scheduler Successfully assigned default/stock-api to ip-192-168-63-5.us-west-2.compute.internal
Normal Pulling 58s kubelet Pulling image "mpriv32/stockapi:latest"
Normal Pulled 38s kubelet Successfully pulled image "mpriv32/stockapi:latest" in 19.862095063s
Normal Created 35s kubelet Created container stock-api
Normal Started 35s kubelet Started container stock-api
Warning FailedMount 33s (x3 over 34s) kubelet MountVolume.SetUp failed for volume "kube-api-access-cvwdt" : object "default"/"kube-root-ca.crt" not registered
</code></pre>
<p>My job.yaml file to create the pod</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: stock-api
labels:
app: stock-api
spec:
containers:
- name: stock-api
image: mpriv32/stock-api:latest
envFrom:
- secretRef:
name: api-credentials
restartPolicy: Never
</code></pre>
<p>Just in case it makes a difference, I'm trying to run this on EKS</p>
| <p>You might be facing a bug on the kubelet that is discussed <a href="https://github.com/kubernetes/kubernetes/issues/105204" rel="noreferrer">here</a> and was fixed in K8s 1.23.6.</p>
<p>Failure should be sporadic, so a simple recreation of your Pod might already fix it. More reliably, you could upgrade K8s or disable <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#use-the-default-service-account-to-access-the-api-server" rel="noreferrer">automountServiceAccountToken</a>. This stops the CA in question from being mounted into your Pod. Since the CA is only required, if your Pod needs to talk to the Kube API server, this is a sensible security measure anyway.</p>
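<p>A minimal sketch of disabling the token mount on your Pod (only do this if the workload does not call the Kubernetes API):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: stock-api
spec:
  automountServiceAccountToken: false   # stops the kube-api-access-* volume (and kube-root-ca.crt) from being mounted
  containers:
    - name: stock-api
      image: mpriv32/stock-api:latest
</code></pre>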
|
<p>I have a GKE cluster that uses a mix of Cloud IAM and cluster RBAC rules for resource access. For granularity, we use RBAC bindings for certain resources on the cluster, but I'm unable to find a place where those events are logged.</p>
<p>How do I see the logs for when cluster RBAC denies a user the permissions to do something? I can only see IAM related logs in Cloud Logging's audit logs. I'd like to know when the cluster itself denies access.</p>
| <p>You can check the Kube API logs</p>
<pre><code>kubectl proxy &
curl -s http://localhost:8001/logs/kube-apiserver.log
</code></pre>
<p>While GKE logs : <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/audit-logging#viewing_logs" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/audit-logging#viewing_logs</a></p>
|
<p>I am trying to authenticate in ArgoCD using Keycloak. I am following <a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/user-management/keycloak/" rel="nofollow noreferrer">this guide</a>, but there is a problem. ArgoCD redirects me to http://URL/<strong>auth</strong>/login?return_url=... which throws this Bad Request 400 - Invalid return_url. The correct link though is http://URL/login?return_url=..., but I don't see where I should change it so it doesn't append <strong>auth</strong> to the path.</p>
<p>The <strong>Valid Redirect URIs</strong> field is set to <strong>*</strong> in the Keycloak client for ArgoCD.</p>
| <p>I had the same issue. The culprit was the missing <code>url: https://yourargocdurl</code> entry in the ConfigMap <code>argocd-cm</code>.</p>
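<p>A sketch of what that entry could look like (replace the host with your own externally reachable ArgoCD URL):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  url: https://argocd.example.com   # must match the URL users (and Keycloak redirects) actually use
</code></pre>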
|
<p>I get the following error message whenever I run a pulumi command. I verified that my kubeconfig file is <code>apiVersion: v1</code>. I updated <code>client.authentication.k8s.io/v1alpha1</code> to <code>client.authentication.k8s.io/v1beta1</code> and still have the issue. What could be the reason for this error message?</p>
<pre><code>Kubeconfig user entry is using deprecated API version client.authentication.k8s.io/v1alpha1. Run 'aws eks update-kubeconfig' to update.
</code></pre>
| <p>The bug report for this issue is <a href="https://github.com/pulumi/pulumi-eks/issues/599" rel="noreferrer">here</a></p>
<p>The underlying cause is that the AWS cli shipped a breaking change in a minor version release. You can see this <a href="https://github.com/aws/aws-cli/issues/6920" rel="noreferrer">here</a></p>
<p>I'm assuming here you're using the <code>pulumi-eks</code> package in order to provision an EKS cluster greater than <code>v1.22</code>. The EKS package uses a resource provider to configure some EKS resources like the <code>aws-auth</code> config map, and this isn't the same transient kubeconfig you're referring to in <code>~/.kube/config</code></p>
<p>In order to fix this, you need to do the following:</p>
<ul>
<li>Ensure your <code>aws-cli</code> version is greater than <code>1.24.0</code> or <code>2.7.0</code></li>
<li>Ensure you've updated your <code>pulumi-eks</code> package in your language SDK package manager to greater than <code>0.40.0</code>. This also means updating the provider in your existing stack (see the sketch after this list).</li>
<li>Ensure you have the version of <code>kubectl</code> installed locally that matches your cluster version that has been provisioned</li>
</ul>
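<p>A sketch of checking and updating these locally, assuming the Node.js SDK (adjust the package-manager command for your language):</p>
<pre><code># check the AWS CLI version (needs >= 1.24.0 / 2.7.0)
aws --version

# update the pulumi-eks package in a Node.js Pulumi project
npm install @pulumi/eks@latest

# make sure your local kubectl roughly matches the cluster version
kubectl version --client
</code></pre>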
|
<p>In <a href="https://kubernetes.github.io/ingress-nginx/deploy/baremetal/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/baremetal/</a>,
in MetalLB mode, one node attracts all the traffic for ingress-nginx.
With a NodePort we can gather all the traffic and load balance it to pods through the service.</p>
<p>What is the difference between NodePort and MetalLB?</p>
| <p>I was also struggling a bit to understand: why would I need a Service of type LoadBalancer if I can use a NodePort service, which would allow me to access a service through a node port on all the nodes (load balanced by kube-proxy)?</p>
<p>I think the main reason is security. A NodePort service forces you to expose your K8s nodes' IP addresses to users,
whereas when using a LoadBalancer service (either with MetalLB or any cloud provider), as @starfry mentioned, it allocates services with separate dedicated IP addresses allocated from a pool.</p>
<p>A NodePort service also only allows exposing ports in the range 30000-32767, as the port is exposed on the K8s node itself and it would otherwise be possible to trample on real ports used by the node.
So another reason would be exposing a service outside of this range.</p>
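<p>For completeness, a minimal sketch of the kind of Service MetalLB serves; the external IP comes out of MetalLB's configured address pool instead of a node address (names are placeholders):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer        # MetalLB assigns an external IP from its address pool
  selector:
    app: my-app
  ports:
    - port: 80              # any port, not restricted to 30000-32767
      targetPort: 8080
</code></pre>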
|
<p>I have an nginx-pod which redirects traffic into Kubernetes services and stores related certificates insides its volume. I want to monitor these certificates - mainly their expiration.</p>
<p>I found out that there is a TLS integration in Datadog (we use Datadog in our cluster): <a href="https://docs.datadoghq.com/integrations/tls/?tab=host" rel="nofollow noreferrer">https://docs.datadoghq.com/integrations/tls/?tab=host</a>.</p>
<p>They provide sample file, which can be found here: <a href="https://github.com/DataDog/integrations-core/blob/master/tls/datadog_checks/tls/data/conf.yaml.example" rel="nofollow noreferrer">https://github.com/DataDog/integrations-core/blob/master/tls/datadog_checks/tls/data/conf.yaml.example</a></p>
<p>To be honest, I am completely lost and do not understand the comments in the sample file, such as:</p>
<pre><code>## @param server - string - required
## The hostname or IP address with which to connect.
</code></pre>
<p>I want to monitor certificates that are stored in the pod. Does that mean this value should be localhost, or do I need to somehow iterate over all the certificates that are stored using this value (such as server_names in nginx.conf)?
If anyone could help me with setting up a sample configuration, I would be really grateful; if there are any more details I should provide, that is not a problem at all.</p>
| <h3>TLS Setup on Host</h3>
<p>You can use a host-type instance to track all your certificate expiration dates.</p>
<p>1- Install the TLS integration from the Datadog UI</p>
<p>2- Create an instance and install the Datadog agent on it</p>
<p>3- Create /etc/datadog-agent/conf.d/tls.d/conf.yaml</p>
<p>4- Edit the following template for your needs:</p>
<pre><code>init_config:
instances:
## @param server - string - required
## The hostname or IP address with which to connect.
#
- server: https://yourDNS1.com/
tags:
- dns:yourDNS.com
- server: https://yourDNS2.com/
tags:
- dns:yourDNS2
- server: yourDNS3.com
tags:
- dns:yourDNS3
- server: https://yourDNS4.com/
tags:
- dns:yourDNS4.com
- server: https://yourDNS5.com/
tags:
- dns:yourDNS5.com
- server: https://yourDNS6.com/
tags:
- dns:yourDNS6.com
</code></pre>
<p>5- Restart datadog-agent</p>
<pre><code>systemctl restart datadog-agent
</code></pre>
<p>6- Check the status to see if the TLS check is running successfully:</p>
<pre><code>watch systemctl status datadog-agent
</code></pre>
<p>8- Create a TLS Overview Dashboard</p>
<p>9- Create a Monitor for getting alert on expiration dates</p>
<hr />
<h3>TLS Setup on Kubernetes</h3>
<p>1- Create a ConfigMap and attach that as a Volume
<a href="https://docs.datadoghq.com/agent/kubernetes/integrations/?tab=configmap" rel="nofollow noreferrer">https://docs.datadoghq.com/agent/kubernetes/integrations/?tab=configmap</a></p>
|
<p>I have installed the nginx ingress controller. As I understand it, one of the main reasons to use ingress is to save money by not creating multiple load balancers.</p>
<p>My kubesphere-console service yaml manifest looks like below:</p>
<pre><code>cat kubesphere-console.yaml
apiVersion: v1
kind: Service
metadata:
annotations:
meta.helm.sh/release-name: ks-core
meta.helm.sh/release-namespace: kubesphere-system
creationTimestamp: "2022-05-30T04:51:22Z"
finalizers:
- service.kubernetes.io/load-balancer-cleanup
labels:
app: ks-console
app.kubernetes.io/managed-by: Helm
tier: frontend
version: v3.1.0
name: ks-console
namespace: kubesphere-system
resourceVersion: "785863"
uid: 8628c2d0-164b-499f-ac0c-254ac77aa48c
spec:
allocateLoadBalancerNodePorts: true
clusterIP: 10.0.29.29
clusterIPs:
- 10.0.29.29
externalTrafficPolicy: Cluster
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- name: nginx
nodePort: 30880
port: 80
protocol: TCP
targetPort: 8000
selector:
app: ks-console
tier: frontend
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- ip: 20.248.217.111
</code></pre>
<p>The kubesphere-console ingress route yaml manifest is like below:</p>
<pre><code>cat ingress-route-kubesphere.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: kubesphere-console
annotations:
nginx.ingress.kubernetes.io/ssl-redirect: "false"
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
ingressClassName: nginx
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: ks-console
port:
number: 80
- path: /(.*)
pathType: Prefix
backend:
service:
name: ks-console
port:
number: 80
---
</code></pre>
<p>I created the ingress controller on the AKS cluster as below:</p>
<pre><code>NAMESPACE=ingress-basic
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
--create-namespace \
--namespace $NAMESPACE \
--set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
</code></pre>
<p>The LB created for the nginx ingress controller is shown below:</p>
<pre><code> k get svc -ningress-basic
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.0.194.38 20.92.133.79 80:32703/TCP,443:31053/TCP 23h
ingress-nginx-controller-admission ClusterIP 10.0.189.8 <none> 443/TCP 23h
</code></pre>
<p>Kindly do note that I am not that great with Ingress resources/networking, so I am still learning. My main intention is to use one ingress controller to expose all applications.</p>
<p>Example:</p>
<p>Kubesphere-console --> Port 80 and Endpoint should be ksconsole</p>
<p>tekton-pipelines ----> Port 80 and Endpoint should be tekton-dashboard</p>
<p>I would sincerely appreciate any help.
P.S.: I investigated the ingress controller pod logs, but no luck there; I really don't know what they mean:</p>
<pre><code>k logs -f ingress-nginx-controller-6648b5dbb8-ntm8t -ningress-basic
10.240.0.4 - - [02/Jun/2022:12:29:42 +0000] "GET /healthz HTTP/1.1" 302 43 "-" "Load Balancer Agent" 194 0.002 [kubesphere-system-ks-console-80] [] 10.240.0.41:8000 43 0.000 302 6ea21ca50e247cc5d5e56975c8946fcc
10.240.0.4 - - [02/Jun/2022:12:29:42 +0000] "GET /healthz HTTP/1.1" 302 43 "-" "Load Balancer Agent" 194 0.003 [kubesphere-system-ks-console-80] [] 10.240.0.41:8000 43 0.004 302 1f2e1bfa0223e6b3b630a59fe7dba4d1
10.240.0.4 - - [02/Jun/2022:12:29:48 +0000] "GET /healthz HTTP/1.1" 302 43 "-" "Load Balancer Agent" 194 0.004 [kubesphere-system-ks-console-80] [] 10.240.0.41:8000 43 0.000 302 dbb5af0fd6426f783c337da06912e427
10.240.0.4 - - [02/Jun/2022:12:29:48 +0000] "GET /healthz HTTP/1.1" 302 43 "-" "Load Balancer Agent" 194 0.003 [kubesphere-system-ks-console-80] [] 10.240.0.41:8000 43 0.004 302 9fc084018c70c0f61f379e2266a056f8
10.240.0.4 - - [02/Jun/2022:12:29:54 +0000] "GET /healthz HTTP/1.1" 302 43 "-" "Load Balancer Agent" 194 0.003 [kubesphere-system-ks-console-80] [] 10.240.0.41:8000 43 0.004 302 f9e4c75d81aa0e7ff03e402dfdca2d35
10.240.0.4 - - [02/Jun/2022:12:29:54 +0000] "GET /healthz HTTP/1.1" 302 43 "-" "Load Balancer Agent" 194 0.003 [kubesphere-system-ks-console-80] [] 10.240.0.41:8000 43 0.000 302 187cc1fdb0fb1499c1ef040e8aeb5e01
10.240.0.4 - - [02/Jun/2022:12:30:00 +0000] "GET /healthz HTTP/1.1" 302 43 "-" "Load Balancer Agent" 194 0.003 [kubesphere-system-ks-console-80] [] 10.240.0.41:8000 43 0.004 302 9b3b1ae4165184ed16fa974be51cd460
10.240.0.4 - - [02/Jun/2022:12:30:00 +0000] "GET /healthz HTTP/1.1" 302 43 "-" "Load Balancer Agent" 194 0.003 [kubesphere-system-ks-console-80] [] 10.240.0.41:8000 43 0.004 302 eac2591d333af20ab1a3ee4052327291
10.240.0.4 - - [02/Jun/2022:12:30:06 +0000] "GET /healthz HTTP/1.1" 302 43 "-" "Load Balancer Agent" 194 0.004 [kubesphere-system-ks-console-80] [] 10.240.0.41:8000 43 0.000 302 f08f30018bcfd229adb1ed36b6473563
10.240.0.4 - - [02/Jun/2022:12:30:06 +0000] "GET /healthz HTTP/1.1" 302 43 "-" "Load Balancer Agent" 194 0.002 [kubesphere-system-ks-console-80] [] 10.240.0.41:8000 43 0.000 302 f157141e253c7fcc61008440ff047605
10.240.0.4 - - [02/Jun/2022:12:30:12 +0000] "GET /healthz HTTP/1.1" 302 43 "-" "Load Balancer Agent" 194 0.003 [kubesphere-system-ks-console-80] [] 10.240.0.41:8000 43 0.004 302 6ba07067e1345b665e16a3cab03ee50c
10.240.0.4 - - [02/Jun/2022:12:30:12 +0000] "GET /healthz HTTP/1.1" 302 43 "-" "Load Balancer Agent" 194 0.003 [kubesphere-system-ks-console-80] [] 10.240.0.41:8000 43 0.004 302 4978c44153381d1aac6bc959c4afa72d
10.240.0.4 - - [02/Jun/2022:12:30:18 +0000] "GET /healthz HTTP/1.1" 302 43 "-" "Load Balancer Agent" 194 0.003 [kubesphere-system-ks-console-80] [] 10.240.0.41:8000 43 0.004 302 fc1e6e7fd4f4219ef6ca38689275035a
10.240.0.4 - - [02/Jun/2022:12:30:18 +0000] "GET /healthz HTTP/1.1" 302 43 "-" "Load Balancer Agent" 194 0.003 [kubesphere-system-ks-console-80] [] 10.240.0.41:8000 43 0.000 302 1e286ef7a2b36a5b04521e59657e0433
</code></pre>
| <p>Usually, you can access your services directly through the external IP (wildcard) of the ingress-controller svc if you create an ingress without a specified host. You can set a domain name for the ingress after configuring DNS resolution correctly.</p>
<p>Here is an example of exposing both <code>ks-apiserver</code> and <code>ks-console</code> through nginx-ingress-controller.</p>
<p>nginx controller svc</p>
<pre><code>root@master:~# kubectl -n ingress-nginx get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.233.11.24 <none> 80:30882/TCP,443:31054/TCP 15m
ingress-nginx-controller-admission ClusterIP 10.233.18.191 <none> 443/TCP 37m
</code></pre>
<p>ingress of <code>ks-apiserver</code> and <code>ks-console</code></p>
<pre><code>root@master:~# kubectl -n kubesphere-system get ing api -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
kubesphere.io/creator: admin
creationTimestamp: "2022-06-02T08:48:48Z"
generation: 8
name: kubesphere
namespace: kubesphere-system
resourceVersion: "982261"
uid: f8f6cadf-874c-4996-8759-3467ac33d61c
spec:
ingressClassName: nginx
rules:
- host: api.172.31.50.170.nip.io
http:
paths:
- backend:
service:
name: ks-apiserver
port:
number: 80
path: /
pathType: ImplementationSpecific
- host: console.172.31.50.170.nip.io
http:
paths:
- backend:
service:
name: ks-console
port:
number: 80
path: /
pathType: ImplementationSpecific
status:
loadBalancer:
ingress:
- ip: 172.31.50.170
</code></pre>
<p>Verify the ingresses</p>
<pre><code>$ curl http://api.172.31.50.170.nip.io:30882/
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {
},
"status": "Failure",
"message": "forbidden: User \"system:anonymous\" cannot GET path \"/\"",
"reason": "Forbidden",
"details": {
},
"code": 403
}%
$ curl http://console.172.31.50.170.nip.io:30882/
Redirecting to <a href="/login">/login</a>.%
</code></pre>
|
<p>I am fairly new to GCP and I have a REST URI to upload large files.<br />
I have an ingress-nginx-controller service and want to change it to allow uploading files larger than 1 MB and to set a limit.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/version":"0.35.0","helm.sh/chart":"ingress-nginx-2.13.0"},"name":"ingress-nginx-controller","namespace":"ingress-nginx"},"spec":{"externalTrafficPolicy":"Local","ports":[{"name":"http","port":80,"protocol":"TCP","targetPort":"http"},{"name":"https","port":443,"protocol":"TCP","targetPort":"https"}],"selector":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/name":"ingress-nginx"},"type":"LoadBalancer"}}
creationTimestamp: "2020-09-21T18:37:27Z"
finalizers:
- service.kubernetes.io/load-balancer-cleanup
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/version: 0.35.0
helm.sh/chart: ingress-nginx-2.13.0
name: ingress-nginx-controller
namespace: ingress-nginx
</code></pre>
<p>This is the error it throws :</p>
<pre><code><html>
<head><title>413 Request Entity Too Large</title></head>
<body>
<center><h1>413 Request Entity Too Large</h1></center>
<hr><center>nginx/1.19.2</center>
</body>
</html>
</code></pre>
| <p>If you need to increase the body size of files you upload via the ingress controller, you need to add an annotation to your ingress resource:</p>
<pre><code>nginx.ingress.kubernetes.io/proxy-body-size: 8m
</code></pre>
<p>Documentation available here: <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-max-body-size" rel="noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#custom-max-body-size</a></p>
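<p>For example, a sketch of where the annotation goes on an Ingress (resource names here are placeholders):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: upload-ingress
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 8m   # setting it to 0 disables the size check entirely
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: upload-api
                port:
                  number: 80
</code></pre>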
|
<p>This is my first time using GitLab for EKS and I feel so lost. I've been following the docs and so far I</p>
<ul>
<li>Created a project on GitLab that contains my kubernetes manifest files</li>
<li>Created a config.yaml in that project in the directory .gitlab/agents/stockagent</li>
</ul>
<p>Here's the <strong>config.yaml</strong>, my project name is "Stock-Market-API-K8s" and my k8s manifests are in the root directory of that project</p>
<pre class="lang-yaml prettyprint-override"><code>ci_access:
projects:
- id: "root/Stock-Market-API-K8s"
</code></pre>
<p>In my root directory of my project, I also have a <strong>.gitlab-ci.yml</strong> file and here's the contents of that</p>
<pre class="lang-yaml prettyprint-override"><code>deploy:
image:
name: mpriv32/stock-api:latest
entrypoint: ['']
script:
- kubectl config get-contexts
- kubectl config use-context .gitlab/agents/stockagent
- kubectl get pods
</code></pre>
<p>Using the default example from the docs, it seems that the get-contexts script is the one that failed. Here's the full error from my logs</p>
<pre class="lang-none prettyprint-override"><code>Executing "step_script" stage of the job script
00:01
Using docker image sha256:58ddf823e9d7ee4c0e75779b7e01dab9b11ac0d985d1b2d2fe6c6b95a849573d for mpriv32/stock-api:latest with digest mpriv32/stock-api@sha256:a2e79a2c3a57327f93e36ec55297a606626e4dc8d72e469dd4dc2f3c1f589bac ...
$ kubectl config get-contexts
/bin/bash: line 123: kubectl: command not found
Cleaning up project directory and file based variables
00:00
ERROR: Job failed: exit code 1
</code></pre>
<p>Here's my <strong>job.yaml</strong> file for my kubernetes pod, just in case it plays a factor at all</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: stock-api
labels:
app: stock-api
spec:
containers:
- name: stock-api
image: mpriv32/stock-api:latest
envFrom:
- secretRef:
name: api-credentials
restartPolicy: Never
</code></pre>
| <p>In your case, I guess the image (mpriv32/stock-api:latest) that you are using doesn't ship <code>kubectl</code> as a globally available executable. Please use an image that contains kubectl, for example bitnami/kubectl:</p>
<pre><code>deploy:
image:
name: bitnami/kubectl
</code></pre>
<p>The <code>image</code> keyword is the name of the Docker image the Docker executor uses to run CI/CD jobs.
For more information, see <a href="https://docs.gitlab.com/ee/ci/docker/using_docker_images.html" rel="nofollow noreferrer">https://docs.gitlab.com/ee/ci/docker/using_docker_images.html</a></p>
<p>Or you can build your docker image on top of bitnami/kubectl</p>
<pre><code>FROM bitnami/kubectl:1.20.9 as kubectl
FROM ubuntu-or-whatever-image:tag
# Do whatever you need to with the
# ubuntu-or-whatever-image:tag image, then:
COPY --from=kubectl /opt/bitnami/kubectl/bin/kubectl /usr/local/bin/
</code></pre>
<p>Or you can go with the approach of building an image from scratch,
installing the dependencies that you use into it.</p>
<p>Something like:</p>
<pre><code>FROM ubuntu:18.10
WORKDIR /root
COPY bootstrap.sh ./
RUN apt-get update && apt-get -y install --no-install-recommends \
gnupg \
curl \
wget \
git \
apt-transport-https \
ca-certificates \
zsh \
&& rm -rf /var/lib/apt/lists/*
ENV SHELL /usr/bin/zsh
# Install kubectl
RUN curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee -a /etc/apt/sources.list.d/kubernetes.list && \
apt-get update && apt-get -y install --no-install-recommends kubectl
</code></pre>
|
<p>Scenario is like this:</p>
<p>I have a pod running in a node in K8s cluster in GCP. cluster is created using kops and pod is created using kne_cli.</p>
<p>I know only the name of the pod e.g. "test-pod".</p>
<p>My requirement is to configure something in the node where this pod is running. e.g. I want to update "iptables -t nat" table in node.</p>
<p>how to access the node and configure it from within a pod?</p>
<p>any suggestion will be helpful.</p>
| <p>You can use a Job, a Deployment, or a plain Pod; it isn't clear how your Pod is being managed. If you just want to run this task, a Job is a good fit.</p>
<p>One option is the <strong>SSH</strong> way:</p>
<p>You can run a POD that lists the Nodes (or targets a specific node, as needed) and runs an SSH command to connect to that node.</p>
<p>That way you will be able to access the Node from the POD and run commands on top of the Node.</p>
<p>You can check this document for ref : <a href="https://alexei-led.github.io/post/k8s_node_shell/" rel="nofollow noreferrer">https://alexei-led.github.io/post/k8s_node_shell/</a></p>
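<p>For reference, here is a minimal sketch of the idea behind that article: a privileged pod sharing the host's PID namespace, from which you use <code>nsenter</code> to enter the node's namespaces and run commands such as <code>iptables -t nat -L</code> directly on the node. The pod name is hypothetical, and it assumes the image ships <code>nsenter</code> (alpine's busybox does):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: node-shell            # hypothetical name
spec:
  nodeName: <your-node-name>  # schedule it onto the node that runs "test-pod"
  hostPID: true
  restartPolicy: Never
  containers:
  - name: shell
    image: alpine:3.16
    command: ["sleep", "86400"]
    securityContext:
      privileged: true
</code></pre>
<p>Then <code>kubectl exec -it node-shell -- nsenter -t 1 -m -u -i -n sh</code> drops you into a shell in the node's namespaces, where you can inspect or update the nat table.</p>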
<p>Option <strong>two</strong> :</p>
<p>You can <strong>mount</strong> a <strong>shell script</strong> containing your <strong>iptables command</strong> onto the <strong>Node</strong> and invoke that script from the POD whenever you want to run it.</p>
<p><strong>Example</strong> :</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: command
data:
command.sh: |
#!/bin/bash
echo "running sh script on node..!"
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: command
spec:
schedule: "*/1 * * * *"
jobTemplate:
spec:
template:
spec:
serviceAccountName: cron-namespace-admin
containers:
- name: command
image: IMAGE:v1
imagePullPolicy: IfNotPresent
volumeMounts:
- name: commandfile
mountPath: /test/command.sh
subPath: command.sh
- name: script-dir
mountPath: /test
restartPolicy: OnFailure
volumes:
- name: commandfile
configMap:
name: command
defaultMode: 0777
- name: script-dir
hostPath:
path: /var/log/data
type: DirectoryOrCreate
</code></pre>
<p>Use <strong>privileged</strong> mode</p>
<pre><code> securityContext:
privileged: true
</code></pre>
<blockquote>
<p>Privileged - determines if any container in a pod can enable
privileged mode. By default a container is not allowed to access any
devices on the host, but a "privileged" container is given access to
all devices on the host. This allows the container nearly all the same
access as processes running on the host. This is useful for containers
that want to use linux capabilities like manipulating the network
stack and accessing devices.</p>
</blockquote>
<p>Read more : <a href="https://kubernetes.io/docs/concepts/security/pod-security-policy/#privileged" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/security/pod-security-policy/#privileged</a></p>
|
<p>Any help or hint would be greatly appreciated.
I am using Windows 11 Professional.</p>
<p>I am connected to AWS, and when I type the command "kops" I get the error message below.</p>
<p>[ec2-user@ip-172-31-10-126 ~]$ curl -Lo kops https://github.com/kubernetes/kops/releases/download/$(curl -s <a href="https://api.github.com/repos/kubernetes/kops/releases/latest" rel="nofollow noreferrer">https://api.github.com/repos/kubernetes/kops/releases/latest</a> | grep tag_name | cut -d '"' -f 4)/kops-darwin-amd64</p>
<pre><code>  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  157M  100  157M    0     0  98.8M      0  0:00:01  0:00:01 --:--:--  119M
</code></pre>
<p>[ec2-user@ip-172-31-10-126 ~]$ chmod +x kops</p>
<p>[ec2-user@ip-172-31-10-126 ~]$ sudo mv kops /usr/local/bin/kops</p>
<p>[ec2-user@ip-172-31-10-126 ~]$ kops</p>
<p>-bash: /usr/local/bin/kops: cannot execute binary file</p>
<p><a href="https://i.stack.imgur.com/bRgU4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bRgU4.png" alt="enter image description here" /></a></p>
| <p>You're only using Windows 11 as an SSH client - it's not doing anything for Kubernetes.</p>
<p>As @jordanm indicates you're trying to download the MacOS version. Change your command to:</p>
<pre><code>curl -Lo kops https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
</code></pre>
<p>to get the Linux version</p>
|
<p>To run prefect jobs in azure kubernetes (aks) virtual nodes, besides adding the recommended <code>nodeSelector/tolerations</code>:</p>
<pre class="lang-py prettyprint-override"><code>"nodeSelector": {
"kubernetes.io/role": "agent",
"beta.kubernetes.io/os": "linux",
"type": "virtual-kubelet",
},
"tolerations": [
{"key": "virtual-kubelet.io/provider", "operator": "Exists"},
{"key": "azure.com/aci", "effect": "NoSchedule"},
],
</code></pre>
<p>container creation fails with:</p>
<pre class="lang-bash prettyprint-override"><code>Warning ProviderCreateFailed pod/prefect-job-XXXXX-XXXXX ACI does not support providing args without specifying the command. Please supply both command and args to the pod spec.
</code></pre>
<p>The different <code>command</code> options stated in <a href="https://linen.prefect.io/t/48087/topic" rel="nofollow noreferrer">https://linen.prefect.io/t/48087/topic</a> do not work.</p>
<p>How to solve it?</p>
| <pre><code>command": ["tini", "-g", "--"],
</code></pre>
<p>solves the issue and allows prefect jobs to run in aks virtual nodes.</p>
<p>Here an example of a working <code>job_template</code>:</p>
<pre class="lang-py prettyprint-override"><code>{
"apiVersion": "batch/v1",
"kind": "Job",
"spec": {
"template": {
"spec": {
"containers": [
{
"name": "flow",
"command": ["tini", "-g", "--"],
}
],
"nodeSelector": {
"kubernetes.io/role": "agent",
"beta.kubernetes.io/os": "linux",
"type": "virtual-kubelet",
},
"tolerations": [
{"key": "virtual-kubelet.io/provider", "operator": "Exists"},
{"key": "azure.com/aci", "effect": "NoSchedule"},
],
"imagePullSecrets": [
{"name": "regcred"},
],
},
}
},
}
</code></pre>
|
<p>I am trying to execute below commands in a Kubeflow(v1.4.1) Jupyter Notebook.</p>
<pre><code>KServe = KServeClient()
KServe.create(isvc)
</code></pre>
<p>I am getting mentioned error while attempting to execute above mentioned command.</p>
<pre><code>ApiException: (403)
Reason: Forbidden
HTTP response headers: HTTPHeaderDict({'Audit-Id': '86bb1b59-20ae-4127-9732-d0355671b12f', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Content-Type-Options': 'nosniff', 'X-Kubernetes-Pf-Flowschema-Uid': 'a5a5d542-8a9a-4031-90d9-4faf01914391', 'X-Kubernetes-Pf-Prioritylevel-Uid': '6846984d-14c5-4f4d-9251-fe97d91b17fc', 'Date': 'Thu, 02 Jun 2022 07:53:30 GMT', 'Content-Length': '429'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"inferenceservices.serving.kserve.io is forbidden: User \"system:serviceaccount:kubeflow-user-example-com:default-editor\" cannot create resource \"inferenceservices\" in API group \"serving.kserve.io\" in the namespace \"kubeflow-user-example-com\"","reason":"Forbidden","details":{"group":"serving.kserve.io","kind":"inferenceservices"},"code":403}
</code></pre>
<p>As a mitigation step I have added underlying manifests to my Kubernetes cluster via kubectl apply -f <manifest-location.yaml></p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: sla-manager-service-role
namespace: default //(have tried it with kubeflow-user-example-com as well)
labels:
app: sla-manager-app
rules:
- apiGroups: ["serving.kserve.io"] # "" indicates the core API group
resources: ["inferenceservices"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: role-binding
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: sla-manager-service-role
subjects:
- kind: ServiceAccount
name: default //(have added it with default-editor as well)
namespace: kubeflow-user-example-com
</code></pre>
<p>But this does not resolved the issue.</p>
<p>Output I am receiving for</p>
<pre><code>kubectl auth can-i create inferenceservice --as=system:serviceaccount:kubeflow-user-example-com:default-editor -n default
(Output) no
</code></pre>
<pre><code>kubectl auth can-i create inferenceservices
(Output) yes
</code></pre>
<p>Can you please help me with what I am missing here?</p>
| <p>It turns out that KServe is not compatible with Kubeflow version 1.4 and only works with Kubeflow version >=1.5.</p>
<p>Switching to kfserving 0.6 resolved my issue.</p>
|
<p>I have a created an nginx pod and nginx clusterIP service and assign an externalIP to that service like below</p>
<pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
test-nginx ClusterIP 10.110.93.251 192.168.0.10 443/TCP,80/TCP,8000/TCP,5443/TCP 79m
</code></pre>
<p>In one of my application pod, I am trying to execute below command and get the fqdn of it.</p>
<pre><code>>>> import socket
>>> socket.getfqdn('192.168.0.10')
'test-nginx.test.svc.cluster.local'
</code></pre>
<p>It returns me the nginx service fqdn instead of my host machine fqdn. Is there a way to block dns resolution only for external-ip ? or is there any other workaround for this problem?</p>
| <p>You assigned an external ip to a <code>ClusterIP</code> service in Kubernetes, so you can access your application from outside the Cluster, but you are concerned about the Pods having access to that external ip and want to block the dns resolution.</p>
<p>This is not the best approach to your issue: Kubernetes has several ways to expose services without compromising security. For what you want, a better option may be to implement an <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/" rel="nofollow noreferrer">Ingress</a> instead.
<a href="https://i.stack.imgur.com/loQjJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/loQjJ.png" alt="enter image description here" /></a></p>
<p>As you can see in the diagram, the Ingress routes the incoming traffic to the desired service based on configured rules, isolating the outside world from your service and only allowing specific traffic to go in. You can also implement features as TLS termination for your HTTPS traffic, and it performs load balancing by default.</p>
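<p>As a rough sketch only (the host is a placeholder and it assumes you have an NGINX ingress controller installed), an Ingress in front of the service from your question could look like this:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-nginx-ingress        # hypothetical name
  namespace: test
spec:
  ingressClassName: nginx         # assumes an NGINX ingress controller
  rules:
  - host: nginx.example.com       # placeholder host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-nginx      # the existing ClusterIP service
            port:
              number: 80
</code></pre>
<p>With this in place, the service keeps a plain ClusterIP and no external IP needs to be assigned to it directly.</p>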
<p>Even further, if your main concern is security within your Cluster, you can take a look at the <a href="https://istio.io/latest/about/service-mesh/" rel="nofollow noreferrer">Istio Service mesh</a>.</p>
|
<p>I have to check for mandatory values based on some condition.<br />
My <em>values.yaml</em> is as below</p>
<pre class="lang-yaml prettyprint-override"><code>id: 3
test:
id: 2
test1:
id: 1
</code></pre>
<p>In my template, I need to check if <code>id</code> is present at <code>.Values.test.test1.id</code> and assign that value. If not fall back to <code>.Values.test.id</code> and at last <code>.Values.id</code>. But <code>id</code> has to be mandatory and I need to use required function.</p>
<p>My template is as below</p>
<pre><code>{{- if .Values.test.test1.id }}
<assign> {{ .Values.test.test1.id }}
{{- else }}
{{- $id := .Values.test.id }}
{{- $id2 := .Values.id }}
<assign> <need to check required with or of $id $id2> </assign>
{{- end }}
</code></pre>
<p>I know this can be solved with one <code>elseif</code> in between <code>if</code> and <code>else</code>. But I need to repeat the same logic for many ids.<br />
What is the best way to achieve this?</p>
| <p>It seems like a perfect fit for the <a href="https://helm.sh/docs/chart_template_guide/function_list/#coalesce" rel="nofollow noreferrer"><code>coalesce</code> function</a>, which</p>
<blockquote>
<p>takes a list of values and returns the first non-empty one.</p>
</blockquote>
<p><sup><em>Source: <a href="https://helm.sh/docs/chart_template_guide/function_list/#coalesce" rel="nofollow noreferrer">https://helm.sh/docs/chart_template_guide/function_list/#coalesce</a></em></sup></p>
<p>You will also need to use the <a href="https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-default-function" rel="nofollow noreferrer"><code>default</code> function</a> to convert the cases when the <code>test</code> or <code>test.test1</code> dictionaries are not defined.</p>
<p>All this together, gives:</p>
<pre><code>{{- $test := .Values.test | default dict -}}
{{- $test1 := $test.test1 | default dict -}}
{{- $id := required "Please provide an ID" (
coalesce $test1.id $test.id .Values.id
) -}}
id: {{ $id }}
</code></pre>
<p>And here are the test cases and results:</p>
<ul>
<li>Gives:
<pre class="lang-yaml prettyprint-override"><code>id: 1
</code></pre>
when <em>values.yaml</em> is
<pre class="lang-yaml prettyprint-override"><code>id: 3
test:
id: 2
test1:
id: 1
</code></pre>
</li>
<li>Gives:
<pre class="lang-yaml prettyprint-override"><code>id: 2
</code></pre>
when <em>values.yaml</em> is
<pre class="lang-yaml prettyprint-override"><code>id: 3
test:
id: 2
</code></pre>
</li>
<li>Gives:
<pre class="lang-yaml prettyprint-override"><code>id: 3
</code></pre>
when <em>values.yaml</em> is
<pre class="lang-yaml prettyprint-override"><code>id: 3
</code></pre>
</li>
<li>Gives:
<pre class="lang-none prettyprint-override"><code>Error: execution error at (demo/templates/test.yaml:3:11):
Please provide an ID
</code></pre>
when <em>values.yaml</em> is an empty file</li>
</ul>
|
<h2 id="background">Background:</h2>
<p>I have a GKE cluster which has suddenly stopped being able to pull my docker images from GCR; both are in the same GCP project. It has been working well for several months, no issues pulling images, and has now started throwing errors without having made any changes.</p>
<p>(NB: I'm generally the only one on my team who accesses Google Cloud, though it's entirely possible that someone else on my team may have made changes / inadvertently made changes without realising).</p>
<p>I've seen a few other posts on this topic, but the solutions offered in others haven't helped. Two of these posts stood out to me in particular, as they were both posted around the same day my issues started ~13/14 days ago. Whether this is coincidence or not who knows..</p>
<p><a href="https://stackoverflow.com/questions/68305918/troubleshooting-a-manifests-prod-403-error-from-kubernetes-in-gke">This post</a> has the same issue as me; unsure whether the posted comments helped them resolve, but it hasn't fixed for me. <a href="https://serverfault.com/questions/1069107/gke-node-from-new-node-pool-gets-403-on-artifact-registry-image">This post</a> seemed to also be the same issue, but the poster says it resolved by itself after waiting some time.</p>
<h2 id="the-issue">The Issue:</h2>
<p>I first noticed the issue on the cluster a few days ago. Went to deploy a new image by pushing image to GCR and then bouncing the pods <code>kubectl rollout restart deployment</code>.</p>
<p>The pods all then came back with <code>ImagePullBackOff</code>, saying that they couldn't get the image from GCR:</p>
<p><code>kubectl get pods</code>:</p>
<pre><code>XXX-XXX-XXX 0/1 ImagePullBackOff 0 13d
XXX-XXX-XXX 0/1 ImagePullBackOff 0 13d
XXX-XXX-XXX 0/1 ImagePullBackOff 0 13d
...
</code></pre>
<p><code>kubectl describe pod XXX-XXX-XXX</code>:</p>
<pre><code>Normal BackOff 20s kubelet Back-off pulling image "gcr.io/<GCP_PROJECT>/XXX:dev-latest"
Warning Failed 20s kubelet Error: ImagePullBackOff
Normal Pulling 8s (x2 over 21s) kubelet Pulling image "gcr.io/<GCP_PROJECT>/XXX:dev-latest"
Warning Failed 7s (x2 over 20s) kubelet Failed to pull image "gcr.io/<GCP_PROJECT>/XXX:dev-latest": rpc error: code = Unknown desc = failed to pull and unpack image "gcr.io/<GCP_PROJECT>/XXX:dev-latest": failed to resolve reference "gcr.io/<GCR_PROJECT>/XXX:dev-latest": unexpected status code [manifests dev-latest]: 403 Forbidden
Warning Failed 7s (x2 over 20s) kubelet Error: ErrImagePull
</code></pre>
<h2 id="troubleshooting-steps-followed-from-other-posts">Troubleshooting steps followed from other posts:</h2>
<p>I know that the image definitely exists in GCR -</p>
<ul>
<li>I can pull the image to my own machine (also removed all docker images from my machine to confirm it was really pulling)</li>
<li>I can see the tagged image if I look on the GCR UI on chrome.</li>
</ul>
<p>I've SSH'd into one of the cluster nodes and tried to docker pull manually, with no success:</p>
<pre><code>docker pull gcr.io/<GCP_PROJECT>/XXX:dev-latest
Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
</code></pre>
<p>(Also did a docker pull of a public mongodb image to confirm <em>that</em> was working, and it's specific to GCR).</p>
<p>So this leads me to believe it's an issue with the service account not having the correct permissions, as <a href="https://cloud.google.com/kubernetes-engine/docs/troubleshooting#gke_service_account_deleted" rel="noreferrer">in the cloud docs</a> under the 'Error 400/403' section. This seems to suggest that the service account has either been deleted, or edited manually.</p>
<p>During my troubleshooting, I tried to find out exactly <em>which</em> service account GKE was using to pull from GCR. In the steps outlined in the docs, it says that: <code>The name of your Google Kubernetes Engine service account is as follows, where PROJECT_NUMBER is your project number:</code></p>
<pre><code>service-PROJECT_NUMBER@container-engine-robot.iam.gserviceaccount.com
</code></pre>
<p>I found the service account and checked the polices - it did have one for <code>roles/container.serviceAgent</code>, but nothing specifically mentioning kubernetes as I would expect from the description in the docs.. '<em>the Kubernetes Engine Service Agent role</em>' (unless that is the one they're describing, in which case I'm no better off that before anyway..).</p>
<p>Must not have had the correct roles, so I then followed the steps to re-enable (disable then enable the Kubernetes API). Running <code>cloud projects get-iam-policy <GCP_PROJECT></code> again and diffing the two outputs (before/after), the only difference is that a service account for '@cloud-filer...' has been deleted.</p>
<p>Thinking maybe the error was something else, I thought I would try spinning up a new cluster. Same error - can't pull images.</p>
<h2 id="send-help">Send help..</h2>
<p>I've been racking my brains to try to troubleshoot, but I'm now out of ideas! Any and all help much appreciated!</p>
| <p>I don't know if it still helps, but I had the same issue and managed to fix it.</p>
<p>In my case I was deploying GKE through Terraform and did not specify the <code>oauth_scopes</code> property for the node pool, as shown in the <a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/container_cluster#example-usage---with-the-default-node-pool" rel="nofollow noreferrer">example</a>. As I understand it, you need to grant these OAuth scopes so the nodes can reach the GCP APIs they depend on, including pulling images from GCR.</p>
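<p>For illustration, a hedged sketch of what that can look like on a node pool; the scopes below are the ones GKE's default node pools typically get, and <code>devstorage.read_only</code> is the one needed for pulling images from GCR:</p>
<pre><code>resource "google_container_node_pool" "primary_nodes" {
  # ... cluster, name, node_count, etc. ...

  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/devstorage.read_only", # read access to GCR image storage
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]
  }
}
</code></pre>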
|
<p>I am using <a href="https://airflow.apache.org/docs/helm-chart/stable/index.html" rel="nofollow noreferrer">the Helm chart for Apache Airflow</a> and trying to set the password of the default user to the value of an environment variable:</p>
<pre class="lang-yaml prettyprint-override"><code>airflow:
env:
- name: PASSWORD
value: Hello, World!
webserver:
defaultUser:
password: $PASSWORD
</code></pre>
<p>However, this is setting the password to literally <code>$PASSWORD</code> instead of <code>Hello, World!</code>.</p>
<p>I have tried other things like <code>password: ${PASSWORD}</code> to no avail.</p>
| <p>Use the following syntax, as described in the <a href="https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/#using-environment-variables-inside-of-your-config" rel="nofollow noreferrer">official examples</a>:</p>
<pre><code>$(PASSWORD)
</code></pre>
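<p>Applied to the values from the question, that would look something like this, assuming the chart passes the value into a container field where Kubernetes performs <code>$(VAR)</code> expansion:</p>
<pre><code>airflow:
  env:
    - name: PASSWORD
      value: Hello, World!
webserver:
  defaultUser:
    password: $(PASSWORD)
</code></pre>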
|
<p>In my <code>istio-system</code> namespace, I have the following secret</p>
<pre><code>▶ k get secret istio-ca-secret -o yaml
apiVersion: v1
data:
ca-cert.pem: LS0tLS1CR...
ca-key.pem: LS0..
cert-chain.pem: ""
key.pem: ""
root-cert.pem: ""
</code></pre>
<p>While the following query works:</p>
<pre><code>kubectl get secret istio-ca-secret -n istio-system -o jsonpath="{.data}"
{"ca-cert.pem":"LS0t=...","ca-key.pem":"LS0tLS1","cert-chain.pem":"","key.pem":"","root-cert.pem":""}%
</code></pre>
<p>the following, which I execute trying to get only the <code>ca-cert.pem</code> value returns nothing</p>
<pre><code>kubectl get secret istio-ca-secret -n istio-system -o jsonpath="{.data.ca-cert.pem}"
</code></pre>
<p>why is that?</p>
| <p>You need to escape the dot in "ca-cert.pem" for the jsonpath expression to work, like this:</p>
<pre><code>kubectl get secret istio-ca-secret -n istio-system -o jsonpath="{.data.ca-cert\.pem}"
</code></pre>
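<p>Since the values in <code>.data</code> are base64-encoded, you will usually want to decode the output as well:</p>
<pre><code>kubectl get secret istio-ca-secret -n istio-system -o jsonpath="{.data.ca-cert\.pem}" | base64 -d
</code></pre>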
|
<p>I am trying to use workload identity for my Kubernetes cluster. I have created the service account in a new namespace. My issue is that I am not able to specify the namespace when I add the service account name to the pod deployment YAML.</p>
<p>Following is my pod spect file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test-scheduler
spec:
replicas: 1
selector:
matchLabels:
app: test-scheduler
template:
metadata:
labels:
app: test-scheduler
spec:
serviceAccountName: test-na/test-k8-sa
nodeSelector:
iam.gke.io/gke-metadata-server-enabled: "true"
containers:
- name: test-scheduler
image: gcr.io/PROJECT_ID/IMAGE:TAG
ports:
- name: scheduler-port
containerPort: 8002
protocol: TCP
env:
- name: NAMESPACE
value: test-scheduler
- name: CONTAINER_NAME
value: test-scheduler
---
apiVersion: v1
kind: Service
metadata:
name: test-scheduler
spec:
selector:
app: test-scheduler
ports:
- port: 8002
protocol: TCP
targetPort: scheduler-port
</code></pre>
<p>When I deploy this code using github actions I get this error:</p>
<pre><code>The Deployment "test-scheduler" is invalid: spec.template.spec.serviceAccountName: Invalid value: "test-na/test-k8-sa": a lowercase RFC 1123 subdomain must consist of lower case alphanumeric characters, '-' or '.',
</code></pre>
<p>When I remove the namespace in a file like this:</p>
<pre><code>serviceAccountName: test-k8-sa
</code></pre>
<p>It searches for the service account on default name space and fails.</p>
<p>My question here is what is the right way to specify the custom namespace with the service account in kubernetes?</p>
<p>I can start using the default but I am inclined to keep the namespace. I saw some reference to service account file but I don't really understand how to use them.</p>
<p>By the way, I am using this guide <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#gcloud_3" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity#gcloud_3</a></p>
| <p><code>...I have created the service account on a new namespace. My issue is that I am not able to specify the name space when I am trying to add the service account name on the pod deployment YML.</code></p>
<p>To assign the created service account to your deployment, you can create the deployment in the same namespace as the service account:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: test-scheduler
namespace: test-na # <-- add this line with the namespace where the service account resides
spec:
...
template:
...
spec:
serviceAccountName: test-k8-sa
...
</code></pre>
|
<p>I have a few services running that require the Source IP to be preserved to perform authentication properly. I am running a Kubernetes environment in AKS using Nginx as my Ingress controller and am having problems understanding how I can implement this properly.</p>
<p>I read through this</p>
<p><a href="https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-typeloadbalancer</a></p>
<p>as well as this</p>
<p><a href="https://github.com/kubernetes/ingress-nginx/issues/1067" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/1067</a></p>
<p>I read that setting service.spec.externalTrafficPolicy to Local should resolve my problem, but I am having a hard time understanding which Service I should apply this to. It almost sounds like this needs to be added to the nginx deployment, which would affect all deployments and is undesirable.</p>
<p>My question is, is there a way to apply this to my Service manifests for just the services that need it and not blanket add this to everything either with Nginx annotations or adjusting the Service manifest for the services I am deploying?</p>
<p>For reference, here is my Service manifest I am deploying with helm:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: {{ template "service.fullname" . }}
labels:
app: {{ template "service.name" . }}
chart: {{ template "service.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
spec:
type: {{ .Values.service.type }}. #ClusterIP
ports:
- port: {{ .Values.service.port }}
targetPort: {{ .Values.service.targetPort }}
protocol: TCP
name: http
selector:
app: {{ template "service.name" . }}
release: {{ .Release.Name }}
</code></pre>
| <p>You clearly need to add that setting to the <code>nginx ingress controller</code> service, the one with <code>type</code> = <code>LoadBalancer</code>.</p>
<p>If you think about it, there is no other option. <code>Nginx ingress controller</code> is the entry point in your cluster, so if you don't keep the source IP address there, there is no way you can have it in the services that come after that.</p>
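<p>For example, you can patch the controller's Service in place; the service and namespace names below assume a standard ingress-nginx install, so adjust them to your setup:</p>
<pre><code>kubectl patch svc ingress-nginx-controller -n ingress-nginx \
  -p '{"spec":{"externalTrafficPolicy":"Local"}}'
</code></pre>
<p>Note that with <code>Local</code>, traffic is only routed to nodes that actually run an ingress controller pod, but the original client IP is preserved and the backend Services behind the ingress don't need any changes.</p>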
|
<p>I'm trying to deploy a Kubernetes cluster in Oracle Cloud Infrastructure using Terraform.</p>
<p>I want that every node deployed (in private subnet) has the Bastion agent plugin activate in Cloud Agent.</p>
<p>But I cannot see how to define the details of the instance (setting agent_config in the node pool instances).</p>
<p>My code, until now is:</p>
<pre><code>
resource "oci_containerengine_cluster" "generated_oci_containerengine_cluster" {
compartment_id = var.cluster_compartment
endpoint_config {
is_public_ip_enabled = "true"
subnet_id = oci_core_subnet.oke_public_api.id
}
kubernetes_version = var.kubernetes_version
name = "josealbarran_labcloudnative_oke"
options {
kubernetes_network_config {
pods_cidr = "10.244.0.0/16"
services_cidr = "10.96.0.0/16"
}
service_lb_subnet_ids = [oci_core_subnet.oke_public_lb.id]
}
vcn_id = var.cluster_vcn
}
# Check doc: https://registry.terraform.io/providers/oracle/oci/latest/docs/resources/containerengine_node_pool
resource "oci_containerengine_node_pool" "node_pool01" {
cluster_id = "${oci_containerengine_cluster.generated_oci_containerengine_cluster.id}"
compartment_id = var.cluster_compartment
initial_node_labels {
key = "name"
value = "pool01"
}
kubernetes_version = var.kubernetes_version
name = "lab_cloud_native_oke_pool01"
node_config_details {
size = "${length(data.oci_identity_availability_domains.ads.availability_domains)}"
dynamic "placement_configs" {
for_each = data.oci_identity_availability_domains.ads.availability_domains[*].name
content {
availability_domain = placement_configs.value
subnet_id = oci_core_subnet.oke_private_worker.id
}
}
}
node_shape = "VM.Standard.A1.Flex"
node_shape_config {
memory_in_gbs = "16"
ocpus = "1"
}
node_source_details {
image_id = "ocid1.image.oc1.eu-frankfurt-1.aaaaaaaalgodii3qx3mfasp6ai22bja7mabfwsxiwkzxx7lhdfdbbuyqcznq"
source_type = "IMAGE"
}
ssh_public_key = "ssh-rsa AAAAB3xxxxxxxx......."
timeouts {
create = "60m"
delete = "90m"
}
}
</code></pre>
| <p>You can use the "<strong>cloudinit_config</strong>" to run the custom script in OKE node pool in OCI.</p>
<pre><code>second_script_template = templatefile("${path.module}/cloudinit/second.template.sh",{})
</code></pre>
<p>You can combine multiple scripts like this:</p>
<pre><code>data "cloudinit_config" "worker" {
gzip = false
base64_encode = true
part {
filename = "worker.sh"
content_type = "text/x-shellscript"
content = local.worker_script_template
}
part {
filename = "second.sh"
content_type = "text/x-shellscript"
content = local.second_script_template
}
part {
filename = "third.sh"
content_type = "text/x-shellscript"
content = local.third_script_template
}
}
</code></pre>
<p>Refer : <a href="https://github.com/oracle-terraform-modules/terraform-oci-oke/blob/main/docs/instructions.adoc#14-configuring-cloud-init-for-the-nodepools" rel="nofollow noreferrer">https://github.com/oracle-terraform-modules/terraform-oci-oke/blob/main/docs/instructions.adoc#14-configuring-cloud-init-for-the-nodepools</a></p>
<p>If you just want to edit the default script: <a href="https://github.com/oracle-terraform-modules/terraform-oci-oke/blob/main/docs/cloudinit.adoc" rel="nofollow noreferrer">https://github.com/oracle-terraform-modules/terraform-oci-oke/blob/main/docs/cloudinit.adoc</a></p>
|
<p>I'm moving my project to Kubernetes using Traefik for routing and MetalLB as my load balancer.</p>
<p>I've deployed several apps and I'd like to make use of official <a href="https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/" rel="nofollow noreferrer">Kubernetes-Dashboard</a>. So I deployed the Kubernetes-Dashboard using recommended config and created IngressRoute:</p>
<pre class="lang-yaml prettyprint-override"><code># dashboard.yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
entryPoints:
- websecure
routes:
- match: Host(`k8s.example.com`, `www.k8s.example.com`)
kind: Rule
middlewares:
# - name: https-redirectscheme
# - name: nginx-basic-auth
services:
- kind: Service
name: kubernetes-dashboard
# namespace: kubernetes-dashboard
port: 443
tls:
secretName: k8s.example.com-tls
</code></pre>
<p>It shows up in the Traefik Dashboard, but when I try to access k8s.example.com I get <code>Internal Server Error</code>.</p>
<p>Thank you</p>
| <p>I had the same problem, which is why I ended up on this question. When I find out how to use the <code>IngressRoute</code> I'll update this answer.</p>
<p>This answer describes how to use <code>NodePort</code> instead.</p>
<pre class="lang-bash prettyprint-override"><code>kubectl patch svc kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'
# Confirm
kubectl get svc -n kubernetes-dashboard kubernetes-dashboard -o yaml
# patch the dashboard
tee ~/nodeport_dashboard_patch.yaml<<EOF
spec:
ports:
- nodePort: 32000
port: 443
protocol: TCP
targetPort: 8443
EOF
kubectl patch svc kubernetes-dashboard --patch "$(cat ~/nodeport_dashboard_patch.yaml)"
</code></pre>
<hr />
<p>Now the dashboard can be reached on the external IP Traefik gave you - in collaboration with MetalLB - with port :32000.<br />
If you have a website routed to your cluster, you can use:</p>
<pre><code>https://yourwebsite.com:32000
</code></pre>
<p>As described in the link you shared, fetch the token by using:</p>
<pre><code>export SA_NAME= # admin user from the ServiceAccount
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep ${SA_NAME} | awk '{print $1}')
</code></pre>
<p>(I could change this answer for a complete script to do this; If you'd like)</p>
|
<p>When setting up an ingress in my kubernetes project I can't seem to get it to work. I already checked following questions:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/65193758/enable-ingress-controller-on-docker-desktop-with-wls2">Enable Ingress controller on Docker Desktop with WLS2</a></li>
<li><a href="https://stackoverflow.com/questions/60839510/docker-desktop-k8s-plus-https-proxy-multiple-external-ports-to-pods-on-http-in">Docker Desktop + k8s plus https proxy multiple external ports to pods on http in deployment?</a></li>
<li><a href="https://stackoverflow.com/questions/59255445/how-can-i-access-nginx-ingress-on-my-local">How can I access nginx ingress on my local?</a></li>
</ul>
<p>But I can't get it to work. When testing the service via NodePort (<a href="http://kubernetes.docker.internal:30090/" rel="nofollow noreferrer">http://kubernetes.docker.internal:30090/</a> or localhost:30090) it works without any problem, but when using <a href="http://kubernetes.docker.internal/" rel="nofollow noreferrer">http://kubernetes.docker.internal/</a> I get kubernetes.docker.internal didn’t send any data. ERR_EMPTY_RESPONSE.</p>
<p>This is my yaml file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: webapp
spec:
minReadySeconds: 30
selector:
matchLabels:
app: webapp
replicas: 1
template:
metadata:
labels:
app: webapp
spec:
containers:
- name: webapp
image: gcr.io/google-samples/hello-app:2.0
env:
- name: "PORT"
value: "3000"
---
apiVersion: v1
kind: Service
metadata:
name: webapp-service
spec:
selector:
app: webapp
ports:
- name: http
port: 3000
nodePort: 30090 # only for NotPort > 30,000
type: NodePort #ClusterIP inside cluster
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: webapp-ingress
spec:
defaultBackend:
service:
name: webapp-service
port:
number: 3000
rules:
- host: kubernetes.docker.internal
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: webapp-service
port:
number: 3000
</code></pre>
<p>I also used following command:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.45.0/deploy/static/provider/cloud/deploy.yaml
</code></pre>
<p>The output of <code>kubectl get all -A</code> is as follows (indicating that the ingress controller is running):</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
default pod/webapp-78d8b79b4f-7whzf 1/1 Running 0 13m
ingress-nginx pod/ingress-nginx-admission-create-gwhbq 0/1 Completed 0 11m
ingress-nginx pod/ingress-nginx-admission-patch-bxv9v 0/1 Completed 1 11m
ingress-nginx pod/ingress-nginx-controller-6f5454cbfb-s2w9p 1/1 Running 0 11m
kube-system pod/coredns-f9fd979d6-6xbxs 1/1 Running 0 19m
kube-system pod/coredns-f9fd979d6-frrrv 1/1 Running 0 19m
kube-system pod/etcd-docker-desktop 1/1 Running 0 18m
kube-system pod/kube-apiserver-docker-desktop 1/1 Running 0 18m
kube-system pod/kube-controller-manager-docker-desktop 1/1 Running 0 18m
kube-system pod/kube-proxy-mfwlw 1/1 Running 0 19m
kube-system pod/kube-scheduler-docker-desktop 1/1 Running 0 18m
kube-system pod/storage-provisioner 1/1 Running 0 18m
kube-system pod/vpnkit-controller 1/1 Running 0 18m
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 19m
default service/webapp-service NodePort 10.111.167.112 <none> 3000:30090/TCP 13m
ingress-nginx service/ingress-nginx-controller LoadBalancer 10.106.21.69 localhost 80:32737/TCP,443:32675/TCP 11m
ingress-nginx service/ingress-nginx-controller-admission ClusterIP 10.105.208.234 <none> 443/TCP 11m
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 19m
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/kube-proxy 1 1 1 1 1 kubernetes.io/os=linux 19m
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
default deployment.apps/webapp 1/1 1 1 13m
ingress-nginx deployment.apps/ingress-nginx-controller 1/1 1 1 11m
kube-system deployment.apps/coredns 2/2 2 2 19m
NAMESPACE NAME DESIRED CURRENT READY AGE
default replicaset.apps/webapp-78d8b79b4f 1 1 1 13m
ingress-nginx replicaset.apps/ingress-nginx-controller-6f5454cbfb 1 1 1 11m
kube-system replicaset.apps/coredns-f9fd979d6 2 2 2 19m
NAMESPACE NAME COMPLETIONS DURATION AGE
ingress-nginx job.batch/ingress-nginx-admission-create 1/1 1s 11m
ingress-nginx job.batch/ingress-nginx-admission-patch 1/1 3s 11m
</code></pre>
<p>I already tried debugging, and when doing an exec to the nginx service:
<code>kubectl exec service/ingress-nginx-controller -n ingress-nginx -it -- sh</code>
I can do the following curl: <code>curl -H "host:kubernetes.docker.internal" localhost</code> and it returns the correct content. So to me this seems like my loadbalancer service is not used when opening <a href="http://kubernetes.docker.internal" rel="nofollow noreferrer">http://kubernetes.docker.internal</a> via the browser. I also tried using the same curl from my terminal but that had the same 'empty response' result.</p>
| <p>I know this is a fairly old thread, but I think my answer can help later visitors.</p>
<p><strong>Answer</strong>: You have to install an ingress controller, for example the ingress-nginx controller,</p>
<p>either using <strong>helm</strong>:</p>
<pre><code>helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace
</code></pre>
<p>or <strong>kubectl</strong>:</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml
</code></pre>
<p>You can find additional info <a href="https://kubernetes.github.io/ingress-nginx/deploy/#quick-start" rel="nofollow noreferrer">here</a>.</p>
<p>Don't forget to add the defined host to your <code>/etc/hosts</code> file, e.g.</p>
<pre><code>127.0.0.1 your.defined.host
</code></pre>
<p>Then access the defined host as usual.</p>
|
<p>I'm trying to apply the same job history limits to a number of CronJobs using a <a href="https://github.com/kubernetes-sigs/kustomize/blob/572d5841c60b9a4db1a75443b8badb7e8334f727/examples/patchMultipleObjects.md" rel="nofollow noreferrer">patch</a> like the following, named <code>kubeJobHistoryLimit.yml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1beta1
kind: CronJob
spec:
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 1
</code></pre>
<p>My <code>kustomization.yml</code> looks like:</p>
<pre class="lang-yaml prettyprint-override"><code>bases:
- ../base
configMapGenerator:
- name: inductions-config
env: config.properties
patches:
- path: kubeJobHistoryLimit.yml
target:
kind: CronJob
patchesStrategicMerge:
- job_specific_patch_1.yml
- job_specific_patch_2.yml
...
resources:
- secrets-uat.yml
</code></pre>
<p>And at some point in my CI pipeline I have:</p>
<pre><code>kubectl --kubeconfig $kubeconfig apply --force -k ./
</code></pre>
<p>The <code>kubectl</code> version is <code>1.21.9</code>.</p>
<p>The issue is that the job history limit values don't seem to be getting picked up. Is there something wrong w/ the configuration or the version of K8s I'm using?</p>
| <p>With kustomize 4.5.2, your patch as written doesn't apply; it fails with:</p>
<pre><code>Error: trouble configuring builtin PatchTransformer with config: `
path: kubeJobHistoryLimit.yml
target:
kind: CronJob
`: unable to parse SM or JSON patch from [apiVersion: batch/v1
kind: CronJob
spec:
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 1
]
</code></pre>
<p>This is because it's missing <code>metadata.name</code>, which is required, even if it's ignored when patching multiple objects. If I modify the patch to look like this:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: ignored
spec:
successfulJobsHistoryLimit: 1
failedJobsHistoryLimit: 1
</code></pre>
<p>It seems to work.</p>
<p>If I have <code>base/cronjob1.yaml</code> that looks like:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: cronjob1
spec:
failedJobsHistoryLimit: 2
successfulJobsHistoryLimit: 5
jobTemplate:
spec:
template:
spec:
containers:
- command:
- sleep
- 60
image: docker.io/alpine:latest
name: example
schedule: 30 3 * * *
</code></pre>
<p>Then using the above patch and a <code>overlay/kustomization.yaml</code> like this:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- ../base
patches:
- path: kubeJobHistoryLimit.yml
target:
kind: CronJob
</code></pre>
<p>I see the following output from <code>kustomize build overlay</code>:</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
name: cronjob2
spec:
failedJobsHistoryLimit: 1
jobTemplate:
spec:
template:
spec:
containers:
- command:
- sleep
- 60
image: docker.io/alpine:latest
name: example
schedule: 30 3 * * *
successfulJobsHistoryLimit: 1
</code></pre>
<p>You can see the two attributes have been updated correctly.</p>
|
<p>We have multiple microservices and a couple of them use some external APIs. Certificates to connect to those external ones from our services are periodically rotated. How can we update our services to use the new public certs of external APIs without much disruption and much outage at our end. We use kubernetes and docker images.</p>
| <p>You can use the <a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="nofollow noreferrer">configmap</a> to store your certificate and mount it to your deployment.</p>
<p>A mounted ConfigMap is automatically refreshed inside the running POD without restarting it. So you only have to mount the ConfigMap once, and changes are automatically propagated to all replicas of the deployment without any restart or disruption.</p>
<p>Read my article : <a href="https://faun.pub/update-configmap-without-restarting-pod-56801dce3388" rel="nofollow noreferrer">Update configmap without restarting POD</a></p>
<p>Store you cert in configmap and mount it to POD</p>
<p><strong>Example</strong> :</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: test-config
data:
cert: <data>
</code></pre>
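<p>Then mount it into your deployment, roughly like this (names and paths are illustrative):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # hypothetical
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:latest    # placeholder image
        volumeMounts:
        - name: certs
          mountPath: /etc/certs # the app reads the external API cert from here
          readOnly: true
      volumes:
      - name: certs
        configMap:
          name: test-config     # the ConfigMap above
</code></pre>
<p>Keep in mind the auto-refresh applies to volume-mounted ConfigMaps (after the kubelet sync period); values injected as environment variables are not updated until the pod restarts.</p>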
<p><strong>Note</strong>: ConfigMaps are generally considered an insecure place for sensitive data; anyone with access to your cluster can view the certificates. If that is not an issue in your case (these are public certs), this works like a charm, since ConfigMaps are made for storing configuration. Otherwise, use a Secret in the same way.</p>
|
<p>I am unable to issue a working certificate for my ingress host in k8s. I use a ClusterIssuer to issue certificates and the same ClusterIssuer has issued certificates in the past for my ingress hosts under my domain name *xyz.com. But all of a sudden neither i can issue new Certificate with state 'True' for my host names nor a proper certificate secret (kubernetes.io/tls) gets created (but instead an Opaque secret gets created).</p>
<pre><code>
**strong text**
**kubectl describe certificate ingress-cert -n abc**
Name: ingress-cert
Namespace: abc
Labels: <none>
Annotations: <none>
API Version: cert-manager.io/v1beta1
Kind: Certificate
Metadata:
Creation Timestamp: 2021-09-08T07:48:32Z
Generation: 1
Owner References:
API Version: extensions/v1beta1
Block Owner Deletion: true
Controller: true
Kind: Ingress
Name: test-ingress
UID: c03ffec0-df4f-4dbb-8efe-4f3550b9dcc1
Resource Version: 146643826
Self Link: /apis/cert-manager.io/v1beta1/namespaces/abc/certificates/ingress-cert
UID: 90905ab7-22d2-458c-b956-7100c4c77a8d
Spec:
Dns Names:
abc.xyz.com
Issuer Ref:
Group: cert-manager.io
Kind: ClusterIssuer
Name: letsencrypt
Secret Name: ingress-cert
Status:
Conditions:
Last Transition Time: 2021-09-08T07:48:33Z
Message: Issuing certificate as Secret does not exist
Reason: DoesNotExist
Status: False
Type: Ready
Last Transition Time: 2021-09-08T07:48:33Z
Message: Issuing certificate as Secret does not exist
Reason: DoesNotExist
Status: True
Type: Issuing
Next Private Key Secret Name: ingress-cert-gdq7g
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Issuing 11m cert-manager Issuing certificate as Secret does not exist
Normal Generated 11m cert-manager Stored new private key in temporary Secret resource "ingress-cert-gdq7g"
Normal Requested 11m cert-manager Created new CertificateRequest resource "ingress-cert-dp6sp"
</code></pre>
<p>I checked the certificate request and it contains no events. Also i can see no challenges. I have added the logs below. Any help would be appreciated</p>
<pre><code>
kubectl describe certificaterequest ingress-cert-dp6sp -n abc
Namespace: abc
Labels: <none>
Annotations: cert-manager.io/certificate-name: ingress-cert
cert-manager.io/certificate-revision: 1
cert-manager.io/private-key-secret-name: ingress-cert-gdq7g
API Version: cert-manager.io/v1beta1
Kind: CertificateRequest
Metadata:
Creation Timestamp: 2021-09-08T07:48:33Z
Generate Name: ingress-cert-
Generation: 1
Owner References:
API Version: cert-manager.io/v1alpha2
Block Owner Deletion: true
Controller: true
Kind: Certificate
Name: ingress-cert
UID: 90905ab7-22d2-458c-b956-7100c4c77a8d
Resource Version: 146643832
Self Link: /apis/cert-manager.io/v1beta1/namespaces/abc/certificaterequests/ingress-cert-dp6sp
UID: fef72617-fc1d-4384-9f4b-a7e4502582d8
Spec:
Issuer Ref:
Group: cert-manager.io
Kind: ClusterIssuer
Name: letsencrypt
Request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2Z6Q0NBV2NDQVFBd0FEQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxMNgphTGhZNjhuNnhmMUprYlF5ek9OV1J4dGtLOXJrbjh5WUtMd2l4ZEFMVUl0TERra0t6Uksyb3loZzRMMThSQmQvCkNJaGJ5RXBYNnlRditKclRTOC84T1A0MWdwTUxBLzROdVhXWWtyeWhtZFdNaFlqa21OOFpiTUk1SlZZcVV2cVkKRWQ1b2cydmVmSjU1QlJPRExsd0o3YjBZa3hXckUwMGJxQ1ExWER6ZzFhM08yQ2JWd1NQT29WV2x6Uy9CdzRYVgpMeVdMS3E4QU52b2dZMUxXRU8xcG9YelRObm9LK2U2YVZueDJvQ1ZLdGxPaG1iYXRHYXNSaTJKL1FKK0dOWHovCnFzNXVBSlhzYVErUzlxOHIvbmVMOXNPYnN2OWd1QmxCK09yQVg2eHhkNHZUdUIwVENFU00zWis2c2MwMFNYRXAKNk01RlY3dkFFeDQyTWpuejVoa0NBd0VBQWFBNk1EZ0dDU3FHU0liM0RRRUpEakVyTUNrd0p3WURWUjBSQkNBdwpIb0ljY25kemMyZHdMbU5zYjNWa1oyRjBaUzV0YVdOeWIyWnBiaTVrWlRBTkJna3Foa2lHOXcwQkFRc0ZBQU9DCkFRRUFTQ0cwTXVHMjZRbVFlTlBFdmphNHZqUUZOVFVINWVuMkxDcXloY2ZuWmxocWpMbnJqZURuL2JTV1hwdVIKTnhXTnkxS0EwSzhtMG0rekNPbWluZlJRS1k2eHkvZU1WYkw4dTgrTGxscDEvRHl3UGxvREE2TkpVOTFPaDM3TgpDQ0E4NWphLy9FYVVvK0p5aHBzaTZuS1d4UXRpYXdmYXhuNUN4SENPWGF5Qzg0Q0IzdGZ2WWp6YUF3Ykx4akxYCmxvd09LUHNxSE51ZktFM0NtcjZmWGgramd5VWhxamYwOUJHeGxCWEFsSVNBNkN5dzZ2UmpWamFBOW82TmhaTXUKbmdheWZON00zUzBBYnAzVFFCZW8xYzc3QlFGaGZlSUE5Sk51SWtFd3EvNXppYVY1RDErNUxSSnR5ZkVpdnJLTwpmVjQ5WkpCL1BGOTdiejhJNnYvVW9CSkc2Zz09Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo=
Status:
Conditions:
Last Transition Time: 2021-09-08T07:48:33Z
Message: Waiting on certificate issuance from order abc/ingress-cert-dp6sp-3843501305: ""
Reason: Pending
Status: False
Type: Ready
Events: <none>
</code></pre>
<p>Here is the ingress.yaml</p>
<pre><code>kind: Ingress
apiVersion: extensions/v1beta1
metadata:
name: test-ingress
annotations:
nginx.ingress.kubernetes.io/proxy-body-size: 20m
kubernetes.io/ingress.class: "nginx"
cert-manager.io/cluster-issuer: "letsencrypt"
spec:
rules:
- host: abc.xyz.com
http:
paths:
- path: /static
backend:
serviceName: app-service
servicePort: 80
- path: /
backend:
serviceName: app-service
servicePort: 8000
tls:
- hosts:
- abc.xyz.com
secretName: ingress-cert
</code></pre>
<p>Here is the clusterissuer:</p>
<pre><code>apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: letsencrypt
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: [email protected]
privateKeySecretRef:
name: letsencrypt-key
solvers:
- http01:
ingress:
class: nginx
</code></pre>
| <p>Works only with Nginx Ingress Controller</p>
<p>I was using ClusterIssuer but I changed it to Issuer and it works.</p>
<p>-- Install cert-manager (Installed version 1.6.1) and be sure that the three pods are running</p>
<p>-- Create an Issuer by appling this yml be sure that the issuer is running.</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: letsencrypt-nginx
namespace: default
spec:
acme:
server: https://acme-v02.api.letsencrypt.org/directory
email: [email protected]
privateKeySecretRef:
name: letsencrypt-nginx-private-key
solvers:
- http01:
ingress:
class: nginx
</code></pre>
<p>-- Add this to your ingress annotations</p>
<pre><code>cert-manager.io/issuer: letsencrypt-nginx
</code></pre>
<p>-- Add the secretName to your ingress under <code>spec.tls</code>:</p>
<pre><code>spec:
  tls:
    - hosts:
        - yourdomain.com
      secretName: letsencrypt-nginx
</code></pre>
<p>Notice that the Nginx Ingress Controller is able to generate the Certificate CRD automatically via a special annotation: cert-manager.io/issuer. This saves work and time, because you don't have to create and maintain a separate manifest for certificates as well (only the Issuer manifest is required). For other ingresses you may need to provide the Certificate CRD as well.</p>
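<p>For those cases, a Certificate resource referencing the same Issuer would look roughly like this (reusing the names from above):</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: letsencrypt-nginx
  namespace: default
spec:
  secretName: letsencrypt-nginx
  dnsNames:
    - yourdomain.com
  issuerRef:
    name: letsencrypt-nginx
    kind: Issuer
</code></pre>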
|
<p>I am learning kubernetes on minikube. I studied the kubernetes official documentation and followed their <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/expose/expose-interactive/" rel="nofollow noreferrer">interactive tutorial</a> in a sandboxed environment. Everything worked fine in the sandbox but I tried the same thing on my system it failed.</p>
<h3>My Setup :</h3>
<ul>
<li>I am using macOS Big Sur version 11.6.2(20G314) on Apple M1.</li>
<li>I have used docker instead of virtual machine environment for minikube.</li>
</ul>
<h3>Steps to reproduce :</h3>
<p>First I created a deployment, then I created a <code>NodePort</code> type service to expose it to external traffic.</p>
<p>The pod is running fine and no issues are seen in the service description.</p>
<p>To test if the app is exposed outside of the cluster I used <code>curl</code> to send a request to the node :</p>
<pre class="lang-sh prettyprint-override"><code>curl $(minikube ip):$NODE_PORT
</code></pre>
<p>But I get no response from the server :</p>
<blockquote>
<p>curl: (7) Failed to connect to 192.168.XX.X port 32048: Operation timed out.</p>
</blockquote>
<p>I have copied everything that was done in the tutorial. Same deployment name, same image, same service-name, literally EVERYTHING.</p>
<p>I tried <code>LoadBalancer</code> type, but found out that minikube doesn't support it. To access the <code>LoadBalancer</code> deployment, I used the command <code>minikube tunnel</code> but this did not help.</p>
<p>What could be the possible reasons? Is it my system?</p>
| <p>I also had this problem on my m1 mac. I was able to access the service by using this command :</p>
<pre class="lang-sh prettyprint-override"><code>kubectl port-forward svc/kubernetes-bootcamp 8080:8080
</code></pre>
<p>You can see <a href="https://levelup.gitconnected.com/minikube-tips-tricks-739f4b00ac17" rel="noreferrer">this article</a> and <a href="https://stackoverflow.com/questions/71667587/apple-m1-minikube-no-service-url">this answer</a> for more info and ways to go about it.</p>
|
<p>I have one question which I couldn't find a clear explaination.</p>
<p>If I have a service :</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-app-svc
namespace: myns
spec:
type: LoadBalancer
ports:
- name: http
port: 8080
targetPort: 8282
selector:
app: my-app
</code></pre>
<p>As you can see above, I explicitly declared <code>type: LoadBalancer</code>. I understand what it means. I am using AWS EKS. I wonder from traffic perspective, does it mean the incoming http traffic flow is :</p>
<pre><code>Load Balancer --> Node port --> service port(8080) --> Pod port(8282)
</code></pre>
<p>Or:</p>
<pre><code>Load Balancer --> service port(8080) --> Pod port(8282)
</code></pre>
<p>Which one is correct? If neither is correct, what would be the traffic flow in terms of the order in which each k8s component is involved?</p>
| <p><code>Load Balancer --> Node port --> service port(8080) --> Pod port(8282)</code></p>
<p>Your diagram is correct for instance mode:</p>
<blockquote>
<p>Traffic reaching the ALB is routed to NodePort for your service and then proxied to your pods. This is the default traffic mode.</p>
</blockquote>
<p>There is an option of using IP mode where you have AWS LB Controller installed and set <code>alb.ingress.kubernetes.io/target-type: ip</code>:</p>
<blockquote>
<p>Traffic reaching the ALB is directly routed to pods for your service.</p>
</blockquote>
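<p>That annotation goes on the Ingress object handled by the AWS Load Balancer Controller, for example:</p>
<pre><code>metadata:
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/target-type: ip   # register pod IPs instead of node ports
</code></pre>
<p>In IP mode the flow becomes: Load Balancer --> Pod port(8282), skipping the NodePort hop. If you instead expose the Service directly as an NLB through the same controller, it supports the analogous <code>service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip</code> annotation.</p>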
<p>More details can be found <a href="https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html" rel="nofollow noreferrer">here</a>.</p>
|
<p>Dear Cassandra Admins,</p>
<p>I am wondering whether Cassandra is appropriate to be installed on a Kubernetes cluster, as I have never implemented it that way before.
I would appreciate it if you could share some thoughts. Here are my questions:</p>
<ol>
<li><p>I know there are existing solutions to install Cassandra on K8s platforms. But in terms of "Pros vs. Cons", is Kubernetes really a good platform for Cassandra servers (assuming I need to install Cassandra servers)?</p>
</li>
<li><p>If I want to install Cassandra in "on-prem" Kubernetes cluster(NOT public cloud like Azure, AWS or Google), what "on-prem" Kubernetes solutions you choose? For example:</p>
</li>
</ol>
<ul>
<li>OpenShift</li>
<li>Charmed Kubernetes (I think it’s from Ubuntu)</li>
<li>HPE Ezmeral</li>
<li>Vanilla K8S (complete open-source)</li>
<li>Minikube</li>
<li>MicroK8s, or any other K8s solution you would choose?</li>
</ul>
<p>I appreciate you could share insights and thoughts !</p>
| <blockquote>
<p>If I want to install Cassandra in "on-prem" Kubernetes cluster(NOT
public cloud like Azure, AWS or Google), what "on-prem" Kubernetes
solutions you choose?</p>
</blockquote>
<p>Minikube, MicroK8s and the like are not meant for production use; if you are only setting up a development environment, any of them will do.</p>
<p>If you are setting up a production-grade cluster, you can use <a href="https://github.com/kubernetes/kops" rel="nofollow noreferrer">kOps</a>, <a href="https://kubernetes.io/blog/2018/12/04/production-ready-kubernetes-cluster-creation-with-kubeadm/" rel="nofollow noreferrer">kubeadm</a>, etc.</p>
<blockquote>
<p>I know there exist solutions to install Cassandra on K8s platforms.
But in terms of "Pros v.s. Cons", is Kubernetes a really a good
platform for Cassandra servers to be installed (assume I need to
install Cassandra servers) ?</p>
</blockquote>
<p>Yes, it is a good option: in terms of "Pros", you can manage things easily and scale them as needed.</p>
<p>Keep in mind it is not just the initial setup: with a Kubernetes-based deployment you also need to take care of regular backups (logs and the database), monitoring and logging for the Cassandra servers.</p>
<p>Read this nice article : <a href="https://medium.com/flant-com/running-cassandra-in-kubernetes-challenges-and-solutions-9082045a7d93" rel="nofollow noreferrer">https://medium.com/flant-com/running-cassandra-in-kubernetes-challenges-and-solutions-9082045a7d93</a></p>
|
<p>I've setup OpenTelemetry in Kubernetes. Below is my config.</p>
<pre><code>exporters:
logging: {}
extensions:
health_check: {}
memory_ballast: {}
processors:
batch: {}
memory_limiter:
check_interval: 5s
limit_mib: 819
spike_limit_mib: 256
receivers:
jaeger:
protocols:
grpc:
endpoint: 0.0.0.0:14250
thrift_compact:
endpoint: 0.0.0.0:6831
thrift_http:
endpoint: 0.0.0.0:14268
otlp:
protocols:
grpc:
endpoint: 0.0.0.0:4317
http:
endpoint: 0.0.0.0:4318
prometheus:
config:
scrape_configs:
- job_name: opentelemetry-collector
scrape_interval: 10s
static_configs:
- targets:
- ${MY_POD_IP}:8888
zipkin:
endpoint: 0.0.0.0:9411
service:
extensions:
- health_check
- memory_ballast
pipelines:
logs:
exporters:
- logging
processors:
- memory_limiter
- batch
receivers:
- otlp
metrics:
exporters:
- logging
processors:
- memory_limiter
- batch
receivers:
- otlp
- prometheus
traces:
exporters:
- logging
processors:
- memory_limiter
- batch
receivers:
- otlp
- jaeger
- zipkin
telemetry:
metrics:
address: 0.0.0.0:8888
</code></pre>
<p>The endpoint is showing as up in Prometheus. But it doesn't show any data. When I check the OTEL collector logs, it shows as below</p>
<p><a href="https://i.stack.imgur.com/6HG4w.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6HG4w.png" alt="enter image description here" /></a></p>
<p>I have manually added the scrape config in Prometheus.</p>
<p>scrape_configs:</p>
<pre><code> - job_name: 'otel-collector'
scrape_interval: 10s
static_configs:
- targets: ['opentelemetry-collector.opentelemetry:8888']
</code></pre>
<p>So in OTEL collectore configmap I also see Prometheus scrape config.</p>
<pre><code> prometheus:
config:
scrape_configs:
- job_name: opentelemetry-collector
scrape_interval: 10s
static_configs:
- targets:
- ${MY_POD_IP}:8888
</code></pre>
<p>--Added--New--</p>
<pre><code>kubectl get all -n thanos
NAME READY STATUS RESTARTS AGE
pod/thanos-query-776688f499-pvm24 1/1 Running 0 14h
pod/thanos-query-frontend-5b55d44cc-b6qx5 1/1 Running 0 14h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/thanos-query ClusterIP 10.0.112.105 <none> 9090/TCP,10901/TCP 14h
service/thanos-query-frontend ClusterIP 10.0.223.246 <none> 9090/TCP 14h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/thanos-query 1/1 1 1 14h
deployment.apps/thanos-query-frontend 1/1 1 1 14h
NAME DESIRED CURRENT READY AGE
replicaset.apps/thanos-query-776688f499 1 1 1 14h
replicaset.apps/thanos-query-frontend-5b55d44cc 1 1 1 14h
</code></pre>
<p><strong>--Logs--</strong></p>
<pre><code>2022-06-07T07:20:49.852Z error exporterhelper/queued_retry.go:183 Exporting failed. The error is not retryable. Dropping data. {"kind": "exporter", "name": "prometheusremotewrite", "error": "Permanent error: Permanent error: remote write returned HTTP status 404 Not Found; err = <nil>: 404 page not found\n", "dropped_items": 18}
go.opentelemetry.io/collector/exporter/exporterhelper.(*retrySender).send
go.opentelemetry.io/[email protected]/exporter/exporterhelper/queued_retry.go:183
go.opentelemetry.io/collector/exporter/exporterhelper.(*metricsSenderWithObservability).send
go.opentelemetry.io/[email protected]/exporter/exporterhelper/metrics.go:132
go.opentelemetry.io/collector/exporter/exporterhelper.(*queuedRetrySender).start.func1
go.opentelemetry.io/[email protected]/exporter/exporterhelper/queued_retry_inmemory.go:118
go.opentelemetry.io/collector/exporter/exporterhelper/internal.consumerFunc.consume
go.opentelemetry.io/[email protected]/exporter/exporterhelper/internal/bounded_memory_queue.go:82
go.opentelemetry.io/collector/exporter/exporterhelper/internal.(*boundedMemoryQueue).StartConsumers.func2
go.opentelemetry.io/[email protected]/exporter/exporterhelper/internal/bounded_memory_queue.go:69
</code></pre>
<p>I configured the below URL in OpenTelemetry</p>
<pre><code>exporters:
prometheusremotewrite:
endpoint: "http://thanos-query-frontend.thanos:9090/api/v1/write"
</code></pre>
| <p>You have configured only the <code>logging</code> exporter, which only exports data to the console via zap.Logger; it doesn't write any data to Prometheus.</p>
<p>Configure also <code>prometheusremotewrite</code> exporter and add it to the metric pipeline. Minimalistic example:</p>
<pre><code>receivers:
prometheus:
config:
scrape_configs:
- job_name: opentelemetry-collector
scrape_interval: 10s
static_configs:
- targets: ['localhost:8888']
exporters:
prometheusremotewrite:
endpoint: <example: my-prometheus/api/v1/write>
service:
pipelines:
metrics:
receivers:
- prometheus
exporters:
- prometheusremotewrite
telemetry:
metrics:
address: 0.0.0.0:8888
level: basic
</code></pre>
<p>See doc for <code>prometheusremotewrite</code> exporter: <a href="https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/prometheusremotewriteexporter" rel="nofollow noreferrer">https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter/prometheusremotewriteexporter</a></p>
<p>See <a href="https://grafana.com/grafana/dashboards/15983" rel="nofollow noreferrer">https://grafana.com/grafana/dashboards/15983</a> if you want to have a Grafana dashboard for OpenTelemetry collector telemetry metrics.</p>
|