prompt | response
---|---|
<p>I have managed to install Prometheus and its adapter, and I want to use one of the pod metrics for autoscaling.</p>
<pre><code> kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1" | jq . |grep "pods/http_request".
"name": "pods/http_request_duration_milliseconds_sum",
"name": "pods/http_request",
"name": "pods/http_request_duration_milliseconds",
"name": "pods/http_request_duration_milliseconds_count",
"name": "pods/http_request_in_flight",
</code></pre>
<p>Checking the API, I want to use <code>pods/http_request</code>, so I added it to my HPA configuration:</p>
<pre><code>---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: app
  namespace: app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 4
  maxReplicas: 8
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_request
      target:
        type: AverageValue
        averageValue: 200
</code></pre>
<p>After applying the YAML and checking the HPA status, the metric shows up as <code><unknown></code>:</p>
<pre><code>$ k apply -f app-hpa.yaml
$ k get hpa
NAME REFERENCE TARGETS
app Deployment/app 306214400/2000Mi, <unknown>/200 + 1 more...
</code></pre>
<p>But when using other pod metrics such as <code>pods/memory_usage_bytes</code>, the value is properly detected.</p>
<p>Is there a way to check the proper values for this metric? And how do I properly add it to my HPA configuration?</p>
<p>Reference <a href="https://www.ibm.com/support/knowledgecenter/SSBS6K_3.2.0/manage_cluster/hpa.html" rel="nofollow noreferrer">https://www.ibm.com/support/knowledgecenter/SSBS6K_3.2.0/manage_cluster/hpa.html</a></p>
| <p>First, deploy the metrics server; it should be up and running.</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
</code></pre>
<p>Then, after a few seconds, the metrics server is deployed. Check the HPA; it should resolve.</p>
<pre class="lang-sh prettyprint-override"><code>$ kubectl get deployment -A
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
.
.
kube-system metrics-server 1/1 1 1 34s
$ kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
ha-xxxx-deployment Deployment/xxxx-deployment 1%/5% 1 10 1 6h46m
</code></pre>
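<p>Since <code>http_request</code> comes from the Prometheus adapter (the custom metrics API) rather than from the metrics server, you can also query that API directly to see the value the HPA would read. A minimal check, reusing the namespace and metric name from the question:</p>
<pre class="lang-sh prettyprint-override"><code># List the per-pod values the adapter exposes for http_request in the "app" namespace
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/app/pods/*/http_request" | jq .
</code></pre>
<p>If this returns no items or an error, the adapter is not exposing the metric for those pods, which is when the HPA target shows <code><unknown></code>.</p>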
|
<p>I am using the Traefik ingress controller in Kubernetes. It is configured to redirect all requests to HTTPS and terminate the TLS connection before passing the request to the backend service.</p>
<p>Is it possible to enable only HTTP for one particular ingress config but HTTPS for the other ingresses? Any example would be helpful.</p>
<p>I only want to enable http(no https) for this ingress</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
  - host: testdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-service
            port:
              number: 8080
</code></pre>
| <p>You can create the Ingress with a config like the following (note that the annotations belong under <code>metadata</code>):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    kubernetes.io/ingress.class: "traefik"
    ingress.kubernetes.io/force-ssl-redirect: "false"
    ingress.kubernetes.io/ssl-redirect: "false"
    traefik.ingress.kubernetes.io/frontend-entry-points: http
spec:
  rules:
  - host: testdomain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: test-service
            port:
              number: 8080
</code></pre>
<p>You can get more details about these annotations here: <a href="https://doc.traefik.io/traefik/v1.6/configuration/backends/kubernetes/" rel="nofollow noreferrer">https://doc.traefik.io/traefik/v1.6/configuration/backends/kubernetes/</a></p>
|
<p>I am a new DevOps Engineer.</p>
<p>I was checking out our DEV AKS cluster at work and noticed that Fluentd is using a crazy amount of memory and isn't releasing it back, example below:</p>
<pre><code>fluentd-dev-95qmh 13m 1719Mi
fluentd-dev-fhd4w 9m 1732Mi
fluentd-dev-n22hf 11m 660Mi
fluentd-dev-qlzd8 12m 524Mi
fluentd-dev-rg9gp 9m 2338Mi
</code></pre>
<p>Fluentd is deployed as a daemonset so I can't just scale it up or down, unfortunately.</p>
<p>The version we are running is 1.2.22075.8, and it gets deployed via CI/CD pipelines using a deployment.yml file and a Dockerfile.</p>
<p>Here is the Dockerfile:</p>
<pre><code>FROM quay.io/fluentd_elasticsearch/fluentd:v3.2.0
#RUN adduser --uid 10000 --gecos '' --disabled-password fluent --no-create-home && \
#chown fluent:fluent /entrypoint.sh && \
#chown -R fluent:fluent /etc/fluent/ && \
#chown -R fluent:fluent /usr/local/bin/ruby && \
#chown -R fluent:fluent /usr/local/bundle/bin/fluent* && \
#chmod -R fluent:fluent /var/lib/docker/containers && \
#chmod -R fluent:fluent /var/log
#USER fluent
</code></pre>
<p>I went to <a href="https://quay.io/repository/fluentd_elasticsearch/fluentd?tab=tags&tag=latest" rel="nofollow noreferrer">https://quay.io/repository/fluentd_elasticsearch/fluentd?tab=tags&tag=latest</a> and saw that there were newer versions available. I wanted to update Fluentd to v3.3.0, and I thought I could just do this by changing the version number in the Dockerfile and triggering a build. I did this, but the release pipeline failed: two pods were in "CrashLoopBackOff" state and three pods were running normally. I also had a bunch of errors related to Ruby. I know, I should have taken note of the errors, but since this was at work I just panicked, reverted the version in the Dockerfile back to v3.2.0 from v3.3.0, triggered a build, and everything went back to how it was before.</p>
<p>How do I update the version of the Fluentd daemonset? Is there a way I can restart these pods and clear the memory? I've Googled this question and it doesn't seem like there is a way to do this easily because it is not a regular deployment.</p>
<p>Also, any idea why fluentd would be eating so much memory?</p>
<p>This issue is having a negative impact on the DEV cluster, 3 out of 5 nodes are above 110% memory usage.</p>
| <p>Regarding the question about restarting the fluentd pods:</p>
<p>If you have permissions to delete pods in the namespace where fluentd is deployed, you can simply delete the pods to restart fluentd</p>
<pre><code>kubectl delete pod fluentd-xxxxx
</code></pre>
<p>Since the daemonset definition is still there the Kubernetes control plane will notice there are fluentd pods missing and start new ones. This will however mean that there will be a short time windows where there will be no fluentd running on the node (i.e the time between the delete command is issued and until the new fluentd pod is operational). The Control plane will detect that there are fluentd pods missing almost instantly, but they do take some time to start.</p>
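<p>If you also want to roll out a new image, or simply recycle every pod in the DaemonSet one node at a time, a gentler option is a rollout restart or an image update on the DaemonSet itself. A sketch, assuming the DaemonSet is named <code>fluentd-dev</code>, lives in a <code>logging</code> namespace and its container is called <code>fluentd</code> (all three names are assumptions; adjust to your setup):</p>
<pre class="lang-sh prettyprint-override"><code># Recreate all pods of the DaemonSet with the current spec, rolling node by node
kubectl rollout restart daemonset/fluentd-dev -n logging

# Or point the DaemonSet at a newer image and let it roll
kubectl set image daemonset/fluentd-dev fluentd=quay.io/fluentd_elasticsearch/fluentd:v3.3.0 -n logging
kubectl rollout status daemonset/fluentd-dev -n logging
</code></pre>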
|
<p>We recently started upgrading our EKS clusters from version 1.18 to 1.19. One change that we made was to update our HPA manifest files to have apiVersion autoscaling/v2beta2.
I can see all three API versions are available in my cluster:</p>
<pre><code>kubectl api-versions | grep autoscaling
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
</code></pre>
<p>Once the HPA file is applied, autoscaling/v2beta2 is changed to autoscaling/v1. In the annotation block, I see the last-applied changes showing apiVersion as autoscaling/v2beta2. Autoscaling is working fine and I don't see any errors in events. It makes me uneasy to see the apiVersion getting changed automatically, and I don't see any documentation explaining why this is happening and whether it is expected behavior. Any input or explanation will be appreciated.</p>
| <p>This command shows which API versions are supported by your cluster:</p>
<pre><code>kubectl api-versions | grep autoscaling
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
</code></pre>
<p>However, what mainly matters is which apiVersion the object gets stored with in the etcd database.</p>
<p>Based on my understanding, <strong>kubectl</strong> may return the resource using one apiVersion (<strong>autoscaling/v1</strong>) even though you created the resource with <code>autoscaling/v2beta2</code>; in that case, when you fetch the resource using the <strong>kubectl</strong> CLI, it will show the change in API version.</p>
<p><code>kubectl get</code> uses the <strong>server-preferred API</strong> version, which you can check using:</p>
<pre><code>kubectl explain hpa
KIND: HorizontalPodAutoscaler
VERSION: autoscaling/v1
</code></pre>
<blockquote>
<p>The differences between API versions are things like default values
and field names. Because API versions are round-trippable, you can
safely get the same deployment object with different API version
endpoints.</p>
</blockquote>
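<p>You can also ask the API server for the object under a specific version explicitly, which shows that the same HPA is served under several versions. A quick check, assuming your HPA is called <code>my-hpa</code> (substitute your own name):</p>
<pre class="lang-sh prettyprint-override"><code># Fetch the same HPA via two different API versions
kubectl get hpa.v1.autoscaling my-hpa -o yaml | head -n 2
kubectl get hpa.v2beta2.autoscaling my-hpa -o yaml | head -n 2
</code></pre>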
|
<p>One of my namespaces is in the <code>Terminating</code> state.
While there are many posts that explain how to forcefully delete such namespaces, the ultimate result is that everything in the namespace will be gone, which is not what you might want, especially if that termination was the result of a mistake or bug (or may cause downtime of any kind).</p>
<p>Is it possible to tell Kubernetes not to try to delete that namespace anymore? Where is that state kept?</p>
<p><code>Terminating</code> state blocks me from recreating the whole stack with gitops (helm chart installation in such namespace is not possible).</p>
<p>I simply wish to remove the <code>terminating</code> state and my fluxcd controller would fix everything else.</p>
| <blockquote>
<p>Is there a way to cancel namespace termination in kubernetes?</p>
</blockquote>
<p>As far as I know, unfortunately not. Termination is a one-way process. Note how <a href="https://unofficial-kubernetes.readthedocs.io/en/latest/concepts/abstractions/pod-termination/#:%7E:text=Kubernetes%20marks%20the%20Pod%20state,still%20running%20in%20the%20Pod." rel="nofollow noreferrer">pod termination takes place</a>:</p>
<blockquote>
<ol>
<li>You send a command or API call to terminate the Pod.</li>
<li>Kubernetes updates the Pod status to reflect the time after which the Pod is to be considered "dead" (the time of the termination request plus the grace period).</li>
<li><strong>Kubernetes marks the Pod state as "Terminating" and stops sending traffic to the Pod.</strong></li>
<li><strong>Kubernetes sends a <code>TERM</code> signal to the Pod, indicating that the Pod should shut down.</strong></li>
<li><strong>When the grace period expires, Kubernetes issues a <code>SIGKILL</code> to any processes still running in the Pod.</strong></li>
<li>Kubernetes removes the Pod from the API server on the Kubernetes Master.</li>
</ol>
</blockquote>
<p>So it is impossible to cancel the termination process.</p>
<blockquote>
<p>Is it possible to tell kubernetes not to try to delete that namespace anymore.</p>
</blockquote>
<p>There is no dedicated solution, but you can try to automate this process with custom scripts. Look at <a href="https://gist.github.com/jossef/a563f8651ec52ad03a243dec539b333d" rel="nofollow noreferrer">this example in Python</a> and another one <a href="https://stackoverflow.com/a/62463004/15407542">in Bash</a>.</p>
<p>See also <a href="https://stackoverflow.com/questions/52369247/namespace-stuck-as-terminating-how-i-removed-it">this question</a>.</p>
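<p>What those linked scripts essentially do is strip the finalizers from the namespace so the deletion can complete (note: this finishes the deletion rather than undoing it). A minimal sketch of that approach, assuming the stuck namespace is called <code>my-namespace</code>:</p>
<pre class="lang-sh prettyprint-override"><code>NS=my-namespace
# Dump the namespace, clear its finalizers, and push it through the finalize subresource
kubectl get namespace "$NS" -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw "/api/v1/namespaces/$NS/finalize" -f -
</code></pre>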
|
<p>What is the preferred Kubernetes storageClass for a PersistentVolume used by a Postgresql database? Which factors should go into consideration choosing the storageClass when I have the choice between S3 (Minio), NFS and HostPath?</p>
| <p>When you choose a storage option for Postgresql in Kubernetes, you should take into account the following:</p>
<ol>
<li><p><strong>NFS / Minio</strong> is not the preferred storage for databases if your application is latency-sensitive. A common use case is a download folder or a logging/backup folder.<br />
But it gives you flexibility in designing a k8s cluster and the ability to easily move to a cloud-based solution in the future (AWS EFS or S3, for example).</p>
</li>
<li><p><strong>HostPath</strong> is a better option for databases. But</p>
</li>
</ol>
<blockquote>
<p>Kubernetes supports hostPath for development and testing on a single-node cluster. A hostPath PersistentVolume uses a file or directory on the Node to emulate network-attached storage.</p>
</blockquote>
<blockquote>
<p>In a production cluster, you would not use hostPath. Instead a cluster administrator would provision a network resource like a Google Compute Engine persistent disk, an NFS share, or an Amazon Elastic Block Store volume. Cluster administrators can also use StorageClasses to set up dynamic provisioning.</p>
</blockquote>
<ol start="3">
<li>Another quite good option for non-cloud k8s clusters is <strong>Longhorn</strong>:</li>
</ol>
<blockquote>
<p>Longhorn is a lightweight, reliable, and powerful distributed block storage system for Kubernetes.<br />
Longhorn implements distributed block storage using containers and microservices. Longhorn creates a dedicated storage controller for each block device volume and synchronously replicates the volume across multiple replicas stored on multiple nodes. The storage controller and replicas are themselves orchestrated using Kubernetes.</p>
</blockquote>
<ol start="4">
<li>Also, check this <a href="https://github.com/bitnami/charts/tree/master/bitnami/postgresql" rel="nofollow noreferrer">Bitnami PostgreSQL Helm chart</a></li>
</ol>
<blockquote>
<p>It offers a PostgreSQL Helm chart that comes pre-configured for security, scalability and data replication. It's a great combination: all the open source goodness of PostgreSQL (foreign keys, joins, views, triggers, stored procedures…) together with the consistency, portability and self-healing features of Kubernetes.</p>
</blockquote>
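<p>If you go the Longhorn route, consuming it from a PostgreSQL workload is just a matter of pointing the volume claim at the Longhorn StorageClass. A minimal sketch, assuming Longhorn is installed with its default <code>longhorn</code> StorageClass and the claim name is yours to choose:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: longhorn
  resources:
    requests:
      storage: 20Gi
</code></pre>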
|
<p>I'm trying to access my ETCD database from a K8s controller, but getting rpc error/EOF when trying to open ETCD client.</p>
<p>My setup:</p>
<ul>
<li>ETCD service is deployed in my K8s cluster and included in my Istio service mesh (its DNS record: <code>my-etcd-cluster.my-etcd-namespace.svc.cluster.local</code>)</li>
<li>I have a custom K8s controller developed with use of Kubebuilder framework and deployed in the same cluster, different namespace, but configured to be a part of the same Istio service mesh</li>
<li>I'm trying to connect to ETCD database from the controller, using Go client SDK library for ETCD</li>
</ul>
<p>Here's my affected Go code:</p>
<pre class="lang-golang prettyprint-override"><code>cli, err := clientv3.New(clientv3.Config{
Endpoints: []string{"http://my-etcd-cluster.my-etcd-namespace.svc.cluster.local:2379"},
DialTimeout: 5 * time.Second,
Username: username,
Password: password,
})
if err != nil {
return nil, fmt.Errorf("opening ETCD client failed: %v", err)
}
</code></pre>
<p>And here's an error I'm getting when <code>clientv3.New(...)</code> gets executed:</p>
<pre><code>{"level":"warn","ts":"2022-03-16T23:37:42.174Z","logger":"etcd-client","caller":"[email protected]/retry_interceptor.go:62","msg":"retrying of unary invoker failed",
"target":"etcd-endpoints://0xc00057f500/#initially=[http://my-etcd-cluster.my-etcd-namespace.svc.cluster.local:2379]","attempt":0,
"error":"rpc error: code = Unavailable desc = error reading from server: EOF"}
...
1.647473862175209e+09 INFO controller.etcdclient Finish reconcile loop for some-service/test-svc-client {"reconciler group": "my-controller.something.io", "reconciler kind": "ETCDClient", "name": "test-svc-client", "namespace": "some-service", "reconcile-etcd-client": "some-service/test-svc-client"}
1.6474738621752858e+09 ERROR controller.etcdclient Reconciler error {"reconciler group": "my-controller.something.io", "reconciler kind": "ETCDClient", "name": "test-svc-client", "namespace": "some-service", "error": "opening ETCD client failed: rpc error: code = Unavailable desc = error reading from server: EOF"}
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2
/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227
</code></pre>
<p>The same error happens when I'm passing some dummy, invalid credentials.</p>
<p>However, when I tried to access the database in a HTTP API manner:</p>
<pre class="lang-golang prettyprint-override"><code>postBody, _ := json.Marshal(map[string]string{
"name": username,
"password": password,
})
responseBody := bytes.NewBuffer(postBody)
resp, err := http.Post("http://my-etcd-cluster.my-etcd-namespace.svc.cluster.local:2379/v3/auth/authenticate", "application/json", responseBody)
if err != nil {
return ctrl.Result{}, fmt.Errorf("an error occured %w", err)
}
l.Info(fmt.Sprintf("code: %d", resp.StatusCode))
defer resp.Body.Close()
</code></pre>
<p>...I got 200 OK and a proper token (which is expected), so I believe my Istio configuration is ok and my controller should be able to see the ETCD db service. I have no clue why this doesn't work when following the client SDK approach.</p>
<p>When I'm using port-forwarding of the ETCD service and accessing it locally, <code>clientv3.New()</code> and other client SDK methods work like a charm. What am I missing? I'd really appreciate any suggestions.</p>
<p>EDIT:
I've also added a simple pod to try accessing my etcd db via etcdctl:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: test-pod
namespace: my-controller-namespace
spec:
containers:
- name: etcdctl
image: bitnami/etcd
command:
- sleep
- infinity
</code></pre>
<p>When logged into the container via <code>kubectl exec</code>, I was able to access my db:</p>
<pre><code>$ etcdctl --endpoints=my-etcd-cluster.my-etcd-namespace.svc.cluster.local:2379 --user="user" --password="password" put foo bob
OK
</code></pre>
<p>I guess the problem is somewhere in the SDK?</p>
| <p>Turned out to be version mismatch - my ETCD db is v3.5.2 and the clientv3 library that I used was v3.5.0.
As seen in ETCD changelog (<a href="https://github.com/etcd-io/etcd/blob/main/CHANGELOG/CHANGELOG-3.5.md" rel="nofollow noreferrer">https://github.com/etcd-io/etcd/blob/main/CHANGELOG/CHANGELOG-3.5.md</a>):</p>
<p><a href="https://i.stack.imgur.com/W2W2W.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/W2W2W.jpg" alt="enter image description here" /></a></p>
|
<p>I'm running a cluster in Kubernetes with minikube and VirtualBox.</p>
<p>This is my headless service</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  labels:
    service: test
  name: test-group-ping
spec:
  clusterIP: None
  ports:
  - port: 4444
    name: ping
    protocol: TCP
    targetPort: 4444
  selector:
    service: test
  sessionAffinity: None
  type: ClusterIP
</code></pre>
<p>This is how I deployed the pods</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  labels:
    app: test
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: image
        env:
        - name: KC_PROXY
          value: "edge"
        - name: PROXY_ADDRESS_FORWARDING
          value: "true"
        ports:
        - containerPort: 8080
        - containerPort: 4444
        - containerPort: 8888
        readinessProbe:
          httpGet:
            path: "/test"
            port: 8080
          initialDelaySeconds: 600
          periodSeconds: 30
</code></pre>
<p>I have another Load Balancer for the pods</p>
<p>When I use <code>kubectl get services</code> , I get <code>test-group-ping</code> as an headless service</p>
<pre><code>test-group-ping ClusterIP None <none> 4444/TCP 8h
kubectl describe service test-group-ping
Name: test-group-ping
Namespace: default
Labels: service=test
Annotations: <none>
Selector: service=test
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: None
IPs: None
Port: ping 4444/TCP
TargetPort: 4444/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
</code></pre>
<p>I need the DNS query for this headless service</p>
<p>I tried <code>test-group-ping.default.svc.cluster.local</code>, but when I tried to do a DNS lookup on this, it doesn't return any IP addresses of the pod</p>
<pre><code> % dig +search test-group-ping.default.svc.cluster.local
; <<>> DiG 9.10.6 <<>> +search test-group-ping.default.svc.cluster.local
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NXDOMAIN, id: 8544
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;test-group-ping.default.svc.cluster.local. IN A
;; AUTHORITY SECTION:
. 10800 IN SOA a.root-servers.net. nstld.verisign-grs.com. 2022032100 1800 900 604800 86400
;; Query time: 140 msec
;; SERVER: 2600:1702:8c0:e100::1#53(2600:1702:8c0:e100::1)
;; WHEN: Mon Mar 21 10:33:50 CDT 2022
;; MSG SIZE rcvd: 151
</code></pre>
| <p>Your Service selector <code>service: test</code> does not match the Pod label <code>app: test</code>.</p>
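<p>A corrected headless Service, reusing the names from the question, would select on the label that the Deployment's pod template actually carries:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: test-group-ping
spec:
  clusterIP: None
  selector:
    app: test   # must match the pod label, not "service: test"
  ports:
  - name: ping
    port: 4444
    targetPort: 4444
    protocol: TCP
</code></pre>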
|
<p>I have found several <a href="https://stackoverflow.com/questions/58167618/exception-has-occurred-mongodarterror-mongodart-error-invalid-scheme-in-uri?rq=1">pages</a> with a similar question, and most answers tell us to whitelist our IP. However, I have allowed access from anywhere (<code>0.0.0.0/0</code>) in Atlas, and have installed the latest version of <code>mongoose</code> (6.2.6), which is supposed to support the <code>mongodb+srv</code> protocol.</p>
<p>The connection works perfectly when I run locally using <code>npm start</code> or even from a dockerized container. But, when I deploy to a k8s cluster, I get an error saying:</p>
<pre><code>querySrv ENOTFOUND _mongodb._tcp.mongodb-cluster0.zvnxj.mongodb.net
</code></pre>
<p>The deployment and service file are as:</p>
<p><code>deployment.yml</code></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: ns-my-workflow-api
  name: my-workflow-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-workflow-api
  template:
    metadata:
      labels:
        app: my-workflow-api
    spec:
      containers:
      - name: my-workflow-api
        image: "myname/my-workflow-api:1.0.0"
        ports:
        - containerPort: 3000
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: "256m"
</code></pre>
<p>The <code>service.yaml</code> has the contents:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  namespace: ns-my-workflow-api
  name: my-workflow-api
spec:
  selector:
    app: my-workflow-api
  type: LoadBalancer
  ports:
  - name: http
    port: 8000
    targetPort: 3000
    protocol: TCP
</code></pre>
<p>The <code>namespace.yaml</code> has the contents:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
  name: ns-my-workflow-api
</code></pre>
<p>I also tried the <code>deployment.yaml</code> with the <code>dns</code> rule:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: ns-my-workflow-api
  name: my-workflow-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-workflow-api
  template:
    metadata:
      labels:
        app: my-workflow-api
    spec:
      dnsPolicy: Default # <------ this rule
      containers:
      - name: my-workflow-api
        image: "myname/my-workflow-api:1.0.0"
        ports:
        - containerPort: 3000
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            cpu: "256m"
</code></pre>
<p>Once I changed the connection url to use <code>2.0.14 or earlier</code> I was able to connect. The connection string started with <code>mongodb://....</code></p>
<p>While I have managed to make the connection work with the workaround of using an old-style connection string, and it seems to be some sort of DNS resolution issue, how do I make the newer protocol work to connect to Atlas from inside the cluster? Thanks in advance.</p>
| <p>I was able to solve it using this to start <code>minikube</code>:</p>
<p><code>minikube start --driver=docker</code></p>
<p>It seems there's some DNS resolution issue with the underlying Oracle VirtualBox driver (maybe some configuration and setup issue as well).</p>
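<p>For reference, switching an existing minikube profile to the docker driver typically means recreating it:</p>
<pre class="lang-sh prettyprint-override"><code># Delete the old VM-based profile and start a fresh one on the docker driver
minikube delete
minikube start --driver=docker
</code></pre>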
|
<p>I am working on deploying a certain pod to GKE but I am having an unhealthy state for my backend services.</p>
<p>The deployment went through via <code>helm install process</code> but the ingress reports a certain warning error that says <code>Some backend services are in UNHEALTHY state</code>. I have tried to access the logs but do not know exactly what to look out for. Also, I already have liveness and readiness probes running.</p>
<p>What could I do to make the ingress come back to a healthy state? Thanks</p>
<p><a href="https://i.stack.imgur.com/hNcFg.png" rel="nofollow noreferrer">Picture of warning error on GKE UI</a></p>
| <p>Without more details it is hard to determine the exact cause.</p>
<p>As first point I want to mention, that your error message is <code>Some backend services are in UNHEALTHY state</code>, <strong>not</strong> <code>All backend services are in UNHEALTHY state</code>. It indicates that only <strong>a few of your backends</strong> are affected.</p>
<p>There might be tons of reasons: whether you are using <code>GCP Ingress</code> or <code>Nginx Ingress</code>, your configuration of <code>externalTrafficPolicy</code>, whether you are using <code>preemptible nodes</code>, your <code>livenessProbe</code> and <code>readinessProbe</code>, <code>health checks</code>, etc.</p>
<p>In your scenario, only a few backends are affected; with the current information, the only thing I can suggest is some debug options.</p>
<ul>
<li>Using <code>$ kubectl get po -n <namespace></code>, check that all your pods are working correctly, that all containers within the pods are <code>Ready</code>, and that the pod status is <code>Running</code>. If needed, check the logs of a suspicious pod with <code>$ kubectl logs <podname> -c <containerName></code>. In general, you should check all pods the load balancer is pointing to,</li>
<li>Confirm if <code>livenessProbe</code> and <code>readinessProbe</code> are configured properly and response is <code>200</code>,</li>
<li>Describe your ingress <code>$ kubectl describe ingress <yourIngressName></code> and check <code>backends</code>,</li>
<li>Check if you've configured your <code>health checks</code> properly according to <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress#health_checks" rel="nofollow noreferrer">GKE Ingress for HTTP(S) Load Balancing - Health Checks</a> guide.</li>
</ul>
<p>If you still can't solve this issue with the above debug options, please provide more details about your environment, with logs (without private information).</p>
<p><strong>Useful links</strong>:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/39294305/">kubernetes unhealthy ingress backend</a></li>
<li><a href="https://stackoverflow.com/questions/63268552/gke-ingress-shows-unhealthy-backend-services">GKE Ingress shows unhealthy backend services</a></li>
</ul>
|
<p>I'm exploring K8s possibilities and I'm wondering whether there is any way to create deployments for two or more apps in a single deployment so that it is transactional - when something is wrong after deployment, all apps are rolled back. Also, I want to mention that <strong>I'm not talking about a pod with multiple containers</strong>, because additional sidecar containers are rather intended for cross-cutting concerns like monitoring and authentication (like Kerberos), and it is not recommended to put different apps in a single pod. Having this in mind, is it possible to have a single deployment that can produce 2+ kinds of pods?</p>
| <blockquote>
<p>Is it possible to have single deployment that can produce 2+ kind of pods?</p>
</blockquote>
<p>No. A Deployment creates only one kind of Pod. You can update a Deployment's contents, and it will incrementally replace existing Pods with new ones that match the updated Pod spec.</p>
<p>Nothing stops you from creating multiple Deployments, one for each kind of Pod, and that's probably the approach you're looking for here.</p>
<blockquote>
<p>... when something is wrong after deployment all apps are rollbacked.</p>
</blockquote>
<p>Core Kubernetes doesn't have this capability on its own; indeed, it has somewhat limited capacity to tell that something has gone wrong, other than a container failing its health checks or exiting.</p>
<p>Of the various tools in <a href="https://stackoverflow.com/a/71546490">@SYN's answer</a> I at least have some experience with Helm. It's not quite "transactional" in the sense you might take from a DBMS, but it does have the ability to manage a collection of related resources (a "release" of a "chart") and it has the ability to roll back an entire version of a release across multiple Deployments if required. See the <a href="https://docs.helm.sh/docs/helm/helm_rollback/" rel="nofollow noreferrer"><code>helm rollback</code> command</a>.</p>
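<p>For illustration, the relevant Helm workflow looks roughly like this (the release name is hypothetical):</p>
<pre class="lang-sh prettyprint-override"><code>helm history my-release      # inspect past revisions of the release
helm rollback my-release 2   # roll every resource in the release back to revision 2
</code></pre>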
|
<p>I would like to know if it's possible to override a value in environment.ts files (angular) with a Kubernetes manifest?</p>
<p>I do it for application.properties (Spring), but it looks like it's not working the same way with Angular.</p>
<p>Here's what I do for Spring:</p>
<pre><code>spec:
  containers:
  - name: $(appName)
    image: ACR/$(image)
    imagePullPolicy:
    ports:
    - name: http
      containerPort: 8080
      protocol: TCP
    env:
    - name: SPRING_DATASOURCE_PASSWORD
      value: "$(datasourcePassword)"
</code></pre>
<p>The value of the password changes based on what I set for the $(datasourcePassword) variable (an Azure DevOps variable). If I put in a fake password, the API cannot access the DB.</p>
<p>But if I do the same with the manifest I use for Angular :</p>
<pre><code>spec:
  containers:
  - name: $(appName)
    image: ACR/$(image)
    imagePullPolicy:
    ports:
    - name: http
      containerPort: 8181
      protocol: TCP
    env:
    - name: APIENDPOINT
      value: "$(APIurl)"
</code></pre>
<p>the frontend still uses the value defined by default in environment.ts:</p>
<pre><code>export const environment = {
  production: true,
  APIEndpoint: 'http://localhost:8080'
};
</code></pre>
<p>Where am I going wrong?</p>
<p>Thanks a lot for the help!</p>
| <p>I faced the same issue you described.
A good way I found to solve it was creating an environment service loader that merges the properties from an env.json file with the properties defined in environment.ts.</p>
<p>Let me describe each step.</p>
<p><strong>ANGULAR</strong></p>
<p><strong>1. Create the following two providers in app.module.ts</strong></p>
<pre><code>import {environment} from '../environments/environment';

function initEnvironmentServiceFactory(
  envServiceLoader: EnvServiceLoader,
  environmentService: EnvironmentService
) {
  return async () => {
    const envs = await envServiceLoader.loadEnvVariables();
    environmentService.setEnvVariables(envs);
  };
}

providers: [
  ....
  {provide: 'environment_prod', useValue: environment},
  {
    provide: APP_INITIALIZER,
    useFactory: initEnvironmentServiceFactory,
    deps: [EnvServiceLoader, EnvironmentService],
    multi: true
  },
  ....
</code></pre>
<p><strong>2. Create the EnvServiceLoader that "overrides" environment.ts with the properties defined in the env.json file under assets/json. In this example the key <code>production</code> is defined as uneditable.</strong></p>
<pre><code>@Injectable({
  providedIn: 'root'
})
export class EnvServiceLoader {
  UNEDITABLE_PROPERTIES = ['production'];

  constructor(@Inject('environment_prod') private environment,
              private httpClient: HttpClient) {
  }

  async loadEnvVariables(): Promise<any> {
    const envJson = await this.getJsonConfig();
    return {...this.environment, ...envJson, ...this.getUneditableConfig(this.environment)};
  }

  private getJsonConfig(): Promise<any> {
    return this.httpClient.get('./assets/json/env.json')
      .pipe(map((res: string) => JSON.parse(res)),
        catchError(() => of({})
        )).toPromise();
  }

  private getUneditableConfig(environmentConfig: any) {
    return this.UNEDITABLE_PROPERTIES.reduce((config, key) => {
      config[key] = environmentConfig[key];
      return config;
    }, {});
  }
}
</code></pre>
<p><strong>3. Create the EnvironmentService</strong></p>
<pre><code>@Injectable({
  providedIn: 'root'
})
export class EnvironmentService {
  private env;

  setEnvVariables(env): void {
    this.env = env;
  }

  getEnvVariables(): any {
    return this.env;
  }
}
</code></pre>
<p><strong>KUBERNETES</strong></p>
<p><strong>1. Create a ConfigMap in Kubernetes containing the value you want to override through env.json</strong></p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: client-config
data:
  env.json: |
    {"API_endpoint": "http://localhost:8080"}
</code></pre>
<p><strong>2. In the Kubernetes Deployment for your client, mount the content of the ConfigMap as the file env.json under /usr/share/nginx/html/assets/json through a volume (named config-volume in this example).</strong></p>
<pre><code>containers:
- image: your_client_image
  imagePullPolicy: Always
  name: your_client
  ports:
  - containerPort: 80
    name: http
  volumeMounts:
  - mountPath: /usr/share/nginx/html/assets/json
    name: config-volume
volumes:
- configMap:
    name: client-config
  name: config-volume
</code></pre>
|
<p>I am new to k8s and need some help, plz.</p>
<p>I want to make a change in a pod's deployment configuration and change readOnlyRootFilesystem to false.</p>
<p>This is what I am trying to do, but it doesn't seem to work. Please suggest what's wrong:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl patch deployment eric-ran-rdm-singlepod -n vdu -o yaml -p {"spec":{"template":{"spec":{"containers":[{"name":"eric-ran-rdm-infra":{"securityContext":[{"readOnlyRootFilesystem":"true"}]}}]}}}}
</code></pre>
<p><a href="https://i.stack.imgur.com/cEIYl.png" rel="nofollow noreferrer">enter image description here</a></p>
<p>Thanks very much!!</p>
| <p>Your JSON is invalid. You need to make sure you are providing valid JSON and it should be in the correct structure as defined by the k8s API as well. You can use <a href="https://jsonlint.com" rel="nofollow noreferrer">jsonlint.com</a>.</p>
<pre class="lang-json prettyprint-override"><code>{
"spec": {
"template": {
"spec": {
"containers": [
{
"name": "eric-ran-rdm-infra",
"securityContext": {
"readOnlyRootFilesystem": "true"
}
}
]
}
}
}
}
</code></pre>
<blockquote>
<p>Note: I have only checked the syntax here and <strong>not</strong> checked/ tested the structure against the k8s API of this JSON here, but I think it should be right, please correct me if I am wrong.</p>
</blockquote>
<p>It might be easier to specify a deployment in a <code>.yaml</code> file and just apply that using <code>kubectl apply -f my_deployment.yaml</code>.</p>
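<p>Putting it together, a patch command with that JSON (quoted for the shell) might look like the sketch below. Since the question asks for the filesystem to be writable, the value here is the boolean <code>false</code> rather than a quoted string, because <code>readOnlyRootFilesystem</code> is a boolean field:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl patch deployment eric-ran-rdm-singlepod -n vdu --type=strategic \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"eric-ran-rdm-infra","securityContext":{"readOnlyRootFilesystem":false}}]}}}}'
</code></pre>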
|
<p>I'm trying to set up an OKE cluster on OCI, deploy a Ghost container in it for blogging, then expose it to the internet.
I've successfully done it with a load balancer service in my YAML, and my blog is visible to the internet:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: blog
  annotations:
    service.beta.kubernetes.io/oci-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/oci-load-balancer-tls-secret: ssl-certificate-secret
spec:
  loadBalancerIP: x.x.x.x
  type: LoadBalancer
  selector:
    app: blog
  ports:
  - protocol: TCP
    port: 443
    targetPort: 2368
</code></pre>
<p>which provisioned a new Load Balancer of shape 100Mbps in OCI. The problem is that it costs quite a bit.<br />
In OCI, there are two types of load balancer:</p>
<p><a href="https://i.stack.imgur.com/ZAO2C.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZAO2C.png" alt="enter image description here" /></a></p>
<p>and the second one (Network Load Balancer) is free.
So the question is, how do I use the second type (Network Load Balancer) with a Kubernetes cluster in OCI? Is there any other way of exposing my Ghost container pod to the internet? I've read somewhere about creating a NodePort, but I'm not sure if it works in OCI and don't really understand it.<br />
Any clue is welcome. Thank you!</p>
| <p>OKE now supports OCI Network Load Balancers (NLB). The documentation is available at: <a href="https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengcreatingloadbalancer.htm" rel="nofollow noreferrer">https://docs.oracle.com/en-us/iaas/Content/ContEng/Tasks/contengcreatingloadbalancer.htm</a>.</p>
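<p>Per that documentation, selecting the (free) network load balancer is done with an annotation on the Service; a sketch based on the Service from the question (annotation key as documented at the time of writing):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: blog
  annotations:
    oci.oraclecloud.com/load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: blog
  ports:
  - protocol: TCP
    port: 443
    targetPort: 2368
</code></pre>
<p>Note that a network load balancer operates at layer 4, so the TLS-termination annotations from your current Service would not apply; TLS would have to be terminated inside the cluster (for example by an ingress controller or the pod itself).</p>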
|
<p>In one of my deployment files, I want to set an environment variable. The variable is <code>KUBE_VERSION</code> and values must be fetched from a ConfigMap.</p>
<pre><code> kube_1_21: 1.21.10_1550
</code></pre>
<p>This is part of the ConfigMap where I want to set <code>KUBE_VERSION</code> to <code>1.21.10_1550</code>, but if the cluster is IKS 1.20, then the key will be:</p>
<pre><code>kube_1_20: 1.20.21_3456
</code></pre>
<p>The <code>kube_</code> prefix is always static. How can I set the environment variable using a regex expression?</p>
<p>Something of this sort:</p>
<pre><code>- name: KUBE_VERSION
  valueFrom:
    configMapKeyRef:
      name: cluster-info
      key: "kube_1*"
</code></pre>
| <p>As far as I know, it is unfortunately not possible to use a regular expression the way you would like. Additionally, the API gives you the regular expression that validates the entered key:</p>
<blockquote>
<p>regex used for validation is '[-._a-zA-Z0-9]+')</p>
</blockquote>
<p>It follows that you have to enter the <code>key</code> as an alphanumeric string, which may additionally contain the characters <code>-</code>, <code>_</code> and <code>.</code>, so it is not possible to use a regex in this place.</p>
<p>As a workaround, you can write a custom script, e.g. in Bash, and replace the proper line with the <a href="https://stackoverflow.com/questions/8822097/how-to-replace-a-whole-line-with-sed">sed command</a>, as sketched below.</p>
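<p>A rough sketch of that kind of workaround (the manifest filename, the placeholder token and the version-detection logic are all assumptions you would adapt):</p>
<pre class="lang-sh prettyprint-override"><code>#!/bin/bash
# Pick the ConfigMap key matching the cluster's minor version (e.g. kube_1_21),
# then substitute it into a deployment template before applying it.
MINOR=$(kubectl version -o json | jq -r '.serverVersion.minor' | tr -d '+')
KEY="kube_1_${MINOR}"
sed "s/__KUBE_KEY__/${KEY}/" deployment-template.yaml | kubectl apply -f -
</code></pre>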
|
<p>I want to allow traffic to look like the following:</p>
<p>external client https request (e.g. <a href="https://my-app-out-side-cluster.com" rel="nofollow noreferrer">https://my-app-out-side-cluster.com</a>) -> inside the cluster (terminate tls) and change to http (e.g. <a href="http://my-app-out-side-cluster.com" rel="nofollow noreferrer">http://my-app-out-side-cluster.com</a>) -> service outside the cluster</p>
<p>I have followed <a href="https://stackoverflow.com/questions/57764237/kubernetes-ingress-to-external-service">this</a> post to configure my Ingress and External traffic, however, since my service outside the cluster is http, I get an SSL error when making a request with https. Changing the request to http works, however, this is not desired.</p>
<p>My question is, is there a way to</p>
<ol>
<li>Terminate SSL in the Ingress (using the ingress controller)</li>
<li>Redirect traffic to the service outside the cluster listening on http ?</li>
</ol>
<pre><code>---
kind: Service
apiVersion: v1
metadata:
  name: my-external-service
spec:
  type: ExternalName
  externalName: my-app-out-side-cluster.com
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: kong
spec:
  controller: ingress-controllers.konghq.com/kong
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: my-ingress
  namespace: kong
  annotations:
    konghq.com/protocols: "https"
spec:
  ingressClassName: kong
  tls:
  - secretName: my-secret
    hosts:
    - my-app-out-side-cluster.com
  rules:
  - host: my-app-out-side-cluster.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-external-service
            port:
              number: 80
</code></pre>
| <p>I am not sure how your setup and K8s cluster are configured:</p>
<p>is it a <strong>private</strong> or <strong>public</strong> cluster, and how does the request leave the pod running your Node or Java service that calls the HTTP service?</p>
<blockquote>
<p>external client https request (e.g.
<a href="https://my-app-out-side-cluster.com" rel="nofollow noreferrer">https://my-app-out-side-cluster.com</a>) -> inside the cluster</p>
</blockquote>
<p>For this you are on the right path. You have to set up the ingress controller, which will handle the incoming request and do the TLS termination.</p>
<p>Your TLS/SSL cert will be stored inside a Kubernetes Secret and will get attached to the Ingress.</p>
<p>The Ingress will accept the HTTPS traffic and do the <strong>TLS</strong> termination, so in the background it will forward plain <strong>HTTP</strong> traffic.</p>
<p>Reference article : <a href="https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes" rel="nofollow noreferrer">https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes</a></p>
<p>If you are on <strong>AWS</strong> : <a href="https://aws.amazon.com/premiumsupport/knowledge-center/terminate-https-traffic-eks-acm/" rel="nofollow noreferrer">https://aws.amazon.com/premiumsupport/knowledge-center/terminate-https-traffic-eks-acm/</a></p>
<blockquote>
<p>change to http (e.g. <a href="http://my-app-out-side-cluster.com" rel="nofollow noreferrer">http://my-app-out-side-cluster.com</a>) -> service
outside the cluster</p>
</blockquote>
<p>I think this endpoint is probably being called by the service running inside the pod, so there you can simply switch it to HTTP and it will work.</p>
<p>In your K8s cluster, the traffic route depends on the CNI plugin; ideally, the pod gets scheduled on a node and sends the outbound request directly from there.</p>
<p>Your outbound request doesn't go out through the ingress controller; only the <strong>response</strong> to incoming traffic does.</p>
|
<p>I have a <code>Laravel</code> backend API and an <code>Angular</code> frontend. I deploy them with <code>Kubernetes ⎈</code> on <strong>Minikube</strong>.</p>
<pre><code>NAME READY STATUS RESTARTS AGE
pod/backend-deployment-bd4f98697-c2scp 1/1 Running 1 (22m ago) 23m
pod/frontend-deployment-8bc989f89-cxj67 1/1 Running 0 23m
pod/mysql-0 1/1 Running 0 23m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/backend-service NodePort 10.108.40.53 <none> 8000:30670/TCP 23m
service/frontend-service NodePort 10.105.57.226 <none> 4200:32353/TCP 23m
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25m
service/mysql ClusterIP None <none> 3306/TCP 23m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/backend-deployment 1/1 1 1 23m
deployment.apps/frontend-deployment 1/1 1 1 23m
NAME DESIRED CURRENT READY AGE
replicaset.apps/backend-deployment-bd4f98697 1 1 1 23m
replicaset.apps/frontend-deployment-8bc989f89 1 1 1 23m
NAME READY AGE
statefulset.apps/mysql 1/1 23m
</code></pre>
<p>I can access both the front service and the back service with <code>minikube service SERVICE-NAME</code>.</p>
<ul>
<li>This work perfectly. ✅<br></li>
</ul>
<hr />
<p>I have also an <code>Ingress</code> for the frontend.</p>
<pre><code>NAME CLASS HOSTS ADDRESS PORTS AGE
frontend-ingress <none> kubiapp-frontend-group35.com localhost 80 27m
</code></pre>
<p>I can access the <code>Ingress</code> with a curl. <code>curl http://kubiapp-frontend-group35.com</code>. ✅<br>
But, when I check the URL on a browser, I get some CORS errors.<br></p>
<ul>
<li>The Ingress works but I have CORS Error. ❌</li>
</ul>
<hr />
<p><code>frontend-ingress</code>:<br></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "*"
    nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS, DELETE"
    nginx.ingress.kubernetes.io/cors-allow-headers: "DNT,X-CustomHeader,X-LANG,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,X-Api-Key,X-Device-Id,Access-Control-Allow-Origin"
  name: frontend-ingress
  labels:
    name: frontend-ingress
spec:
  rules:
  - host: kubiapp-frontend-group35.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 4200
</code></pre>
<p>I don't understand the Ingress annotation well, does it enable CORS policy for the Frontend? Shouldn't I make it for the Backend? Do I have to create a Backend Ingress?<br></p>
<h3>EDIT:</h3>
<p>The error message was:</p>
<blockquote>
<p>Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://backend-service:8000/api/health. (Reason: CORS request did not succeed). Status code: (null).</p>
</blockquote>
<hr />
<h1>TL;DR :</h1>
<p>I can access the Ingress URL, but I have CORS errors. How to solve it with <code>Kubernetes</code></p>
| <p>The mistake I made was putting the <code>Kubernetes</code> annotations only on the frontend Ingress.
What I had to do was create a backend Ingress as well that used the annotations:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "*"
    nginx.ingress.kubernetes.io/cors-allow-methods: "PUT, GET, POST, OPTIONS, DELETE"
    nginx.ingress.kubernetes.io/cors-allow-headers: "DNT,X-CustomHeader,X-LANG,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,X-Api-Key,X-Device-Id,Access-Control-Allow-Origin"
  name: backend-ingress
  labels:
    name: backend-ingress
spec:
  rules:
  - host: kubiapp-backend-group35.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: backend-service
            port:
              number: 8000
</code></pre>
<p>The frontend app will now communicate correctly with <code>http://kubiapp-backend-group35.com</code>.</p>
|
<p>I have a k8 setup that looks like this</p>
<p><code>ingress -> headless service (k8 service with clusterIp: none) -> statefulsets ( 2pods)</code></p>
<p>Fqdn looks like this:</p>
<pre><code>nslookup my-service
Server: 100.4.0.10
Address: 100.4.0.10#53
Name: my-service.my-namespace.svc.cluster.local
Address: 100.2.2.8
Name: my-service.my-namespace.svc.cluster.local
Address: 100.1.4.2
</code></pre>
<p>I am trying to reach one of the pods directly via the service using the following FQDN, but I am not able to do so.</p>
<pre><code>curl -I my-pod-0.my-service.my-namespace.svc.cluster.local:8222
curl: (6) Could not resolve host: my-pod-0.my-service.my-namespace.svc.cluster.local
</code></pre>
<p>If I try to hit the service directly then it works correctly (as a loadbalancer)</p>
<pre><code>curl -I my-service.my-namespace.svc.cluster.local:8222
HTTP/1.1 200 OK
Date: Sat, 31 Jul 2021 21:24:42 GMT
Content-Length: 656
</code></pre>
<p>If I try to hit the pod directly using it's cluster ip, it also works fine</p>
<pre><code>curl -I 100.2.2.8:8222
HTTP/1.1 200 OK
Date: Sat, 31 Jul 2021 21:29:22 GMT
Content-Length: 656
Content-Type: text/html; charset=utf-8
</code></pre>
<p>But my use case requires me to be able to hit the statefulset pod using fqdn i.e <code>my-pod-0.my-service.my-namespace.svc.cluster.local</code> . What am I missing here?</p>
| <p>The original answer didn't clarify how the OP fixed the issue: the problem was in the <code>serviceName</code> property of the StatefulSet spec, which must reference the headless Service for the per-pod DNS records to be created.</p>
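<p>For the per-pod DNS name <code>my-pod-0.my-service.my-namespace.svc.cluster.local</code> to be published, the StatefulSet has to declare the headless Service as its governing service via <code>spec.serviceName</code>. A minimal sketch using the names from the question (labels and image are placeholders):</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-pod
  namespace: my-namespace
spec:
  serviceName: my-service   # must match the headless Service's metadata.name
  replicas: 2
  selector:
    matchLabels:
      app: my-app           # placeholder; must also match the Service selector
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: my-image     # placeholder
        ports:
        - containerPort: 8222
</code></pre>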
|
<p>I want to start minikube to learn Kubernetes but am having trouble because of error <code>RSRC_INSUFFICIENT_CORES</code>.
My mac has 2 CPU cores and minikube docs say that 2 cores are required.
Here a the machine specs from "About this Mac":</p>
<ul>
<li>MacBook Pro (15-inch, Late 2008)</li>
<li>Processor 2.4 GHz Intel Core 2 Duo</li>
<li>Memory 8 GB 1067 MHz DDR3</li>
</ul>
<p>This machine has VirtualBox Version 5.2.35 r135669 but its not running, and working docker and docker-machine, as shown here:</p>
<pre><code>✗ docker-machine --version
docker-machine version 0.16.1, build
✗ docker --version
Docker version 17.05.0-ce, build 89658be
</code></pre>
<p>I have successfully installed minikube v1.25.1 using an updated version of MacPorts, as shown here:</p>
<pre><code>✗ which minikube
/opt/local/bin/minikube
✗ minikube version
minikube version: v1.25.1
</code></pre>
<p>I cannot start minikube and get error: <code>Exiting due to RSRC_INSUFFICIENT_CORES</code>. Here is the output that I see from 2 different <code>minikube start</code> attempts:</p>
<pre><code>✗ minikube start --cpus=2
😄 minikube v1.25.1 on Darwin 10.11.6
✨ Automatically selected the docker driver. Other choices: virtualbox, ssh
- Ensure your docker daemon has access to enough CPU/memory resources.
- Docs https://docs.docker.com/docker-for-mac/#resources
⛔ Exiting due to RSRC_INSUFFICIENT_CORES: Requested cpu count 2 is greater than the available cpus of 1
✗ minikube start --cpus=1
😄 minikube v1.25.1 on Darwin 10.11.6
✨ Automatically selected the docker driver. Other choices: virtualbox, ssh
⛔ Exiting due to RSRC_INSUFFICIENT_CORES: Requested cpu count 1 is less than the minimum allowed of 2
</code></pre>
<p>Please excuse newbie-ness--this is my first ever SO question!</p>
<p>Is it impossible to start minikube on this Mac?</p>
| <p>I ran into these errors on an M1 Mac because my podman (4.0.2) did not have the VM configured with enough capacity. <a href="https://itnext.io/goodbye-docker-desktop-hello-minikube-3649f2a1c469" rel="noreferrer">Abhinav Sonkar</a> figured out how to fix this. This builds on his trail blazing.</p>
<p>First you may need to get rid of your existing <a href="https://docs.podman.io/en/latest/markdown/podman-machine.1.html" rel="noreferrer">VM in podman</a>:</p>
<pre><code>podman machine stop
podman machine rm
</code></pre>
<p>Then recreate it with adequate specs and tweak the connections to <a href="https://minikube.sigs.k8s.io/docs/drivers/podman/" rel="noreferrer">work around</a> another issue:</p>
<pre><code>podman machine init --cpus 6 --memory 12288 --disk-size 50
podman machine start
podman system connection default podman-machine-default-root
</code></pre>
<p>After that I was able to install minikube from <code>brew</code> and start it with:</p>
<pre><code>minikube start --driver=podman --container-runtime=cri-o
</code></pre>
<p>With that the <code>minikube</code> subcommands work and <code>kubectl</code> seems fine talking to it. I've also gotten <code>minikube start</code> to work with <code>--kubernetes-version=v1.23.5</code>, <code>v1.22.5</code>, <code>v1.22.8</code> and <code>v1.23.2</code>.</p>
|
<p>I have an Elasticsearch cluster (6.3) running on Kubernetes (GKE) with the following manifest file:</p>
<pre class="lang-yaml prettyprint-override"><code>---
# Source: elasticsearch/templates/manifests.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: elasticsearch-configmap
labels:
app.kubernetes.io/name: "elasticsearch"
app.kubernetes.io/component: elasticsearch-server
data:
elasticsearch.yml: |
cluster.name: "${CLUSTER_NAME}"
node.name: "${NODE_NAME}"
path.data: /usr/share/elasticsearch/data
path.repo: ["${BACKUP_REPO_PATH}"]
network.host: 0.0.0.0
discovery.zen.minimum_master_nodes: 1
discovery.zen.ping.unicast.hosts: ${DISCOVERY_SERVICE}
log4j2.properties: |
status = error
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n
rootLogger.level = info
rootLogger.appenderRef.console.ref = console
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: elasticsearch
labels: &ElasticsearchDeploymentLabels
app.kubernetes.io/name: "elasticsearch"
app.kubernetes.io/component: elasticsearch-server
spec:
selector:
matchLabels: *ElasticsearchDeploymentLabels
serviceName: elasticsearch-svc
replicas: 2
updateStrategy:
# The procedure for updating the Elasticsearch cluster is described at
# https://www.elastic.co/guide/en/elasticsearch/reference/current/rolling-upgrades.html
type: OnDelete
template:
metadata:
labels: *ElasticsearchDeploymentLabels
spec:
terminationGracePeriodSeconds: 180
initContainers:
# This init container sets the appropriate limits for mmap counts on the hosting node.
# https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html
- name: set-max-map-count
image: marketplace.gcr.io/google/elasticsearch/ubuntu16_04@...
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
command:
- /bin/bash
- -c
- 'if [[ "$(sysctl vm.max_map_count --values)" -lt 262144 ]]; then sysctl -w vm.max_map_count=262144; fi'
containers:
- name: elasticsearch
image: eu.gcr.io/projectId/elasticsearch6.3@sha256:...
imagePullPolicy: Always
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: CLUSTER_NAME
value: "elasticsearch-cluster"
- name: DISCOVERY_SERVICE
value: "elasticsearch-svc"
- name: BACKUP_REPO_PATH
value: ""
ports:
- name: prometheus
containerPort: 9114
protocol: TCP
- name: http
containerPort: 9200
- name: tcp-transport
containerPort: 9300
volumeMounts:
- name: configmap
mountPath: /etc/elasticsearch/elasticsearch.yml
subPath: elasticsearch.yml
- name: configmap
mountPath: /etc/elasticsearch/log4j2.properties
subPath: log4j2.properties
- name: elasticsearch-pvc
mountPath: /usr/share/elasticsearch/data
readinessProbe:
httpGet:
path: /_cluster/health?local=true
port: 9200
initialDelaySeconds: 5
livenessProbe:
exec:
command:
- /usr/bin/pgrep
- -x
- "java"
initialDelaySeconds: 5
resources:
requests:
memory: "2Gi"
- name: prometheus-to-sd
image: marketplace.gcr.io/google/elasticsearch/prometheus-to-sd@sha256:8e3679a6e059d1806daae335ab08b304fd1d8d35cdff457baded7306b5af9ba5
ports:
- name: profiler
containerPort: 6060
command:
- /monitor
- --stackdriver-prefix=custom.googleapis.com
- --source=elasticsearch:http://localhost:9114/metrics
- --pod-id=$(POD_NAME)
- --namespace-id=$(POD_NAMESPACE)
- --monitored-resource-types=k8s
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumes:
- name: configmap
configMap:
name: "elasticsearch-configmap"
volumeClaimTemplates:
- metadata:
name: elasticsearch-pvc
labels:
app.kubernetes.io/name: "elasticsearch"
app.kubernetes.io/component: elasticsearch-server
spec:
accessModes: ["ReadWriteOnce"]
storageClassName: standard
resources:
requests:
storage: 50Gi
---
apiVersion: v1
kind: Service
metadata:
name: elasticsearch-prometheus-svc
labels:
app.kubernetes.io/name: elasticsearch
app.kubernetes.io/component: elasticsearch-server
spec:
clusterIP: None
ports:
- name: prometheus-port
port: 9114
protocol: TCP
selector:
app.kubernetes.io/name: elasticsearch
app.kubernetes.io/component: elasticsearch-server
---
apiVersion: v1
kind: Service
metadata:
name: elasticsearch-svc-internal
labels:
app.kubernetes.io/name: "elasticsearch"
app.kubernetes.io/component: elasticsearch-server
spec:
ports:
- name: http
port: 9200
- name: tcp-transport
port: 9300
selector:
app.kubernetes.io/name: "elasticsearch"
app.kubernetes.io/component: elasticsearch-server
type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
name: ilb-service-elastic
annotations:
cloud.google.com/load-balancer-type: "Internal"
labels:
app: elasticsearch-svc
spec:
type: LoadBalancer
loadBalancerIP: some-ip-address
selector:
app.kubernetes.io/component: elasticsearch-server
app.kubernetes.io/name: elasticsearch
ports:
- port: 9200
protocol: TCP
</code></pre>
<p>This manifest was written from the template that used to be available on the GCP marketplace.</p>
<p>I'm encountering the following issue: the cluster is supposed to have 2 nodes, and indeed 2 pods are running.
However</p>
<ul>
<li>a call to ip:9200/_nodes returns just one node</li>
<li>there still seems to be a second node running that receives traffic (at least, read traffic), as visible in the logs. Those requests typically fail because the requested entities don't exist on that node (just on the master node).</li>
</ul>
<p>I can't wrap my head around the fact that the node is at the same time not visible to the master node, yet receives read traffic from the load balancer pointing to the StatefulSet.</p>
<p>Am I missing something subtle ?</p>
| <p>Did you check which <strong>type</strong> each of the two <strong>nodes</strong> is?</p>
<p>There are <strong>master</strong> nodes and <strong>data</strong> nodes; at any time only one master is elected, while the others stay in the background. If the current master node goes down, a new master gets elected and handles further requests.</p>
<p>I can't see a node type config in your StatefulSet. I would recommend checking out the Elasticsearch Helm chart to set up and deploy on GKE.</p>
<p>Helm chart : <a href="https://github.com/elastic/helm-charts/tree/main/elasticsearch" rel="nofollow noreferrer">https://github.com/elastic/helm-charts/tree/main/elasticsearch</a></p>
<p>Sharing an example env config for reference:</p>
<pre><code>env:
- name: NAMESPACE
  valueFrom:
    fieldRef:
      fieldPath: metadata.namespace
- name: NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: CLUSTER_NAME
  value: my-es
- name: NODE_MASTER
  value: "false"
- name: NODE_INGEST
  value: "false"
- name: HTTP_ENABLE
  value: "false"
- name: ES_JAVA_OPTS
  value: -Xms256m -Xmx256m
</code></pre>
<p>read more at : <a href="https://faun.pub/https-medium-com-thakur-vaibhav23-ha-es-k8s-7e655c1b7b61" rel="nofollow noreferrer">https://faun.pub/https-medium-com-thakur-vaibhav23-ha-es-k8s-7e655c1b7b61</a></p>
|
<p>I've made a deployment in GKE with a readiness probe. My container is coming up but it seems the readiness probe is having some difficulty. When I try to describe the pod I see that there are many probe warnings but it's not clear what the warning is.</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 43m default-scheduler Successfully assigned default/tripvector-7996675758-7vjxf to gke-tripvector2-default-pool-78cf58d9-5dgs
Normal LoadBalancerNegNotReady 43m neg-readiness-reflector Waiting for pod to become healthy in at least one of the NEG(s): [k8s1-07274a01-default-tripvector-np-60000-a912870e]
Normal Pulling 43m kubelet Pulling image "us-west1-docker.pkg.dev/triptastic-1542412229773/tripvector/tripvector"
Normal Pulled 43m kubelet Successfully pulled image "us-west1-docker.pkg.dev/triptastic-1542412229773/tripvector/tripvector" in 888.583654ms
Normal Created 43m kubelet Created container tripvector
Normal Started 43m kubelet Started container tripvector
Normal LoadBalancerNegTimeout 32m neg-readiness-reflector Timeout waiting for pod to become healthy in at least one of the NEG(s): [k8s1-07274a01-default-tripvector-np-60000-a912870e]. Marking condition "cloud.google.com/load-balancer-neg-ready" to True.
Warning ProbeWarning 3m1s (x238 over 42m) kubelet Readiness probe warning:
</code></pre>
<p>I've tried examining events with <code>kubectl get events</code> but that also doesn't provide extra details on the probe warning:</p>
<pre><code> ❯❯❯ k get events
LAST SEEN TYPE REASON OBJECT MESSAGE
43m Normal LoadBalancerNegNotReady pod/tripvector-7996675758-7vjxf Waiting for pod to become healthy in at least one of the NEG(s): [k8s1-07274a01-default-tripvector-np-60000-a912870e]
43m Normal Scheduled pod/tripvector-7996675758-7vjxf Successfully assigned default/tripvector-7996675758-7vjxf to gke-tripvector2-default-pool-78cf58d9-5dgs
43m Normal Pulling pod/tripvector-7996675758-7vjxf Pulling image "us-west1-docker.pkg.dev/triptastic-1542412229773/tripvector/tripvector"
43m Normal Pulled pod/tripvector-7996675758-7vjxf Successfully pulled image "us-west1-docker.pkg.dev/triptastic-1542412229773/tripvector/tripvector" in 888.583654ms
43m Normal Created pod/tripvector-7996675758-7vjxf Created container tripvector
43m Normal Started pod/tripvector-7996675758-7vjxf Started container tripvector
3m38s Warning ProbeWarning pod/tripvector-7996675758-7vjxf Readiness probe warning:
</code></pre>
<p>How can I get more details on how/why this readiness probe is giving off warnings?</p>
<p>EDIT: the logs of the pod are mostly empty as well (<code>klf</code> is an alias I have for <code>kubectl logs</code>):</p>
<pre><code> ❯❯❯ klf tripvector-6f4d4c86c5-dn55c
(node:1) [DEP0131] DeprecationWarning: The legacy HTTP parser is deprecated.
{"line":"87","file":"percolate_synced-cron.js","message":"SyncedCron: Scheduled \"Refetch expired Google Places\" next run @Tue Mar 22 2022 17:47:53 GMT+0000 (Coordinated Universal Time)","time":{"$date":1647971273653},"level":"info"}
</code></pre>
<p>Regarding the log message <em><strong>“DeprecationWarning: The legacy HTTP parser is deprecated.”</strong></em>: it is due to the legacy HTTP parser being deprecated with the pending <em><strong>End-of-Life of Node.js 10.x</strong></em>. It now warns on use but otherwise continues to function, and it may be removed in a future <strong>Node.js 12.x</strong> release. See <a href="https://nodejs.org/ja/blog/release/v12.22.0/#:%7E:text=The%20legacy%20HTTP%20parser%20is%20runtime%20deprecated&text=x%20" rel="nofollow noreferrer">Node v12.22.0 (LTS)</a> for more details.</p>
<p>On the other hand, regarding the <code>kubelet</code>’s <em><strong>“ProbeWarning”</strong></em> reason in the warning events on your container: health check (liveness & readiness) probes using an <em><strong>HTTPGetAction</strong></em> no longer follow redirects to different host-names from the original probe request. Instead, these non-local redirects are treated as a <em><strong>Success</strong></em> (the documented behavior), and an event with reason <strong>"ProbeWarning"</strong> is generated to indicate that the redirect was ignored. If you were previously relying on the redirect to run health checks against different endpoints, you will need to perform the health-check logic outside the <em><strong>kubelet</strong></em>, for instance by proxying the external endpoint rather than redirecting to it. You can verify the detailed root cause of this in the <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.14.md#urgent-upgrade-notes" rel="nofollow noreferrer">Kubernetes 1.14 release notes</a>. There is no way to see more detailed information about the <strong>“ProbeWarning”</strong> in the <em><strong>Events</strong></em> table. Use the following URLs as a reference too: <a href="https://phabricator.wikimedia.org/T294072" rel="nofollow noreferrer">Kubernetes emitting ProbeWarning</a>, <a href="https://github.com/kubernetes/kubernetes/issues/103877" rel="nofollow noreferrer">Confusing/incomplete readiness probe warning</a> and <a href="https://github.com/kubernetes/kubernetes/pull/103967" rel="nofollow noreferrer">Add probe warning message body</a>.</p>
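<p>If you want to confirm whether the kubelet recorded any message body at all for these warnings, you can dump the raw events filtered by reason, for example:</p>
<pre><code>kubectl get events --field-selector reason=ProbeWarning -o yaml
</code></pre>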
|
<p>I'm trying to introduce a small change to an existing project without unit tests and decided I'd try to learn enough about nodejs and jest to include tests with my change. However, I cannot get mocks to work like I'd expect in, say, python. The project uses the "kubernetes-client" library from godaddy and tries to create a config object from the envvar "KUBECONFIG", like this:</p>
<pre><code>// a few lines into server.js
// Instantiate Kubernetes client
const Client = require('kubernetes-client').Client
const config = require('kubernetes-client').config;
if (process.env.KUBECONFIG) {
client = new Client({
config: config.fromKubeconfig(config.loadKubeconfig(process.env.KUBECONFIG)),
version: '1.13'
});
}
else {
client = new Client({ config: config.getInCluster(), version: '1.9' });
}
</code></pre>
<p>In my testing environment, I don't want any API calls, so I'm trying to mock it out:</p>
<pre><code>// __tests__/server.test.js
// Set up mocks at the top because of how server.js sets up k8s client
const k8sClient = require('kubernetes-client');
const narp = 'narp';
jest.mock('kubernetes-client', () => {
const noConfigmaps = jest.fn(() => {
throw narp;
});
const namespaces = jest.fn().mockReturnValue({
configmaps: jest.fn().mockReturnValue({
get: noConfigmaps
})
});
const addCustomResourceDefinition = jest.fn().mockReturnThis()
const mockClient = {
api: {
v1: {
namespaces
}
},
addCustomResourceDefinition: jest.fn().mockReturnThis(),
};
return {
Client: jest.fn(() => mockClient),
config: {
fromKubeconfig: jest.fn().mockReturnThis(),
loadKubeconfig: jest.fn().mockReturnThis(),
getInCluster: jest.fn().mockReturnThis()
},
};
});
const app = require('../server.js')
const supertest = require('supertest');
const requestWithSuperTest = supertest(app.app);
describe('Testing server.js', () => {
afterAll(() => {
app.server.close();
});
describe('Tests with k8s client throwing error when fetching configmaps', () => {
it("finds a resource's ingressGroup by name", () => {
var resource = {
"spec": {
"ingressClass": "foo",
"ingressTargetDNSName": "foo"
}
};
var ingressGroups = [
{
"ingressClass": "bar",
"hostName": "bar",
"name": "barName"
},
{
"ingressClass": "foo",
"hostName": "foo",
"name": "fooName"
}
];
expect(app.findMatchingIngressGroupForResource(resource, ingressGroups)).toBe("fooName");
});
it('GET /healthcheck should respond "Healthy"', async () => {
const resp = await requestWithSuperTest.get('/healthcheck');
console.log("Response in Testing Endpoints: " + JSON.stringify(resp));
expect(resp.status).toEqual(200);
expect(resp.type).toEqual(expect.stringContaining('text'));
expect(resp.text).toEqual('Healthy');
});
it('Tests getIngressGroups() rejects with error when it cannot get configmaps', async () => {
app.getIngressGroups()
.then()
.catch(error => {
expect(error).toEqual("Failed to fetch Ingress Groups: " + narp);
});
});
});
});
</code></pre>
<p>With this setup, the tests pass (although I suspect it's meaningless). If I try to move the mocks inside the <code>describe</code> or <code>it</code> block using a <code>beforeEach</code> function (or not) so that I can change the behavior to return mock data instead of throwing an error, I immediately get errors with the k8s client complaining it can't find my kubeconfig/clusterconfig:</p>
<pre><code>$ npm run testj
> testj
> jest --detectOpenHandles
kubernetes-client deprecated require('kubernetes-client').config, use require('kubernetes-client/backends/request').config. server.js:45:44
kubernetes-client deprecated loadKubeconfig see https://github.com/godaddy/kubernetes-client/blob/master/merging-with-kubernetes.md#request-kubeconfig- server.js:49:42
FAIL __tests__/server.test.js
● Test suite failed to run
ENOENT: no such file or directory, open 'NOT_A_FILE'
44 | if (process.env.KUBECONFIG) {
45 | client = new Client({
> 46 | config: config.fromKubeconfig(config.loadKubeconfig(process.env.KUBECONFIG)),
| ^
47 | version: '1.13'
48 | });
49 | }
at node_modules/kubernetes-client/backends/request/config.js:335:37
at Array.map (<anonymous>)
at Object.loadKubeconfig (node_modules/kubernetes-client/backends/request/config.js:334:28)
at Object.eval [as loadKubeconfig] (eval at wrapfunction (node_modules/kubernetes-client/node_modules/depd/index.js:425:22), <anonymous>:5:11)
at Object.<anonymous> (server.js:46:46)
</code></pre>
<p>If anybody has run into this kind of behavior before or sees some obviously-wrong lines, I'd really appreciate any tips or information. Thanks!</p>
| <p>I had to change a few things to get this working:</p>
<ul>
<li><code>jest.doMock()</code> instead of <code>jest.mock()</code></li>
<li>use of <code>let app</code> inside the <code>describe</code> block instead of <code>const app</code> at module-scope</li>
<li>a <code>beforeEach()</code> which calls <code>jest.resetModules()</code></li>
<li>an <code>afterEach()</code> which calls <code>app.close()</code></li>
<li>in the <code>it</code> block which overrides the mock(s), explicitly call <code>jest.resetModules()</code> before overriding</li>
<li>in the <code>it</code> block which overrides the mock(s), call <code>app.close()</code> and re-initialize <code>app</code> before invoking the actual function-under-test/expect</li>
</ul>
<p>Resulting test file:</p>
<pre><code>// Set up mocks at the top because of how server.js sets up k8s client
const k8sClient = require('kubernetes-client');
const supertest = require('supertest');
const narp = 'narp';
describe('Testing server.js', () => {
let app;
let requestWithSuperTest;
beforeEach(() => {
jest.resetModules();
jest.doMock('kubernetes-client', () => {
const noConfigmaps = jest.fn(() => {
throw narp;
});
const namespaces = jest.fn().mockReturnValue({
configmaps: jest.fn().mockReturnValue({
get: noConfigmaps
})
});
const addCustomResourceDefinition = jest.fn().mockReturnThis()
const mockClient = {
api: {
v1: {
namespaces
}
},
addCustomResourceDefinition: jest.fn().mockReturnThis(),
};
return {
Client: jest.fn(() => mockClient),
config: {
fromKubeconfig: jest.fn().mockReturnThis(),
loadKubeconfig: jest.fn().mockReturnThis(),
getInCluster: jest.fn().mockReturnThis()
},
};
});
app = require('../server.js');
requestWithSuperTest = supertest(app.app);
});
afterEach(() => {
app.server.close();
});
it("finds a Resource's ingressGroup by name", () => {
var resource = {
"spec": {
"ingressClass": "foo",
"ingressTargetDNSName": "foo"
}
};
var ingressGroups = [
{
"ingressClass": "bar",
"hostName": "bar",
"name": "barName"
},
{
"ingressClass": "foo",
"hostName": "foo",
"name": "fooName"
}
];
expect(app.findMatchingIngressGroupForResource(resource, ingressGroups)).toBe("fooName");
});
it('GET /healthcheck should respond "Healthy"', async () => {
const resp = await requestWithSuperTest.get('/healthcheck');
console.log("Response in Testing Endpoints: " + JSON.stringify(resp));
expect(resp.status).toEqual(200);
expect(resp.type).toEqual(expect.stringContaining('text'));
expect(resp.text).toEqual('Healthy');
});
it('Tests getIngressGroups() rejects with error when it cannot get configmaps', async () => {
expect.assertions(1);
await app.getIngressGroups()
.catch(error => {
expect(error).toEqual("Failed to fetch Ingress Groups: " + narp);
});
});
it('Tests getIngressGroups() succeeds when it gets configmaps', async () => {
expect.assertions(1);
jest.resetModules();
jest.doMock('kubernetes-client', () => {
const noConfigmaps = jest.fn(() => {
console.log('Attempted to get mocked configmaps');
return Promise.resolve({
body: {
items: []
}
});
});
const namespaces = jest.fn().mockReturnValue({
configmaps: jest.fn().mockReturnValue({
get: noConfigmaps
})
});
const addCustomResourceDefinition = jest.fn().mockReturnThis()
const mockClient = {
api: {
v1: {
namespaces
}
},
addCustomResourceDefinition: jest.fn().mockReturnThis(),
};
return {
Client: jest.fn(() => mockClient),
config: {
fromKubeconfig: jest.fn().mockReturnThis(),
loadKubeconfig: jest.fn().mockReturnThis(),
getInCluster: jest.fn().mockReturnThis()
},
};
});
app.server.close();
app = require('../server.js');
await app.getIngressGroups()
.then(result => {
expect(result).toEqual([])
});
});
});
</code></pre>
|
<p>I'm using <strong>Google Composer version</strong> <strong>1.18.0</strong>, which has <strong>Airflow version 2.2.3</strong>. The worker node count is set to 4 with a disk size of 100 GB and machine type <strong>n1-standard-2</strong>, and</p>
<pre><code>Web server machine type=composer-n1-webserver-2 (2 vCPU, 1.6 GB memory)
Cloud SQL machine type=db-n1-standard-2 (2 vCPU, 7.5 GB memory)
</code></pre>
<p>the number of schedulers is 1, and the rest of the configs are mostly defaults.</p>
<p>I have just moved 4 jobs into this environment and I'm already seeing pods evicted due to the error below:</p>
<p><strong>Container airflow-worker exceeded its local ephemeral storage limit "10137Mi".</strong></p>
<p>I have tried contacting the Google support team; they told us to create a new environment with enough resources. The newly created one (an Autopilot cluster in another, older environment) is not working as expected either, and its pods are also all evicted with the same error. So we created the environment with the config above, with enough resources, but we are still seeing this error and are not sure how to get rid of it.</p>
<p>PFA screenshots for reference
<a href="https://i.stack.imgur.com/PGLmL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PGLmL.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/bDHDu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bDHDu.png" alt="enter image description here" /></a></p>
<p>Can anyone help me here</p>
<p>YAML FILE</p>
<pre><code> apiVersion: v1
kind: Pod
metadata:
annotations:
composer.cloud.google.com/running-task: "true"
composer.cloud.google.com/template-version:
919cc331f4a25a332a1e0b6989f5fb56
seccomp.security.alpha.kubernetes.io/pod: runtime/default
creationTimestamp: "2022-01-25T17:02:00Z"
generateName: airflow-worker-
labels:
run: airflow-worker
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.: {}
f:composer.cloud.google.com/template-version: {}
f:generateName: {}
f:labels:
.: {}
f:run: {}
f:ownerReferences:
.: {}
k:{"uid":"7ece9e6c-e252-43d9-b989-239d765d375b"}:
.: {}
f:apiVersion: {}
f:blockOwnerDeletion: {}
f:controller: {}
f:kind: {}
f:name: {}
f:uid: {}
f:spec:
f:containers:
k:{"name":"airflow-worker"}:
.: {}
f:args: {}
f:env:
.: {}
k:{"name":"AIRFLOW__CORE__FERNET_KEY"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:secretKeyRef:
.: {}
f:key: {}
f:name: {}
k:{"name":"AIRFLOW__CORE__SQL_ALCHEMY_CONN"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"AIRFLOW__WEBSERVER__BASE_URL"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"AIRFLOW_DATABASE_VERSION"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"AIRFLOW_HOME"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"AUTOGKE"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"C_FORCE_ROOT"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"CLOUDSDK_METRICS_ENVIRONMENT"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"COMPOSER_ENVIRONMENT"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"COMPOSER_GKE_LOCATION"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"COMPOSER_GKE_NAME"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"COMPOSER_GKE_ZONE"}:
.: {}
f:name: {}
k:{"name":"COMPOSER_LOCATION"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"COMPOSER_PYTHON_VERSION"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"COMPOSER_VERSION"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"COMPOSER_VERSIONED_NAMESPACE"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"DAGS_FOLDER"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"GCP_PROJECT"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"GCS_BUCKET"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"GCSFUSE_EXTRACTED"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"GRPC_POLL_STRATEGY"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"SQL_DATABASE"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"SQL_HOST"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"SQL_PASSWORD"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:secretKeyRef:
.: {}
f:key: {}
f:name: {}
k:{"name":"SQL_SUBNET"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"SQL_USER"}:
.: {}
f:name: {}
f:value: {}
f:image: {}
f:imagePullPolicy: {}
f:livenessProbe:
.: {}
f:exec:
.: {}
f:command: {}
f:failureThreshold: {}
f:initialDelaySeconds: {}
f:periodSeconds: {}
f:successThreshold: {}
f:timeoutSeconds: {}
f:name: {}
f:ports:
.: {}
k:{"containerPort":8793,"protocol":"TCP"}:
.: {}
f:containerPort: {}
f:protocol: {}
f:resources:
.: {}
f:limits:
.: {}
f:cpu: {}
f:ephemeral-storage: {}
f:memory: {}
f:requests:
.: {}
f:cpu: {}
f:ephemeral-storage: {}
f:memory: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:volumeMounts:
.: {}
k:{"mountPath":"/etc/airflow/airflow_cfg"}:
.: {}
f:mountPath: {}
f:name: {}
k:{"mountPath":"/home/airflow/container-comms"}:
.: {}
f:mountPath: {}
f:name: {}
k:{"mountPath":"/home/airflow/gcs"}:
.: {}
f:mountPath: {}
f:name: {}
k:{"mountPath":"/home/airflow/gcsfuse"}:
.: {}
f:mountPath: {}
f:mountPropagation: {}
f:name: {}
k:{"name":"gcs-syncd"}:
.: {}
f:args: {}
f:env:
.: {}
k:{"name":"AUTOGKE"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"COMPOSER_GKE_LOCATION"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"COMPOSER_GKE_NAME"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"COMPOSER_GKE_ZONE"}:
.: {}
f:name: {}
k:{"name":"GCS_BUCKET"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"SQL_DATABASE"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"SQL_PASSWORD"}:
.: {}
f:name: {}
f:valueFrom:
.: {}
f:secretKeyRef:
.: {}
f:key: {}
f:name: {}
k:{"name":"SQL_SUBNET"}:
.: {}
f:name: {}
f:value: {}
k:{"name":"SQL_USER"}:
.: {}
f:name: {}
f:value: {}
f:image: {}
f:imagePullPolicy: {}
f:name: {}
f:resources:
.: {}
f:limits:
.: {}
f:cpu: {}
f:ephemeral-storage: {}
f:memory: {}
f:requests:
.: {}
f:cpu: {}
f:ephemeral-storage: {}
f:memory: {}
f:terminationMessagePath: {}
f:terminationMessagePolicy: {}
f:volumeMounts:
.: {}
k:{"mountPath":"/home/airflow/gcs"}:
.: {}
f:mountPath: {}
f:name: {}
f:dnsPolicy: {}
f:enableServiceLinks: {}
f:restartPolicy: {}
f:schedulerName: {}
f:securityContext: {}
f:terminationGracePeriodSeconds: {}
f:volumes:
.: {}
k:{"name":"airflow-config"}:
.: {}
f:configMap:
.: {}
f:defaultMode: {}
f:name: {}
f:name: {}
k:{"name":"container-comms"}:
.: {}
f:hostPath:
.: {}
f:path: {}
f:type: {}
f:name: {}
k:{"name":"gcsdir"}:
.: {}
f:emptyDir: {}
f:name: {}
k:{"name":"gcsfuse"}:
.: {}
f:hostPath:
.: {}
f:path: {}
f:type: {}
f:name: {}
manager: manager
operation: Update
time: "2022-01-25T17:02:00Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
f:composer.cloud.google.com/running-task: {}
manager: OpenAPI-Generator
operation: Update
time: "2022-01-25T17:02:33Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:status:
f:message: {}
f:phase: {}
f:reason: {}
f:startTime: {}
manager: kubelet
operation: Update
time: "2022-01-26T10:01:06Z"
name: airflow-worker-nlzdq
namespace: composer-2-0-0-airflow-2-1-4-2884adda
ownerReferences:
- apiVersion: composer.cloud.google.com/v1beta1
blockOwnerDeletion: true
controller: true
kind: AirflowWorkerSet
name: airflow-worker
uid: 7ece9e6c-e252-43d9-b989-239d765d375b
resourceVersion: "27047147"
uid: b904b45d-0001-4dc4-8f6c-88420d286973
spec:
containers:
- args:
- worker
env:
- name: GRPC_POLL_STRATEGY
value: epoll1
- name: CLOUDSDK_METRICS_ENVIRONMENT
value: 2.1.4+composer
- name: GCS_BUCKET
value: us-east4-XYZ-gcp-airflow-p-2884adda-bucket
- name: GCP_PROJECT
value: XYZ-259410
- name: COMPOSER_LOCATION
value: us-east4
- name: COMPOSER_GKE_ZONE
- name: COMPOSER_GKE_NAME
value: us-east4-XYZ-gcp-airflow-p-2884adda-gke
- name: AUTOGKE
value: "TRUE"
- name: COMPOSER_GKE_LOCATION
value: us-east4
- name: COMPOSER_PYTHON_VERSION
value: "3"
- name: COMPOSER_ENVIRONMENT
value: XYZ-gcp-airflow-prod-shared-vpc
- name: COMPOSER_VERSIONED_NAMESPACE
value: composer-2-0-0-airflow-2-1-4-2884adda
- name: AIRFLOW_HOME
value: /etc/airflow
- name: DAGS_FOLDER
value: /home/airflow/gcs/dags
- name: SQL_HOST
      value: airflow-sqlproxy-service.composer-system.svc.cluster.local
- name: SQL_DATABASE
value: composer-2-0-0-airflow-2-1-4-2884adda
- name: SQL_USER
value: root
- name: SQL_PASSWORD
valueFrom:
secretKeyRef:
key: sql_password
name: airflow-secrets
- name: GCSFUSE_EXTRACTED
value: "TRUE"
- name: COMPOSER_VERSION
value: 2.0.0
- name: AIRFLOW__WEBSERVER__BASE_URL
value: https://XYZ-dot-us-east4.composer.googleusercontent.com
- name: SQL_SUBNET
value: 172.16.2.0/29
- name: AIRFLOW_DATABASE_VERSION
value: POSTGRES_13
- name: AIRFLOW__CORE__SQL_ALCHEMY_CONN
value: postgresql+psycopg2://$(SQL_USER):$(SQL_PASSWORD)@airflow-sqlproxy-service.composer-system.svc.cluster.local:3306/$(SQL_DATABASE)
- name: AIRFLOW__CORE__FERNET_KEY
valueFrom:
secretKeyRef:
key: fernet_key
name: airflow-secrets
- name: C_FORCE_ROOT
value: "TRUE"
image: us-east4-docker.pkg.dev/XYZ-259410/composer-images-us-east4-XYZ-gcp-airflow-p-2884adda-gke/e77f16de-356f-40b5-b42b-f5482f02f793
imagePullPolicy: IfNotPresent
livenessProbe:
exec:
command:
- /var/local/worker_checker.py
failureThreshold: 6
initialDelaySeconds: 120
periodSeconds: 90
successThreshold: 1
timeoutSeconds: 30
name: airflow-worker
ports:
- containerPort: 8793
protocol: TCP
resources:
limits:
cpu: 1700m
ephemeral-storage: 4Gi
memory: 6963Mi
requests:
cpu: 1700m
ephemeral-storage: 2Gi
memory: 6963Mi
securityContext:
capabilities:
drop:
- NET_RAW
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/airflow/airflow_cfg
name: airflow-config
- mountPath: /home/airflow/gcs
name: gcsdir
- mountPath: /home/airflow/container-comms
name: container-comms
- mountPath: /home/airflow/gcsfuse
mountPropagation: HostToContainer
name: gcsfuse
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-59j75
readOnly: true
- args:
- /home/airflow/gcs
env:
- name: GCS_BUCKET
value: us-east4-XYZ-gcp-airflow-p-2884adda-bucket
- name: SQL_DATABASE
value: composer-2-0-0-airflow-2-1-4-2884adda
- name: SQL_USER
value: root
- name: SQL_PASSWORD
valueFrom:
secretKeyRef:
key: sql_password
name: airflow-secrets
- name: COMPOSER_GKE_ZONE
- name: COMPOSER_GKE_NAME
value: us-east4-XYZ-gcp-airflow-p-2884adda-gke
- name: SQL_SUBNET
value: 172.16.2.0/29
- name: AUTOGKE
value: "TRUE"
- name: COMPOSER_GKE_LOCATION
value: us-east4
image: us-docker.pkg.dev/cloud-airflow-releaser/gcs-syncd/gcs-syncd:cloud_composer_service_2021-12-08-RC1
imagePullPolicy: IfNotPresent
name: gcs-syncd
resources:
limits:
cpu: 300m
ephemeral-storage: 102Mi
memory: 1228Mi
requests:
cpu: 300m
ephemeral-storage: 102Mi
memory: 1228Mi
securityContext:
capabilities:
drop:
- NET_RAW
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /home/airflow/gcs
name: gcsdir
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-59j75
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: gk3-us-east4-XYZ-gcp-ai-nap-meqa92wv-4be2b93f-5kdv
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: gke.io/optimize-utilization-scheduler
securityContext:
seccompProfile:
type: RuntimeDefault
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- configMap:
defaultMode: 420
name: airflow-configmap
name: airflow-config
- emptyDir: {}
name: gcsdir
- hostPath:
path: /var/composer/gcs_mount_status
type: ""
name: container-comms
- hostPath:
path: /var/composer/gcs_mount
type: ""
name: gcsfuse
- name: kube-api-access-59j75
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
status:
message: 'Container airflow-worker exceeded its local ephemeral
storage limit "10137Mi". '
phase: Failed
reason: Evicted
startTime: "2022-01-25T17:02:01Z"
</code></pre>
| <p>Check your <strong>resource quota</strong></p>
<pre><code>apiVersion: v1
kind: ResourceQuota
metadata:
name: compute-resources
spec:
hard:
requests.ephemeral-storage: 2Gi
limits.ephemeral-storage: 4Gi
</code></pre>
<p>There is a chance that a <strong>resource quota</strong> like this is set, and that is why the pod <code>exceeded its local ephemeral storage limit</code>.</p>
<p>Read more about it : <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#setting-requests-and-limits-for-local-ephemeral-storage" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#setting-requests-and-limits-for-local-ephemeral-storage</a></p>
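<p>A quick way to check whether such a quota (or a limit range) is applied is to inspect the namespace from your pod manifest, for example:</p>
<pre><code>kubectl get resourcequota,limitrange -n composer-2-0-0-airflow-2-1-4-2884adda
kubectl describe resourcequota -n composer-2-0-0-airflow-2-1-4-2884adda
</code></pre>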
<p>You can also set the resource in YAML with</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: frontend
spec:
containers:
- name: app
image: images.my-company.example/app:v4
resources:
requests:
ephemeral-storage: "2Gi"
limits:
ephemeral-storage: "4Gi"
</code></pre>
|
<p>I have a k8s cluster with one master node (node one) and three worker nodes(node two, three, and four). Is there any way to change node two to the master node and change node one to the worker node? In other words, switch the role of node one and node two.</p>
<p>Thanks</p>
| <p><code>Is there any way to change node two to the master node and change node one to the worker node?</code></p>
<p>The K8s control plane (aka master) is made up of the components you can see with <code>kubectl get componentstatuses</code>. This is not like <code>docker node promote/demote</code>; there is no single command to switch roles. In your case, you need to delete node 2 from the cluster and re-join it as a <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/#steps-for-the-rest-of-the-control-plane-nodes" rel="nofollow noreferrer">control plane</a> node, then delete node 1 and re-join it as a worker node.</p>
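<p>A minimal sketch of that flow, assuming the cluster was built with kubeadm (node names, endpoint, token, hash and certificate key are placeholders):</p>
<pre><code># on the existing control plane: drain and remove node two
kubectl drain node-two --ignore-daemonsets --delete-emptydir-data
kubectl delete node node-two

# on node two: reset it, then re-join as a control-plane node
# (the certificate key comes from 'kubeadm init phase upload-certs --upload-certs' on an existing control plane)
sudo kubeadm reset
sudo kubeadm join <api-server-endpoint>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <cert-key>
</code></pre>
<p>Repeat the same drain/reset/join steps for node 1, this time joining without the <code>--control-plane</code> and <code>--certificate-key</code> flags so it comes back as a worker.</p>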
|
<p>I am using K8S</p>
<p>I want to capture a string that is the result of <code>kubectl exec -it ... -c </code></p>
<p>after the <code>-c</code> option there is a string.</p>
<p>How can I pass a string with double quotes inside?</p>
<p>The following example doesn't work properly.</p>
<pre><code>x="$(kubectl exec -it mysql-pod -- /bin/sh -c \"mysql -uroot -p12345
-e 'show databases'\" 2>/dev/null)"
echo $x
</code></pre>
<p>Thanks.</p>
| <ul>
<li>when only a command needs to be executed on a pod, the <code>-it</code> option is not required, as it stands for attaching an interactive terminal</li>
<li>when <code>mysql</code> is itself an executable command, there is no need to use <code>/bin/sh -c</code></li>
<li>there is no need to encapsulate the whole command in <code>" "</code></li>
</ul>
<p>So try the following:</p>
<pre><code>x=$(kubectl exec mysql-pod -- mysql -uroot -p12345 -e 'show databases ;' 2>/dev/null)
echo $x
</code></pre>
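<p>If you ever do need to go through <code>/bin/sh -c</code> (for example to use shell features like pipes inside the pod), single-quote the outer command so the inner double quotes survive, e.g.:</p>
<pre><code>x=$(kubectl exec mysql-pod -- /bin/sh -c 'mysql -uroot -p12345 -e "show databases;"' 2>/dev/null)
echo "$x"
</code></pre>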
|
<p>I am trying to add a zip file to our ConfigMap because the files exceed the 1 MB limit. I deploy our charts with Helm and was looking into <code>binaryData</code>, but cannot get it to work properly. I wanted to see if anyone had suggestions on how I could integrate this with Helm so that, when the job is finished, the ConfigMap is deleted along with it.</p>
<p>Here is my configmap:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "db-migration.fullname" . }}
labels:
app: {{ template "db-migration.name" . }}
chart: {{ template "db-migration.chart" . }}
draft: {{ .Values.draft | default "draft-app" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
binaryData:
{{ .Files.Get "migrations.zip" | b64enc }}
immutable: true
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "db-migration.fullname" . }}-test
labels:
app: {{ template "db-migration.name" . }}
chart: {{ template "db-migration.chart" . }}
draft: {{ .Values.draft | default "draft-app" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
binaryData:
{{ .Files.Get "test.zip" | b64enc }}
immutable: true
</code></pre>
<p>The two zip files live inside the charts and I have a command to unzip them and then run the migration afterwards</p>
| <p><code>binaryData</code> expects a <em>map</em>, but you are passing a <em>string</em> to it.<br />
When debugging the template we can see:</p>
<pre class="lang-text prettyprint-override"><code>Error: INSTALLATION FAILED: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(ConfigMap.binaryData): invalid type for io.k8s.api.core.v1.ConfigMap.binaryData: got "string", expected "map"
</code></pre>
<p>The way to fix this is to add a key before <code>{{ .Files.Get "test.zip" | b64enc }}</code>.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "db-migration.fullname" . }}
labels:
app: {{ template "db-migration.name" . }}
chart: {{ template "db-migration.chart" . }}
draft: {{ .Values.draft | default "draft-app" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
binaryData:
migrations: {{ .Files.Get "migrations.zip" | b64enc }}
immutable: true
---
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ template "db-migration.fullname" . }}-test
labels:
app: {{ template "db-migration.name" . }}
chart: {{ template "db-migration.chart" . }}
draft: {{ .Values.draft | default "draft-app" }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
binaryData:
test: {{ .Files.Get "test.zip" | b64enc }}
immutable: true
</code></pre>
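<p>Regarding cleaning the ConfigMaps up once the migration Job has finished: one option (assuming the migration Job is run as a Helm hook) is to mark the Job and both ConfigMaps as hooks with a delete policy, e.g.:</p>
<pre><code>metadata:
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
</code></pre>
<p>Helm then deletes the hook resources after they have run successfully, so the ConfigMaps don't linger in the cluster.</p>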
|
<p>I have read some tutorials on how to mount a volume in a container and run a script on the host/node directly. These are the examples given.</p>
<p><strong>DaemonSet pod spec</strong></p>
<pre><code> hostPID: true
nodeSelector:
cloud.google.com/gke-local-ssd: "true"
volumes:
- name: setup-script
configMap:
name: local-ssds-setup
- name: host-mount
hostPath:
path: /tmp/setup
initContainers:
- name: local-ssds-init
image: marketplace.gcr.io/google/ubuntu1804
securityContext:
privileged: true
volumeMounts:
- name: setup-script
mountPath: /tmp
- name: host-mount
mountPath: /host
command:
- /bin/bash
- -c
- |
set -e
set -x
# Copy setup script to the host
cp /tmp/setup.sh /host
# Copy wait script to the host
cp /tmp/wait.sh /host
# Wait for updates to complete
/usr/bin/nsenter -m/proc/1/ns/mnt -- chmod u+x /tmp/setup/wait.sh
# Give execute priv to script
/usr/bin/nsenter -m/proc/1/ns/mnt -- chmod u+x /tmp/setup/setup.sh
# Wait for Node updates to complete
/usr/bin/nsenter -m/proc/1/ns/mnt /tmp/setup/wait.sh
# If the /tmp folder is mounted on the host then it can run the script
/usr/bin/nsenter -m/proc/1/ns/mnt /tmp/setup/setup.sh
containers:
- image: "gcr.io/google-containers/pause:2.0"
name: pause
</code></pre>
<p>(There is a ConfigMap for composing the .sh files; I just skipped that.)</p>
<p>What does "/usr/bin/nsenter -m/proc/1/ns/mnt" mean? Is this a command to run something on the host? What is "/proc/1/ns/mnt"?</p>
| <p>Let's start from namespaces to understand this in detail:</p>
<p>Namespaces help isolate resources between processes. The kernel uses them to control which resources are allocated to which process, which provides strong isolation between the different containers running on a system.</p>
<p>Having said that, these access restrictions can also make things complicated. That is where the <code>nsenter</code> command comes in: it lets a process enter another process's namespaces, somewhat similar in spirit to the <code>sudo</code> command.
This command can give us access to the mount, UTS, IPC, network, PID, user, cgroup, and time namespaces.</p>
<p>The <code>-m</code> in your example is <code>--mount</code>, which enters the mount namespace specified by that file. <code>/proc/1/ns/mnt</code> is the mount namespace of PID 1, which (because the DaemonSet sets <code>hostPID: true</code>) is the host's init process, so the command that follows effectively runs with the host's filesystem view.</p>
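<p>As a small illustration (a hypothetical command, run from a privileged container with <code>hostPID: true</code> as in your DaemonSet), this pattern executes a program inside the host's mount namespace, as if you had run it on the node itself:</p>
<pre><code># prints the host's /etc/os-release rather than the container's, because PID 1 is the host init when hostPID is true
/usr/bin/nsenter -m/proc/1/ns/mnt -- cat /etc/os-release
</code></pre>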
|
<p>I can ssh into a jumphost from where I can access the k8s cluster.
There is a particular pod which has a UI exposed on port 4040; how can I view that in my local browser?</p>
<p>What I have figured out so far, by executing the below command</p>
<pre><code>kubectl port-forward podName 4040:4040
</code></pre>
<p>I can now access the UI (by text based browser) on jumphost</p>
| <p>Had to be done in two steps</p>
<p>1st run <code>kubectl port-forward podName 4040:4040</code> from jumpbox</p>
<p>2nd run <code>ssh -L 4040:localhost:4040 -i some_key.pem user@jumpbox-server</code> from the local machine</p>
<p>access <code>http://localhost:4040</code> using browser</p>
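<p>If you prefer a single command from your local machine, you can also start the port-forward over the same SSH session (using the same hypothetical names as above):</p>
<pre><code>ssh -i some_key.pem -L 4040:localhost:4040 user@jumpbox-server \
    kubectl port-forward podName 4040:4040
</code></pre>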
|
<p><strong>Background</strong></p>
<p>We have a need for messages to be processed in the order they were published. This is necessary to make sure all updates related to a particular entity are applied in order. Example: User updated their address 2 times in quick succession (could be on the UI OR a burst of messages from a batch job), we want to be able to apply Change 1 first and then Change 2 on top to make sure the data is correct. We achieved this by implementing a Hash Exchange and 20 "Single-Active Consumer" queues bound to this exchange. We listed all these queues in the micro-service config file and deployed it on a Kubernetes pod. This setup works well. We see a single consumer on each of the 20 sharded queues and messages are processed in the order they were published (for a given entity). Note that our microservice is NOT just processing messages from RabbitMQ. It is also serving REST requests from the users VIA the UI/Mobile.</p>
<p><strong>Problem Statement</strong></p>
<p>We are working on autoscaling our microservice when the current instance is getting heavily loaded. We want to spin up new instances to help share the load. Today when we do this, the new Kubernetes pod connects to the 20 sharded queues as expected but the consumers are in a wait state (Like they should be because our queue is defined as a Single-Active Consumer queue). Consequently, the messaging load stays on the original pod and the new pods are only helping with the web traffic but they cannot share the messaging load. What we want is a way to dynamically rebalance consumers between pods so that load is evenly spread across pods. Upon initial review, this might sound like a VERY narrow use case but if you dig a bit deeper it becomes clearer that this is a common use case.</p>
<ul>
<li>Being able to process messages in order is a very common requirement many people would have.</li>
<li>Microservices on Kubernetes and autoscaling Up and Down based on load is also a very common scenario.</li>
<li>Combining those two would naturally lead to having the ability to distribute messaging load across Pods is also something folks would be very interested in.</li>
</ul>
<p>Given the current feature set offered by RabbitMQ and Spring, I can't see a way to implement this without writing a lot of code and in some cases ending up with a suboptimal solution. I would like to know if any one ran into this problem and how did you solve it. Any ideas/inputs in this matter would be greatly appreciated.</p>
| <p>One solution might be for each instance to bind a "control" queue to a fanout exchange; then, during startup, each instance could publish a control message to that exchange to tell the other instances to stop and restart its listener containers.</p>
<p>You might need to add a random delay between each container start to get a fair distribution across the new collection of instances. Otherwise, one instance, or a subset, might get all the active consumers.</p>
|
<p>I have a self-hosted .NET application; basically, it runs for 12 hours. The entire functionality works as expected on my local machine.</p>
<p>Reference:
<a href="https://learn.microsoft.com/en-us/aspnet/core/fundamentals/host/hosted-services?view=aspnetcore-3.1&tabs=visual-studio" rel="nofollow noreferrer">Background tasks with hosted services in ASP.NET Core</a></p>
<p>Currently, we are using Kubernetes for deployment. During deployment it checks the health status via the liveness endpoint, and the deployment fails because there is no liveness endpoint, since this is a background-running application. To deploy the application in Kubernetes, a liveness endpoint is expected.</p>
<p>Is there any way we can serve some JSON data whenever the liveness endpoint is called, from the Docker side, for this background-running application?</p>
<p>Here is my docker code.</p>
<pre><code>FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build-env
WORKDIR /app
# Copy everything
COPY . ./
# Restore as distinct layers
RUN dotnet restore
# Build and publish a release
RUN dotnet publish -c Release -o out
# Build runtime image
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
COPY --from=build-env /app/out .
EXPOSE 8080/tcp
CMD ["dotnet", "DarkTesting.WorkerService.dll"]
</code></pre>
| <p>Personally I have found that deploying my background services as ASP.NET Core applications works well, because we can:</p>
<ol>
<li>Deploy /health endpoints using <a href="https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/health-checks?view=aspnetcore-6.0" rel="nofollow noreferrer">Healthchecks</a></li>
<li>Use IHostedService for running the background services</li>
</ol>
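<p>Once a <code>/health</code> endpoint is exposed, the Kubernetes side becomes a standard HTTP liveness probe (a sketch; the path and port depend on how you host the health checks):</p>
<pre><code>livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 30
</code></pre>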
|
<p>I have a 3-node ubuntu microk8s installation and it seems to be working ok. All 3 nodes are management nodes.</p>
<p>On only one of the nodes, I get an error message and associated delay whenever I use a <strong>kubectl</strong> command. It looks like this:</p>
<pre class="lang-sh prettyprint-override"><code>$ time kubectl get pods
I0324 03:49:44.270996 514696 request.go:665] Waited for 1.156689289s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:16443/apis/authentication.k8s.io/v1?timeout=32s
NAME READY STATUS RESTARTS AGE
sbnweb-5f9d9b977f-lw7t9 1/1 Running 1 (10h ago) 3d3h
shell-6cfccdbd47-zd2tn 1/1 Running 0 6h39m
real 0m6.558s
user 0m0.414s
sys 0m0.170s
</code></pre>
<p>The error message always shows a different URL each time. I tried looking up the error code (I0324) and haven't found anything useful.</p>
<p>The other two nodes don't show this behavior. No error message and completes the request in less than a second.</p>
<p>I'm new to k8s so I am not sure how to diagnose this kind of problem. Any hints on what to look for would be greatly appreciated.</p>
| <p>Here's a good <a href="https://jonnylangefeld.com/blog/the-kubernetes-discovery-cache-blessing-and-curse" rel="noreferrer">write-up</a> about the issue. In some cases, <code>rm -rf ~/.kube/cache</code> resolves it: the message comes from client-side throttling of the many API discovery requests kubectl makes when its discovery cache is stale.</p>
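<p>You can see the individual discovery requests being made (and throttled) by raising the client verbosity, for example:</p>
<pre><code>kubectl get pods -v=6
</code></pre>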
|
<p>I am running <code>Kafka Streams 3.1.0</code> on an <code>AWS OCP</code> cluster, and I am facing this error during a restart of the pod:</p>
<pre><code>10:33:18,529 [INFO ] Loaded Kafka Streams properties {topology.optimization=all, processing.guarantee=at_least_once, bootstrap.servers=PLAINTEXT://app-kafka-headless.app.svc.cluster.local:9092, state.dir=/var/data/state-store, metrics.recording.level=INFO, consumer.auto.offset.reset=earliest, cache.max.bytes.buffering=10485760, producer.compression.type=lz4, num.stream.threads=3, application.id=AppProcessor}
10:33:18,572 [ERROR] Error changing permissions for the directory /var/data/state-store
java.nio.file.FileSystemException: /var/data/state-store: Operation not permitted
at java.base/sun.nio.fs.UnixException.translateToIOException(Unknown Source)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(Unknown Source)
at java.base/sun.nio.fs.UnixException.rethrowAsIOException(Unknown Source)
at java.base/sun.nio.fs.UnixFileAttributeViews$Posix.setMode(Unknown Source)
at java.base/sun.nio.fs.UnixFileAttributeViews$Posix.setPermissions(Unknown Source)
at java.base/java.nio.file.Files.setPosixFilePermissions(Unknown Source)
at org.apache.kafka.streams.processor.internals.StateDirectory.configurePermissions(StateDirectory.java:154)
at org.apache.kafka.streams.processor.internals.StateDirectory.<init>(StateDirectory.java:144)
at org.apache.kafka.streams.KafkaStreams.<init>(KafkaStreams.java:867)
at org.apache.kafka.streams.KafkaStreams.<init>(KafkaStreams.java:851)
at org.apache.kafka.streams.KafkaStreams.<init>(KafkaStreams.java:821)
at org.apache.kafka.streams.KafkaStreams.<init>(KafkaStreams.java:733)
at com.xyz.app.kafka.streams.AbstractProcessing.run(AbstractProcessing.java:54)
at com.xyz.app.kafka.streams.AppProcessor.main(AppProcessor.java:97)
10:33:18,964 [INFO ] Topologies:
Sub-topology: 0
Source: app-stream (topics: [app-app-stream])
--> KSTREAM-AGGREGATE-0000000002
Processor: KSTREAM-AGGREGATE-0000000002 (stores: [KSTREAM-AGGREGATE-STATE-STORE-0000000001])
--> none
<-- app-stream
10:33:18,991 [WARN ] stream-thread [main] Failed to delete state store directory of /var/data/state-store/AppProcessor for it is not empty
</code></pre>
<p>On the OCP cluster, the user running the app is provided by the cluster, and the state store is provided by a persistent volume (allowing the pod to restart with the same context), so the <code>/var/data/state-store/</code> folder has the following permissions <code>drwxrwsr-x. (u:root g:1001030000)</code>:</p>
<pre><code>1001030000@app-processor-0:/$ ls -al /var/data/state-store/
total 24
drwxrwsr-x. 4 root 1001030000 4096 Mar 21 10:43 .
drwxr-xr-x. 3 root root 25 Mar 23 11:04 ..
drwxr-x---. 2 1001030000 1001030000 4096 Mar 23 11:04 AppProcessor
drwxrws---. 2 root 1001030000 16384 Mar 21 10:36 lost+found
1001030000@app-processor-0:/$ chmod 750 /var/data/state-store/
chmod: changing permissions of '/var/data/state-store/': Operation not permitted
</code></pre>
<p>POD manifest relevant parts are:</p>
<pre><code>spec:
containers:
- name: app-processor
volumeMounts:
- mountPath: /var/data/state-store
name: data
securityContext:
capabilities:
drop:
- KILL
- MKNOD
- SETGID
- SETUID
securityContext:
fsGroup: 1001030000
runAsUser: 1001030000
seLinuxOptions:
level: s0:c32,c19
volumes:
- name: data
persistentVolumeClaim:
claimName: data-app-processor-0
</code></pre>
<p>How to handle that ?
Should we use a <code>subPath</code> on <code>volumeMount</code> ?</p>
<p>Thanks for your insights.</p>
| <p>As suggested, the fix I found was to set a <code>subPath</code> below the mountPath:</p>
<p>Here is the relevant part of helm template used:</p>
<pre><code>spec:
containers:
- name: app-processor
volumeMounts:
- name: data
mountPath: {{ dir .Values.streams.state_dir | default "/var/data/" }}
subPath: {{ base .Values.streams.state_dir | default "state-store" }}
</code></pre>
<p>Where <code>.Values.streams.state_dir</code> is mapped to stream property <code>state.dir</code>.
Note this value is mandatory, and must be initialized in the values.</p>
<p>In that case the <code>state-store</code> directory is created by <code>securityContext.runAsUser</code> user, instead of root, so the <code>org.apache.kafka.streams.processor.internals.StateDirectory</code> class can enforce the permissions.</p>
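<p>For example, with <code>streams.state_dir=/var/data/state-store</code> (the value from the original config), the template above renders to the following mount:</p>
<pre><code>volumeMounts:
  - name: data
    mountPath: /var/data
    subPath: state-store
</code></pre>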
|
<p>I am taking a helm chart class and the 1st lab creates a pod, service and ingress. I am relatively new to k8s and I am running on minikube. The pod and service get created without issue; however the ingress.yaml file gives the following error:</p>
<p><strong>unable to recognize "ingress.yaml": no matches for kind "Ingress" in version "extensions/v1beta1</strong></p>
<p>I am guessing something is obsolete in the ingress.yaml file but have no idea how to fix it. Here's the class repo:</p>
<pre><code>https://github.com/phcollignon/helm3
</code></pre>
<p>here's the pod frontend.yaml :</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend
spec:
replicas: 1
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
spec:
containers:
- image: phico/frontend:1.0
imagePullPolicy: Always
name: frontend
ports:
- name: frontend
containerPort: 4200
</code></pre>
<p>here's the frontend_service.yaml :</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
name: frontend
name: frontend
spec:
ports:
- protocol: "TCP"
port: 80
targetPort: 4200
selector:
app: frontend
</code></pre>
<p>Here's the problem file ingress.yaml :</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: guestbook-ingress
spec:
rules:
- host: frontend.minikube.local
http:
paths:
- path: /
backend:
serviceName: frontend
servicePort: 80
- host: backend.minikube.local
http:
paths:
- path: /
backend:
serviceName: backend
servicePort: 80%
</code></pre>
<p>Here's minikube version (kubectrl version) :</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.5", GitCommit:"5c99e2ac2ff9a3c549d9ca665e7bc05a3e18f07e", GitTreeState:"clean", BuildDate:"2021-12-16T08:38:33Z", GoVersion:"go1.16.12", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:34:54Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Any help is very much appreciated.</p>
<p>I changed the ingress.yaml file to use
<code>apiVersion: networking.k8s.io/v1:</code></p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: guestbook-ingress
spec:
rules:
- host: frontend.minikube.local
http:
paths:
- path: /
backend:
service:
name: frontend
port:
number: 80
- host: backend.minikube.local
paths:
- path: /
pathType: Prefix
backend:
service:
name: backend
port:
number: 80
</code></pre>
<p>now I am getting an error:</p>
<p><strong>error: error parsing ingress.yaml: error converting YAML to JSON: yaml: line 17: mapping values are not allowed in this context</strong></p>
<p>line 17 is the second "paths:" line.</p>
<p>Again, any help appreciated.</p>
| <p>The Ingress <code>apiVersion: extensions/v1beta1</code> has been deprecated and removed (it was removed in Kubernetes v1.22, and your server is v1.23). You should update it to <code>apiVersion: networking.k8s.io/v1</code>.</p>
<p>For the second error: the second host entry is missing the <code>http:</code> key, which is why YAML parsing fails at line 17 (the first path is also missing <code>pathType</code>, which <code>networking.k8s.io/v1</code> requires). A corrected manifest:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: guestbook-ingress
spec:
rules:
- host: frontend.minikube.local
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: frontend
port:
number: 80
- host: backend.minikube.local
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: backend
port:
number: 80
</code></pre>
|
<p>I have the Go code below, which prints my Kubernetes deployment metadata:</p>
<pre class="lang-golang prettyprint-override"><code>package test
import (
"context"
"fmt"
"os"
"testing"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/client-go/kubernetes"
"k8s.io/client-go/tools/clientcmd"
)
func TestEksStandaloneModule(t *testing.T) {
namespace := fmt.Sprintf("%v-%v", "project", "dit")
kubeconfig := os.Getenv("HOME") + "/KUBECONFIG"
clientCfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
if err != nil {
t.Error(err)
}
clientset, err := kubernetes.NewForConfig(clientCfg)
appDeployment, err := clientset.AppsV1().Deployments(namespace).Get(context.TODO(), "deploymentname", v1.GetOptions{})
fmt.Println(appDeployment)
}
</code></pre>
<p>Can anyone please help me with a function that prints ingress metadata?
I am new to Go.</p>
| <p>I was able to make it work by</p>
<pre><code>appIngress, err := clientset.NetworkingV1().Ingresses(namespace).Get(context.TODO(), "helloworldspringbootssl-ap145385", v1.GetOptions{})
fmt.Println(appIngress)
</code></pre>
|
<p>TL/DR:</p>
<ol>
<li>I don't know if I'm using the asynchronous programming features of
C# and Blazor correctly.</li>
<li>Even though things technically work, I'd like some guidance if I'm doing things correctly.</li>
<li>Also, I'm having issues trying to get my "loading spinner" to work. What am I doing wrong?</li>
</ol>
<p>I'd like some guidance of my code is doing things the correct way.</p>
<p>I'm currently trying to use KubernetesClient with a Blazor webapp to interact with my kubernetes cluster.</p>
<p>As a test, I've tried to list nodes in a cluster, asynchronously. Things appear to work, but I'm unsure if I'm doing this correctly. Please see the code below:</p>
<pre><code>@page "/kclient"
@using k8s
<PageTitle>Kubernetes Client Test</PageTitle>
<h1>Kubernetes Client Test</h1>
<br />
<button class="btn btn-primary" @onclick="@GetNodesAsync">Refresh Node List</button>
<br /><br />
<p>LOADING = @spin.ToString()</p>
<label>Node list:</label>
@if (spin)
{
<div class="spinner"></div>
}else
{
<ul>
@if (MyNodes == null || MyNodes.Count == 0)
{
<li>No Nodes. Please try to refresh the node list.</li>
}else
{
@foreach(string node in MyNodes)
{
<li>@node</li>
}
}
</ul>
}
@code {
public bool spin = false;
public IKubernetes client { get; set; }
public List<string> MyNodes { get; set; }
protected override void OnInitialized()
{
spin = false;
KubernetesClientConfiguration config = KubernetesClientConfiguration.BuildConfigFromConfigFile("C:\\Users\\DevAdmin\\.kube\\config");
client = new Kubernetes(config);
System.Console.WriteLine("Base URI: " + client.BaseUri);
}
async Task GetNodesAsync()
{
spin = true;
Task.Delay(1500).Wait();
await InvokeAsync(GetNodes);
spin = false;
await Task.CompletedTask;
}
public async void GetNodes()
{
MyNodes = null;
MyNodes = new List<string>();
System.Console.WriteLine("=== TRYING TO GET ALL NODES! ===");
var nodeList = await client.ListNodeAsync();
if (nodeList.Items.Count == 0)
{
Console.WriteLine("Empty! There are no nodes!");
}
else
{
foreach (var item in nodeList.Items)
{
//Console.WriteLine(item.Metadata.Name);
MyNodes.Add(item.Metadata.Name);
}
}
StateHasChanged();
}
}
</code></pre>
<p>The code above produces the page below:
<a href="https://i.stack.imgur.com/Tlmpv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Tlmpv.png" alt="enter image description here" /></a></p>
<p>When you click the "Refresh Node List" button, it will use the KubernetesClient library to get a list of Kubernetes nodes. Because I'm still new to .NET, Blazor, and asynchronous programming, I'm unsure if the way I'm retrieving the list of nodes and then showing it in a Blazor (Razor?) page is correct.</p>
<p>Also I've been trying to show a loading spinner while waiting for the client to retrieve data, but I've been unable to get that working, and I'm sure it's because I'm not using async/tasks correctly.</p>
<p>Any guidance on my usage of the asynchronous programming features of C#/.NET, or anything else with the code I've provided, would be greatly appreciated.</p>
<p>Thanks in advance.</p>
| <p>I can spot a few issues. Always avoid <code>async void</code>.</p>
<pre class="lang-cs prettyprint-override"><code>async Task GetNodesAsync()
{
spin = true;
//Task.Delay(1500).Wait(); -- .Wait() blocks the UI
await Task.Delay(1500);
//await InvokeAsync(GetNodes); -- this won't run on another Thread
await GetNodes();
spin = false;
//await Task.CompletedTask;
}
//public async void GetNodes()
public async Task GetNodes()
{
... // as before
//StateHasChanged(); -- not needed
}
</code></pre>
|
<p>I'm fairly new to Kubernetes and I have played around with it for a few days now to get a feeling for it. Trying out to set up an Nginx Ingress controller on the google-cloud platform following <a href="https://cloud.google.com/community/tutorials/nginx-ingress-gke" rel="nofollow noreferrer">this guide</a>, I was able to set everything up as written there - no problems, I got to see the hello-app output.</p>
<p>However, when I tried replicating this in a slightly different way, I encountered a weird behavior that I am not able to resolve. Instead of using the image <code>--image=gcr.io/google-samples/hello-app:1.0</code> (as done in the tutorial) I wanted to deploy a standard <code>nginx</code> container with a custom index page to see if I understood stuff correctly. As far as I can tell, all the steps should be the same except for the exposed port: While the <code>hello-app</code> exposes port <code>8080</code> the standard port for the <code>nginx</code> container is <code>80</code>. So, naively, I thought exposing (i.e., creating a service) with this altered command should do the trick:</p>
<pre><code>kubectl expose deployment hello-app --port=8080 --target-port=80
</code></pre>
<p>where instead of having <code>target-port=8080</code> as for the hello-app, I put <code>target-port=80</code>. As far as I can tell, all other things should stay the same, right? Anyway, this does not work, and when I try to access the page I get a "404 - Not Found" although the container is definitely running and serving the index page (I checked by using port forwarding from Google Cloud, which apparently makes the page directly accessible for dev purposes). In fact, I also tried several other combinations of ports (although I believe the above one should be the correct one) to no avail. Can anyone explain to me why the routing does not work here?</p>
| <p>If you look at the tutorial, the ingress configuration uses <code>path: "/hello"</code>:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-resource
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- host: "34.122.88.204.nip.io"
http:
paths:
- pathType: Prefix
path: "/hello"
backend:
service:
name: hello-app
port:
number: 8080
</code></pre>
<p>You might have updated the <strong>port</strong> number and <strong>service</strong> name config, but the path is still <code>/hello</code>, which means your request reaches the Nginx container asking for <code>/hello</code>; Nginx can't find a page at that path, so it gives you a <strong>404</strong>.</p>
<blockquote>
<p>You hit endpoint IP/hello (goes to the Nginx ingress controller) -->
the controller matches path <code>/hello</code> and forwards the request to the service -->
hello-app (the service forwards the request to the pods) --> Nginx pod (it
doesn't have anything at path <code>/hello</code>, so 404)</p>
</blockquote>
<p>The <strong>404</strong> is written by the <strong>Nginx</strong> side; in your case it comes either from the <strong>Nginx ingress controller</strong> or from the nginx container (pod) itself.</p>
<p>So try your ingress config with <strong>path</strong> set to <code>path: "/"</code> instead of <code>/hello</code> and hit the endpoint again; you should see the output from Nginx.</p>
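<p>Alternatively, if you want to keep <code>/hello</code> on the ingress while still serving the container's root index page, the ingress-nginx rewrite annotation is a common way to do that (a sketch for this simple single-path case):</p>
<pre><code>metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
</code></pre>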
|
<p>I'm working through <a href="https://rads.stackoverflow.com/amzn/click/com/1617293725" rel="nofollow noreferrer" rel="nofollow noreferrer">Kubernetes in Action</a> (copyright 2018), and at least one of the examples is out-of-date with respect to current versions of <code>kubectl</code>.</p>
<p>Currently I'm stuck in section 2.3 on just trying to demo a simple web-server docker container ("kubia"):</p>
<pre><code>kubectl run kubia --image=Dave/kubia --port=8080 --generator=run/v1
</code></pre>
<p>the <a href="https://stackoverflow.com/a/53010591/1497199"><code>--generator</code> option has been removed</a> from current versions of <code>kubectl</code>. What command(s) achieve the same end in the current version of <code>kubectl</code>?</p>
<p>Note: I'm literally just 2 chapters into learning about Kubernetes, so I don't really know what a deployment (or anything else) is (so the official Kubernetes documentation doesn't help); I just need the simplest way to verify that I can, in fact, run this container in my minikube "cluster".</p>
| <p>In short, you can use the following imperative commands to create pods and deployments, similar to the commands mentioned in that book:</p>
<ul>
<li>To create a pod named <code>kubia</code> with image <code>Dave/kubia</code></li>
</ul>
<pre><code> kubectl run kubia --image=Dave/kubia --port=8080
</code></pre>
<ul>
<li>To create a deployment named <code>kubia</code> with image <code>Dave/kubia</code></li>
</ul>
<pre><code>kubectl create deployment kubia --image=Dave/kubia --port=8080
</code></pre>
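<p>If you also want to hit the web server (the service name here simply mirrors the deployment), one way is to expose the deployment and open it via minikube:</p>
<pre><code>kubectl expose deployment kubia --type=LoadBalancer --port=8080
minikube service kubia
</code></pre>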
|
<p>I'm coming from the PHP/Python/JS environment, where it's standard to run multiple instances of a web application as separate processes, and asynchronous tasks like queue processing as separate scripts.</p>
<p><strong>eg. in the k8s environment, there would be</strong></p>
<ul>
<li>N instances of <strong>web server only</strong>, each running in separate pod</li>
<li>For each queue, <strong>dynamic number of consumers, each in separate pod</strong></li>
<li><strong>Cron scheduling using k8s crontab</strong> functionality, leaving the scheduling process to k8s</li>
</ul>
<p>Such an approach fits the cloud well, where the workload can be scheduled across either a small number of powerful machines or <strong>a lot of less powerful machines</strong>, and it allows very fine-grained control of auto scaling (based on the number of messages in a specific queue, for example).</p>
<p>Also, there is a clear separation between the developer and DevOps responsibility.</p>
<p>Recently, I tried to replicate the same setup with <strong>Java Spring Boot</strong> application and failed miserably.</p>
<p>Even though Java frameworks say that they are "cloud native", it seems like all the documentation is still built around monolith application, which handles all consumers and cron scheduling in separate threads.</p>
<p>Clear answer to this problem is microservices but that's way out of scope.</p>
<p><strong>What I need is to deploy separate parts of application (like 1 queue listener only) per pod in the cloud yet keep the monolith code architecture.</strong></p>
<p>So, the question is:</p>
<p>How do I design my Spring Boot application so that:</p>
<ul>
<li>I can run the webserver separately without queue listeners and scheduled jobs</li>
<li>I can run one queue listener per pod in the k8s</li>
<li>I can use k8s cron scheduling instead of App level Spring scheduler?</li>
</ul>
<p>I found several ways to achieve something like this but I expect there must be some "more or less standard way".</p>
<hr />
<p>Alternative solutions that came to my mind:</p>
<ul>
<li>Having separate module with separate Application definition so that each "command" is built separately</li>
<li>Using Spring Profiles to instantiate specific services only according to some environment variables</li>
<li>Implement custom command line runner which would parse command name/queue name and dynamically create appropriate services (this seems to be the most similar approach to the way how it's done in "scripting languages")</li>
</ul>
<p>What I mainly want to achieve with such setup is:</p>
<ul>
<li>To be able to run the application on lot of weak HW instead of having 1 machine with 32 cpu cores</li>
<li>Easier scaling per workload</li>
<li>Removing one layer from already complex monitoring infrastructure (k8s already allows very fine resource monitoring, application level task scheduling and parallelism makes this way more difficult)</li>
</ul>
<p>Am I missing something, or is it just not standard to write Java server apps this way?
Thank you!</p>
| <blockquote>
<p>What I need is to deploy separate parts of application (like 1 queue listener only) per pod in the cloud yet keep the monolith code architecture.</p>
</blockquote>
<p>I agree with <a href="https://stackoverflow.com/a/71598155/2446208">@jacky-neo's answer</a> in terms of the appropriate architecture/best practice, but that may require you to break up your monolithic application.</p>
<p>To solve this without breaking up your monolithic application, deploy multiple instances of your monolith to Kubernetes each as a separate <code>Deployment</code>. Each deployment can have its own configuration. Then you can <a href="https://www.baeldung.com/spring-feature-flags" rel="nofollow noreferrer">utilize feature flags</a> and define the environment variables for each deployment based on the functionality you would like to enable.</p>
<p>In <code>application.properties</code>:</p>
<pre><code>myapp.queue.listener.enabled=${QUEUE_LISTENER_ENABLED:false}
</code></pre>
<p>In your <code>Deployment</code> for the queue listener, enable the feature flag:</p>
<pre class="lang-yaml prettyprint-override"><code>env:
- name: 'QUEUE_LISTENER_ENABLED'
value: 'true'
</code></pre>
<p>You would then just need to configure your monolithic application to use this <code>myapp.queue.listener.enabled</code> property and only enable the queue listener when the property is set to <code>true</code>.</p>
<p>Similarly, you could also apply this logic to the Spring profile to only run certain features in your app based on the profile defined in your <code>ConfigMap</code>.</p>
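<p>As a rough sketch of the profile-based variant (the ConfigMap name and profile name here are assumptions, not something from your setup):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-listener-config
data:
  SPRING_PROFILES_ACTIVE: "queue-listener"
</code></pre>
<p>Each <code>Deployment</code> would then reference its own ConfigMap via <code>envFrom</code>/<code>configMapRef</code>, so the same Spring Boot image starts with a different active profile per deployment.</p>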
<p><a href="https://www.baeldung.com/spring-feature-flags" rel="nofollow noreferrer">This Baeldung article</a> explains the process I'm presenting here in detail.</p>
<p>For the scheduled task, just <a href="https://florianbuchner.com/kubernetes-curl-cronjob-for-internal-service/" rel="nofollow noreferrer">set up a <code>CronJob</code> using a <code>curl</code> container</a> which can invoke the service you want to perform the work.</p>
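<p>A minimal sketch of such a <code>CronJob</code> (the schedule, service name and endpoint path are assumptions you would replace with your own; <code>batch/v1</code> requires Kubernetes 1.21+, older clusters use <code>batch/v1beta1</code>):</p>
<pre><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-task
spec:
  schedule: "0 1 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: trigger
            image: curlimages/curl:latest
            args:
            - "-X"
            - "POST"
            - "http://myapp-service:8080/internal/run-nightly-task"
</code></pre>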
<h2>Edit</h2>
<p>Another option based on your comments below -- split the shared logic out into a <code>shared</code> module (using Gradle or Maven), and have two other runnable modules like <code>web</code> and <code>listener</code> that depend on the <code>shared</code> module. This will allow you to keep your shared logic in the same repository, and keep you from having to build/maintain an extra library which you would like to avoid.</p>
<p>This would be a good step in the right direction, and it would lend well to breaking the app into smaller pieces later down the road.</p>
<p><a href="https://spring.io/guides/gs/multi-module/" rel="nofollow noreferrer">Here's some additional info</a> about multi-module Spring Boot projects using Maven or Gradle.</p>
|
<p>Hello I'am new on K8S and react typescript app,
I'am trying to deploy my react application with typescript, i have an error in Kubernetes deployment</p>
<pre><code>Liveness probe failed: HTTP probe failed with statuscode: 404
</code></pre>
<p>where should I put the well-known routes /.well-known/live and /.well-known/ready ? and what should I write for it ?</p>
| <blockquote>
<p>where should I put the well-known routes /.well-known/live and
/.well-known/ready ? and what should I write for it ?</p>
</blockquote>
<p>There is no need for them if you are serving the index from your container with Nginx/Apache. In that case you just need to update the K8s YAML config and set the <code>HTTP probe</code> path to <code>/</code>, so that when K8s checks, it hits the <code>root (index.html)</code> main page of your React app.</p>
<p><strong>Ways to mitigate</strong></p>
<blockquote>
<p>Liveness probe failed: HTTP probe failed with statuscode: 404</p>
</blockquote>
<p>This means <strong>Kubernetes</strong> failed to verify the health of the deployment.</p>
<p>In your error the status code is <strong>404</strong>,</p>
<p>which means the probe was not able to reach the configured <strong>HTTP</strong> endpoint.</p>
<p>I am not sure how your routes and application are configured, but this is the simplest HTTP option you could use:</p>
<pre><code>app.get('/live',(req,res)=> {
res.send ("OK");
});
</code></pre>
<p>K8s periodically checks the HTTP status of this endpoint, which you configure on the container in the deployment YAML, as sketched below.</p>
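<p>A minimal sketch of such a probe on the container in your Deployment (assuming the <code>/live</code> endpoint above and that your app listens on port 3000; adjust the path and port to your app):</p>
<pre><code>livenessProbe:
  httpGet:
    path: /live
    port: 3000
  initialDelaySeconds: 10
  periodSeconds: 15
</code></pre>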
<p><strong>Nginx checks</strong></p>
<p>If you are deploying a separate <strong>Frontend</strong> pod and cannot add an <strong>HTTP endpoint</strong> inside the container,</p>
<p>you can use the default endpoint of <code>Apache/Nginx</code> as the liveness probe, i.e. <code>/</code>, since by default <strong>Nginx</strong> or <strong>Apache</strong> <code>return 200</code> on <code>/</code>.</p>
<p><strong>Or else</strong></p>
<p>You can also point the probe at any HTML file from your code, e.g. <code>/index-live.html</code>, so when K8s checks that path the frontend container will serve the HTML file with status code <strong>200</strong>.</p>
|
<p>I have 2 nodes that I'm running development pods on. I'd like to be able to echo only the node that a pod is running on based on the name.</p>
<p>I can use <code>kubectl get pod -o=custom-columns=NAME:.metadata.name,NODE:spec.nodeName -n my-namespace</code> to pull back all the names and nodes for all pods in that namespace, but I'd like to filter just the node name for specific pods. Using grep on the pod name works, but I'm not sure if it's possible to only show the node when filtering based on a single pod name.</p>
| <p>Option-1: Using <code>custom-columns</code></p>
<pre><code>kubectl get pod mypod -o custom-columns=":.spec.nodeName" --no-headers
</code></pre>
<p>Option-2: Using <code>jsonpath</code></p>
<pre><code>kubectl get pod mypod -o jsonpath='{.spec.nodeName}'
</code></pre>
|
<p>We are getting logs that calls to k8s are being made, despite our cluster being private, as well as being behind the gcp firewall with a rule that blocks all ingress except IAP IPs (and ICMP). What am I missing?</p>
<pre><code>"protoPayload":{
"@type":"type.googleapis.com/google.cloud.audit.AuditLog"
"authenticationInfo":{
"principalEmail":"system:anonymous"
}
"authorizationInfo":["0":{2}]
"methodName":"io.k8s.post"
"requestMetadata":{
"callerIp":"45.*.*.*"
"callerSuppliedUserAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36"
}
"resourceName":"Autodiscover/Autodiscover.xml"
"serviceName":"k8s.io"
"status":{
"code":"7"
"message":"Forbidden"
}
}
</code></pre>
| <p>Private clusters have both a control plane private endpoint and a control plane public endpoint, and you can choose to disable the public endpoint, which is the highest level of restricted access. You can then manage the cluster through the private endpoint's internal IP address with tools like kubectl, and any VM that uses the same subnet as your cluster can also access the private endpoint. However, it is important to note that even if you disable public endpoint access, Google can still use the control plane public endpoint for cluster management purposes, such as scheduled maintenance and automatic control plane upgrades.
If you need more information about how to create a private cluster with the public endpoint disabled, you can consult the following <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/private-cluster-concept#overview" rel="nofollow noreferrer">public document</a>.</p>
<p>You can review your public endpoints with the following command:</p>
<pre><code>gcloud container clusters describe YOUR_CLUSTER_NAME
</code></pre>
<p>Also, you can verify that your cluster's nodes do not have external IP addresses with the following command:</p>
<pre><code>kubectl get nodes --output wide
</code></pre>
|
<p>I'm trying to create Kubernetes job with the following requirements:</p>
<ol>
<li>Only one pod can be created for each job at most</li>
<li>If the pod failed - the job will fail</li>
<li>Max run time of the pod will be 1 hour</li>
<li>If the job finished successfully - delete the job</li>
</ol>
<p>I tried the following configurations:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: {{ .Release.Name }}
annotations:
"helm.sh/hook": post-install
"helm.sh/hook-delete-policy": hook-succeeded
spec:
template:
spec:
containers:
- name: {{ .Release.Name }}
image: {{ .Values.image }}
env:
- name: ARG1
value: {{ required "ARG1 is mandatory" .Values.ENV.ARG1 }}
- name: GITLAB_USER_EMAIL
value: {{ .Values.ENV.GITLAB_USER_EMAIL }}
envFrom:
- secretRef:
name: {{ .Release.Name }}
restartPolicy: Never
backoffLimit: 1
activeDeadlineSeconds: 3600
</code></pre>
<p>But it's not working as expected, any ideas?
Thanks !</p>
| <ul>
<li>Only one pod can be created for each job at most</li>
</ul>
<p>The requested parallelism (.spec.parallelism) can be set to any non-negative value. If it is unspecified, it defaults to 1. If it is specified as 0, then the Job is effectively paused until it is increased.</p>
<blockquote>
<p>For CronJobs, setting <code>successfulJobsHistoryLimit: 0, failedJobsHistoryLimit: 0</code> can be helpful: it
removes the Pods whether they <strong>fail</strong> or <strong>succeed</strong>, so no history or
Pod stays around and only one Pod gets created or runs at a time.</p>
</blockquote>
<ul>
<li>If the pod failed - the job will fail</li>
</ul>
<p>That is the default behavior; with <code>restartPolicy: Never</code> the Pod also won't get restarted.</p>
<ul>
<li><p>Max run time of the pod will be 1 hour</p>
<p><code>activeDeadlineSeconds: 3600</code> you have already added</p>
</li>
<li><p>If the job finished successfully - delete the job</p>
</li>
</ul>
<p><code>ttlSecondsAfterFinished: 100</code> will solve your issue.</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: {{ .Release.Name }}
annotations:
"helm.sh/hook": post-install
"helm.sh/hook-delete-policy": hook-succeeded
spec:
template:
spec:
containers:
- name: {{ .Release.Name }}
image: {{ .Values.image }}
env:
- name: ARG1
value: {{ required "ARG1 is mandatory" .Values.ENV.ARG1 }}
- name: GITLAB_USER_EMAIL
value: {{ .Values.ENV.GITLAB_USER_EMAIL }}
envFrom:
- secretRef:
name: {{ .Release.Name }}
restartPolicy: Never
backoffLimit: 1
ttlSecondsAfterFinished: 100
activeDeadlineSeconds: 3600
</code></pre>
|
<p>What are the differences between</p>
<pre><code>kubectl expose deployment myservice --type=NodePort --port=80 --target-port=80
</code></pre>
<p>and</p>
<pre><code>kubectl create service nodeport myservice --tcp=80:80
</code></pre>
<p>?</p>
| <pre><code>kubectl create service nodeport myservice --tcp=80:80
</code></pre>
<p>This just creates the service, using the service name as its labels/selector.</p>
<pre><code>kubectl expose deployment myservice --type=NodePort --port=80 --target-port=80
</code></pre>
<p>This creates the service based on the <code>labels/selector</code> specified in the <strong>deployment</strong>. So using this you can expose all the Pods of one specific deployment over a single <strong>NodePort</strong>.</p>
<p><strong>Explanation</strong> :</p>
<p>The main difference between the two lies in the <strong>labels</strong>, or in other words the <strong>selectors</strong>.</p>
<pre><code>kubectl create service nodeport myservice --tcp=80:80
</code></pre>
<p>If you run the above command, it will create the service with the selector <code>myservice</code>.</p>
<p><strong>Example</strong> :</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: myservice
name: myservice
spec:
ports:
- name: 80-80
nodePort: 30858
port: 80
protocol: TCP
targetPort: 80
selector:
app: myservice
type: NodePort
</code></pre>
<p>As you can see, the <code>selector</code> is <code>myservice</code>, and the same value is used for the <code>labels</code> and the <code>service name</code>. So the <strong>service name</strong> ends up as both the <strong>labels</strong> and the <strong>selector</strong>. The service gets created, but it won't be able to talk to the deployment because the selector is different (unless you create the service with a name matching the selector from the deployment).</p>
<p>While with command :</p>
<pre><code>kubectl expose deployment nginx-deployment --type=NodePort --port=80 --target-port=80
</code></pre>
<p>it will use the labels and selector of the deployment. Which means that if you have an Nginx deployment running with <code>selector : nginx</code>, the service will pick that selector up, because you are exposing that deployment.</p>
<p><strong>Example deployment :</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: nginx
name: nginx-deployment
spec:
replicas: 3
selector:
matchLabels:
app: nginx
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: nginx
spec:
containers:
- image: nginx:1.14.2
imagePullPolicy: IfNotPresent
name: nginx
ports:
- containerPort: 80
protocol: TCP
</code></pre>
<p>Now if you will run command like</p>
<pre><code>kubectl expose deployment nginx-deployment --type=NodePort --port=80 --target-port=80
</code></pre>
<p>the service will have the <code>selector: nginx</code> used in the deployment.</p>
<p><strong>Example service</strong> :</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: nginx
name: nginx-deployment
spec:
ports:
- name: 80-80
nodePort: 30858
port: 80
protocol: TCP
targetPort: 80
selector:
app: nginx
type: NodePort
</code></pre>
|
<p>My SpringBoot application is scheduled to run at 1 UTC each day for some data collection and put that in the database. We are using Kubernetes and we have two pods accessing the same database. The database is at some other location for which we have a connection string which is the same in both pods.</p>
<p>The problem is that both of my pods wake up at 1 UTC and add duplicate entries in the database. How can I ensure that only one pod is talking to the database? Is this application not ideal for k8s deployment?</p>
| <p>I know this is old, but for anybody else, look into <a href="https://github.com/lukas-krecan/ShedLock" rel="nofollow noreferrer">ShedLock</a>. It handles locking across distributed nodes and is pretty easy to implement.</p>
|
<p>We have a single node kubernetes environment hosted on an on prem server and we are attempting to host jitsi on it as a single pod. Jitsi web, jicofo, jvb and the prosody will be in on one pod rather than having separate pods for each (<a href="https://github.com/DushmanthaBandaranayake/jitsi-kubernetes-scalable-service" rel="nofollow noreferrer">reference here</a>)</p>
<p>So far we have managed to set it up by adding our ingress hostname as the PUBLIC_URL to all 4 containers within the pod. This service works fine if two users are on the same network.</p>
<p>If a user using another network joins the call, there is no video or audio and will receive such an error in the jvb container</p>
<blockquote>
<p>JVB 2022-03-16 02:03:28.447 WARNING: [62] [confId=200d989e4b048ad3 gid=116159 stats_id=Durward-H4W [email protected] ufrag=4vfdk1fu8vfgn1 epId=eaff1488 local_ufrag=4vfdk1fu8vfgn1] ConnectivityCheckClient.startCheckForPair#374: Failed to send BINDING-REQUEST(0x1)[attrib.count=6 len=92 tranID=0xBFC4F7917F010AF9DA6E21D7]
java.lang.IllegalArgumentException: No socket found for 172.17.0.40:10000/udp->192.168.1.23:42292/udp
at org.ice4j.stack.NetAccessManager.sendMessage(NetAccessManager.java:631)
at org.ice4j.stack.NetAccessManager.sendMessage(NetAccessManager.java:581)
at org.ice4j.stack.StunClientTransaction.sendRequest0(StunClientTransaction.java:267)
at org.ice4j.stack.StunClientTransaction.sendRequest(StunClientTransaction.java:245)
at org.ice4j.stack.StunStack.sendRequest(StunStack.java:680)
at org.ice4j.ice.ConnectivityCheckClient.startCheckForPair(ConnectivityCheckClient.java:335)
at org.ice4j.ice.ConnectivityCheckClient.startCheckForPair(ConnectivityCheckClient.java:231)
at org.ice4j.ice.ConnectivityCheckClient$PaceMaker.run(ConnectivityCheckClient.java:938)
at org.ice4j.util.PeriodicRunnable.executeRun(PeriodicRunnable.java:206)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:829)</p>
</blockquote>
<p>Furthermore the errors in the browser console are as such</p>
<p><a href="https://i.stack.imgur.com/o0rc1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/o0rc1.png" alt="enter image description here" /></a></p>
<p><strong>EDIT</strong></p>
<p>I have added the yaml file for the jitsi here</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: jitsi
name: jitsi
namespace: default
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
k8s-app: jitsi
template:
metadata:
labels:
k8s-app: jitsi
spec:
containers:
- name: jicofo
image: jitsi/jicofo:stable-7001
volumeMounts:
- mountPath: /config
name: jicofo-config-volume
imagePullPolicy: IfNotPresent
env:
- name: XMPP_SERVER
value: localhost
- name: XMPP_DOMAIN
value: meet.jitsi
- name: XMPP_AUTH_DOMAIN
value: auth.meet.jitsi
- name: PUBLIC_URL
value: <hidden>
- name: XMPP_INTERNAL_MUC_DOMAIN
value: internal-muc.meet.jitsi
- name: JICOFO_COMPONENT_SECRET
valueFrom:
secretKeyRef:
name: jitsi-config
key: JICOFO_COMPONENT_SECRET
- name: JICOFO_AUTH_USER
value: focus
- name: JICOFO_AUTH_PASSWORD
valueFrom:
secretKeyRef:
name: jitsi-config
key: JICOFO_AUTH_PASSWORD
- name: TZ
value: America/Los_Angeles
- name: JVB_BREWERY_MUC
value: jvbbrewery
- name: prosody
image: jitsi/prosody:stable-7001
volumeMounts:
- mountPath: /config
name: prosody-config-volume
imagePullPolicy: IfNotPresent
env:
- name: XMPP_DOMAIN
value: meet.jitsi
- name: XMPP_AUTH_DOMAIN
value: auth.meet.jitsi
- name: XMPP_MUC_DOMAIN
value: muc.meet.jitsi
- name: PUBLIC_URL
value: <hidden>
- name: XMPP_INTERNAL_MUC_DOMAIN
value: internal-muc.meet.jitsi
- name: JICOFO_COMPONENT_SECRET
valueFrom:
secretKeyRef:
name: jitsi-config
key: JICOFO_COMPONENT_SECRET
- name: JVB_AUTH_USER
value: jvb
- name: JVB_AUTH_PASSWORD
valueFrom:
secretKeyRef:
name: jitsi-config
key: JVB_AUTH_PASSWORD
- name: JICOFO_AUTH_USER
value: focus
- name: JICOFO_AUTH_PASSWORD
valueFrom:
secretKeyRef:
name: jitsi-config
key: JICOFO_AUTH_PASSWORD
- name: TZ
value: America/Los_Angeles
- name: JVB_TCP_HARVESTER_DISABLED
value: "true"
- name: web
image: jitsi/web:stable-7001
imagePullPolicy: IfNotPresent
env:
- name: XMPP_SERVER
value: localhost
- name: JICOFO_AUTH_USER
value: focus
- name: PUBLIC_URL
value: <hidden>
- name: XMPP_DOMAIN
value: meet.jitsi
- name: XMPP_AUTH_DOMAIN
value: auth.meet.jitsi
- name: XMPP_INTERNAL_MUC_DOMAIN
value: internal-muc.meet.jitsi
- name: XMPP_BOSH_URL_BASE
value: http://127.0.0.1:5280
- name: XMPP_MUC_DOMAIN
value: muc.meet.jitsi
- name: TZ
value: America/Los_Angeles
- name: JVB_TCP_HARVESTER_DISABLED
value: "true"
- name: jvb
image: jitsi/jvb:stable-7001
volumeMounts:
- mountPath: /config
name: jvb-config-volume
imagePullPolicy: IfNotPresent
env:
- name: XMPP_SERVER
value: localhost
- name: DOCKER_HOST_ADDRESS
value: <hidden>
- name: XMPP_DOMAIN
value: meet.jitsi
- name: XMPP_AUTH_DOMAIN
value: auth.meet.jitsi
- name: XMPP_INTERNAL_MUC_DOMAIN
value: internal-muc.meet.jitsi
- name: PUBLIC_URL
value: <hidden>
# - name: JVB_STUN_SERVERS
# value: stun.l.google.com:19302,stun1.l.google.com:19302,stun2.l.google.com:19302
- name: JICOFO_AUTH_USER
value: focus
- name: JVB_TCP_HARVESTER_DISABLED
value: "true"
- name: JVB_AUTH_USER
value: jvb
- name: JVB_PORT
value: "10000"
- name: JVB_TCP_PORT
value: "4443"
- name: JVB_TCP_MAPPED_PORT
value: "4443"
# - name: JVB_ENABLE_APIS
# value: "rest,colibri"
- name: JVB_AUTH_PASSWORD
valueFrom:
secretKeyRef:
name: jitsi-config
key: JVB_AUTH_PASSWORD
- name: JICOFO_AUTH_PASSWORD
valueFrom:
secretKeyRef:
name: jitsi-config
key: JICOFO_AUTH_PASSWORD
- name: JVB_BREWERY_MUC
value: jvbbrewery
- name: TZ
value: America/Los_Angeles
volumes:
- name: jvb-config-volume
hostPath:
path: /home/jitsi-config/jvb
- name: jicofo-config-volume
hostPath:
path: /home/jitsi-config/jicofo
- name: prosody-config-volume
hostPath:
path: /home/jitsi-config/prosody
</code></pre>
<p><strong>EDIT 2</strong></p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
service: web
name: web
namespace: default
spec:
ports:
- name: "http"
protocol: TCP
port: 80
targetPort: 80
nodePort: 31015
- name: "https"
protocol: TCP
port: 443
targetPort: 443
nodePort: 30443
- name: "prosody"
protocol: TCP
port: 5222
targetPort: 5222
- port: 30300
name: jvb-0
protocol: UDP
targetPort: 30300
nodePort: 30300
# - name: "jvbport"
# protocol: TCP
# port: 9090
# targetPort: 9090
- name: "udp"
protocol: UDP
port: 10000
targetPort: 10000
# - name: "udp-secondary"
# protocol: UDP
# port: 20000
# targetPort: 20000
- name: "test"
protocol: TCP
port: 4443
targetPort: 4443
selector:
k8s-app: jitsi
type: NodePort
---
# service for jvbs
# create service for jvb upd access on kubernetes Nodeport starting with 31000.
# Make sure NodePorts between 31000-31005 are available on your kube cluster.
# update this if you need JVBs more than 6.
# JVB-0
apiVersion: v1
kind: Service
metadata:
labels:
service: jvb-0
name: jvb-0
namespace: default
spec:
type: NodePort
externalTrafficPolicy: Cluster
ports:
- port: 31000
name: jvb-0
protocol: UDP
targetPort: 31000
nodePort: 31000
# - name: "udp"
# protocol: UDP
# port: 10000
# targetPort: 10000
# - name: "jvbport"
# protocol: TCP
# port: 9090
# targetPort: 9090
selector:
app: jvb
"statefulset.kubernetes.io/pod-name": jvb-0
---
</code></pre>
| <p>Managed to fix it. Posting this for anyone who comes across the same issue.</p>
<p>First off, UDP port 10000 does not work here because with a NodePort service in Kubernetes you can only expose ports in the range 30000 to 32767. Having said that, you need to pick a port within that range and use it for the JVB_PORT configuration in the JVB container.</p>
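<p>For example, in the <code>jvb</code> container of the Deployment this boils down to pointing the port variable at the chosen NodePort (a sketch, assuming you pick 31000 to match the <code>jvb-0</code> Service below):</p>
<pre><code>- name: JVB_PORT
  value: "31000"
</code></pre>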
<p>Secondly, use that port in the Service layer to expose it to the front end:</p>
<pre><code>- name: "udp"
protocol: UDP
port: 31000
targetPort: 31000
nodePort: 31000
</code></pre>
<p>Thirdly, regarding the firewall: if you are behind a company firewall, make sure you have enabled ingress and egress for your JVB_PORT.</p>
|
<p>I want to intercept the helm YAML and customize it using a Python script, and then install it. I have been doing something like <code>helm template | python3 script... | kubectl apply -f -</code> but of course this doesn't create a helm release in my cluster, so I lose out on <code>helm rollback</code> etc.</p>
<p>I have considered using Kustomize but it doesn't have the features that I'd like.</p>
<p>Is there a way to take pre-generated YAML, like that from <code>helm template</code> or <code>helm install --dry-run</code> and then install/upgrade that <strong>using</strong> helm?</p>
| <p>Isn't that what post-renderers are for?</p>
<p>See <a href="https://helm.sh/docs/topics/advanced/#post-rendering" rel="nofollow noreferrer">https://helm.sh/docs/topics/advanced/#post-rendering</a></p>
<blockquote>
<p>A post-renderer can be any executable that accepts rendered Kubernetes manifests on STDIN and returns valid Kubernetes manifests on STDOUT. It should return an non-0 exit code in the event of a failure. This is the only "API" between the two components. It allows for great flexibility in what you can do with your post-render process.</p>
<p>A post renderer can be used with install, upgrade, and template. To use a post-renderer, use the --post-renderer flag with a path to the renderer executable you wish to use:</p>
<p><code>$ helm install mychart stable/wordpress --post-renderer ./path/to/executable</code></p>
</blockquote>
<p>I haven't used it myself yet, but it looks interesting if you want to run your own alternative to kustomize.</p>
<p>See <a href="https://github.com/vmware-tanzu/carvel-ytt/tree/develop/examples/helm-ytt-post-renderer" rel="nofollow noreferrer">https://github.com/vmware-tanzu/carvel-ytt/tree/develop/examples/helm-ytt-post-renderer</a> for an example that is not kustomize.</p>
|
<p>I am using Kubernetes with Helm 3.</p>
<p>It is ran on CentOS Linux 7 (Core).</p>
<p>K8S (check by running: kubectl version):</p>
<p>git version (kubernetes): v1.21.6, go version: go1.16.9.</p>
<p>helm version: v3.3.4</p>
<p>helm version (git) go1.14.9.</p>
<p>I need to create a Job that is running after a Pod is created.</p>
<p>The pod yaml:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: {{ include "test.fullname" . }}-mysql
labels:
app: {{ include "test.fullname" . }}-mysql
annotations:
"helm.sh/hook": post-install
"helm.sh/hook-weight": "-20"
"helm.sh/delete-policy": before-hook-creation
spec:
containers:
- name: {{ include "test.fullname" . }}-mysql
image: {{ .Values.mysql.image }}
imagePullPolicy: IfNotPresent
env:
- name: MYSQL_ROOT_PASSWORD
value: "12345"
- name: MYSQL_DATABASE
value: test
</code></pre>
<p>The Job:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: {{ include "test.fullname" . }}-migration-job
labels:
app: {{ include "test.fullname" . }}-migration-job
annotations:
"helm.sh/hook": post-install
"helm.sh/hook-weight": "-10"
"helm.sh/hook-delete-policy": hook-succeeded, hook-failed
spec:
parallelism: 1
completions: 1
backoffLimit: 1
template: #PodTemplateSpec (Core/V1)
spec: #PodSpec (core/v1)
initContainers: # regular
- name: wait-mysql
image: bitnami/kubectl
imagePullPolicy: IfNotPresent
args:
- wait
- pod/{{ include "test.fullname" . }}-mysql
- --namespace={{ .Release.Namespace }}
- --for=condition=ready
- --timeout=120s
containers:
- name: {{ include "test.fullname" . }}
image: {{ .Values.myMigration.image }}
imagePullPolicy: IfNotPresent
command: {{- toYaml .Values.image.entrypoint | nindent 12 }}
args: {{- toYaml .Values.image.cmd | nindent 12}}
</code></pre>
<p>MySQL is MySQL 5.6 image.</p>
<p>After writing the above, I also run <code>helm install test ./test --namespace test --create-namespace</code>.</p>
<p>Even though I changed the hook to pre-install (for both Pod and Job), the job never runs.</p>
<p>In both situations, I get messages (and need to press - to exit - I don't want this behavior either:</p>
<blockquote>
<p>Pod test-mysql pending Pod test-mysql pending Pod</p>
</blockquote>
<blockquote>
<p>test-mysql pending Pod test-mysql running Pod</p>
</blockquote>
<blockquote>
<p>test-mysql running Pod test-mysql running Pod</p>
</blockquote>
<blockquote>
<p>test-mysql running ...</p>
</blockquote>
<p>In this example, when I put a 'bug' in the Job, for example: <code>containersx</code> instead of <code>container</code>, I don't get any notification that I have a wrong syntax.</p>
<p>Maybe because MySQL keeps running (and never completes), can I force Helm to move on to the next yaml declared by the hooks? (Even though I declare the proper order for Pod and Job: the Pod should run before the Job.)</p>
<p>What is wrong, and how can I ensure the pod is created before the job? And when the pod starts running, my job will run after that?</p>
<p>Thanks.</p>
| <p>As per your configuration, it looks like you need to set the <code>post-install</code> <a href="https://helm.sh/docs/topics/charts_hooks/#the-available-hooks" rel="nofollow noreferrer">hook</a> specifically for the Job, since it should execute after all resources are loaded into Kubernetes. When the <code>pre-install</code> hook is used on both the Pod and the Job, they run before the rest of the chart is loaded, which seems to prevent the Job from starting.</p>
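<p>One way to apply that (a sketch, keeping the template names from the question; the MySQL Pod would then be left as a plain chart resource with no hook annotations, so it is created with the release before the hook runs):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "test.fullname" . }}-migration-job
  annotations:
    "helm.sh/hook": post-install
    "helm.sh/hook-delete-policy": hook-succeeded, hook-failed
</code></pre>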
|
<p>In k8s, I deploy an app running with clusterIP(port:5270). Then I want to set up an ingress-nginx to forward requests to this app. Here is the configuration:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: k8s-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: nginx
rules:
- http:
paths:
- path: /core
pathType: Prefix
backend:
service:
name: core-service
port:
number: 5270
</code></pre>
<p>My service is a NestJS application, and this is my request to this service :</p>
<pre><code>192.168.10.131/core/api/myorder/get-order-by-id?id=1
</code></pre>
<p>"myorder" is one of the module I have, and I add a global prefix by using <code>app.setGlobalPrefix('api');</code> in the main.ts</p>
<p>When I enter the request path, I get:</p>
<pre><code>{"statusCode":404,"message":"ENOENT: no such file or directory, stat '/app/client/index.html'"}
</code></pre>
<p>It seems that the ingress can send the request to the pods, but it will look for "/app/client/index.html" which is not expected. It should access the myorder module. there is a client folder under the /app.</p>
<p>If I use the <code>Nodeport</code> to access the service instead of the ingress, everything is fine.</p>
<p><em><strong>Here is the change I have since I got your guy's solution:</strong></em></p>
<p>I deployed a simple nest app, a very simple one:
<a href="https://i.stack.imgur.com/5njbm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5njbm.png" alt="enter image description here" /></a></p>
<p>It only has a getHalleo function.
Here is the main.js:
<a href="https://i.stack.imgur.com/8vfNt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8vfNt.png" alt="enter image description here" /></a></p>
<p>Here is the yaml for deploying at k8s:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: k8-ingress-test1
labels:
app: k8-ingress-test
spec:
replicas: 1
selector:
matchLabels:
app: k8-ingress-test
template:
metadata:
labels:
app: k8-ingress-test
spec:
containers:
- name: k8-ingress-test
image: 192.168.10.145:8080/k8s-ingress-test
imagePullPolicy: Always
ports:
- containerPort: 3146
---
apiVersion: v1
kind: Service
metadata:
name: k8-ingress-test-service
spec:
selector:
app: k8-ingress-test
ports:
- protocol: TCP
port: 3146
targetPort: 3146
</code></pre>
<p>Here is my new ingress-nginx:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: k8s-ingress
spec:
ingressClassName: nginx
rules:
- http:
paths:
- path: /hello
pathType: Prefix
backend:
service:
name: k8-ingress-test-service
port:
number: 3146
</code></pre>
<p>I deleted "nginx.ingress.kubernetes.io/rewrite-target: /".
Here is my access URL:</p>
<pre><code>192.168.10.131/hello
</code></pre>
<p>Based on my understanding, it should catch "/hello" and send "/hello" to the test app. since there is no 'hello' module in the app, I get the following :</p>
<pre><code>{"statusCode":404,"message":"Cannot GET /hello","error":"Not Found"}
</code></pre>
<p>so, this is the reason why we have :</p>
<pre><code>nginx.ingress.kubernetes.io/rewrite-target: /
</code></pre>
<p>This will edit the request from "/hello" to "/". In other words, it would send "/" to the test app instead of "/hello", so I can hit the getHello function.</p>
<p>For the <code>192.168.10.131/core/api/myorder/get-order-by-id?id=1</code>, I can not do <code>nginx.ingress.kubernetes.io/rewrite-target: /</code> because it would rewrite the request to <code>/</code> which is wrong.</p>
<p>So, is there any way that I can catch a request containing <code>core</code>, delete the <code>core</code> part, and then send <code>/api/myorder/get-order-by-id?id=1</code> to port 5270 (and likewise catch "hello", delete it, and send "/" to port 3146)?</p>
<p>Thanks</p>
| <p>I think you might have some misunderstanding between the nginx ingress's <code>rewrite-target</code> annotation and your NestJS application's <code>redirection</code>.</p>
<p>With the following annotation:</p>
<pre><code>nginx.ingress.kubernetes.io/rewrite-target: /
</code></pre>
<p>your request path <code>/core/api/myorder/get-order-by-id?id=1</code> will rewrite to <code>/</code>, not <code>/api/myorder/get-order-by-id?id=1</code>.
Check the docs for more info: <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#rewrite" rel="nofollow noreferrer">ingress-nginx annotations/#rewrite</a> and <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/" rel="nofollow noreferrer">examples</a>.</p>
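<p>For completeness, the rewrite example in those docs uses a capture group to strip a prefix; a sketch adapted to your <code>/core</code> path (service name and port taken from your question, the rest follows the documented example):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-ingress
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /core(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: core-service
            port:
              number: 5270
</code></pre>
<p>With that rule, <code>/core/api/myorder/get-order-by-id?id=1</code> should be forwarded to the service as <code>/api/myorder/get-order-by-id?id=1</code>.</p>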
<p>I'm not familiar with NestJS, but I guess your application does not respond correctly to the root path <code>/</code>.</p>
<p>What you probably need is <code>Redirection</code> in a NestJS application. <a href="https://docs.nestjs.com/controllers#redirection" rel="nofollow noreferrer">NestJS Controllers Redirection</a></p>
|
<p>The Python API is available to read objects from a cluster. By cloning we can say:</p>
<ol>
<li>Get a copy of an existing Kubernetes object using <code>kubectl get</code></li>
<li>Change the properties of the object</li>
<li>Apply the new object</li>
</ol>
<p>Until recently, the option to <a href="https://medium.com/@jonathan.johnson/export-has-been-deprecated-in-1-14-51cfef5a0cb7" rel="nofollow noreferrer"><code>--export</code> api was deprecated in 1.14</a>. How can we use the Python Kubernetes API to do the steps from 1-3 described above?</p>
<p>There are multiple questions about how to extract the <a href="https://github.com/kubernetes-client/python/issues/977" rel="nofollow noreferrer">code from Python API to YAML</a>, but it's unclear how to transform the Kubernetes API object. </p>
| <p>Just use <code>to_dict()</code> which is now offered by Kubernetes Client objects. Note that it creates a partly deep copy. So to be safe:</p>
<pre><code>copied_obj = copy.deepcopy(obj.to_dict())
</code></pre>
<p>Dicts can be passed to <code>create*</code> and <code>patch*</code> methods.</p>
<p>For convenience, you can also wrap the dict in <a href="https://github.com/ramazanpolat/prodict" rel="nofollow noreferrer"><code>Prodict</code></a>.</p>
<pre><code>copied_obj = Prodict.from_dict(copy.deepcopy(obj.to_dict()))
</code></pre>
<p>The final issue is getting rid of superfluous fields. (Unfortunately, Kubernetes sprinkles them throughout the object.) I use <a href="https://kopf.readthedocs.io/en/stable/" rel="nofollow noreferrer"><code>kopf</code></a>'s internal facility for getting the "essence" of an object. (It takes care of the deep copy.)</p>
<pre><code>copied_obj = kopf.AnnotationsDiffBaseStorage().build(body=kopf.Body(obj.to_dict()))
copied_obj = Prodict.from_dict(copied_obj)
</code></pre>
|
<p>Using Terraform I spin up the following resources for my primary using a unique service account for this cluster:</p>
<pre><code>resource "google_container_cluster" "primary" {
name = var.gke_cluster_name
location = var.region
# We can't create a cluster with no node pool defined, but we want to only use
# separately managed node pools. So we create the smallest possible default
# node pool and immediately delete it.
remove_default_node_pool = true
initial_node_count = 1
ip_allocation_policy {}
networking_mode = "VPC_NATIVE"
node_config {
service_account = google_service_account.cluster_sa.email
oauth_scopes = [
"https://www.googleapis.com/auth/cloud-platform"
]
}
cluster_autoscaling {
enabled = true
resource_limits {
resource_type = "cpu"
maximum = 40
minimum = 3
}
resource_limits {
resource_type = "memory"
maximum = 100
minimum = 12
}
}
network = google_compute_network.vpc.name
subnetwork = google_compute_subnetwork.subnet.name
}
resource "google_container_node_pool" "primary_nodes" {
name = "${google_container_cluster.primary.name}-node-pool"
location = var.region
cluster = google_container_cluster.primary.name
node_count = var.gke_num_nodes
node_config {
service_account = google_service_account.cluster_sa.email
oauth_scopes = [
"https://www.googleapis.com/auth/cloud-platform"
]
labels = {
env = var.project_id
}
disk_size_gb = 150
preemptible = true
machine_type = var.machine_type
tags = ["gke-node", "${var.project_id}-gke"]
metadata = {
disable-legacy-endpoints = "true"
}
}
}
</code></pre>
<p>Even though I provide with the nodes with the appropriate permissions to pull from the google container registry (<code>roles/containerregistry.ServiceAgent</code>) sometimes I get an <code>ImagePullError</code> randomly from kubernetes:</p>
<pre><code>Unexpected status code [manifests latest]: 401 Unauthorized
</code></pre>
<p>After using the following command to inspect the service accounts assigned to the node pools:</p>
<p><code>gcloud container clusters describe master-env --zone="europe-west2" | grep "serviceAccount"</code></p>
<p>I see the following output:</p>
<pre><code>serviceAccount: default
serviceAccount: master-env@<project-id>.iam.gserviceaccount.com
serviceAccount: master-env@<project-id>.iam.gserviceaccount.com
</code></pre>
<p>Indicating that although I've specified the correct service account to assign to the nodes, for some reason (I think for the <code>primary</code> pool) it instead assigns the <code>default</code> service account which uses the wrong <code>oauth</code> scopes:</p>
<pre><code>oauthScopes:
- https://www.googleapis.com/auth/logging.write
- https://www.googleapis.com/auth/monitoring
</code></pre>
<p>Instead of <code>https://www.googleapis.com/auth/cloud-platform</code>.</p>
<p><em><strong>How can I make sure the same service account is used for all nodes?</strong></em></p>
<h2>Edit 1:</h2>
<p>After implementing the fix from @GariSingh now all my node-pools use the same <code>Service Account</code> as expected however I still get the <code>unexpected status code [manifests latest]: 401 Unauthorized</code> error sometimes when installing my services onto the cluster.</p>
<p>This unusual, as other services installed onto cluster seem to pull their images from <code>gcr</code> without issue.</p>
<p>Describing the pod events shows the following:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 11m default-scheduler Successfully assigned default/<my-deployment> to gke-master-env-nap-e2-standard-2-<id>
Warning FailedMount 11m kubelet MountVolume.SetUp failed for volume "private-key" : failed to sync secret cache: timed out waiting for the condition
Warning FailedMount 11m kubelet MountVolume.SetUp failed for volume "kube-api-access-5hh9r" : failed to sync configmap cache: timed out waiting for the condition
Warning Failed 9m34s (x5 over 10m) kubelet Error: ImagePullBackOff
</code></pre>
<h2>Edit 2:</h2>
<p>The final piece of the puzzle was to add <code>oauth_scopes</code> to the <code>auto_provisioning_defaults</code> similar to the node configs so that the <code>ServiceAccount</code> could be used properly.</p>
| <p>Not sure if you intended to use <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/node-auto-provisioning" rel="nofollow noreferrer">Node auto-provisioning (NAP)</a> (which I highly recommend you use unless it does not meet your needs), but the <code>cluster_autoscaling</code> argument for <code>google_container_cluster</code> actually enables this. It does not enable the cluster autoscaler for individual node pools.</p>
<p>If your goal is to enable cluster autoscaling for the node pool you created in your config and not use NAP, then you'll need to delete the <code>cluster_autoscaling</code> block and add an <a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/container_node_pool#nested_autoscaling" rel="nofollow noreferrer">autoscaling block</a> under your <code>google_container_node_pool</code> resource and change <code>node_count</code> to <code>initial_node_count</code>:</p>
<pre><code>resource "google_container_node_pool" "primary_nodes" {
name = "${google_container_cluster.primary.name}-node-pool"
location = var.region
cluster = google_container_cluster.primary.name
initial_node_count = var.gke_num_nodes
  autoscaling {
    min_node_count = var.min_nodes
    max_node_count = var.max_nodes
  }
  node_config {
    service_account = google_service_account.cluster_sa.email
oauth_scopes = [
"https://www.googleapis.com/auth/cloud-platform"
]
labels = {
env = var.project_id
}
disk_size_gb = 150
preemptible = true
machine_type = var.machine_type
tags = ["gke-node", "${var.project_id}-gke"]
metadata = {
disable-legacy-endpoints = "true"
}
}
}
</code></pre>
<p>(the above assumes you have set variables for the min and max node counts)</p>
<p>If you do want to use NAP, then you'll need to add an <a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/container_cluster#nested_auto_provisioning_defaults" rel="nofollow noreferrer">auto_provisioning_defaults block</a> and configure the <code>service_account</code> property:</p>
<pre><code>resource "google_container_cluster" "primary" {
name = var.gke_cluster_name
location = var.region
# We can't create a cluster with no node pool defined, but we want to only use
# separately managed node pools. So we create the smallest possible default
# node pool and immediately delete it.
remove_default_node_pool = true
initial_node_count = 1
ip_allocation_policy {}
networking_mode = "VPC_NATIVE"
node_config {
service_account = google_service_account.cluster_sa.email
oauth_scopes = [
"https://www.googleapis.com/auth/cloud-platform"
]
}
cluster_autoscaling {
enabled = true
auto_provisioning_defaults {
service_account = google_service_account.cluster_sa.email
}
resource_limits {
resource_type = "cpu"
maximum = 40
minimum = 3
}
resource_limits {
resource_type = "memory"
maximum = 100
minimum = 12
}
}
network = google_compute_network.vpc.name
subnetwork = google_compute_subnetwork.subnet.name
}
</code></pre>
|
<p>I want to display pod details in the following format using promql/Prometheus.</p>
<p><a href="https://i.stack.imgur.com/AGPBm.png" rel="noreferrer"><img src="https://i.stack.imgur.com/AGPBm.png" alt="Image1" /></a></p>
<p>Furthermore, I want to display CPU and memory utilization of application/component in below format using promql</p>
<p><a href="https://i.stack.imgur.com/SLrzt.png" rel="noreferrer"><img src="https://i.stack.imgur.com/SLrzt.png" alt="Image2" /></a></p>
<p>promql query: sum(container_memory_working_set_bytes) by (pod)</p>
<p>I can get the consumed memory by pod using above query.</p>
<p>How to calculate percentage of memory used ? I am not able to fetch memory limit of stateful pod using promql
Could you please suggest any query/API details ?</p>
| <h3>Per-pod CPU usage in percentage (the query doesn't return CPU usage for pods without CPU limits)</h3>
<pre><code>100 * max(
rate(container_cpu_usage_seconds_total[5m])
/ on (container, pod)
kube_pod_container_resource_limits{resource="cpu"}
) by (pod)
</code></pre>
<p>The <code>kube_pod_container_resource_limits</code> metric can be scraped incorrectly if scrape config for <a href="https://github.com/kubernetes/kube-state-metrics/" rel="noreferrer">kube-state-metrics</a> pod is improperly configured. In this case the original <code>pod</code> label for this metric is moved to the <code>exported_pod</code> label because of <code>honor_labels</code> behavior - see <a href="https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config" rel="noreferrer">these docs</a> for details. In this case <a href="https://docs.victoriametrics.com/MetricsQL.html#label_replace" rel="noreferrer">label_replace</a> function must be used for moving <code>exported_pod</code> label to <code>pod</code> label:</p>
<pre><code>100 * max(
rate(container_cpu_usage_seconds_total[5m])
/ on (container, pod)
label_replace(kube_pod_container_resource_limits{resource="cpu"}, "pod", "$1", "exported_pod", "(.+)")
) by (pod)
</code></pre>
<h3>Per-pod memory usage in percentage (the query doesn't return memory usage for pods without memory limits)</h3>
<pre><code>100 * max(
container_memory_working_set_bytes
/ on (container, pod)
kube_pod_container_resource_limits{resource="memory"}
) by (pod)
</code></pre>
<p>If the <code>kube_pod_container_resource_limits</code> metric is scraped incorrectly as mentioned above, then the <a href="https://docs.victoriametrics.com/MetricsQL.html#label_replace" rel="noreferrer">label_replace</a> function must be used for moving <code>exported_pod</code> label value to <code>pod</code>:</p>
<pre><code>100 * max(
container_memory_working_set_bytes
/ on (container, pod)
label_replace(kube_pod_container_resource_limits{resource="memory"}, "pod", "$1", "exported_pod", "(.+)")
) by (pod)
</code></pre>
|
<p><code>rke --debug up --config cluster.yml</code></p>
<p>fails with health checks on etcd hosts with error:</p>
<blockquote>
<p>DEBU[0281] [etcd] failed to check health for etcd host [x.x.x.x]: failed to get /health for host [x.x.x.x]: Get "https://x.x.x.x:2379/health": remote error: tls: bad certificate</p>
</blockquote>
<p>Checking etcd healthchecks</p>
<pre><code>for endpoint in $(docker exec etcd /bin/sh -c "etcdctl member list | cut -d, -f5"); do
echo "Validating connection to ${endpoint}/health";
curl -w "\n" --cacert $(docker exec etcd printenv ETCDCTL_CACERT) --cert $(docker exec etcd printenv ETCDCTL_CERT) --key $(docker exec etcd printenv ETCDCTL_KEY) "${endpoint}/health";
done
Running on that master node
Validating connection to https://x.x.x.x:2379/health
{"health":"true"}
Validating connection to https://x.x.x.x:2379/health
{"health":"true"}
Validating connection to https://x.x.x.x:2379/health
{"health":"true"}
Validating connection to https://x.x.x.x:2379/health
{"health":"true"}
</code></pre>
<pre><code>you can run it manually and see if it responds correctly
curl -w "\n" --cacert /etc/kubernetes/ssl/kube-ca.pem --cert /etc/kubernetes/ssl/kube-etcd-x-x-x-x.pem --key /etc/kubernetes/ssl/kube-etcd-x-x-x-x-key.pem https://x.x.x.x:2379/health
</code></pre>
<p>Checking my self signed certificates hashes</p>
<pre><code># md5sum /etc/kubernetes/ssl/kube-ca.pem
f5b358e771f8ae8495c703d09578eb3b /etc/kubernetes/ssl/kube-ca.pem
# for key in $(cat /home/kube/cluster.rkestate | jq -r '.desiredState.certificatesBundle | keys[]'); do echo $(cat /home/kube/cluster.rkestate | jq -r --arg key $key '.desiredState.certificatesBundle[$key].certificatePEM' | sed '$ d' | md5sum) $key; done | grep kube-ca
f5b358e771f8ae8495c703d09578eb3b - kube-ca
</code></pre>
<pre><code>versions on my master node
Debian GNU/Linux 10
rke version v1.3.1
docker version Version: 20.10.8
kubectl v1.21.5
v1.21.5-rancher1-1
</code></pre>
<p>I think my <code>cluster.rkestate</code> has gone bad. Are there any other locations where the rke tool checks for certificates?
Currently I cannot do anything with this production cluster, and I want to avoid downtime. I experimented with different scenarios on a testing cluster; as a last resort I could recreate the cluster from scratch, but maybe I can still fix it...
<code>rke remove</code> && <code>rke up</code></p>
| <p><code>rke util get-state-file</code> helped me reconstruct the bad cluster.rkestate file,
and I was then able to successfully run <code>rke up</code> and add a new master node to fix the whole situation.</p>
|
<p>What happened:
We're on AKS with RBAC enabled. Executing any kubectl/helm command that connects to the k8s cluster, I have to reauthenticate. Output:</p>
<p><code>To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code XXXX to authenticate.</code>
This succeeds but then when I execute the next kubectl command I have to reauthenticate again.</p>
<p>What you expected to happen:
Authenticate and use that token for more than one command.</p>
<p>How to reproduce it (as minimally and precisely as possible):</p>
<p>Get credentials and then execute any kubectl command.</p>
<p>Anything else we need to know?:</p>
<p>Environment:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e199641833566e5d052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:30:33Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>In one of the GitHub answers (<a href="https://github.com/Azure/AKS/issues/1057" rel="nofollow noreferrer">https://github.com/Azure/AKS/issues/1057</a>), I found that we have to remove config.lock to resolve the issue, but there is no such file in my case.
Help appreciated!</p>
| <p>I recommend using <a href="https://learn.microsoft.com/en-us/cli/azure/authenticate-azure-cli" rel="nofollow noreferrer"><code>az login</code></a>. An auth token is cached locally in your environment and should give you access to your Kubernetes cluster.</p>
<p>Then follow the Microsoft <a href="https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough#connect-to-the-cluster" rel="nofollow noreferrer">docs</a> to install <code>kubectl</code>.</p>
<pre><code>az aks install-cli
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
</code></pre>
|
<p>I recently deployed Minio stand-alone on a K0s pod. I can successfully use mc on my laptop to authenticate and create a bucket on my pod’s ip:9000.</p>
<p>But when I try to access the web console and login I get a POST error to ip:9000 and I am unable to login.</p>
<p>Would anyone know what’s causing this?</p>
| <p>I've just started a minio container to verify this and it fact there are two ports you need to publish which are <code>9000</code> and <code>9001</code>.</p>
<p>You can reach the admin console on port <code>9001</code> and the API on port <code>9000</code>, hence your <code>mc</code> command which targets port <code>9000</code> works but trying to login on port <code>9000</code> fails.</p>
<p><a href="https://i.stack.imgur.com/eT5N3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eT5N3.png" alt="MinIO admin console on port 9001" /></a></p>
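<p>Translated to your Kubernetes setup, that means the Service in front of the MinIO pod has to expose both ports; a rough sketch (the names, selector and service type are assumptions):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: minio
spec:
  selector:
    app: minio
  ports:
  - name: api
    port: 9000
    targetPort: 9000
  - name: console
    port: 9001
    targetPort: 9001
</code></pre>
<p>Depending on your MinIO version you may also need to start the server with <code>--console-address ":9001"</code> so the console port is fixed instead of randomly chosen.</p>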
<h2>Edit</h2>
<p>Now that I understand the problem better thanks to your comments, I've tested on my Docker setup what happens when you log in. In fact there is a POST request happening when clicking on Login, but it goes to port 9001, not 9000, so it seems your webconsole somehow issues the request to the wrong port.</p>
<p>Here a screenshot of the Network tab in my DevTools showing the request that's being issued when I press Login.
<a href="https://i.stack.imgur.com/UdDfQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UdDfQ.png" alt="Chrome Dev Tools: Login request" /></a></p>
<p>I've copied the <code>curl</code> for this request from the DevTool and added the <code>-i</code> flag so you can see the HTTP response code. You could try this with your appropriate <code>accessKey</code> and <code>secretKey</code> of course.</p>
<pre class="lang-sh prettyprint-override"><code>curl -i 'http://localhost:9001/api/v1/login' -H 'Connection: keep-alive' -H 'User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.83 Safari/537.36' -H 'Content-Type: application/json' -H 'Accept: */*' -H 'Sec-GPC: 1' -H 'Origin: http://localhost:9001' -H 'Sec-Fetch-Site: same-origin' -H 'Sec-Fetch-Mode: cors' -H 'Sec-Fetch-Dest: empty' -H 'Referer: http://localhost:9001/login' -H 'Accept-Language: en-US,en;q=0.9' -H 'Cookie: PGADMIN_LANGUAGE=en' --data-raw '{"accessKey":"minio-root-user","secretKey":"minio-root-password"}' --compressed
</code></pre>
<p>Expected result:</p>
<pre><code>HTTP/1.1 204 No Content
Server: MinIO Console
Set-Cookie: token=AFMyDDQmtaorbMvSfaSQs5N+/9pYgK/rartN8SrGawE3ovm9AoJ5zz/eC9tnY7fRy5k4LChYcJKvx0rWyHr/+4XN2JnqdsT6VLDGI0cTasWiOo87ggj5WEv/cK4OyFlWiv5cJA8GUgQhVmYSk7MqPCVnBlfrvXhF7FaXhy85zAvzuGnExaBv9/8vZFs2LDiDF/9RX3Skb2gzIPIKije0++q4mwllluLIrhxyGrDgO16u33fWnPMjtbmGvsaOJAjx178h19BxbVnacBFyUv7ep+TFQ3xTRFfHefIMQK9lulMZOb5/oZUgEPolZpiB1Z9IJoNHVnUDJRnIIQXjv0bti/Wkz5RnWSoFqDjUWBopqFOuWYM/GMDCVxMrXJgQ/iDSg12b0uo6sOFbtvokyccUHKp5TtEznadzMf3Ga9iiZ4WAAXqONTC4ACMGaHxgUPVD7NvlYkyOlb/dPL75q0g3Qj+hiI5FELqPLEXgXMFHAi0EQDsNo4IXeqlxTJpxQYTUXRgrx1Kg6IlRJ5P9eIKwnj/eXmvXe4lvQSXR7iwEviBa1NVl1alLP0d7eib75IfhiMo7Hvyywg==; Path=/; Expires=Sat, 26 Mar 2022 13:23:34 GMT; Max-Age=3600; HttpOnly; SameSite=Lax
Vary: Accept-Encoding
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Xss-Protection: 1; mode=block
Date: Sat, 26 Mar 2022 12:23:34 GMT
Connection: close
</code></pre>
|
<p>I have a avi Kubernetes ingress and want to redirect / to /ui . Is it possible to do on Ingress routing rules.</p>
<p>poc.xxx.com/ --> How to redirect it to poc.xxx.com/ui</p>
<p>poc.xxx.com/ui --> ui-service</p>
<p>poc.xxx.com/backend --> backend-service</p>
<p>My ingress Yaml</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: poc-ingress
spec:
rules:
- host: poc.xxx.com
http:
paths:
- path: /ui
pathType: Prefix
backend:
service:
name: ui-service
port:
number: 443
- path: /backend
pathType: Prefix
backend:
service:
name: backend-service
port:
number: 443
</code></pre>
| <p>What if you do something like this: any request at <code>/</code> will go to the <code>ui</code> service.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: poc-ingress
spec:
rules:
- host: poc.xxx.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: ui-service
port:
number: 443
- path: /backend
pathType: Prefix
backend:
service:
name: backend-service
port:
number: 443
</code></pre>
<p>However still if you are looking for a redirect solution you can follow below option</p>
<p>Add this annotation to the ingress:</p>
<pre><code>nginx.ingress.kubernetes.io/server-snippet: |
location ~ / {
rewrite / https://test.example.com/ui permanent;
}
</code></pre>
<p>If a request comes in at <code>/</code>, it will get redirected to another domain or to the <code>ui</code> path, as you wish.</p>
<p>You can also create two ingresses like this: the first one handles <code>backend</code> and the redirect from <code>/</code>, while the other one handles <code>ui</code>:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: poc-ingress
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      location ~ / {
        rewrite / https://test.example.com/ui permanent;
      }
spec:
rules:
- host: poc.xxx.com
http:
paths:
- path: /backend
pathType: Prefix
backend:
service:
name: backend-service
port:
number: 443
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ui-ingress
spec:
rules:
- host: poc.xxx.com
http:
paths:
- path: /ui
pathType: Prefix
backend:
service:
name: ui-service
port:
number: 443
</code></pre>
<p>Do not forget to use the ingress class annotation in ingress.</p>
|
<h1>Context</h1>
<p>Say we have <code>d.yaml</code> in which a deployment, whose strategy is <code>RollingUpdate</code>, is defined.</p>
<p>We first create a deployment:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl apply -f d.yaml
</code></pre>
<p>After some time, we modify <code>d.yaml</code> and re-apply it to update the deployment.</p>
<pre class="lang-sh prettyprint-override"><code>vi d.yaml
kubectl apply -f d.yaml
</code></pre>
<p>This starts rolling out a new replicaset <code>R_new</code>.</p>
<p>Normally, the old (previous) replicaset <code>R_old</code> is <em>killed</em> only after <code>R_new</code> has successfully been rolled out.</p>
<h1>Question (tl;dr)</h1>
<p>Is it possible to kill <code>R_old</code> without waiting for rolling out <code>R_new</code> to complete?</p>
<p>By "kill", I mean completely stopping a replicaset; it should never restart. (So <code>kubectl delete replicaset</code> didn't help.)</p>
<h1>Question (long)</h1>
<p>In my specific situation, my containers connect to an external database. This single database is also connected from many containers managed by other teams.</p>
<p>If <a href="https://stackoverflow.com/questions/30778015/how-to-increase-the-max-connections-in-postgres">the maximum number of connections allowed</a> is already reached, new containers associated with <code>R_new</code> fail to start (i.e. <code>CrashLoopBackOff</code>).</p>
<p>If I could forcefully kill <code>R_old</code>, the number of connections would be lowered by <code>N</code> where <code>N</code> is the number of replicas, and thus <code>R_new</code>'s containers would successfully start.</p>
<p><strong>FAQ:</strong></p>
<p>Q. Why not temporarily stop using <code>RollingUpdate</code> strategy?</p>
<p>A. Actually I have no permission to edit <code>d.yaml</code>. It is edited by CI/CD.</p>
<p>Q. Why not just make the maximum number of connections larger?</p>
<p>A. I have no permission for the database either...</p>
| <blockquote>
<blockquote>
<p>Is it possible to kill R_old without waiting for rolling out R_new to complete? By "kill", I mean completely stopping a replicaset; it should never restart. (So kubectl delete replicaset didn't help.)</p>
</blockquote>
</blockquote>
<ol>
<li>Make the changes to the deployment</li>
<li>Scale the deployment down to <code>replicas=0</code>, which is as good as stopping the old replicaset</li>
<li>Scale the deployment back up to the desired number of <code>replicas</code>; a new <code>replicaset</code> will be created with the new configuration changes from the deployment.</li>
</ol>
<p><em>Steps 1 and 2 can be interchanged based on the requirement; a minimal command sketch is shown below.</em></p>
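<p>For example, assuming the deployment is named <code>my-deployment</code> with 3 desired replicas (both values are hypothetical), the sequence could look like this:</p>
<pre class="lang-sh prettyprint-override"><code># step 1: apply the modified manifest (this starts rolling out R_new)
kubectl apply -f d.yaml

# step 2: stop every running replica, including those of R_old
kubectl scale deployment my-deployment --replicas=0

# step 3: scale back up; only the new replicaset comes up
kubectl scale deployment my-deployment --replicas=3
</code></pre>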
|
<p>I'm trying to deploy a simple .NET App in local kubernetes cluster (Kind) for testing purposes. When a deployment is applied, a pod doesn't start with an error. But the image is built well as a container works well if started locally in Docker.</p>
<pre><code>NAME READY STATUS RESTARTS AGE
orderproducer-68d5ff7944-d2d89 0/1 Error 4 103s
</code></pre>
<pre><code>kubectl logs -l app=orderproducer
Could not execute because the application was not found or a compatible .NET SDK is not installed.
Possible reasons for this include:
* You intended to execute a .NET program:
The application 'OrderProducer.dll' does not exist.
* You intended to execute a .NET SDK command:
It was not possible to find any installed .NET SDKs.
Install a .NET SDK from:
https://aka.ms/dotnet-download
</code></pre>
<p>It's weird, because if I start a docker container from the same image locally (not in a cluster) it works well. Besides, I ran bash on that container and ensured that <code>OrderProducer.dll</code> was really present in the <code>/app</code> folder (which is the workdir).</p>
<pre><code>xxx@xxx:/mnt/c/Users/xxx$ docker run --name test6 orderproducer:latest
Order Producer has started!
Kafka broker: 127.0.0.1:9092
</code></pre>
<p>Do you have any ideas what my mistake is? Why does it run in Docker, but not in a K8s pod? I've already spent about 3 hours trying to figure it out, but still haven't. Many thanks in advance.</p>
<p>Here are some artifacts I used.</p>
<p>Dockerfile:</p>
<pre><code>FROM mcr.microsoft.com/dotnet/runtime:6.0 AS base
WORKDIR /app
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["OrderProducer/OrderProducer.csproj", "OrderProducer/"]
COPY ["Common/Common.csproj", "Common/"]
RUN dotnet restore "OrderProducer/OrderProducer.csproj"
COPY . .
WORKDIR "/src/OrderProducer"
RUN dotnet build "OrderProducer.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "OrderProducer.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "OrderProducer.dll"]
</code></pre>
<p>Then I build an image and make it accessible for kind:</p>
<pre><code>docker build -f OrderProducer/Dockerfile -t orderproducer:latest .
kind load docker-image orderproducer:latest
</code></pre>
<p>Then I apply a deployment:</p>
<pre><code>kubectl apply -f orderproducer-deployment.yml
</code></pre>
<p>orderproducer-deployment.yml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: orderproducer
spec:
replicas: 1
selector:
matchLabels:
app: orderproducer
template:
metadata:
labels:
app: orderproducer
spec:
containers:
- name: orderproducer
image: orderproducer:latest
imagePullPolicy: Never
resources:
limits:
memory: "128Mi"
cpu: "500m"
</code></pre>
| <p>The reason was that when a volume was mounted to <code>/app</code>, it wiped out all of the container's <code>/app</code> content.</p>
<p>I fixed the issue by editing the following deployment's part:</p>
<pre><code>volumeMounts:
- name: appsettings-volume
mountPath: /app/appSettings.json
subPath: appSettings.json
</code></pre>
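<p>For completeness, a matching <code>volumes</code> entry could look like the sketch below (assuming <code>appsettings-volume</code> is backed by a ConfigMap; the ConfigMap name is hypothetical):</p>
<pre><code>volumes:
  - name: appsettings-volume
    configMap:
      name: orderproducer-appsettings # hypothetical ConfigMap holding appSettings.json
</code></pre>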
|
<p>I have a namespace where new short-lived pods (< 1 minute) are created constantly by Apache Airflow. I want that all those new pods are annotated with <code>aws.amazon.com/cloudwatch-agent-ignore: true</code> automatically so that no CloudWatch metrics (container insights) are created for those pods.</p>
<p>I know that I can achieve that from airflow side with <a href="https://airflow.apache.org/docs/apache-airflow/stable/kubernetes.html#pod-mutation-hook" rel="nofollow noreferrer">pod mutation hook</a> but for the sake of the argument let's say that <strong>I have no control over the configuration of that airflow instance</strong>.</p>
<p>I have seen <code>MutatingAdmissionWebhook</code> and it seem that could do the trick, but it seems that it's considerable effort to set up. So I'm looking for a more of the shelf solution, I want to know if there is some "standard" admission controller that can do this specific use case, without me having to deploy a web server and implement the api required by <code>MutatingAdmissionWebhook</code>.</p>
<p><strong>Is there any way to add that annotation from kubernetes side at pod creation time?</strong> The annotation must be there "from the beginning", not added 5 seconds later, otherwise the cwagent might pick it between the pod creation and the annotation being added.</p>
| <p>To clarify I am posting community Wiki answer.</p>
<p>You had to use the <a href="https://github.com/aws/amazon-cloudwatch-agent/blob/dd1be96164c2cd6226a33c8cf7ce10a7f29547cf/plugins/processors/k8sdecorator/stores/podstore.go#L31" rel="nofollow noreferrer"><code>aws.amazon.com/cloudwatch-agent-ignore: true</code></a> annotation. This means that a pod carrying it will be ignored by <code>amazon-cloudwatch-agent</code> / <code>cwagent</code>.</p>
<p>Here is the excerpt of your solution how to add this annotation to Apache Airflow:</p>
<blockquote>
<p>(...) In order to force Apache Airflow to add the
<code>aws.amazon.com/cloudwatch-agent-ignore: true</code> annotation to the task/worker pods and to the pods created by the <code>KubernetesPodOperator</code> you will need to add the following to your helm <code>values.yaml</code> (assuming that you are using the "official" helm chart for airflow 2.2.3):</p>
</blockquote>
<pre class="lang-yaml prettyprint-override"><code>airflowPodAnnotations:
aws.amazon.com/cloudwatch-agent-ignore: "true"
airflowLocalSettings: |-
def pod_mutation_hook(pod):
pod.metadata.annotations["aws.amazon.com/cloudwatch-agent-ignore"] = "true"
</code></pre>
<blockquote>
<p>If you are not using the helm chart then you will need to change the <code>pod_template_file</code> yourself to add the <code>annotation</code> and you will also need to modify the <code>airflow_local_settings.py</code> to include the <code>pod_mutation_hook</code>.</p>
</blockquote>
<p><a href="https://stackoverflow.com/questions/71438495/how-to-prevent-cloudwatch-container-insights-metrics-from-short-lived-kubernetes/71438496#71438496">Here</a> is the link to your whole answer.</p>
|
<p>Our GKE Autopilot cluster was recently upgraded to version 1.21.6-gke.1503, which apparently causes the <code>cluster-autoscaler.kubernetes.io/safe-to-evict=false</code> annotation to be banned.</p>
<p>I totally get this for deployments, as Google doesn't want a deployment preventing scale-down, but for jobs I'd argue this annotation makes perfect sense in certain cases. We start complex jobs that start and monitor other jobs themselves, which makes it hard to make them restart-resistant given the sheer number of moving parts.</p>
<p><strong>Is there any way to make it as unlikely as possible for job pods to be restarted/moved around when using Autopilot?</strong> Prior to switching to Autopilot, we used to make sure our jobs filled a single node by requesting all of its available resources; combined with a Guaranteed QoS class, this made sure the only way for a pod to be evicted was if the node somehow failed, which almost never happened. Now all we seem to have left is the Guaranteed QoS class, but that doesn't prevent pods from being evicted.</p>
| <p>At this point the only thing left is to ask for this feature to be brought back on <a href="https://issuetracker.google.com" rel="nofollow noreferrer">IssueTracker</a> - raise a new feature request and hope for the best.</p>
<p>Also link to this thread, as it contains quite a lot of troubleshooting and may be useful.</p>
|
<p><a href="https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/" rel="nofollow noreferrer">Since Kubernetes 1.20</a>, Docker support is deprecated and will be totally removed from 1.24. We use GKE to manage Kubernetes so the upgrade will be done automatically.</p>
<p>As far as I've read, developers should not have been impacted but we made tests in Kubernetes 1.23 to check that all is OK and it seems we have some issues with a microservice using Testcontainers :</p>
<pre><code>09:59:44.578 [testcontainers-ryuk] WARN org.testcontainers.utility.ResourceReaper - Can not connect to Ryuk at localhost:49153
java.net.ConnectException: Connection refused (Connection refused)
at java.base/java.net.PlainSocketImpl.socketConnect(Native Method)
at java.base/java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:399)
at java.base/java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:242)
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:224)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)
at java.base/java.net.Socket.connect(Socket.java:591)
at org.testcontainers.utility.ResourceReaper.lambda$null$3(ResourceReaper.java:194)
at org.rnorth.ducttape.ratelimits.RateLimiter.doWhenReady(RateLimiter.java:27)
at org.testcontainers.utility.ResourceReaper.lambda$start$4(ResourceReaper.java:190)
at java.base/java.lang.Thread.run(Thread.java:835)
</code></pre>
<p>This is not reproducible on Kubernetes 1.19, where Docker is not deprecated nor removed.</p>
<p>We tried to disable Ryuk in <code>pom.xml</code> (as indicated for this error in a <a href="https://github.com/testcontainers/testcontainers-java/issues/3609" rel="nofollow noreferrer">Testcontainers issue</a>) but it has no effect :</p>
<pre><code><plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-failsafe-plugin</artifactId>
<executions>
<execution>
<goals>
<goal>verify</goal>
<goal>integration-test</goal>
</goals>
<configuration>
<environmentVariables>
<TESTCONTAINERS_RYUK_DISABLED>true</TESTCONTAINERS_RYUK_DISABLED>
</environmentVariables>
</configuration>
</execution>
</executions>
</plugin>
</code></pre>
<p>To reproduce locally, we tried to launch IT with testcontainers in a Minikube with Kubernetes 1.23 and Containerd as container runtime (no docker env):</p>
<pre><code>minikube start --kubernetes-version v1.23.0
--network-plugin=cni
--enable-default-cni
--container-runtime=containerd
--bootstrapper=kubeadm
</code></pre>
<p>But it leads to this error when launching <code>mvn -T 2 failsafe:integration-test failsafe:verify</code> :</p>
<pre><code>[ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.87 s <<< FAILURE! - in com.ggl.merch.kafka.it.MerchandisingConsumerIT
[ERROR] should_consume_merchandising_message_and_process_record Time elapsed: 0.012 s <<< ERROR!
java.lang.IllegalStateException: Could not find a valid Docker environment. Please see logs and check configuration
at com.ggl.merch.kafka.it.MerchandisingConsumerIT.<init>(MerchandisingConsumerIT.java:91)
</code></pre>
<p>Anyone already had the same problem?</p>
<p>Thank you by advance!</p>
| <p><strong>Solved</strong> by putting this in our configuration:</p>
<pre><code>- name: "TESTCONTAINERS_HOST_OVERRIDE"
valueFrom:
fieldRef:
fieldPath: status.hostIP
</code></pre>
<p>Following <a href="https://www.testcontainers.org/features/configuration/" rel="nofollow noreferrer">this doc</a>:</p>
<blockquote>
<p><strong><code>TESTCONTAINERS_HOST_OVERRIDE</code></strong><br />
Docker's host on which ports are exposed.<br />
Example: <code>docker.svc.local</code></p>
</blockquote>
|
<p>We have a 2 node K3S cluster with one master and one worker node and would like "reasonable availability" in that, if one or the other nodes goes down the cluster still works i.e. ingress reaches the services and pods which we have replicated across both nodes. We have an external load balancer (F5) which does active health checks on each node and only sends traffic to up nodes.</p>
<p><strong>Unfortunately, if the master goes down the worker will not serve any traffic (ingress).</strong></p>
<p>This is strange because all the service pods (which ingress feeds) on the worker node are running.</p>
<p>We suspect the reason is that key services such as the <code>traefik</code> ingress controller and <code>coredns</code> are only running on the master.</p>
<p>Indeed when we simulated a master failure, restoring it from a backup, none of the pods on the worker could do any DNS resolution. Only a reboot of the worker solved this.</p>
<p>We've tried to increase the number of replicas of the <code>traefik</code> and <code>coredns</code> deployment which helps a bit BUT:</p>
<ul>
<li>This gets lost on the next reboot</li>
<li>The worker still functions when the master is down but every 2nd ingress request fails
<ul>
<li>It seems the worker still blindly (round-robin) sends traffic to a non-existant master</li>
</ul>
</li>
</ul>
<p>We would appreciate some advice and explanation:</p>
<ul>
<li>Should not key services such as <code>traefik</code> and <code>coredns</code> be DaemonSets by default?</li>
<li>How can we change the service description (e.g. replica count) in a persistent way that does not get lost</li>
<li>How can we get intelligent traffic routing with ingress to only "up" nodes</li>
<li>Would it make sense to make this a 2-master cluster?</li>
</ul>
<p>UPDATE: Ingress Description:</p>
<pre><code>kubectl describe ingress -n msa
Name: msa-ingress
Namespace: msa
Address: 10.3.229.111,10.3.229.112
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
tls-secret terminates service.ourdomain.com,node1.ourdomain.com,node2.ourdomain.com
Rules:
Host Path Backends
---- ---- --------
service.ourdomain.com
/ gateway:8443 (10.42.0.100:8443,10.42.1.115:8443)
node1.ourdomain.com
/ gateway:8443 (10.42.0.100:8443,10.42.1.115:8443)
node2.ourdomain.com
/ gateway:8443 (10.42.0.100:8443,10.42.1.115:8443)
Annotations: kubernetes.io/ingress.class: traefik
traefik.ingress.kubernetes.io/router.middlewares: msa-middleware@kubernetescrd
Events: <none>
</code></pre>
| <p>Your goals seem achievable with a few Kubernetes built-in features (not specific to Traefik):</p>
<ol>
<li><p>Ensure you have one replica of the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="nofollow noreferrer">Ingress Controller's Pod</a> on each Node => use a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/" rel="nofollow noreferrer">DaemonSet</a> as the installation method (a K3s-specific sketch is shown at the end of this answer)</p>
</li>
<li><p>To fix the error from the Ingress description, set the correct load balancer IP of the Ingress Controller's Service.</p>
</li>
<li><p>Set the external traffic policy to "Local" - this ensures that traffic is routed to node-local endpoints only (the Controller Pods running on the Node that accepts traffic from the Load Balancer)</p>
</li>
</ol>
<blockquote>
<p><code>externalTrafficPolicy</code> - denotes if this Service desires to route external traffic to node-local or cluster-wide endpoints. There are two available options: <code>Cluster</code> (default) and <code>Local</code>. <code>Cluster</code> obscures the client source IP and may cause a second hop to another node, but should have good overall load-spreading. <code>Local</code> preserves the client source IP and avoids a second hop for LoadBalancer and NodePort type Services, but risks potentially imbalanced traffic spreading.</p>
</blockquote>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: example-service
spec:
selector:
app: example
ports:
- port: 8765
targetPort: 9376
externalTrafficPolicy: Local
type: LoadBalancer
</code></pre>
<ol start="5">
<li>Service name of Ingress Backend should use <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/" rel="nofollow noreferrer">external Traffic Policy</a> <code>externalTrafficPolicy: Local</code> too.</li>
</ol>
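<p>Regarding point 1 and keeping the change persistent on K3s: the bundled Traefik can be customized through a <code>HelmChartConfig</code>, which survives reboots. A minimal sketch (assuming the default K3s Traefik chart, which accepts a <code>deployment.kind</code> value) could be:</p>
<pre><code>apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    deployment:
      # run one Traefik pod per node instead of a single Deployment replica
      kind: DaemonSet
</code></pre>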
|
<p>I have a running pod that was created with the following <code>pod-definition.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: microservice-one-pod-name
labels:
app: microservice-one-app-label
type: front-end
spec:
containers:
- name: microservice-one
image: vismarkjuarez1994/microserviceone
ports:
- containerPort: 2019
</code></pre>
<p>I then created a Service using the following <code>service-definition.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>kind: Service
apiVersion: v1
metadata:
name: microserviceone-service
spec:
ports:
- port: 30008
targetPort: 2019
protocol: TCP
selector:
app: microservice-one-app-label
type: NodePort
</code></pre>
<p>I then ran <code>kubectl describe node minikube</code> to find the Node IP I should be connecting to -- which yielded:</p>
<pre class="lang-sh prettyprint-override"><code>Addresses:
InternalIP: 192.168.49.2
Hostname: minikube
</code></pre>
<p>But I get no response when I run the following curl command:</p>
<pre class="lang-sh prettyprint-override"><code>curl 192.168.49.2:30008
</code></pre>
<p>The request also times out when I try to access <code>192.168.49.2:30008</code> from a browser.</p>
<p>The pod logs show that the container is up and running. Why can't I access my Service?</p>
| <p>The problem is that you are trying to access your service at the <code>port</code> parameter, which is the port the service exposes inside the cluster, even when using the <code>NodePort</code> type.</p>
<p>The parameter you were looking for is called <code>nodePort</code>, which can optionally be specified together with <code>port</code> and <code>targetPort</code>. Quoting the <a href="https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>By default and for convenience, the Kubernetes control plane will
allocate a port from a range (default: <code>30000-32767</code>)</p>
</blockquote>
<p>Since you didn't specify the <code>nodePort</code>, one in the range was automatically picked up. You can check which one by:</p>
<pre><code>kubectl get svc -owide
</code></pre>
<p>And then access your service externally at that port.</p>
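<p>For example, to find the automatically allocated port for the Service from the question and test it (node IP taken from the question):</p>
<pre class="lang-sh prettyprint-override"><code># show the nodePort that was picked from the 30000-32767 range
kubectl get svc microserviceone-service -o jsonpath='{.spec.ports[0].nodePort}'

# then access the service at the node IP and that port
curl 192.168.49.2:<nodePort-from-above>
</code></pre>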
<p>As an alternative, you can change your service definition to be something like:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: microserviceone-service
spec:
ports:
- port: 30008
targetPort: 2019
nodePort: 30008
protocol: TCP
selector:
app: microservice-one-app-label
type: NodePort
</code></pre>
<p>But keep in mind that you may need to delete your Service and create it again in order to change the <code>nodePort</code> allocated.</p>
|
<p>Is it possible to synchronize a GitHub repository for DAGs with an Azure storage account?<br />
I want that every time I put the DAGs in the GitHub repository, they appear on the Azure file share.</p>
| <p><em><strong>Airflow will not create the shared filesystem if you specify a Git repository. Instead, it will clone the DAG files to each of the nodes, and sync them periodically with the remote repository.</strong></em></p>
<p>You can refer to this <a href="https://docs.bitnami.com/azure-templates/infrastructure/apache-airflow/configuration/sync-dags/" rel="nofollow noreferrer"><em>document</em></a> to synchronize DAGs with a remote Git repository. Also refer to this <a href="https://www.c-sharpcorner.com/article/azure-devops-copy-files-from-git-repository-to-azure-storage-account/" rel="nofollow noreferrer"><em>document</em></a>, which will help you copy files from a Git repository to an Azure Storage Account using Azure DevOps.</p>
|
<p>I have a running pod and I use <code>kubectl attach -ti mypod</code> to give him input.</p>
<p>But then, I would like to "detach" from my pod and return to my terminal without terminating my pod.
How can I do that?</p>
| <p>You can kill the <code>kubectl attach -it ..</code> process without killing the pod:</p>
<pre><code>host$ kubectl attach -it podname
pod$ do something in pod
</code></pre>
<p>Open another terminal, find your <code>kubectl attach</code> process, and then kill it, something like:</p>
<pre><code>host$ ps x | grep "kubectl attach"
> 1234 pts/22 Sl+ 0:00 kubectl attach -it podname
host$ kill 1234
</code></pre>
|
<p>We have created a few month ago controller which runs great using kubebuilder.</p>
<p>Few weeks ago we added a “listener” to a secret which when the secret is changing (secret properties)
The <strong>reconcile</strong> should be invoked, the problem is that it is <strong>sometimes working</strong> and <strong>sometimes not</strong>, (you change the secret apply it and the reconcile doesn’t happens) , we are doing it for the exact <strong>same</strong> secret file.</p>
<p>We tried few days to find the root cause without success, (we change the <code>k8s.io/client-go v0.23.4</code> and also to <code>v0.22.3</code> and now <code>v0.22.1</code> that is only working.
Any idea what the issue could be? any hint will be helpful. or Any other way to do it that we can try out.</p>
<pre><code>func (r *vtsReconciler) SetupWithManager(mgr ctrl.Manager) error {
manager := ctrl.NewControllerManagedBy(mgr).
For(&vts.str).
WithEventFilter(predicate.Or(predicate.AnnotationChangedPredicate{}))
manager = manager.Watches(&source.Kind{Type: &v1.Secret{}}, handler.EnqueueRequestsFromMapFunc(func(a client.Object) []reconcile.Request {
return r.SecretRequests.SecretFinder(a.GetName())
}))
return manager.Complete(r)
}
func (secm *SecretMapper) SecretFinder(name string) []reconcile.Request {
v := cli.ObjectKey{Name: name}
return secm.SecMap[v.String()]
}
</code></pre>
| <p>Most likely the issue is that <code>WithEventFilter</code> applies to all objects watched by the controller. The generation is auto-incremented for CRDs, but this doesn't hold for all resource types.</p>
<p>From the GenerationChangedPredicate docs:</p>
<pre><code>// Caveats:
//
// * The assumption that the Generation is incremented only on writing to the spec does not hold for all APIs.
// E.g For Deployment objects the Generation is also incremented on writes to the metadata.annotations field.
// For object types other than CustomResources be sure to verify which fields will trigger a Generation increment when they are written to.
</code></pre>
<p>You can check this by creating or updating a secret: you will see that there is no generation set (at least not on my local k3d cluster).</p>
<p>Most likely it works on creation because the controller initially syncs existing resources with the cluster.</p>
<p>To solve it you can use:</p>
<pre class="lang-golang prettyprint-override"><code>func (r *vtsReconciler) SetupWithManager(mgr ctrl.Manager) error {
manager := ctrl.NewControllerManagedBy(mgr).
        For(&vts.str, builder.WithPredicates(predicate.Or(predicate.GenerationChangedPredicate{}, predicate.AnnotationChangedPredicate{}))) // builder is "sigs.k8s.io/controller-runtime/pkg/builder"
manager = manager.Watches(&source.Kind{Type: &v1.Secret{}}, handler.EnqueueRequestsFromMapFunc(func(a client.Object) []reconcile.Request {
return r.SecretRequests.FindForSecret(a.GetNamespace(), a.GetName())
}))
return manager.Complete(r)
}
</code></pre>
<p>which should apply the predicates only to your custom resource.</p>
|
<p>Is there any way to configure promtail to send logs to loki via https-ingress?</p>
<p><code>promtail</code> ---> <code>https-ingress</code> ---> <code>loki</code></p>
<p>I used this helm chart <a href="https://github.com/grafana/helm-charts/tree/main/charts/promtail" rel="nofollow noreferrer">promtail</a> and configured <a href="https://github.com/grafana/helm-charts/blob/main/charts/promtail/values.yaml#L238" rel="nofollow noreferrer">loki url</a> as <code>http://gateway.loki.monitoring.example.com:80/loki/api/v1/push</code>. After I deploy <code>promtail</code> chart I see below errors in <code>promtail</code> pod</p>
<pre><code>level=error ts=2022-03-28T14:10:23.740581978Z caller=client.go:360 component=client host=gateway.loki.monitoring.example.com:80 msg="f
inal error sending batch" status=308 error="server returned HTTP status 308 Permanent Redirect (308): <html>"
</code></pre>
<p>I even specified https in loki url as <code>https://gateway.loki.monitoring.example.com:80/loki/api/v1/push</code> but still failing</p>
<pre><code>level=warn ts=2022-03-28T14:27:47.976570998Z caller=client.go:349 component=client host=gateway.loki.monitoring.example:80 msg="er
ror sending batch, will retry" status=-1 error="Post \"https://gateway.loki.monitoring.example.com:80/loki/api/v1/push\": http: server
gave HTTP response to HTTPS client"
</code></pre>
<p>I found this config <a href="https://grafana.com/docs/loki/latest/installation/helm/#run-loki-behind-https-ingress" rel="nofollow noreferrer">https://grafana.com/docs/loki/latest/installation/helm/#run-loki-behind-https-ingress</a>, but it is outdated</p>
<p>NOTE:</p>
<ul>
<li>I have not configured any https at loki side.</li>
<li>Configured <a href="https://github.com/grafana/helm-charts/tree/main/charts/loki-distributed" rel="nofollow noreferrer"><code>loki-distributed</code></a> chart's ingress like below(and rest ingress config are default)</li>
</ul>
<pre><code>...
ingress:
# -- Specifies whether an ingress for the gateway should be created
enabled: true
# -- Ingress Class Name. MAY be required for Kubernetes versions >= 1.18
ingressClassName: monitoring-ingress
# -- Annotations for the gateway ingress
annotations:
cert-manager.io/cluster-issuer: monitoring-cluster-issuer
# -- Hosts configuration for the gateway ingress
hosts:
- host: gateway.loki.monitoring.example.com
paths:
- path: /
# -- pathType (e.g. ImplementationSpecific, Prefix, .. etc.) might also be required by some Ingress Controllers
pathType: Prefix
# -- TLS configuration for the gateway ingress
tls:
- secretName: loki-gateway-tls-certs
hosts:
- gateway.loki.monitoring.example.com
...
</code></pre>
<p>Did I miss any ingress config at loki?</p>
| <p>After playing with it for some time, I understood that I need to remove the port and specify <code>https</code> in the Loki URL. It should look like below:</p>
<pre><code>https://gateway.loki.monitoring.example.com/loki/api/v1/push
</code></pre>
|
<p>I am setting up External-DNS with Terraform. <a href="https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/azure.md" rel="nofollow noreferrer">Per the documentation</a>, I have to manually create an <code>azure.json</code> file and mount it as a secret volume. The directions also state:</p>
<blockquote>
<p>The Azure DNS provider expects, by default, that the configuration
file is at /etc/kubernetes/azure.json</p>
</blockquote>
<pre><code>{
"tenantId": "01234abc-de56-ff78-abc1-234567890def",
"subscriptionId": "01234abc-de56-ff78-abc1-234567890def",
"resourceGroup": "MyDnsResourceGroup",
"aadClientId": "01234abc-de56-ff78-abc1-234567890def",
"aadClientSecret": "uKiuXeiwui4jo9quae9o"
}
</code></pre>
<p>I then run <code>kubectl create secret generic azure-config-file --from-file=/local/path/to/azure.json</code> to mount the secret as a file.</p>
<p>The problem is that those values are dynamic, and I need to do this automatically per a CI/CD pipeline. I'm using Terraform Kubernetes resources, and here I've used the <code>kubernetes_secret</code> resource.</p>
<pre><code>resource "kubernetes_secret" "azure_config_file" {
metadata {
name = "azure-config-file"
}
data = {
tenantId = data.azurerm_subscription.current.tenant_id
subscriptionId = data.azurerm_subscription.current.subscription_id
resourceGroup = azurerm_resource_group.k8s.name
aadClientId = azuread_application.sp_externaldns_connect_to_dns_zone.application_id
aadClientSecret = azuread_application_password.sp_externaldns_connect_to_dns_zone.value
}
depends_on = [
kubernetes_namespace.external_dns,
]
}
</code></pre>
<p>The secret gets mounted, but the pod never sees it and it results in a crashLoopBackoff. This may not be the best direction.</p>
<p>How do I automate this process with Terraform and get it mounted correctly?</p>
<p>For reference, this is the related section of the YAML manifest</p>
<pre><code>...
volumeMounts:
- name: azure-config-file
mountPath: /etc/kubernetes
readOnly: true
volumes:
- name: azure-config-file
secret:
secretName: azure-config-file
items:
- key: externaldns-config.json
path: azure.json
</code></pre>
| <p>This is the Terraform version of using the <code>--from-file</code> flag with kubectl.</p>
<p>Basically, you'll add the name of the file and its contents per the structure of the <code>data</code> block below.</p>
<pre><code>resource "kubernetes_secret" "azure_config_file" {
metadata {
name = "azure-config-file"
}
data = { "azure.json" = jsonencode({
tenantId = data.azurerm_subscription.current.tenant_id
subscriptionId = data.azurerm_subscription.current.subscription_id
resourceGroup = data.azurerm_resource_group.rg.name
aadClientId = azuread_application.sp_externaldns_connect_to_dns_zone.application_id
aadClientSecret = azuread_application_password.sp_externaldns_connect_to_dns_zone.value
})
}
}
</code></pre>
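<p>With the secret created this way, the key inside the Secret is <code>azure.json</code>, so the volume from the manifest in the question can reference that key directly (a sketch based on the question's YAML):</p>
<pre><code>volumes:
  - name: azure-config-file
    secret:
      secretName: azure-config-file
      items:
        - key: azure.json
          path: azure.json
</code></pre>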
|
<p>We have a default deny-all-egress policy for all pods and we have an egress-internet policy like below</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-external-egress-internet
spec:
podSelector:
matchLabels:
egress: internet
policyTypes:
- Egress
egress:
- to:
- ipBlock:
cidr: 0.0.0.0/0
</code></pre>
<p>Now, if I try to add multiple labels under <code>spec/podselector/matchlabels</code> everything breaks. Is there a way for this network policy to get implemented on pods with label <code>egress: internet</code> OR <code>foo:bar</code>.</p>
<p>A pod with just <code>foo:bar</code> as label should be allowed but it's not working that way.</p>
| <p>That's tricky because multiple key/value pairs under <code>matchLabels</code> are ANDed, and <code>matchExpressions</code> are ANDed as well.
There are two possible ways (workarounds):</p>
<ol>
<li><p>Create another NetworkPolicy (alongside the existing one) where <code>matchLabels</code> contains <code>foo: bar</code> (see the sketch after this list).</p>
<p>[or]</p>
</li>
<li><p>Add a new common label to both workloads and use that in the <code>podSelector</code>.</p>
</li>
</ol>
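<p>A minimal sketch of the first workaround, reusing the egress rule from the question with only the pod selector changed (the policy name is made up):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-external-egress-internet-foo-bar
spec:
  podSelector:
    matchLabels:
      foo: bar
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
</code></pre>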
|
<p>We have created our own custom resource (a.k.a. CRD) and we need to add support for rolling updates. Since K8s supports this for deployments etc., we want to reuse that logic. Is there any lib which we can use (maybe partially) to support it? Or maybe learn and follow the logic, as we don't want to re-invent the wheel? Any reference/lib would be helpful.</p>
<p>I've struggled to find this <a href="https://github.com/kubernetes/kubernetes" rel="nofollow noreferrer">here</a>.</p>
| <p>Posted community wiki answer to summarise the problem.</p>
<p><a href="https://stackoverflow.com/users/13906951/clark-mccauley" title="1,093 reputation">Clark McCauley</a> well suggested:</p>
<blockquote>
<p>You're probably looking for the logic contained <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/deployment/rolling.go#L32-L67" rel="nofollow noreferrer">here</a>.</p>
</blockquote>
<p>This is a reference to the k8s source code, so you probably won't find a better source of ideas :)</p>
<pre><code>// rolloutRolling implements the logic for rolling a new replica set.
func (dc *DeploymentController) rolloutRolling(ctx context.Context, d *apps.Deployment, rsList []*apps.ReplicaSet) error {
newRS, oldRSs, err := dc.getAllReplicaSetsAndSyncRevision(ctx, d, rsList, true)
if err != nil {
return err
}
allRSs := append(oldRSs, newRS)
// Scale up, if we can.
scaledUp, err := dc.reconcileNewReplicaSet(ctx, allRSs, newRS, d)
if err != nil {
return err
}
if scaledUp {
// Update DeploymentStatus
return dc.syncRolloutStatus(ctx, allRSs, newRS, d)
}
// Scale down, if we can.
scaledDown, err := dc.reconcileOldReplicaSets(ctx, allRSs, controller.FilterActiveReplicaSets(oldRSs), newRS, d)
if err != nil {
return err
}
if scaledDown {
// Update DeploymentStatus
return dc.syncRolloutStatus(ctx, allRSs, newRS, d)
}
if deploymentutil.DeploymentComplete(d, &d.Status) {
if err := dc.cleanupDeployment(ctx, oldRSs, d); err != nil {
return err
}
}
// Sync deployment status
return dc.syncRolloutStatus(ctx, allRSs, newRS, d)
}
</code></pre>
|
<p>I am using the Nginx annotations in Helm like so:</p>
<pre><code>ingress:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
hosts:
- host: "example.com"
paths:
- path: /api(/?)(.*)
</code></pre>
<p>When visiting <code>example.com/api/</code>, my URL is rewritten as expected and I am forwarded to my application.</p>
<p>However, when the trailing slash is omitted, e.g <code>example.com/api</code>, this no longer is the case. What could I do to ensure that the scenario without a trailing slash included is correctly evaluated?</p>
| <p>I think you are looking for regex alternation:</p>
<pre><code>ingress:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
hosts:
- host: "example.com"
paths:
- path: /api(/|$)(.*)
</code></pre>
<p>Either there is another <code>/</code> after <code>/api</code> followed by anything (captured by <code>$2</code>), or the path ends there; either way <code>/api</code> gets rewritten to <code>/</code>.</p>
|
<p>I have a K8s cluster with multiple different services deployed and would like to use a single <code>Ingress</code> to route each incoming request to the appropriate service via a unique hostname <code>DNS</code>.</p>
<p>Currently, I've only been able to resolve a request when using the root path i.e. <code>service-123.app.com</code>.</p>
<p>As soon as I try to make a request with a path it doesn't resolve. The paths are valid paths to each service. For example, <code>service-123.app.com/page/12345</code> would be expected by the application.</p>
<p>I might not fully understand how K8s Ingress rules are expected to work, but I hoped that it would match based on hostname only and simply forward on the path to the appropriate service.</p>
<p>Am I missing something very simple here? Any help is much appreciated. Thanks!</p>
<p>Here are my config files.</p>
<p><strong>Ingress.yaml</strong></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
......
name: app-name
namespace: default
spec:
rules:
- host: service-123.app.com
    http:
      paths:
      - path: "/*"
        backend:
          serviceName: service-123
          servicePort: 80
- host: service-456.app.com
http:
paths:
- path: "/*"
backend:
serviceName: service-456
servicePort: 80
</code></pre>
<p><strong>service.yaml</strong></p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
annotations: {}
labels:
app: service-123
name: service-123
namespace: default
spec:
ports:
- name: port8080
port: 80
targetPort: 8080
selector:
app: service-123
type: NodePort
</code></pre>
| <p>Not sure which K8s version and ingress controller you are using, but in later Kubernetes versions you can specify the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types" rel="nofollow noreferrer"><code>pathType</code></a>, which takes care of path wildcards more nicely.</p>
<p>You would have something like this:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
......
name: app-name
namespace: default
spec:
rules:
- host: service-123.app.com
http:
- path: /
pathType: Prefix
backend:
serviceName: service-123
servicePort: 80
- host: service-456.app.com
http:
paths:
- path: /
pathType: Prefix
backend:
serviceName: service-456
servicePort: 80
</code></pre>
<p>If you are using an <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">nginx ingress controller</a> a good way to see the right nginx configuration is by looking at the actual <code>nginx.conf</code> generated by the ingress controller.</p>
<pre><code>$ kubectl cp <nginx-ingress-controller-pod>:nginx.conf nginx.conf
$ cat nginx.conf
</code></pre>
|
<p>One of our containers is using ephemeral storage but we don't know why. The app running in the container shouldn't be writing anything to the disk.</p>
<p>We set the storage limit to 20MB but it's still being evicted. We could increase the limit but this seems like a bandaid fix.</p>
<p>We're not sure what or where this container is writing to, and I'm not sure how to check that. When a container is evicted, the only information I can see is that the container exceeded its storage limit.</p>
<p>Is there an efficient way to know what's being written, or is our only option to comb through the code?</p>
| <p>Adding details to the topic.</p>
<blockquote>
<p>Pods use ephemeral local storage for scratch space, caching, and logs.
<strong>Pods can be evicted due to other pods filling the local storage, after which new pods are not admitted until sufficient storage has been reclaimed.</strong></p>
</blockquote>
<p>The kubelet can provide scratch space to Pods using local ephemeral storage to mount <a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">emptyDir</a> volumes into containers.</p>
<ul>
<li><p>For container-level isolation, if a container's writable layer and log usage exceeds its storage limit, the kubelet marks the Pod for eviction.</p>
</li>
<li><p>For pod-level isolation the kubelet works out an overall Pod storage limit by summing the limits for the containers in that Pod. In this case, if the sum of the local ephemeral storage usage from all containers and also the Pod's emptyDir volumes exceeds the overall Pod storage limit, then the kubelet also marks the Pod for eviction.</p>
</li>
</ul>
<p>To see what files have been written since the pod started, you can run:</p>
<pre><code>find / -mount -newer /proc -print
</code></pre>
<p>This will output a list of files modified more recently than '/proc'.</p>
<pre><code>/etc/nginx/conf.d
/etc/nginx/conf.d/default.conf
/run/secrets
/run/secrets/kubernetes.io
/run/secrets/kubernetes.io/serviceaccount
/run/nginx.pid
/var/cache/nginx
/var/cache/nginx/fastcgi_temp
/var/cache/nginx/client_temp
/var/cache/nginx/uwsgi_temp
/var/cache/nginx/proxy_temp
/var/cache/nginx/scgi_temp
/dev
</code></pre>
<p>Also, try without the '-mount' option.</p>
<p>To see if any new files are being modified, you can run some variations of the following command in a Pod:</p>
<pre><code>while true; do rm -f a; touch a; sleep 30; echo "monitoring..."; find / -mount -newer a -print; done
</code></pre>
<p>and check the file size using the <code>du -h someDir</code> command.</p>
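<p>These commands can be run inside the affected container, for example (pod and container names are hypothetical):</p>
<pre><code>kubectl exec -it mypod -c mycontainer -- sh
</code></pre>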
<p>Also, as @gohm'c pointed out in his answer, you can use sidecar/ephemeral debug containers.</p>
<p>Read more about <a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#local-ephemeral-storage" rel="nofollow noreferrer">Local ephemeral storage here</a>.</p>
|
<p>Yesterday, I stopped a Helm upgrade while it was running in a release pipeline in Azure DevOps, and the following deployments failed.</p>
<p>I tried to see the chart that failed with the aim of deleting it, but the chart of the microservice ("auth") doesn't appear. I used the command «helm list -n [namespace_of_AKS]» and it doesn't show up.</p>
<p>What can I do to solve this problem?</p>
<p><strong>Error in Azure Release Pipeline</strong></p>
<pre><code>2022-03-24T08:01:39.2649230Z Error: UPGRADE FAILED: another operation (install/upgrade/rollback) is in progress
2022-03-24T08:01:39.2701686Z ##[error]Error: UPGRADE FAILED: another operation (install/upgrade/rollback) is in progress
</code></pre>
<p><strong>Helm List</strong>
<a href="https://i.stack.imgur.com/E8dB4.png" rel="noreferrer"><img src="https://i.stack.imgur.com/E8dB4.png" alt="Helm List" /></a></p>
| <p>This error can happen for a few reasons, but it most commonly occurs when there is an interruption during the upgrade/install process, as you already mentioned.</p>
<p>To fix this, one may need to <strong>first roll back to another version, then reinstall</strong> or run helm upgrade again.</p>
<p><em>Try the command below to list the releases:</em></p>
<pre><code>helm ls --namespace <namespace>
</code></pre>
<p>but note that when running that command, it may not show any release information in the columns</p>
<p><em>Try to check the history of the previous deployment</em></p>
<pre><code>helm history <release> --namespace <namespace>
</code></pre>
<p>This provides information showing that the original installation never completed successfully and is stuck in a <strong>pending state</strong>, e.g. STATUS: pending-upgrade.</p>
<p><em>To escape from this state, use the rollback command:</em></p>
<pre><code>helm rollback <release> <revision> --namespace <namespace>
</code></pre>
<p>revision is optional, but you should try to provide it.</p>
<p>You may then try to issue your original command again to upgrade or reinstall.</p>
|
<p>I am trying to use a constant in skaffold, and to access it in skaffold profile:</p>
<p>example <code>export SOME_IP=199.99.99.99 && skaffold run -p dev</code></p>
<p>skaffold.yaml</p>
<pre><code>...
deploy:
helm:
flags:
global:
- "--debug"
releases:
- name: ***
chartPath: ***
imageStrategy:
helm:
explicitRegistry: true
createNamespace: true
namespace: "***"
setValueTemplates:
SKAFFOLD_SOME_IP: "{{.SOME_IP}}"
</code></pre>
<p>and in dev.yaml profile I need somehow to access it,<br />
something like:<br />
<code>{{ .Template.SKAFFOLD_SOME_IP }}</code> and it should be rendered as <code>199.99.99.99</code></p>
<p>I tried to use the skaffold <strong>envTemplate</strong> and <strong>setValueTemplates</strong> fields, but had no success, and could not find any example on the web.</p>
| <p>Basically found a solution which I truly don't like, but it works:</p>
<p>In the <strong>dev</strong> profile (<strong>values.dev.yaml</strong>) I added a placeholder:</p>
<pre><code> _anchors_:
- &_IPAddr_01 "<IPAddr_01_TAG>" # will be replaced with SOME_IP
</code></pre>
<p>The <strong><IPAddr_01_TAG></strong> placeholder will be replaced with the <strong>SOME_IP</strong> constant, which becomes <strong>199.99.99.99</strong> at skaffold run time.</p>
<p>Now to run skaffold I will do:</p>
<pre><code>export SOME_IP=199.99.99.99
sed -i "s/<IPAddr_01_TAG>/$SOME_IP/g" values/values.dev.yaml
skaffold run -p dev
</code></pre>
<p>After the above sed, the <strong>dev</strong> profile (<strong>values.dev.yaml</strong>) contains the SOME_IP value instead of the placeholder:</p>
<pre><code> _anchors_:
- &_IPAddr_01 "199.99.99.99"
</code></pre>
|
<p>Using the helm <a href="https://helm.sh/docs/howto/charts_tips_and_tricks/#using-the-tpl-function" rel="nofollow noreferrer">function tpl</a> or other similar functions, how do you pass in a <a href="https://helm.sh/docs/chart_template_guide/variables/" rel="nofollow noreferrer">file specific variable</a> and the <a href="https://helm.sh/docs/chart_template_guide/values_files/" rel="nofollow noreferrer">top level Values</a>? Here is a concrete example:</p>
<pre><code># values
template: "{{ .Values.name }} drinks {{ $drink }}"
name: "Tom"
# template
{{- $drink := "coffee" -}}
# how do I pass $drink into tpl???
{{ tpl .Values.template . }}
# expected output
Tom drinks coffee
</code></pre>
<p>It seems like when I do this it just passes in the <code>.Values</code>, but not the file specific <code>$drink</code> variable that's defined within the template and I get the error: <code>error calling tpl</code>. I don't see anything in the docs for how to merge these values together or just pass them both into the function.</p>
| <p>Helm is using a slightly modified version of sprig functions. Most things from sprig are available.</p>
<p>You can use one of the <a href="http://masterminds.github.io/sprig/dicts.html" rel="nofollow noreferrer">dict functions</a> to set the value or create a new dict that you pass as context.</p>
<pre><code>{{ $_ := set .Values "drink" $drink }}
{{ tpl .Values.template . }}
</code></pre>
<p>In this case I have set a new key on the <code>Values</code> dict.</p>
<pre class="lang-yaml prettyprint-override"><code>template: "{{ .Values.name }} drinks {{ .Values.drink }}"
</code></pre>
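<p>If you prefer not to mutate the shared <code>.Values</code>, a minimal alternative sketch (assuming the template only references <code>.Values.*</code>, and using sprig's <code>merge</code> and <code>deepCopy</code> available in Helm 3) is to build a small context dict for <code>tpl</code>:</p>
<pre><code>{{- $drink := "coffee" -}}
{{- /* merge the extra field into a copy of .Values, leaving the original untouched */ -}}
{{- $values := merge (dict "drink" $drink) (deepCopy .Values) -}}
{{ tpl .Values.template (dict "Values" $values) }}
</code></pre>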
|
<p>I'm trying to parse a YAML file and update only one property.
The problem is that the type is <code>Raw</code>, and when I update one field it updates the whole object.</p>
<p>What I want is to <strong>update only</strong> <code>NatIPNames</code> from <code>test1</code> to <code>test2</code>,
but <strong>without changing</strong> the value of <code>minPortsPerVM</code> (10000). How can I do it?</p>
<p>This is the yaml</p>
<pre><code>apiVersion: core.gardener.cloud/v1beta1
kind: Shoot
metadata:
name: test
namespace: ns
spec:
provider:
type: aaa
infrastructureConfig:
apiVersion: gcp.provider.extensions.gardener.cloud/v1alpha1
kind: InfrastructureConfig
networks:
cloudNAT:
minPortsPerVM: 10000
natIPNames:
- name: test1
</code></pre>
<p>This is the code</p>
<pre><code>package main
import (
"encoding/json"
"fmt"
gcpv1alpha1 "github.com/gardener/gardener-extension-provider-gcp/pkg/apis/gcp/v1alpha1"
"github.com/gardener/gardener/pkg/apis/core/v1beta1"
"io/ioutil"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/yaml"
)
func main() {
shoot, e := parseShoot("test.yaml")
if e != nil {
fmt.Println(e)
}
shoot.Spec.Provider.InfrastructureConfig.Raw = encode(&gcpv1alpha1.InfrastructureConfig{
Networks: gcpv1alpha1.NetworkConfig{
CloudNAT: &gcpv1alpha1.CloudNAT{
//MinPortsPerVM: ,
NatIPNames: []gcpv1alpha1.NatIPName{
{Name: "test2"},
},
},
},
})
fmt.Println(shoot.Spec.Provider.InfrastructureConfig)
}
func parseShoot(path string) (*v1beta1.Shoot, error) {
var shootSpec *v1beta1.Shoot
bytes, err := ioutil.ReadFile(path)
if err != nil {
return nil, err
}
err = yaml.Unmarshal(bytes, &shootSpec)
if err != nil {
return nil, err
}
return shootSpec, nil
}
func encode(obj runtime.Object) []byte {
data, _ := json.Marshal(obj)
return data
}
</code></pre>
| <p>I am not very familiar with the Gardener properties, but what you can do is create a struct for the <code>InfrastructureConfig</code> like below:</p>
<pre class="lang-golang prettyprint-override"><code>type InfraConfig struct {
APIVersion string `json:"apiVersion"`
Kind string `json:"kind"`
Networks struct {
CloudNAT struct {
MinPortsPerVM int `json:"minPortsPerVM"`
NatIPNames []struct {
Name string `json:"name"`
} `json:"natIPNames"`
} `json:"cloudNAT"`
} `json:"networks"`
}
</code></pre>
<p>and create a variable referencing that struct and unmarshal the <code>Raw</code> object into that like below.</p>
<pre class="lang-golang prettyprint-override"><code> var existingInfraConfig InfraConfig
err := json.Unmarshal(shoot.Spec.Provider.InfrastructureConfig.Raw, &existingInfraConfig)
</code></pre>
<p>Then you can edit the name (you might want to add some logic to validate the slice so that you update the right index), marshal it, and assign it back to the InfrastructureConfig like below.</p>
<pre class="lang-golang prettyprint-override"><code> existingInfraConfig.Networks.CloudNAT.NatIPNames[0].Name = "test2"
byteData, _ := json.Marshal(existingInfraConfig)
shoot.Spec.Provider.InfrastructureConfig = &runtime.RawExtension{
Raw: byteData,
Object: nil,
}
</code></pre>
<p>On the whole, it would look like this:</p>
<pre class="lang-golang prettyprint-override"><code>package main
import (
"encoding/json"
"fmt"
"github.com/gardener/gardener/pkg/apis/core/v1beta1"
"io/ioutil"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/util/yaml"
)
type InfraConfig struct {
APIVersion string `json:"apiVersion"`
Kind string `json:"kind"`
Networks struct {
CloudNAT struct {
MinPortsPerVM int `json:"minPortsPerVM"`
NatIPNames []struct {
Name string `json:"name"`
} `json:"natIPNames"`
} `json:"cloudNAT"`
} `json:"networks"`
}
func main() {
shoot, e := parseShoot("test.yaml")
if e != nil {
fmt.Println(e)
}
var existingInfraConfig InfraConfig
err := json.Unmarshal(shoot.Spec.Provider.InfrastructureConfig.Raw, &existingInfraConfig)
fmt.Println(err)
existingInfraConfig.Networks.CloudNAT.NatIPNames[0].Name = "test2"
byteData, _ := json.Marshal(existingInfraConfig)
shoot.Spec.Provider.InfrastructureConfig = &runtime.RawExtension{
Raw: byteData,
Object: nil,
}
fmt.Println(string(shoot.Spec.Provider.InfrastructureConfig.Raw))
}
func parseShoot(path string) (*v1beta1.Shoot, error) {
var shootSpec *v1beta1.Shoot
bytes, err := ioutil.ReadFile(path)
if err != nil {
return nil, err
}
err = yaml.Unmarshal(bytes, &shootSpec)
if err != nil {
return nil, err
}
return shootSpec, nil
}
</code></pre>
|
<p>I am a newbie on Ops and need to update through Lens the HPA configuration like:</p>
<p>From:</p>
<pre><code> minReplicas: 6
maxReplicas: 10
</code></pre>
<p>To:</p>
<pre><code> minReplicas: 4
maxReplicas: 16
</code></pre>
<p>My doubt is whether the Pods will be recreated or not once we have 8 instances running.</p>
| <blockquote>
<p>In Kubernetes, a HorizontalPodAutoscaler automatically updates a workload resource (such as a Deployment or StatefulSet), with the aim of automatically scaling the workload to match demand.</p>
</blockquote>
<p>The HorizontalPodAutoscaler is implemented as a Kubernetes API <code>resource</code> and a <code>controller</code>.</p>
<p>By configuring <code>minReplicas</code> and <code>maxReplicas</code> you are configuring the API resource.</p>
<p><strong>In this case, the HPA controller does not recreate running pods. And it does not scale up/down the workload if the number of currently running replicas is within the new min/max.</strong></p>
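<p>For reference, the same change can be applied from the command line instead of Lens (the HPA name and namespace are hypothetical):</p>
<pre><code>kubectl patch hpa my-hpa -n my-namespace --type merge \
  -p '{"spec":{"minReplicas":4,"maxReplicas":16}}'
</code></pre>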
<p>The HPA controller then continues to monitor the load:</p>
<ul>
<li>If the load decreases, and the number of Pods is above the configured minimum, the HorizontalPodAutoscaler instructs the workload resource (the Deployment, StatefulSet, or other similar resource) to scale down.</li>
<li>If the load increases, and the number of Pods is below the configured maximum, the HorizontalPodAutoscaler instructs the workload resource (the Deployment, StatefulSet, or other similar resource) to scale up.</li>
</ul>
<p>See more info about <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaling here</a>.</p>
|
<p>I started using <a href="https://k8slens.dev/" rel="noreferrer">Lens</a> and noticed that it gives you some warnings when the pods inside the nodes have limits higher than the actual capacity.
<a href="https://i.stack.imgur.com/Txase.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Txase.png" alt="A graphic visualization from Lens where you can see a warning telling that the specified limits are higher than the node capacity" /></a></p>
<p>So I tried to get this information with <em>kubectl</em> but I'm new to <em>jsonpath</em> and I just managed to get the raw info using something like this:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get pods -o=jsonpath='{.items..resources.limits}' -A
</code></pre>
<p>That produces something like this:</p>
<pre class="lang-json prettyprint-override"><code>{"cpu":"200m","memory":"1Gi"} {"cpu":"200m","memory":"1Gi"} {"cpu":"200m","memory":"512Mi"} {"cpu":"500m","memory":"250Mi"} {"memory":"170Mi"} {"memory":"170Mi"} {"cpu":"2","memory":"2Gi"} {"cpu":"2","memory":"2Gi"} {"cpu":"2","memory":"2Gi"} {"cpu":"1","memory":"1Gi"} {"cpu":"1","memory":"1Gi"} {"cpu":"2","memory":"2Gi"} {"cpu":"100m","memory":"128Mi"} {"cpu":"100m","memory":"128Mi"} {"cpu":"500m","memory":"600Mi"} {"cpu":"1","memory":"1Gi"} {"cpu":"100m","memory":"25Mi"} {"cpu":"100m","memory":"25Mi"}
</code></pre>
<p>So, my questions are, how can I sum all these values? Will these values be accurate or am I missing any other query? I've checked using LimitRange and the values I got seem to be correct, the results include the limits set by the LimitRange configuration.</p>
| <p>You can use a kubectl plugin to list/sort pods with CPU limits:</p>
<pre><code>kubectl resource-capacity --sort cpu.limit --util --pods
</code></pre>
<p><a href="https://github.com/robscott/kube-capacity" rel="noreferrer">https://github.com/robscott/kube-capacity</a></p>
|
<p>I am trying to build out my Custom Resource project in Kubebuilder but it seems like I am missing my controller gen whenever I build it out. I keep getting the error:</p>
<pre><code>/Users/*****/Kubernetes/postgres-writer-operator/bin/controller-gen object:headerFile="hack/boilerplate.go.txt" paths="./..."
bash: /Users/****/Kubernetes/postgres-writer-operator/bin/controller-gen: No such file or directory
make: *** [generate] Error 127
Error: failed to create API: unable to run post-scaffold tasks of "base.go.kubebuilder.io/v3": exit status 2
Usage:
kubebuilder create api [flags]
</code></pre>
<p>I am new to Kubernetes and am following the tutorial below:
<a href="https://yash-kukreja-98.medium.com/develop-on-kubernetes-series-operator-dev-understanding-and-dissecting-kubebuilder-4321d3ecd7d6" rel="nofollow noreferrer">https://yash-kukreja-98.medium.com/develop-on-kubernetes-series-operator-dev-understanding-and-dissecting-kubebuilder-4321d3ecd7d6</a></p>
<p>First I ran the following command that bootstraps my project:</p>
<pre><code>kubebuilder init \
--domain yash.com \
--repo github.com/yashvardhan-kukreja/postgres-writer-operator \
--project-name postgres-writer-operator \
--license apache2 \
--skip-go-version-check
</code></pre>
<p>After running the command I went into my directory and found the following files and folders:
<a href="https://i.stack.imgur.com/gqMRE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gqMRE.png" alt="enter image description here" /></a></p>
<p>Then the next step was to run bootstrapper for my custom resource PostgresWriter and attach it to the operator.</p>
<pre><code>kubebuilder create api \
--group demo \
--version v1 \
--kind PostgresWriter \
--resource true \
--controller true \
--namespaced true
</code></pre>
<p>This resulted in the error listed above. How do I generate my controller gen using kubebuilder? Is there a step I am missing here?</p>
<p>FYI by go version is 1.18.</p>
| <p>Not a perfect solution: this should work with the latest Go version, but it doesn't, so I had to downgrade Go to 1.17 and then it worked.</p>
<p>Summarizing what I learned from this article on downgrading your go version.
<a href="https://blog.notmyhostna.me/posts/downgrade-go-installed-with-homebrew/" rel="noreferrer">https://blog.notmyhostna.me/posts/downgrade-go-installed-with-homebrew/</a></p>
<p>Install version 1.17 go</p>
<pre><code>brew install [email protected]
</code></pre>
<p>You will still see that the version didn't change, so you need to unlink the current version by running</p>
<pre><code>brew unlink go
</code></pre>
<p>Now you can link the 1.17 version of Go, so when you do go version you'll get 1.17.</p>
<pre><code>brew link --force [email protected]
</code></pre>
<p>Voila! You should now see that your current version is 1.17 and you'll be able to successfully run Kubebuilder.</p>
|
<p>I'm trying to deploy an app to AKS cluster. Everytime I push changes to my branch, I want AKS to redeploy pods and make use of the most recent tag (which I have versioned with $(Build.BuildId))</p>
<p>The problem is right now I have to manually retrieve this build version and enter it into deployment.yaml and then run <code>kubectl apply -f deployment.yaml</code> for the change to go ahead. For example, the most recent tag is 58642, so I would have to log into my Azure Container Registry, retrieve the version number, update the deployment.yaml, and then apply for changes to take effect.</p>
<p>How can I change my setup here so that the most recently built and tagged container is deployed to the AKS cluster as part of my CICD?</p>
<p>Here is my <code>deployment.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: mission-model-api
spec:
replicas: 3
selector:
matchLabels:
app: mission-model-api
template:
metadata:
labels:
app: mission-model-api
spec:
containers:
- name: mission-model-api
image: my_registry.azurecr.io/serving/mission_model_api:58642
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 250m
memory: 256Mi
ports:
- containerPort: 80
imagePullPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: mission-model-api
spec:
type: ClusterIP
ports:
- port: 80
selector:
app: mission-model-api
</code></pre>
<p>And here is my CI/CD <code>azure-pipelines.yaml</code></p>
<pre><code>variables:
tag: '$(Build.BuildId)'
vmImageName: 'ubuntu-latest'
envName: 'poc-releases'
docker_image_name: API
imagePullSecret: 'AUTH'
dockerRegistryServiceConnection: 'XX'
trigger:
batch: true
branches:
include:
- feature/*
stages:
- stage: Build
displayName: Build stage
pool:
vmImage: $(vmImageName)
jobs:
- job: Build
displayName: Build job
variables:
PROJECT_DIR: $(Build.SourcesDirectory)/apps/$(docker_image_name)
IMAGE_AND_TAG: "$(docker_image_name):$(tag)"
steps:
- script: |
az acr login --name my_registry.azurecr.io --username user --password $(acr_password)
displayName: ACR Login
- bash: >
docker build -f ./Dockerfile -t "$(IMAGE_AND_TAG)" .
displayName: Build docker image
workingDirectory: $(PROJECT_DIR)
- script: |
REGISTRY_PATH=my_registry.azurecr.io/serving
docker tag "$(IMAGE_AND_TAG)" "$REGISTRY_PATH/$(IMAGE_AND_TAG)"
docker push "$REGISTRY_PATH/$(IMAGE_AND_TAG)"
displayName: Tag and Push to ACR
- task: PublishPipelineArtifact@0
inputs:
artifact: 'manifests'
artifactName: 'manifests'
targetPath: '$(PROJECT_DIR)/manifests'
- stage: Deploy_BVT
displayName: Deploy BVT
dependsOn: Build
jobs:
- deployment: Deploy_BVT
pool:
vmImage: $(vmImageName)
environment: '$(envName).ingress-basic'
strategy:
runOnce:
deploy:
steps:
- task: DownloadPipelineArtifact@1
inputs:
artifactName: 'manifests'
downloadPath: '$(System.ArtifactsDirectory)/manifests'
- task: KubernetesManifest@0
displayName: Create imagePullSecret
inputs:
action: createSecret
secretName: $(imagePullSecret)
namespace: ingress-basic
dockerRegistryEndpoint: $(dockerRegistryServiceConnection)
- task: KubernetesManifest@0
displayName: Deploy to Kubernetes cluster
inputs:
action: deploy
namespace: "ingress-basic"
manifests: |
$(System.ArtifactsDirectory)/manifests/cluster-isseur.yaml
$(System.ArtifactsDirectory)/manifests/deployment.yaml
$(System.ArtifactsDirectory)/manifests/ingress.yaml
imagePullSecrets: |
$(imagePullSecret)
containers: |
"$REGISTRY_PATH/$(docker_image_name):$(tag)"
</code></pre>
| <p>Replace tokens task can solve your problem. I use it most of the time.</p>
<ol>
<li>For the deployment yaml, change the image like this.</li>
</ol>
<blockquote>
<pre><code>image: my_registry.azurecr.io/serving/mission_model_api:#{Build.BuildId}#
</code></pre>
</blockquote>
<ol start="2">
<li>Before the <strong>task: PublishPipelineArtifact@0</strong> task in <strong>Build</strong> stage, put a <strong><a href="https://marketplace.visualstudio.com/items?itemName=qetza.replacetokens" rel="nofollow noreferrer">Replace Tokens</a></strong> task. You should add it as an extension to Azure DevOps</li>
</ol>
<blockquote>
<pre><code>- task: replacetokens@4
inputs:
      targetFiles: '**/deployment.yaml'
encoding: 'auto'
tokenPattern: 'default'
writeBOM: true
actionOnMissing: 'warn'
keepToken: false
actionOnNoFiles: 'continue'
enableTransforms: false
useLegacyPattern: false
enableTelemetry: true
</code></pre>
</blockquote>
<p>Then it should work as you expected.</p>
|
<p>Is there a possibility to configure all the unbound configurations listed <a href="https://linux.die.net/man/5/unbound.conf" rel="nofollow noreferrer">here</a> similarly in kubernetes coredns 'Corefile' configuration like this. Only few options are listed <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/" rel="nofollow noreferrer">here</a>. I am looking for the below server options in unbound conf to be done on kubernetes Corefile coredns configmap.</p>
<ol>
<li>do-ip6</li>
<li>verbosity</li>
<li>outgoing-port-avoid, outgoing-port-permit</li>
<li>domain-insecure</li>
<li>access-control</li>
<li>local-zone</li>
</ol>
<p>Example unbound conf which I am looking to do as same in kubernetes Corefile configuration:</p>
<pre><code>server:
do-ip6: yes
root-hints: /etc/unbound/named.cache
verbosity: 1
outgoing-port-avoid: 0-32767
outgoing-port-permit: 32768-65535
domain-insecure: "pub"
access-control: 0.0.0.0/0 refuse
local-zone: 10.in-addr.arpa nodefault
local-zone: 16.172.in-addr.arpa nodefault
</code></pre>
<p>I need to replicate the above unbound conf in the Kubernetes Corefile configuration. As I am new to Kubernetes CoreDNS, I am not sure whether these configurations are possible in CoreDNS. Can someone point me in the right direction? I am also looking for steps on how to configure this in the Corefile ConfigMap using Helm. It would be really helpful to get some information on this. Thanks in advance!</p>
| <p><code>CoreDNS</code> supports some requested features via <a href="https://coredns.io/plugins/" rel="nofollow noreferrer"><code>plugins</code></a>:</p>
<ul>
<li><code>do-ip6</code> - CoreDNS works with ipv6 by default (if cluster is dual-stack)</li>
<li><code>verbosity</code> - <a href="https://coredns.io/plugins/log/" rel="nofollow noreferrer"><code>log</code></a> plugin will show more details about queries, it can have different format and what it shows (success, denial, errors, everything)</li>
<li><code>outgoing-port-avoid, outgoing-port-permit</code> - did not find any support of this</li>
<li><code>domain-insecure</code> - please check if <a href="https://coredns.io/plugins/dnssec/" rel="nofollow noreferrer"><code>dnssec</code></a> can help (It looks similar to what <code>unbound</code> has, but I'm not really familiar with it).</li>
<li><code>access-control</code> - <a href="https://coredns.io/plugins/acl/" rel="nofollow noreferrer"><code>acl</code></a> plugin does it.</li>
<li><code>local-zone</code> - <a href="https://coredns.io/plugins/local/" rel="nofollow noreferrer"><code>local</code></a> plugin can be tried for this purpose, it doesn't have lots of options though.</li>
</ul>
<p>Bonus point:</p>
<ul>
<li>CoreDNS config's change - <a href="https://coredns.io/plugins/reload/" rel="nofollow noreferrer"><code>reload</code></a> allows automatic reload of a changed Corefile.</li>
</ul>
<p>All the plugins mentioned above have their syntax and examples documented on their pages.</p>
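<p>As an illustration only (the networks and plugin order here are assumptions, not a drop-in config), a Corefile combining some of these plugins could look like this:</p>
<pre><code>.:53 {
    errors
    log                      # query logging ("verbosity")
    reload                   # pick up Corefile changes automatically
    acl {                    # "access-control"
        allow net 10.0.0.0/8 192.168.0.0/16
        block
    }
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
    loop
}
</code></pre>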
|
<p>We have a setup with Traefik as the Ingress Controller / CRD and ArgoCD. We installed ArgoCD into <a href="https://github.com/jonashackt/tekton-argocd-eks" rel="noreferrer">our EKS setup</a> as described in <a href="https://argo-cd.readthedocs.io/en/stable/getting_started/" rel="noreferrer">the Argo getting started guide</a>:</p>
<pre><code>kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
</code></pre>
<p>Now <a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#ingressroute-crd" rel="noreferrer">as the docs state</a> the <code>IngressRoute</code> object to configure Traefik correctly looks like this:</p>
<pre><code>apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
name: argocd-server
namespace: argocd
spec:
entryPoints:
- websecure
routes:
- kind: Rule
match: Host(`argocd.tekton-argocd.de`)
priority: 10
services:
- name: argocd-server
port: 80
- kind: Rule
match: Host(`argocd.tekton-argocd.de`) && Headers(`Content-Type`, `application/grpc`)
priority: 11
services:
- name: argocd-server
port: 80
scheme: h2c
tls:
certResolver: default
</code></pre>
<p>Right now <a href="https://github.com/argoproj/argo-cd/issues/8950" rel="noreferrer">there's a bug in the docs</a> - so be sure to remove the <code>options: {}</code> in order to let Traefik accept the configuration.</p>
<p>Traefik shows everything is fine in the dashboard:</p>
<p><a href="https://i.stack.imgur.com/SpNcJ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/SpNcJ.png" alt="enter image description here" /></a></p>
<p>But if we try to access the ArgoCD dashboard at <a href="https://argocd.tekton-argocd.de" rel="noreferrer">https://argocd.tekton-argocd.de</a> we get multiple <code>HTTP 307</code> redirects and can't access the dashboard in the end. You can see the redirects inside the developer tools:</p>
<p><a href="https://i.stack.imgur.com/w3me4.png" rel="noreferrer"><img src="https://i.stack.imgur.com/w3me4.png" alt="enter image description here" /></a></p>
<p>Searching for a solution we already found <a href="https://github.com/argoproj/argo-cd/issues/2953" rel="noreferrer">this issue</a> where the problem is described:</p>
<blockquote>
<p>The problem is that by default Argo-CD handles TLS termination itself
and always redirects HTTP requests to HTTPS. Combine that with an
ingress controller that also handles TLS termination and always
communicates with the backend service with HTTP and you get Argo-CD's
server always responding with a redirects to HTTPS.</p>
</blockquote>
<p>Also the solution is sketched:</p>
<blockquote>
<p>So one of the solutions would be to disable HTTPS on Argo-CD, which
you can do by using the --insecure flag on argocd-server.</p>
</blockquote>
<p>But how can we configure the <code>argocd-server</code> Deployment to add the <code>--insecure</code> flag to the argocd-server command - as it is also <a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/ingress/#traefik-v22" rel="noreferrer">stated inside the ArgoCD docs</a>?</p>
| <h2><strong>0. Why a declarative ArgoCD setup with Kustomize is a great way to configure custom parameters</strong></h2>
<p>There are multiple options on how to configure ArgoCD. A great way is to use a declarative approach, which should be the default Kubernetes-style. Skimming the ArgoCD docs there's a <a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/server-commands/additional-configuration-method/#synopsis" rel="noreferrer">additional configuration section</a> where the possible flags of the ConfigMap <code>argocd-cmd-params-cm</code> can be found. The flags are described in <a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/argocd-cmd-params-cm.yaml" rel="noreferrer">argocd-cmd-params-cm.yaml</a>. One of them is the flag <code>server.insecure</code></p>
<pre><code>## Server properties
# Run server without TLS
server.insecure: "false"
</code></pre>
<p>The <code>argocd-server</code> deployment which ships with <a href="https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml" rel="noreferrer">https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml</a> will use this parameter, if it is defined in the <code>argocd-cmd-params-cm</code> ConfigMap.</p>
<p>In order to declaratively configure the ArgoCD configuration, <a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/declarative-setup/#manage-argo-cd-using-argo-cd" rel="noreferrer">the ArgoCD docs have a great section</a> on how to do that with <a href="https://kustomize.io/" rel="noreferrer">Kustomize</a>. In fact the ArgoCD team itself uses this approach to deploy their own ArgoCD instances - a live deployment is available here <a href="https://cd.apps.argoproj.io/" rel="noreferrer">https://cd.apps.argoproj.io/</a> and the configuration used <a href="https://github.com/argoproj/argoproj-deployments/tree/master/argocd" rel="noreferrer">can be found on GitHub</a>.</p>
<p>Adopting this to our use case, we need to switch our ArgoCD installation from simply using <code>kubectl apply -f</code> to a Kustomize-based installation. The ArgoCD docs also have <a href="https://argo-cd.readthedocs.io/en/stable/operator-manual/installation/#kustomize" rel="noreferrer">a section on how to do this</a>. Here are the brief steps:</p>
<h2><strong>1. Create a <code>argocd/installation</code> directory with a new file <code>kustomization.yaml</code></strong></h2>
<p>We slightly enhance the <code>kustomization.yaml</code> proposed in the docs:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- https://raw.githubusercontent.com/argoproj/argo-cd/v2.3.3/manifests/install.yaml
## changes to config maps
patchesStrategicMerge:
- argocd-cmd-params-cm-patch.yml
namespace: argocd
</code></pre>
<p>Since the docs state</p>
<blockquote>
<p>It is recommended to include the manifest as a remote resource and
apply additional customizations using Kustomize patches.</p>
</blockquote>
<p>we use the <code>patchesStrategicMerge</code> configuration key, which contains another new file we need to create called <code>argocd-cmd-params-cm-patch.yml</code>.</p>
<h2><strong>2. Create a new file <code>argocd-cmd-params-cm-patch.yml</code></strong></h2>
<p>This new file only contains the configuration we want to change inside the ConfigMap <code>argocd-cmd-params-cm</code>:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: argocd-cmd-params-cm
data:
server.insecure: "true"
</code></pre>
<h2><strong>3. Install ArgoCD using the Kustomization files & <code>kubectl apply -k</code></strong></h2>
<p>There's a separate <code>kustomize</code> CLI one can install e.g. via <code>brew install kustomize</code>. But as Kustomize is built into <code>kubectl</code>, we only have to use <code>kubectl apply -k</code> and point it to our newly created <code>argocd/installation</code> directory like this. We just also need to make sure that the <code>argocd</code> namespace is created:</p>
<pre><code>kubectl create namespace argocd --dry-run=client -o yaml | kubectl apply -f -
kubectl apply -k argocd/installation
</code></pre>
<p>This will install ArgoCD and configure the <code>argocd-server</code> deployment to use the <code>--insecure</code> flag as needed to stop Argo from handling the TLS termination itself and giving that responsibility to Traefik. Now accessing <a href="https://argocd.tekton-argocd.de" rel="noreferrer">https://argocd.tekton-argocd.de</a> should open the ArgoCD dashboard as expected:</p>
<p><a href="https://i.stack.imgur.com/GwotZ.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/GwotZ.jpg" alt="enter image description here" /></a></p>
|
<p>What type of edits will change a ReplicaSet and StatefulSet AGE(CreationTimeStamp)?</p>
<p>I'm asking this because I noticed that</p>
<ol>
<li>If I change a Deployment image, a new ReplicaSet will be created.</li>
<li>The old ReplicaSet continues to exist with DESIRED set to 0.</li>
<li>If I change back to the previous container image, the 2 ReplicaSets don't change their age nor are recreated.</li>
</ol>
<p>So, what is the best way to verify if there were recent updates to a Deployment/ReplicaSet and StatefulSet?</p>
<p>So far, I'm using client-go to check these resources' ages:</p>
<pre><code>func statefulsetCheck(namespace string, clientset *kubernetes.Clientset) bool {
// get the statefulsets in the namespace
statefulsets, err := clientset.AppsV1().StatefulSets(namespace).List(context.TODO(), metav1.ListOptions{})
if errors.IsNotFound(err) {
log.Fatal("\nNo statefulsets in the namespace", err)
} else if err != nil {
log.Fatal("\nFailed to fetch statefulsets in the namespace: ", err)
}
var stsNames []string
for _, sts := range statefulsets.Items {
stsNames = append(stsNames, sts.Name)
}
fmt.Printf("\nStatefulsets in the namespace: %v", stsNames)
// check if the statefulsets are older than the 9 days
for _, sts := range statefulsets.Items {
stsAge := time.Since(sts.CreationTimestamp.Time)
fmt.Printf("\nStatefulset %v age: %v", sts.Name, stsAge)
if stsAge.Minutes() < 5 {
fmt.Printf("\nStatefulset %v had recent updates. Skipping...", sts.Name)
return true
}
}
return false
}
func replicasetCheck(namespace string, clientset *kubernetes.Clientset) bool {
// get the replicasets in the namespace
replicasets, err := clientset.AppsV1().ReplicaSets(namespace).List(context.TODO(), metav1.ListOptions{})
if errors.IsNotFound(err) {
log.Fatal("\nNo replicasets in the namespace", err)
} else if err != nil {
log.Fatal("\nFailed to fetch replicasets in the namespace", err)
}
var rpsNames []string
for _, rps := range replicasets.Items {
rpsNames = append(rpsNames, rps.Name)
}
fmt.Printf("\nReplicasets in the namespace: %v", rpsNames)
// check if the replicasets have recent updates
for _, rps := range replicasets.Items {
rpsAge := time.Since(rps.CreationTimestamp.Time)
fmt.Printf("\nReplicaset %v age: %v", rps.Name, rpsAge)
if rpsAge.Minutes() < 5 {
fmt.Printf("\nReplicaset %v had recent updates...", rps.Name)
return true
}
}
return false
}
</code></pre>
| <blockquote>
<p>AGE(CreationTimeStamp)</p>
</blockquote>
<p>A resource's <code>CreationTimeStamp</code> (and thereby its age) is set when a resource is <strong>created</strong>. E.g. to change it, you must <em>delete</em> the resource and create it again.</p>
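<p>If the goal is to detect recent rollouts rather than resource creation, one option (a sketch, not the only way) is to look at the Deployment's rollout history or at the <code>Progressing</code> condition, whose <code>lastUpdateTime</code> changes when a new ReplicaSet is rolled out:</p>
<pre><code>kubectl rollout history deployment/<name>
kubectl get deployment <name> \
  -o jsonpath='{.status.conditions[?(@.type=="Progressing")].lastUpdateTime}'
</code></pre>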
|
<p>I want to expose a pod's app on port 80. For that I have installed MetalLB and configured a load balancer like this:</p>
<p>metallb-config.yaml</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: default
protocol: layer2
addresses:
- 192.168.1.100-192.168.1.150
</code></pre>
<p>loadbalancer.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: loadbalancer-watchdog
spec:
selector:
part: watchdog
ports:
- port: 80
targetPort: 10069
type: LoadBalancer
externalTrafficPolicy: Local
</code></pre>
<p>But when I do <code>kubectl get svc</code> the LoadBalancer's external IP keeps showing <code><pending></code>, and if I check MetalLB with <code>kubectl -n metallb-system get all</code> I can see this:</p>
<p><a href="https://i.stack.imgur.com/254gc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/254gc.png" alt="enter image description here" /></a></p>
<p>If I check the logs:</p>
<p><a href="https://i.stack.imgur.com/MEbPX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MEbPX.png" alt="enter image description here" /></a></p>
<p>Does it have something to do with my config, or did I miss some step in the configuration of MetalLB?</p>
<p>EDIT:</p>
<p>Output of <code>kubectl -n kube-system get pods</code>
<a href="https://i.stack.imgur.com/fbLDQ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fbLDQ.png" alt="enter image description here" /></a></p>
<p>Apparently CoreDNS is down, and so is the storage provisioner.</p>
| <p>The same thing happened to me: MetalLB did not start because it collided with a port already used by Docker Swarm. Just run</p>
<p><code>docker swarm leave -f </code></p>
<p>and try again</p>
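<p>For context: MetalLB's speaker and Docker Swarm both use port 7946 for their gossip/memberlist traffic, which is the usual cause of this clash. A quick way to check and recover (assuming the standard <code>metallb-system</code> install):</p>
<pre><code>sudo ss -tulnp | grep 7946                                   # see what is holding the port
docker swarm leave -f
kubectl -n metallb-system rollout restart daemonset speaker
kubectl -n metallb-system get pods
</code></pre>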
|
<p>I have a GKE cluster that is currently scaled up to multiple nodes; the scale-up happened during high load caused by a DDoS attack on our services. Now the cluster is unable to scale down because of redis-master and redis-slave, and this is causing a lot of overhead cost that has become an issue for us.</p>
<p>The scale-down shows the error <code>no.scale.down.node.pod.has.local.storage</code>. I have seen in multiple answers that setting the option <code>cluster-autoscaler.kubernetes.io/safe-to-evict</code> to true should solve the issue (GCP suggests the same), but before I do that I want to know whether doing so and scaling down the redis-slave instances can cause any data loss. Any suggestion would be ideal, as we are currently paying over 2x of what is needed.</p>
<p>I also checked the config, and I saw that the redis-master and redis-slave pods have volumes; the config YAML has this:</p>
<pre><code>volumeClaimTemplates:
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
labels:
app: redis
component: master
heritage: Tiller
release: prod-redis
name: redis-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 20Gi
volumeMode: Filesystem
</code></pre>
| <p>Redis master and slave nodes mostly keep the data in memory; for backup and restore purposes Redis takes a snapshot every minute or every second, based on the config.</p>
<p>You can check the Deployment config or ConfigMap to see whether any snapshot option is enabled. In Redis terms these are known as the <strong>AOF</strong> and <strong>RDB</strong> backups.</p>
<p>Whenever a Redis POD crashes in K8s, it starts up and restores the data from those files if they are present.</p>
<p>So make sure you <strong>exec</strong> into the POD and check whether the files are present or not.</p>
<pre><code>image: redislabs/redis
args: ["--requirepass", "admin", "--appendonly", "yes", "--save", "900", "1", "--save", "30", "2"]
</code></pre>
<p>Check the Deployment config or YAML; it might have options like the ones above.</p>
<p>Ref Document : <a href="https://redis.io/topics/persistence" rel="nofollow noreferrer">https://redis.io/topics/persistence</a></p>
<p>If the files are not there, you can take a backup manually so that if the POD crashes it starts with the old data.</p>
<p>Command to generate the backup in the background: <code>BGSAVE</code></p>
<p>If you have a lot of data in the POD, things might go wrong: <code>BGSAVE</code> forks a process to save the data to the <code>filesystem</code> on the PVC, which can lead to higher resource usage, and the POD may get killed if resource limits are set.</p>
<p>Once data is saved using the background command you can start removing the slaves with their respective PVCs.</p>
<p>So if both <strong>AOF</strong> and <strong>RDB</strong> are disabled, there will be data loss.</p>
<p>RDB alone is also not a great option, as it only takes periodic backups, while <strong>AOF</strong> persists writes almost instantly.</p>
<p>With only <strong>RDB</strong>, there is a chance the last <strong>snapshot</strong> was taken during the night; if you remove the POD in the morning, the latest data written after that snapshot is only in memory and will not be in the snapshot.</p>
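<p>A quick way to check the persistence settings and force a snapshot before scaling down (the pod name is a placeholder; add <code>-a <password></code> if auth is enabled):</p>
<pre><code>kubectl exec -it <redis-master-pod> -- redis-cli CONFIG GET save        # RDB snapshot schedule
kubectl exec -it <redis-master-pod> -- redis-cli CONFIG GET appendonly  # AOF on/off
kubectl exec -it <redis-master-pod> -- redis-cli BGSAVE                 # trigger a background snapshot
kubectl exec -it <redis-master-pod> -- redis-cli LASTSAVE               # timestamp of the last successful save
</code></pre>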
|
<p>I'm using minikube locally.
The following is the <code>.tf</code> file I use to create my kubernetes cluster:</p>
<pre><code>provider "kubernetes" {
config_path = "~/.kube/config"
}
resource "kubernetes_namespace" "tfs" {
metadata {
name = "tfs" # terraform-sandbox
}
}
resource "kubernetes_deployment" "golang_webapp" {
metadata {
name = "golang-webapp"
namespace = "tfs"
labels = {
app = "webapp"
}
}
spec {
replicas = 3
selector {
match_labels = {
app = "webapp"
}
}
template {
metadata {
labels = {
app = "webapp"
}
}
spec {
container {
image = "golang-docker-example"
name = "golang-webapp"
image_pull_policy = "Never" # this is set so that kuberenetes wont try to download the image but use the localy built one
liveness_probe {
http_get {
path = "/"
port = 8080
}
initial_delay_seconds = 15
period_seconds = 15
}
readiness_probe {
http_get {
path = "/"
port = 8080
}
initial_delay_seconds = 3
period_seconds = 3
}
}
}
}
}
}
resource "kubernetes_service" "golang_webapp" {
metadata {
name = "golang-webapp"
namespace = "tfs"
labels = {
app = "webapp_ingress"
}
}
spec {
selector = {
app = kubernetes_deployment.golang_webapp.metadata.0.labels.app
}
port {
port = 8080
target_port = 8080
protocol = "TCP"
}
# type = "ClusterIP"
type = "NodePort"
}
}
resource "kubernetes_ingress" "main_ingress" {
metadata {
name = "main-ingress"
namespace = "tfs"
}
spec {
rule {
http {
path {
backend {
service_name = "golang-webapp"
service_port = 8080
}
path = "/golang-webapp"
}
}
}
}
}
</code></pre>
<p>When executing <code>terraform apply</code>, I am successfully able to create all of the resources except for the ingress.</p>
<p>The error is:</p>
<pre><code>Error: Failed to create Ingress 'tfs/main-ingress' because: the server could not find the requested resource (post ingresses.extensions)
with kubernetes_ingress.main_ingress,
on main.tf line 86, in resource "kubernetes_ingress" "main_ingress":
86: resource "kubernetes_ingress" "main_ingress" {
</code></pre>
<p>When I try to create an ingress service with kubectl using the same configuration as the one above (only in <code>.yaml</code> and using the <code>kubectl apply</code> command) it works, so it seems that kubectl & minikube are able to create this type of ingress, but terraform can't for some reason...</p>
<p>Thanks in advance for any help!</p>
<p>Edit 1:</p>
<p>adding the <code>.yaml</code> that I'm able to create the ingress with</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example-ingress
namespace: tfs
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: golang-webapp
port:
number: 8080
</code></pre>
| <p>The <code>kubernetes_ingress</code> resource generates an Ingress with an <code>apiVersion</code> which is not supported by your Kubernetes cluster. You have to use the <code>kubernetes_ingress_v1</code> resource, which looks similar to the <code>kubernetes_ingress</code> resource with some differences. For your example, it will be like this:</p>
<pre><code>resource "kubernetes_ingress_v1" "jenkins-ingress" {
metadata {
name = "example-ingress"
namespace = "tfs"
annotations = {
"nginx.ingress.kubernetes.io/rewrite-target" = "/$1"
}
}
spec {
rule {
http {
path {
path = "/"
backend {
service {
name = "golang-webapp"
port {
number = 8080
}
}
}
}
}
}
}
}
</code></pre>
|
<p>I'm trying to learn Kubernetes and Ingress. Have I misunderstood something?</p>
<p>I am using Proxmox with 3 VMs.</p>
<pre><code> kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8sm Ready control-plane,master 127d v1.22.4
k8sn1 Ready <none> 127d v1.22.4
k8sn2   Ready    <none>                 127d   v1.22.4
</code></pre>
<p>I have 2 nginx pods as a Deployment.</p>
<pre><code> kubectl get pod
NAME READY STATUS RESTARTS AGE
curl-test 1/1 Running 0 5d11h
frontend-86968456b9-jnbqf 1/1 Running 0 5d11h
frontend-86968456b9-tj2w9 1/1 Running 0 5d11h
</code></pre>
<p>And 1 Service that selects them by label:</p>
<pre><code> kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 127d
my-service-ingress ClusterIP 10.104.228.72 <none> 80/TCP 25h
</code></pre>
<p>svc.yaml:</p>
<pre><code> ---
apiVersion: v1
kind: Service
metadata:
name: my-service-ingress
spec:
selector:
app: frontend
ports:
- port: 80
targetPort: 80
</code></pre>
<p>I'm connected to the master node via SSH:</p>
<pre><code> curl http://10.104.228.72
THIS IS CONTAINER : 1
</code></pre>
<p>This is my ingress file, copied from the official Kubernetes docs:</p>
<pre><code> apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: test-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /frontend
pathType: Prefix
backend:
service:
name: my-service-ingress
port:
number: 80
</code></pre>
<p>Ingress resource:</p>
<pre><code> kubectl get ingress -o wide
NAME CLASS HOSTS ADDRESS PORTS AGE
test-ingress <none> * 80 39m
</code></pre>
<p>Info about the ingress controller:</p>
<pre><code>kubectl get all -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-admission-create--1-tnglj 0/1 Completed 0 24h
pod/ingress-nginx-admission-patch--1-97n79 0/1 Completed 1 24h
pod/ingress-nginx-controller-547f579dc-ssf5r 1/1 Running 0 24h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller NodePort 10.100.181.85 <none> 80:32412/TCP,443:32016/TCP 24h
service/ingress-nginx-controller-admission ClusterIP 10.98.29.50 <none> 443/TCP 24h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-nginx-controller 1/1 1 1 24h
NAME DESIRED CURRENT READY AGE
replicaset.apps/ingress-nginx-controller-547f579dc 1 1 1 24h
NAME COMPLETIONS DURATION AGE
job.batch/ingress-nginx-admission-create 1/1 4s 24h
job.batch/ingress-nginx-admission-patch 1/1 5s 24h
</code></pre>
<p>A curl test against the node IPs doesn't work:</p>
<pre><code>kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8sm Ready control-plane,master 127d v1.22.4 192.168.1.38 <none> Debian GNU/Linux 10 (buster) 4.19.0-18-amd64 docker://20.10.11
k8sn1 Ready <none> 127d v1.22.4 192.168.1.39 <none> Debian GNU/Linux 10 (buster) 4.19.0-18-amd64 docker://20.10.11
k8sn2 Ready <none> 127d v1.22.4 192.168.1.40 <none> Debian GNU/Linux 10 (buster) 4.19.0-18-amd64 docker://20.10.11
rbo@K8sM:~/elearning/kubernestes_admin_course/Zeal_Vora/Ingress$ curl http://192.168.1.39
curl: (7) Failed to connect to 192.168.1.39 port 80: Connexion refusée
</code></pre>
| <p>In the Ingress you have used the path <code>/frontend</code>, but you are not hitting that path in your curl test.</p>
<p>Also, use the <code>NodePort</code> together with the node IP; in your curl you are hitting the node on port 80 instead of the ingress controller's NodePort.</p>
<p>Something like :</p>
<pre><code>curl http://192.168.1.39:32412/frontend
</code></pre>
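<p>If you don't want to hard-code the NodePort, you can look it up first (this assumes the standard <code>ingress-nginx-controller</code> service with a port named <code>http</code>):</p>
<pre><code>NODE_PORT=$(kubectl -n ingress-nginx get svc ingress-nginx-controller \
  -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}')
curl http://192.168.1.39:${NODE_PORT}/frontend
</code></pre>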
|
<p>Nginx can be configured to support the HAProxy proxy protocol for inbound traffic: <a href="http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_protocol" rel="nofollow noreferrer">http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_protocol</a></p>
<p>However, using <code>proxy_protocol on;</code>, nginx cannot handle HTTP(S) traffic without the PROXY line.</p>
<p>The traefik reverse proxy, on the other hand, is able to treat the PROXY line as optional:</p>
<blockquote>
<p>If Proxy Protocol header parsing is enabled for the entry point, this entry point can accept connections with or without Proxy Protocol headers.
<a href="https://doc.traefik.io/traefik/routing/entrypoints/#proxyprotocol" rel="nofollow noreferrer">https://doc.traefik.io/traefik/routing/entrypoints/#proxyprotocol</a></p>
</blockquote>
<p><strong>Is it possible to configure nginx to treat the PROXY line as optional?</strong></p>
<p>My use case is nginx as a kubernetes ingress to handle proxy protocol traffic from outside via a load balancer and without the proxy protocol for cluster-internal HTTP(S) traffic.</p>
<p>I know there is <em>hairpin-proxy</em> that works, but it's a bit hacky as it adds rewrites for all CoreDNS cluster-internal DNS entries.
<a href="https://github.com/compumike/hairpin-proxy#:%7E:text=Modify%20nginx%20to%20treat%20the%20PROXY%20line%20as%20optional" rel="nofollow noreferrer">https://github.com/compumike/hairpin-proxy#:~:text=Modify%20nginx%20to%20treat%20the%20PROXY%20line%20as%20optional</a></p>
<hr />
<p>The K8s Cloud Setup used in this scenario:</p>
<ul>
<li>Rancher Kubernetes Engine (RKE)</li>
<li>Hetzner Cloud VMs for k8s Nodes</li>
<li>Hetzner Load Balancer as K8s Load Balancer</li>
<li>hcloud-cloud-controller-manager for provisioning Hetzner Cloud servers (<a href="https://github.com/hetznercloud/hcloud-cloud-controller-manager" rel="nofollow noreferrer">https://github.com/hetznercloud/hcloud-cloud-controller-manager</a>)</li>
<li>ingress-nginx as k8s ingress (<a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/</a>)</li>
<li>Worklaod deployments using helm3</li>
</ul>
| <blockquote>
<p>Is it possible to configure nginx to treat the PROXY line as optional?</p>
</blockquote>
<p>No.</p>
<p>You can use a listener with proxy protocol and one without but it's currently not possible to use both in one listener.</p>
|
<p>I am running three Ubuntu VMs using VMware Workstation, and I have been able to successfully deploy a 3-node Kubernetes cluster using k3sup along with k3s.</p>
<p>I am now trying to stand up AWX using the AWX operator, but when I check the logs of the AWX pods I see recurring messages like the following.</p>
<p>I am not sure what I am doing wrong; maybe it has to do with routing, or maybe I failed to open a certain port on one of the nodes.</p>
<pre><code>[wait-for-migrations] Waiting for database migrations...
[wait-for-migrations] Attempt 1 of 30
[wait-for-migrations] Waiting 0.5 seconds before next attempt
[wait-for-migrations] Attempt 2 of 30
[wait-for-migrations] Waiting 1 seconds before next attempt
[wait-for-migrations] Attempt 3 of 30
[wait-for-migrations] Waiting 2 seconds before next attempt
[wait-for-migrations] Attempt 4 of 30
[wait-for-migrations] Waiting 4 seconds before next attempt
[wait-for-migrations] Attempt 5 of 30
[wait-for-migrations] Waiting 8 seconds before next attempt
[wait-for-migrations] Attempt 6 of 30
[wait-for-migrations] Waiting 16 seconds before next attempt
[wait-for-migrations] Attempt 7 of 30
</code></pre>
| <p>By default, database upgrades are not applied when upgrading the AWX image.</p>
<p>You must therefore manually connect to your AWX Web pod and run the following command:</p>
<pre><code>awx-manage migrate --noinput
</code></pre>
<p>This should unblock the application startup.</p>
<p>Source: <a href="https://github.com/ansible/awx/issues/6539#issuecomment-609106337" rel="nofollow noreferrer">https://github.com/ansible/awx/issues/6539#issuecomment-609106337</a></p>
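<p>A sketch of running that from outside the cluster (the namespace, pod and container names are assumptions and depend on your operator version and instance name, so adjust them to your install):</p>
<pre><code># find the web pod first
kubectl -n awx get pods
# then run the migrations inside it
kubectl -n awx exec -it <awx-web-pod> -c awx-web -- awx-manage migrate --noinput
</code></pre>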
|
<p>I am using k3d to run Kubernetes locally.</p>
<p>I have created a cluster using k3d.</p>
<p>Now I want to mount a local directory as a persistent volume.</p>
<p>How can I do this while using k3d?</p>
<p>I know in minikube</p>
<pre><code>$ minikube start --mount-string="$HOME/go/src/github.com/nginx:/data" --mount
</code></pre>
<p>Then, if you mount /data into your Pod using <code>hostPath</code>, you get your local directory's data inside the Pod.</p>
<p>Is there a similar technique when using k3d?</p>
| <p>According to the answers to <a href="https://github.com/k3d-io/k3d/issues/566" rel="nofollow noreferrer">this Github question</a> the feature you're looking for is not available yet.</p>
<p>Here is some idea from this link:</p>
<blockquote>
<p>The simplest I guess would be to have a pretty generic mount containing all the code, e.g. in my case, I could do <code>k3d cluster create -v "$HOME/git:/git@agent:*"</code> to get all the repositories on my host present in all agent nodes to be used for hot-reloading.</p>
</blockquote>
<p>According to <a href="https://k3d.io/v5.2.0/usage/commands/k3d_cluster_create/" rel="nofollow noreferrer">this</a> documentation one can use the following command with the adequate flag:</p>
<pre class="lang-yaml prettyprint-override"><code>k3d cluster create NAME -v [SOURCE:]DEST[@NODEFILTER[;NODEFILTER...]]
</code></pre>
<p>This command mounts volumes into the nodes</p>
<pre><code>(Format:[SOURCE:]DEST[@NODEFILTER[;NODEFILTER...]]
</code></pre>
<p>Example:</p>
<pre><code>`k3d cluster create --agents 2 -v /my/path@agent:0,1 -v /tmp/test:/tmp/other@server:0`
</code></pre>
<p><a href="https://dev.to/bbende/k3s-on-raspberry-pi-volumes-and-storage-1om5" rel="nofollow noreferrer">Here</a> is also an interesting article how volumes and storage work in a K3s cluster (with examples).</p>
|
<p>I am new to Kubernetes and I am trying to set up a Java app locally in minikube. I want it to be able to connect to a locally hosted Postgres DB (not inside k8s); the connection URL looks like <code>jdbc:postgresql://localhost:5432/my_db</code>. That sounds simple, but I just can't find a solution. Could you please give me some advice on where to look? Thanks.</p>
<p>Btw, external connections to the DB server are allowed</p>
<p>My <code>deployment.yaml</code></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
spec:
replicas: 1
selector:
matchLabels:
app: my-app
template:
metadata:
name: my-app
labels:
app: my-app
spec:
containers:
- name: my-app
image: my-app-image
</code></pre>
| <p>It seems I found the solution. <code>host.minikube.internal</code> is exactly the alias to use in this case. In my example, simply pointing the DB URI to <code>jdbc:postgresql://host.minikube.internal:5432/my_db</code> solved the problem.</p>
<p>More info <a href="https://minikube.sigs.k8s.io/docs/handbook/host-access/" rel="nofollow noreferrer">here</a></p>
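<p>To confirm the alias resolves and the port is reachable from inside the cluster, a throwaway pod with <code>psql</code> can help (user, password and database are placeholders, and <code>host.minikube.internal</code> support depends on the minikube driver):</p>
<pre><code>kubectl run pg-check -it --rm --restart=Never --image=postgres:15 \
  --env="PGPASSWORD=<password>" -- \
  psql -h host.minikube.internal -p 5432 -U <user> -d my_db -c 'SELECT 1'
</code></pre>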
|
<p>Hello, I am new to Kubernetes and I need some help.</p>
<p>I want to use Kubernetes Ingress paths for my 2 different Nuxt projects.</p>
<p>The first path (/) is working well, but the second path (/v1) does not get resources like .css and .js.</p>
<p>My first deployment and service yaml file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-1
labels:
app: nginx1
spec:
replicas: 1
selector:
matchLabels:
app: nginx1
template:
metadata:
labels:
app: nginx1
spec:
containers:
- name: nginx1
image: st/itmr:latest "can't show image"
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx1-svc
spec:
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
selector:
app: nginx1
</code></pre>
<p>My second deployment and service yaml file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx2
labels:
app: nginx2
spec:
replicas: 1
selector:
matchLabels:
app: nginx2
template:
metadata:
labels:
app: nginx2
spec:
containers:
- name: nginx2
image: st/itpd:latest "can't show image"
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: nginx2-svc
spec:
ports:
- port: 8080
targetPort: 80
protocol: TCP
name: http
selector:
app: nginx2
</code></pre>
<p>And here is my ingress yaml file:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: some.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nginx1-svc
port:
number: 80
- path: /v1
pathType: Prefix
backend:
service:
name: nginx2-svc
port:
number: 8080
</code></pre>
<p>I thought using nginx.ingress.kubernetes.io/rewrite-target: /$1 would work for me, but it's not working.</p>
<p>I don't know where the problem is, so please help me.</p>
| <p>To clarify I am posting a community wiki answer.</p>
<p>The problem here was resolved by switching the project path.</p>
<p>See more about ingress paths <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types" rel="nofollow noreferrer">here</a>.</p>
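<p>For reference, a common ingress-nginx pattern for serving a second app under a sub-path (not necessarily what the poster did) is a capture-group rewrite; note that the Nuxt app usually also needs its router base / publicPath set to <code>/v1/</code> so the generated .css/.js URLs are requested under the prefix:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: some.example.com
    http:
      paths:
      - path: /v1(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: nginx2-svc
            port:
              number: 8080
</code></pre>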
|
<p>I'm trying to run a remote build execution with <a href="https://github.com/bazelbuild/bazel-buildfarm" rel="nofollow noreferrer">bazel buildfarm</a> memory workers on our k8s cluster.</p>
<p>I've set up the server pods, worker pods, and redis clusters, as buildfarm's architecture requires it, along with k8s services and ingresses to allow me to remotely send builds.</p>
<p>However, when I tried to execute it, I got the following:</p>
<pre><code>eito@fuji:~/MyRepo$ bazel --client_debug run //tools:ipython3 --config=rbe
[INFO 11:03:07.374 src/main/cpp/option_processor.cc:407] Looking for the following rc files: /etc/bazel.bazelrc,/home/eito/MyRepo/.bazelrc,/home/eito/.bazelrc
[INFO 11:03:07.374 src/main/cpp/rc_file.cc:56] Parsing the RcFile /home/eito/MyRepo/.bazelrc
[INFO 11:03:07.374 src/main/cpp/rc_file.cc:56] Parsing the RcFile user.bazelrc
[INFO 11:03:07.374 src/main/cpp/rc_file.cc:129] Skipped optional import of user.bazelrc, the specified rc file either does not exist or is not readable.
[INFO 11:03:07.374 src/main/cpp/rc_file.cc:56] Parsing the RcFile /home/eito/.bazelrc
[INFO 11:03:07.374 src/main/cpp/blaze.cc:1626] Debug logging requested, sending all client log statements to stderr
[INFO 11:03:07.374 src/main/cpp/blaze.cc:1509] Acquired the client lock, waited 0 milliseconds
[INFO 11:03:07.377 src/main/cpp/blaze.cc:1697] Trying to connect to server (timeout: 30 secs)...
[INFO 11:03:07.385 src/main/cpp/blaze.cc:1264] Connected (server pid=113490).
[INFO 11:03:07.385 src/main/cpp/blaze.cc:1974] Releasing client lock, let the server manage concurrent requests.
INFO: Invocation ID: c97091ec-e335-4882-8107-c9084d4453ff
ERROR: Failed to query remote execution capabilities: connection timed out: buildfarm.dev.azr.internal.mydomain.com/172.33.33.99:8980
[INFO 11:03:37.613 src/main/cpp/blaze.cc:2093] failure_detail: message: "Failed to query remote execution capabilities: connection timed out: buildfarm.dev.azr.internal.mydomain.com/172.33.33.99:8980"
remote_execution {
code: CAPABILITIES_QUERY_FAILURE
}
</code></pre>
<p>My worker deployment & service looks like (server very similar, just different images and different configmap mounted):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: aks-buildfarm-worker
namespace: infrastructure--buildfarm
spec:
replicas: 1
selector:
matchLabels:
app: aks-buildfarm
template:
metadata:
labels:
app: aks-buildfarm
role: app
spec:
containers:
- name: buildfarm-worker
image: mydomain.azurecr.io/buildfarm-memory-worker:v8
volumeMounts:
- mountPath: "/config"
name: buildfarm-worker-config
ports:
- containerPort: 8980
protocol: TCP
resources:
limits:
memory: 256Mi
cpu: "300m"
volumes:
- name: buildfarm-worker-config
configMap:
name: buildfarm-worker-config
---
apiVersion: v1
kind: Service
metadata:
name: aks-buildfarm
namespace: infrastructure--buildfarm
spec:
type: ClusterIP
ports:
- protocol: TCP
name: grpc
port: 8980
targetPort: 8980
selector:
app: aks-buildfarm
</code></pre>
<p>I'm mostly using the following configs deployed as configmaps on k8s:
<a href="https://github.com/bazelbuild/bazel-buildfarm/blob/main/examples/shard-server.config.example" rel="nofollow noreferrer">https://github.com/bazelbuild/bazel-buildfarm/blob/main/examples/shard-server.config.example</a>
<a href="https://github.com/bazelbuild/bazel-buildfarm/blob/main/examples/worker.config.example" rel="nofollow noreferrer">https://github.com/bazelbuild/bazel-buildfarm/blob/main/examples/worker.config.example</a></p>
<p>The only difference is that I change every <code>localhost:8980</code> in the worker config to <code>"aks-buildfarm-server.infrastructure--buildfarm.svc.cluster.local"</code>, since they are within the same k8s cluster and can communicate through that.</p>
<p>My ingress is like the following:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
namespace: infrastructure--buildfarm
name: buildfarm-ingress
annotations:
kubernetes.io/ingress.class: nginx-internal
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/use-regex: "true"
cert-manager.io/cluster-issuer: selfsigned-cluster-issuer
spec:
rules:
- host: buildfarm.dev.azr.internal.mydomain.com
http:
paths:
- backend:
serviceName: aks-buildfarm
servicePort: 8980
path: /(.*)
</code></pre>
<p>My <code>.bazelrc</code> file looks like below:</p>
<pre><code>build:rbe --remote_executor=grpcs://buildfarm.dev.azr.internal.mydomain.com:8980
</code></pre>
| <p>You need to use the shard worker config from here:
<a href="https://github.com/bazelbuild/bazel-buildfarm/blob/main/examples/shard-worker.config.example" rel="nofollow noreferrer">https://github.com/bazelbuild/bazel-buildfarm/blob/main/examples/shard-worker.config.example</a>
You will also need a running Redis instance or cluster, as the two-way communication between the server and the workers goes through Redis.</p>
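<p>A quick way to check that the workers can actually reach Redis from inside the cluster (the service name is a placeholder for your Redis service):</p>
<pre><code>kubectl -n infrastructure--buildfarm run redis-check -it --rm --restart=Never \
  --image=redis:7 -- redis-cli -h <redis-service-name> -p 6379 ping
</code></pre>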
|
<p>I have generated logs for my pods using <code>kubectl logs 'pod name'</code>. But I want to persist these logs in a volume (some kind of persistent storage), because container logs will get wiped out if the pods go down. Is there a way to do this? Do I have to write some sort of a script?
I have read many answers but I still do not understand how to go about it, any help is appreciated. Thanks!</p>
| <p>I know this is an old question, but I've just had the same problem and I've spent some time to figure out the solution, so I'd like to share a more detailed solution.</p>
<p>Like Aayush Mall said, you'll need the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="noreferrer">PersistentVolume</a> and <a href="https://kubernetes.io/docs/concepts/storage/volumes/#persistentvolumeclaim" rel="noreferrer">PersistentVolumeClaim</a> objects to create the volume and then link it to the pod (preferably via a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer">Deployment</a> object).</p>
<p>Basically, The PersistentVolume would define how and where the volume would be stored in the host and the PersistentVolumeClaim would define the constraints to bind the volume to some container.</p>
<p>From the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="noreferrer">docs</a>:</p>
<blockquote>
<p>A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes, but have a lifecycle independent of any individual Pod that uses the PV. This API object captures the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system.</p>
<p>A PersistentVolumeClaim (PVC) is a request for storage by a user. It is similar to a Pod. Pods consume node resources and PVCs consume PV resources. Pods can request specific levels of resources (CPU and Memory). Claims can request specific size and access modes (e.g., they can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany, see AccessModes).</p>
</blockquote>
<p>So, let's say your pods are running in two nodes: <code>mynode-1</code> and <code>mynode-2</code>.</p>
<p>Your <code>PersistentVolume</code> spec will look like this.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: myapp-log-pv
spec:
capacity:
storage: 10Gi
volumeMode: Filesystem
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /var/log/myapp
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- mynode-1
- mynode-2
</code></pre>
<p>Your <code>PersistentVolumeClaim</code> will look like this.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: myapp-log-pvc
spec:
volumeMode: Filesystem
accessModes:
- ReadWriteMany
storageClassName: local-storage
resources:
requests:
storage: 2Gi
  volumeName: myapp-log-pv
</code></pre>
<p>And then, you just have to tell the deployment object how to mount the volume inside the container. So, your <code>Deployment</code> spec will look like this.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp-deploy
spec:
selector:
matchLabels:
app: myapp
replicas: 1
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: myrepo/myapp:latest
volumeMounts:
- name: log
mountPath: /var/log
volumes:
- name: log
persistentVolumeClaim:
claimName: myapp-log-pvc
</code></pre>
<p>And that's it. When your deployment starts, it'll create the pod with the container, mount a volume named <code>log</code> for the path <code>/var/log</code> (inside the container) and bound this volume to some PV matching the requirements of the PVC named <code>myapp-log-pvc</code>. As we've created the <code>myapp-log-pv</code> with the same <code>volumeMode</code>, <code>accessModes</code> and <code>storageClassName</code> fields and with more storage capacity then the required by <code>myapp-log-pvc</code>, they will be bound. So, your app logs will be stored in the path <code>/var/log/myapp</code> (field <code>spec.local.path</code> in the <code>myapp-log-pv</code> spec) inside the node running the pod.</p>
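<p>A quick way to confirm the claim bound and the mount works (names match the manifests above):</p>
<pre><code>kubectl get pv myapp-log-pv        # STATUS should be Bound
kubectl get pvc myapp-log-pvc      # STATUS should be Bound
kubectl exec deploy/myapp-deploy -- ls /var/log
</code></pre>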
<p>I hope it helps :)</p>
<p>Also, I'm kinda new in the kubernetes world, so please let me know if you notice I misunderstood something or if there is a better way to do this.</p>
|
<p>I have an openshift namespace (<code>SomeNamespace</code>), in that namespace I have several pods.</p>
<p>I have a route associated with that namespace (<code>SomeRoute</code>).</p>
<p>In one of the pods I have my Spring application. It has REST controllers.</p>
<p>I want to send a message to that REST controller; how can I do it?</p>
<p>I have a route URL: <code>https://some.namespace.company.name</code>. What should I find next?</p>
<p>I tried to send requests to <code>https://some.namespace.company.name/rest/api/route</code> but it didn't work. I guess I must somehow specify the pod in my URL so the route will redirect requests to a concrete pod, but I don't know how to do that.</p>
| <p>You don't need to specify the pod in the route.</p>
<p>The chain goes like this:</p>
<ul>
<li><code>Route</code> exposes a given port of a <code>Service</code></li>
<li><code>Service</code> selects some pod to route the traffic to by its <code>.spec.selector</code> field</li>
</ul>
<p>You need to check your <code>Service</code> and <code>Route</code> definitions.</p>
<p>Example service and route (including only the related parts of the resources):</p>
<p><code>Service</code></p>
<pre><code>spec:
ports:
- name: 8080-tcp
port: 8080
protocol: TCP
targetPort: 8080
selector:
<label-key>: <label-value>
</code></pre>
<p>Where <code>label-key</code> and <code>label-value</code> is any label key-value combination that selects your pod with the http application.</p>
<p><code>Route</code></p>
<pre><code>spec:
port:
targetPort: 8080-tcp <port name of the service>
to:
kind: Service
name: <service name>
</code></pre>
<p>When your app exposes some endpoint on <code>:8080/rest/api</code>, you can invoke it with <code><route-url>/rest/api</code></p>
<p>You can try it out with some example application (some I found randomly on github, to verify everything works correctly on your cluster):</p>
<ul>
<li><p>create a new app using s2i build from github repository: <code>oc new-app registry.redhat.io/openjdk/openjdk-11-rhel7~https://github.com/redhat-cop/spring-rest</code></p>
</li>
<li><p>wait until the s2i build is done and the pod is started</p>
</li>
<li><p>expose the service via route: <code>oc expose svc/spring-rest</code></p>
</li>
<li><p>grab the route url: <code>oc get route spring-rest -o jsonpath='{.spec.host}'</code></p>
</li>
<li><p>invoke the api: <code>curl -k <route-url>/v1/greeting</code></p>
</li>
<li><p>response should be something like: <code>{"id":3,"content":"Hello, World!"}</code></p>
</li>
</ul>
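<p>If the route still doesn't respond, a useful check is whether the Service's selector actually matches any pods (shown here for the example app above):</p>
<pre><code>oc get endpoints spring-rest     # should list pod IP:port pairs; empty means the selector matches nothing
oc get pods -l <label-key>=<label-value>
</code></pre>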
|
<p>Attempting to deploy autoscaling to my cluster, but the target shows "unknown". I have tried different metrics servers to no avail. I followed <a href="https://github.com/kubernetes/minikube/issues/4456" rel="nofollow noreferrer">this GitHub issue</a> even though I'm using kubeadm, not minikube, and it did not change the problem.</p>
<p>I also <a href="https://stackoverflow.com/questions/54106725/docker-kubernetes-mac-autoscaler-unable-to-find-metrics">followed this Stack post</a> with no success either.</p>
<p>I'm running Ubuntu 20.04 LTS.</p>
<p>Using Kubernetes version 1.23.5 for kubeadm, kubectl, etc.</p>
<p>Following the advice of the other Stack post, I grabbed the latest version via curl:</p>
<p><code>curl -L https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml</code></p>
<p>I edited the file to be as followed:</p>
<pre><code> spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=4443
- --kubectl-insecure-tls
- --kubelet-preferred-address-types=InternalIP
- --kubelet-use-node-status-port
- --metric-resolution=15s
image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
imagePullPolicy: IfNotPresent
</code></pre>
<p>I then ran kubectl apply -f components.yaml</p>
<p>Still did not work:</p>
<pre><code>$ kubectl get hpa
NAME                 REFERENCE                   TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
teastore-webui-hpa   Deployment/teastore-webui   <unknown>/50%   1         20        1          20h
</code></pre>
<p>Another suggestion was specifically declaring limits.</p>
<pre><code>$ kubectl autoscale deployment teastore-webui --max=20 --cpu-percent=50 --min=1
horizontalpodautoscaler.autoscaling/teastore-webui autoscaled
group8@group8:~/Downloads/TeaStore-master/examples/kubernetes$ kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
teastore-webui Deployment/teastore-webui <unknown>/50% 1 20 0 4s
teastore-webui-hpa Deployment/teastore-webui <unknown>/50% 1 20 1 20h
</code></pre>
<p>That also did not work.</p>
<p>Here is an excerpt of the deployment and service config that I'm trying to autoscale.</p>
<pre><code> spec:
containers:
- name: teastore-webui
image: descartesresearch/teastore-webui
ports:
- containerPort: 8080
env:
- name: HOST_NAME
value: "teastore-webui"
- name: REGISTRY_HOST
value: "teastore-registry"
resources:
requests:
cpu: "250m"
---
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: teastore-webui-hpa
labels:
app: teastore
spec:
maxReplicas: 20
minReplicas: 1
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: teastore-webui
metrics:
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 50
---
apiVersion: v1
kind: Service
metadata:
name: teastore-webui
labels:
app: teastore
run: teastore-webui
spec:
type: NodePort
ports:
- port: 8080
nodePort: 30080
protocol: TCP
selector:
run: teastore-webui
</code></pre>
<p>Based on other suggestions I have the resource specifically declared as cpu with 50% utilization, and CPU requests are set to 250 millicores.</p>
<pre><code> $kubectl describe hpa
Warning: autoscaling/v2beta2 HorizontalPodAutoscaler is deprecated in v1.23+, unavailable in v1.26+; use autoscaling/v2 HorizontalPodAutoscaler
Name: teastore-webui
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Sat, 02 Apr 2022 16:07:25 -0400
Reference: Deployment/teastore-webui
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): <unknown> / 50%
Min replicas: 1
Max replicas: 20
Deployment pods: 1 current / 0 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale
ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedComputeMetricsReplicas 29m (x12 over 32m) horizontal-pod-autoscaler invalid metrics (1 invalid out of 1), first error is: failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
Warning FailedGetResourceMetric 2m12s (x121 over 32m) horizontal-pod-autoscaler failed to get cpu utilization: unable to get metrics for resource cpu: unable to fetch metrics from resource metrics API: the server is currently unable to handle the request (get pods.metrics.k8s.io)
</code></pre>
| <p>There is a typo on line 6 of this YAML: it needs to be <code>- --kubelet-insecure-tls</code> and not <code>- --kubectl-insecure-tls</code>. The corrected block looks like this:</p>
<pre><code>spec:
containers:
- args:
- --cert-dir=/tmp
- --secure-port=4443
    - --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
- --kubelet-use-node-status-port
- --metric-resolution=15s
image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
imagePullPolicy: IfNotPresent
</code></pre>
<p>Noticed by checking the log files with</p>
<pre><code>kubectl logs -f deployment/metrics-server -n kube-system
</code></pre>
<p>Thank you David Maze for the comment.</p>
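<p>After fixing the flag and re-applying, you can confirm the metrics pipeline is healthy before re-checking the HPA (the label below assumes the upstream components.yaml):</p>
<pre><code>kubectl -n kube-system get pods -l k8s-app=metrics-server
kubectl get apiservice v1beta1.metrics.k8s.io    # AVAILABLE should be True
kubectl top pods
kubectl get hpa
</code></pre>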
|
<p>I have a use case where I want to check which pods are covered by a network policy, right now my focus is only k8s generated network policies.</p>
<p>What's the easiest way to do this? I know we can go through each network policy and from there filter out pods but a network policy can have multiple ways in which one uses the pod filtering. I am not sure if there is a way to tackle every possible pod filter on the network policy and then get the list of the pods from it.</p>
| <p>Using the <strong>podSelector</strong> field you can check all the pods that are covered by a Network Policy. Using the label mentioned in podSelector you can retrieve the list of pods which are using the NetworkPolicy.</p>
<p>Each NetworkPolicy includes a <strong>podSelector</strong> which selects the group of pods to which the policy applies. For example, a policy whose podSelector uses the label "role=db" covers exactly the pods labeled "role=db"; an empty podSelector selects all pods in the namespace.</p>
<p>Once a NetworkPolicy is in place, you can check the labels used in its podSelector by describing it:</p>
<pre><code>$ kubectl describe networkpolicy <networkpolicy-name>
</code></pre>
<p>The Pod Selector field in the output shows which labels the network policy applies to. You can then list all the pods carrying that label with:</p>
<pre><code>$ kubectl get pods -l <podSelector>
</code></pre>
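<p>If you want to check this across all policies in a namespace rather than one at a time, a small sketch (it assumes the policies only use <code>matchLabels</code>, not <code>matchExpressions</code>) is to print each policy's podSelector with jsonpath and then feed the labels to <code>kubectl get pods -l</code>:</p>
<pre><code># Print each NetworkPolicy name together with its podSelector labels
kubectl get networkpolicy -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podSelector.matchLabels}{"\n"}{end}'

# Then list the pods covered by a given selector, e.g. role=db
kubectl get pods -l role=db
</code></pre>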
<p>Refer <a href="https://kubernetes.io/docs/concepts/services-networking/network-policies/#networkpolicy-resource" rel="nofollow noreferrer">NetworkPolicy resource</a> for more information.</p>
|
<p>I have an application that gets deployed from a docker image to a Kubernetes pod. The image is built from the following Dockerfile:</p>
<pre><code>FROM openjdk:17.0.1-slim
USER root
WORKDIR /opt/app
ARG JAR_FILE
ARG INFO_APP_BUILD
RUN apt-get update
RUN apt-get install -y sshpass
RUN apt-get install -y openssh-client
COPY /build/libs/*SNAPSHOT.jar /opt/app/app.jar
ENV INFO_APP_BUILD=${INFO_APP_BUILD}
EXPOSE 8080
CMD java -jar /opt/app/app.jar
</code></pre>
<p>When the application gets deployed, the user gets set to a non-root user; this is out of my control.</p>
<p>Now the important part: when I try to launch an ssh command, I get the error message <code>no user exists for uid [random id here]</code>.</p>
<p>My goal is to configure the docker image to create a user and grant it permission to use the SSH command.</p>
| <blockquote>
<p>When the application gets deployed, the user gets set to a non-root user; this is out of my control.</p>
</blockquote>
<p>Inside the container, the user running <code>java -jar /opt/app/app.jar</code> is root, because of <code>USER root</code>.</p>
<p>Outside the container, on the host, a deployed application is almost never executed or accessed as <code>root</code>.</p>
<p>But the container can still make SSH requests to a server, provided that:</p>
<ul>
<li>the <a href="https://ubuntu.com/server/docs/service-openssh" rel="nofollow noreferrer">openssh service</a> is started</li>
<li>the container's /root/.ssh has the right public/private key pair (see the sketch after this list for one way to provide it)</li>
<li>the <code>~user/.ssh</code> folder on the target server has an <code>authorized_keys</code> file containing the matching public key.</li>
</ul>
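<p>For the key point above, a common pattern is to mount the SSH private key from a Kubernetes Secret instead of baking it into the image — a sketch, where the secret name <code>app-ssh-key</code> and the mount path are assumptions:</p>
<pre><code># Fragment of the pod spec (spec.template.spec in a Deployment)
volumes:
  - name: ssh-key
    secret:
      secretName: app-ssh-key
      defaultMode: 0400
containers:
  - name: app
    image: my-app:latest
    volumeMounts:
      - name: ssh-key
        mountPath: /etc/ssh-key
        readOnly: true
</code></pre>
<p>The key can then be referenced explicitly, e.g. <code>ssh -i /etc/ssh-key/id_rsa user@target</code>, which also avoids making <code>~/.ssh</code> itself a read-only mount.</p>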
<p>But if the user does not exist inside the container, you need to create it on <code>docker run</code>, as <a href="https://unix.stackexchange.com/a/613055/7490">in here</a>:</p>
<pre class="lang-sh prettyprint-override"><code>docker run -it --rm --entrypoint sh "$@" \
-c "[ -x /usr/sbin/useradd ] && useradd -m -u $(id -u) u1 -s /bin/sh || adduser -D -u $(id -u) u1 -s /bin/sh;
exec su - u1"
</code></pre>
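<p>If you control the Dockerfile but the runtime user is imposed on you, a minimal sketch is to bake a dedicated non-root user into the image instead — the user name <code>appuser</code> and UID <code>1000</code> are assumptions, so match them to whatever UID your platform actually assigns (e.g. via <code>runAsUser</code> in the pod's securityContext):</p>
<pre><code>FROM openjdk:17.0.1-slim
WORKDIR /opt/app

RUN apt-get update \
 && apt-get install -y --no-install-recommends sshpass openssh-client \
 && rm -rf /var/lib/apt/lists/*

# Create a non-root user with a known UID and a home directory for ~/.ssh
RUN useradd -m -u 1000 -s /bin/sh appuser

COPY /build/libs/*SNAPSHOT.jar /opt/app/app.jar
RUN chown -R appuser:appuser /opt/app

USER appuser
CMD ["java", "-jar", "/opt/app/app.jar"]
</code></pre>
<p>Note that if the platform injects a random UID that has no entry in <code>/etc/passwd</code>, the "no user exists for uid" error will persist: the UID baked into the image and the UID the pod actually runs as need to agree.</p>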
|
<p>Forum,</p>
<p>I am currently looking into Azure Synapse as an option for migrating our on-prem data architecture. I am excited by the functionality it offers - SQL Pools, Spark Pools, and the accompanying notebooks. I get that Synapse can function as a all in one data platform, where my data scientists and data analists can use its functionality to deliver insights at will. However, a large part of the work my team does is creating <em>data products</em>.</p>
<p>We currently have a kubernetes cluster with several stand-alone API's that perform data-science operations in the larger whole of our software. They can be thought of as microservices. Most of the ETL is done in our SQL-server, and the microservices in our K8S cluster (usually python + some python packages + FastAPI) typically get the required data from our SQL-server through some SQL-query with an ODBC connector.</p>
<p>Now my question is, how suitable is Synapse for such an architecture? Can I call upon the SQL-pool or spark-pool to do the heavy data-lifting from outside the azure environment, say from a kubernetes pod?</p>
| <p>Unfortunately you can't integrate Azure Synapse Analytics with Kubernetes Services.</p>
<p>While Synapse SQL performs SQL queries, Apache Spark handles batch/stream processing on big data. A SQL pool works with data stored in the dedicated SQL pool, while Spark SQL can be integrated with existing data preparation or data science projects that you may have in Azure Databricks or Azure Machine Learning Services.</p>
<p>Also, as per this <a href="https://sourceforge.net/software/compare/Azure-Synapse-vs-Azure-Kubernetes-Service-AKS/" rel="nofollow noreferrer">third-party document</a>, Azure Synapse Analytics can't integrate with Kubernetes Services.</p>
<p><a href="https://i.stack.imgur.com/w9BAM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/w9BAM.png" alt="enter image description here" /></a></p>
<p>As a workaround, you can copy/move your data from Kubernetes to Azure Services like Azure Dedicated SQL Pool, Azure Blob Storage or Azure Data Lake Storage and then integrate it with Azure Synapse pipeline or Spark Pool.</p>
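<p>As a rough sketch of that workaround (the storage account, container and SAS token below are placeholders), a pod could push its extracts to Blob Storage / ADLS with <code>azcopy</code>, and a Synapse pipeline or Spark pool could pick them up from there:</p>
<pre><code># Copy a local export (e.g. CSV/parquet produced by a microservice) to Blob Storage
azcopy copy "./exports/" \
  "https://<storage-account>.blob.core.windows.net/<container>/exports?<sas-token>" \
  --recursive
</code></pre>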
|
<p>I have a .NetCore C# project which performs an HTTP POST. The project is set up in Kubernetes and I've noticed the logs below:</p>
<pre><code>Heartbeat took longer than "00:00:01" at "02/22/2020 15:43:45 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:43:46 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:43:47 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:43:48 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:43:49 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:43:50 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:43:51 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:43:52 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:43:53 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:43:54 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:43:55 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:43:56 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:44:33 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:44:34 +00:00".
warn: Microsoft.AspNetCore.Server.Kestrel[22]
Heartbeat took longer than "00:00:01" at "02/22/2020 15:44:35 +00:00".
</code></pre>
<p>After some initial research, it seems this is a common symptom of thread-pool starvation. Accordingly, in November last year, I made the POST call asynchronous and also logged the max and available thread counts for monitoring purposes:</p>
<pre><code>ThreadPool.GetMaxThreads(out int workerThreads, out int completionPortThreads);
ThreadPool.GetAvailableThreads(out int workerThreadAvailable, out int completionPortThreadsAvailable);
_log.Info(new { message = $"Max threads = {workerThreads} and Available threads = {workerThreadAvailable}" });
</code></pre>
<p>Consistently over the past few months, the logging shows: <em>Max threads = 32767 and Available threads = 32766</em>. That seems fine; however, I'm still noticing the same Heartbeat warning, so I'm wondering whether this really is a thread-pool starvation issue. Might someone know what else is going on and whether this error is actually the result of something else? Any investigation/resolution tips would be much appreciated!</p>
| <p>This is a resource issue, as @andy pointed out in his response.</p>
<p>According to the OP, the fix was either to increase the server's CPU capacity (scale vertically) or to increase the number of instances of the app (scale horizontally).</p>
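<p>In Kubernetes terms, that translates to giving the pod more CPU and/or letting a HorizontalPodAutoscaler add replicas. A sketch, with names and numbers as placeholders for your own deployment:</p>
<pre><code># Fragment of the Deployment's container spec: raise the CPU headroom (vertical)
resources:
  requests:
    cpu: "500m"
  limits:
    cpu: "1"
---
# Scale out on CPU utilization (horizontal)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-api
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
</code></pre>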
|