<p>My pod metrics stopped working on a local <code>minikube</code> deployment. It appears similar to an <a href="https://github.com/kubernetes/kubernetes/issues/94281" rel="nofollow noreferrer">issue reported a while back</a>, but I don't see the same error messages in the logs. The node statistics are working fine:</p>
<pre><code>% kubectl top node
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
minikube   245m         6%     3199Mi          40%
</code></pre>
<p>However, the pod statistics result in an error:</p>
<pre><code>% kubectl top pod
W0506 11:12:00.129992 50236 top_pod.go:265] Metrics not available for pod default/node-5bbfbdd47-24z4c, age: 19h15m52.129979s
error: Metrics not available for pod default/node-5bbfbdd47-24z4c, age: 19h15m52.129979s
</code></pre>
<p>There are no errors on the logs:</p>
<pre><code>% kubectl logs -n kube-system metrics-server-6b76bd68b6-bb2mn
I0506 09:07:22.734381 1 serving.go:325] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
I0506 09:07:23.544196 1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0506 09:07:23.544214 1 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I0506 09:07:23.544267 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0506 09:07:23.544273 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0506 09:07:23.544284 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0506 09:07:23.544287 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0506 09:07:23.549006 1 secure_serving.go:197] Serving securely on [::]:4443
I0506 09:07:23.549351 1 dynamic_serving_content.go:130] Starting serving-cert::/tmp/apiserver.crt::/tmp/apiserver.key
I0506 09:07:23.549577 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0506 09:07:23.644335 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0506 09:07:23.644361 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0506 09:07:23.644374 1 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController
</code></pre>
<p>Inspecting the minikube stats, I do not see the pods listed.</p>
<pre><code>% kubectl get --raw /api/v1/nodes/minikube/proxy/stats/summary
{
"node": {
"nodeName": "minikube",
"systemContainers": [
{
"name": "kubelet",
# SOME OUTPUT REMOVED
},
"pods": []
}%
</code></pre>
<p>How do I get pod metrics working on minikube?</p>
<p>Environment:</p>
<ul>
<li>Intel Mac - Darwin Kernel Version 21.4.0</li>
<li>Docker version 20.10.10, build b485636</li>
<li>minikube version: v1.25.2 (driver=docker, 2cpu/6g/200g)</li>
<li>kubectl client v1.20.10</li>
<li>kubectl server v1.23.3</li>
</ul>
| <p>Either switching to <code>hyperkit</code> or making sure that there are enough resources available resolved the issue. Perhaps <code>metrics-server</code>, like <code>ingress-dns</code>, only works reliably with the <code>hyperkit</code> driver, or it simply requires a certain amount of available resources. Below are the steps that resolved my problem. Note that my minikube settings had to be wiped for this to work:</p>
<pre><code>% brew install hyperkit
% minikube delete
% minikube start --driver=hyperkit --memory 8192 --cpus 6 --disk-size 200g
</code></pre>
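<p>If <code>metrics-server</code> was installed through the minikube addon rather than a manual manifest, it may also need to be re-enabled after the rebuild before pod metrics show up again (a sketch using the addon name shipped with minikube):</p>
<pre><code>% minikube addons enable metrics-server
% kubectl top pod
</code></pre>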
|
<p>Having read the Quasar framework's description for <a href="https://quasar.dev/quasar-cli-webpack/handling-process-env" rel="nofollow noreferrer">Handling process.env</a>, I understand that it is possible to add environment variables <strong>when building the application</strong> for development or production.</p>
<blockquote>
<p>You can even go one step further. Supply it with values taken from the
quasar dev/build env variables:</p>
</blockquote>
<pre class="lang-js prettyprint-override"><code>// quasar.config.js
build: {
env: {
FOO: process.env.FOO,
}
}
</code></pre>
<p>Then, I can use that variable by using <code>process.env.FOO</code>.</p>
<p>For staging and production, however, I'm building a Docker image which runs an NGINX serving the final <code>dist/spa</code> folder. I'd like to pass an environment variable when deploying the application, so that I can configure the <code>FOO</code> variable depending on its value in the <code>docker-compose.yml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>// staging
services:
image: my-quasar-image
environment:
FOO: "STAGING"
// production
services:
image: my-quasar-image
environment:
FOO: "PROD"
</code></pre>
<p>I have found some blog post which mentions that you could create a custom <code>entrypoint.sh</code> for the Docker image which reads env variables and adds them to the <code>window</code> object but I wonder if there might be a more "elegant" solution.</p>
<p>The primary question is: Is it possible to pass in (Docker) environment variables before the application starts and which are then available on <code>process.env</code>?</p>
| <p>This is how I sorted out my requirement; it works perfectly for my use case.
A quick review of what I wanted to do: be able to pass environment variables via a docker-compose file to a Vue.js application, to allow different team members to test different development APIs depending on their assignment (localhost if running the server locally, api-dev, api-staging, api-prod).</p>
<p><strong>Update public/index.html</strong> to contain the following in the head:</p>
<pre><code><script>
// CONFIGURATIONS_PLACEHOLDER
</script>
</code></pre>
<p>There is no need to update vue.config.js as we are using the public folder for configuration.</p>
<p><strong>Create new file</strong> env.js to consume runtime variables (keep it inside src folder)</p>
<pre><code>export default function getEnv(name) {
return window?.configs?.[name] || process.env[name];
}
</code></pre>
<p><strong>Create new bash file</strong> set-env-variable.sh in the root folder of the app.</p>
<pre><code>#!/bin/sh
JSON_STRING='window.configs = { \
"VUE_APP_VAR1":"'"${VUE_APP_VAR1}"'", \
"VUE_APP_VAR2":"'"${VUE_APP_VAR2}"'" \
}'
sed -i "s@// CONFIGURATIONS_PLACEHOLDER@${JSON_STRING}@" /usr/share/nginx/html/index.html
exec "$@"
</code></pre>
<p><strong>Update docker file</strong> (assuming it's in the root folder of your vue app)</p>
<pre><code># build stage
FROM node:lts-alpine as build-stage
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run build
# production stage
FROM nginx:stable-alpine as production-stage
COPY --from=build-stage /app/dist /usr/share/nginx/html
COPY ./set-env-variable.sh /docker-entrypoint.d
RUN chmod +x /docker-entrypoint.d/set-env-variable.sh
RUN dos2unix /docker-entrypoint.d/set-env-variable.sh
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
</code></pre>
<p><strong>Deployment</strong></p>
<pre><code>vue-app:
....
volumes:
- "./nginx/templates/:/etc/nginx/templates/"
environment:
VUE_APP_VAR1: my-app
VUE_APP_VAR2: 8080
</code></pre>
<p><strong>Consuming variables</strong> in vue app</p>
<pre><code>import getEnv from "./service/env.js";
var myVar = getEnv("VUE_APP_VAR1");
</code></pre>
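<p>For a quick local test without docker-compose, the same runtime variables can be passed straight to <code>docker run</code> (the image name here is a placeholder):</p>
<pre><code>docker run --rm -p 8080:80 \
  -e VUE_APP_VAR1=my-app \
  -e VUE_APP_VAR2=8080 \
  my-vue-image
</code></pre>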
|
<p>Going through blogs/official sites, I installed kubectl and minikube. After successful installation of both, I executed the following command.</p>
<pre><code>minikube start --driver=hyperv
</code></pre>
<p>After executing the above command I am stuck and the process does not complete at all, as shown in the screenshot below.
<a href="https://i.stack.imgur.com/6N4IK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6N4IK.png" alt="enter image description here" /></a></p>
<p>The process has been running in step:4 (Updating the running hyperv "minikube" VM...) for more than 30 minutes.</p>
<p>Please help me to resolve this as I just started learning Kubernetes.</p>
<p>Thanks in advance.</p>
| <p>Maybe this can help (from here <a href="https://stackoverflow.com/questions/56327843/minikube-is-slow-and-unresponsive">Minikube is slow and unresponsive</a>):</p>
<p><strong>1)</strong> Debug issues with minikube by adding the <code>-v</code> flag and setting the debug level (0, 1, 2, 3, 7).</p>
<p>For example: <code>minikube start --v=1</code> sets the output to INFO level.<br/>
More detailed information <a href="https://github.com/kubernetes/minikube/blob/master/docs/debugging.md" rel="nofollow noreferrer">here</a>.</p>
<p><strong>2)</strong> Use the logs command: <code>minikube logs</code></p>
<p><strong>3)</strong> Because Minikube runs inside a virtual machine, it is sometimes better to delete it and start it again (it helped in this case):</p>
<pre><code>minikube delete
minikube start
</code></pre>
<p><strong>4)</strong> It might get slow due to lack of resources.</p>
<p>By default, Minikube uses 2048MB of memory and 2 CPUs. More details about this can be found <a href="https://github.com/kubernetes/minikube/blob/232080ae0cbcf9cb9a388eb76cc11cf6884e19c0/pkg/minikube/constants/constants.go#L97" rel="nofollow noreferrer">here</a>.
In addition, you can instruct Minikube to allocate more using the command <code>minikube start --cpus 4 --memory 8192</code>.</p>
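<p>Combining points 1, 3 and 4 for the hyperv driver from the question, a possible sequence (the resource values are only an example) would be:</p>
<pre><code>minikube delete
minikube start --driver=hyperv --cpus 4 --memory 8192 --v=1
</code></pre>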
|
<p>I have a website developed with WordPress. Recently I tried to activate Google Cloud CDN for this website. My hosting provider is not Google. After setting up the CDN it took almost 3 hours for the SSL certificate to become active. I replaced the IP in the website's A record with the one I created. I get this error:</p>
<pre><code>ERR_TOO_MANY_REDIRECT
</code></pre>
<p>I disabled all plugins and renamed .htaccess. But I read an article on the TargetTrend website, published a year ago, that says:</p>
<blockquote>
<p>If you are not hosting your site with Google Cloud Platform (GCP), you
will be unable to use Google Cloud CDN. This is because Google Cloud
CDN only works with Google load balancer and cloud storage origin.</p>
</blockquote>
<p>Has anyone tried to use Google Cloud CDN for a WordPress website that is not hosted on Google Cloud?</p>
<p><strong>UPDATED: Adding the process I followed.</strong></p>
<p>To make my question more clear I follow these steps:</p>
<ol>
<li><p>I did not have google cloud account, so I signed up.</p>
</li>
<li><p>Network services >> Cloud CDN - here I was asked to enable Compute Engine, which I did; later
I returned to <em>Network services >> Cloud CDN</em></p>
</li>
<li><p>I click on <em>Add Origin</em>.</p>
</li>
<li><p>I set a name and chose <em>custom origin</em>; as <em>IP
address/Domain Name</em> I typed example.nl and <em>Port</em> 80</p>
</li>
<li><p>I click on <em>Create new load balancer for me</em> and type
<em>my-nl-website</em></p>
</li>
<li><p>under <em>Custom request headers</em> , I clicked on Add header, <em>Host</em>
as <em>header name</em>, and <em>example.nl</em> as <em>header value 1</em> and click on <em>DONE</em></p>
</li>
<li><p>I went to Network services >> Load Balancer, where I clicked on
<em>my-nl-website</em>; there was an IP, which I copied, and I added an extra A record for this IP</p>
</li>
<li><p>Later I got a <code>this website is not secure</code> error, so I edited the load balancer, created a new IP, selected a Google-managed certificate, and waited until the SSL certificate was created.</p>
</li>
<li><p>Now, instead of the website not being secure, I get <em>redirected too many times</em>.</p>
</li>
<li><p>Currently, based on what I read in the documentation, I am trying to create a <em>load balancer</em> and <em>network endpoint group</em> manually and then try <em>Cloud CDN</em>. I hope this manual creation solves my problem.</p>
</li>
</ol>
| <p>Yes, you can use Cloud CDN with an <strong>external</strong> origin as well.</p>
<p>We were running it that way before we moved WordPress to GKE. For the redirect/SSL error, try adding the snippet below to <code>wp-config.php</code>, and also make sure that in the database the <code>wp_options</code> table has the site URL using <strong>HTTPS</strong>, not <strong>HTTP</strong>:</p>
<pre><code>if ( isset( $_SERVER['HTTP_X_FORWARDED_PROTO'] ) && 'https' == $_SERVER['HTTP_X_FORWARDED_PROTO'] ) {
$_SERVER['HTTPS'] = 'on';
}
</code></pre>
<p>In Cloud CDN you have options for a <code>backend service</code>, a <code>cloud bucket</code> and a <code>custom origin</code>.</p>
<p><a href="https://i.stack.imgur.com/vdrtP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vdrtP.png" alt="enter image description here" /></a></p>
<p>Ref document : <a href="https://cloud.google.com/cdn/docs/setting-up-cdn-with-ex-backend-internet-neg" rel="nofollow noreferrer">https://cloud.google.com/cdn/docs/setting-up-cdn-with-ex-backend-internet-neg</a></p>
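<p>As a rough sketch of the manual NEG setup mentioned in step 10 of the question (the NEG name and the <code>example.nl</code> origin are placeholders; check the exact flags against the linked document):</p>
<pre><code>gcloud compute network-endpoint-groups create wordpress-origin-neg \
    --global \
    --network-endpoint-type=internet-fqdn-port

gcloud compute network-endpoint-groups update wordpress-origin-neg \
    --global \
    --add-endpoint="fqdn=example.nl,port=443"
</code></pre>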
|
<p>I'm still a noob, so go gentle on me please!</p>
<p>I have an eks cluster running with this node group configs:</p>
<pre><code>resource "aws_eks_node_group" "this" {
cluster_name = aws_eks_cluster.this.name
node_group_name = local.cluster_name
node_role_arn = aws_iam_role.eks_node.arn
subnet_ids = aws_subnet.this.*.id
instance_types = ["t2.micro"]
scaling_config {
desired_size = 2
max_size = 4
min_size = 2
}
# Optional: Allow external changes without Terraform plan difference
lifecycle {
ignore_changes = [scaling_config[0].desired_size]
}
depends_on = [
aws_iam_role_policy_attachment.eks_AmazonEKSWorkerNodePolicy,
aws_iam_role_policy_attachment.eks_AmazonEKS_CNI_Policy,
aws_iam_role_policy_attachment.eks_AmazonEC2ContainerRegistryReadOnly,
]
}
</code></pre>
<p>my scaling configs are:</p>
<pre><code> scaling_config {
desired_size = 2
max_size = 4
min_size = 2
}
</code></pre>
<p>and I successfully can deploy <code>2</code> nginx replicas with the following configs:</p>
<pre><code>resource "kubernetes_deployment" "nginx" {
metadata {
name = "nginx"
labels = {
App = "Nginx"
}
}
spec {
replicas = 2
selector {
match_labels = {
App = "Nginx"
}
}
template {
metadata {
labels = {
App = "Nginx"
}
}
spec {
container {
image = "nginx:1.7.8"
name = "nginx"
port {
container_port = 80
}
resources {
limits = {
cpu = "0.5"
memory = "512Mi"
}
requests = {
cpu = "250m"
memory = "50Mi"
}
}
}
}
}
}
}
</code></pre>
<p>but when I scale my replicas to <code>4</code> the pods are created but in a pending state with the following reason:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 18s (x2 over 108s) default-scheduler 0/2 nodes are available: 2 Too many pods.
</code></pre>
<p>I tried ignoring <code>desired_size</code> in <code>scaling_config</code>, but that didn't help resolve the issue.</p>
<p>I believe I'm missing a crucial understanding of how <code>scaling_config</code>, the scaling group it creates, and k8s deployment replicas work together. Any guidance to help me understand what's going on will be highly appreciated. Thanks a lot in advance.</p>
<p><a href="https://github.com/ehabshaaban/deploy-nginx/tree/eks" rel="nofollow noreferrer">https://github.com/ehabshaaban/deploy-nginx/tree/eks</a></p>
| <p>According to the message <code>0/2 nodes are available: 2 Too many pods.</code>, no more pods can be placed on your nodes. In EKS, the maximum number of pods that can be placed on a node depends on several things, mainly the <code>instance type</code> and the <code>cni</code>. For the default limits, you can refer to this document: <a href="https://github.com/awslabs/amazon-eks-ami/blob/master/files/eni-max-pods.txt" rel="noreferrer">eni-max-pods</a></p>
<p>To solve your issue, you can increase the <code>desired_size</code> from 2 to 3, so the pending pods can be scheduled onto the new node.</p>
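<p>You can check the per-node pod limit directly from each node's allocatable resources, for example:</p>
<pre><code>kubectl get nodes -o custom-columns=NAME:.metadata.name,MAX_PODS:.status.allocatable.pods
</code></pre>
<p>For a <code>t2.micro</code> this limit is only 4 pods per node, and some of those slots are already taken by system pods such as kube-proxy, CoreDNS and the VPC CNI, which is why 4 replicas do not fit on 2 nodes.</p>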
|
<p>I'm working with Minikube to build a full-stack K8s application using React as the frontend and ASP.NET Core as the backend.
Here is my configuration.</p>
<p>Deployments and Services</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ServiceAccount
metadata:
name: web-frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend-deployment
labels:
app: frontend
spec:
replicas: 1
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app:
frontend
spec:
serviceAccountName: web-frontend
containers:
- name: frontend
image: frontend
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: frontend-svc
spec:
selector:
app: frontend
ports:
- protocol: TCP
port: 80
targetPort: 80
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: backend
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: backend-deployment
labels:
app: backend
spec:
replicas: 1
selector:
matchLabels:
app: backend
template:
metadata:
labels:
app:
backend
spec:
serviceAccountName: backend
containers:
- name: backend
image: backend
ports:
- containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
name: backend
spec:
selector:
app: backend
ports:
- protocol: TCP
port: 5000
targetPort: 5000
</code></pre>
<p>Dockerfiles for the frontend</p>
<pre><code>
FROM node:alpine as build-image
WORKDIR /app
COPY package.json ./
COPY package-lock.json ./
RUN npm i
COPY . .
CMD ["npm", "run", "start"]
</code></pre>
<p>This is instead my Ingress</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: frontend-backend-ingress
annotations:
# nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
ingressClassName: nginx
rules:
- http:
paths:
- path: /?(.*)
pathType: Prefix
backend:
service:
name: frontend-svc
port:
number: 80
- path: /api/?(.*)
pathType: Prefix
backend:
service:
name: backend
port:
number: 5000
</code></pre>
<p>However, when I type <code>minikube tunnel</code> to expose the ingress IP locally, I can reach the frontend, but when the frontend makes a <code>fetch</code> request to <code>/api/something</code>, in the browser console I get <code>GET http://localhost/api/patients/ 404 (Not Found)</code> and an error <code>SyntaxError: Unexpected token < in JSON at position 0</code>.</p>
<p>Moreover, If I change the Ingress in this way</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: frontend-backend-ingress
annotations:
# nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
ingressClassName: nginx
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: frontend-svc
port:
number: 80
- path: /api/
pathType: Prefix
backend:
service:
name: backend
port:
number: 5000
</code></pre>
<p>Then I can issue <code>curl localhost/api/something</code> and I get the JSON result, but when the frontend tries to contact the backend I get</p>
<pre><code>GET http://localhost/api/patients/ 500 (Internal Server Error)
SyntaxError: Unexpected end of JSON input
at main.358f50ad.js:2:473736
at s (main.358f50ad.js:2:298197)
at Generator._invoke (main.358f50ad.js:2:297985)
at Generator.next (main.358f50ad.js:2:298626)
at Al (main.358f50ad.js:2:439869)
at a (main.358f50ad.js:2:440073)
</code></pre>
<p>This looks strange, because if I run the frontend and the backend outside Kubernetes everything works fine and the React application correctly fetches the result from the backend (of course using the <code>proxy</code> setting inside <code>package.json</code>).</p>
| <p>To contact or link apps together, you can use their native Kubernetes FQDN (try to ping or telnet it if you want to test the connection). Here is how it works:
the default FQDN of any service is:</p>
<pre><code> <service-name>.<namespace>.svc.cluster.local.
</code></pre>
<p>In your above example, you should be able to contact your backend service from your frontend one with:</p>
<pre><code>backend.YOURNAMESPACENAME.svc.cluster.local:5000
</code></pre>
<p>For services in the same namespace, you don't need to use the FQDN to access them; the service name alone is enough:</p>
<pre><code>backend:5000
</code></pre>
<p>I don't know where exactly you configure the link between the frontend and backend, but you should turn this link into a variable and add the variable definition to the Kubernetes manifest (for example as an environment variable on the frontend deployment).</p>
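<p>A quick way to test the in-cluster connectivity from a throwaway pod (the namespace and path below are assumptions based on the question):</p>
<pre><code>kubectl run -it --rm conn-test --image=busybox --restart=Never -- \
  wget -qO- http://backend.default.svc.cluster.local:5000/api/patients/
</code></pre>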
|
<p>We have a dozen of services exposed using a ingress-nginx controller in GKE.</p>
<p>In order to route the traffic correctly on the same domain name, we need to use a rewrite-target rule.</p>
<p>The services worked well without any maintenance since their launch in 2019, that is until recently; when cert-manager suddenly stopped renewing the Let's Encrypt certificates, we "resolved" this by temporarily removing the "tls" section from the ingress definition, forcing our clients to use the http version.</p>
<p>After that we removed all traces of cert-manager attempting to set it up from scratch.</p>
<p>Now, the cert-manager is creating the certificate signing request, spawns an acme http solver pod and adds it to the ingress, however upon accessing its url I can see that it returns an empty response, and not the expected token.</p>
<p><strong>This has to do with the rewrite-target annotation that messes up the routing of the acme challenge.
What puzzles me the most, is that this used to work before. (It was set up by a former employee)</strong></p>
<p>Disabling rewrite-target is unfortunately not an option, because it will stop the routing from working correctly.</p>
<p>Using dns01 won't work because our ISP does not support programmatic changes of the DNS records.</p>
<p><strong>Is there a way to make this work without disabling rewrite-target?</strong></p>
<p>P.S.
Here's a number of similar cases reported on Github:</p>
<ul>
<li><a href="https://github.com/cert-manager/cert-manager/issues/2826" rel="nofollow noreferrer">https://github.com/cert-manager/cert-manager/issues/2826</a></li>
<li><a href="https://github.com/cert-manager/cert-manager/issues/286" rel="nofollow noreferrer">https://github.com/cert-manager/cert-manager/issues/286</a></li>
<li><a href="https://github.com/cert-manager/cert-manager/issues/487" rel="nofollow noreferrer">https://github.com/cert-manager/cert-manager/issues/487</a></li>
</ul>
<p>None of them help.</p>
<p>Here's the definition of my ClusterIssuer</p>
<pre><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: [email protected]
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
      - http01:
          ingress:
            class: nginx
</code></pre>
| <p>Please share the ClusterIssuer or Issuer you are using.</p>
<p><strong>ingressClass</strong></p>
<blockquote>
<p>If the <strong>ingressClass</strong> field is specified, <code>cert-manager</code> will create
new <strong>Ingress</strong> resources in order to route traffic to the
<code>acmesolver</code> pods, which are responsible for responding to <strong>ACME</strong>
challenge validation requests.</p>
</blockquote>
<p>Ref : <a href="https://cert-manager.io/v0.12-docs/configuration/acme/http01/#ingressclass" rel="nofollow noreferrer">https://cert-manager.io/v0.12-docs/configuration/acme/http01/#ingressclass</a></p>
<p>Usually you don't see the HTTP solver challenge for long; it comes up and gets removed again quickly if DNS and HTTP are working fine.</p>
<p>Also, make sure your ingress doesn't have an SSL-redirect annotation; that can also be a reason behind <strong>certs</strong> not getting generated.</p>
<p>Did you try checking the other cert-manager objects, like the order and the certificate request status? With <code>kubectl describe challenge</code>, are you getting a <strong>404</strong> there?</p>
<p>If you are retrying continuously, there is a chance you have hit Let's Encrypt's rate limit for certificate requests.</p>
<p>Troubleshooting : <a href="https://cert-manager.io/docs/faq/troubleshooting/#troubleshooting-a-failed-certificate-request" rel="nofollow noreferrer">https://cert-manager.io/docs/faq/troubleshooting/#troubleshooting-a-failed-certificate-request</a></p>
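<p>To walk the chain of cert-manager objects and see where the request is stuck, something like the following usually helps (the resource names are placeholders):</p>
<pre><code>kubectl get certificate,certificaterequest,order,challenge --all-namespaces
kubectl describe challenge <challenge-name> -n <namespace>
</code></pre>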
|
<p>I want to block outgoing traffic to an IP (e.g. a DB) with iptables in K8s.</p>
<p>I know that in K8s the iptables rules exist only at the node level,</p>
<p>and I'm not sure in which file the changes should be made or what commands or changes are required.</p>
<p>Please help me with this query.</p>
<p>Thanks.</p>
| <p>You could deploy Istio, and specifically the Istio egress gateway.</p>
<p>This way you will be able to manage outgoing traffic within the Istio manifests; a mesh-wide sketch is shown below.</p>
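<p>If a mesh-wide restriction is acceptable, Istio can also be told to block all outbound traffic to destinations that are not explicitly registered in the mesh (a sketch; note this affects every workload in the mesh, not just traffic to one IP):</p>
<pre><code>istioctl install --set meshConfig.outboundTrafficPolicy.mode=REGISTRY_ONLY
</code></pre>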
|
<p>If I run, for example, <code>kubectl logs --namespace kube-system kube-apiserver-XXXX | head -n 25</code> I can see the output with only a timestamp and no date. I can't tell if these are from the inception of the pod or not.</p>
<p>Generally speaking, how long do a pod's logs last in Kubernetes?</p>
| <p><a href="https://unofficial-kubernetes.readthedocs.io/en/latest/concepts/cluster-administration/logging/#:%7E:text=Kubernetes%20performs%20log%20rotation%20daily,five%20logging%20rotations%20per%20container." rel="nofollow noreferrer">From k8s unofficial docs</a></p>
<blockquote>
<p>Kubernetes performs log rotation daily, or if the log file grows beyond 10MB in size. Each rotation belongs to a single container; if the container repeatedly fails or the pod is evicted, all previous rotations for the container are lost. By default, Kubernetes keeps up to five logging rotations per container.</p>
</blockquote>
<p>The container runtime attaches the current timestamp to every line produced by the application. You can display these timestamps by using the --timestamps=true option while running kubectl logs.</p>
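<p>For example, applied to the command from the question:</p>
<pre><code>kubectl logs --namespace kube-system kube-apiserver-XXXX --timestamps=true | head -n 25
</code></pre>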
|
<p>I am trying GCP and GKE (Google Kubernetes Engine).
1) I created a cluster.
2) I opened Cloud Shell and used the command "kubectl get nodes".</p>
<p>I get this error:
"The connection to the server localhost:8080 was refused - did you specify the right host or port?"</p>
<p>How can I solve this? Thanks.</p>
| <p>You must have a local Kubernetes config file that is used by <code>kubectl</code> to access cluster(s).</p>
<p>Conventionally, the config file is called <code>config</code> (YAML) and is found in <code>${HOME}/.kube/config</code>.</p>
<p>Google provides a way to generate a config file (and context) for you. To do this run <code>gcloud container clusters get-credentials ...</code>. You'll need to fill in the blanks of the cluster name and probably the project, zone/region etc.:</p>
<pre class="lang-sh prettyprint-override"><code>gcloud container clusters get-credentials ${CLUSTER_NAME} \
--project=${PROJECT} \
--region=${REGION}
</code></pre>
<p>After running this command, you should be able to <code>more ${HOME}/.kube/config</code> and you should be able to access the cluster using e.g. <code>kubectl get nodes</code>.</p>
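<p>If you are unsure of the exact cluster name or location, you can list the clusters first and then confirm which context was generated:</p>
<pre class="lang-sh prettyprint-override"><code>gcloud container clusters list --project=${PROJECT}
kubectl config current-context
</code></pre>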
|
<p>I'm using ArgoCD for gitops deployments right now and I have k8s manifest files that require configMap values to be built (via helm right now). However, ArgoCD does not allow me to use helm lookup feature so I can't have that in my helm chart.</p>
<p>I have a helm chart that uses helm lookup to pull values from a configMap to build a k8s manifest file (pretty common). The problem is that argocd does not allow me to use the helm lookup functionality when it is deploying the helm chart. Has anyone come across this situation where you need to build a manifest file using values in a configMap while using ArgoCD? How did you accomplish that? Any help is greatly appreciated.</p>
| <p>I think this is a well-known issue with using Helm lookup under Argo CD; the thread below should be helpful for you.
<a href="https://github.com/argoproj/argo-cd/issues/5202" rel="nofollow noreferrer">https://github.com/argoproj/argo-cd/issues/5202</a></p>
|
<p>When running the following command to update the kubernetes config to connect to an EKS cluster, I get the error "'NoneType' object is not iterable".</p>
<pre><code>aws eks update-kubeconfig --region us-east-2 --name <cluster name>
</code></pre>
| <p>Do you have an existing k8s config? Running</p>
<p><code>aws eks update-kubeconfig --region <region> --name <cluster name></code></p>
<p>Generates a ~/.kube/config.</p>
<p>If you already have a ~/.kube/config, there could be a conflict between the file to be generated, and the file that already exists that prevents them from being merged.</p>
<p>If you have a ~/.kube/config file, and you aren't actively using it, running</p>
<p><code>rm ~/.kube/config</code></p>
<p>and then attempting</p>
<p><code>aws eks update-kubeconfig --region us-east-2 --name <cluster name></code></p>
<p>afterwards will likely solve your issue.</p>
<p>If you are using your <code>~/.kube/config</code> file, rename it to something else so you can use it later, and then run the eks command again.</p>
<p>See a similar issue here:
<a href="https://github.com/aws/aws-cli/issues/4843" rel="noreferrer">https://github.com/aws/aws-cli/issues/4843</a></p>
|
<p>Is there a way to connect a C# thick client running on a Windows machine outside of Kubernetes to an Apache Ignite cluster whose nodes are inside Kubernetes?</p>
<p>The article below says it is not possible, but it was written in 2020. We are looking at Scenario 3 from this article:
<a href="https://dzone.com/articles/apache-ignite-on-kubernetes-things-to-know-about" rel="nofollow noreferrer">https://dzone.com/articles/apache-ignite-on-kubernetes-things-to-know-about</a>
I hope there have been some enhancements for Scenario 3 since then.</p>
<p>We don't want to convert our C# thick client to a thin client, as we use the Data Streamer to insert data in bulk and the same functionality is not available with the thin client.</p>
| <p>The recommendation here would be to use the thin-client. The .net thin-client does have the data streamer API.</p>
<p>There is no straight-forward way to connect a thick-client node from outside Kubernetes to a cluster inside it.</p>
|
<p>I have a pod with two containers: one creates a file and one deletes it. I was able to create the file but not to delete it. I want it to delete the files every 2 hours; how can I make this work in a clean way? We don't want to use a CronJob...</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: its
spec:
  volumes:
    - name: common
      emptyDir: {}
  containers:
    - name: 1st
      image: nginx
      volumeMounts:
        - name: common
          mountPath: /usr/share/nginx/html
    - name: 2nd
      image: debian
      volumeMounts:
        - name: common
          mountPath: /html
      command: ["/bin/sh", "-c"]
      args:
        - while true; do
          date >> /html/index.html;
          sleep 7200;
          done
</code></pre>
| <p>This is what works for me</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mc1
spec:
  volumes:
    - name: html
      emptyDir: {}
  containers:
    - name: 1st
      image: nginx
      command: ["/bin/sh", "-c"]
      args:
        - while true; do
          touch /usr/share/nginx/html/test.txt;
          ls /usr/share/nginx/html/;
          echo "file created container 1";
          sleep infinity;
          done
      volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
    - name: 2nd
      image: debian
      volumeMounts:
        - name: html
          mountPath: /html
      command: ["/bin/sh", "-c"]
      args:
        - while true; do
          ls /html;
          rm /html/test.txt;
          echo "container 2 - file removed";
          ls /html;
          sleep 7200;
          done
</code></pre>
<p>I am creating a file from <code>container 1</code> and it is getting removed by <code>container 2</code>. What error are you getting?</p>
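<p>You can also check the shared volume from outside the pod to confirm whether the file is there at a given moment, for example:</p>
<pre><code>kubectl exec mc1 -c 2nd -- ls /html
</code></pre>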
|
<p>I need to get the current date in Helm with the following format YY-MM-DD. I'm trying <code>{{ now | date }}</code> but this returns an error because the number of arguments expected by date is two, not one.</p>
| <p>Try formatting the date with Go's reference-time layout (the reference time is <code>2006-01-02 15:04:05</code>). For the <code>YY-MM-DD</code> format from the question the layout would be <code>{{ now | date "06-01-02" }}</code>; for a full timestamp:</p>
<pre><code>{{ now | date "20060102150405" }}
</code></pre>
<p>Source <a href="http://masterminds.github.io/sprig/date.html" rel="nofollow noreferrer">here</a>.</p>
<p>Or add the following annotation:</p>
<pre><code>annotations:
  deploymentTime: {{ now | date "2006-01-02T15:04:05" }}
</code></pre>
|
<p>I want to delete a specific file in the following container from a CronJob. The problem is that when I run exec I get an error. How can I exec into a distroless container (k8s v1.22.5) and delete the file from a CronJob? Which options do we have?</p>
<p>this is the deployment</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: distro
  labels:
    app: distro
spec:
  replicas: 1
  selector:
    matchLabels:
      app: distro
  template:
    metadata:
      labels:
        app: distro
    spec:
      containers:
        - name: edistro
          image: timberio/vector:0.21.X-distroless-libc
          ports:
            - containerPort: 80
</code></pre>
<p>what I tried is</p>
<pre class="lang-sh prettyprint-override"><code>kubectl exec -i -t -n apits aor-agent-zz -c tor "--" sh -c "clear; (bash || ash || sh)"
</code></pre>
<p>The error is:</p>
<pre class="lang-sh prettyprint-override"><code>error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec
</code></pre>
<p>I tried it out like following</p>
<pre class="lang-sh prettyprint-override"><code>kubectl debug -it distro-d49b456cf-t85cm --image=ubuntu --target=edistro --share-processes -n default
</code></pre>
<p><strong>And got error:</strong></p>
<pre><code>Targeting container "edistro". If you don't see processes from this container it may be because the container runtime doesn't support this feature. Defaulting debug container name to debugger-fvfxs. error: ephemeral containers are disabled for this cluster (error from server: "the server could not find the requested resource").
</code></pre>
<p>I guess (though I'm not sure) that our container runtime doesn't support this feature. Which options do we have?</p>
<p><strong>The answer below</strong> doesn't solve the issue. I need a way, from <strong>outside the distroless pod</strong>, to access it and delete a specific file there; how can I do this?</p>
| <p>The point of using <code>distro-less</code> is to have a minimal amount of tools/software packaged in the image. This means the removal of unnecessary tools like <code>shell</code> from the image.</p>
<p>You may be able to work around it as follows, though it depends on your objective:</p>
<pre><code>kubectl debug -it <POD_TO_DEBUG> --image=<helper-image> --target=<CONTAINER_TO_DEBUG> --share-processes
</code></pre>
<p>Eg:</p>
<pre><code>kubectl debug -it distro-less-pod --image=ubuntu --target=edistro --share-processes
</code></pre>
|
<p><s>I set up a Kubernetes cluster in my private network and managed to deploy a test pod:</p>
<p>now I want to expose an external ip for the service:</p>
<p>but when I run:</p>
<pre><code>kubectl get deployments kube-verify
</code></pre>
<p>i get:</p>
<pre><code>Error from server (NotFound): deployments.apps "kube-verify" not found
</code></pre>
</s>
<p><strong>EDIT</strong>
OK, I'm trying a new approach:
I have made a namespace called <code>verify-cluster</code>.</p>
<p>My deployment.yaml:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: verify-cluster
  namespace: verify-cluster
  labels:
    app: verify-cluster
spec:
  replicas: 1
  selector:
    matchLabels:
      app: verify-cluster
  template:
    metadata:
      labels:
        app: verify-cluster
    spec:
      containers:
        - name: nginx
          image: nginx:1.18.0
          ports:
            - containerPort: 80
</code></pre>
<p>and service.yaml:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: verify-cluster
  namespace: verify-cluster
spec:
  type: NodePort
  selector:
    app: verify-cluster
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30007
</code></pre>
<p>then I run:</p>
<pre><code>kubectl create -f deployment.yaml
kubectl create -f service.yaml
</code></pre>
<p>then checking</p>
<pre><code>kubectl get all -n verify-cluster
</code></pre>
<p>but then when I want to check the deployment with:</p>
<pre><code>kubectl get all -n verify-cluster
</code></pre>
<p>and get:</p>
<pre><code>Error from server (NotFound): deployments.apps "verify-cluster" not found
</code></pre>
<p>I hope that's better for reproduction?</p>
<p><strong>EDIT 2</strong>
when I deploy it to the default namespace it runs directly, so the issue must be something with the namespace</p>
| <p>First you need to list the deployments:</p>
<pre><code>$ kubectl get deployment
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
test-deployment   1/1     1            1           15m
</code></pre>
<p>If you have used any namespace then</p>
<pre><code>$ kubectl get deployment -n your-namespace
</code></pre>
<p>Then use the exact name in further commands, for example:</p>
<pre><code>kubectl scale deployment test-deployment --replicas=10
</code></pre>
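<p>If you are not sure which namespace the deployment ended up in, listing across all namespaces also helps:</p>
<pre><code>kubectl get deployments --all-namespaces
</code></pre>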
|
<p>Earlier today I increased my Docker Desktop resources, but ever since it restarted, Kubernetes has not been able to complete its startup. Whenever I try to run a kubectl command, I get <code>Unable to connect to the server: EOF</code> in response.</p>
<p>I had thought that it started because I hadn't deleted a Helm chart before adjusting the resource values in Settings, so those resources were assigned to the pods instead of the Kubernetes API server. But I have not been able to fix this issue.</p>
<p>This is what I have tried thus far:</p>
<ul>
<li>Restarting Docker again</li>
<li>Reset Kubernetes</li>
<li>Reset Docker to factory settings</li>
<li>Deleting the VM in hyper-v and restarting Docker</li>
<li>Uninstalling and reinstalling Docker Desktop</li>
<li>Deleting the pki folder and restart Docker</li>
<li>Set the Environment Variable for KUBECONFIG</li>
<li>Deleting .kube/config and restart</li>
<li>Another clean reinstall of Docker Desktop</li>
</ul>
<p>But Kubernetes does not complete its startup, so I still get <code>Unable to connect to the server: EOF</code> in response.</p>
<p>Is there anything I haven't tried yet?</p>
| <p>I'll share that what solved this for me was Docker Desktop settings feature for "<strong>reset kubernetes cluster</strong>". I know that @shenyongo said that a "reset kubernetes" didn't work, and I suppose they mean this.</p>
<p>But <strong>for the sake of other readers who may find this</strong>, I had this same error message (with Docker Desktop on Windows 11, using wsl2), and the solution for me was indeed to do this:</p>
<ol>
<li>open the Settings page (in Docker Desktop--right-click on it in the status tray)</li>
<li>then choose "Kubernetes" on the left</li>
<li>then choose "reset kubernetes cluster"</li>
</ol>
<p>Yes, that warns that "all stacks and kubernetes resources will be deleted", but as nothing else had worked for me (and I wasn't worried about losing much), I tried it, and it did the trick. In moments, all my k8s functionality was back to working.</p>
<p>As background, k8s had been working fine for me for some time. It was just that one day I found I was getting this error. I searched and searched and found lots of folks asking about it but not getting answers, let alone this answer. To be clear, like the OP here I had tried restarting Docker Desktop, restarting the host machine, even downloading and installing an available DD update (I was only a bit behind), and none of those worked. I didn't proceed to ALL the steps shenyongo did, as I thought I'd try this first, and the reset worked.</p>
<p>Hope that may help others. I realize some may fear losing something, but this helps stress the power of declarative vs imperative k8s configuration. It SHOULD be easy to recreate most everything if necessary. I realize it may not be so for everyone.</p>
|
<p>Hey, I'm trying to set up cross-account access for a role. <strong>I have 2 accounts: prod and non-prod</strong>,
and a <strong>bucket in the prod account</strong> that I'm trying to write files to from a non-prod role which is used as a service account in a k8s cluster.</p>
<p><strong>In the prod account I configured:</strong>
a role with the following policy (read/write access to the bucket):</p>
<pre><code>{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ListObjectsInBucket",
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource": [
"arn:aws:s3:::test2"
]
},
{
"Sid": "AllObjectActions",
"Effect": "Allow",
"Action": "s3:*Object",
"Resource": [
"arn:aws:s3:::test2/*"
]
}
]
}
</code></pre>
<p>and the following trust:</p>
<pre><code>{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::non-prod-AccountID:role/name-of-the-non-prod-role"
},
"Action": "sts:AssumeRole",
"Condition": {}
}
]
}
</code></pre>
<p><strong>In non-prod I configured:</strong></p>
<p>a role with the following policy:</p>
<pre><code> {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::prod-Account-ID:role/prod-role-name"
},
"Action": "sts:AssumeRole",
"Condition": {}
}
]
}
</code></pre>
<p>and trust as follows:</p>
<pre><code>{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::non-prod-accountID:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/1111111111111111111"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"oidc.eks.us-east-1.amazonaws.com/id/1111111111111111111:sub":
"system:serviceaccount:name-space:name-of-the-service-account"
}
}
}
]
}
</code></pre>
<p>serviceAccount annotation is:</p>
<pre><code>annotations:
  eks.amazonaws.com/role-arn: arn:aws:iam::non-prod-AccountID:role/non-prod-role-name
</code></pre>
<p>When running the command from inside the pod with the service account of the non-prod role:</p>
<pre><code>aws s3 cp hello.txt s3://test2/hello.txt
</code></pre>
<p>I'm getting:</p>
<pre><code>upload failed: ./hello.txt to s3://test2/hello.txt An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
</code></pre>
<p>By the way, the cluster is in yet another account (a devops account), if that's relevant. I have of course added the OIDC provider identity to both the non-prod and prod accounts as an identity provider.</p>
| <p>If you're getting the error <code>An error occurred (InvalidIdentityToken) when calling the AssumeRoleWithWebIdentity operation: No OpenIDConnect provider found in your account for $oidc_url</code> when trying to assume roles cross-account, but you can assume roles in your cluster account normally, here are some points:</p>
<p><strong>EKS ACCOUNT</strong></p>
<ol>
<li>Create a ServiceAccount</li>
</ol>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: $sa_name
  namespace: $eks_ns
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::$resource_account_id:role/$role_name
</code></pre>
<ol start="2">
<li>Annotate your deployment</li>
</ol>
<pre><code>spec.template.spec:
  serviceAccountName: $sa_name
</code></pre>
<ol start="3">
<li>Get info about your cluster OIDC Provider</li>
</ol>
<pre><code>aws iam get-open-id-connect-provider --open-id-connect-provider-arn arn:aws:iam::$eks_cluster_account_id:oidc-provider/$oidc_provider
</code></pre>
<p>3.1. The output will be like:</p>
<pre><code>{
"Url": "...",
"ClientIDList": ["..."],
"ThumbprintList": ["..."],
"CreateDate": "...",
"Tags": [...]
}
</code></pre>
<p>3.2. Take note of the outputs (<em>Url</em> and <em>ThumbprintList</em> especially)</p>
<p><strong>RESOURCE ACCOUNT</strong></p>
<ol>
<li>Add the provider (if you don't have it already), using the output from your cluster account</li>
</ol>
<pre><code>aws iam create-open-id-connect-provider --url $oidc_url --client-id-list sts.amazonaws.com --thumbprint-list $oidc_thumbprint
</code></pre>
<p>This should be enough to make the mentioned error stop. If you now get <code>An error occurred (AccessDenied) when calling the AssumeRoleWithWebIdentity operation: Not authorized to perform sts:AssumeRoleWithWebIdentity</code>, you're probably using the <em>$eks_cluster_account_id</em> in Principal.Federated instead of the <em>$resource_account_id</em> created in the previous step. So make sure you're using the ARN of the identity provider that belongs to the resource account, not the cluster account.</p>
<ol start="2">
<li>Create a role and a policy to access your resources with following trusted entities policy:</li>
</ol>
<pre><code>{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::$resource_account_id:oidc-provider/$oidc_provider"
},
"Action": "sts:AssumeRoleWithWebIdentity"
}
]
}
</code></pre>
<p>Also, there's no need to have two roles. One is enough.</p>
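<p>A quick way to confirm which identity the pod actually assumed is to run the following from inside the pod (it only needs the AWS CLI, which your pod already has):</p>
<pre><code>aws sts get-caller-identity
</code></pre>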
|
<p>When I try to install a Helm chart with a template file:</p>
<pre class="lang-yaml prettyprint-override"><code>{{ if eq .Release.Namespace "fedx-app-1100" }}{{ $nodePort := 30106 }}{{ end }}
apiVersion: v1
kind: Service
metadata:
name: mysql
namespace: {{ .Release.Namespace }}
labels:
app: mysql
spec:
type: NodePort
ports:
- port: 3306
targetPort: 3306
nodePort: {{ $nodePort }}
</code></pre>
<p>It produces an error:</p>
<pre class="lang-none prettyprint-override"><code>Error: parse error at (fedx-install/templates/mysql.yaml:21): undefined variable "$nodePort"
</code></pre>
<p>What causes this error? How can I restructure the template file to avoid it?</p>
| <p>A variable defined with <code>:=</code> inside an <code>{{ if }}</code> block goes out of scope at the matching <code>{{ end }}</code>, so <code>$nodePort</code> is undefined by the time you use it. Move the <code>{{ end }}</code> after the last use of the variable:</p>
<pre><code>{{ if eq .Release.Namespace "fedx-app-1100" }}{{ $nodePort := 30106 }}
apiVersion: v1
kind: Service
metadata:
  name: mysql
  namespace: {{ .Release.Namespace }}
  labels:
    app: mysql
spec:
  type: NodePort
  ports:
    - port: 3306
      targetPort: 3306
      nodePort: {{ $nodePort }}
{{ end }}
</code></pre>
|
<p>I am trying to patch and clear Node conditions on a worker node in an OpenShift and/or Kubernetes cluster. Patching isn't working, so I'm even trying workarounds, such as updating the key in etcd.</p>
<p>The main problem is that I created new node conditions and then removed them, but they are not removed from the list, even though they no longer exist and are not being updated by the controller.</p>
<pre><code>$ oc describe node node1.example.com
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
ExampleToRemove False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientDisk kubelet has sufficient disk space available
MemoryPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Wed, 13 Feb 2019 15:09:42 -0500
</code></pre>
| <p>Posting answer from comment as Community Wiki.</p>
<p>I found the fix: you can edit whatever you want in the node description by updating the etcd key <code>/kubernetes.io/minions/<node-name></code></p>
<p><strong>Edit:</strong>
Finally I found a way to patch and update a Node condition's type and status, add a new one, or even delete one.</p>
<p>Example:</p>
<pre><code>curl -k -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json-patch+json" -X PATCH https://APISERVER:6443/api/v1/nodes/NAME-OF-NODE-Update-Condition/status --data '[{ "op": "remove", "path": "/status/conditions/2"}]'
</code></pre>
<p>Note: each condition has an index number, so find out the index of the condition you want to change and then target it in <code>/status/conditions/</code>.</p>
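<p>To find the index, you can list the condition types in order first, for example:</p>
<pre><code>kubectl get node <node-name> -o jsonpath='{range .status.conditions[*]}{.type}{"\n"}{end}'
</code></pre>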
|
<p>I currently have Istio deployed to handle the ingress to my applications. I'm running into an issue where one app needs to be reachable via a certain URL such as "http://gateway/webapp1", and I need it to route internally to:
server-name1:8080/some/odd/name/path</p>
<p>The current match statement I have is:</p>
<pre><code>- match:
    - uri:
        prefix: /webapp1
  route:
    - destination:
        host: server-name1
        port:
          number: 8080
</code></pre>
<p>I am unable to figure out how to expand on the destination route to tell it to hit a different endpoint than just server-name1:8080. Is there a simple way to add a prefix to the underlying route/destination? Thank you.</p>
| <p>You can use an <strong>HTTP rewrite</strong> in the VirtualService to change the path before sending traffic to the destination (here rewritten to the internal path from the question):</p>
<pre><code>apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: ratings-route
spec:
  hosts:
    - ratings.prod.svc.cluster.local
  http:
    - match:
        - uri:
            prefix: /webapp1
      rewrite:
        uri: /some/odd/name/path
      route:
        - destination:
            host: server-name1
            port:
              number: 8080
</code></pre>
<p>Read more at official document : <a href="https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPRewrite" rel="nofollow noreferrer">https://istio.io/latest/docs/reference/config/networking/virtual-service/#HTTPRewrite</a></p>
|
<p>we have an application deployed on AWS EKS, with these components:</p>
<ul>
<li>Apache Artemis JMS</li>
<li>PostgreSQL</li>
<li>Kafka</li>
<li>and some application stateless pods made in node.js</li>
</ul>
<p>Which is the best approach to move the entire application from one nodegroup to another?</p>
<p>We were thinking to use the "kubectl drain" command and move the EBS manually to the new node.</p>
<p>Is there any better option?</p>
<p>The reason behind this request is that we started with 2 xlarge nodes and we want to move to 4 large nodes, also to have the application on all 3 AWS zones, because we are worried that if a node dies, AWS may start the node on a different zone and EBS disks will not be mounted.</p>
<p>Thanks for any advice</p>
| <p>I would just add nodeSelectors or nodeAffinity pointing at the new node group and then delete the running pods (so they will be rescheduled on the correct nodes).</p>
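<p>A rough sketch of the drain-based move mentioned in the question (node names are placeholders; note that pods backed by EBS volumes can only come back up on a node in the same availability zone as their volume):</p>
<pre><code>kubectl cordon <old-node-name>
kubectl drain <old-node-name> --ignore-daemonsets --delete-emptydir-data
</code></pre>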
|
<p>I am running Rancher Desktop on my Ubuntu laptop.
I have MongoDB running in a Kubernetes pod:</p>
<pre><code>$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/mongo-deployment-7fb46bd85-vz9th 1/1 Running 0 37m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 3d17h
service/mongo-service NodePort 10.43.132.185 <none> 27017:32040/TCP 37m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/mongo-deployment 1/1 1 1 37m
NAME DESIRED CURRENT READY AGE
replicaset.apps/mongo-deployment-7fb46bd85 1 1 1 37m
</code></pre>
<p><strong>So the node port of the mongo service is: 32040.</strong></p>
<p>I have found the local ip of the kubernetes node:</p>
<pre><code>$ kubectl get node -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
lima-rancher-desktop Ready control-plane,master 3d17h v1.23.6+k3s1 192.168.5.15 <none> Alpine Linux v3.15 5.15.32-0-virt containerd://1.5.11
</code></pre>
<p><strong>so the internal ip is: 192.168.5.15</strong></p>
<p>But when I try to connect to 192.168.5.15 on port 32040, I get <code>connection timed out</code>.</p>
<p>Could I have a hint on how to do this with Rancher Desktop?</p>
<p>thank you,
Andrei</p>
| <p>I found a solution: it seems that the IP returned by
<code>kubectl get node -o wide</code>
is not usable for accessing services on the Kubernetes node in Rancher Desktop (it does work in another Kubernetes cluster tool named "kind" (<a href="https://kind.sigs.k8s.io/" rel="nofollow noreferrer">https://kind.sigs.k8s.io/</a>)).</p>
<p>What is working for Rancher Desktop is to access the NodePort service directly on localhost, so in the example above: localhost:32040</p>
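<p>For example, you can verify the port is reachable from the host, or skip NodePorts entirely with a port-forward:</p>
<pre><code>curl -v telnet://localhost:32040
kubectl port-forward service/mongo-service 27017:27017
</code></pre>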
|
<p>This is more of an architecture question. I have a data engineering background and have been using airflow to orchestrate ETL tasks using airflow for a while. I have limited knowledge of containerization and kuberentes. I have a task to come up with a good practice framework for productionalizting our Data science models using an orchestration engine namely airflow.</p>
<p>Our Data science team creates many NLP models to process different text documents from various resources. Previously the models were created by an external team, which required us to create an Anaconda environment, install libraries in it, and run the model. Running the model was very manual: a data engineer would spin up an EC2 instance, set up the model, download the files to the EC2 instance, process the files using the model, and take the output for further processing.</p>
<p>We are trying to move away from this to an automated pipeline where we have an airflow dag that basically orchestrates this all. The point where I am struggling is the running the model part.</p>
<p>These are the logical steps I am thinking of doing. Please let me know if you think this would be feasible. All of this will be done in Airflow. Steps 2, 3 and 4 are the ones I am totally unsure how to achieve.</p>
<ol>
<li>Download files from ftp to s3</li>
<li>**Dynamically spin up a kubernetes cluster and create parallel pods based on the number of files to be processed.</li>
<li>Split files between those pods so each pod can only process its subset of files</li>
<li>Collate output of model from each pod into s3 location**</li>
<li>Do post processing on them</li>
</ol>
<p>I am unsure how I can spin up a Kubernetes cluster from Airflow at runtime, and especially how I split files between pods so each pod only processes its own chunk of files and pushes the output to a shared location.</p>
<p>The running of the model has two modes: daily and complete. Daily processes a delta of files that have been added since the last run, whereas complete is a historical reprocessing of the whole document catalogue that we run every 6 months. As you can imagine, the back catalogue requires a lot of parallel processing and many pods running in parallel to get through the number of documents.</p>
<p>I know this is a very generic post, but my lack of Kubernetes knowledge is the issue, and any help in pointing me in the right direction would be appreciated.</p>
| <p>Normally people schedule containers or pods as needed on top of an existing k8s cluster; however, I am not sure how frequently you need to create the k8s cluster itself.</p>
<p><strong>K8s cluster setup :</strong></p>
<p>You can create the K8s cluster in different ways that are more dependent on the cloud provider and options they provide like SDK, CLI, etc.</p>
<p>Here is one example you can use this option with <strong>airflow</strong> to create the AWS <strong>EKS</strong> clusters : <a href="https://leftasexercise.com/2019/04/01/python-up-an-eks-cluster-part-i/" rel="nofollow noreferrer">https://leftasexercise.com/2019/04/01/python-up-an-eks-cluster-part-i/</a></p>
<p>Most cloud providers support a CLI option, so you may also be able to create the K8s cluster using just the CLI, for example:</p>
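<p>A sketch of how small that can be with EKS (this uses <code>eksctl</code>, a separate CLI from <code>aws eks</code>; the name, region and node count are placeholders):</p>
<pre><code>eksctl create cluster --name nlp-batch --region us-east-1 --nodes 4
</code></pre>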
<p>If you want to use GCP GKE you can also check for the operators to create cluster : <a href="https://airflow.apache.org/docs/apache-airflow-providers-google/stable/operators/cloud/kubernetes_engine.html" rel="nofollow noreferrer">https://airflow.apache.org/docs/apache-airflow-providers-google/stable/operators/cloud/kubernetes_engine.html</a></p>
<blockquote>
<p>Split files between those pods so each pod can only process its subset
of files</p>
</blockquote>
<p>This depends more on the file structure. You can mount the S3 bucket directly to all pods, or you can keep the files on NFS and mount that to the pods, but in all cases you have to manage the directory structure accordingly so that each pod sees only its own subset.</p>
<blockquote>
<p>Collate output of model from each pod into s3 location**</p>
</blockquote>
<p>You can use <strong>boto3</strong> to upload files to S3, or you can also mount the S3 bucket directly to the <strong>POD</strong>.</p>
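<p>If the upload step runs as a shell task rather than Python, the AWS CLI does the same job as boto3 (the bucket and prefix are placeholders):</p>
<pre><code>aws s3 cp /output/ s3://my-results-bucket/model-run/ --recursive
</code></pre>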
<p>It mostly comes down to your structure: how big the files are, how they are generated, and where they are stored.</p>
|
<p>I deployed the images of microservices currency-conversion and currency-exchange on Google cloud but in the Kubernetes Engine, I see that the pods/replica sets are not available.</p>
<p>When I check under Workload tab, I see that the service shows a message "Does not have minimum availability"</p>
<p><a href="https://i.stack.imgur.com/FFzUx.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FFzUx.jpg" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/NatB5.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NatB5.jpg" alt="enter image description here" /></a></p>
<p>I added an additional availability zone to increase the resources, but that did not help.</p>
<p>How do I fix this ?</p>
| <p>The pod is crashing, hence why you're getting "Does not have minimum availability".</p>
<p>You should look at the logs of the container first and see why it's crashing:</p>
<p><code>kubectl logs -n default {name of pod}</code></p>
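<p>If the logs are empty (for example because the container never starts), the pod's events usually explain why:</p>
<pre><code>kubectl describe pod -n default {name of pod}
kubectl get events -n default --sort-by=.metadata.creationTimestamp
</code></pre>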
|
<p>I have an application that is deployed using Helm charts. The first <code>helm upgrade --install</code> command works as expected, but the second time I get this error:</p>
<pre><code>Error: UPGRADE FAILED: failed to replace object: PersistentVolumeClaim "logs" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims && failed to replace object: Service "application" is invalid: spec.clusterIP: Invalid value: "": field is immutable
</code></pre>
<p>here is my <code>application-deployment.yaml</code>:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: application
  name: application
spec:
  selector:
    matchLabels:
      app: application
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: application
    spec:
      imagePullSecrets:
        - name: aws-registry
      containers:
        - image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: Always
          name: application
          resources: {}
          volumeMounts:
            - mountPath: /app/var/logs
              name: logs
      restartPolicy: Always
      volumes:
        - name: logs
          persistentVolumeClaim:
            claimName: logs
</code></pre>
<p>here is <code>application-service.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: application
name: application
spec:
ports:
- port: 9000
protocol: TCP
targetPort: 9000
selector:
app: application
</code></pre>
<p>and here is <code>logs-pvc.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: middleware-logs
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 400Mi
status: {}
</code></pre>
<p>I don't have any idea how to solve this it is impossible that once created pvc or service can't be change so I am guessint that I am doing something wrong but not sure what.</p>
| <p>Removing <code>--force</code> from <code>helm upgrade</code> solved my issue. With <code>--force</code>, Helm replaces resources instead of patching them, which is exactly what the "failed to replace object ... field is immutable" errors above complain about for the bound PVC and the Service's <code>clusterIP</code>.</p>
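<p>As a minimal sketch (release and chart names are placeholders), the same command without the flag patches the existing objects instead of replacing them, so the immutable PVC and Service fields are left alone on repeated upgrades:</p>
<pre class="lang-sh prettyprint-override"><code># before: --force replaces objects and trips over immutable fields
helm upgrade --install --force application ./chart

# after: a normal three-way merge patch
helm upgrade --install application ./chart
</code></pre>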
|
<p>We have several sites running under a CMS using virtual machines. Basically we have three VM running the CMS and a SQL instance to store data. We plan to transition to containers, but to be honest I'm don't have much idea about it and my boss plans to have the full app (CMS and DB) within an image and then deploy as many containers as needed (initially three). My doubt here is that as far as I know containers work better separating the different parts and using them as microservices, so I don't know if it's a good idea to have the full app within the container.</p>
| <p>Short answer is: No.</p>
<p>It's best practice with containers to have one process per container. The container has an entrypoint, basically a command that is executed when starting the container. This entrypoint will be the command that starts your process. If you want more than one process, you need to have a script in the container that starts them and puts them in the background, complicating the whole setup. See also <a href="https://docs.docker.com/config/containers/multi-service_container/" rel="noreferrer">docker docs</a>.</p>
<p>There are some more downsides.</p>
<ol>
<li><p>A container should contain only what it needs to run its process. If you have more than one process, you'll end up with one big container. You're also not independent in your choice of base image: you need to find one that fits all the processes you want to run. And you might run into dependency trouble, because the different processes might need different versions of a dependency (such as a library).</p>
</li>
<li><p>You're unable to scale independently. E.g. you could have 5 CMS containers that all use the same database, for redundancy and performance. That's not possible when everything lives in the same container.</p>
</li>
<li><p>Detecting/debugging faults. If more than one process runs in a container, the container might fail because any one of the processes failed, and you can't immediately tell which one. With a single process, if the container fails you know exactly why. It's also easier to monitor health, because there is one health-check endpoint per container. Last but not least, the container's logs are the logs of that one process, not a mix of several.</p>
</li>
<li><p>Updating becomes easier. When updating your CMS to the next version or updating the database, you need to update the container image of the process. E.g. the database doesn't need to be stopped/started when you update the CMS.</p>
</li>
<li><p>The container is easier to reuse. You can e.g. use the same image everywhere and mount the customer specifics from a volume, ConfigMap or environment variable.</p>
</li>
</ol>
<p>If you still want your CMS and database to run together, you can use the sidecar pattern in Kubernetes: simply define a Pod with multiple containers in the manifest, as sketched below. Note that this too will not make it horizontally scalable.</p>
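<p>A minimal sketch of such a multi-container Pod, assuming hypothetical WordPress and MySQL images and leaving out volumes, Secrets and probes:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: cms-with-db            # placeholder name
spec:
  containers:
    - name: cms
      image: wordpress:6       # the CMS process
      ports:
        - containerPort: 80
    - name: db
      image: mysql:8           # the database process
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: change-me     # illustration only; use a Secret in practice
</code></pre>
<p>Both containers share the Pod's network namespace, so the CMS can reach the database via localhost, but they still start, stop and scale as one unit.</p>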
|
<p>I deployed the images of microservices currency-conversion and currency-exchange on Google cloud but in the Kubernetes Engine, I see that the pods/replica sets are not available.</p>
<p>When I check under Workload tab, I see that the service shows a message "Does not have minimum availability"</p>
<p><a href="https://i.stack.imgur.com/FFzUx.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/FFzUx.jpg" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/NatB5.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NatB5.jpg" alt="enter image description here" /></a></p>
<p>I added additional availability zone to increase the resources but that did not help.</p>
<p>How do I fix this ?</p>
| <p>There could be many reasons behind the failure:</p>
<ul>
<li>Low resources, so Pods are not starting or remain pending</li>
<li>Liveness or readiness probes failing for the Pods</li>
<li>A ConfigMap or Secret that the Pod requires to start is not available</li>
</ul>
<p>You can describe the Pod or check its logs to debug the issue further:</p>
<pre><code>kubectl describe pod <POD name> -n <Namespace name>
</code></pre>
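<p>And to check the logs (names are placeholders, same as above):</p>
<pre class="lang-sh prettyprint-override"><code>kubectl logs <POD name> -n <Namespace name>

# if the container already restarted, check the previous instance
kubectl logs <POD name> -n <Namespace name> --previous
</code></pre>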
|
<p>I am using the following chart to deploy a Cassandra cluster to my gke cluster.
<a href="https://github.com/k8ssandra/k8ssandra/tree/main/charts/k8ssandra" rel="nofollow noreferrer">https://github.com/k8ssandra/k8ssandra/tree/main/charts/k8ssandra</a></p>
<p>However, the statefulset stuck in state 1/2 (the cassandra container status is always unhealthy)</p>
<p>Here's my values.yaml</p>
<pre><code>cassandra:
auth:
superuser:
secret: cassandra-admin-secret
clusterName: cassandra-cluster
version: "4.0.0"
cassandraLibDirVolume:
storageClass: standard
size: 5Gi
allowMultipleNodesPerWorker: true
resources:
requests:
cpu: 500m
memory: 2Gi
limits:
cpu: 500m
memory: 2Gi
datacenters:
- name: dc1
size: 1
racks:
- name: default
stargate:
enabled: true
replicas: 1
heapMB: 256
cpuReqMillicores: 200
cpuLimMillicores: 500
kube-prometheus-stack:
enabled: False
</code></pre>
<pre><code>NAME READY STATUS RESTARTS AGE
cassandra-cluster-dc1-default-sts-0 1/2 Running 0 77m
</code></pre>
<p>And then I describe the pod</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 2m11s (x478 over 81m) kubelet Readiness probe failed: HTTP probe failed with statuscode: 500
</code></pre>
<p>Finally, I print the log of the <strong>cassandra</strong> container.</p>
<pre><code>INFO [nioEventLoopGroup-2-2] 2022-04-20 11:09:35,711 Cli.java:617 - address=/10.12.11.58:51000 url=/api/v0/metadata/endpoints status=500 Internal Server Error
INFO [nioEventLoopGroup-3-14] 2022-04-20 11:09:37,718 UnixSocketCQLAccess.java:88 - Cannot create Driver CQLSession as the driver socket has not been created. This should resolve once Cassandra has started and created the socket at /tmp/cassandra.sock
INFO [nioEventLoopGroup-2-1] 2022-04-20 11:09:37,720 Cli.java:617 - address=/10.12.11.58:51132 url=/api/v0/metadata/endpoints status=500 Internal Server Error
INFO [nioEventLoopGroup-3-15] 2022-04-20 11:09:37,750 UnixSocketCQLAccess.java:88 - Cannot create Driver CQLSession as the driver socket has not been created. This should resolve once Cassandra has started and created the socket at /tmp/cassandra.sock
INFO [nioEventLoopGroup-2-2] 2022-04-20 11:09:37,750 Cli.java:617 - address=/10.12.11.1:48478 url=/api/v0/probes/readiness status=500 Internal Server Error
INFO [nioEventLoopGroup-3-16] 2022-04-20 11:09:39,741 UnixSocketCQLAccess.java:88 - Cannot create Driver CQLSession as the driver socket has not been created. This should resolve once Cassandra has started and created the socket at /tmp/cassandra.sock
</code></pre>
<p>and the logs of <strong>server-system-logger</strong></p>
<pre><code>tail: cannot open '/var/log/cassandra/system.log' for reading: No such file or directory
</code></pre>
<p>How can I solve this problem? Thanks.</p>
| <p>The message in the <code>cassandra</code> container says it should resolve itself once Cassandra is up and running, which is correct: the readiness probe keeps returning 500 until Cassandra has started and created the socket the management API needs for its CQL session.</p>
<p>Similarly, no logs are available in the <code>server-system-logger</code> container until Cassandra has started, or more precisely, until the logging framework has initialized and created <code>/var/log/cassandra/system.log</code>.</p>
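<p>A rough way to watch the startup (pod and container names taken from the output above) is to follow the Pod's readiness and tail the Cassandra container until the probe flips to 2/2:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get pod cassandra-cluster-dc1-default-sts-0 -w
kubectl logs cassandra-cluster-dc1-default-sts-0 -c cassandra -f
</code></pre>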
|
<p>I am new to Kubernetes, and planning on moving some of my ECS services to EKS. In my current auto-scaling algorithm, apart from CPU, memory and some other things I also used rate of incoming requests. In Kubernetes, how can I leverage rate of incoming requests to customize auto-scaling?</p>
| <p>You can use the <strong>Prometheus</strong> adapter along with the HPA to scale the deployment based on incoming requests or other metrics you want.</p>
<p><a href="https://github.com/kubernetes-sigs/prometheus-adapter/blob/master/docs/walkthrough.md" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/prometheus-adapter/blob/master/docs/walkthrough.md</a></p>
<p>Also check out: <a href="https://github.com/zalando-incubator/kube-metrics-adapter" rel="nofollow noreferrer">https://github.com/zalando-incubator/kube-metrics-adapter</a></p>
<p><strong>Example (using kube-metrics-adapter annotations)</strong></p>
<pre><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: myapp-hpa
annotations:
# metric-config.<metricType>.<metricName>.<collectorType>/<configKey>
metric-config.pods.requests-per-second.json-path/json-key: "$.http_server.rps"
metric-config.pods.requests-per-second.json-path/path: /metrics
metric-config.pods.requests-per-second.json-path/port: "9090"
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: myapp
minReplicas: 1
maxReplicas: 10
metrics:
- type: Pods
pods:
metric:
name: requests-per-second
target:
averageValue: 1k
type: AverageValue
</code></pre>
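<p>If you go with the Prometheus adapter from the first link instead, the HPA looks similar, but the metric comes from the adapter's rules rather than from Pod annotations. A rough sketch, assuming your adapter rules expose a per-Pod <code>http_requests</code> metric as in the walkthrough:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: http_requests      # name produced by your adapter rule
        target:
          type: AverageValue
          averageValue: "10"       # desired requests per second per Pod
</code></pre>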
|
<p>Below is the logic in my docker file.
I am using nginx to build the application.</p>
<pre><code>FROM node:14-alpine as builder
COPY package.json ./
RUN npm install && mkdir /app && mv ./node_modules ./app
WORKDIR /app
COPY . .
RUN npm run build
FROM nginx:1.16.0-alpine
COPY --from=builder /app/build /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
EXPOSE 3000
CMD ["nginx", "-g", "daemon off;"]
</code></pre>
<p>Below is the nginx.conf file</p>
<pre><code>server {
listen 3000;
location / {
root /usr/share/nginx/html;
index index.html index.htm;
try_files $uri $uri/ /index.html =404;
}
location /api {
# rewrite /api/api(.*) /$1 break;
proxy_pass http://${backend_host}:${backend_port}/api;
}
include /etc/nginx/extra-conf.d/*.conf;
}
</code></pre>
<p>backend_host and backend_port in the proxy_pass URL will be provided while deploying the image using Deployment file.</p>
<p>Is this possible?
If not is there any alternative way for this?</p>
| <p>If you want to mount the <strong>nginx.conf</strong> dynamically, I would suggest using a <strong>ConfigMap</strong> together with your <strong>deployment.yaml</strong>.</p>
<p>This way you can re-use the same Docker image across environments without rebuilding it, and change the configuration simply by updating the <strong>ConfigMap</strong>.</p>
<p>Your Dockerfile will be:</p>
<pre><code>FROM node:14-alpine as builder
COPY package.json ./
RUN npm install && mkdir /app && mv ./node_modules ./app
WORKDIR /app
COPY . .
RUN npm run build
FROM nginx:1.16.0-alpine
COPY --from=builder /app/build /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
EXPOSE 3000
CMD ["nginx", "-g", "daemon off;"]
</code></pre>
<p><strong>Example configmap</strong></p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-config
data:
default.conf: |-
server {
listen 80 default_server;
root /var/www/html;
server_name _;
index index.php;
location / {
try_files $uri $uri/ /index.php?$args;
}
location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
fastcgi_pass 127.0.0.1:9000;
fastcgi_index index.php;
include fastcgi_params;
fastcgi_param PATH_INFO $fastcgi_path_info;
fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
}
</code></pre>
<p><strong>Mount the ConfigMap in the Deployment</strong></p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: app
labels:
app: app
spec:
selector:
matchLabels:
app: app
replicas: 1
template:
metadata:
labels:
app: app
spec:
containers:
- name: app
image: app-image
ports:
- containerPort: 80
volumeMounts:
- name: nginx-config
          mountPath: /etc/nginx/conf.d/default.conf
          subPath: default.conf
volumes:
- name: nginx-config
configMap:
          name: nginx-config
</code></pre>
<p>For more details, see: <a href="https://blog.meain.io/2020/dynamic-reverse-proxy-kubernetes/" rel="nofollow noreferrer">https://blog.meain.io/2020/dynamic-reverse-proxy-kubernetes/</a></p>
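<p>A rough usage example (file and resource names are assumptions): after editing the local nginx config you can regenerate the ConfigMap and restart the Deployment, which is needed anyway because files mounted via <code>subPath</code> are not updated in place:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl create configmap nginx-config --from-file=default.conf=./nginx.conf \
  --dry-run=client -o yaml | kubectl apply -f -
kubectl rollout restart deployment/app
</code></pre>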
|
<p>I see that normally a new image is created, that is, a dockerfile, but is it a good practice to pass the cert through environment variables? with a batch that picks it up and places it inside the container</p>
<p>Another approach I see is to mount the certificate on a volume.</p>
<p>What would be the best approximation to have a single image for all environments?
Just like what happens with software artifacts, I mean.</p>
<p>Creating a new image for each environment or renewal I find it tedious, but if it has to be like this...</p>
| <p>Definitely do <strong>not</strong> bake certificates into the image.</p>
<p>Because you tagged your question with <code>azure-aks</code>, I recommend using the Secrets Store CSI Driver to mount your certificates from Key Vault.</p>
<ul>
<li>See the <a href="https://github.com/Azure/secrets-store-csi-driver-provider-azure" rel="nofollow noreferrer">plugin project page on GitHub</a></li>
<li>See also this doc <a href="https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/configurations/getting-certs-and-keys/" rel="nofollow noreferrer">Getting Certificates and Keys using Azure Key Vault Provider</a></li>
<li>This doc is better, more thorough and worth going through even if you're not using the nginx ingress controller <a href="https://azure.github.io/secrets-store-csi-driver-provider-azure/docs/configurations/ingress-tls/" rel="nofollow noreferrer">Enable NGINX Ingress Controller with TLS</a></li>
</ul>
<p>And so for different environments, you'd pull in different certificates from one or more key vaults and mount them to your cluster. Please also remember to use different credentials/identities to grab those certs.</p>
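<p>As a minimal, illustrative sketch only (names, identity settings and the exact fields depend on your driver version and the docs above): a SecretProviderClass describes which Key Vault objects to fetch, and the Pod mounts it through the CSI driver:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: keyvault-tls               # placeholder name
spec:
  provider: azure
  parameters:
    keyvaultName: my-keyvault      # placeholder vault
    tenantId: <tenant-id>
    objects: |
      array:
        - |
          objectName: my-tls-cert
          objectType: secret       # cert + private key come back as a secret object
---
# in the pod spec of your Deployment:
#   volumes:
#     - name: tls
#       csi:
#         driver: secrets-store.csi.k8s.io
#         readOnly: true
#         volumeAttributes:
#           secretProviderClass: keyvault-tls
</code></pre>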
|
<p>I have used Kubernetes and I deployed, for example, WordPress or nginx, etc. We install from the YAML file. Where is it installed, and how can I find the directory of pages (for example WordPress pages, etc.)? The same question applies on Google Cloud: when I use Kubernetes on Google Cloud, where is the path of the installed files (e.g. index.php)?</p>
| <p>If you are running the Docker image <strong>directly</strong>, without attaching anything like NFS, S3 or a disk, then by default those files live in the container's file system (index.php and all).</p>
<p>With any <strong>K8s</strong> cluster, on Google Cloud or anywhere else, you can check the files inside the <strong>container</strong>:</p>
<pre><code>kubectl get pods
kubectl exec -it <Wordpress pod name> -- /bin/bash
</code></pre>
<p>If you are attaching a file system such as <strong>NFS</strong> or <strong>EFS</strong>, or object storage such as <strong>S3</strong>, you will find those files there, at whatever path you mounted and configured in the <strong>YAML</strong> file.</p>
<p>Regarding the setup file (<strong>YAML</strong>):</p>
<p>Kubernetes uses the <strong>etcd</strong> database as its data store. The flow is like this: the <strong>kubectl</strong> command connects to the <strong>API server</strong> and sends it the YAML file; the API server parses it and stores the information in etcd, so you won't find the YAML file itself lying around on disk.</p>
|
<p>Imagine a scenario where I have 3 classes of worker node (A,B,C) and 2 master nodes (X,Y) as part of a Kubernetes cluster. There maybe multiple worker nodes of each class. Is it possible to route the traffic such that traffic arriving at the different master nodes is routed to a different set of worker nodes. For example I want master node X to route traffic to workers of class A and potentially fail over to class B under heavy load and for master node Y to route traffic to nodes of class C but also use B as a failover when needed.</p>
| <p>As correctly pointed out in the comments, the traffic is not handled by the master nodes; its responsibility, at a high level, is to observe and maintain the desired state of the Kubernetes resources deployed in the worker nodes, as well as to push any changes to them.</p>
<p>That being said, yes, it is possible to discriminate and send the traffic to different worker nodes using <a href="https://kubernetes.io/docs/concepts/services-networking/topology-aware-hints/" rel="nofollow noreferrer">Topology Aware Hints</a>:</p>
<blockquote>
<p>Topology Aware Hints enable topology aware routing by including
suggestions for how clients should consume endpoints. This approach
adds metadata to enable consumers of EndpointSlice and / or Endpoints
objects, so that traffic to those network endpoints can be routed
closer to where it originated.</p>
</blockquote>
<p>This feature is specifically designed to handle traffic in a multi-zone environment and has a series of built-in <a href="https://kubernetes.io/docs/concepts/services-networking/topology-aware-hints/#safeguards" rel="nofollow noreferrer">Safeguards</a> to prevent situations like overload of an endpoint, insufficient endpoints in a zone, etc.</p>
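<p>You opt in per Service with an annotation (subject to the cluster version and feature-gate requirements in the linked docs; the Service below is a hypothetical example):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.kubernetes.io/topology-aware-hints: auto   # kube-proxy prefers endpoints in the client's zone
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
</code></pre>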
|
<p>I have a kubernetes service, written in Go, and am using <code>client-go</code> to access the kubernetes apis.</p>
<p>I need the <code>Pod</code> of the service's own pod.</p>
<p>The <code>PodInterface</code> allows me to iterate all pods, but what I need is a "self" concept to get the currently running pod that is executing my code.</p>
<p>It appears by reading <code>/var/run/secrets/kubernetes.io/serviceaccount/namespace</code> and searching pods in the namespace for the one matching <code>hostname</code>, I can determine the "self" pod.</p>
<p>Is this the proper solution?</p>
| <p>Expose the <code>POD_NAME</code> and <code>POD_NAMESPACE</code> to your pod as environment variables. Later use those values to get your own pod object.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
name: dapi-envars-fieldref
spec:
containers:
- name: test-container
image: k8s.gcr.io/busybox
command: [ "sh", "-c"]
args:
- while true; do
echo -en '\n';
printenv MY_POD_NAME MY_POD_NAMESPACE;
sleep 10;
done;
env:
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
restartPolicy: Never
</code></pre>
<p><strong>Ref:</strong> <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">environment-variable-expose-pod-information</a></p>
|
<p>First of all, let me thank you for this amazing guide. I'm very new to kubernetes and having a guide like this to follow helps a lot when trying to setup my first cluster!</p>
<p>That said, I'm having some issues with creating deploytments, as there are two pods that aren't being created, and remain stuck in the state: ContainerCreating</p>
<pre><code>[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane 25h v1.24.0
node1 Ready <none> 24h v1.24.0
node2 Ready <none> 24h v1.24.0
[root@master ~]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.3.200:6443
CoreDNS is running at https://192.168.3.200:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
</code></pre>
<p>The problem:</p>
<pre><code>[root@master ~]# kubectl get all --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-6d4b75cb6d-v5pvk 0/1 ContainerCreating 0 114m
kube-system pod/coredns-7599c5f99f-q6nwq 0/1 ContainerCreating 0 114m
kube-system pod/coredns-7599c5f99f-sg4wn 0/1 ContainerCreating 0 114m
kube-system pod/etcd-master 1/1 Running 3 (3h26m ago) 25h
kube-system pod/kube-apiserver-master 1/1 Running 3 (3h26m ago) 25h
kube-system pod/kube-controller-manager-master 1/1 Running 3 (3h26m ago) 25h
kube-system pod/kube-proxy-ftxzx 1/1 Running 2 (3h11m ago) 24h
kube-system pod/kube-proxy-pcl8q 1/1 Running 3 (3h26m ago) 25h
kube-system pod/kube-proxy-q7dpw 1/1 Running 2 (3h23m ago) 24h
kube-system pod/kube-scheduler-master 1/1 Running 3 (3h26m ago) 25h
kube-system pod/weave-net-2p47z 2/2 Running 5 (3h23m ago) 24h
kube-system pod/weave-net-k5529 2/2 Running 4 (3h11m ago) 24h
kube-system pod/weave-net-tq4bs 2/2 Running 7 (3h26m ago) 25h
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 25h
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 25h
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/kube-proxy 3 3 3 3 3 kubernetes.io/os=linux 25h
kube-system daemonset.apps/weave-net 3 3 3 3 3 <none> 25h
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/coredns 0/2 2 0 25h
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/coredns-6d4b75cb6d 1 1 0 25h
kube-system replicaset.apps/coredns-7599c5f99f 2 2 0 116m
</code></pre>
<p>Note that the first three pods, from coredns, fail to start.</p>
<pre><code>[root@master ~]# kubectl get events
LAST SEEN TYPE REASON OBJECT MESSAGE
93m Warning FailedCreatePodSandBox pod/nginx-deploy-99976564d-s4shk (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "fd79c77289f42b3cb0eb0be997a02a42f9595df061deb6e2d3678ab00afb5f67": failed to find network info for sandbox "fd79c77289f42b3cb0eb0be997a02a42f9595df061deb6e2d3678ab00afb5f67"
</code></pre>
<p>.</p>
<pre><code> [root@master ~]# kubectl describe pod coredns-6d4b75cb6d-v5pvk -n kube-system
Name: coredns-6d4b75cb6d-v5pvk
Namespace: kube-system
Priority: 2000000000
Priority Class Name: system-cluster-critical
Node: node2/192.168.3.202
Start Time: Thu, 12 May 2022 19:45:58 +0000
Labels: k8s-app=kube-dns
pod-template-hash=6d4b75cb6d
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/coredns-6d4b75cb6d
Containers:
coredns:
Container ID:
Image: k8s.gcr.io/coredns/coredns:v1.8.6
Image ID:
Ports: 53/UDP, 53/TCP, 9153/TCP
Host Ports: 0/UDP, 0/TCP, 0/TCP
Args:
-conf
/etc/coredns/Corefile
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
Liveness: http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/etc/coredns from config-volume (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4bpvz (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
config-volume:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: coredns
Optional: false
kube-api-access-4bpvz:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: CriticalAddonsOnly op=Exists
node-role.kubernetes.io/control-plane:NoSchedule
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreatePodSandBox 93s (x393 over 124m) kubelet (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "7d0f8f4b3dbf2dffcf1a8c01b41368e16b1f80bc97ff3faa611c1fd52c0f6967": failed to find network info for sandbox "7d0f8f4b3dbf2dffcf1a8c01b41368e16b1f80bc97ff3faa611c1fd52c0f6967"
</code></pre>
<p>Versions:</p>
<pre><code>[root@master ~]# docker --version
Docker version 20.10.15, build fd82621
[root@master ~]# kubelet --version
Kubernetes v1.24.0
[root@master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:44:24Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>I have no idea where to go from here. I googled keywords like "rpc error weave k8s" and "Failed to create pod sandbox: rpc error" but none of the solutions I found had a solution to my problem. I saw some problems mentioning weaving net, could this be the problem? Maybe I got it wrong, but I'm sure I followed the instructions very well.</p>
<p>Any help would be greatly appreciated!</p>
| <p>Looks like you got pretty far! Support for Docker as a container runtime was dropped in 1.24.0. I can't tell whether that is what you are using, but if it is, that could be your problem.</p>
<p><a href="https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/" rel="nofollow noreferrer">https://kubernetes.io/blog/2022/05/03/kubernetes-1-24-release-announcement/</a></p>
<p>You could switch to containerd as your container runtime, but for the purposes of learning you could try the latest 1.23.x version of Kubernetes first. Get that to work, then circle back and tackle containerd with Kubernetes v1.24.0.</p>
<p>You can still use Docker on your laptop/desktop, but on the k8s servers you will not be able to use Docker as the runtime on 1.24.x or later.</p>
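<p>A quick way to confirm which runtime your nodes are actually using is the CONTAINER-RUNTIME column:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get nodes -o wide
# CONTAINER-RUNTIME showing docker://...     -> dockershim (removed in 1.24)
# CONTAINER-RUNTIME showing containerd://... -> containerd
</code></pre>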
<p>Hope that helps and good luck!</p>
|
<p>Sorry if this question might sound "convoluted" but here it goes...</p>
<p>I'm currently designing a k8s solution based on Firecracker and Kata-containers. I'd like the environment to be as isolated/secure as possible. My thoughts around this are:</p>
<ol>
<li>deploy k8s masters as Firecracker nodes having API-server,
Controller, Scheduler and etcd</li>
<li>deploy k8s workers as Firecracker nodes having Kubelet, Kube-proxy and using Kata-containers + Firecracker for
deployed workload. The workload will be a combination of MQTT cluster components and in-house developed FaaS components (probably using OpenFaaS)</li>
</ol>
<p>It's point 2 above which makes me feel a little awkward/convoluted. Am I over complicating things, introducing complexity which will cause problems related to (CNI) networking among worker nodes etc? Isolation and minimizing attack vectors are all important, but maybe I'm trying "to be too much of a s.m.a.r.t.a.s.s" here :)</p>
<p>I really like the concept with Firecrackers microVM architecture with reduced security risks and reduced footprint and it would make for a wonderful solution to tenant isolation. However, am I better of to use another CRI-conforming runtime together with Kata for the actual workload being deployed on the workers?</p>
<p>Many thanks in advance for your thoughts/comments on this!</p>
| <p>You might want to take a look at <a href="https://github.com/weaveworks-liquidmetal" rel="nofollow noreferrer">https://github.com/weaveworks-liquidmetal</a> and consider whether contributing to that would get you further towards your goal? alternative runtimes (like kata) for different workloads are welcomed in PR’s. There is a liquid-metal slack channel in the Weaveworks user group of you have any queries. Disclosure I currently work at Weaveworks :)</p>
|
<p>I am attempting to create a kubernetes ConfigMap with helm, which simply consists of the first line within a config file. I put my file in <code>helm/config/file.txt</code>, which has several lines of content, but I only want to extract the first. My first attempt at this was to loop over the lines of the file (naturally), but quit out after the first loop:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: one-line-cm
data:
first-line:
{{- range .Files.Lines "config/file.txt" }}
{{ . }}
{{ break }} # not a real thing
{{- end }}
</code></pre>
<p>Unfortunately, <code>break</code> doesn't seem to be a concept/function in helm, even though it is within golang. I discovered this the hard way, as well as reading about a similar question in this other post: <a href="https://stackoverflow.com/questions/60966946/helm-break-loop-range-in-template">Helm: break loop (range) in template</a></p>
<p>I'm not stuck on using a loop, I'm just wondering if there's another solution to perform the simple task of extracting the first line from a file with helm syntax.</p>
| <p>EDIT:<br />
I've determined the following is the cleanest solution:</p>
<pre><code>.Files.Lines "config/file.txt" | first
</code></pre>
<p>(As a side note, I had to pipe to <code>squote</code> in my actual solution because my file contents contain special characters.)</p>
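<p>For reference, a minimal sketch of the ConfigMap using that expression (same file path as above):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: one-line-cm
data:
  first-line: {{ .Files.Lines "config/file.txt" | first | squote }}
</code></pre>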
<hr />
<p>After poking around in the helm <a href="https://helm.sh/docs/chart_template_guide/function_list/" rel="nofollow noreferrer">docs</a> for alternative functions, I came up with a solution that works; it's just not that pretty:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: one-line-cm
data:
first-line: |
{{ index (regexSplit "\\n" (.Files.Get "config/file.txt") -1) 0 }}
</code></pre>
<p>This is what's happening above (working inside outward):</p>
<ol>
<li><code>.Files.Get "config/file.txt"</code> is returning a string representation of the file contents.</li>
<li><code>regexSplit "\\n" <step-1> -1</code> is splitting the file contents from step-1 by newline (-1 means return the max number of substring matches possible)</li>
<li><code>index <step-2> 0</code> is grabbing the first item (index 0) from the list returned by step-2.</li>
</ol>
<p>Hope this is able to help others in similar situations, and I am still open to alternative solution suggestions.</p>
|
<p>I have the following ingress...</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: core-ingress
namespace: earth
annotations:
kubernetes.io/ingress.global-static-ip-name: "core-proxy-static-ip"
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: core-ingress
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: core-proxy-service
port:
number: 80
</code></pre>
<p>I have used <code>kubectl exec</code> to make sure the service itself is running and listening on port 80 so I know <code>core-proxy-service</code> is working. When I try to install the ingress it never gets and IP and the events show nothings...</p>
<pre><code>Name: core-ingress
Namespace: earth
Address:
Default backend: default-http-backend:80 (10.109.128.78:8080)
Rules:
Host Path Backends
---- ---- --------
*
/ core-proxy-service:80 (10.109.128.132:80)
Annotations: kubernetes.io/ingress.global-static-ip-name: core-proxy-static-ip
nginx.ingress.kubernetes.io/rewrite-target: /
Events: <none>
</code></pre>
<p>Since there are no events I figure something must have happened and that is why I am not getting an IP but how would I be able to debug to figure out what?</p>
<pre><code>NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
earth core-ingress core-ingress * 80 3m33s
</code></pre>
<h2>Update</h2>
<p>If I revert back to</p>
<pre><code> defaultBackend:
service:
name: core-proxy-service
port:
number: 80
</code></pre>
<p>Everything works what am I missing?</p>
| <p>Try something like the following. With <code>apiVersion: networking.k8s.io/v1</code> each path needs a <code>pathType</code> and the <code>service</code>-style backend, and <code>ingressClassName</code> must match an IngressClass that actually exists in the cluster (here <code>nginx</code>):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-myservicea
annotations:
kubernetes.io/ingress.global-static-ip-name: "core-proxy-static-ip"
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: nginx
rules:
- http:
paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: core-proxy-service
            port:
              number: 80
</code></pre>
<p>Ref doc : <a href="https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/basic-usage/</a></p>
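<p>It is also worth checking that the class you reference actually exists; an Ingress whose <code>ingressClassName</code> matches no IngressClass is simply ignored by every controller, which fits the "no address, no events" symptom:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get ingressclass
</code></pre>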
|
<p>I hope somebody can help me.
I'm trying to pull a private docker image with no success. I already tried some solutions that I found, but without success.</p>
<p>Docker, Gitlab, Gitlab-Runner, Kubernetes all run on the same server</p>
<p>Insecure Registry</p>
<pre><code>$ sudo cat /etc/docker/daemon.json
{ "insecure-registries":["10.0.10.20:5555"]}
</code></pre>
<p>Config.json</p>
<pre><code>$ cat .docker/config.json
{
"auths": {
"10.0.10.20:5555": {
"auth": "NDUwNjkwNDcwODoxMjM0NTZzIQ=="
},
"https://index.docker.io/v1/": {
"auth": "NDUwNjkwNDcwODpGcGZHMXQyMDIyQCE="
}
}
}
</code></pre>
<p>Secret</p>
<pre><code>$ kubectl create secret generic regcred \
--from-file=.dockerconfigjson=~/.docker/config.json \
--type=kubernetes.io/dockerconfigjson
</code></pre>
<p>I'm trying to create a Kubernetes pod from a private docker image. However, I get the following error:</p>
<pre><code>Name: private-reg
Namespace: default
Priority: 0
Node: 10.0.10.20
Start Time: Thu, 12 May 2022 12:44:22 -0400
Labels: <none>
Annotations: <none>
Status: Pending
IP: 10.244.0.61
IPs:
IP: 10.244.0.61
Containers:
private-reg-container:
Container ID:
Image: 10.0.10.20:5555/development/app-image-base:latest
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-stjn4 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-stjn4:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal BackOff 2m7s (x465 over 107m) kubelet Back-off pulling image "10.0.10.20:5555/development/expedicao-api-image-base:latest"
Normal Pulling 17s (x3 over 53s) kubelet Pulling image "10.0.10.20:5555/development/expedicao-api-image-base:latest"
Warning Failed 17s (x3 over 53s) kubelet Failed to pull image "10.0.10.20:5555/development/expedicao-api-image-base:latest": rpc error: code = Unknown desc = failed to pull and unpack image "10.0.10.20:5555/development/app-image-base:latest": failed to resolve reference "10.0.10.20:5555/development/app-image-base:latest": failed to do request: Head "https://10.0.10.20:5555/v2/development/app-image-base/manifests/latest": http: server gave HTTP response to HTTPS client
Warning Failed 17s (x3 over 53s) kubelet Error: ErrImagePull
Normal BackOff 3s (x2 over 29s) kubelet Back-off pulling image "10.0.10.20:5555/development/expedicao-api-image-base:latest"
Warning Failed 3s (x2 over 29s) kubelet Error: ImagePullBackOff
</code></pre>
<p>When I pull the image directly in docker, no problem occurs even with the secret</p>
<p>Pull image</p>
<pre><code>$ docker login 10.0.10.20:5555
Username: 4506904708
Password:
WARNING! Your password will be stored unencrypted in ~/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
$ docker pull 10.0.10.20:5555/development/app-image-base:latest
latest: Pulling from development/app-image-base
Digest: sha256:1385a8aa2bc7bac1a8d3e92ead66fdf5db3d6625b736d908d1fec61ba59b6bdc
Status: Image is up to date for 10.0.10.20:5555/development/app-image-base:latest
10.0.10.20:5555/development/app-image-base:latest
</code></pre>
<p><strong>Can someone help me?</strong></p>
| <p>First, you need to create (or edit) the file <code>/etc/containerd/config.toml</code> so containerd treats your registry as a plain-HTTP mirror:</p>
<pre><code># Config file is parsed as version 1 by default.
# To use the long form of plugin names set "version = 2"
[plugins.cri.registry.mirrors]
[plugins.cri.registry.mirrors."10.0.10.20:5555"]
endpoint = ["http://10.0.10.20:5555"]
</code></pre>
<p>Second, restart containerd:</p>
<pre><code>$ systemctl restart containerd
</code></pre>
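<p>You can then verify that containerd itself is able to pull from the registry over plain HTTP (image name taken from the question; <code>--creds</code> is only needed because the registry requires a login):</p>
<pre class="lang-sh prettyprint-override"><code>crictl pull --creds 4506904708:<password> 10.0.10.20:5555/development/app-image-base:latest
</code></pre>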
|
<pre class="lang-text prettyprint-override"><code>Error: failed to start container "node-exporter": Error response from daemon: path /sys is mounted on /sys but it is not a shared or slave mount
</code></pre>
<p>That's the error message I'm getting. I'm trying to set up a node exporter feeding a Grafana dashboard through Kubernetes pods; I followed this <a href="https://www.youtube.com/watch?v=1-tRiThpFrY&t=791s" rel="nofollow noreferrer">video</a> and this <a href="https://github.com/marcel-dempers/docker-development-youtube-series/tree/master/monitoring/prometheus/kubernetes/1.18.4" rel="nofollow noreferrer">repo</a>.</p>
<p>ERROR screenshot
<img src="https://i.stack.imgur.com/ZtgSE.png" alt="enter image description here" /></p>
| <p>Well, for me (Docker Desktop on macOS) this command saved my day:</p>
<pre><code>kubectl patch ds monitoring-prometheus-node-exporter --type "json" -p '[{"op": "remove", "path" : "/spec/template/spec/containers/0/volumeMounts/2/mountPropagation"}]'
</code></pre>
<p>credit: <a href="https://github.com/prometheus-community/helm-charts/issues/467#issuecomment-793682080" rel="noreferrer">GitHub Issues</a></p>
|
<p>I'm unable to get the controller working. Tried many times and still I get <code>Error: ImagePullBackOff</code>.</p>
<p>Is there a alternative that I can try or any idea why its failing?</p>
<pre><code> kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.27.0/deploy/static/mandatory.yaml
</code></pre>
<p><code>kubectl describe pod nginx-ingress-controller-7fcb6cffc5-m8m5c -n ingress-nginx</code></p>
<pre><code>Name: nginx-ingress-controller-7fcb6cffc5-m8m5c
Namespace: ingress-nginx
Priority: 0
Node: ip-10-0-0-244.ap-south-1.compute.internal/10.0.0.244
Start Time: Mon, 07 Dec 2020 08:21:13 -0500
Labels: app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
pod-template-hash=7fcb6cffc5
Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu, memory request for container nginx-ingress-controller
kubernetes.io/psp: eks.privileged
prometheus.io/port: 10254
prometheus.io/scrape: true
Status: Pending
IP: 10.0.0.231
IPs:
IP: 10.0.0.231
Controlled By: ReplicaSet/nginx-ingress-controller-7fcb6cffc5
Containers:
nginx-ingress-controller:
Container ID:
Image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:master
Image ID:
Ports: 80/TCP, 443/TCP
Host Ports: 0/TCP, 0/TCP
Args:
/nginx-ingress-controller
--configmap=$(POD_NAMESPACE)/nginx-configuration
--tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
--udp-services-configmap=$(POD_NAMESPACE)/udp-services
--publish-service=$(POD_NAMESPACE)/ingress-nginx
--annotations-prefix=nginx.ingress.kubernetes.io
State: Waiting
Reason: ErrImagePull
Ready: False
Restart Count: 0
Requests:
cpu: 100m
memory: 90Mi
Liveness: http-get http://:10254/healthz delay=10s timeout=10s period=10s #success=1 #failure=3
Readiness: http-get http://:10254/healthz delay=0s timeout=10s period=10s #success=1 #failure=3
Environment:
POD_NAME: nginx-ingress-controller-7fcb6cffc5-m8m5c (v1:metadata.name)
POD_NAMESPACE: ingress-nginx (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from nginx-ingress-serviceaccount-token-xtnz9 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
nginx-ingress-serviceaccount-token-xtnz9:
Type: Secret (a volume populated by a Secret)
SecretName: nginx-ingress-serviceaccount-token-xtnz9
Optional: false
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 19s default-scheduler Successfully assigned ingress-nginx/nginx-ingress-controller-7fcb6cffc5-m8m5c to ip-10-0-0-244.ap-south-1.compute.internal
Normal Pulling 18s kubelet Pulling image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:master"
Warning Failed 3s kubelet Failed to pull image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:master": rpc error: code = Unknown desc = Error response from daemon: Get https://quay.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Warning Failed 3s kubelet Error: ErrImagePull
Normal BackOff 3s kubelet Back-off pulling image "quay.io/kubernetes-ingress-controller/nginx-ingress-controller:master"
Warning Failed 3s kubelet Error: ImagePullBackOff
</code></pre>
| <p>I had the same problem with the <code>ingress-nginx</code> installation.</p>
<pre><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.0/deploy/static/provider/cloud/deploy.yaml
</code></pre>
<p>For some reason it couldn't pull the ingress-nginx-controller image.</p>
<pre><code>$ kubectl get pods --namespace=ingress-nginx
NAME READY STATUS RE
ingress-nginx-admission-create-6q4wx 0/1 Completed 0
ingress-nginx-admission-patch-fr5ct 0/1 Completed 1
ingress-nginx-controller-686556747b-dg68h 0/1 ImagePullBackOff 0
</code></pre>
<p>What I did was, I ran <code>$ kubectl describe pod ingress-nginx-controller-686556747b-dg68h --namespace ingress-nginx</code>
and got the following output:</p>
<pre class="lang-yaml prettyprint-override"><code>Name: ingress-nginx-controller-686556747b-dg68h
Namespace: ingress-nginx
Priority: 0
Node: docker-desktop/x.x.x.x
Start Time: Wed, 11 May 2022 20:11:55 +0430
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/name=ingress-nginx
pod-template-hash=686556747b
Annotations: <none>
Status: Pending
IP: x.x.x.x
IPs:
IP: x.x.x.x
Controlled By: ReplicaSet/ingress-nginx-controller-686556747b
Containers:
controller:
Container ID:
Image: k8s.gcr.io/ingress-nginx/controller:v1.2.0@sha256:d819
Image ID:
Ports: 80/TCP, 443/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Args:
/nginx-ingress-controller
--publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
--election-id=ingress-controller-leader
--controller-class=k8s.io/ingress-nginx
--ingress-class=nginx
--configmap=$(POD_NAMESPACE)/ingress-nginx-controller
--validating-webhook=:8443
--validating-webhook-certificate=/usr/local/certificates/cert
--validating-webhook-key=/usr/local/certificates/key
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Requests:
cpu: 100m
memory: 90Mi
Liveness: http-get http://:10254/healthz delay=10s timeout=1s perio
Readiness: http-get http://:10254/healthz delay=10s timeout=1s perio
Environment:
POD_NAME: ingress-nginx-controller-686556747b-dg68h (v1:metad
POD_NAMESPACE: ingress-nginx (v1:metadata.namespace)
LD_PRELOAD: /usr/local/lib/libmimalloc.so
Mounts:
/usr/local/certificates/ from webhook-cert (ro)
</code></pre>
<p>From <code>Containers.controller.Image</code> I got the name of the image that Kubernetes was failing to download, and tried to <code>docker pull</code> that image myself like so:</p>
<pre class="lang-sh prettyprint-override"><code>docker pull k8s.gcr.io/ingress-nginx/controller:v1.2.0@sha256:d819
</code></pre>
<p>Docker could pull the image successfully and after that everything worked just fine.</p>
|
<p>I am trying to run kubectl virt commands to manage my virtual machine via PHP. First, I log in to my server with phpseclib with the following code:</p>
<pre><code>$ssh = new SSH2('localhost');
if (!$ssh->login('root', 'rootPassword')) {
throw new \Exception('Login failed');
}
</code></pre>
<p>This part works fine, and when I try to run <code>$ssh->exec('whoami && echo $PATH')</code>, I get the following output:</p>
<pre><code>root
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
</code></pre>
<p>But, whenever I try to run <code>kubectl virt</code> via PHP, I get the following output:</p>
<pre><code>error: unknown command "virt" for "kubectl"
</code></pre>
<p><code>kubectl</code> and <code>kubectl virt</code> work perfectly fine when I run them via terminal but somehow do not work with PHP exec(). I also tried to check the <code>$PATH</code> via terminal and I get a different output:</p>
<pre><code>/root/.krew/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
</code></pre>
<p>I thought that it may be because of <code>$PATH</code> but the interesting part is when I try to run <code>sudo kubectl virt</code> via terminal I also get the same error:</p>
<pre><code>error: unknown command "virt" for "kubectl"
</code></pre>
<p>At that point, I am completely lost and don't even know where to look for a problem. I am thankful for all the answers.</p>
| <p>When you are issuing ad-hoc ssh commands, you are not using an interactive shell, and depending on your default shell's behavior it may or may not load your .bashrc file. See <a href="https://serverfault.com/questions/936746/bashrc-is-not-sourced-on-ssh-command">https://serverfault.com/questions/936746/bashrc-is-not-sourced-on-ssh-command</a> and <a href="https://stackoverflow.com/questions/16546333/running-command-via-ssh-also-runs-bashrc/16547623#16547623">Running command via ssh also runs .bashrc?</a> for more details.</p>
<p>By default, krew modifies your <code>PATH</code> variable and appends its bin path to it; my config contains <code>export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"</code>. But what exactly is a kubectl plugin? Usually it's just a single binary named kubectl-<em>plugin_name</em>. So by invoking <code>which kubectl-virt</code> you can easily find out where your virt binary is located and invoke it directly, so something like</p>
<pre><code>$ssh->exec('~/.krew/bin/kubectl-virt')
</code></pre>
<p>should work</p>
<p>The other way is to modify PATH all by yourself, setting <code>PATH=$PATH:~/.krew/bin</code> should make it work, at least in my case</p>
<pre><code>ssh localhost 'PATH=$PATH:~/.krew/bin kubectl virt'
</code></pre>
<p>worked nicely.</p>
<p>You can try to force loading of .bashrc in your shell configuration, but personally I think that's bad practice, and ssh commands don't load rc files for a reason: command execution speed and consistency between systems are the first things that come to mind.</p>
<p>Regarding sudo, it's actually not that surprising: without the <code>-E</code> or <code>-i</code> flags it won't load your current environment or start an interactive shell. See <a href="https://unix.stackexchange.com/questions/228314/sudo-command-doesnt-source-root-bashrc">https://unix.stackexchange.com/questions/228314/sudo-command-doesnt-source-root-bashrc</a> for more info.</p>
|
<p>Is it possible to display Kubernetes (K8s) resources from multiple specified <code>namespaces</code> with <strong><em>only</em></strong> the CLI (<code>kubectl</code>)?</p>
<p>In other words, given two <code>namespaces</code> in a K8s cluster:</p>
<pre class="lang-shell prettyprint-override"><code>kubectl get namespaces \
--output=go-template \
--template='{{ range .items }}{{ .metadata.name }}{{ "\n" }}{{ end }}'
#=>
. . .
$SOME_NAMESPACE
. . .
$ANOTHER_NAMESPACE
. . .
</code></pre>
<p>would it be possible to <code>get</code> resources (such as <code>pods</code>) from <strong>only</strong> those two <code>namespaces</code> (<code>$SOME_NAMESPACE</code> and <code>$ANOTHER_NAMESPACE</code>) using <strong><em>only</em></strong> <code>kubectl</code>?</p>
<p>Supplying the <code>--all-namespaces</code> flag and filtering using either the <code>--field-selector</code> or <code>--selector</code> flags will <strong>not</strong> work, because both flags accept only <code>=</code>, <code>==</code> and <code>!=</code> operators.</p>
| <p>You may use a <code>go-template</code> to print the pods belonging to the two namespaces. The following is an example of printing pods from the <code>test-1</code> and <code>test-2</code> namespaces:</p>
<pre><code>kubectl get pod -A -o go-template='{{range .items}}{{if or (eq .metadata.namespace "test-1") (eq .metadata.namespace "test-2") }}{{printf "%s %s\n" .metadata.namespace .metadata.name}}{{end}}{{end}}'
</code></pre>
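<p>If a plain shell loop around kubectl is acceptable, an equivalent and arguably more readable alternative is:</p>
<pre class="lang-sh prettyprint-override"><code>for ns in test-1 test-2; do kubectl get pods -n "$ns"; done
</code></pre>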
|
<p>I have a kubernetes cluster into which I'm intending to implement a service in a pod - the service will accept a grpc request, start a long running process but return to the caller indicating the process has started. Investigation suggests that <code>IHostedService</code> (<code>BackgroundService</code>) is the way to go for this.</p>
<p>My question is, will use of <code>BackgroundService</code> behave nicely with various neat features of asp.net and k8s:</p>
<ul>
<li>Will horizontal scaling understand that a service is getting overloaded and spin up a new instance even though the service will appear to have no pending grpc requests because all the work is background (I appreciate there's probably hooks that can be implemented, I'm wondering what's default behaviour)</li>
<li>Will the notion of <code>await</code>ing allowing the current process to be swapped out and another run work okay with background services (I've only experienced it where one message received hits an <code>await</code> so allows another message to be processed, but backround services are not a messaging context)</li>
<li>I think asp.net will normally manage throttling too many requests, backing off if the server is too busy, but will any of that still work if the 'busy' is background processes</li>
<li>What's the best method to mitigate against overloading the service (if horizontal scaling is not an option) - I can have the grpc call reutrn 'too busy' but would need to detect it (not quite sure if that's cpu bound, memory or just number of background services)</li>
<li>Should I be considering something other than <code>BackgroundService</code> for this task</li>
</ul>
<p>I'm hoping the answer is that "it all just works" but feel it's better to have that confirmed than to just hope...</p>
| <blockquote>
<p>Investigation suggests that IHostedService (BackgroundService) is the way to go for this.</p>
</blockquote>
<p>I <a href="https://blog.stephencleary.com/2021/01/asynchronous-messaging-1-basic-distributed-architecture.html" rel="nofollow noreferrer">strongly recommend</a> using a durable queue with a separate background service. It's not that difficult to split into two images, one running ASP.NET GRPC requests, and the other processing the durable queue (this can be a console app - see the Service Worker template in VS). Note that solutions using <strong>non</strong>-durable queues are not reliable (i.e., work may be lost whenever a pod restarts or is scaled down). This includes in-memory queues, which are commonly suggested as a "solution".</p>
<p>If you do make your own background service in a console app, I recommend applying a <a href="https://blog.stephencleary.com/2020/05/backgroundservice-gotcha-startup.html" rel="nofollow noreferrer">few tweaks</a> (noted on my blog):</p>
<ul>
<li>Wrap <code>ExecuteAsync</code> in <code>Task.Run</code>.</li>
<li>Always have a top-level <code>try</code>/<code>catch</code> in <code>ExecuteAsync</code>.</li>
<li>Call <code>IHostApplicationLifetime.StopApplication</code> when the background service stops for any reason.</li>
</ul>
<blockquote>
<p>Will horizontal scaling understand that a service is getting overloaded and spin up a new instance even though the service will appear to have no pending grpc requests because all the work is background (I appreciate there's probably hooks that can be implemented, I'm wondering what's default behaviour)</p>
</blockquote>
<p>One reason I prefer using two different images is that they can scale on different triggers: GRPC requests for the API and queued messages for the worker. Depending on your queue, using "queued messages" as the trigger may require a custom metric provider. I do prefer using "queued messages" because it's a natural scaling mechanism for the worker image; out-of-the-box solutions like CPU usage don't always work well - in particular for asynchronous processors, which you mention you are using.</p>
<blockquote>
<p>Will the notion of awaiting allowing the current process to be swapped out and another run work okay with background services (I've only experienced it where one message received hits an await so allows another message to be processed, but backround services are not a messaging context)</p>
</blockquote>
<p>Background services can be asynchronous without any problems. In fact, it's not uncommon to grab messages in batches and process them all concurrently.</p>
<blockquote>
<p>I think asp.net will normally manage throttling too many requests, backing off if the server is too busy, but will any of that still work if the 'busy' is background processes</p>
</blockquote>
<p>No. ASP.NET only throttles requests. Background services do register with ASP.NET, but that is <em>only</em> to provide a best-effort at graceful shutdown. ASP.NET has no idea how busy the background services are, in terms of pending queue items, CPU usage, or outgoing requests.</p>
<blockquote>
<p>What's the best method to mitigate against overloading the service (if horizontal scaling is not an option) - I can have the grpc call reutrn 'too busy' but would need to detect it (not quite sure if that's cpu bound, memory or just number of background services)</p>
</blockquote>
<p>Not a problem if you use the durable queue + independent worker image solution. GRPC calls can pretty much always stick another message in the queue (very simple and fast), and K8 can autoscale based on your (possibly custom) metric of "outstanding queue messages".</p>
|
<p>Hi i have been trying to do cpu pinning in my eks cluster. i have used amazon linux latest release, and my eks version is 1.22 . i have created a launch template where i have used this user data mentioned below.</p>
<pre><code>Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0
--//
#!/bin/bash
set -o xtrace
/etc/eks/bootstrap.sh $CLUSTER_NAME
sleep 2m
yum update -y
sudo rm /var/lib/kubelet/cpu_manager_state
sudo chmod 777 kubelet.service
sudo cat > /etc/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service iptables-restore.service
Requires=docker.service
[Service]
ExecStartPre=/sbin/iptables -P FORWARD ACCEPT -w 5
ExecStart=/usr/bin/kubelet --cloud-provider aws \
--image-credential-provider-config /etc/eks/ecr-credential-provider/ecr-
credential-provider-config \
--image-credential-provider-bin-dir /etc/eks/ecr-credential-provider \
--cpu-manager-policy=static \
--kube-reserved=cpu=0.5,memory=1Gi,ephemeral-storage=0.5Gi \
--system-reserved=cpu=0.5,memory=1Gi,ephemeral-storage=0.5Gi \
--config /etc/kubernetes/kubelet/kubelet-config.json \
--kubeconfig /var/lib/kubelet/kubeconfig \
--container-runtime docker \
--network-plugin cni $KUBELET_ARGS $KUBELET_EXTRA_ARGS
Restart=always
RestartSec=5
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
sudo chmod 644 kubelet.service
sudo systemctl daemon-reload
sudo systemctl stop kubelet
sudo systemctl start kubelet
--//
</code></pre>
<p>after creating the template i have used it on the eks nodegroup creation. after waititng a while i am getting this error on the eks dashboard.</p>
<p>Health issues (1)
NodeCreationFailure Instances failed to join the kubernetes cluster .</p>
<p>and i have get into that ec2 instance and used the following command to view kubectl logs</p>
<pre><code>$journalctl -f -u kubelet
</code></pre>
<p>the output is</p>
<pre><code>[[email protected] kubelet]$ journalctl -f -u kubelet
-- Logs begin at Thu 2022-04-21 07:27:50 UTC. --
Apr 21 07:31:21 ip-10.100.11.111.us-west-2.compute.internal kubelet[12225]: I0421 07:31:21.199868 12225 state_mem.go:80] "Updated desired CPUSet" podUID="3b513cfa-441d-4e25-9441-093b4c2ed548" containerName="efs-plugin" cpuSet="0-7"
Apr 21 07:31:21 ip-10.100.11.111.us-west-2.compute.internal kubelet[12225]: I0421 07:31:21.244811 12225 state_mem.go:80] "Updated desired CPUSet" podUID="3b513cfa-441d-4e25-9441-093b4c2ed548" containerName="csi-provisioner" cpuSet="0-7"
Apr 21 07:31:21 ip-10.100.11.111.us-west-2.compute.internal kubelet[12225]: I0421 07:31:21.305206 12225 state_mem.go:80] "Updated desired CPUSet" podUID="3b513cfa-441d-4e25-9441-093b4c2ed548" containerName="liveness-probe" cpuSet="0-7"
Apr 21 07:31:21 ip-10.100.11.111.us-west-2.compute.internal kubelet[12225]: I0421 07:31:21.335744 12225 state_mem.go:80] "Updated desired CPUSet" podUID="de537700-f5ac-4039-a151-110ddf27d140" containerName="efs-plugin" cpuSet="0-7"
Apr 21 07:31:21 ip-10.100.11.111.us-west-2.compute.internal kubelet[12225]: I0421 07:31:21.388843 12225 state_mem.go:80] "Updated desired CPUSet" podUID="de537700-f5ac-4039-a151-110ddf27d140" containerName="csi-driver-registrar" cpuSet="0-7"
Apr 21 07:31:21 ip-10.100.11.111.us-west-2.compute.internal kubelet[12225]: I0421 07:31:21.464789 12225 state_mem.go:80] "Updated desired CPUSet" podUID="de537700-f5ac-4039-a151-110ddf27d140" containerName="liveness-probe" cpuSet="0-7"
Apr 21 07:31:21 ip-10.100.11.111.us-west-2.compute.internal kubelet[12225]: I0421 07:31:21.545206 12225 state_mem.go:80] "Updated desired CPUSet" podUID="a2f09d0d-69f5-4bb7-82bb-edfa86cb87e2" containerName="kube-controller" cpuSet="0-7"
Apr 21 07:31:21 ip-10.100.11.111.us-west-2.compute.internal kubelet[12225]: I0421 07:31:21.633078 12225 state_mem.go:80] "Updated desired CPUSet" podUID="3ec70fe1-3680-4e3c-bcfa-81f80ebe20b0" containerName="kube-proxy" cpuSet="0-7"
Apr 21 07:31:21 ip-10.100.11.111.us-west-2.compute.internal kubelet[12225]: I0421 07:31:21.696852 12225 state_mem.go:80] "Updated desired CPUSet" podUID="adbd9bef-c4e0-4bd1-a6a6-52530ad4bea3" containerName="aws-node" cpuSet="0-7"
Apr 21 07:46:12 ip-10.100.11.111.us-west-2.compute.internal kubelet[12225]: E0421 07:46:12.424801 12225 certificate_manager.go:488] kubernetes.io/kubelet-serving: certificate request was not signed: timed out waiting for the condition
Apr 21 08:01:16 ip-10.100.11.111.us-west-2.compute.internal kubelet[12225]: E0421 08:01:16.810385 12225 certificate_manager.go:488] kubernetes.io/kubelet-serving: certificate request was not signed: timed out waiting for the condition
</code></pre>
<p>This was the output.</p>
<p>Before using this method I also tried another approach: I created a node group, then created an AMI from one of the nodes in that node group, modified the kubelet.service file and removed the old cpu_manager_state file, and then used that image to create a new node group. That worked, but I was unable to get into the pods running on those nodes or fetch their logs. Strangely, when I ran
<code>$ kubectl get nodes -o wide</code>
the output did not show the internal or external IP addresses,
so I moved on to using the user data instead of that method.</p>
<p>Kindly give me instructions to create a managed node group for an EKS cluster with the CPU manager policy set to static.</p>
| <p>I had the same question. I added the following <code>userdata</code> script to my launch template:</p>
<h2>User data script</h2>
<pre class="lang-sh prettyprint-override"><code>MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="
--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
yum install -y jq
set -o xtrace
cp /etc/kubernetes/kubelet/kubelet-config.json /etc/kubernetes/kubelet/kubelet-config.json.back
jq '. += { "cpuManagerPolicy":"static"}' /etc/kubernetes/kubelet/kubelet-config.json.back > /etc/kubernetes/kubelet/kubelet-config.json
--==MYBOUNDARY==--
</code></pre>
<h2>Verification</h2>
<p>You can verify the change took effect using <code>kubectl</code>:</p>
<pre class="lang-sh prettyprint-override"><code># start a k8s API proxy
$ kubectl proxy
# get the node name
$ kubectl get nodes
# get kubelet config
$ curl -sSL "http://localhost:8001/api/v1/nodes/<<node_name>>/proxy/configz"
</code></pre>
<p>I got the solution from this guide: <a href="https://aws.amazon.com/premiumsupport/knowledge-center/eks-worker-nodes-image-cache/" rel="nofollow noreferrer">https://aws.amazon.com/premiumsupport/knowledge-center/eks-worker-nodes-image-cache/</a>. However, I could not make the <code>sed</code> command work properly, so I used <code>jq</code> instead.</p>
<h2>Logs</h2>
<p>If you can <code>ssh</code> into the node, you can check the userdata logs in <code>/var/log/cloud-init-output.log</code> - See <a href="https://stackoverflow.com/a/32460849/4400704">https://stackoverflow.com/a/32460849/4400704</a></p>
<h2>CPU pinning</h2>
<p>I have a pod with a QoS class of <code>Guaranteed</code> (CPU limit and request = 2) and I can verify that it has two CPUs reserved:</p>
<pre><code>$ cat /sys/fs/cgroup/cpuset/cpuset.cpus
2,10
</code></pre>
|
<p>I'm pretty new to K8s.</p>
<p>I'm trying to make my app visible to the outside world. I have deployed <a href="https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/#configuration" rel="nofollow noreferrer">this</a> Nginx Ingress Controller with my apps, using Helm and <a href="https://github.com/helmfile/helmfile" rel="nofollow noreferrer">helmfile</a> which has given me an external ip address and a load balancer.</p>
<p>As I understand it, I now need an <code>ingress resource</code> to configure the route mappings.</p>
<p>I plan on using this ingress resource as a starting point.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: minimal-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
ingressClassName: nginx-example
rules:
- http:
paths:
- path: /testpath
pathType: Prefix
backend:
service:
name: test
port:
number: 80
</code></pre>
<p>My apps are deployed in separate namespaces on port 80.</p>
<p>My question is: where do I put the yaml for creating the resources?</p>
<p>I want to keep everything in Helm if possible to make managing the configuration easier, so I don't want to use <code>kubectl</code> unless I have to.</p>
<h2>My helmfile</h2>
<pre><code>repositories:
- name: stable
url: https://charts.helm.sh/stable
- name: nginx-stable
url: https://helm.nginx.com/stable
releases:
# other apps configured here
- name: ingress-nginx
namespace: ingress
createNamespace: true
chart: nginx-stable/nginx-ingress
values:
- ./ingress/values.yaml
version: 0.10.4
installed: true
</code></pre>
<h2>My Ingress Controller values.yaml:</h2>
<pre><code>---
rbac:
create: true
serviceAccount:
create: true
name: nginx-ingress-public
controller:
ingressClassResource:
enabled: true
default: true
replicaCount: 3
minAvailable: 3
updateStrategy:
rollingUpdate:
maxSurge: 3
maxUnavailable: 0
</code></pre>
| <p>You should deploy the ingress controller only once as it can handle <em>all</em> ingress traffic for your whole cluster.</p>
<p>Sometimes it makes sense to deploy multiple, for example we run 2 ingress controllers: one for internal traffic (private IP) and one for external traffic (public IP).</p>
<p>Once you have that, you just tell your other helm releases to use its ingress class.</p>
<p>The ingress manifest is usually a template of your helm chart, so you put it in the chart's <code>templates/</code> directory. If you do <code>helm create my-app</code>, you get a good starting point, including an ingress template.</p>
<p>Once you have an ingress template in your chart, you can add some reasonable defaults for this template to the values.yaml of the chart, as usual.</p>
<p>When deploying the chart, you can use certain flags to override the defaults. i.e. <code>-f</code> and <code>--set</code>.</p>
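<p>For reference, the ingress block that <code>helm create</code> scaffolds into a chart's values.yaml typically looks roughly like the following (host and class name here are placeholders); enabling it renders the templated ingress that points at your app's service:</p>
<pre class="lang-yaml prettyprint-override"><code># Sketch of chart values enabling the scaffolded ingress template
ingress:
  enabled: true
  className: nginx              # must match the ingress class of your installed controller
  annotations: {}
  hosts:
    - host: my-app.example.com  # placeholder hostname
      paths:
        - path: /
          pathType: Prefix
  tls: []
</code></pre>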
|
<p>I'm not clear on what the difference is between <code>kubernetes_config_map</code>, <code>kubernetes_config_map_v1</code> and <code>kubernetes_config_map_v1_data</code>.</p>
<p>e.g. <a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/config_map" rel="nofollow noreferrer">https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs/resources/config_map</a></p>
<p>Any suggestions?</p>
| <p>The <code>v1</code> suffix was recently introduced in the Kubernetes provider (definitely after the <code>alpha</code> provider merge in version 2.4.0) to designate the usage of the Go package for the <code>v1</code> of the Kubernetes API. You can also view resources with the <code>v2</code> suffix for <code>v2</code> of the API. No suffix generally means an alpha or beta version of the API e.g. <code>v1beta1</code>. For example, we see that in the <a href="https://github.com/hashicorp/terraform-provider-kubernetes/blob/main/kubernetes/resource_kubernetes_pod_disruption_budget.go" rel="nofollow noreferrer">resourceKubernetesPodDisruptionBudget</a>, the no suffix designates the usage of <code>v1beta1</code>, and in the <a href="https://github.com/hashicorp/terraform-provider-kubernetes/blob/main/kubernetes/resource_kubernetes_pod_disruption_budget_v1.go" rel="nofollow noreferrer">v1 suffix</a>, the package is the <code>v1</code> API.</p>
<p>As for the difference between <code>_data</code> and no <code>_data</code>, the difference is that the <code>_data</code> manages data within a config map (Update operation) without an import (in fact import is not even supported), and does not enable Create or Destroy operations.</p>
|
<p>We're using gitlab for CI/CD.I'll include the script which we're using</p>
<pre><code>services:
- docker:19.03.11-dind
workflow:
rules:
- if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH || $CI_COMMIT_BRANCH == "developer" || $CI_COMMIT_BRANCH == "stage"|| ($CI_COMMIT_BRANCH =~ (/^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i))
when: always
- if: $CI_COMMIT_BRANCH != $CI_DEFAULT_BRANCH || $CI_COMMIT_BRANCH != "developer" || $CI_COMMIT_BRANCH != "stage"|| ($CI_COMMIT_BRANCH !~ (/^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i))
when: never
stages:
- build
- Publish
- deploy
cache:
paths:
- .m2/repository
- target
build_jar:
image: maven:3.8.3-jdk-11
stage: build
script:
- mvn clean install package -DskipTests=true
artifacts:
paths:
- target/*.jar
docker_build_dev:
stage: Publish
image: docker:19.03.11
services:
- docker:19.03.11-dind
variables:
IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
- docker build --build-arg environment_name=development -t $IMAGE_TAG .
- docker push $IMAGE_TAG
only:
- /^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i
- developer
docker_build_stage:
stage: Publish
image: docker:19.03.11
services:
- docker:19.03.11-dind
variables:
IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
- docker build --build-arg environment_name=stage -t $IMAGE_TAG .
- docker push $IMAGE_TAG
only:
- stage
deploy_dev:
stage: deploy
image: stellacenter/aws-helm-kubectl
before_script:
- aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY_ID}
- aws configure set aws_secret_access_key ${DEV_AWS_SECRET_ACCESS_KEY}
- aws configure set region ${DEV_AWS_DEFAULT_REGION}
script:
- sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" patient-service.yml
- mkdir -p $HOME/.kube
- cp $KUBE_CONFIG_DEV $HOME/.kube/config
- chown $(id -u):$(id -g) $HOME/.kube/config
- export KUBECONFIG=$HOME/.kube/config
- kubectl apply -f patient-service.yml -n ${KUBE_NAMESPACE_DEV}
only:
- /^([A-Z]([0-9][-_])?)?SPRINT(([-_][A-Z][0-9])?)+/i
- developer
deploy_stage:
stage: deploy
image: stellacenter/aws-helm-kubectl
before_script:
- aws configure set aws_access_key_id ${DEV_AWS_ACCESS_KEY_ID}
- aws configure set aws_secret_access_key ${DEV_AWS_SECRET_ACCESS_KEY}
- aws configure set region ${DEV_AWS_DEFAULT_REGION}
script:
- sed -i "s/<VERSION>/${CI_COMMIT_SHORT_SHA}/g" patient-service.yml
- mkdir -p $HOME/.kube
- cp $KUBE_CONFIG_STAGE $HOME/.kube/config
- chown $(id -u):$(id -g) $HOME/.kube/config
- export KUBECONFIG=$HOME/.kube/config
- kubectl apply -f patient-service.yml -n ${KUBE_NAMESPACE_STAGE}
only:
- stage
</code></pre>
<p>We merged the script so we don't face conflicts/clashes between stage and development during deployment. Previously, we had a separate Dockerfile for each environment. Now I want to merge the Dockerfiles as well. I merged them, but the Dockerfile is not picking up the environment and we are having clashes in Kubernetes. I don't know much about Kubernetes. I have enclosed the merged Dockerfile below.</p>
<pre><code>FROM maven:3.8.3-jdk-11 AS MAVEN_BUILD
COPY pom.xml /build/
COPY src /build/src/
WORKDIR /build/
RUN mvn clean install package -DskipTests=true
FROM openjdk:11
ARG environment_name
WORKDIR /app
COPY --from=MAVEN_BUILD /build/target/patient-service-*.jar /app/patient-service.jar
ENV PORT 8094
ENV env_var_name=$environment_name
EXPOSE $PORT
ENTRYPOINT ["java","-Dspring.profiles.active= $env_var_name","-jar","/app/patient-service.jar"]
</code></pre>
<p>The last line we used before was:</p>
<pre><code>ENTRYPOINT ["java","-Dspring.profiles.active=development","-jar","/app/patient-service.jar"]
</code></pre>
<p>At the time, it was working fine and I was not facing any issue on Kubernetes. I just added an environment variable to select between development and stage; you can check my script after the docker build. Only after adding the variable did we start facing the clashes. Please help me sort this out. Thanks in advance.</p>
| <p>As already mentioned in the comments and in <a href="https://stackoverflow.com/a/37904830/13097422">this answer</a>, you need to use the shell form, because the exec form won't directly do variable substitution.</p>
<pre><code>ENTRYPOINT ["java","-Dspring.profiles.active= $env_var_name","-jar","/app/patient-service.jar"]
</code></pre>
<p>needs to be</p>
<pre><code>ENTRYPOINT [ "sh", "-c", "java -Dspring.profiles.active=$env_var_name -jar /app/patient-service.jar" ]
</code></pre>
<p>Note that with <code>sh -c</code> the whole command has to be passed as a single string argument. Relevant documentation from the Docker docs: <a href="https://docs.docker.com/engine/reference/builder/#exec-form-entrypoint-example" rel="nofollow noreferrer">Shell form ENTRYPOINT example</a></p>
<blockquote>
<p>Unlike the <em>shell</em> form, the <em>exec</em> form does not invoke a command shell. This means that normal shell processing does not happen. For example, <code>ENTRYPOINT [ "echo", "$HOME" ]</code> will not do variable substitution on <code>$HOME</code>. If you want shell processing then either use the <em>shell</em> form or execute a shell directly, for example: <code>ENTRYPOINT [ "sh", "-c", "echo $HOME" ]</code>. When using the exec form and executing a shell directly, as in the case for the shell form, it is the shell that is doing the environment variable expansion, not docker.</p>
</blockquote>
|
<p>I am using minikube (docker driver) with kubectl to test an agones fleet deployment. Upon running <code>kubectl apply -f lobby-fleet.yml</code> (and when I try to apply any other agones yaml file) I receive the following error:</p>
<pre><code>error: resource mapping not found for name: "lobby" namespace: "" from "lobby-fleet.yml": no matches for kind "Fleet" in version "agones.dev/v1"
ensure CRDs are installed first
</code></pre>
<p>lobby-fleet.yml:</p>
<pre><code>apiVersion: "agones.dev/v1"
kind: Fleet
metadata:
name: lobby
spec:
replicas: 2
scheduling: Packed
template:
metadata:
labels:
mode: lobby
spec:
ports:
- name: default
portPolicy: Dynamic
containerPort: 7600
container: lobby
template:
spec:
containers:
- name: lobby
image: gcr.io/agones-images/simple-game-server:0.12 # Modify to correct image
</code></pre>
<p>I am running this on WSL2, but receive the same error when using the windows installation of kubectl (through choco). I have minikube installed and running for ubuntu in WSL2 using docker.</p>
<p>I am still new to using k8s, so apologies if the answer to this question is clear, I just couldn't find it elsewhere.</p>
<p>Thanks in advance!</p>
| <p>In order to create a resource of kind <code>Fleet</code>, you have to apply the Custom Resource Definition (CRD) that defines what is a <code>Fleet</code> first.</p>
<p>I've looked into the YAML installation instructions of agones, and the <a href="https://raw.githubusercontent.com/googleforgames/agones/release-1.23.0/install/yaml/install.yaml" rel="noreferrer">manifest</a> contains the CRDs. you can find it by searching <code>kind: CustomResourceDefinition</code>.</p>
<p>I recommend you to first try to install according to <a href="https://agones.dev/site/docs/installation/install-agones/yaml/#installing-agones" rel="noreferrer">the instructions in the docs</a>.</p>
|
<p>I'm just trying to create a simple service account. Theoretically, kubectl automatically creates the secret and token for service accounts... But, not in my case... I've done this in <code>kube-system</code>, <code>default</code>, and new/other namespaces.</p>
<pre><code>me@mymachine ~ % kubectl create serviceaccount my-acct
serviceaccount/my-acct created
me@mymachine ~ % kubectl describe serviceaccount my-acct
Name: my-acct
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: <none>
Tokens: <none>
Events: <none>
</code></pre>
<p>I have reset the Kubernetes system: uninstalled, removed ./kube/ and removed the Library...
Still no secret is created. All of my developers' machines (Macs as well, both Intel and M1) automatically create the account secret.
Any ideas?</p>
| <p>Docker Desktop 4.7.1 includes the appropriate Kubernetes configuration for the controller manager and API server to automatically create secrets and tokens when creating a service account.</p>
<p>Docker Desktop 4.8.x does not have this configuration. A fresh install or a reset of the Kubernetes cluster will prevent the secret and token from automatically being created upon service account creation.</p>
|
<p>I have my deployment scaled across multiple pods. The requirement is that whenever the app starts up I want a piece of code to be executed. But I just want one pod to execute this code. How can this be achieved?</p>
<p>Right now I have a way - I set a flag in my DB and have my pods read the flag, which ever reads and locks the flag first will have to process the task.</p>
<p>Are there any drawbacks in my approach ? Is there any better way to do this?</p>
| <p>I believe this is the right approach. All pods need a way to know whether someone else is already processing the task, and doing that through the DB is the best option. The only drawback: what if the pod that picks up the task can't update the flag status? What would happen in that case?</p>
<p>The other option I could think of is publishing a message to a message queue (a queue maintained outside of your pods, e.g. on a cloud platform). The idea is that whenever a pod comes alive, it checks the queue and processes the message. It's similar to the database approach you have.</p>
|
<p>I want to update my kubernetes cluster from 1.21 to 1.22, I should update ingress resources from v1beta1 to v1, how do I compare the resource definitions of v1beta1 and v1 to know what to update?</p>
| <p>You could check on the internet first (example: <a href="https://docs.konghq.com/kubernetes-ingress-controller/latest/concepts/ingress-versions/" rel="nofollow noreferrer">https://docs.konghq.com/kubernetes-ingress-controller/latest/concepts/ingress-versions/</a>).</p>
<p>Or you could use the <code>kubectl proxy</code> command to access the Kubernetes API server locally and navigate through the different API versions.</p>
<p>(And maybe check the <code>kubectl explain</code> command; I have to check whether we can do this with it.)</p>
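<p>For the Ingress resource itself, the main spec changes from <code>v1beta1</code> to <code>v1</code> are: the backend's <code>serviceName</code>/<code>servicePort</code> move under <code>backend.service</code>, <code>pathType</code> becomes required, and the ingress class is set via <code>spec.ingressClassName</code> rather than the old <code>kubernetes.io/ingress.class</code> annotation. A sketch with placeholder names:</p>
<pre class="lang-yaml prettyprint-override"><code># networking.k8s.io/v1 shape of a formerly v1beta1 ingress (placeholder names)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  ingressClassName: nginx        # replaces the kubernetes.io/ingress.class annotation
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix         # now required
        backend:
          service:               # was: serviceName / servicePort
            name: example-service
            port:
              number: 80
</code></pre>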
|
<p>So I want to create a directory on the Kubernetes node simulated via Docker Desktop.</p>
<p>When I try to do so, I get this error:</p>
<p>/ # mkdir pod-volume
mkdir: can't create directory 'pod-volume': Read-only file system</p>
<p>Any idea how I can fix this inside Docker Desktop (used for Kubernetes simulation)?</p>
| <blockquote>
<p>The Kubernetes server runs locally within your Docker instance, is not
configurable, and is a single-node cluster.</p>
</blockquote>
<p><a href="https://docs.docker.com/desktop/kubernetes/" rel="nofollow noreferrer">https://docs.docker.com/desktop/kubernetes/</a></p>
<blockquote>
<p>Docker Desktop offers a Kubernetes installation with a solid host
integration aiming to work without any user intervention.</p>
</blockquote>
<p>By the way this is a great article about how it works under the hood:</p>
<p><a href="https://www.docker.com/blog/how-kubernetes-works-under-the-hood-with-docker-desktop/" rel="nofollow noreferrer">https://www.docker.com/blog/how-kubernetes-works-under-the-hood-with-docker-desktop/</a></p>
<p>However, I don't know why you are trying to do this, but it's not a good practice. If you want to deal with volumes there are a lot of articles on the internet about this; here is one Stack Overflow link that could help: <a href="https://stackoverflow.com/questions/54073794/kubernetes-persistent-volume-on-docker-desktop-windows">Kubernetes persistent volume on Docker Desktop (Windows)</a></p>
<p>Hope this has helped you,
Bguess</p>
|
<p>I found that there are two ingress-controllers, one is <code>jcmoraisjr/haproxy-ingress</code> and the other is <code>haproxytech/kubernetes-ingress</code> What is the difference between them?</p>
| <p>If you look at <a href="https://github.com/jcmoraisjr/haproxy-ingress" rel="nofollow noreferrer">https://github.com/jcmoraisjr/haproxy-ingress</a>, it is tagged "hacktoberfest", so possibly this one is that user's work on the application at a <a href="https://en.wikipedia.org/wiki/Hackathon" rel="nofollow noreferrer">hackathon</a>.</p>
<p>At a guess, I'm also assuming <a href="https://github.com/haproxytech/kubernetes-ingress" rel="nofollow noreferrer">https://github.com/haproxytech/kubernetes-ingress</a> is the official one.</p>
|
<p>I used <code>kubectl create serviceaccount sa1</code> to create a service account. Then I used the <code>kubectl get serviceaccount sa1 -oyaml</code> command to get the service account info, but it returns the following:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: "2022-05-16T08:03:50Z"
name: sa1
namespace: default
resourceVersion: "19651"
uid: fdddacba-be9d-4e77-a849-95ca243781cc
</code></pre>
<p>I need to get the</p>
<pre><code>secrets:
- name: <secret>
</code></pre>
<p>part, but it doesn't return secrets. How do I fix it?</p>
| <p>In Kubernetes 1.24, ServiceAccount token secrets are no longer automatically generated. See <a href="https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md#urgent-upgrade-notes" rel="noreferrer">"Urgent Upgrade Notes" in the 1.24 changelog file</a>:</p>
<blockquote>
<p>The <code>LegacyServiceAccountTokenNoAutoGeneration</code> feature gate is beta, and enabled by default. When enabled, Secret API objects containing service account tokens are no longer auto-generated for every ServiceAccount. Use the <a href="https://kubernetes.io/docs/reference/kubernetes-api/authentication-resources/token-request-v1/" rel="noreferrer">TokenRequest</a> API to acquire service account tokens, or if a non-expiring token is required, create a Secret API object for the token controller to populate with a service account token by following this <a href="https://kubernetes.io/docs/concepts/configuration/secret/#service-account-token-secrets" rel="noreferrer">guide</a>. (<a href="https://github.com/kubernetes/kubernetes/pull/108309" rel="noreferrer">#108309</a>, <a href="https://github.com/zshihang" rel="noreferrer">@zshihang</a>)</p>
</blockquote>
<p>This means, in Kubernetes 1.24, you need to <a href="https://kubernetes.io/docs/concepts/configuration/secret/#service-account-token-secrets" rel="noreferrer">manually create the Secret</a>; the <code>token</code> key in the <code>data</code> field will be automatically set for you.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Secret
metadata:
name: sa1-token
annotations:
kubernetes.io/service-account.name: sa1
type: kubernetes.io/service-account-token
</code></pre>
<p>Since you're manually creating the Secret, you know its <code>name:</code> and don't need to look it up in the ServiceAccount object.</p>
<p>This approach should work fine in earlier versions of Kubernetes too.</p>
|
<p>I'm trying to understand the Arch of Airflow on Kubernetes.</p>
<p>Using the Helm chart and the Kubernetes executor, the installation creates 3 pods: Trigger, Webserver, and Scheduler...</p>
<p>When I run a DAG using the Kubernetes pod operator, it also creates 2 more pods: one with the DAG name and another one with the task name...</p>
<p>I want to understand the communication between pods... So far I know the only the expressed in the image:</p>
<p><a href="https://i.stack.imgur.com/TWQjx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TWQjx.png" alt="enter image description here" /></a></p>
<p>Note: I'm using the git sync option</p>
<p>Thanks in advance for the help that you can give me!!</p>
| <p>An Airflow application has several components that it requires to operate normally: Webserver, Database, Scheduler, Trigger, Worker(s), Executor. You can read about them <a href="https://airflow.apache.org/docs/apache-airflow/stable/concepts/overview.html#architecture-overview" rel="nofollow noreferrer">here</a>.</p>
<p>Lets go over the options:</p>
<ol>
<li><a href="https://airflow.apache.org/docs/apache-airflow/stable/executor/kubernetes.html#kubernetes-executor" rel="nofollow noreferrer">Kubernetes Executor</a> (as you chose):
In your instance, since you are deploying on Kubernetes with the Kubernetes Executor, each task being executed is a pod. Airflow wraps the task with a Pod no matter what task it is. This gives you the isolation that Kubernetes offers, but it also brings the overhead of creating a pod for each task. Choosing the Kubernetes Executor usually fits cases where many/most of your tasks take a long time to execute - if your tasks take 5 seconds to complete, it might not be worth paying the overhead of creating a pod for each task. As for the DAG -> Task1 you see in your diagram: the Scheduler launches the Airflow workers, the workers start the tasks in new pods, and the worker then needs to monitor the execution of the task.</li>
<li><a href="https://airflow.apache.org/docs/apache-airflow/stable/executor/celery.html" rel="nofollow noreferrer">Celery Executor</a> - Sets up worker pods that tasks run in. This gives you speed, as there is no need to create a pod for each task, but there is no isolation between tasks. Note that using this executor doesn't mean you can't run tasks in their own Pod: a user can run the <a href="https://airflow.apache.org/docs/apache-airflow-providers-cncf-kubernetes/stable/operators.html#kubernetespodoperator" rel="nofollow noreferrer">KubernetesPodOperator</a> and it will create a Pod for that task.</li>
<li><a href="https://airflow.apache.org/docs/apache-airflow/stable/executor/celery_kubernetes.html#celerykubernetes-executor" rel="nofollow noreferrer">CeleryKubernetes Executor</a> - Enjoying both worlds. You decide which tasks will be executed by Celery and which by Kubernetes. So for example you can set small short tasks to Celery and longer tasks to Kubernetes.</li>
</ol>
<p>What will this look like Pod-wise?</p>
<ol>
<li>Kubernetes Executor - Every task creates a pod. PythonOperator, BashOperator - all of them will be wrapped with pods (the user doesn't need to change anything in their DAG code).</li>
<li>Celery Executor - Every task will be executed in a Celery worker (pod), so the pod is always Running, waiting to get tasks. You can create a dedicated pod for a task if you explicitly use the KubernetesPodOperator.</li>
<li>CeleryKubernetes - Combining both of the above.</li>
</ol>
<p>Note again that you can use each one of these executors in a Kubernetes environment. Keep in mind that all of these are just executors. Airflow has other components, like those mentioned earlier, so it's perfectly fine to deploy Airflow on Kubernetes (Scheduler, Webserver) but to use the CeleryExecutor, in which case the user code (tasks) does not automatically create new pods.</p>
<p>As for Triggers, since you asked about them specifically - they are a feature added in Airflow 2.2 (<a href="https://airflow.apache.org/docs/apache-airflow/stable/concepts/deferring.html" rel="nofollow noreferrer">Deferrable Operators & Triggers</a>) that allows tasks to defer and release their worker slot.</p>
|
<p>I'm new to Kubernetes and this topic is confusing for me. I've learned that a StatefulSet doesn't share the PV and each replica has its own PV. On the other hand, I've seen examples where one PVC is used in a StatefulSet with many replicas. So my question is: what happens then? Since PVC to PV is bound 1:1, one PVC can only bind to one PV, but each replica should have its own PV, so how is it possible to have one PVC in a StatefulSet in this scenario?</p>
| <p>You should usually use a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#volume-claim-templates" rel="nofollow noreferrer">volume claim template</a> with a StatefulSet. As you note in the question, this will create a new PersistentVolumeClaim (and a new PersistentVolume) for each replica. Data is not shared, except to the extent the container process knows how to replicate data between its replicas. If a StatefulSet Pod is deleted and recreated, it will come back with the same underlying PVC and the same data, even if it is recreated on a different Node.</p>
<pre class="lang-yaml prettyprint-override"><code>spec:
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: [ReadWriteOnce]
resources:
requests:
storage: 1Gi
template:
spec:
containers:
- name: name
volumeMounts:
- name: data
mountPath: /data
</code></pre>
<p>You're allowed to manually create a PVC and attach it to the StatefulSet Pods</p>
<pre class="lang-yaml prettyprint-override"><code># not recommended -- one PVC shared across all replicas
spec:
template:
spec:
volumes:
- name: data
persistentVolumeClaim:
claimName: manually-created-pvc
containers:
- name: name
volumeMounts:
- name: data
mountPath: /data
</code></pre>
<p>but in this case the single PVC/PV will be shared across all of the replicas. This often doesn't work well: things like database containers have explicit checks that their storage isn't shared, and there is a range of concurrency problems that are possible doing this. This also can prevent pods from starting up since the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#types-of-persistent-volumes" rel="nofollow noreferrer">volume types</a> that are straightforward to get generally only support a ReadWriteOnce <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">access mode</a>; to get ReadWriteMany you need to additionally configure something like an NFS server outside the cluster.</p>
|
<p>I'm requesting some JSON data from a pod's web server via the Kubernetes API proxy verb. That is:</p>
<pre><code>corev1 = kubernetes.client.CoreV1Api()
res = corev1.connect_get_namespaced_pod_proxy_with_path(
'mypod:5000', 'default', path='somepath', path2='somepath')
print(type(res))
print(res)
</code></pre>
<p>The call succeeds and returns a <code>str</code> containing the serialized JSON data from my pod's web service. Unfortunately, <code>res</code> now looks like this ... which isn't valid JSON at all, so <code>json.loads(res)</code> refuses to parse it:</p>
<pre><code>{'x': [{'xx': 'xxx', ...
</code></pre>
<p>As you can see, the stringified response looks like a Python dictionary instead of valid JSON. Any suggestions as to how to convert this safely back into either correct JSON or a correct Python <code>dict</code>?</p>
| <p>Looking at the source code for <a href="https://github.com/kubernetes-client/python/blob/master/kubernetes/client/api/core_v1_api.py" rel="nofollow noreferrer">core_v1_api.py</a>, the method calls generally accept a kwarg named <code>_preload_content</code>.</p>
<p>Setting this argument to <code>False</code> instructs the method to return the <code>urllib3.HTTPResponse</code> object instead of a processed <code>str</code>. You can then work directly with the data, which cooperates with <code>json.loads()</code>.</p>
<p>Example:</p>
<pre class="lang-py prettyprint-override"><code>corev1 = client.CoreV1Api()
res = corev1.connect_get_namespaced_pod_proxy_with_path(
'mypod:5000', 'default', path='somepath',
path2='somepath', _preload_content=False)
json.loads(res.data)
</code></pre>
|
<p>I'm looking to scale my pods/nodes based on disk space. Is that possible? I see that I can scale based on CPU or memory, but how can I scale based on disk usage?</p>
| <p><strong>Yes</strong>, you can use a tool named <strong>KEDA</strong>; basically, it gives you the option to scale based on almost anything.</p>
<p><a href="https://keda.sh/docs/1.4/scalers/prometheus/" rel="nofollow noreferrer">Here</a> is an example of scaling based on the sum of HTTP requests to your service; KEDA will take the number directly from Prometheus.</p>
<p>So yes, you can scale pods based on disk space if you know which metrics to use.</p>
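<p>As a sketch (the target name, Prometheus address and query are placeholders, and you need something exporting a disk-usage metric to Prometheus, e.g. node-exporter), a KEDA <code>ScaledObject</code> using the Prometheus scaler could look like this:</p>
<pre class="lang-yaml prettyprint-override"><code># Hypothetical ScaledObject scaling a Deployment on a disk-usage query
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-app-scaler
spec:
  scaleTargetRef:
    name: my-app                                            # Deployment to scale (placeholder)
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
  - type: prometheus
    metadata:
      serverAddress: http://prometheus.monitoring.svc:9090  # placeholder Prometheus address
      metricName: disk_used_percent
      query: max(disk_used_percent{app="my-app"})           # placeholder query
      threshold: "80"
</code></pre>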
|
<p>Terraform <a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/container_cluster" rel="nofollow noreferrer">google_container_cluster</a> example removes the default node pool.</p>
<pre><code>resource "google_container_cluster" "primary" {
name = "my-gke-cluster"
location = "us-central1"
# We can't create a cluster with no node pool defined, but we want to only use
# separately managed node pools. So we create the smallest possible default
# node pool and immediately delete it.
remove_default_node_pool = true
initial_node_count = 1
}
</code></pre>
<p>However, when the default pool is removed, there can be no node on which to deploy the system pods.</p>
<p><a href="https://i.stack.imgur.com/vjDDY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vjDDY.png" alt="enter image description here" /></a></p>
<h1>Question</h1>
<p>What is a good way to have a node on which to deploy the system pods?</p>
<ol>
<li>Have a default pool with the number of nodes set to 1 and schedule the system pods on the default node. Set the min autoscaling node count to 0 and do not schedule system pods on the autoscaled nodes.</li>
<li>Remove the default pool. Set the min autoscaling node count to 1 so that system pods can be deployed on an autoscaled node.</li>
</ol>
<h1>Node pool definition</h1>
<pre><code>resource "google_container_node_pool" "primary" {
name = "${google_container_cluster.primary.name}-node-pool"
project = var.PROJECT_ID
location = var.location
cluster = google_container_cluster.primary.name
#--------------------------------------------------------------------------------
# Node instantiation based on auto-scaling setting.
# node_count and autoscaling are mutually exclusive.
#--------------------------------------------------------------------------------
node_count = var.autoscaling == true ? null : var.num_nodes
dynamic "autoscaling" {
for_each = var.autoscaling ? [1] : []
content {
min_node_count = var.min_node_count # Set to 0 currently
max_node_count = var.max_node_count
}
}
#--------------------------------------------------------------------------------
# Node configurations
#--------------------------------------------------------------------------------
node_config {
#--------------------------------------------------------------------------------
# Service Account, the roles of which the node assumes
#--------------------------------------------------------------------------------
service_account = var.service_account
#--------------------------------------------------------------------------------
# Instance configurations
#--------------------------------------------------------------------------------
machine_type = var.machine_type
preemptible = var.node_preemptive
disk_size_gb = var.disk_size_gb
disk_type = var.disk_type
metadata = {
disable-legacy-endpoints = "true"
}
#--------------------------------------------------------------------------------
# The K8S labels (key/value pairs) to be applied to each node
#--------------------------------------------------------------------------------
labels = var.labels
oauth_scopes = [
"https://www.googleapis.com/auth/logging.write",
"https://www.googleapis.com/auth/monitoring",
]
tags = var.tags
}
}
</code></pre>
| <p>Those are system pods. Some get deployed on every node (e.g. kube-dns, kube-proxy), while others such as the metrics server, if running as a Deployment, can run as a single replica on one of the nodes.</p>
<blockquote>
<p>What is a good way to have a node on which to deploy the system pods?</p>
</blockquote>
<p>Keep at least <strong>1-2 nodes</strong> for system pods like the metrics server, ingress (if you are installing it inside kube-system), etc.</p>
<blockquote>
<p>Have a default pool with the number of nodes set to 1 and schedule the system pods on the default node. Set the min autoscaling node count to 0 and do not schedule system pods on the autoscaled nodes.</p>
</blockquote>
<p>If you set the min autoscaling node count to 0, there is no guarantee the node stays up, which means the pool can scale down to zero. There are limitations in GKE around scaling down nodes that you might want to look at first; however, yes, it is possible to scale down to zero by updating PDBs and checking the other limitations.</p>
<p><strong>Limitation</strong> : <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler#limitations" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler#limitations</a></p>
<blockquote>
<p>Remove the default pool. Set the min autoscaling node count to 1 so that system pods can be deployed on an autoscaled node.</p>
</blockquote>
<p>This is a good option: if the min count is 1, the system pods will be up and running on that node, and when you deploy your application the node pool scales up as needed while the system pods are already running.</p>
<p>You can also scale down to zero in that case: if autoscaling is set and you deploy the application, it will bring up the application and the system pods on the newly available nodes, so there is not much to worry about there either.</p>
<p>If you don't want to remove the default Node pool and keep running <strong>1</strong> Node, you can use this field : <a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/container_cluster#remove_default_node_pool" rel="nofollow noreferrer">https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/container_cluster#remove_default_node_pool</a></p>
|
<p>I have a <code>StorageClass</code> with <code>provisioner: kubernetes.io/aws-ebs</code></p>
<p>And I have a deployment where I need to mount some volumes. So, I need to use this <code>StorageClass</code></p>
<p>Here's my <code>sc.yaml</code></p>
<pre><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
annotations:
name: gp2
parameters:
fsType: ext4
type: gp2
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
</code></pre>
<p>Here's my <code>deployment.yaml</code></p>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: postgres
namespace: var.namespace
spec:
replicas: 1
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:10.4
imagePullPolicy: "IfNotPresent"
ports:
- containerPort: 5432
envFrom:
- configMapRef:
name: postgres-config
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgredb
# how can I specify my storage class here?
volumes:
- name: postgredb
persistentVolumeClaim:
claimName: postgres-pv-claim
</code></pre>
<p>I need to specify the storage class in my deployment. Can someone help me?</p>
| <blockquote>
<p>I need to specify the storage class in my deployment. Can someone help me?</p>
</blockquote>
<p>Before we answer, let's first explain what <code>StorageClass</code>, <code>PersistentVolume</code>, and <code>PersistentVolumeClaim</code> are.</p>
<h3>Terminology</h3>
<h4><code>StorageClass</code></h4>
<ul>
<li>We can look at <code>StorageClass</code> as a driver (Software).</li>
<li>A driver which responsible for the "communication" with the storage hardware.</li>
<li>Usually but not a must, the <code>StorageClass</code> is supplied by the Storage provider (hardware or virtual)</li>
</ul>
<h4><code>PersistentVolume</code></h4>
<ul>
<li>A <code>PersistentVolume</code> (PV) is a <strong>piece of storage</strong> in the cluster <strong>that has been provisioned</strong> by an administrator or dynamically provisioned using <code>Storage Classes</code>.</li>
</ul>
<h4><code>PersistentVolumeClaim</code></h4>
<ul>
<li>A <code>PersistentVolumeClaim</code> (PVC) is a <strong>request for storage</strong> by a user (Usually Pod)</li>
</ul>
<hr />
<h3>General Diagram (describing K8 Storage objects)</h3>
<p><a href="https://i.stack.imgur.com/53Bxl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/53Bxl.png" alt="K8 Storage Diagram" /></a></p>
<h2>TL;DR; Explanation</h2>
<ul>
<li>You have physical storage (Disk, SSD, virtual, etc)</li>
<li>Someone (usually the storage or the cloud provider) supplied you with the <code>StorageClass</code> object. By the way, you don't need to define/declare it most of the time and K8S will supply default storage for you (<a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">emptyDir</a>).</li>
<li>Then you define <code>PersistentVolume</code> (PV) which will "create" storage based upon the type (<code>StorageClass</code>) you require.</li>
<li>Next step is to define <code>PersistentVolumeClaim</code> (PVC). The PVC is the allocation of the "physical" storage mounted from the (PV) which you defined in the previous step.</li>
<li>The last step is to "assign" volume to your execution (<code>Pod</code>, <code>Deployment</code>, <code>StatefulSet</code>, etc) which is done using <code>volumes</code>.</li>
</ul>
<p>** Notes</p>
<ul>
<li>As mentioned above most of the time you can simply use volumes without the need to define <code>StorageClass</code> or <code>PV</code>/<code>PVC</code>. Simply use a volume in the required Resources and K8S will take care of that for you.</li>
<li>There are some exceptions (without getting into too many details here, like <code>StatefulSet</code>). If no StorageClass is specified, then the default StorageClass will be used.</li>
</ul>
<hr />
<p>Now let's answer your question</p>
<ul>
<li>You have defined a <code>StorageClass</code></li>
</ul>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: storage.k8s.io/v1
kind: StorageClass
...
provisioner: kubernetes.io/aws-ebs
</code></pre>
<ul>
<li>In your deployment, you specified <code>volumes</code> (indentation left as-is)</li>
</ul>
<pre><code>apiVersion: extensions/v1beta1
kind: Deployment
...
# --> Here you define the actual mount path you need for your pods
# The name (PVC) is corresponding to the one you
# defined below under volumes
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgredb
# ->>>> how can I specify my storage class here?
# You don't need to specify storage class, you need to define PVC,
# This is the missing piece in your code.
volumes:
- name: postgredb
persistentVolumeClaim:
claimName: postgres-pv-claim
</code></pre>
<h3>The missing piece ...</h3>
<pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: postgres-pv-claim ### <<< The name as mentioned above in your Deploymnet
labels:
app: postgres
spec:
# The name of the storage class u defined earlier
storageClassName: gp2
# The access modes are:
# ReadWriteOnce - The volume can be mounted as read-write by a single node
# ReadWriteMany - The volume can be mounted as read-write by a many nodes
# ReadOnlyMany - The volume can be mounted as read-only by many nodes
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
</code></pre>
<hr />
<p>Hope it helped you out.</p>
|
<p>I deployed one ingress controller in a namespace by running the command:</p>
<pre><code>helm install nginx-ingress ingress-nginx/ingress-nginx \
--namespace kk \
--set controller.replicaCount=2 \
--set controller.nodeSelector."beta\.kubernetes\.io/os"=linux \
--set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux
</code></pre>
<p>When I run the same command in another namespace I get the below error:</p>
<pre><code>Error: Failed to download "ingress-nginx/ingress-nginx" (hint: running `helm repo update` may help)
</code></pre>
<p>I am pretty sure we can install multiple ingress controllers in one Kubernetes cluster, but I am unable to resolve this issue.</p>
| <p>It's all about your internet connection; I've had the same problem.
Try specifying the chart version explicitly; that may help:</p>
<pre><code>helm install ingress-nginx ingress-nginx/ingress-nginx --version 3.7.1 --debug
</code></pre>
|
<p>I am using the opentelemetry-ruby otlp exporter for auto instrumentation:
<a href="https://github.com/open-telemetry/opentelemetry-ruby/tree/main/exporter/otlp" rel="nofollow noreferrer">https://github.com/open-telemetry/opentelemetry-ruby/tree/main/exporter/otlp</a></p>
<p>The otel collector was installed as a daemonset:
<a href="https://github.com/open-telemetry/opentelemetry-helm-charts/tree/main/charts/opentelemetry-collector" rel="nofollow noreferrer">https://github.com/open-telemetry/opentelemetry-helm-charts/tree/main/charts/opentelemetry-collector</a></p>
<p>I am trying to get the OpenTelemetry collector to collect traces from the Rails application. Both are running in the same cluster, but in different namespaces.</p>
<p>We have enabled auto-instrumentation in the app, but the rails logs are currently showing these errors:</p>
<p><code>E, [2022-04-05T22:37:47.838197 #6] ERROR -- : OpenTelemetry error: Unable to export 499 spans</code></p>
<p>I set the following env variables within the app:</p>
<pre><code>OTEL_LOG_LEVEL=debug
OTEL_EXPORTER_OTLP_ENDPOINT=http://0.0.0.0:4318
</code></pre>
<p>I can't confirm that the application can communicate with the collector pods on this port.
Curling this address from the rails/ruby app returns "Connection Refused". However I am able to curl <code>http://<OTEL_POD_IP>:4318</code> which returns 404 page not found.</p>
<p>From inside a pod:</p>
<pre><code># curl http://localhost:4318/
curl: (7) Failed to connect to localhost port 4318: Connection refused
# curl http://10.1.0.66:4318/
404 page not found
</code></pre>
<p>This helm chart created a daemonset but there is no service running. Is there some setting I need to enable to get this to work?</p>
<p>I confirmed that otel-collector is running on every node in the cluster and the daemonset has HostPort set to 4318.</p>
| <p>The correct solution is to use the <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">Kubernetes Downward API</a> to fetch the node IP address, which will allow you to export the traces directly to the daemonset pod within the same node:</p>
<pre class="lang-yaml prettyprint-override"><code> containers:
- name: my-app
image: my-image
env:
- name: HOST_IP
valueFrom:
fieldRef:
fieldPath: status.hostIP
- name: OTEL_EXPORTER_OTLP_ENDPOINT
value: http://$(HOST_IP):4318
</code></pre>
<p>Note that using the deployment's service as the endpoint (<code><service-name>.<namespace>.svc.cluster.local</code>) is incorrect, as it effectively bypasses the daemonset and sends the traces directly to the deployment, which makes the daemonset useless.</p>
|
<p>I have two k8s operators, say operatorA and operatorB. Both custom resources are present in my microservice's chart (helm/templates).</p>
<p>How can I make sure the custom resource for operatorB is applied only after operatorA's custom resource is applied?</p>
| <p>There is no validation between the two CRDs; only your operator can be affected by the installation order. Therefore, all you need to do is make sure that the CRDs are installed before the operator itself.</p>
<p>You can achieve this by using a <code>crds</code> folder; you can read about this in the official <a href="https://helm.sh/docs/topics/charts_hooks/#hooks-and-the-release-lifecycle" rel="nofollow noreferrer">docs</a>.</p>
|
<p><a href="https://docs.aws.amazon.com/eks/latest/userguide/creating-a-vpc.html" rel="nofollow noreferrer">This</a> article from AWS states that the recommended production setup for EKS is a mix of private and public subnets. Unfortunately it does not explain the 'why'.
What is wrong with the 'only public subnets' option, assuming it has properly configured security groups? It seems much simpler and cheaper (since we do not need NAT).</p>
| <p>This <a href="https://stackoverflow.com/questions/22188444/why-do-we-need-private-subnet-in-vpc">SO question</a> seems to actually answer my question:</p>
<p>In particular the question in one of the comments:</p>
<blockquote>
<p>What is the advantage of a server on a private subnet with a NAT instance and a a server public subnet with a strict security policy?</p>
</blockquote>
<p>and the answer:</p>
<blockquote>
<p>it's not really about an advantage. It's about the way networking works, in VPC. All of the instances on a given subnet have to use the same default gateway, which will either be the "Internet gateway" virtual object, which will not do NAT, or it will be a NAT instance, which will not "not do" NAT. Unless all your machines have public IPs, or none of them do, you're going to want both types of subnets. If everything is an Internet-facing web server, sure, you might need only a public subnet, and with correct security configuration, there's no disadvantage.</p>
</blockquote>
|
<h2>Problem</h2>
<p>I would like to do the following:</p>
<ol>
<li>Create an image with Trivy and Docker installed</li>
<li>Run the container on Kubernetes</li>
<li>Schedule a CRON job to pull all images from a container registry and to scan them, and output the results to stdout</li>
</ol>
<h2>Question</h2>
<p>Is there a smart way to do this? Can you even install docker within a docker container?</p>
| <p>I ended up doing the following:</p>
<ol>
<li>Creating an image with <a href="https://github.com/aquasecurity/trivy" rel="nofollow noreferrer">Trivy</a> and <a href="https://github.com/containers/skopeo" rel="nofollow noreferrer">Skopeo</a> installed</li>
<li>Downloading the docker images with Skopeo
<ul>
<li>skopeo copy --src-creds=user:password --dest-compress --src-tls-verify=false docker://myrepo.com/mynamespace/ubuntu:latest oci:ubuntu</li>
</ul>
</li>
<li>Scanning the image
<ul>
<li>trivy image --input ubuntu</li>
</ul>
</li>
</ol>
<p>An alternative to Skopeo would be to use the <a href="https://docs.docker.com/registry/spec/api/" rel="nofollow noreferrer">Registry HTTP API</a> to download images as suggested by David Maze.</p>
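<p>To run this on a schedule inside the cluster, one way is to wrap the image in a Kubernetes <code>CronJob</code>; a rough sketch (the image name, schedule and scanned image are placeholders, and registry credentials still need to be wired in, e.g. via a Secret):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: CronJob
metadata:
  name: image-scan
spec:
  schedule: "0 3 * * *"                    # every night at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: scanner
            image: myrepo.com/tools/trivy-skopeo:latest   # placeholder image with Trivy + Skopeo installed
            command: ["/bin/sh", "-c"]
            args:
            - |
              skopeo copy --src-tls-verify=false docker://myrepo.com/mynamespace/ubuntu:latest oci:ubuntu
              trivy image --input ubuntu
</code></pre>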
|
<p>I am working on implementing a Helm chart with a customized ConfigMap and trying to populate the ConfigMap based on the environment mode.</p>
<p>Values.yaml</p>
<pre><code>externalIPService:
ip: 1.1.1.1
port: 80
emsConfig: "receivers:
otlp:
protocols:
http:
processors:
batch:
exporters:
otlp/ems:
endpoint: {{ .Values.externalIPService.ip }}:{{ .Values.externalIPService.port }}
service:
pipelines:
traces:
receivers: [otlp]
processors: [batch]
exporters: [otlp/ems]
metrics:
receivers: [otlp]
processors: [batch]
exporters: [otlp/ems]
logs:
receivers: [otlp]
processors: [batch]
exporters: [otlp/ems]
"
</code></pre>
<p>configMap.yaml</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: simple-demo
data:
message: "{{ tpl .Values.emsConfig .}}"
</code></pre>
<p>The helm template output is a plain string and not YAML content. I have tried <em>toYaml</em> as well, but it did not help either. Could someone please help me find a way to do nested rendering so that I can use the final output in the ConfigMap?</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: simple-demo
data:
message: "receivers: otlp: protocols: http: processors: batch: exporters: otlp/ems: endpoint: {{ .Values.externalIPService.ip }}:{{ .Values.externalIPService.port }} service: pipelines: traces: receivers: [otlp] processors: [batch] exporters: [otlp/ems] metrics: receivers: [otlp] processors: [batch] exporters: [otlp/ems] logs: receivers: [otlp] processors: [batch] exporters: [otlp/ems] "
</code></pre>
| <p>In this situation, you need to use the <code>_helpers.tpl</code> file.</p>
<p>First, add this to the <code>_helpers.tpl</code> file:</p>
<pre><code>{{- define "appname.emsConfig" -}}
receivers:
otlp:
protocols:
http:
processors:
batch:
exporters:
otlp/ems:
endpoint: {{ .Values.externalIPService.ip }}:{{ .Values.externalIPService.port }}
service:
pipelines:
traces:
receivers: [otlp]
processors: [batch]
exporters: [otlp/ems]
metrics:
receivers: [otlp]
processors: [batch]
exporters: [otlp/ems]
logs:
receivers: [otlp]
processors: [batch]
exporters: [otlp/ems]
{{- end }}
</code></pre>
<p>the <code>values.yaml</code> file will look like this:</p>
<pre><code>externalIPService:
ip: 1.1.1.1
port: 80
</code></pre>
<p>and the <code>configMap.yaml</code> file, will need to look like this:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: simple-demo
data:
message: |-
{{ include "appname.emsConfig" . | nindent 4}}
</code></pre>
|
<p>Has anyone faced a similar issue with Helm? Even with <code>helm list</code> I get this:</p>
<p>query: failed to query with labels: stream error when reading response body, may be caused by closed connection. Please retry. Original error: stream error: stream ID 3; INTERNAL_ERROR; received from peer</p>
| <p>In my case I found that it was a timeout issue.</p>
<p>Below are the ways I debugged and fixed the issue.</p>
<p>Debug:</p>
<pre><code>helm ls --debug
</code></pre>
<p>Delete older release secrets:</p>
<pre><code>kubectl get secrets --all-namespaces | grep -i <name of your helm chart>
kubectl delete secret <name here>
</code></pre>
<p>Workaround: I increased the timeout and this helped for now:</p>
<pre><code>helm install <name> <folder> --wait --timeout 10m0s --debug
</code></pre>
<p>I believe the root cause of this issue is API server throttling; I need to investigate more to find the actual source of the slowness.</p>
|
<p>I am running a Kafka pod in Kubernetes with Rancher. I am using the Confluent image and it is connecting properly to ZooKeeper. I am using the stable Helm chart with the Confluent Kafka image 5.3.1. I also added SSL encryption into Helm using this <a href="https://medium.com/weareservian/encryption-authentication-and-external-access-for-confluent-kafka-on-kubernetes-69c723a612fc" rel="nofollow noreferrer">page</a>.
It starts properly, then it shuts down abruptly and the pod restarts. I am getting this error in the log:</p>
<blockquote>
<p>[2019-11-15 19:41:49,943] INFO Terminating process due to signal SIGTERM (org.apache.kafka.common.utils.LoggingSignalHandler)
[2019-11-15 19:41:49,945] INFO Shutting down SupportedServerStartable (io.confluent.support.metrics.SupportedServerStartable)</p>
</blockquote>
<p>What is the SIGTERM error in Kafka pods? How to fix it?</p>
<p>Thank you</p>
| <p>As mentioned by @Hamzatli, it is about liveness and readiness. K8s thinks that your pod is hitting a timeout and sends SIGTERM to the pod for it to shut down.</p>
<p>In your Helm chart's values.yaml, there should be an option for liveness and/or readiness. Increase <code>initialDelaySeconds</code> to a higher value that you think is enough time for Kafka to come up, so that K8s doesn't send the shutdown signal too quickly during initial startup.</p>
<pre><code>livenessProbe:
enabled: true
initialDelaySeconds: 60 # 60 seconds delay for the pod to start liveness probe
timeoutSeconds: 5
readinessProbe:
enabled: true
initialDelaySeconds: 60 # 60 seconds delay for pod to start readiness probe
timeoutSeconds: 5
</code></pre>
<p>You can read more about this <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes" rel="nofollow noreferrer">here</a>.</p>
|
<p>I am working with a test environment that has around 10 kubernetes pods that I need to interact with a lot, especially port forwarding.</p>
<p>It takes me a lot of time every day to list pods, get pod name and use that in a command.</p>
<p>Since all my pods start with a fixed name, followed by some random string that changes several times a day, I am looking for a way to port forward (and potentially other commands) using only the fixed part of the pod name.</p>
<p>So instead of doing</p>
<pre><code>kubectl port-forward pods/planning-api-5fbf84cd4-mbxmn 8081:8081 -n production
</code></pre>
<p>I would want something like</p>
<pre><code>kubectl port-forward pods/planning-api* 8081:8081 -n production
</code></pre>
<p>I would ideally want to put it into some sort of script that I can just run, so multiple statements would work as well, as long as they don't need my manual intervention.</p>
<p>There will never be more than one instance of any of the pods I use, so that won't be an issue. Changing the pod names to drop the random part is not an option.</p>
<p>I am using a windows machine to achieve this. Using a kubectl alternative would be an option.</p>
<p>Can anyone tell me if and how this could be achieved?</p>
| <p>You can do a port-forward on a service or controller, i.e. deployment, instead, which should be sufficient, given the assumption that you have a service or controller.</p>
<pre><code>kubectl port-forward svc/planning-api 8081:80
# or
kubectl port-forward deploy/planning-api 8081:8081
</code></pre>
<p>That removes the need to know the random pod suffix.</p>
<p>Alternatively, you can use some shell basics. i.e. grep and command substitution:</p>
<pre><code>kubectl port-forward "$(kubectl get pod -o name | grep planning-api)" 8081:8081
</code></pre>
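<p>Since you mention being on Windows, a PowerShell variant of the same idea could look like this (a rough sketch; it simply picks the first pod whose name contains <code>planning-api</code>):</p>
<pre><code>$pod = (kubectl get pod -n production -o name | Select-String "planning-api" | Select-Object -First 1).Line
kubectl port-forward $pod 8081:8081 -n production
</code></pre>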
|
<p>I am new to Kubernetes, so apologies in advance for any silly questions and mistakes. I am trying to set up external access through ingress for ArgoCD. My setup is an AWS EKS cluster. I have set up the ALB following the guide <a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller/tree/v2.2.3/helm/aws-load-balancer-controller" rel="noreferrer">here</a>. I have also set up the external-dns service as described <a href="https://github.com/kubernetes-sigs/external-dns/blob/v0.9.0/docs/tutorials/aws.md" rel="noreferrer">here</a>. I also followed the verification steps in that guide and was able to confirm that the DNS record got created, and I was able to access the foo service.</p>
<p>For argoCD I installed the manifests via</p>
<pre><code>kubectl create namespace argocd
kubectl apply -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml -n argocd
</code></pre>
<p>The argoCD docs mention adding a service to split up http and grpc and an ingress setup <a href="https://argoproj.github.io/argo-cd/operator-manual/ingress/#aws-application-load-balancers-albs-and-classic-elb-http-mode" rel="noreferrer">here</a>. I followed that and installed those as well</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
annotations:
alb.ingress.kubernetes.io/backend-protocol-version: HTTP2
external-dns.alpha.kubernetes.io/hostname: argocd.<mydomain.com>
labels:
app: argogrpc
name: argogrpc
namespace: argocd
spec:
ports:
- name: "443"
port: 443
protocol: TCP
targetPort: 8080
selector:
app.kubernetes.io/name: argocd-server
sessionAffinity: None
type: ClusterIP
</code></pre>
<pre><code>apiVersion: networking.k8s.io/v1 # Use extensions/v1beta1 for Kubernetes 1.18 and older
kind: Ingress
metadata:
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/backend-protocol: HTTPS
alb.ingress.kubernetes.io/conditions.argogrpc: |
[{"field":"http-header","httpHeaderConfig":{"httpHeaderName": "Content-Type", "values":["application/grpc"]}}]
alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
name: argocd
namespace: argocd
spec:
rules:
- host: argocd.<mydomain.com>
http:
paths:
- backend:
service:
name: argogrpc
port:
number: 443
pathType: ImplementationSpecific
- backend:
service:
name: argocd-server
port:
number: 443
pathType: ImplementationSpecific
tls:
- hosts:
- argocd.<mydomain.com>
</code></pre>
<p>The definitions are applied successfully, but I don't see the DNS record created nor any external IP listed. Am I missing any steps, or is there any misconfiguration here? Thanks in advance!</p>
| <p>The Service type needs to be NodePort. With the AWS Load Balancer Controller's default <code>instance</code> target type, the ALB can only route to services exposed as NodePort (or LoadBalancer), so with a ClusterIP service the load balancer is not provisioned and the DNS record never gets created. (Alternatively, the <code>alb.ingress.kubernetes.io/target-type: ip</code> annotation allows ClusterIP services.)</p>
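<p>For example, only the <code>type</code> field of the <code>argogrpc</code> service from the question needs to change (a sketch, annotations and labels omitted):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: argogrpc
  namespace: argocd
spec:
  ports:
  - name: "443"
    port: 443
    protocol: TCP
    targetPort: 8080
  selector:
    app.kubernetes.io/name: argocd-server
  type: NodePort
</code></pre>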
|
<p>I have a application of workload deployment and need to change it to statefulset</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: app
labels:
app: app
spec:
replicas: 1
selector:
matchLabels:
app: app
template:
metadata:
labels:
app: app
spec:
containers:
- name: app
image: nginx:1.14.2
ports:
- containerPort: 80
volumeMounts:
- name: data
subPath: app/log
mountPath: /opt/app/log
volumes:
- name: data
          persistentVolumeClaim:
claimName: pv-app-claim
</code></pre>
<p><strong>PV</strong></p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv-app
labels:
pv: app
spec:
storageClassName: "default"
capacity:
storage: 8Gi
accessModes:
- ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
nfs:
server: someIP
path: "/somepath"
</code></pre>
<p><strong>PVC</strong></p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv-app-claim
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 8Gi
selector:
matchLabels:
pv: app
</code></pre>
<p>I have tried to change the required file but am stuck at the point where I need the data in the volume to remain intact when moving to a StatefulSet. In a StatefulSet we use volumeClaimTemplates, and that is where I'm stuck: how do I retain the data and the claim when using volumeClaimTemplates?</p>
<p>Note: I'm going to use only one pod</p>
| <ul>
<li>If you want to use an existing PVC in a StatefulSet, you should not reference it under
<code>volumeClaimTemplates</code>, as <code>volumeClaimTemplates</code> will create a new PVC.</li>
<li>You should reference it under the <code>pod spec</code>, just like you would in a Deployment.</li>
</ul>
<p>Example:</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web1
spec:
selector:
matchLabels:
app: nginx
serviceName: "nginx"
replicas: 1
template:
metadata:
labels:
app: nginx
spec:
terminationGracePeriodSeconds: 10
containers:
- name: nginx1
imagePullPolicy: IfNotPresent
image: k8s.gcr.io/nginx-slim:0.8
ports:
- containerPort: 80
name: web1
volumeMounts:
- name: www1
mountPath: /usr/share/nginx/html
volumes:
- name: www1
         persistentVolumeClaim:
claimName: pv-app-claim
</code></pre>
|
<p>I'm trying to route POST requests through a K8s Load Balancer to a Webhook in Argo Events. I can't find any clear documentation on this. I'm able to get the Webhook created and I can successfully communicate with it when I port forward the webhook-eventsource-svc. The Load Balancer is built fine and displays the external IP that I assign. However when I try to POST to the Load Balancer I just get a connection timed out error. I'm hoping I'm just configuring these manifests wrong.</p>
<p>Here is the manifest for both services.</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
name: webhook
namespace: argo-events
spec:
service:
ports:
- port: 12000
targetPort: 12000
webhook:
example:
endpoint: /deploy
method: POST
port: "12000"
---
apiVersion: v1
kind: Service
metadata:
name: webhook-loadbalancer
namespace: argo-events
annotations:
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
service.beta.kubernetes.io/azure-load-balancer-internal-subnet: DevelopSubnet
spec:
type: LoadBalancer
loadBalancerIP: 1XX.X.X.XXX
ports:
- protocol: TCP
port: 90
targetPort: 12000
selector:
app: webhook-eventsource-svc
controller: eventsource-controller
</code></pre>
<p>And here is how I am sending the request:</p>
<pre><code>curl -d '@params.json' -H "Content-Type: application/json" -X POST http://1XX.X.X.XXX:90/deploy
</code></pre>
<p>Any suggestions?</p>
| <p>I'm trying to do something similar in AWS. I can get the sample webhook to work with port forwarding (<a href="https://argoproj.github.io/argo-events/quick_start/" rel="nofollow noreferrer">https://argoproj.github.io/argo-events/quick_start/</a>), but it won't work with regular K8s objects, in my case an Ingress and a Service object. I can see my Service selector correctly picks the webhook sensor pod. Both Argo Events and Argo Workflow run in the same argo namespace. Once configured, access to the Ingress from Postman returns a 404. What I find confusing is that the actual port the sensor pod exposes is 7777 in the sample, not 12000. So I've tried a Service with port 12000 / targetPort 12000 or 7777. In either case, the POST returns 404.</p>
<p>What I can point out that's applicable in both your case and mine is this (<a href="https://argoproj.github.io/argo-events/eventsources/services/" rel="nofollow noreferrer">https://argoproj.github.io/argo-events/eventsources/services/</a>): in the second paragraph it states that you must remove the service field from your EventSource object when refactoring the sample away from port forwarding. Hope it helps. I'm still trying to make this work.</p>
|
<p>I have deployed Vault in K8s. I would like to know how to inject the Vault secret into the ConfigMap of an application which holds all of the application's configuration.</p>
| <p>It's not possible; you cannot mount a Vault secret into a ConfigMap. But you can inject both the <strong>ConfigMap</strong> and the <strong>Vault secret</strong> into a single deployment in parallel.</p>
<ul>
<li>If you are mounting the ConfigMap as a file, you can mount the Vault secret as a file into the same directory or another directory.</li>
<li>If you are injecting the ConfigMap as environment variables, you can do the same with the Vault secret.</li>
</ul>
<p>If you are injecting the <strong>ConfigMap</strong> as <strong>environment variables</strong>, I would suggest checking out: <a href="https://github.com/DaspawnW/vault-crd" rel="nofollow noreferrer">https://github.com/DaspawnW/vault-crd</a></p>
<p><code>vault-crd</code> syncs the <strong>Vault secret</strong> to a <strong>Kubernetes Secret</strong>, and you can easily inject the <strong>Kubernetes Secret</strong> into the deployment, although this is not ideal from a security perspective.</p>
<p>There are many different methods to inject a Vault secret into a deployment.</p>
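<p>For example, a deployment snippet along these lines (a sketch; the names are placeholders) injects both a ConfigMap and a Kubernetes Secret (for example, one synced from Vault) as environment variables:</p>
<pre><code>containers:
- name: app
  image: my-app:latest
  envFrom:
  - configMapRef:
      name: app-config          # your application ConfigMap
  - secretRef:
      name: app-vault-secret    # Kubernetes Secret synced from Vault (e.g. by vault-crd)
</code></pre>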
|
<p>I'm just trying to create a simple service account. Theoretically, kubectl automatically creates the secret and token for service accounts... But, not in my case... I've done this in <code>kube-system</code>, <code>default</code>, and new/other namespaces.</p>
<pre><code>me@mymachine ~ % kubectl create serviceaccount my-acct
serviceaccount/my-acct created
me@mymachine ~ % kubectl describe serviceaccount my-acct
Name: my-acct
Namespace: default
Labels: <none>
Annotations: <none>
Image pull secrets: <none>
Mountable secrets: <none>
Tokens: <none>
Events: <none>
</code></pre>
<p>I have reset the Kubernetes system: uninstalled it, removed ./kube/, and removed the Library...
Still no secret is created. All of my developers' machines (Macs as well, both Intel and M1) automatically create the account secret.
Any ideas?</p>
| <p><strong>Disclaimer</strong>: This answer will not "fix" the automatic creation of secrets for service accounts, but shows how you can associate a secret to a service account.</p>
<p>For the newer Docker Desktop 4.8.1 (for Mac), you can create the secret manually:</p>
<pre><code>kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
name: default-secret
annotations:
kubernetes.io/service-account.name: default
type: kubernetes.io/service-account-token
EOF
</code></pre>
<p>And then you can associate the secret to the service account by editing the service account configuration, run:</p>
<pre><code>kubectl edit serviceaccounts default
</code></pre>
<p>There you can add the secret, at the end, like:</p>
<pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: "XXXX-XX-XXTXX:XX:XXZ"
name: default
namespace: default
resourceVersion: "XXXX"
uid: XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX
secrets:
- name: default-secret
</code></pre>
<p>After that, you'll be able to use the token for the required purposes.</p>
|
<p>I have a deployment on Google GKE, and I can't see the pod logs in the console even though Cloud Logging is enabled on the cluster.
So what could be the issue? Did I miss something?</p>
| <p>It sounds like Workload monitoring and logging may not have been enabled and currently it's only doing system monitoring and logging. Please see the docs here on how to change the logging settings: <a href="https://cloud.google.com/stackdriver/docs/solutions/gke/installing#installing" rel="nofollow noreferrer">https://cloud.google.com/stackdriver/docs/solutions/gke/installing#installing</a></p>
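<p>If workload logging turns out to be disabled, a command along these lines should re-enable it (a sketch; cluster name and zone are placeholders, and the flag may vary with your gcloud version):</p>
<pre><code>gcloud container clusters update CLUSTER_NAME \
    --zone=ZONE \
    --logging=SYSTEM,WORKLOAD
</code></pre>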
|
<p>I am going to implement PDB on AKS. Can someone please tell me why we need it when we can use the node autoscaler?</p>
<p>Also, does PDB allow zero unavailability by creating a node when one of the nodes fails?</p>
| <p><a href="https://kubernetes.io/docs/tasks/run-application/configure-pdb/" rel="nofollow noreferrer">PDB</a> allows you to set rules before evicting your pods from a node.</p>
<p>Let's say you have a 2-node cluster and a deployment with 1 replica, and you want to update your nodes.</p>
<ul>
<li><code>kubectl drain</code> will cordon <code>node 1</code> so no pods can be scheduled on that node</li>
<li><code>kubectl drain</code> will evict the pod scheduled on <code>node 1</code></li>
<li>the pod will then be scheduled on <code>node 2</code></li>
</ul>
<p>Now if you set a <code>PDB</code> with <code>minAvailable: 50%</code>, that <code>drain</code> command would fail as it would violate the rule.</p>
<p>The pod is killed and <strong>then</strong> Kubernetes tries to schedule it somewhere.</p>
<p>PDB allows you to prevent downtime by budgeting pods before evicting them.</p>
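<p>A minimal PDB manifest could look like this (a sketch; <code>policy/v1</code> requires Kubernetes 1.21+, and the <code>app: my-app</code> selector is a placeholder for your deployment's pod labels):</p>
<pre><code>apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 50%
  selector:
    matchLabels:
      app: my-app
</code></pre>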
<p><strong>Scenario without PDB</strong></p>
<p>You perform the <code>node 1</code> update and <code>node 2</code> cannot host the evicted pod:</p>
<ul>
<li>the pod is killed on <code>node 1</code></li>
<li>the scheduler cannot place the pod anywhere</li>
<li>autoscaling provisions a third node</li>
<li>the pod is scheduled on that new node</li>
</ul>
<p>During that whole time your evicted pod was not running anywhere and your application was down.</p>
|
<p>I need to write an iptables rule in the K8s pod to SNAT packets based on some conditions, but the network traffic never goes through the NAT table.</p>
| <p><strong>Why does my Kubernetes pod network traffic not go through the iptables NAT table?</strong></p>
<p>Answer:</p>
<p>Because Kubernetes dictates this to every CNI plugin.
The following is from the <a href="https://kubernetes.io/docs/concepts/services-networking/#the-kubernetes-network-model" rel="nofollow noreferrer">Kubernetes documentation</a>:</p>
<pre><code>Kubernetes imposes the following fundamental requirements on any networking
implementation (barring any intentional network segmentation policies):
- pods on a node can communicate with all pods on all nodes without NAT
- agents on a node (e.g. system daemons, kubelet) can communicate with all pods on
that node.
Note: For those platforms that support Pods running in the host network
(e.g. Linux):
- pods in the host network of a node can communicate with all pods on all nodes
without NAT
</code></pre>
|
<p>Can someone briefly explain the difference between <code>istio_request_bytes_count</code> and <code>istio_request_bytes_sum</code>? And why is the "istio_request_bytes" standard metric missing?</p>
| <p><a href="https://istio.io/latest/docs/reference/config/metrics/" rel="nofollow noreferrer">Istio Standard Metrics</a> notes that <code>istio_request_bytes</code> is a <code>DISTRIBUTION</code> type metric. In Prometheus, this would appear as a <a href="https://prometheus.io/docs/concepts/metric_types/#histogram" rel="nofollow noreferrer">histogram metric</a>. So, you should see three metrics:</p>
<ul>
<li><code>istio_request_bytes_count</code> is the number of requests</li>
<li><code>istio_request_bytes_sum</code> is the total number of bytes, added together across all requests</li>
<li><code>istio_request_bytes_bucket{le="1024"}</code> is the total number of requests where the request size is 1 KiB or smaller</li>
</ul>
<p>You can calculate the average request size by dividing the sum by the count. You can also use Prometheus functions like <a href="https://prometheus.io/docs/prometheus/latest/querying/functions/#histogram_quantile" rel="nofollow noreferrer"><code>histogram_quantile()</code></a> to calculate the median (50th-percentile) size.</p>
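<p>For example, a Prometheus query along these lines (a sketch; adjust the rate window and any label filters to your setup) gives the average request size over the last 5 minutes:</p>
<pre><code>sum(rate(istio_request_bytes_sum[5m])) / sum(rate(istio_request_bytes_count[5m]))
</code></pre>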
<p>This also applies to the other standard metrics. A common thing to measure is 95th-percentile latency ("p95"): how long does it take 95% of the requests to execute, where the remaining 5% take longer than this? <code>histogram_quantile(0.95, sum(rate(istio_request_duration_milliseconds_bucket[1h])) by (le))</code> could compute this over the most recent hour.</p>
|
<p>I have 3 contexts set up in k9s (<code>context1</code>, <code>context2</code>, <code>context3</code>). I know I can change the context within k9s by typing <code>:context</code> but how do I change the default context that loads when I open k9s from <code>context3</code> to <code>context1</code>?</p>
| <p>You can set the default context via <code>kubectl</code>:</p>
<pre><code>kubectl config use-context context1
</code></pre>
<p>This updates the default context in your KUBECONFIG, which is what k9s uses as its base.</p>
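<p>If I remember correctly, k9s also accepts a context flag at startup, so you could launch it directly with the desired context (worth verifying against your k9s version):</p>
<pre><code>k9s --context context1
</code></pre>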
|
<p>I have deployed an application in a k8s cluster with 8 replicas, but how do I ensure that at least 3 pods are always running during a rolling update? Can anyone please suggest a solution?</p>
<p>Thanks in advance.</p>
| <p>You can set the required values for <code>maxSurge</code> and <code>maxUnavailable</code>:</p>
<pre><code> strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 50%
type: RollingUpdate
</code></pre>
<p>Setting <code>maxUnavailable</code> to 50% makes sure that at least 4 of the 8 pods are always running during the update; if 3 is the exact minimum you need, you could instead set <code>maxUnavailable</code> to an absolute value of 5.</p>
|
<p>I installed vanilla Kubernetes 1.24. The cluster is up and healthy, but when I tried to install kubernetes-dashboard, I realized I cannot access the dashboard token. Before 1.24 I could just describe the token and get it.</p>
<p>Normally when a service account is created, the secret should be created automatically, BUT NOW, when I create a service account, the secret is not created automatically.</p>
<p>Just create a service account with "kubectl create serviceaccount servicename"; you should then see a secret named servicename-token with kubectl get secrets. But it is not created.</p>
<p>Has anyone faced this problem?</p>
| <p>For your question “Has anyone faced this problem?”, the answer is that everyone who installs version 1.24 will see the same behavior. As this <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/" rel="nofollow noreferrer">documentation</a> indicates, this version enables the <em><strong>LegacyServiceAccountTokenNoAutoGeneration</strong></em> feature gate, which is the root cause of what you are experiencing.</p>
<p>So, the workaround right now is to manually create the token, as this <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#manually-create-a-service-account-api-token" rel="nofollow noreferrer">guide</a> indicates.</p>
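<p>For example, on 1.24 you can either request a short-lived token or create a long-lived token Secret bound to the account (a sketch, using the <code>servicename</code> account from the question):</p>
<pre><code># short-lived token via the TokenRequest API
kubectl create token servicename

# or a long-lived token secret bound to the service account
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: servicename-token
  annotations:
    kubernetes.io/service-account.name: servicename
type: kubernetes.io/service-account-token
EOF
</code></pre>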
|
<p>I encountered the following error when I added the <strong>spring-cloud-starter-kubernetes-config</strong> dependency to my pom.xml:</p>
<pre><code>io.fabric8.kubernetes.client.KubernetesClientException: An error has occurred. Caused by: java.security.cert.CertificateException: Could not parse certificate: java.io.IOException: Empty input Caused by: java.io.IOException: Empty input
</code></pre>
<p>To disable k8s, I added in bootstrap.yml following param:</p>
<pre><code>spring:
cloud:
kubernetes:
enabled: false
</code></pre>
<p>But even after that nothing changed and the error remained.</p>
<p>Where else should I look? What parameter should I add so that if I have this dependency in pom.xml, I disable Kubernetes when running tests?</p>
| <p>As stated on the Spring Cloud Kubernetes documentation</p>
<blockquote>
<p>Also note that these properties: spring.cloud.kubernetes.config.enabled and spring.cloud.kubernetes.secrets.enabled only take effect when set in bootstrap.{properties|yml}</p>
</blockquote>
<p><a href="https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/#kubernetes-ecosystem-awareness" rel="nofollow noreferrer">https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/#kubernetes-ecosystem-awareness</a></p>
<p>So, to actually disable this property, it has to be set in <code>bootstrap.{properties|yml}</code> and not in <code>application.{properties|yml}</code>.</p>
<p>Also, you can pass the value as a VM option:</p>
<pre><code>-Dspring.cloud.kubernetes.enabled=false
</code></pre>
<p>if you want to run the application locally.</p>
|
<p>I have a GitLab CI pipeline configured to run on the Kubernetes runner. Everything worked great until I tried to add services (<a href="https://docs.gitlab.com/ee/ci/services/mysql.html" rel="nofollow noreferrer">https://docs.gitlab.com/ee/ci/services/mysql.html</a>) for the test job. The service hostname (e.g. <code>mysql</code>) cannot be resolved on Kubernetes, resulting in the following error: <code>dial tcp: lookup mysql on 10.96.0.10:53: no such host</code>. However, it works on the Docker runner, but that's just not what I want. Is there any way to make the service hostnames resolvable on the Kubernetes runner?</p>
<p>The job definition from <code>.gitlab-ci.yml</code>:</p>
<pre><code>test:
stage: test
variables:
MYSQL_ROOT_PASSWORD: --top-secret--
MYSQL_DATABASE: --top-secret--
MYSQL_USER: --top-secret--
MYSQL_PASSWORD: --top-secret--
services:
- mysql:latest
- nats:latest
script:
- ping -c 2 mysql
- go test -cover -coverprofile=coverage.prof.tmp ./...
</code></pre>
<p><strong>Edit:</strong></p>
<p>Logs from <code>runner-jd6sxcl7-project-430-concurrent-0g5bm8</code> pod show that the services started. There are 4 containers total inside the pod: <code>build</code>,<code>helper</code>,<code>svc-0</code> (mysql), <code>svc-1</code> (nats)</p>
<p><code>svc-0</code> logs show the mysql service started successfully:</p>
<pre><code>2019-12-09 21:52:07+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.18-1debian9 started.
2019-12-09 21:52:07+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
2019-12-09 21:52:08+00:00 [Note] [Entrypoint]: Entrypoint script for MySQL Server 8.0.18-1debian9 started.
2019-12-09 21:52:08+00:00 [Note] [Entrypoint]: Initializing database files
2019-12-09T21:52:08.226747Z 0 [Warning] [MY-011070] [Server] 'Disabling symbolic links using --skip-symbolic-links (or equivalent) is the default. Consider not using this option as it' is deprecated and will be removed in a future release.
2019-12-09T21:52:08.233097Z 0 [System] [MY-013169] [Server] /usr/sbin/mysqld (mysqld 8.0.18) initializing of server in progress as process 46
</code></pre>
<p><code>svc-1</code> logs show the nats service started successfully as well:</p>
<pre><code>[1] 2019/12/09 21:52:12.876121 [INF] Starting nats-server version 2.1.2
[1] 2019/12/09 21:52:12.876193 [INF] Git commit [679beda]
[1] 2019/12/09 21:52:12.876393 [INF] Starting http monitor on 0.0.0.0:8222
[1] 2019/12/09 21:52:12.876522 [INF] Listening for client connections on 0.0.0.0:4222
[1] 2019/12/09 21:52:12.876548 [INF] Server id is NCPAQNFKKWPI67DZHSWN5EWOCQSRACFG2FXNGTLMW2NNRBAMLSDY4IYQ
[1] 2019/12/09 21:52:12.876552 [INF] Server is ready
[1] 2019/12/09 21:52:12.876881 [INF] Listening for route connections on 0.0.0.0:6222
</code></pre>
| <p>This was a known issue with older versions of GitLab Runner (< 12.8), or when running Kubernetes executor in older versions of Kubernetes (< 1.7).</p>
<p>From the <a href="https://gitlab.com/gitlab-org/gitlab-runner/-/blob/main/docs/executors/kubernetes.md#L26" rel="nofollow noreferrer">Kubernetes executor documentation of GitLab Runner</a>:</p>
<blockquote>
<ul>
<li>Since GitLab Runner 12.8 and Kubernetes 1.7, the services are accessible via their DNS names. <strong>If you are using an older version you will have to use <code>localhost</code></strong>.</li>
</ul>
</blockquote>
<p>(emphasis mine)</p>
<p>It's important to keep in mind that the other restrictions and implications of the Kubernetes executor. From the same document:</p>
<blockquote>
<p>Note that when services and containers are running in the same Kubernetes
pod, they are all sharing the same localhost address.</p>
</blockquote>
<p>So, even if you're able to use the service specific hostname to talk to your service, it's all really <code>localhost</code> (127.0.0.1) underneath.</p>
<p>As such, keep in mind the other important restriction from the same document:</p>
<blockquote>
<ul>
<li>You cannot use several services using the same port</li>
</ul>
</blockquote>
<p>(thanks to <a href="https://stackoverflow.com/users/3154003/user3154003">@user3154003</a> for the link to the <a href="https://gitlab.com/gitlab-org/gitlab-runner/-/issues/2229" rel="nofollow noreferrer">GitLab Runner issue</a> in a <a href="https://stackoverflow.com/a/61046902/1273938">currently deleted answer</a> that pointed me in the right direction for this answer.)</p>
|
<p>I want to install Elasticsearch in AWS EKS. But <a href="https://www.elastic.co/guide/en/elasticsearch/plugins/master/cloud-aws-best-practices.html#:%7E:text=is%20not%20recommended%20or%20supported" rel="nofollow noreferrer">EFS storage is not recommended for Elasticsearch</a>.</p>
<p>So I want to install Elasticsearch using AWS EKS + EC2 (Management Node Group) + EBS.</p>
<p>I'm curious about three things.</p>
<p><strong>First,</strong> EKS is a managed Kubernetes service. Should we install Elasticsearch by installing one node (EC2) and Kubernetes in EC2?
<a href="https://i.stack.imgur.com/seSQq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/seSQq.png" alt="enter image description here" /></a></p>
<p><strong>Second,</strong> create three EC2s, install elasticsearch with Docker for each instance, and connect clustering
<a href="https://i.stack.imgur.com/jBTKu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jBTKu.png" alt="enter image description here" /></a></p>
<p><strong>Third,</strong> whether Elasticsearch can be installed with EKS Fargate.</p>
<p>It may be an ignorant question, but I would appreciate it if you could give me an idea on how to reliably build Elasticsearch in EKS.</p>
| <p>I think the simpler way to install Elasticsearch (or ELK in general) is to use <a href="https://docs.aws.amazon.com/eks/latest/userguide/efs-csi.html" rel="nofollow noreferrer">EKS with the EFS driver</a>. Then you can use Helm or the Operator to install Elastic.</p>
<p>If you strive for the highest Elasticsearch performance, and/or have a bigger cluster, then I'd go with your second solution, i.e. install each Elastic node on a separate EC2 instance.</p>
|
<p>Today when I used the <code>duc</code> command to check the host disk usage on my Kubernetes (v1.21) cluster, I saw this:</p>
<pre><code>[root@k8smasterone log]# duc ls -Fg /var/log/
5.4G messages-20220515 [+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++]
3.9G messages [++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ ]
571.1M messages-20220508 [++++++++++++++++++
</code></pre>
<p>The kubelet produces log files that are too large. What should I do to limit the kubelet log size? I have read this issue, which says log rotation should be handled by the kubelet: <a href="https://github.com/containerd/containerd/issues/4830" rel="nofollow noreferrer">https://github.com/containerd/containerd/issues/4830</a>, but I did not find any workable solution for the kubelet log rotation problem. PS: I am using containerd, not Docker, right now. This is my journald config:</p>
<pre><code>[root@k8smasterone log]# cat /etc/systemd/journald.conf
# This file is part of systemd.
#
# systemd is free software; you can redistribute it and/or modify it
# under the terms of the GNU Lesser General Public License as published by
# the Free Software Foundation; either version 2.1 of the License, or
# (at your option) any later version.
#
# Entries in this file show the compile time defaults.
# You can change settings by editing this file.
# Defaults can be restored by simply deleting this file.
#
# See journald.conf(5) for details.
[Journal]
#Storage=auto
Compress=yes
#Seal=yes
#SplitMode=uid
#SyncIntervalSec=5m
#RateLimitInterval=30s
#RateLimitBurst=1000
SystemMaxUse=50M
#SystemKeepFree=
SystemMaxFileSize=20M
#RuntimeMaxUse=
#RuntimeKeepFree=
#RuntimeMaxFileSize=
MaxRetentionSec=1week
#MaxFileSec=1month
#ForwardToSyslog=yes
#ForwardToKMsg=no
#ForwardToConsole=no
#ForwardToWall=yes
#TTYPath=/dev/console
#MaxLevelStore=debug
#MaxLevelSyslog=debug
#MaxLevelKMsg=notice
#MaxLevelConsole=info
#MaxLevelWall=emerg
#LineMax=48K
</code></pre>
| <p>You can tweak the following two parameters in the <code>kubelet</code> <a href="https://v1-21.docs.kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/" rel="nofollow noreferrer">config</a>:</p>
<blockquote>
<p><strong><code>containerLogMaxSize</code></strong> string A quantity defines the maximum size of the
container log file before it is rotated. For example "5Mi" or
"256Ki". Dynamic Kubelet Config (beta): If dynamically updating this
field, consider that it may trigger log rotation. Default: "10Mi"</p>
<p><strong><code>containerLogMaxFiles</code></strong> int32 Maximum number of container log files that
can be present for a container. Dynamic Kubelet Config (beta): If
dynamically updating this field, consider that lowering it may cause
log files to be deleted. Default: 5</p>
</blockquote>
<p>Example:</p>
<pre><code>sudo vi /etc/kubernetes/kubelet-config.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
nodeStatusUpdateFrequency: "10s"
failSwapOn: True
authentication:
anonymous:
enabled: false
webhook:
enabled: True
x509:
clientCAFile: /etc/kubernetes/ssl/ca.crt
authorization:
mode: Webhook
staticPodPath: /etc/kubernetes/manifests
cgroupDriver: systemd
containerLogMaxFiles: 5 # Maximum number of container logs to retain.
containerLogMaxSize: 1Mi # Change the size of /var/log/containers/<pod-name>/log files size to 1M max.
maxPods: 110
address: 192.168.22.5
readOnlyPort: 0
healthzPort: 10248
healthzBindAddress: 127.0.0.1
kubeletCgroups: /systemd/system.slice
clusterDomain: cluster.local
protectKernelDefaults: true
rotateCertificates: true
clusterDNS:
- 8.8.8.8
kubeReserved:
cpu: 200m
memory: 512Mi
resolvConf: "/run/systemd/resolve/resolv.conf"
eventRecordQPS: 5
shutdownGracePeriod: 60s
shutdownGracePeriodCriticalPods: 20s
</code></pre>
<p>Reload the kubelet config:</p>
<pre><code>sudo systemctl daemon-reload && sudo systemctl restart kubelet
</code></pre>
|
<p>I am trying to access, via kubectl, the Kubernetes API of the master node that is running on an EC2 instance. I want to do this from another EC2 instance running in a different VPC. What steps should I take to make this possible?</p>
<p>I already have the kubeconfig file on my second machine, but when running kubectl, it gives me a connection error.</p>
<p>Edit: Both VPCs are private and have similar CIDRs.</p>
| <ul>
<li>If both of your <strong>EC2</strong> instances are in different VPCs, you can set up <code>VPC peering</code>.</li>
<li>If you want to expose your master and the K8s setup, you can directly use the public IP of the EC2 instance (if it exists), and <code>kubectl</code> will connect to the k8s cluster over the internet.</li>
</ul>
<p>You can also check out peering multiple VPCs with the same CIDR range if that is what you are looking for: <a href="https://stackoverflow.com/questions/38301308/multiple-vpc-and-subnet-with-same-cidr-blocks">Multiple VPC and Subnet with same CIDR blocks</a></p>
<p>Or : <a href="https://docs.aws.amazon.com/vpc/latest/peering/peering-configurations-partial-access.html#two-vpcs-peered-specific-cidr" rel="nofollow noreferrer">https://docs.aws.amazon.com/vpc/latest/peering/peering-configurations-partial-access.html#two-vpcs-peered-specific-cidr</a></p>
|
<p>I am a beginner in kubernetes and was trying to deploy my flask application following this guide: <a href="https://medium.com/analytics-vidhya/build-a-python-flask-app-and-deploy-with-kubernetes-ccc99bbec5dc" rel="nofollow noreferrer">https://medium.com/analytics-vidhya/build-a-python-flask-app-and-deploy-with-kubernetes-ccc99bbec5dc</a></p>
<p>I have successfully built a docker image and pushed it to dockerhub <a href="https://hub.docker.com/repository/docker/beatrix1997/kubernetes_flask_app" rel="nofollow noreferrer">https://hub.docker.com/repository/docker/beatrix1997/kubernetes_flask_app</a></p>
<p>but am having trouble debugging a pod.
This is my yaml file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: kubernetesflaskapp-deploy
labels:
app: kubernetesflaskapp
spec:
replicas: 1
selector:
matchLabels:
app: kubernetesflaskapp
template:
metadata:
labels:
app: kubernetesflaskapp
spec:
containers:
- name: kubernetesflaskapp
image: beatrix1997/kubernetes_flask_app
ports:
- containerPort: 5000
</code></pre>
<p>And this is the description of the pod:</p>
<pre><code>Name: kubernetesflaskapp-deploy-5764bbbd44-8696k
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Fri, 20 May 2022 11:26:33 +0100
Labels: app=kubernetesflaskapp
pod-template-hash=5764bbbd44
Annotations: <none>
Status: Running
IP: 172.17.0.12
IPs:
IP: 172.17.0.12
Controlled By: ReplicaSet/kubernetesflaskapp-deploy-5764bbbd44
Containers:
kubernetesflaskapp:
Container ID: docker://d500dc15e389190670a9273fea1d70e6bd6ab2e7053bd2480d114ad6150830f1
Image: beatrix1997/kubernetes_flask_app
Image ID: docker-pullable://beatrix1997/kubernetes_flask_app@sha256:1bfa98229f55b04f32a6b85d72860886abcc0f17295b14e173151a8e4b0f0334
Port: 5000/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 20 May 2022 11:58:38 +0100
Finished: Fri, 20 May 2022 11:58:38 +0100
Ready: False
Restart Count: 11
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zq8n7 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-zq8n7:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 33m default-scheduler Successfully assigned default/kubernetesflaskapp-deploy-5764bbbd44-8696k to minikube
Normal Pulled 33m kubelet Successfully pulled image "beatrix1997/kubernetes_flask_app" in 14.783413947s
Normal Pulled 33m kubelet Successfully pulled image "beatrix1997/kubernetes_flask_app" in 1.243534487s
Normal Pulled 32m kubelet Successfully pulled image "beatrix1997/kubernetes_flask_app" in 1.373217701s
Normal Pulling 32m (x4 over 33m) kubelet Pulling image "beatrix1997/kubernetes_flask_app"
Normal Created 32m (x4 over 33m) kubelet Created container kubernetesflaskapp
Normal Pulled 32m kubelet Successfully pulled image "beatrix1997/kubernetes_flask_app" in 1.239794774s
Normal Started 32m (x4 over 33m) kubelet Started container kubernetesflaskapp
Warning BackOff 3m16s (x138 over 33m) kubelet Back-off restarting failed container
</code></pre>
<p>I am using ubuntu as my OS if it matters at all.
Any help would be appreciated!
Many thanks!</p>
| <p>I would check the following:</p>
<ol>
<li>Check if your Docker image works in Docker: you can run it with the <code>run</code> command (see the example after this list); find the official doc <a href="https://docs.docker.com/engine/reference/commandline/run/" rel="nofollow noreferrer">here</a></li>
<li>If it doesn't work, then you can check what is wrong in your app first.</li>
<li>If it does, try checking the readiness and liveness probes; here is the official <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/" rel="nofollow noreferrer">documentation</a></li>
<li>You can find more hints about failing pods <a href="https://kubernetes.io/docs/tasks/debug/debug-application/determine-reason-pod-failure/" rel="nofollow noreferrer">here</a></li>
</ol>
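<p>For the first check, a quick way to validate the image and to inspect the container output on the cluster could be (a sketch; the port mapping assumes the app listens on 5000 as in your deployment):</p>
<pre><code>docker run --rm -p 5000:5000 beatrix1997/kubernetes_flask_app

kubectl logs deploy/kubernetesflaskapp-deploy
</code></pre>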
|
<p>I want to install Elasticsearch in AWS EKS. But <a href="https://www.elastic.co/guide/en/elasticsearch/plugins/master/cloud-aws-best-practices.html#:%7E:text=is%20not%20recommended%20or%20supported" rel="nofollow noreferrer">EFS storage is not recommended for Elasticsearch</a>.</p>
<p>So I want to install Elasticsearch using AWS EKS + EC2 (Management Node Group) + EBS.</p>
<p>I'm curious about three things.</p>
<p><strong>First,</strong> EKS is a managed Kubernetes service. Should we install Elasticsearch by installing one node (EC2) and Kubernetes in EC2?
<a href="https://i.stack.imgur.com/seSQq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/seSQq.png" alt="enter image description here" /></a></p>
<p><strong>Second,</strong> create three EC2s, install elasticsearch with Docker for each instance, and connect clustering
<a href="https://i.stack.imgur.com/jBTKu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jBTKu.png" alt="enter image description here" /></a></p>
<p><strong>Third,</strong> whether Elasticsearch can be installed with EKS Fargate.</p>
<p>It may be an ignorant question, but I would appreciate it if you could give me an idea on how to reliably build Elasticsearch in EKS.</p>
| <p>You can use the Helm chart to deploy Elasticsearch on the EKS cluster.</p>
<p><a href="https://github.com/elastic/helm-charts/tree/main/elasticsearch" rel="nofollow noreferrer">https://github.com/elastic/helm-charts/tree/main/elasticsearch</a></p>
<p>By default, it will use the cluster's default storage class, which provisions EBS disks and gives better performance. You can change the storage class and disk settings as required in the Helm chart values.</p>
<p>It's also fine to deploy Elasticsearch on separate nodes; however, setting up the networking and managing it might not be as easy as on a Kubernetes cluster.</p>
|
<p>I'm using k3s to deploy my cluster. On my worker node I have already set up insecure registries in /etc/docker/daemon.json for an internal-network private registry:</p>
<pre><code>{ "insecure-registries": ["192.168.30.87:7111"] }
</code></pre>
<p>and I have restarted both my docker and my machine. I can see the settings from <code>docker info</code>
<a href="https://i.stack.imgur.com/LlCo7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LlCo7.png" alt="enter image description here" /></a></p>
<p>when I ran</p>
<pre><code>docker run 192.168.30.87:7111/linhnvh/user-service-image:latest
</code></pre>
<p>it worked perfectly</p>
<p>But I also set the image within my kubernetes deployment
<a href="https://i.stack.imgur.com/HeMk3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HeMk3.png" alt="enter image description here" /></a></p>
<p>when I apply the deployment I'm getting the following error:</p>
<p><a href="https://i.stack.imgur.com/pAELL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pAELL.png" alt="enter image description here" /></a></p>
<p>Why is this still not working within my kubernetes cluster? I can run <code>docker run</code> normally on the worker node with the image url</p>
<p><a href="https://i.stack.imgur.com/tqRa8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tqRa8.png" alt="enter image description here" /></a></p>
<p>it just doesn't work for kubernetes deployment? The <code>insecure-registries</code> settings is there in docker info</p>
<p>As I'm coding on a streaming machine, I can't copy-paste images.</p>
| <p>Finally, after 4 hours of searching, I came across this issue: <a href="https://github.com/k3s-io/k3s/issues/1802" rel="nofollow noreferrer">https://github.com/k3s-io/k3s/issues/1802</a></p>
<p>Because I'm using k3s for the cluster setup, it uses containerd (managed via crictl) instead of Docker, so the daemon.json doesn't apply,
which brought me to this documentation: <a href="https://rancher.com/docs/k3s/latest/en/installation/private-registry/" rel="nofollow noreferrer">https://rancher.com/docs/k3s/latest/en/installation/private-registry/</a></p>
<p>I created <code>registries.yaml</code> at /etc/rancher/k3s/ with the following content:</p>
<pre><code>mirrors:
"192.168.30.87:7111":
endpoint:
- "http://192.168.30.87:7111"
</code></pre>
<p>After saving, I restarted the k3s service with <code>systemctl restart k3s</code>.</p>
<p>After that my pod was able to pull the image. Remember to set the IP both in the mirror name and under <code>endpoint</code> in registries.yaml.</p>
<p>This needs to be set on every node that will run pods, and the service restart command differs per node type:</p>
<p>on the master node: <code>systemctl restart k3s</code></p>
<p>on a worker/agent node: <code>systemctl restart k3s-agent</code></p>
|
<p>I created a Node.js TLS server, dockerized it, and created a K8S Deployment and ClusterIP service for it. I created a DNS record for the LoadBalancer service external IP of istio-ingressgateway, and I'm using this DNS name to try to access the TLS server through Istio, but for some reason this error appears:</p>
<pre><code>[2022-02-10T04:28:38.302Z] "- - -" 0 NR filter_chain_not_found - "-" 0 0 3087 - "-" "-" "-" "-" "-" "-" - - 10.120.22.33:7070 10.101.31.172:44748 - -
</code></pre>
<p>The node server.js file:</p>
<pre><code>const tls = require("tls");
const fs = require("fs");
const options = {
key: fs.readFileSync("server-key.pem"),
cert: fs.readFileSync("server-cert.pem"),
rejectUnauthorized: false,
};
process.env["NODE_TLS_REJECT_UNAUTHORIZED"] = 0;
const server = tls.createServer(options, (socket) => {
console.log(
"server connected",
socket.authorized ? "authorized" : "unauthorized"
);
socket.write("welcome!\n");
socket.setEncoding("utf8");
socket.pipe(socket);
});
server.listen(7070, () => {
console.log("server bound");
});
</code></pre>
<p>The client.js file I use to connect to the server:</p>
<pre><code>const tls = require("tls");
const fs = require("fs");
const options = {
ca: [fs.readFileSync("server-cert.pem", { encoding: "utf-8" })],
};
var socket = tls.connect(
7070,
"HOSTNAME",
options,
() => {
console.log(
"client connected",
socket.authorized ? "authorized" : "unauthorized"
);
process.stdin.pipe(socket);
process.stdin.resume();
}
);
socket.setEncoding("utf8");
socket.on("data", (data) => {
console.log(data);
});
socket.on("end", () => {
console.log("Ended");
});
</code></pre>
<p>The cluster service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nodejs-service
namespace: nodejs-tcp
spec:
ports:
- name: web
port: 7070
protocol: TCP
targetPort: 7070
selector:
app: nodejs
sessionAffinity: None
type: ClusterIP
</code></pre>
<p>The istio-gateway.yaml</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
name: nodejs-gw
namespace: nodejs-tcp
spec:
selector:
istio: istio-ingressgateway
servers:
- hosts:
- 'HOSTNAME'
port:
name: tls
number: 7070
protocol: TLS
tls:
credentialName: tls-secret
mode: PASSTHROUGH
</code></pre>
<p>In credentialName, I created a generic secret that holds the values of the private key and the certificate of the server</p>
<p>The istio-virtual-service.yaml</p>
<pre><code>apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
name: nodejs-vs
namespace: nodejs-tcp
spec:
gateways:
- nodejs-gw
hosts:
- 'HOSTNAME'
tls:
- match:
- port: 7070
sniHosts:
- HOSTNAME
route:
- destination:
host: nodejs-service
port:
number: 7070
</code></pre>
<p>The Istio version I’m using:</p>
<pre><code>client version: 1.12.2
control plane version: 1.12.2
data plane version: 1.12.2 (159 proxies)
</code></pre>
<p>Your help is so much appreciated. Thanks in advance.</p>
| <p>One thing I noticed right away is that you are using the incorrect selector in your <code>istio-gateway</code>, it should be:</p>
<pre><code>spec:
selector:
istio: ingressgateway
</code></pre>
<p>A good troubleshooting starting point would be to get the routes for your <code>ingressgateway</code> and validate that you see the expected ones.</p>
<ol>
<li>First you need to know your pod's name:</li>
</ol>
<pre><code>kubectl get pods -n <namespace_of_your_app>
NAME READY STATUS RESTARTS AGE
pod/my-nginx-xxxxxxxxx-xxxxx 2/2 Running 0 50m
</code></pre>
<p>In my deployment, it is an nginx pod.</p>
<ol start="2">
<li>Once you have the name, you can get the routes specific to your hostname:</li>
</ol>
<pre><code>istioctl pc routes <your_pod_name>.<namespace>
NOTE: This output only contains routes loaded via RDS.
NAME DOMAINS MATCH VIRTUAL SERVICE
my-nginx.default.svc.cluster.local:443 my-nginx /*
</code></pre>
<p>This is an output example for a hostname "my-nginx". If the output returns no route, it usually means that it does not match SNI and/or cannot find a specific route.</p>
|
<p>I'm trying to use the <code>kubectl get namespaces</code> command and it is fetching the data.</p>
<pre><code>kubectl get namespace
NAME STATUS AGE
default Active 1d
kube-node-lease Active 1d
kube-public Active 1d
kube-system Active 1d
</code></pre>
<p>but I want to filter it to show the name only. So when you run the script it should show this:</p>
<pre><code>kubectl get namespace
NAME
default
kube-node-lease
kube-public
kube-system
</code></pre>
<p>I've tried some PowerShell commands but they are not working out for me.</p>
| <p>Try any one of the command</p>
<pre><code>kubectl get namespace --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}'
</code></pre>
<pre><code>kubectl get namespace | awk '{print $1}'
</code></pre>
<pre><code>kubectl get namespace --no-headers -o custom-columns=":metadata.name"
</code></pre>
<pre><code>kubectl get namespace -o=name | sed "s/^.\{10\}//"
</code></pre>
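<p>Since you mention PowerShell, an equivalent there could be (a sketch):</p>
<pre><code>kubectl get namespace -o name | ForEach-Object { ($_ -split '/')[1] }
</code></pre>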
|
<p>I have created 2 services, and for each service there is a corresponding deployment. All of these are on the same PC. After applying the YAML, the services and deployments are running properly.</p>
<pre><code>sudo kubectl apply -f deployment.yaml
</code></pre>
<p>Now, if I try to curl to the ingress IP address, then I get 404 page not found as the response.</p>
<pre><code>sudo kubectl get ingress -o wide
sudo curl <IP address of ingress>
</code></pre>
<p>If I try to curl the IP address of the employee service or employee deployment, it reports that the connection was refused after waiting for some time. Whichever port of the employee service and deployment I try, I always get an error instead of the proper response.</p>
<p>Can someone from the community please assist? I need to call the <code>/emplist</code> endpoint of the myshop pod from the employee pod.</p>
<p>myshop.js</p>
<pre><code>const express = require('express')
const app = express()
const port = 3000
app.get('/emplist', (req, res) => {
res.status(200).send( empListObj )
});
app.listen(port, '0.0.0.0', () => {
console.log(`Listening on port ${port}`)
})
</code></pre>
<p>employee.js</p>
<pre><code>const axios = require('axios').default;
const express = require('express')
const app = express()
const port = 3001
app.get('/', (req, res) => {
axios({
method: 'get',
url: 'http://myshopservice/emplist'
})
.then(function (response) {
res.status(200).send(response.data.empList)
})
.catch(function (error) {
res.status(200).send(`Error because ${error}`);
})
})
app.listen(port, '0.0.0.0', () => {
console.log(`Listening on port ${port}`)
})
</code></pre>
<p>Now the corresponding deployment.yaml is:</p>
<pre><code>---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: myshop-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: myshop.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: myshopservice
port:
number: 4000
---
apiVersion: v1
kind: Service
metadata:
name: myshopservice
spec:
selector:
app: myshop
ports:
- protocol: "TCP"
port: 4000
targetPort: 3000
nodePort: 30000
type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: myshop
spec:
replicas: 1
selector:
matchLabels:
app: myshop
template:
metadata:
labels:
app: myshop
spec:
containers:
- name: myshop
image: mydockerhubid/myshop:1
ports:
- containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
name: employeeservice
spec:
selector:
app: employee
ports:
- protocol: "TCP"
port: 4001
targetPort: 3001
nodePort: 30001
type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: employee
spec:
replicas: 1
selector:
matchLabels:
app: employee
template:
metadata:
labels:
app: employee
spec:
containers:
- name: hud
image: mydockerhubid/employee:1
ports:
- containerPort: 3001
---
</code></pre>
| <p>It looks like the issue is mainly with the ingress. I would suggest first checking whether the services themselves are running by requesting <code><Loadbalancer IP:Port/path></code> <strong>(both services)</strong>.</p>
<p>Your app is listening on one specific path, but you are not requesting that path in <strong>curl</strong>. If you want to do it that way you can, however I would suggest trying the simple way first and then playing with route/path-based routing.</p>
<p>Also, your ingress is sending requests to the myshop service, but that service only listens on the route <strong>/emplist</strong>.</p>
<p>So try the ingress once after removing the annotation and the host, since you are running locally.</p>
<p>If you want to keep the host, you have to add an entry locally in <code>/etc/hosts</code> mapping the <strong>host</strong> to the ingress <strong>IP</strong>, and then you should curl the domain instead of the ingress IP.</p>
<p>Are you running any ingress controller? Nginx or so? : <a href="https://kubernetes.github.io/ingress-nginx/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/</a></p>
<pre><code>---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: myshop-ingress
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: myshopservice
port:
number: 4000
</code></pre>
<p>Check this very simple example for ingress routing and controller setup : <a href="https://dev.to/xaviergeerinck/creating-a-kubernetes-nginx-ingress-controller-and-create-a-rule-to-a-sample-application-4and" rel="nofollow noreferrer">https://dev.to/xaviergeerinck/creating-a-kubernetes-nginx-ingress-controller-and-create-a-rule-to-a-sample-application-4and</a></p>
|
<p>I deployed my Angular application (a static web page) on Kubernetes and tried launching it from Google Chrome. I see the app is loading, however nothing is displayed in the browser. Upon checking the browser console, I could see this error for the main.js, polyfill.js, and runtime.js files:</p>
<p>"Failed to load module script: Expected a JavaScript module script but the server responded with a MIME type of "text/html". Strict MIME type checking is enforced for module scripts per HTML spec."</p>
<p>I researched a few forums, and one possible root cause could be that the <code>type</code> attribute in the <code><script></code> tags should be type=text/javascript instead of <code>type=module</code> in the index.html file that is produced under the dist folder after executing ng build. I don't know how to make that change, as these tags are generated during the build process and my ng build command is run by a Docker build step.</p>
<p>The URL I'm trying to access will be something like: "http://xxxx:portnum/issuertcoetools"</p>
<p>Note: the host <code>xxxx:portnum</code> will be used by many other apps as well.</p>
<p>Are there any work-arounds or solutions to this issue?</p>
<p>index.html, produced after running ng build locally (which is the same as what I see in the Kubernetes pod too):</p>
<pre><code><!DOCTYPE html><html lang="en"><head>
<meta charset="utf-8">
<title>Data Generator</title>
<base href="/">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="icon" type="image/x-icon" href="favicon.ico">
<link rel="preconnect" href="https://fonts.gstatic.com">
<style type="text/css">@font-face{font-family:'Roboto';font-style:normal;font-weight:300;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmSU5fCRc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0460-052F, U+1C80-1C88, U+20B4, U+2DE0-2DFF, U+A640-A69F, U+FE2E-FE2F;}@font-face{font-family:'Roboto';font-style:normal;font-weight:300;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmSU5fABc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0400-045F, U+0490-0491, U+04B0-04B1, U+2116;}@font-face{font-family:'Roboto';font-style:normal;font-weight:300;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmSU5fCBc4AMP6lbBP.woff2) format('woff2');unicode-range:U+1F00-1FFF;}@font-face{font-family:'Roboto';font-style:normal;font-weight:300;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmSU5fBxc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0370-03FF;}@font-face{font-family:'Roboto';font-style:normal;font-weight:300;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmSU5fCxc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0102-0103, U+0110-0111, U+0128-0129, U+0168-0169, U+01A0-01A1, U+01AF-01B0, U+1EA0-1EF9, U+20AB;}@font-face{font-family:'Roboto';font-style:normal;font-weight:300;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmSU5fChc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0100-024F, U+0259, U+1E00-1EFF, U+2020, U+20A0-20AB, U+20AD-20CF, U+2113, U+2C60-2C7F, U+A720-A7FF;}@font-face{font-family:'Roboto';font-style:normal;font-weight:300;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmSU5fBBc4AMP6lQ.woff2) format('woff2');unicode-range:U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA, U+02DC, U+2000-206F, U+2074, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215, U+FEFF, U+FFFD;}@font-face{font-family:'Roboto';font-style:normal;font-weight:400;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOmCnqEu92Fr1Mu72xKKTU1Kvnz.woff2) format('woff2');unicode-range:U+0460-052F, U+1C80-1C88, U+20B4, U+2DE0-2DFF, U+A640-A69F, U+FE2E-FE2F;}@font-face{font-family:'Roboto';font-style:normal;font-weight:400;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOmCnqEu92Fr1Mu5mxKKTU1Kvnz.woff2) format('woff2');unicode-range:U+0400-045F, U+0490-0491, U+04B0-04B1, U+2116;}@font-face{font-family:'Roboto';font-style:normal;font-weight:400;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOmCnqEu92Fr1Mu7mxKKTU1Kvnz.woff2) format('woff2');unicode-range:U+1F00-1FFF;}@font-face{font-family:'Roboto';font-style:normal;font-weight:400;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOmCnqEu92Fr1Mu4WxKKTU1Kvnz.woff2) format('woff2');unicode-range:U+0370-03FF;}@font-face{font-family:'Roboto';font-style:normal;font-weight:400;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOmCnqEu92Fr1Mu7WxKKTU1Kvnz.woff2) format('woff2');unicode-range:U+0102-0103, U+0110-0111, U+0128-0129, U+0168-0169, U+01A0-01A1, U+01AF-01B0, U+1EA0-1EF9, U+20AB;}@font-face{font-family:'Roboto';font-style:normal;font-weight:400;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOmCnqEu92Fr1Mu7GxKKTU1Kvnz.woff2) format('woff2');unicode-range:U+0100-024F, U+0259, U+1E00-1EFF, U+2020, U+20A0-20AB, U+20AD-20CF, U+2113, U+2C60-2C7F, 
U+A720-A7FF;}@font-face{font-family:'Roboto';font-style:normal;font-weight:400;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOmCnqEu92Fr1Mu4mxKKTU1Kg.woff2) format('woff2');unicode-range:U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA, U+02DC, U+2000-206F, U+2074, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215, U+FEFF, U+FFFD;}@font-face{font-family:'Roboto';font-style:normal;font-weight:500;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmEU9fCRc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0460-052F, U+1C80-1C88, U+20B4, U+2DE0-2DFF, U+A640-A69F, U+FE2E-FE2F;}@font-face{font-family:'Roboto';font-style:normal;font-weight:500;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmEU9fABc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0400-045F, U+0490-0491, U+04B0-04B1, U+2116;}@font-face{font-family:'Roboto';font-style:normal;font-weight:500;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmEU9fCBc4AMP6lbBP.woff2) format('woff2');unicode-range:U+1F00-1FFF;}@font-face{font-family:'Roboto';font-style:normal;font-weight:500;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmEU9fBxc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0370-03FF;}@font-face{font-family:'Roboto';font-style:normal;font-weight:500;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmEU9fCxc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0102-0103, U+0110-0111, U+0128-0129, U+0168-0169, U+01A0-01A1, U+01AF-01B0, U+1EA0-1EF9, U+20AB;}@font-face{font-family:'Roboto';font-style:normal;font-weight:500;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmEU9fChc4AMP6lbBP.woff2) format('woff2');unicode-range:U+0100-024F, U+0259, U+1E00-1EFF, U+2020, U+20A0-20AB, U+20AD-20CF, U+2113, U+2C60-2C7F, U+A720-A7FF;}@font-face{font-family:'Roboto';font-style:normal;font-weight:500;font-display:swap;src:url(https://fonts.gstatic.com/s/roboto/v29/KFOlCnqEu92Fr1MmEU9fBBc4AMP6lQ.woff2) format('woff2');unicode-range:U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA, U+02DC, U+2000-206F, U+2074, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215, U+FEFF, U+FFFD;}</style>
<style type="text/css">@font-face{font-family:'Material Icons';font-style:normal;font-weight:400;src:url(https://fonts.gstatic.com/s/materialicons/v128/flUhRq6tzZclQEJ-Vdg-IuiaDsNcIhQ8tQ.woff2) format('woff2');}.material-icons{font-family:'Material Icons';font-weight:normal;font-style:normal;font-size:24px;line-height:1;letter-spacing:normal;text-transform:none;display:inline-block;white-space:nowrap;word-wrap:normal;direction:ltr;-webkit-font-feature-settings:'liga';-webkit-font-smoothing:antialiased;}</style>
<style>.mat-typography{font:400 14px/20px Roboto,Helvetica Neue,sans-serif;letter-spacing:normal}html,body{height:100%}body{margin:0;font-family:Roboto,Helvetica Neue,sans-serif}</style><link rel="stylesheet" href="styles.9c0548760c8b22be.css" media="print" onload="this.media='all'"><noscript><link rel="stylesheet" href="styles.9c0548760c8b22be.css"></noscript></head>
<body class="mat-typography">
<app-root></app-root>
<script src="runtime.397c3874548e84cd.js" type="module">
</script><script src="polyfills.2145acc81d0726ab.js" type="module">
</script><script src="main.a655dca28148b7e2.js" type="module"></script>
</body></html>
</code></pre>
<p>nginx.conf file</p>
<pre><code>worker_processes 4;
events { worker_connections 1024; }
http {
    server {
        listen 8080;
        include /etc/nginx/mime.types;
        location /issuertcoetools {
            root /usr/share/nginx/html;
            index index.html index.htm;
            try_files $uri $uri/ /index.html =404;
        }
    }
}
</code></pre>
| <p>This error usually happens when the application is deployed into a subfolder. Your HTML is reachable at <code>domain.com/mysubfolder/index.html</code>, but Angular still resolves its scripts and styles against the base URL, so it requests them from <code>domain.com/</code> instead of <code>domain.com/mysubfolder/</code>. You can resolve it by building your app with an explicit base href:</p>
<pre><code>ng build --prod --base-href /mysubfolder/
</code></pre>
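<p>For the nginx.conf shown in the question, where the app is served under <code>/issuertcoetools</code>, that would be the command below (note the leading and trailing slashes; on newer Angular CLI versions <code>--prod</code> is replaced by <code>--configuration production</code>):</p>
<pre><code>ng build --prod --base-href /issuertcoetools/
</code></pre>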
|
<p>I am facing an error while deploying a Kubernetes deployment from CircleCI. Please find the configuration file below.</p>
<p>When running the kubectl CLI, we get an error caused by an incompatibility between kubectl and the <a href="https://aws.amazon.com/eks/" rel="noreferrer">EKS</a> tooling in the <code>aws-cli</code>.</p>
<pre class="lang-yaml prettyprint-override"><code>version: 2.1
orbs:
aws-ecr: circleci/[email protected]
docker: circleci/[email protected]
rollbar: rollbar/[email protected]
kubernetes: circleci/[email protected]
deploy:
version: 2.1
orbs:
aws-eks: circleci/[email protected]
kubernetes: circleci/[email protected]
executors:
default:
description: |
The version of the circleci/buildpack-deps Docker container to use
when running commands.
parameters:
buildpack-tag:
type: string
default: buster
docker:
- image: circleci/buildpack-deps:<<parameters.buildpack-tag>>
description: |
A collection of tools to deploy changes to AWS EKS in a declarative
manner where all changes to templates are checked into version control
before applying them to an EKS cluster.
commands:
setup:
description: |
Install the gettext-base package into the executor to be able to run
envsubst for replacing values in template files.
This command is a prerequisite for all other commands and should not
have to be run manually.
parameters:
cluster-name:
default: ''
description: Name of the EKS Cluster.
type: string
aws-region:
default: 'eu-central-1'
description: Region where the EKS Cluster is located.
type: string
git-user-email:
default: "[email protected]"
description: Email of the git user to use for making commits
type: string
git-user-name:
default: "CircleCI Deploy Orb"
description: Name of the git user to use for making commits
type: string
steps:
- run:
name: install gettext-base
command: |
if which envsubst > /dev/null; then
echo "envsubst is already installed"
exit 0
fi
sudo apt-get update
sudo apt-get install -y gettext-base
- run:
name: Setup GitHub access
command: |
mkdir -p ~/.ssh
echo 'github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ==' >> ~/.ssh/known_hosts
git config --global user.email "<< parameters.git-user-email >>"
git config --global user.name "<< parameters.git-user-name >>"
- aws-eks/update-kubeconfig-with-authenticator:
aws-region: << parameters.aws-region >>
cluster-name: << parameters.cluster-name >>
install-kubectl: true
authenticator-release-tag: v0.5.1
update-image:
description: |
Generates template files with the specified version tag for the image
to be updated and subsequently applies that template after checking it
back into version control.
parameters:
cluster-name:
default: ''
description: Name of the EKS Cluster.
type: string
aws-region:
default: 'eu-central-1'
description: Region where the EKS Cluster is located.
type: string
image-tag:
default: ''
description: |
The tag of the image, defaults to the value of `CIRCLE_SHA1`
if not provided.
type: string
replicas:
default: 3
description: |
The replica count for the deployment.
type: integer
environment:
default: 'production'
description: |
The environment/stage where the template will be applied. Defaults
to `production`.
type: string
template-file-path:
default: ''
description: |
The path to the source template which contains the placeholders
for the image-tag.
type: string
resource-name:
default: ''
description: |
Resource name in the format TYPE/NAME e.g. deployment/nginx.
type: string
template-repository:
default: ''
description: |
The fullpath to the repository where templates reside. Write
access is required to commit generated templates.
type: string
template-folder:
default: 'templates'
description: |
The name of the folder where the template-repository is cloned to.
type: string
placeholder-name:
default: IMAGE_TAG
description: |
The name of the placeholder environment variable that is to be
substituted with the image-tag parameter.
type: string
cluster-namespace:
default: sayway
description: |
Namespace within the EKS Cluster.
type: string
steps:
- setup:
aws-region: << parameters.aws-region >>
cluster-name: << parameters.cluster-name >>
git-user-email: [email protected]
git-user-name: deploy
- run:
name: pull template repository
command: |
[ "$(ls -A << parameters.template-folder >>)" ] && \
cd << parameters.template-folder >> && git pull --force && cd ..
[ "$(ls -A << parameters.template-folder >>)" ] || \
git clone << parameters.template-repository >> << parameters.template-folder >>
- run:
name: generate and commit template files
command: |
cd << parameters.template-folder >>
IMAGE_TAG="<< parameters.image-tag >>"
./bin/generate.sh --file << parameters.template-file-path >> \
--stage << parameters.environment >> \
--commit-message "Update << parameters.template-file-path >> for << parameters.environment >> with tag ${IMAGE_TAG:-$CIRCLE_SHA1}" \
<< parameters.placeholder-name >>="${IMAGE_TAG:-$CIRCLE_SHA1}" \
REPLICAS=<< parameters.replicas >>
- kubernetes/create-or-update-resource:
get-rollout-status: true
namespace: << parameters.cluster-namespace >>
resource-file-path: << parameters.template-folder >>/<< parameters.environment >>/<< parameters.template-file-path >>
resource-name: << parameters.resource-name >>
jobs:
test:
working_directory: ~/say-way/core
parallelism: 1
shell: /bin/bash --login
environment:
CIRCLE_ARTIFACTS: /tmp/circleci-artifacts
CIRCLE_TEST_REPORTS: /tmp/circleci-test-results
KONFIG_CITUS__HOST: localhost
KONFIG_CITUS__USER: postgres
KONFIG_CITUS__DATABASE: sayway_test
KONFIG_CITUS__PASSWORD: ""
KONFIG_SPEC_REPORTER: true
docker:
- image: 567567013174.dkr.ecr.eu-central-1.amazonaws.com/core-ci:test-latest
aws_auth:
aws_access_key_id: $AWS_ACCESS_KEY_ID_STAGING
aws_secret_access_key: $AWS_SECRET_ACCESS_KEY_STAGING
- image: circleci/redis
- image: rabbitmq:3.7.7
- image: circleci/mongo:4.2
- image: circleci/postgres:10.5-alpine
steps:
- checkout
- run: mkdir -p $CIRCLE_ARTIFACTS $CIRCLE_TEST_REPORTS
# This is based on your 1.0 configuration file or project settings
- restore_cache:
keys:
- v1-dep-{{ checksum "Gemfile.lock" }}-
# any recent Gemfile.lock
- v1-dep-
- run:
name: install correct bundler version
command: |
export BUNDLER_VERSION="$(grep -A1 'BUNDLED WITH' Gemfile.lock | tail -n1 | tr -d ' ')"
echo "export BUNDLER_VERSION=$BUNDLER_VERSION" >> $BASH_ENV
gem install bundler --version $BUNDLER_VERSION
- run: 'bundle check --path=vendor/bundle || bundle install --path=vendor/bundle --jobs=4 --retry=3'
- run:
name: copy test.yml.sample to test.yml
command: cp config/test.yml.sample config/test.yml
- run:
name: Precompile and clean assets
command: bundle exec rake assets:precompile assets:clean
# Save dependency cache
- save_cache:
key: v1-dep-{{ checksum "Gemfile.lock" }}-{{ epoch }}
paths:
- vendor/bundle
- public/assets
- run:
name: Audit bundle for known security vulnerabilities
command: bundle exec bundle-audit check --update
- run:
name: Setup Database
command: bundle exec ruby ~/sayway/setup_test_db.rb
- run:
name: Migrate Database
command: bundle exec rake db:citus:migrate
- run:
name: Run tests
command: bundle exec rails test -f
# By default, running "rails test" won't run system tests.
- run:
name: Run system tests
command: bundle exec rails test:system
# Save test results
- store_test_results:
path: /tmp/circleci-test-results
# Save artifacts
- store_artifacts:
path: /tmp/circleci-artifacts
- store_artifacts:
path: /tmp/circleci-test-results
build-and-push-image:
working_directory: ~/say-way/
parallelism: 1
shell: /bin/bash --login
executor: aws-ecr/default
steps:
- checkout
- run:
name: Pull latest core images for cache
command: |
$(aws ecr get-login --no-include-email --region $AWS_REGION)
docker pull "${AWS_ECR_ACCOUNT_URL}/core:latest"
- docker/build:
image: core
registry: "${AWS_ECR_ACCOUNT_URL}"
tag: "latest,${CIRCLE_SHA1}"
cache_from: "${AWS_ECR_ACCOUNT_URL}/core:latest"
- aws-ecr/push-image:
repo: core
tag: "latest,${CIRCLE_SHA1}"
deploy-production:
working_directory: ~/say-way/
parallelism: 1
shell: /bin/bash --login
executor: deploy/default
steps:
- kubernetes/install-kubectl:
kubectl-version: v1.22.0
- rollbar/notify_deploy_started:
environment: report
- deploy/update-image:
resource-name: deployment/core-web
template-file-path: core-web-pod.yml
cluster-name: report
environment: report
template-repository: [email protected]:say-way/sw-k8s.git
replicas: 3
- deploy/update-image:
resource-name: deployment/core-worker
template-file-path: core-worker-pod.yml
cluster-name: report
environment: report
template-repository: [email protected]:say-way/sw-k8s.git
replicas: 4
- deploy/update-image:
resource-name: deployment/core-worker-batch
template-file-path: core-worker-batch-pod.yml
cluster-name: report
environment: report
template-repository: [email protected]:say-way/sw-k8s.git
replicas: 1
- rollbar/notify_deploy_finished:
deploy_id: "${ROLLBAR_DEPLOY_ID}"
status: succeeded
deploy-demo:
working_directory: ~/say-way/
parallelism: 1
shell: /bin/bash --login
executor: deploy/default
steps:
- kubernetes/install-kubectl:
kubectl-version: v1.22.0
- rollbar/notify_deploy_started:
environment: demo
- deploy/update-image:
resource-name: deployment/core-web
template-file-path: core-web-pod.yml
cluster-name: demo
environment: demo
template-repository: [email protected]:say-way/sw-k8s.git
replicas: 2
- deploy/update-image:
resource-name: deployment/core-worker
template-file-path: core-worker-pod.yml
cluster-name: demo
environment: demo
template-repository: [email protected]:say-way/sw-k8s.git
replicas: 1
- deploy/update-image:
resource-name: deployment/core-worker-batch
template-file-path: core-worker-batch-pod.yml
cluster-name: demo
environment: demo
template-repository: [email protected]:say-way/sw-k8s.git
replicas: 1
- rollbar/notify_deploy_finished:
deploy_id: "${ROLLBAR_DEPLOY_ID}"
status: succeeded
workflows:
version: 2.1
build-n-test:
jobs:
- test:
filters:
branches:
ignore: master
build-approve-deploy:
jobs:
- build-and-push-image:
context: Core
filters:
branches:
only: master
- approve-report-deploy:
type: approval
requires:
- build-and-push-image
- approve-demo-deploy:
type: approval
requires:
- build-and-push-image
- deploy-production:
context: Core
requires:
- approve-report-deploy
- deploy-demo:
context: Core
requires:
- approve-demo-deploy
</code></pre>
| <p>There is a <a href="https://github.com/aws/aws-cli/issues/6920" rel="nofollow noreferrer">known issue</a> in the aws-cli that causes this; it has since been fixed.</p>
<hr />
<p>In <em>my</em> case, updating the aws-cli and then regenerating <code>~/.kube/config</code> resolved it.</p>
<ol>
<li>Update aws-cli (<a href="https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html" rel="nofollow noreferrer">following the documentation</a>)</li>
</ol>
<pre><code>curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --update
</code></pre>
<ol start="2">
<li>Update the kube configuration</li>
</ol>
<pre><code>mv ~/.kube/config ~/.kube/config.bk
aws eks update-kubeconfig --region ${AWS_REGION} --name ${EKS_CLUSTER_NAME}
</code></pre>
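<p>If updating alone does not help, it is worth checking which exec credential API version the generated kubeconfig references. A common cause of this kind of kubectl/aws-cli mismatch (an assumption here, not something stated in the original answer) is that older aws-cli releases wrote <code>client.authentication.k8s.io/v1alpha1</code>, which recent kubectl versions reject:</p>
<pre><code># Show the exec credential apiVersion in the current kubeconfig;
# after regenerating it with an up-to-date aws-cli it should read v1beta1.
grep "client.authentication.k8s.io" ~/.kube/config
</code></pre>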
|
<p>I am planning to use the Nginx <a href="http://nginx.org/en/docs/http/ngx_http_mirror_module.html" rel="nofollow noreferrer">Mirror</a> module to copy traffic to another server. Are there any performance implications of doing this?</p>
<p>For example, what if the mirror server is buggy or too slow?</p>
<p>Thanks for any help in advance</p>
| <p>Yes, there can be a performance impact on the production server. The mirror subrequests are processed by the same nginx worker processes that handle live traffic, so a buggy or very slow mirror backend can add latency or tie up connections and other resources on the production side as well.</p>
<p>There is also a related, still unanswered thread on the nginx forum: <a href="https://forum.nginx.org/read.php?2,281042,281042" rel="nofollow noreferrer">https://forum.nginx.org/read.php?2,281042,281042</a></p>
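<p>For reference, a minimal mirroring setup looks roughly like the sketch below (the upstream names and addresses are placeholders, not taken from the question). The response to the mirror subrequest is ignored, but the subrequest still runs inside the same worker, which is where the potential impact comes from:</p>
<pre><code>http {
    upstream production_backend { server 10.0.0.10:8080; }  # placeholder
    upstream mirror_backend     { server 10.0.0.20:8080; }  # placeholder

    server {
        listen 80;

        location / {
            mirror /mirror;           # copy each request to the mirror location
            mirror_request_body on;   # also forward the request body
            proxy_pass http://production_backend;
        }

        location = /mirror {
            internal;                 # not reachable from outside
            proxy_pass http://mirror_backend$request_uri;
        }
    }
}
</code></pre>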
|
| <p>This is what I am trying to do: get the IP of the nginx pod created in the previous step, then use a temporary busybox image to wget its '/'.</p>
<p>Find the link for your reference:
<a href="https://github.com/dgkanatsios/CKAD-exercises/blob/main/a.core_concepts.md#get-nginx-pods-ip-created-in-previous-step-use-a-temp-busybox-image-to-wget-its-" rel="nofollow noreferrer">https://github.com/dgkanatsios/CKAD-exercises/blob/main/a.core_concepts.md#get-nginx-pods-ip-created-in-previous-step-use-a-temp-busybox-image-to-wget-its-</a></p>
<p>Now, my question: when I run this command, it works:</p>
<pre><code>kubectl run busybox --image=busybox --rm -it --restart=Never -- wget -O- $(kubectl get pod nginx -o jsonpath='{.status.podIP}:{.spec.containers[0].ports[0].containerPort}')
</code></pre>
<p>but when I do this, it does not work. Can someone explain why that is?</p>
<pre><code>kubectl get po -o wide # get the IP, will be something like '10.1.1.131'
# create a temp busybox pod
kubectl run busybox --image=busybox --rm -it --restart=Never -- wget -O- 10.1.1.131:80
</code></pre>
| <p>You first have to create the nginx pod; this is what the exercise refers to as the "previous step".</p>
<pre><code>$ kubectl run nginx --image=nginx --restart=Never --port=80
pod/nginx created
</code></pre>
<p>You can then run wget inside a BusyBox pod:</p>
<pre><code>$ kubectl run busybox --image=busybox --rm -it --restart=Never -- wget -O- $(kubectl get pod nginx -o jsonpath='{.status.podIP}:{.spec.containers[0].ports[0].containerPort}')
Connecting to 10.42.0.52:80 (10.42.0.52:80)
writing to stdout
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
- 100% |********************************| 615 0:00:00 ETA
written to stdout
pod "busybox" deleted
</code></pre>
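<p>As for why the hard-coded variant failed: the literal <code>10.1.1.131</code> only works if it is the address the cluster actually assigned to <em>your</em> nginx pod. Pod IPs are ephemeral and change every time the pod is recreated, so the value has to be read right before use; this is exactly what the jsonpath expression in the working one-liner does:</p>
<pre><code># Print the nginx pod's current IP, then substitute it into the wget command.
kubectl get pod nginx -o jsonpath='{.status.podIP}'
</code></pre>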
|
| <p>I have a number of <code>secretKey</code> entries in values.yaml, shown below. For each one, I need to add the value of <code>secretKey</code> as a key under <code>template.data</code>, with the decoded secret as its value, as in the example below.</p>
<p>How can I achieve this?</p>
<pre><code>{{- range $externalSecretName, $externalSecret := .Values.externalSecrets }}
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: {{ $externalSecretName }}
spec:
refreshInterval: 1m
secretStoreRef:
name: secret
kind: SecretStore
target:
name: {{ $externalSecretName }}
creationPolicy: Owner
template:
data:
## Needs to insert/add each secretKey value here like below
{
keyname1: "{{ .keyname1 | b64dec }}".
keyname2: "{{ .keyname2 | b64dec }}".
}
data:
{{- toYaml $externalSecret.data | nindent 4 }}
---
{{- end }}
values.yaml:
===========
extraEnvSecret:
fromSecret:
name: master-tf-address-handling
data:
PREFIX_KEYNAME1: keyname1
PREFIX_KEYNAME2: keyname2
externalSecrets:
demo-app:
data:
- secretKey: keyname1
remoteRef:
key: value1
- secretKey: keyname2
remoteRef:
key: value1
</code></pre>
| <pre><code>{{- range $externalSecretName, $externalSecret := .Values.externalSecrets }}
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
name: {{ $externalSecretName }}
spec:
refreshInterval: 1m
secretStoreRef:
name: secret
kind: SecretStore
target:
name: {{ $externalSecretName }}
creationPolicy: Owner
template:
data:
{
{{- range $externalSecret.data }}
{{ .secretKey }}: "{{ .remoteRef.key | b64enc }}",
{{- end }}
}
data:
{{- toYaml $externalSecret.data | nindent 4 }}
---
{{- end }}
</code></pre>
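<p>A caveat on top of this (not part of the original answer): with <code>b64enc</code>, Helm encodes the <code>remoteRef.key</code> value at render time. If the intent is instead to emit the literal placeholders from the question, so that <code>b64dec</code> is evaluated later by the ExternalSecret's own templating engine, the braces can be escaped with <code>printf</code>. A rough sketch of just the <code>template.data</code> section under that assumption:</p>
<pre><code>  template:
    data:
      {
        {{- range $externalSecret.data }}
        {{ .secretKey }}: "{{ printf "{{ .%s | b64dec }}" .secretKey }}",
        {{- end }}
      }
</code></pre>
<p>For the values in the question this renders as <code>keyname1: "{{ .keyname1 | b64dec }}",</code> and so on, leaving the decoding to the external-secrets controller.</p>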
|