Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>Hi folks, I'm using a Vagrant box provisioned with Ansible, with Oracle VirtualBox as the provider, and it was working fine for me.
But then I installed Android Studio with its Emulator, and Minikube with KVM.</p>
<p>Afterwards, Vagrant with VirtualBox just stopped working. Now whenever I run <code>vagrant up</code> I get the error below.</p>
<pre class="lang-sh prettyprint-override"><code>Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'ubuntu/bionic64' version '20200416.0.0' is up to date...
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
==> default: Forwarding ports...
default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
The guest machine entered an invalid state while waiting for it
to boot. Valid states are 'starting, running'. The machine is in the
'gurumeditation' state. Please verify everything is configured
properly and try again.
If the provider you're using has a GUI that comes with it,
it is often helpful to open that and watch the machine, since the
GUI often has more helpful error messages than Vagrant can retrieve.
For example, if you're using VirtualBox, run `vagrant up` while the
VirtualBox GUI is open.
The primary issue for this error is that the provider you're using
is not properly configured. This is very rarely a Vagrant issue.
</code></pre>
<p><strong>I need to run all three on Ubuntu. How can I fix this?</strong></p>
| Muhammad Taqi | <p>1) Stop the VM</p>
<pre><code>VBoxManage controlvm vm_123 poweroff
</code></pre>
<p>2) Then check the settings.
VirtualBox will likely tell you there are some incompatible settings; correct those.</p>
<p>It could be nested virtualisation, 32-bit vs. 64-bit mode, the amount of video RAM, the virtual VGA display type, etc.</p>
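<p>For illustration, here is a rough sketch of how those settings can be inspected and adjusted from the command line (the VM name <code>vm_123</code> is just the placeholder used above; substitute whatever <code>VBoxManage list vms</code> reports, and treat the values as examples only):</p>
<pre><code># list registered VMs and inspect the failing one
VBoxManage list vms
VBoxManage showvminfo vm_123

# example adjustments
VBoxManage modifyvm vm_123 --memory 2048              # RAM in MB
VBoxManage modifyvm vm_123 --vram 16                  # video memory in MB
VBoxManage modifyvm vm_123 --graphicscontroller vmsvga
</code></pre>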
| Battornarak Lapuma |
<p>I have some questions about the terminology I should use. I have looked up the components that I am using, but I am still not sure. Can you please check whether these are the right ways to describe each component? If not, can you please revise them?</p>
<ol>
<li>I develop microservices using Spring Framework (Spring boot).</li>
<li>I deploy components on cloud via Kubernetes and Docker. (I understand Docker is used for containerizing and Kubernetes is for orchestrating multiple containers. Is it right to say "I deploy on Kubernetes or via Kubernetes"?)</li>
<li>CI/CD pipeline using Jenkins and Azure DevOps. (Project uses Azure DevOps for triggering CI/CD in Jenkins)</li>
</ol>
<p>Please note that this project already existed when I joined the team, and I am new to all of these concepts. I understand briefly what they do and I know what each term means at an entry level, but I just want to make sure I am saying these things the right way.</p>
| Jonathan Hagen | <p>I would say that you deploy services, not components, but your team might have its own terminology.</p>
<p>You do deploy on Kubernetes.</p>
<p>Docker is used to create and manage containers and container images. Kubernetes does not use Docker itself, but it can use images created via Docker (which follow the OCI format) to deploy containers in Pods.</p>
<p>The rest seems right to me :)</p>
| Faeeria |
<p>Is there anyone who uses Argo CD on EKS Fargate? It seems that there is an issue with the Argo CD setup on Fargate: all pods are stuck in the <code>Pending</code> state.</p>
<p>I’ve tried installing it in the <code>argocd</code> namespace and in existing ones. It still doesn’t work.</p>
<p>I tried to install it using the commands below:</p>
<pre><code>kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/v2.4.7/manifests/install.yaml
</code></pre>
| Gauthier Tchey | <p>Make sure you have created a Fargate profile with the namespace selector set to <code>argocd</code>. That might be the issue.</p>
<p>Refer to <a href="https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-gs-create-profile" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/fargate-getting-started.html#fargate-gs-create-profile</a></p>
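<p>For example, a Fargate profile covering the <code>argocd</code> namespace can be created with eksctl roughly like this (the cluster and profile names below are placeholders):</p>
<pre><code>eksctl create fargateprofile \
  --cluster my-eks-cluster \
  --name fp-argocd \
  --namespace argocd
</code></pre>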
| Ajay Chidambaram |
<p>Good afternoon, I have a question. I am new to Kubernetes and I need to connect to a DB that is outside of my cluster. I could only connect to the DB by using <code>hostNetwork: true</code>; however, this is not recommended. Is there a method to communicate with an external DB in this case?</p>
<p>I am including the YAML that I am currently using. My pod contains one container that runs a Spring Boot app.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: find-complementary-account-info
labels:
app: find-complementary-account-info
spec:
replicas: 2
selector:
matchLabels:
app: find-complementary-account-info
template:
metadata:
labels:
app: find-complementary-account-info
spec:
hostNetwork: true
dnsPolicy: Default
containers:
- name: find-complementary-account-info
image: find-complementary-account-info:latest
imagePullPolicy: IfNotPresent
resources:
limits:
memory: "350Mi"
requests:
memory: "300Mi"
ports:
- containerPort: 8081
env:
- name: URL_CONNECTION_BD
value: jdbc:oracle:thin:@11.160.9.18:1558/DEFAULTSRV.WORLD
- name: USERNAME_CONNECTION_BD
valueFrom:
secretKeyRef:
name: credentials-bd-pers
key: user_pers
- name: PASSWORD_CONNECTION_BD
valueFrom:
secretKeyRef:
name: credentials-bd-pers
key: password_pers
---
apiVersion: v1
kind: Service
metadata:
name: find-complementary-account-info
spec:
type: NodePort
selector:
app: find-complementary-account-info
ports:
- protocol: TCP
port: 8081
targetPort: 8081
nodePort: 30020
</code></pre>
<p>Does anyone have an idea how to communicate with an external DB? This is not a cloud cluster; it is on-premise.</p>
| Cesar Justo | <p>The <code>hostNetwork</code> parameter is used for accessing pods from outside of the cluster; you don't need it for this.</p>
<p>Pods inside the cluster can communicate externally because they are NATed. If they can't, something external is preventing it, like a firewall or missing routing.</p>
<p>The quickest way to check that is to SSH into one of your Kubernetes cluster nodes and try:</p>
<pre><code>telnet 11.160.9.18 1558
</code></pre>
<p>Anyway, that IP address seems to be a public one, so you should check your company firewall, IMHO.</p>
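<p>As an aside, the usual way to reach an external database without <code>hostNetwork</code> is a selector-less Service plus a manual Endpoints object. The sketch below is only an illustration (the Service name is made up; the IP and port are taken from the question):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: external-oracle-db   # no selector, so Kubernetes will not manage the endpoints
spec:
  ports:
    - protocol: TCP
      port: 1558
      targetPort: 1558
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-oracle-db   # must match the Service name
subsets:
  - addresses:
      - ip: 11.160.9.18      # the external DB address
    ports:
      - port: 1558
</code></pre>
<p>The application could then use <code>jdbc:oracle:thin:@external-oracle-db:1558/DEFAULTSRV.WORLD</code> as its connection URL instead of the raw IP.</p>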
| oldgiova |
<p>I have:</p>
<ol>
<li>deployments of services A and B in k8s</li>
<li>Prometheus stack</li>
</ol>
<p>I want to scale service A when metric m1 of service B changes.
Solutions I found which are more or less unsuitable:</p>
<ol>
<li>I can define HPA for service A with the following part of spec:</li>
</ol>
<pre><code> - type: Object
object:
metric:
name: m1
describedObject:
apiVersion: api/v1
kind: Pod
name: certain-pod-of-service-B
current:
value: 10k
</code></pre>
<p>Technically, it will work. But it's not suited to the dynamic nature of k8s.
Also, I can't use a pods metric (metrics: - type: Pods pods:) in the HPA because it would request the m1 metric from the pods of service A (which obviously don't expose it).</p>
<ol start="2">
<li><p>Define a custom metric in prometheus-adapter which queries the m1 metric from the pods of service B. It's more suitable, but looks like a workaround because I already have the metric m1.</p>
</li>
<li><p>The same for external metrics</p>
</li>
</ol>
<p>I feel that I'm missing something, because this doesn't seem like an unrealistic case :)
So, please advise me: how can I scale one service by a metric of another in k8s?</p>
| pingrulkin | <p>I decided to provide a Community Wiki answer that may help other people facing a similar issue.</p>
<p>The <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaler</a> is a Kubernetes feature that allows to scale applications based on one or more monitored metrics.<br />
As we can find in the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/" rel="nofollow noreferrer">Horizontal Pod Autoscaler documentation</a>:</p>
<blockquote>
<p>The Horizontal Pod Autoscaler automatically scales the number of Pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization (or, with custom metrics support, on some other application-provided metrics).</p>
</blockquote>
<p>There are <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-metrics-apis" rel="nofollow noreferrer">three groups of metrics</a> that we can use with the Horizontal Pod Autoscaler:</p>
<ul>
<li><a href="https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/" rel="nofollow noreferrer">resource metrics</a>: predefined resource usage metrics (CPU and
memory) of pods and nodes.</li>
<li><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-multiple-metrics-and-custom-metrics" rel="nofollow noreferrer">custom metrics</a>: custom metrics associated with a Kubernetes
object.</li>
<li><a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects" rel="nofollow noreferrer">external metrics</a>: custom metrics not associated with a
Kubernetes object.</li>
</ul>
<p>Any HPA target can be scaled based on the resource usage of the <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-resource-metrics" rel="nofollow noreferrer">pods</a> (or <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#container-resource-metrics" rel="nofollow noreferrer">containers</a>) in the scaling target. The CPU utilization metric is a <code>resource metric</code>, you can specify other resource metrics besides CPU (e.g. memory). This seems to be the easiest and most basic method of scaling, but we can use more specific metrics by using <code>custom metrics</code> or <code>external metrics</code>.</p>
<p>There is one major difference between <code>custom metrics</code> and <code>external metrics</code> (see: <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/custom-and-external-metrics#custom-metrics" rel="nofollow noreferrer">Custom and external metrics for autoscaling workloads</a>):</p>
<blockquote>
<p>Custom metrics and external metrics differ from each other:</p>
</blockquote>
<blockquote>
<p>A custom metric is reported from your application running in Kubernetes.
An external metric is reported from an application or service not running on your cluster, but whose performance impacts your Kubernetes application.</p>
</blockquote>
<p>All in all, in my opinion it is okay to use <code>custom metrics</code> in the case above;
I did not find any other suitable way to accomplish this task.</p>
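<p>As a rough sketch of what this can look like with an <code>autoscaling/v2</code> HPA (the names <code>service-a</code> and <code>service-b</code> and the target value are assumptions, and it requires an adapter such as prometheus-adapter to expose <code>m1</code> for the Service object):</p>
<pre><code>apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: service-a
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: service-a          # the Deployment being scaled
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Object
      object:
        metric:
          name: m1
        describedObject:     # the metric is read from service B, not from A's pods
          apiVersion: v1
          kind: Service
          name: service-b
        target:
          type: Value
          value: "10k"
</code></pre>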
| matt_j |
<p>I am running an EKS cluster with a Fargate profile. I checked the node status using <code>kubectl describe node</code> and it is showing disk pressure:</p>
<pre><code>Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Tue, 12 Jul 2022 03:10:33 +0000 Wed, 29 Jun 2022 13:21:17 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure True Tue, 12 Jul 2022 03:10:33 +0000 Wed, 06 Jul 2022 19:46:54 +0000 KubeletHasDiskPressure kubelet has disk pressure
PIDPressure False Tue, 12 Jul 2022 03:10:33 +0000 Wed, 29 Jun 2022 13:21:17 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 12 Jul 2022 03:10:33 +0000 Wed, 29 Jun 2022 13:21:27 +0000 KubeletReady kubelet is posting ready status
</code></pre>
<p>And there is also a failed garbage collection event.</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FreeDiskSpaceFailed 11m (x844 over 2d22h) kubelet failed to garbage collect required amount of images. Wanted to free 6314505830 bytes, but freed 0 bytes
Warning EvictionThresholdMet 65s (x45728 over 5d7h) kubelet Attempting to reclaim ephemeral-storage
</code></pre>
<p>I think the cause of the disk filling up quickly is the application logs. The application writes to stdout, which, as per the AWS documentation, is in turn written to log files by the container agent, and I am using Fargate's built-in Fluent Bit to push the application logs to an OpenSearch cluster.</p>
<p>But it looks like the EKS cluster is not deleting the old log files created by the container agent.</p>
<p>I wanted to SSH into the Fargate nodes to further debug the issue, but as per AWS support, SSH into Fargate nodes is not possible.</p>
<p>What can be done to remove disk pressure from fargate nodes?</p>
<p>As suggested in the answers, I am using logrotate in a sidecar. But as per the logs of the logrotate container, it is not able to find the directory:</p>
<pre><code>rotating pattern: /var/log/containers/*.log
52428800 bytes (5 rotations)
empty log files are not rotated, old logs are removed
considering log /var/log/containers/*.log
log /var/log/containers/*.log does not exist -- skipping
reading config file /etc/logrotate.conf
Reading state from file: /var/lib/logrotate.status
Allocating hash table for state file, size 64 entries
Creating new state
</code></pre>
<p>The YAML file is:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-apis
namespace: kube-system
spec:
replicas: 3
selector:
matchLabels:
app: api
template:
metadata:
labels:
app: api
spec:
containers:
- name: my-apis
image: 111111xxxxx.dkr.ecr.us-west-2.amazonaws.com/my-apis:1.0.3
ports:
- containerPort: 8080
resources:
limits:
cpu: "1000m"
memory: "1200Mi"
requests:
cpu: "1000m"
memory: "1200Mi"
readinessProbe:
httpGet:
path: "/ping"
port: 8080
initialDelaySeconds: 5
periodSeconds: 5
timeoutSeconds: 2
livenessProbe:
httpGet:
path: "/ping"
port: 8080
initialDelaySeconds: 5
periodSeconds: 5
timeoutSeconds: 5
- name: logrotate
image: realz/logrotate
volumeMounts:
- mountPath: /var/log/containers
name: my-app-logs
env:
- name: CRON_EXPR
value: "*/5 * * * *"
- name: LOGROTATE_LOGFILES
value: "/var/log/containers/*.log"
- name: LOGROTATE_FILESIZE
value: "50M"
- name: LOGROTATE_FILENUM
value: "5"
volumes:
- name: my-app-logs
emptyDir: {}
</code></pre>
| anujprashar | <p><code>What can be done to remove disk pressure from fargate nodes?</code></p>
<p>There is no known configuration that makes Fargate automatically clean a specific log location. You can run logrotate as a sidecar. There are plenty of choices <a href="https://hub.docker.com/search?q=logrotate" rel="nofollow noreferrer">here</a>.</p>
| gohm'c |
<p>I am trying to connect terraform to a Kubernetes cluster, but the documentation on Terraform is not clear on which client certificates I should use for TLS connection. Since I am new to both Kubernetes and Terraform, I could not figure that out:</p>
<pre><code>provider "kubernetes" {
host = "https://xxx.xxx.xxx.xxx"
client_certificate = "${file("~/.kube/client-cert.pem")}"
client_key = "${file("~/.kube/client-key.pem")}"
cluster_ca_certificate = "${file("~/.kube/cluster-ca-cert.pem")}"
}
</code></pre>
<p>In /etc/kubernetes/pki there is more than one certificate and key (front-proxy-client, api-server-client, api-server-kubelet-client). Which one should I use to allow Terraform to connect to my cluster?</p>
<hr />
<p><strong>Edit:</strong> Here is the Kubernetes version (output of <code>kubectl version</code>):</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"clean", BuildDate:"2021-08-19T15:45:37Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3", GitCommit:"c92036820499fedefec0f847e2054d824aea6cd1", GitTreeState:"clean", BuildDate:"2021-10-27T18:35:25Z", GoVersion:"go1.16.9", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
| joe1531 | <p>I found out the reason. It was not related to Terraform. The problem was that when I set up my cluster I used the option --apiserver-advertise-address=<MASTER_NODE_PRIVATE_IP> in the kubeadm init command; when I used --control-plane-endpoint=<MASTER_NODE_PUBLIC_IP> instead, it worked.</p>
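<p>For reference, a minimal sketch of the working invocation (the placeholder is whichever public IP or DNS name the API server should be reachable on):</p>
<pre><code>kubeadm init --control-plane-endpoint=<MASTER_NODE_PUBLIC_IP>
</code></pre>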
| joe1531 |
<p>I have two domains and both of these domains have separate SSL certificates. Is it possible to set up ssl for these domains using a single ingress configuration?</p>
<p>Cluster: EKS</p>
<p>Ingress controller: AWS ALB ingress controller</p>
| raghunath | <p>Add a comma-separated string of your certificate ARNs in the annotations.</p>
<pre><code>alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:xxxxx:certificate/cert1,arn:aws:acm:us-west-2:xxxxx:certificate/cert2,arn:aws:acm:us-west-2:xxxxx:certificate/cert3
</code></pre>
<p><a href="https://github.com/kubernetes-sigs/aws-load-balancer-controller/blob/main/docs/guide/ingress/annotations.md#certificate-arn" rel="nofollow noreferrer">https://github.com/kubernetes-sigs/aws-load-balancer-controller/blob/main/docs/guide/ingress/annotations.md#certificate-arn</a></p>
<p>Also, you can assign a certificate via the AWS web console:
EC2 -> Load Balancers -> Listeners -> TLS:443 -> View/edit certificates, and add an additional cert from ACM there.</p>
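<p>A rough sketch of how that annotation sits in an Ingress covering both domains (the host names, Service names and certificate ARNs below are placeholders):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-domain-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-west-2:xxxxx:certificate/cert1,arn:aws:acm:us-west-2:xxxxx:certificate/cert2
spec:
  rules:
    - host: first-domain.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: first-service
                port:
                  number: 80
    - host: second-domain.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: second-service
                port:
                  number: 80
</code></pre>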
| pluk |
<p>I have pods running on different nodes. But when I execute the command
<code>curl -s checkip.dyndns.org</code>, I am getting the same public IP for all of them. So, is a pod's public IP different from the public IP of the node it is running on?</p>
<p>Also, when I execute the command <code>kubectl get nodes -o wide</code>, I get <code>EXTERNAL-IP</code> as <code><none></code> and there is only <code>INTERNAL-IP</code>.</p>
<p>I actually need the node's public IP address to access Kubernetes <strong>NodePort</strong> service.</p>
| Yashasvi Raj Pant | <p><code>...when I execute the command curl -s checkip.dyndns.org, I am getting the same public IP for all.</code></p>
<p>That's your NAT public IP.</p>
<p><code>I actually need the node's public IP address...</code></p>
<p>The node needs to run in a subnet that allows direct (no NAT) Internet access and has a public IP assigned. You can find this info in your cloud provider's console, or run <code>ip addr show</code> on the node to see all the IP(s) assigned to it.</p>
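<p>For example, the following should list each node with its ExternalIP, if one is assigned (the column stays empty for nodes without a public address):</p>
<pre><code>kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses[?(@.type=="ExternalIP")].address}{"\n"}{end}'
</code></pre>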
| gohm'c |
<p>Friends, I am learning here and trying to implement an init container which checks if MySQL is ready for connections. I have a pod running MySQL and another pod with an app which will connect to the MySQL pod when it's ready.</p>
<p>No success so far. The error I am getting is the following: <code>sh: mysql: not found</code>. This is how I am trying it:</p>
<pre><code>initContainers:
- name: {{ .Values.initContainers.name }}
image: {{ .Values.initContainers.image }}
command:
- "sh"
- "-c"
- "until mysql --host=mysql.default.svc.cluster.local --user={MYSQL_USER}
--password={MYSQL_PASSWORD} --execute=\"SELECT 1;\"; do echo waiting for mysql; sleep 2; done;"
</code></pre>
<p>Any idea how I could make this work?</p>
| marcelo | <p>Please try using this.</p>
<p>initContainers:</p>
<pre><code> - name: init-cont
image: busybox:1.31
command: ['sh', '-c', 'echo -e "Checking for the availability of MySQL Server deployment"; while ! nc -z mysql 3306; do sleep 1; printf "-"; done; echo -e " >> MySQL DB Server has started";']
</code></pre>
| Shivaramane |
<p>I have a specific requirement where I have a POD with 4 containers. Yes, 4 :-)
We are moving step by step towards a more containerized model. Please excuse me.
Now there is a requirement that container W needs to know whether containers X, Y and Z are up or down in real time. Is there a built-in feature available in K8s? Or should we use our own HTTP/TCP liveness checks?</p>
| Prince | <p>Your own HTTP/TCP liveness checks are workable because containers in the same pod can contact each other via <code>localhost</code>. Example if container x is listening to port 80 with a healthcheck path <code>/healthz</code>, your container w can do <code>curl -sI http://localhost/healthz -o /dev/null -w "%{http_code}"</code> to check for 200 OK response.</p>
| gohm'c |
<p>I've been running my ECK (Elastic Cloud on Kubernetes) cluster for a couple of weeks with no issues. However, 3 days ago filebeat stopped being able to connect to my ES service. All pods are up and running (Elastic, Beats and Kibana).</p>
<p>Also, shelling into filebeats pods and connecting to the Elasticsearch service works just fine:</p>
<pre class="lang-sh prettyprint-override"><code>curl -k -u "user:$PASSWORD" https://quickstart-es-http.quickstart.svc:9200
</code></pre>
<pre class="lang-json prettyprint-override"><code>{
"name" : "aegis-es-default-4",
"cluster_name" : "quickstart",
"cluster_uuid" : "",
"version" : {
"number" : "7.14.0",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "",
"build_date" : "",
"build_snapshot" : false,
"lucene_version" : "8.9.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
</code></pre>
<p>Yet the filebeats pod logs are producing the below error:</p>
<pre><code>ERROR
[publisher_pipeline_output] pipeline/output.go:154
Failed to connect to backoff(elasticsearch(https://quickstart-es-http.quickstart.svc:9200)):
Connection marked as failed because the onConnect callback failed: could not connect to a compatible version of Elasticsearch:
503 Service Unavailable:
{
"error": {
"root_cause": [
{ "type": "master_not_discovered_exception", "reason": null }
],
"type": "master_not_discovered_exception",
"reason": null
},
"status": 503
}
</code></pre>
<p>I haven't made any changes so I think it's a case of authentication or SSL certificates needing updating?</p>
<p>My filebeats config looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
name: quickstart
namespace: quickstart
spec:
type: filebeat
version: 7.14.0
elasticsearchRef:
name: quickstart
config:
filebeat:
modules:
- module: gcp
audit:
enabled: true
var.project_id: project_id
var.topic: topic_name
var.subcription: sub_name
var.credentials_file: /usr/certs/credentials_file
var.keep_original_message: false
vpcflow:
enabled: true
var.project_id: project_id
var.topic: topic_name
var.subscription_name: sub_name
var.credentials_file: /usr/certs/credentials_file
firewall:
enabled: true
var.project_id: project_id
var.topic: topic_name
var.subscription_name: sub_name
var.credentials_file: /usr/certs/credentials_file
daemonSet:
podTemplate:
spec:
serviceAccountName: filebeat
automountServiceAccountToken: true
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true
securityContext:
runAsUser: 0
containers:
- name: filebeat
volumeMounts:
- name: varlogcontainers
mountPath: /var/log/containers
- name: varlogpods
mountPath: /var/log/pods
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
- name: credentials
mountPath: /usr/certs
readOnly: true
volumes:
- name: varlogcontainers
hostPath:
path: /var/log/containers
- name: varlogpods
hostPath:
path: /var/log/pods
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: credentials
secret:
defaultMode: 420
items:
secretName: elastic-service-account
</code></pre>
<p>And it was working just fine - haven't made any changes to this config to make it lose access.</p>
| Lera | <p>I did a little more digging and found that there weren't enough resources to be able to assign a master node.</p>
<p>I got this when I tried to run <code>GET /_cat/master</code>, and it returned the same 503 "no master" error. I added a new node pool and it started running normally.</p>
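<p>For anyone checking the same symptom, roughly the same information can be pulled from the cluster APIs (reusing the credentials from the question):</p>
<pre><code>curl -k -u "user:$PASSWORD" https://quickstart-es-http.quickstart.svc:9200/_cat/master
curl -k -u "user:$PASSWORD" https://quickstart-es-http.quickstart.svc:9200/_cluster/health?pretty
</code></pre>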
| Lera |
<p>I have successfully installed Istio in k8 cluster.</p>
<ul>
<li><p>Istio version is 1.9.1</p>
</li>
<li><p>Kubernetes CNI plugin used: Calico version 3.18 (Calico POD is up and running)</p>
</li>
</ul>
<pre><code>kubectl get pod -A
istio-system istio-egressgateway-bd477794-8rnr6 1/1 Running 0 124m
istio-system istio-ingressgateway-79df7c789f-fjwf8 1/1 Running 0 124m
istio-system istiod-6dc55bbdd-89mlv 1/1 Running 0 124
</code></pre>
<p>When I try to deploy a sample nginx app, I get the error below:</p>
<pre><code>failed calling webhook sidecar-injector.istio.io context deadline exceeded
Post "https://istiod.istio-system.svc:443/inject?timeout=30s":
context deadline exceeded
</code></pre>
<p>When I disable automatic proxy sidecar injection, the pod is deployed without any errors.</p>
<pre><code>kubectl label namespace default istio-injection-
</code></pre>
<p>I am not sure how to fix this issue. Could someone please help me with this?</p>
| Gowmi | <p>In this case, adding <code>hostNetwork:true</code> under <code>spec.template.spec</code> to the <code>istiod</code> Deployment may help.
This seems to be a workaround when using Calico CNI for pod networking (see: <a href="https://github.com/istio/istio/issues/20890#issuecomment-711051999" rel="nofollow noreferrer">failed calling webhook "sidecar-injector.istio.io</a>)</p>
<p>As we can find in the <a href="https://kubernetes.io/docs/concepts/policy/pod-security-policy/#host-namespaces" rel="nofollow noreferrer">Kubernetes Host namespaces documentation</a>:</p>
<blockquote>
<p>HostNetwork - Controls whether the pod may use the node network namespace. Doing so gives the pod access to the loopback device, services listening on localhost, and could be used to snoop on network activity of other pods on the same node.</p>
</blockquote>
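<p>One way to apply that change (a sketch, assuming the Deployment is named <code>istiod</code> in the <code>istio-system</code> namespace, as in the output above) is a merge patch:</p>
<pre><code>kubectl -n istio-system patch deployment istiod \
  --type merge \
  -p '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'
</code></pre>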
| matt_j |
<p>MacOS Big Sur 11.6.8
minikube version: v1.28.0</p>
<p>Following several tutorials on ingress, I am attempting to get it working locally. Everything appears to work: a manual <code>minikube service foo</code> works, <code>kubectl get ingress</code> shows an IP, pinging the designated host name resolves the expected IP, etc. I went through a few different tutorials with the same results.</p>
<p>I boiled it down to the simplest replication from the tutorial at <a href="https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/" rel="nofollow noreferrer">kubernetes.io</a> :</p>
<pre><code># kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0
# kubectl expose deployment web --type=NodePort --port=8080
# kubectl get service web (ensure it's a node port)
# minikube service web --url (test url)
# kubectl apply -f ingress_hello_world.yaml
# curl localkube.com
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
rules:
- host: localkube.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: web
port:
number: 8080
</code></pre>
<p>Manual service works:</p>
<pre><code>>minikube service web --url
http://127.0.0.1:50111
❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
>curl http://127.0.0.1:50111
Hello, world!
Version: 1.0.0
Hostname: web-84fb9498c7-hnphb
</code></pre>
<p>Ingress looks good:</p>
<pre><code>>minikube addons list | grep ingress
| ingress | minikube | enabled ✅ | Kubernetes |
>kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
example-ingress nginx localkube.com 192.168.49.2 80 15m
</code></pre>
<p>ping resolves the address mapped in /etc/hosts:</p>
<pre><code>>ping localkube.com
PING localkube.com (192.168.49.2): 56 data bytes
</code></pre>
<p>I have looked through similar questions with no positive results. I have gone from this simple example to apache to mongo deployments via config files. Each time I can get to the app through a manual service mapping or by creating an external service (LoadBalancer / nodePort), but when I get to the Ingress part the config applies with no errors and everything appears to be working except for it actually... working.</p>
| Mike M | <p>Based on Veera's answer, I looked into the ingress issue with macOS and <code>minikube tunnel</code>. To save others the hassle, here is how I resolved the issue:</p>
<ol>
<li>ingress doesn't seem to work on macOS (the different pages say "with Docker", but I had the same outcome with other drivers like hyperkit).</li>
<li>the issue seems to be IP / networking related. You cannot get to the minikube IP from your local workstation. If you first run <code>minikube ssh</code> you can ping and curl the minikube IP and the domain name you mapped to that IP in /etc/hosts. However, this does not help when trying to access the service from a browser.</li>
<li>the solution is to map the domain names to 127.0.0.1 in /etc/hosts (instead of the ingress assigned IP) and use ingress components to control the domain-name -> service mappings as before...</li>
<li>then starting a tunnel with <code>sudo minikube tunnel</code> will keep a base tunnel open, and create tunneling for any existing or new ingress components. This combined with the ingress rules will mimic host header style connecting to any domain resolving to the local host.</li>
</ol>
<p>Here is a full example of a working solution on mac. Dump this to a file named ingress_hello_world.yaml and follow the commented instructions to achieve a simple ingress solution that routes 2 domains to 2 different services (note this will work with pretty much any internal service, and can be a ClusterIP instead of NodePort):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: example-ingress
spec:
ingressClassName: nginx
rules:
- host: test1.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: web
port:
number: 8080
- host: test2.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: web2
port:
number: 8080
# Instructions:
# start minikube if not already
# >minikube start --vm-driver=docker
#
# enable ingress if not already
# >minikube addons enable ingress
# >minikube addons list | grep "ingress "
# | ingress | minikube | enabled ✅ | Kubernetes |
#
# >kubectl create deployment web --image=gcr.io/google-samples/hello-app:1.0
# deployment.apps/web created
#
# >kubectl expose deployment web --type=NodePort --port=8080
# service/web exposed
#
# >kubectl create deployment web2 --image=gcr.io/google-samples/hello-app:2.0
# deployment.apps/web2 created
#
# >kubectl expose deployment web2 --port=8080 --type=NodePort
# service/web2 exposed
#
# >kubectl get service | grep web
# web NodePort 10.101.19.188 <none> 8080:31631/TCP 21m
# web2 NodePort 10.102.52.139 <none> 8080:30590/TCP 40s
#
# >minikube service web --url
# http://127.0.0.1:51813
# ❗ Because you are using a Docker driver on darwin, the terminal needs to be open to run it.
#
# ------ in another console ------
# >curl http://127.0.0.1:51813
# ^---- this must match the port from the output above
# Hello, world!
# Version: 1.0.0 <---- will show version 2.0.0 for web2
# Hostname: web-84fb9498c7-7bjtg
# --------------------------------
# ctrl+c to kill tunnel in original tab, repeat with web2 if desired
#
# ------ In another console ------
# >sudo minikube tunnel
# ✅ Tunnel successfully started
#
# (leave open, will show the following when you start an ingress component)
# Starting tunnel for service example-ingress.
# --------------------------------
#
# >kubectl apply -f ingress_hello_world.yaml
# ingress.networking.k8s.io/example-ingress created
#
# >kubectl get ingress example-ingress --watch
# NAME CLASS HOSTS ADDRESS PORTS AGE
# example-ingress nginx test1.com,test2.com 80 15s
# example-ingress nginx test1.com,test2.com 192.168.49.2 80 29s
# wait for this to be populated ----^
#
# >cat /etc/hosts | grep test
# 127.0.0.1 test1.com
# 127.0.0.1 test2.com
# ^---- set this to localhost ip
#
# >ping test1.com
# PING test1.com (127.0.0.1): 56 data bytes
#
# >curl test1.com
# Hello, world!
# Version: 1.0.0
# Hostname: web-84fb9498c7-w6bkc
#
# >curl test2.com
# Hello, world!
# Version: 2.0.0
# Hostname: web2-7df4dcf77b-66g5b
# ------- Cleanup:
# stop tunnel
#
# >kubectl delete -f ingress_hello_world.yaml
# ingress.networking.k8s.io "example-ingress" deleted
#
# >kubectl delete service web
# service "web" deleted
#
# >kubectl delete service web2
# service "web2" deleted
#
# >kubectl delete deployment web
# deployment.apps "web" deleted
#
# >kubectl delete deployment web2
# deployment.apps "web2" deleted
</code></pre>
| Mike M |
<p>I created a disk in an Azure k8s cluster (4 GB standard HDD).
I am using this code for the PV:</p>
<p><a href="https://pastebin.com/HysrzFyB" rel="nofollow noreferrer">Pv file</a></p>
<p>Then I am creating PVC:</p>
<p><a href="https://pastebin.com/r7T4KZEv" rel="nofollow noreferrer">PVC yaml</a></p>
<p>Then I attach my volume to the Pod:</p>
<p><a href="https://pastebin.com/z8MXNHXF" rel="nofollow noreferrer">Pod volume attache</a></p>
<p>But when I checked the status of my Pod, I got an error:</p>
<pre><code>root@k8s:/home/azureuser/k8s# kubectl get describe pods mypod
error: the server doesn't have a resource type "describe"
root@k8s:/home/azureuser/k8s# kubectl describe pods mypod
Name: mypod
Namespace: default
Priority: 0
Node: aks-agentpool-37412589-vmss000000/10.224.0.4
Start Time: Wed, 03 Aug 2022 10:34:45 +0000
Labels: <none>
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
mypod:
Container ID:
Image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Limits:
cpu: 250m
memory: 256Mi
Requests:
cpu: 100m
memory: 128Mi
Environment: <none>
Mounts:
/mnt/azure from azure (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nq9q2 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
azure:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: pvc-azuredisk
ReadOnly: false
kube-api-access-nq9q2:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m2s default-scheduler Successfully assigned default/mypod to aks-agentpool-36264904-vmss000000
Warning FailedAttachVolume 53s (x8 over 2m1s) attachdetach-controller AttachVolume.Attach failed for volume "pv-azuredisk" : rpc error: code = InvalidArgument desc = Volume capability not supported
</code></pre>
<p>Could you please advise how I can solve this issue: <code>Warning FailedAttachVolume 53s (x8 over 2m1s) attachdetach-controller AttachVolume.Attach failed for volume "pv-azuredisk" : rpc error: code = InvalidArgument desc = Volume capability not supported</code></p>
| Oleg | <blockquote>
<p>...Volume capability not supported</p>
</blockquote>
<p>Try update your PV:</p>
<pre><code>...
accessModes:
- ReadWriteOnce # <-- ReadWriteMany is not supported by disk.csi.azure.com
...
</code></pre>
<p>ReadWriteMany is supported by file.csi.azure.com (Azure Files).</p>
| gohm'c |
<p>I am using fluent-bit version 1.4.6 and I am trying to collect logs from a tomcat/logs folder, but I receive:</p>
<p><code>[error] [input:tail:tail.0] read error, check permissions</code></p>
<p>The files inside the logs folder are all "rw-r-----" (640).</p>
<p>I tried to confirm whether it can read it at all by changing the permissions of a file inside the logs folder and it works, but that does not solve the overall problem.</p>
<p>My question is: is this something that should be set at the Tomcat level, or can it be done via Fluent Bit? Can I start it as a different user?</p>
<p>Thanks in advance!</p>
| voidcraft | <p>Thanks for all the tips. I tried all of them and they work, but unfortunately on our deployments they do not, as we have some custom users.</p>
<p>What needed to be done was to set UMASK as an environment variable with a value of "111", which changes the permissions of the new log files so that they can be picked up by Fluent Bit.</p>
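<p>In a Kubernetes pod spec that amounts to something like the snippet below (the container name and image are only examples, and it assumes a standard Tomcat image whose <code>catalina.sh</code> honours the UMASK variable):</p>
<pre><code>containers:
  - name: tomcat            # example container name
    image: tomcat:9         # assumption: stock Tomcat image
    env:
      - name: UMASK
        value: "111"        # new log files are then created world-readable, so Fluent Bit can tail them
</code></pre>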
| voidcraft |
<p>I am trying to deploy a "Hello World" application on an EKS cluster I created using eksctl. I have the cluster running with 2 pods and am following a tutorial located at <a href="https://shahbhargav.medium.com/hello-world-on-kubernetes-cluster-6bec6f4b1bfd" rel="nofollow noreferrer">https://shahbhargav.medium.com/hello-world-on-kubernetes-cluster-6bec6f4b1bfd</a>. I created a deployment using the following yaml file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: hello-world-deployment
labels:
app: hello-world
spec:
selector:
matchLabels:
app: hello-world
replicas: 2
template:
metadata:
labels:
app: hello-world
spec:
containers:
- name: hello-world
image: bhargavshah86/kube-test:v0.1
ports:
- containerPort: 80
resources:
limits:
memory: 256Mi
cpu: "250m"
requests:
memory: 128Mi
cpu: "80m"
---
apiVersion: v1
kind: Service
metadata:
name: hello-world
spec:
selector:
app: hello-world
ports:
- protocol: TCP
port: 80
targetPort: 80
nodePort: 30081
type: NodePort
</code></pre>
<p>I then created the deploying by running the following command:</p>
<pre><code>kubectl create -f hello-world.yaml
</code></pre>
<p>I am unable to access it on localhost. I believe I am missing a step, because I created the cluster and deployment on a Linux EC2 instance that I SSH into with PuTTY, while I am accessing localhost on my Windows machine. Any advice on how I can connect would be appreciated. Currently I am getting the following when trying to connect to http://localhost:30081/</p>
<p><a href="https://i.stack.imgur.com/gC1MR.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gC1MR.png" alt="enter image description here" /></a></p>
| Dave Michaels | <p>As you mention, the problem is that you are trying to access your local machine at port 30081, but the pods you created are in your EKS cluster in the cloud. If you want to verify that the application is working, you can SSH into the worker node as you have done and use the <a href="https://linux.die.net/man/1/curl" rel="nofollow noreferrer">curl</a> command like this.</p>
<pre class="lang-sh prettyprint-override"><code>curl localhost:30081
</code></pre>
<p>That command is going to return the website you have running, printed to the console (without any formatting).</p>
<p>I think that in your case the best course will be to use the <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/#forward-a-local-port-to-a-port-on-the-pod" rel="nofollow noreferrer">kubectl port-forward</a> command. This command binds one of your local machine's ports to one of the pod's ports.</p>
<p>Here is the format of the command.</p>
<pre class="lang-sh prettyprint-override"><code>kubectl port-forward POD-NAME-CURRENTLY-RUNNING-IN-CLUSTER UNUSED-PORT-IN-YOUR-PC:APPLICATION-PORT
</code></pre>
<p>Here is an example of how to use it and to check out that's working.</p>
<pre class="lang-sh prettyprint-override"><code>kubectl port-forward hello-world-xxxxxx-xxxxx 8000:80
curl localhost:8000
</code></pre>
<p>Notice here that I am not using the service port to access the pod. This tool is great for debugging!</p>
<p>Another approach could be to open port 30081 with a security group and hit the IPs of the worker nodes, but I think that's insecure and also involves a lot of extra steps. You should check out the differences between the types of <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">services</a>.</p>
<p>Let me know if you have any doubts about my answer. I am not an expert and I could be wrong!</p>
<p>Also English is not my first language.</p>
<p>Cheers</p>
| Manuel Chichi |
<p>I am trying to install the same chart two times in the same cluster in two different namespaces. However I am getting this error:</p>
<pre><code>Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "nfs-provisioner" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "namespace2": current value is "namespace1"
</code></pre>
<p>As I understood it, cluster roles are supposed to be independent of the namespace, so I find this contradictory. We are using Helm 3.</p>
| KilyenOrs | <p>I decided to provide a Community Wiki answer that may help other people facing a similar issue.<br />
I assume you want to install the same chart multiple times but get the following error:</p>
<pre><code>Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "<CLUSTERROLE_NAME>" in namespace "" exists and cannot be imported into the current release: ...
</code></pre>
<p><br>First, it's important to decide if we really need <code>ClusterRole</code> instead of <code>Role</code>.
As we can find in the <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#role-and-clusterrole" rel="noreferrer">Role and ClusterRole documentation</a>:</p>
<blockquote>
<p>If you want to define a role within a namespace, use a Role; if you want to define a role cluster-wide, use a ClusterRole.</p>
</blockquote>
<p>Second, we can use the variable name for <code>ClusterRole</code> instead of hard-coding the name in the template:</p>
<p>For example, instead of:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: clusterrole-1
...
</code></pre>
<p>Try to use something like:</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ .Values.clusterrole.name }}
...
</code></pre>
<p>Third, we can use the <code>lookup</code> function and the <code>if</code> control structure to skip creating resources if they already exist.</p>
<p>Take a look at a simple example:</p>
<pre><code>$ cat clusterrole-demo/values.yaml
clusterrole:
name: clusterrole-1
$ cat clusterrole-demo/templates/clusterrole.yaml
{{- if not (lookup "rbac.authorization.k8s.io/v1" "ClusterRole" "" .Values.clusterrole.name) }}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: {{ .Values.clusterrole.name }}
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- list
- watch
{{- end }}
</code></pre>
<p>In the example above, if <code>ClusterRole</code> <code>clusterrole-1</code> already exits, it won’t be created.</p>
| matt_j |
<p>I have a GKE cluster with two nodepools. I turned on autoscaling on one of my nodepools but it does not seem to automatically scale down.</p>
<p><a href="https://i.stack.imgur.com/V5cVN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/V5cVN.png" alt="autoscaling enabled" /></a></p>
<p>I have enabled HPA and that works fine. It scales the pods down to 1 when I don't see traffic.</p>
<p>The API is currently not getting any traffic so I would expect the nodes to scale down as well.</p>
<p>But it still runs the maximum 5 nodes despite some nodes using less than 50% of allocatable memory/CPU.</p>
<p><a href="https://i.stack.imgur.com/heVKV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/heVKV.png" alt="5 nodes" /></a></p>
<p>What did I miss here? I am planning to move these pods to bigger machines but to do that I need the node autoscaling to work to control the monthly cost.</p>
| Johan Wikström | <p>There are many reasons why CA may not be scaling down successfully. To summarize how this should normally work:</p>
<ul>
<li>Cluster Autoscaler periodically checks (every 10 seconds) the utilization of the nodes.</li>
<li>If the utilization factor is less than 0.5, the node is considered under-utilized.</li>
<li>The node is then marked for removal and monitored for the next 10 minutes to make sure the utilization factor stays below 0.5.</li>
<li>If it is still under-utilized after 10 minutes, the node is removed by Cluster Autoscaler.</li>
</ul>
<p>If the above is not happening, then something else is preventing your nodes from scaling down. In my experience, PDBs need to be applied to kube-system pods and I would say that could be the reason; however, there are many possible causes. Here are configurations that can cause downscaling issues:</p>
<p><strong>1. PDB is not applied to your kube-system pods.</strong> Kube-system pods prevent Cluster Autoscaler from removing nodes on which they are running. You can manually add a Pod Disruption Budget (PDB) for the kube-system pods that can be safely rescheduled elsewhere; this can be added with the following command:</p>
<pre><code>`kubectl create poddisruptionbudget PDB-NAME --namespace=kube-system --selector app=APP-NAME --max-unavailable 1`
</code></pre>
<p><strong>2. Containers using local storage (volumes), even empty volumes.</strong> Kubernetes prevents scale-down events on nodes with pods using local storage. Look for this kind of configuration, which prevents Cluster Autoscaler from scaling down nodes.</p>
<p><strong>3. Pods annotated with <code>cluster-autoscaler.kubernetes.io/safe-to-evict: "false"</code>.</strong> Look for pods with this annotation, which prevents Cluster Autoscaler from evicting them and therefore from scaling the node down.</p>
<p><strong>4. Nodes annotated with <code>cluster-autoscaler.kubernetes.io/scale-down-disabled: true</code>.</strong> Look for nodes with this annotation that may be preventing Cluster Autoscaler from scaling down. These are the configurations I suggest you check in order to make your cluster scale down under-utilized nodes; a quick way to spot them is sketched below.</p>
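<p>A quick way to look for those annotations (a sketch; the dots inside the annotation keys have to be escaped in the custom-columns expressions):</p>
<pre><code># nodes with scale-down disabled
kubectl get nodes -o custom-columns='NAME:.metadata.name,SCALE-DOWN-DISABLED:.metadata.annotations.cluster-autoscaler\.kubernetes\.io/scale-down-disabled'

# pods and their safe-to-evict annotation (if any)
kubectl get pods -A -o custom-columns='NAMESPACE:.metadata.namespace,NAME:.metadata.name,SAFE-TO-EVICT:.metadata.annotations.cluster-autoscaler\.kubernetes\.io/safe-to-evict'
</code></pre>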
<p>You can also see <a href="https://medium.com/google-cloud/calming-down-kubernetes-autoscaler-fbdba52adba6" rel="nofollow noreferrer">this</a> page, which explains the configurations that prevent downscaling; one of them may be what is happening to you.</p>
| Marco P. |
<p>I'm trying to create a cluster via eksctl, using the default options and an IAM user with "AdministratorAccess", and I get stuck at "waiting for CloudFormation stack".</p>
<pre><code> > eksctl create cluster --name dev
[ℹ] eksctl version 0.36.0
[ℹ] using region us-west-2
[ℹ] setting availability zones to [us-west-2a us-west-2c us-west-2b]
[ℹ] subnets for us-west-2a - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ] subnets for us-west-2c - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ] subnets for us-west-2b - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ] nodegroup "ng-fa4af514" will use "ami-0532808ed453f9ca3" [AmazonLinux2/1.18]
[ℹ] using Kubernetes version 1.18
[ℹ] creating EKS cluster "dev" in "us-west-2" region with un-managed nodes
[ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=dev'
[ℹ] CloudWatch logging will not be enabled for cluster "dev" in "us-west-2"
[ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-west-2 --cluster=dev'
[ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "dev" in "us-west-2"
[ℹ] 2 sequential tasks: { create cluster control plane "dev", 3 sequential sub-tasks: { no tasks, create addons, create nodegroup "ng-fa4af514" } }
[ℹ] building cluster stack "eksctl-dev-cluster"
[ℹ] deploying stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
[ℹ] waiting for CloudFormation stack "eksctl-dev-cluster"
</code></pre>
<p>I have tried different regions, ran into the same issue.</p>
| Deano | <p>It takes almost 20 minutes to create the stacks in CloudFormation. When you create the cluster, check the progress of the stack in the CloudFormation console: <a href="https://console.aws.amazon.com/cloudformation/home" rel="noreferrer">https://console.aws.amazon.com/cloudformation/home</a>.</p>
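<p>If you prefer the CLI, roughly the same information can be pulled with the AWS CLI (the stack name and region are taken from the output above):</p>
<pre><code>aws cloudformation describe-stacks \
  --stack-name eksctl-dev-cluster \
  --region us-west-2 \
  --query 'Stacks[0].StackStatus'

aws cloudformation describe-stack-events \
  --stack-name eksctl-dev-cluster \
  --region us-west-2 \
  --max-items 5
</code></pre>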
| Kiran Reddy |
<p>I am having trouble upgrading our CLB to an NLB. I did a manual upgrade via the wizard in the console, but the connectivity wouldn't work. This upgrade is needed so we can use static IPs on the load balancer. I think it needs to be upgraded through Kubernetes, but my attempts failed.</p>
<p>What I (think I) understand about this setup is that this load balancer was set up using Helm. What I also understand is that the ingress controller is responsible for redirecting HTTP requests to HTTPS, and that this LB is working on layer 4.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: nginx-ingress
chart: nginx-ingress-1.30.0
component: controller
heritage: Tiller
release: nginx-ingress-external
name: nginx-ingress-external-controller
namespace: kube-system
selfLink: /api/v1/namespaces/kube-system/services/nginx-ingress-external-controller
spec:
clusterIP: 172.20.41.16
externalTrafficPolicy: Cluster
ports:
- name: http
nodePort: 30854
port: 80
protocol: TCP
targetPort: http
- name: https
nodePort: 30621
port: 443
protocol: TCP
targetPort: https
selector:
app: nginx-ingress
component: controller
release: nginx-ingress-external
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- hostname: xxx.region.elb.amazonaws.com
</code></pre>
<p>How would I be able to perform the upgrade by modifying this configuration file?</p>
| aardbol | <p>As <strong>@Jonas</strong> pointed out in the comments section, creating a new <code>LoadBalancer</code> <code>Service</code> with the same selector as the existing one is probably the fastest and easiest method. As a result we will have two <code>LoadBalancer</code> <code>Services</code> using the same <code>ingress-controller</code>.</p>
<p>You can see in the following snippet that I have two <code>Services</code> (<code>ingress-nginx-1-controller</code> and <code>ingress-nginx-2-controller</code>) with exactly the same endpoint:</p>
<pre><code>$ kubectl get pod -o wide ingress-nginx-1-controller-5856bddb98-hb865
NAME READY STATUS RESTARTS AGE IP
ingress-nginx-1-controller-5856bddb98-hb865 1/1 Running 0 55m 10.36.2.8
$ kubectl get svc ingress-nginx-1-controller ingress-nginx-2-controller
NAME TYPE CLUSTER-IP EXTERNAL-IP
ingress-nginx-1-controller LoadBalancer 10.40.15.230 <PUBLIC_IP>
ingress-nginx-2-controller LoadBalancer 10.40.11.221 <PUBLIC_IP>
$ kubectl get endpoints ingress-nginx-1-controller ingress-nginx-2-controller
NAME ENDPOINTS AGE
ingress-nginx-1-controller 10.36.2.8:443,10.36.2.8:80 39m
ingress-nginx-2-controller 10.36.2.8:443,10.36.2.8:80 11m
</code></pre>
<p>Additionally, to avoid downtime, we can first change the DNS records to point at the new <code>LoadBalancer</code>, and after the propagation time we can safely delete the old <code>LoadBalancer</code> <code>Service</code>.</p>
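<p>For completeness, a rough sketch of what the second Service could look like if the goal is an NLB with static IPs (the Service name, annotations and EIP allocation IDs are assumptions; the selector and ports are copied from the existing Service):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-external-controller-nlb
  namespace: kube-system
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    # optional, for static IPs; one EIP allocation per public subnet/AZ
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-xxxxxxxx,eipalloc-yyyyyyyy"
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app: nginx-ingress
    component: controller
    release: nginx-ingress-external
</code></pre>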
| matt_j |
<p>I am trying to delete a namespace, but it is stuck in the Terminating state. I tried removing the finalizer and applying a replace, but did not succeed. Below are the steps and the error:</p>
<pre><code>[root@~]# kubectl replace "/api/v1/namespaces/service-catalog/finalize" -f n.json
namespace/service-catalog replaced
[root@~]#
[root@~]#
[root@~]# k get ns service-catalog
NAME STATUS AGE
service-catalog Terminating 6d21h
[root@~]# k delete ns service-catalog
Error from server (Conflict): Operation cannot be fulfilled on namespaces "service-catalog": The system is ensuring all content is removed from this namespace. Upon completion, this namespace will automatically be purged by the system.
</code></pre>
<p>In the namespace I had created a few CRD objects, and my best guess is that those are what is preventing the deletion. Right now I cannot remember all the CRD objects that I created.</p>
<p>Is there a way I can query all the objects with the <code>finalizer: service-catalog</code>?</p>
| prashant | <p>I was looking for all the finalizers that were used in our cluster and this worked for me. It checks all types of objects in all namespaces and returns their finalizers -- you can probably use awk and grep to filter it for what you're looking for.</p>
<p><code>kubectl get all -o custom-columns=Kind:.kind,Name:.metadata.name,Finalizers:.metadata.finalizers --all-namespaces </code></p>
<p>Note that this doesn't return cluster-scoped resources.</p>
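<p>To list every namespaced resource still left in the stuck namespace (including CRD-backed ones that <code>kubectl get all</code> misses), something along these lines usually works:</p>
<pre><code>kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n service-catalog
</code></pre>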
| Brando |
<p>I am trying to create a Kubernetes Ingress object with the kubernetes_manifest terraform resource. It is throwing the following error:</p>
<pre><code>│ Error: Failed to morph manifest to OAPI type
│
│ with module.services.module.portal.module.appmesh.kubernetes_manifest.service_ingress_object,
│ on .terraform/modules/services.portal.appmesh/kubernetes_manifest.tf line 104, in resource "kubernetes_manifest" "service_ingress_object":
│ 104: resource "kubernetes_manifest" "service_ingress_object" {
│
│ AttributeName("spec"): [AttributeName("spec")] failed to morph object element into object element: AttributeName("spec").AttributeName("rules"): [AttributeName("spec").AttributeName("rules")] failed to
│ morph object element into object element: AttributeName("spec").AttributeName("rules"): [AttributeName("spec").AttributeName("rules")] unsupported morph of object value into type:
│ tftypes.List[tftypes.Object["host":tftypes.String, "http":tftypes.Object["paths":tftypes.List[tftypes.Object["backend":tftypes.Object["resource":tftypes.Object["apiGroup":tftypes.String,
│ "kind":tftypes.String, "name":tftypes.String], "serviceName":tftypes.String, "servicePort":tftypes.DynamicPseudoType], "path":tftypes.String, "pathType":tftypes.String]]]]]
</code></pre>
<p>My code is:</p>
<pre><code>resource "kubernetes_manifest" "service_ingress_object" {
manifest = {
"apiVersion" = "networking.k8s.io/v1beta1"
"kind" = "Ingress"
"metadata" = {
"name" = "${var.service_name}-ingress"
"namespace" = "${var.kubernetes_namespace}"
"annotations" = {
"alb.ingress.kubernetes.io/actions.ssl-redirect" = "{'Type': 'redirect', 'RedirectConfig': { 'Protocol': 'HTTPS', 'Port': '443', 'StatusCode': 'HTTP_301'}}"
"alb.ingress.kubernetes.io/listen-ports" = "[{'HTTP': 80}, {'HTTPS':443}]"
"alb.ingress.kubernetes.io/certificate-arn" = "${data.aws_acm_certificate.enivronment_default_issued.arn}"
"alb.ingress.kubernetes.io/scheme" = "internal"
"alb.ingress.kubernetes.io/target-type" = "instance"
"kubernetes.io/ingress.class" = "alb"
}
}
"spec" = {
"rules" = {
"host" = "${aws_route53_record.service_dns.fqdn}"
"http" = {
"paths" = {
"backend" = {
"serviceName" = "${var.service_name}-svc"
"servicePort" = "${var.service_port}"
}
"path" = "/*"
}
}
}
}
}
}
</code></pre>
<p>I have tried adding brackets to the "spec" field, however when I do that, I just the following error:</p>
<pre><code>│ Error: Missing item separator
│
│ on .terraform/modules/services.portal.appmesh/kubernetes_manifest.tf line 121, in resource "kubernetes_manifest" "service_ingress_object":
│ 120: "spec" = {[
│ 121: "rules" = {
│
│ Expected a comma to mark the beginning of the next item.
</code></pre>
<p>Once I get that error, I have tried adding commas under "spec". It just continuously throws the same error after this.</p>
| sd-gallowaystorm | <p>I figured it out. The error shows that <code>rules</code> and <code>paths</code> are list types (<code>tftypes.List[...]</code>), so each of them has to be written as a list of objects, i.e. <code>[{ ... }]</code> instead of <code>{ ... }</code>. The code now looks like this:</p>
<pre><code>resource "kubernetes_manifest" "service_ingress_object" {
manifest = {
"apiVersion" = "networking.k8s.io/v1beta1"
"kind" = "Ingress"
"metadata" = {
"name" = "${var.service_name}-ingress"
"namespace" = "${var.kubernetes_namespace}"
"annotations" = {
"alb.ingress.kubernetes.io/actions.ssl-redirect" = "{'Type': 'redirect', 'RedirectConfig': { 'Protocol': 'HTTPS', 'Port': '443', 'StatusCode': 'HTTP_301'}}"
"alb.ingress.kubernetes.io/listen-ports" = "[{'HTTP': 80}, {'HTTPS':443}]"
"alb.ingress.kubernetes.io/certificate-arn" = "${data.aws_acm_certificate.enivronment_default_issued.arn}"
"alb.ingress.kubernetes.io/scheme" = "internal"
"alb.ingress.kubernetes.io/target-type" = "instance"
"kubernetes.io/ingress.class" = "alb"
}
}
"spec" = {
"rules" = [{
"host" = "${aws_route53_record.service_dns.fqdn}"
"http" = {
"paths" = [{
"backend" = {
"serviceName" = "${var.service_name}-svc"
"servicePort" = "${var.service_port}"
}
"path" = "/*"
}]
}
}]
}
}
}
</code></pre>
| sd-gallowaystorm |
<p>I have the following job definition</p>
<pre><code> - uses: actions/checkout@v2
- uses: azure/login@v1
with:
creds: ${{ secrets.BETA_AZURE_CREDENTIALS }}
- uses: azure/docker-login@v1
with:
login-server: ${{ secrets.BETA_ACR_SERVER }}
username: ${{ secrets.BETA_ACR_USERNAME }}
password: ${{ secrets.BETA_ACR_PASSWORD }}
- run: docker build -f .ops/account.dockerfile -t ${{ secrets.BETA_ACR_SERVER }}/account:${{ github.sha }} -t ${{ secrets.BETA_ACR_SERVER }}/account:latest .
working-directory: ./Services
- run: docker push ${{ secrets.BETA_ACR_SERVER }}/account:${{ github.sha }}
- uses: azure/[email protected]
- uses: azure/[email protected]
with:
resource-group: ${{ secrets.BETA_RESOURCE_GROUP }}
cluster-name: ${{ secrets.BETA_AKS_CLUSTER }}
- run: kubectl -n pltfrmd set image deployments/account account=${{ secrets.BETA_ACR_SERVER }}/account:${{ github.sha }}
</code></pre>
<p>The docker function works fine and it pushes to ACR without issue.</p>
<p>But then, even though the aks-set-context step works, the <code>kubectl</code> command in the last step doesn't execute and just hangs waiting for an interactive login prompt.</p>
<p>What am I doing wrong? How do I get Github actions to let me execute the kubectl command properly?</p>
| James Hancock | <p>Setting <code>admin</code> to <code>true</code> worked for me. This is typically needed when the AKS cluster has AAD integration enabled: the regular user kubeconfig triggers an interactive login, while <code>admin: true</code> makes the context step fetch the cluster-admin credentials instead, so <code>kubectl</code> no longer prompts.</p>
<pre><code>- uses: azure/[email protected]
with:
resource-group: ${{ secrets.BETA_RESOURCE_GROUP }}
cluster-name: ${{ secrets.BETA_AKS_CLUSTER }}
admin: true
</code></pre>
| Mudassir Syed |
<p>How can I make <a href="https://github.com/alexellis/arkade" rel="nofollow noreferrer">arkade</a> create a service of <code>type: LoadBalancer</code> instead of <code>type: ClusterIP</code>?</p>
<p>I stumbled upon that requirement while deploying my private <code>docker-registry</code>. Logging in, pushing and pulling images from the command line, all runs fine, but once I want to use that registry, I need a point of reference which I state as the image in my Deployment definition:</p>
<pre><code>...
containers:
- name: pairstorer
image: 192.168.x.x:5000/myimage:1.0.0
ports:
- containerPort: 55555
...
</code></pre>
<p>If I install the registry using <code>arkade install docker-registry</code>, I don't see any options for obtaining an external IP other than <code>kubectl edit service docker-registry</code> and adding it by myself.</p>
| Michael | <p>If all you want is just a change of <code>Service</code> type from <code>ClusterIP</code> to <code>LoadBalancer</code>, you need to override default value for <code>docker-registry</code> Helm Chart.</p>
<p><strong>Arkade</strong> uses the <code>stable/docker-registry</code> Helm chart to install <code>docker-registry</code> and you can find default values <a href="https://github.com/helm/charts/blob/master/stable/docker-registry/values.yaml" rel="nofollow noreferrer">here</a>.</p>
<p>You need to change <code>service.type=ClusterIP</code> to <code>service.type=LoadBalancer</code> (additionally you may need to edit more values e.g. port number from default <code>5000</code>):</p>
<pre><code>$ arkade install docker-registry --set service.type=LoadBalancer
</code></pre>
<p>To change port add <code>--set service.port=<PORT_NUMBER></code> to the above command.</p>
<p>We can check the type of the <code>docker-registry</code> <code>Service</code>:</p>
<pre><code>$ kubectl get svc docker-registry
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
docker-registry LoadBalancer 10.0.14.107 <PUBLIC_IP> 5000:32665/TCP 29m
</code></pre>
<hr />
<p>In addition, you may be interested in TLS-enabled Docker registry as described in this <a href="https://blog.alexellis.io/get-a-tls-enabled-docker-registry-in-5-minutes/" rel="nofollow noreferrer">tutorial</a>. I recommend you to use this approach.</p>
| matt_j |
<p>Is it possible to have more than one role bind to a single service account?</p>
| codeogeek | <p>It is possible to have multiple <code>RoleBindings</code> / <code>ClusterRoleBindings</code> that link <code>Roles</code> / <code>ClusterRoles</code> to a single <code>ServiceAccount</code>.</p>
<p>You should remember that permissions are <strong>additive</strong> - you can add another <code>Role</code> / <code>ClusterRole</code> to <code>ServiceAccount</code> to extend its permissions.</p>
<hr />
<p>I've created simple example to illustrate you how it works.</p>
<p>First I created <code>red</code> and <code>blue</code> <code>Namespaces</code> and <code>test-sa</code> <code>ServiceAccount</code>:</p>
<pre><code>$ kubectl create namespace red
namespace/red created
$ kubectl create namespace blue
namespace/blue created
$ kubectl create serviceaccount test-sa
serviceaccount/test-sa created
</code></pre>
<p>By default, the newly created <code>test-sa</code> <code>ServiceAccount</code> doesn't have any permissions:</p>
<pre><code>$ kubectl auth can-i get pod -n red --as=system:serviceaccount:default:test-sa
no
$ kubectl auth can-i get pod -n blue --as=system:serviceaccount:default:test-sa
no
$ kubectl auth can-i get pod -n default --as=system:serviceaccount:default:test-sa
no
</code></pre>
<p>Next I created two <code>pod-reader-red</code> and <code>pod-reader-blue</code> <code>Roles</code> that have permissions to get <code>Pods</code> in <code>red</code> and <code>blue</code> <code>Namespaces</code> accordingly:</p>
<pre><code>$ kubectl create role pod-reader-red --verb=get --resource=pods -n red
role.rbac.authorization.k8s.io/pod-reader-red created
$ kubectl create role pod-reader-blue --verb=get --resource=pods -n blue
role.rbac.authorization.k8s.io/pod-reader-blue created
</code></pre>
<p>And then using <code>RoleBinding</code> I linked this <code>Roles</code> to <code>test-sa</code> <code>ServiceAccount</code>:</p>
<pre><code>$ kubectl create rolebinding pod-reader-red --role=pod-reader-red --serviceaccount=default:test-sa -n red
rolebinding.rbac.authorization.k8s.io/pod-reader-red created
$ kubectl create rolebinding pod-reader-blue --role=pod-reader-blue --serviceaccount=default:test-sa -n blue
rolebinding.rbac.authorization.k8s.io/pod-reader-blue created
</code></pre>
<p>Finally we can check that <code>test-sa</code> <code>ServiceAccount</code> has permission to get <code>Pods</code> in <code>red</code> <code>Namespace</code> using <code>pod-reader-red</code> <code>Role</code>
and <code>test-sa</code> <code>ServiceAccount</code> has permission to get <code>Pods</code> in <code>blue</code> <code>Namespace</code> using <code>pod-reader-blue</code> <code>Role</code>:</p>
<pre><code>$ kubectl auth can-i get pod -n red --as=system:serviceaccount:default:test-sa
yes
$ kubectl auth can-i get pod -n blue --as=system:serviceaccount:default:test-sa
yes
$ kubectl auth can-i get pod -n default --as=system:serviceaccount:default:test-sa
no
</code></pre>
| matt_j |
<p>I am unable to install helm locally on linux based aws server, i checked on helm docs and found below steps but got an error, need assistance</p>
<pre><code>curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
</code></pre>
<p><a href="https://i.stack.imgur.com/vOruo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vOruo.png" alt="error" /></a>
<a href="https://i.stack.imgur.com/3LP4j.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3LP4j.png" alt="error2" /></a></p>
| Salman | <p>The <code>apt-transport-https</code> package is needed; see <a href="https://helm.sh/docs/intro/install/#from-apt-debianubuntu" rel="nofollow noreferrer">here</a> for the Debian/Ubuntu installation guide.</p>
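<p>For reference, a sketch of the apt-based install from the linked guide (double-check the commands against the current Helm docs before running them):</p>
<pre><code># add the Helm apt repository and install helm
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm
</code></pre>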
| gohm'c |
<p>I have a problem with my ingress. In my k8s cluster there are two microservices, chat and notification, which, in addition to exposing APIs, expose a socket.io socket and a plain websocket respectively.</p>
<p>Chat</p>
<pre><code>const {Server} = require("socket.io");
const io = new Server(server);
io.on('connection', (socket) => {
console.log('New connection started', socket.id);
});
</code></pre>
<p>Notification</p>
<pre><code>const WebSocket = require('ws');
const wss = new WebSocket.Server({server});
wss.on('connection', (ws) => {
console.log('Client connected', ws);
});
</code></pre>
<p>Each microservice is deployed with associated service on my k8s cluster.</p>
<p>But now I can't establish socket and web-socket connections through my ingress, here is my config:</p>
<pre><code>kind: Ingress
metadata:
name: test-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/use-regex: 'true'
nginx.ingress.kubernetes.io/proxy-body-size: 64m
nginx.ingress.kubernetes.io/proxy-read-timeout: '3600'
nginx.ingress.kubernetes.io/proxy-send-timeout: '3600'
nginx.ingress.kubernetes.io/proxy-connect-timeout: '3600'
spec:
rules:
- host: demo.local.home
http:
paths:
- path: /ct(/|$)(.*)
pathType: Prefix
backend:
service:
name: ct-svc
port:
number: 8110
- path: /
pathType: Prefix
backend:
service:
name: ct-svc
port:
number: 8210
- path: /nt(/|$)(.*)
pathType: Prefix
backend:
service:
name: ntf-svc
port:
number: 8111
- path: /demo
pathType: Prefix
backend:
service:
name: dm-svc
port:
number: 8290
</code></pre>
<p>The services respond correctly via the REST API, but I can't connect to the socket.io socket or the websocket. Do you have any advice to make my configuration work?</p>
| HittingMyCode | <p>I finally managed to solve the problem, here is the final configuration:</p>
<pre><code>kind: Ingress
metadata:
name: test-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/use-regex: 'true'
nginx.ingress.kubernetes.io/proxy-body-size: 64m
nginx.ingress.kubernetes.io/proxy-read-timeout: '3600'
nginx.ingress.kubernetes.io/proxy-send-timeout: '3600'
nginx.ingress.kubernetes.io/proxy-connect-timeout: '3600'
spec:
rules:
- host: demo.local.home
http:
paths:
- path: /ct(/|$)(.*)
pathType: Prefix
backend:
service:
name: ct-svc
port:
number: 8110
- path: /nt(/|$)(.*)
pathType: Prefix
backend:
service:
name: ntf-svc
port:
number: 8111
- path: /demo
pathType: Prefix
backend:
service:
name: dm-svc
port:
number: 8290
</code></pre>
<p>with the following endpoints (a client-side connection sketch follows the list):</p>
<ul>
<li>socket.io host: <a href="http://demo.local.home/ct/" rel="nofollow noreferrer">http://demo.local.home/ct/</a></li>
<li>websocket host: ws://demo.local.home/nt/</li>
</ul>
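<p>For completeness, a client-side sketch of my own (not part of the original answer); it assumes socket.io's default <code>/socket.io</code> path on the backend, which the <code>/ct</code> rewrite maps to:</p>
<pre><code>// socket.io client: keep the /ct prefix in the path option,
// the ingress rewrite strips it before the request reaches the service
const { io } = require('socket.io-client');
const socket = io('http://demo.local.home', { path: '/ct/socket.io' });

// plain websocket client (ws package), routed through the /nt prefix
const WebSocket = require('ws');
const ws = new WebSocket('ws://demo.local.home/nt/');
</code></pre>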
| HittingMyCode |
<p>I have a Testcafe script (<code>script.js</code>). I want to run this on a Chrome browser, but in the headless mode. So, I use the following command.</p>
<pre><code>testcafe chrome:headless script.js
</code></pre>
<p>This works fine. But, now I wish to Dockerize this and run it in a container. The purpose is to get this running in Kubernetes. How can I achieve this?</p>
<p>I see the <a href="https://hub.docker.com/r/testcafe/testcafe/" rel="nofollow noreferrer">Testcafe Docker image</a>, but this is just to run a Testcafe instance. It does not cater to my requirement of running this script in a Chrome Headless in a container.</p>
<p>(<a href="https://stackoverflow.com/questions/59776886/is-there-a-way-to-run-non-headless-browser-in-testcafe-docker">This question</a> is different from what I am asking)</p>
| Keet Sugathadasa | <p>As you can see in the <a href="https://github.com/DevExpress/testcafe/blob/master/docker/Dockerfile" rel="nofollow noreferrer">Dockerfile</a>, the <code>testcafe/testcafe</code> Docker image is based on the Alpine Linux image. It doesn't contain the <code>Chrome</code> browser, but you can run your tests using the <code>Chromium</code> browser.
More information can be found in this <a href="https://devexpress.github.io/testcafe/documentation/guides/advanced-guides/use-testcafe-docker-image.html" rel="nofollow noreferrer">documentation</a></p>
<blockquote>
<p>TestCafe provides a preconfigured Docker image with Chromium and Firefox installed.</p>
</blockquote>
<hr />
<h3>Docker</h3>
<p>I've created a simple example for you to illustrate how it works.
On my local machine I have a <code>tests</code> directory that contains one simple test script, <code>script.js</code>:<br></p>
<pre><code>root@server1:~# cat /tests/script.js
import { Selector } from 'testcafe';
fixture `First test`
.page `http://google.com`;
test('Test 1', async t => {
// Test code
});
</code></pre>
<p>I'm able to run this test script in container using below command:<br></p>
<pre><code>root@server1:~# docker run -v /tests:/tests -it testcafe/testcafe chromium:headless /tests/script.js
Running tests in:
- Chrome 86.0.4240.111 / Linux 0.0
First test
✓ Test 1
1 passed (1s)
</code></pre>
<h3>Kubernetes</h3>
<p>In addition you may want to run some tests in Kubernetes using for example <code>Jobs</code>.</p>
<p>I created <code>Dockerfile</code> based on <code>testcafe/testcafe</code> image that copies my test script to appropriate location and then built an image from this <code>Dockerfile</code>:<br></p>
<pre><code>FROM testcafe/testcafe
...
COPY tests/script.js /tests/script.js
...
</code></pre>
<p>Finally, I created <code>Job</code> using above image (it can be <code>CronJob</code> as well):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: simple-test
spec:
backoffLimit: 3
template:
spec:
containers:
- image: <IMAGE_NAME>
name: simple-test
args: [ "chromium:headless", "/tests/script.js" ]
restartPolicy: Never
</code></pre>
<p>As we can see <code>Job</code> successfully completed:<br></p>
<pre><code>$ kubectl get job,pod
NAME COMPLETIONS DURATION AGE
job.batch/simple-test 1/1 18s 14m
NAME READY STATUS RESTARTS AGE
pod/simple-test-w72g2 0/1 Completed 0 14m
</code></pre>
| matt_j |
<p>When installing a k8s node with kubeadm, the following happened:</p>
<pre><code>[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
error execution phase kubelet-start: error uploading crisocket: timed out waiting for the condition
</code></pre>
<p>Thank you!</p>
| wejizhang | <p>The issue was caused by Docker version mismatch on one of the machines. Problem was resolved after reinstalling Docker to correct version.</p>
| matt_j |
<p>I have a config file stored as a Secret in Kubernetes and I want to mount it at a specific location inside the container. The problem is that what gets created inside the container is a folder instead of a file with the secret's content in it. Is there any way to fix this?
My deployment looks like this:</p>
<pre><code>kind: Deployment
apiVersion: apps/v1
metadata:
name: jetty
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: jetty
template:
metadata:
labels:
app: jetty
spec:
containers:
- name: jetty
image: quay.io/user/jetty
ports:
- containerPort: 8080
volumeMounts:
- name: config-properties
mountPath: "/opt/jetty/config.properties"
subPath: config.properties
- name: secrets-properties
mountPath: "/opt/jetty/secrets.properties"
- name: doc-path
mountPath: /mnt/storage/
resources:
limits:
cpu: '1000m'
memory: '3000Mi'
requests:
cpu: '750m'
memory: '2500Mi'
volumes:
- name: config-properties
configMap:
name: jetty-config-properties
- name: secrets-properties
secret:
secretName: jetty-secrets
- name: doc-path
persistentVolumeClaim:
claimName: jetty-docs-pvc
imagePullSecrets:
- name: rcc-quay
</code></pre>
| zozo6015 | <h3>Secrets vs ConfigMaps</h3>
<p><code>Secrets</code> let you store and manage sensitive information (e.g. passwords, private keys) and <code>ConfigMaps</code> are used for non-sensitive configuration data.<br />
As you can see in the <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="noreferrer">Secrets</a> and <a href="https://kubernetes.io/docs/concepts/configuration/configmap/" rel="noreferrer">ConfigMaps</a> documentation:</p>
<blockquote>
<p>A Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key.</p>
</blockquote>
<blockquote>
<p>A ConfigMap allows you to decouple environment-specific configuration from your container images, so that your applications are easily portable.</p>
</blockquote>
<h3>Mounting Secret as a file</h3>
<p>It is possible to create <code>Secret</code> and pass it as a <strong>file</strong> or multiple <strong>files</strong> to <code>Pods</code>.<br />
I've create simple example for you to illustrate how it works.
Below you can see sample <code>Secret</code> manifest file and <code>Deployment</code> that uses this Secret:<br>
<strong>NOTE:</strong> I used <code>subPath</code> with <code>Secrets</code> and it works as expected.</p>
<pre><code>---
apiVersion: v1
kind: Secret
metadata:
name: my-secret
data:
secret.file1: |
c2VjcmV0RmlsZTEK
secret.file2: |
c2VjcmV0RmlsZTIK
---
apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
containers:
- image: nginx
name: nginx
volumeMounts:
- name: secrets-files
mountPath: "/mnt/secret.file1" # "secret.file1" file will be created in "/mnt" directory
subPath: secret.file1
- name: secrets-files
mountPath: "/mnt/secret.file2" # "secret.file2" file will be created in "/mnt" directory
subPath: secret.file2
volumes:
- name: secrets-files
secret:
secretName: my-secret # name of the Secret
</code></pre>
<p><strong>Note:</strong> <code>Secret</code> should be created before <code>Deployment</code>.</p>
<p>After creating <code>Secret</code> and <code>Deployment</code>, we can see how it works:</p>
<pre><code>$ kubectl get secret,deploy,pod
NAME TYPE DATA AGE
secret/my-secret Opaque 2 76s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx 1/1 1 1 76s
NAME READY STATUS RESTARTS AGE
pod/nginx-7c67965687-ph7b8 1/1 Running 0 76s
$ kubectl exec nginx-7c67965687-ph7b8 -- ls /mnt
secret.file1
secret.file2
$ kubectl exec nginx-7c67965687-ph7b8 -- cat /mnt/secret.file1
secretFile1
$ kubectl exec nginx-7c67965687-ph7b8 -- cat /mnt/secret.file2
secretFile2
</code></pre>
<hr />
<h3>Projected Volume</h3>
<p>I think a better way to achieve your goal is to use <a href="https://kubernetes.io/docs/concepts/storage/volumes/#projected" rel="noreferrer">projected volume</a>.</p>
<blockquote>
<p>A projected volume maps several existing volume sources into the same directory.</p>
</blockquote>
<p>In the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-projected-volume-storage/" rel="noreferrer">Projected Volume documentation</a> you can find a detailed explanation, but additionally I created an example that might help you understand how it works.
Using projected volume I mounted <code>secret.file1</code>, <code>secret.file2</code> from <code>Secret</code> and <code>config.file1</code> from <code>ConfigMap</code> as files into the <code>Pod</code>.</p>
<pre><code>---
apiVersion: v1
kind: Secret
metadata:
name: my-secret
data:
secret.file1: |
c2VjcmV0RmlsZTEK
secret.file2: |
c2VjcmV0RmlsZTIK
---
apiVersion: v1
kind: ConfigMap
metadata:
name: my-config
data:
config.file1: |
configFile1
---
apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- name: nginx
image: nginx
volumeMounts:
- name: all-in-one
mountPath: "/config-volume"
readOnly: true
volumes:
- name: all-in-one
projected:
sources:
- secret:
name: my-secret
items:
- key: secret.file1
path: secret-dir1/secret.file1
- key: secret.file2
path: secret-dir2/secret.file2
- configMap:
name: my-config
items:
- key: config.file1
path: config-dir1/config.file1
</code></pre>
<p>We can check how it works:</p>
<pre><code>$ kubectl exec nginx -- ls /config-volume
config-dir1
secret-dir1
secret-dir2
$ kubectl exec nginx -- cat /config-volume/config-dir1/config.file1
configFile1
$ kubectl exec nginx -- cat /config-volume/secret-dir1/secret.file1
secretFile1
$ kubectl exec nginx -- cat /config-volume/secret-dir2/secret.file2
secretFile2
</code></pre>
<p>If this response doesn't answer your question, please provide more details about your <code>Secret</code> and what exactly you want to achieve.</p>
| matt_j |
<p>I want to reduce the number of metrics that are scraped under Kube-state-metrics.
When I use the following configuration:</p>
<pre><code> metric_relabel_configs:
- source_labels: [__name__]
separator: ;
regex: kube_pod_(status_phase|container_resource_requests_memory_bytes|container_resource_requests_cpu_cores|owner|labels|container_resource_limits_memory_bytes|container_resource_limits_cpu_cores)
replacement: $1
action: keep
</code></pre>
<p>It is working and I can see only the metrics I selected above.
But when I try to add another rule:</p>
<pre><code>metric_relabel_configs:
- source_labels: [__name__]
separator: ;
regex: kube_pod_(status_phase|container_resource_requests_memory_bytes|container_resource_requests_cpu_cores|owner|labels|container_resource_limits_memory_bytes|container_resource_limits_cpu_cores)
replacement: $1
action: keep
- source_labels: [__name__]
separator: ;
regex: kube_replicaset_(owner)
replacement: $1
action: keep
</code></pre>
<p>It will remove everything, including the first rule that used to work.
How should it be correctly written so that I can create multiple rules for keeping selective metrics?</p>
| Tomer Leibovich | <p>Figured out that the two conditions can't be used as separate rules: relabel rules are applied in sequence, and each <code>keep</code> drops anything that doesn't match it, so two disjoint <code>keep</code> rules end up dropping everything. Only a single <code>keep</code> rule can be used, with both patterns combined in its regex.</p>
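<p>A sketch of combining both patterns into one <code>keep</code> rule, so that either group of metrics survives:</p>
<pre><code>metric_relabel_configs:
- source_labels: [__name__]
  separator: ;
  regex: kube_pod_(status_phase|container_resource_requests_memory_bytes|container_resource_requests_cpu_cores|owner|labels|container_resource_limits_memory_bytes|container_resource_limits_cpu_cores)|kube_replicaset_(owner)
  replacement: $1
  action: keep
</code></pre>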
| Tomer Leibovich |
<p>I'm trying to deploy my code to GKE using github actions but getting an error during the deploy step:</p>
<p><a href="https://i.stack.imgur.com/HCcJD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HCcJD.png" alt="enter image description here" /></a></p>
<p>Here is my deployment.yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-3
namespace: default
labels:
type: nginx
spec:
replicas: 1
selector:
matchLabels:
- type: nginx
template:
metadata:
labels:
- type: nginx
spec:
containers:
- image: nginx:1.14
name: renderer
ports:
- containerPort: 80
</code></pre>
<p>Service.yaml:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: nginx-3-service
spec:
ports:
port: 80
protocol: TCP
targetPort: 80
</code></pre>
<p>And my dockerfile:</p>
<pre><code>FROM ubuntu/redis:5.0-20.04_beta
# Install.
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y tzdata
RUN \
sed -i 's/# \(.*multiverse$\)/\1/g' /etc/apt/sources.list && \
apt-get update && \
apt-get -y upgrade && \
apt-get install -y build-essential && \
apt-get install -y software-properties-common && \
apt-get install -y byobu curl git htop man unzip vim wget && \
rm -rf /var/lib/apt/lists/*
# Set environment variables.
ENV HOME /root
# Define working directory.
WORKDIR /root
# Define default command.
CMD ["bash"]
</code></pre>
<p>This is what the cloud deployments (Workloads) look like:
<a href="https://i.stack.imgur.com/QlAQ9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QlAQ9.png" alt="enter image description here" /></a></p>
<p><a href="https://i.stack.imgur.com/aYYHJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aYYHJ.png" alt="enter image description here" /></a></p>
<p>I'm trying to push a C++ code using an ubuntu image. I just want to simply push my code to google cloud kubernetes engine.</p>
<p>Update:
I've deleted the deployment and re-run the action and got this:</p>
<p>It said that deployment is successfully created but gives off another error:</p>
<pre><code>deployment.apps/nginx-3 created
Error from server (NotFound): deployments.apps "gke-deployment" not found
</code></pre>
| Turgut | <p>Try:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
...
labels:
type: nginx # <-- correct
spec:
...
selector:
matchLabels:
type: nginx # incorrect, remove the '-'
template:
metadata:
labels:
type: nginx # incorrect, remove the '-'
spec:
...
---
apiVersion: v1
kind: Service
...
spec:
...
ports:
- port: 80 # <-- add '-'
protocol: TCP
targetPort: 80
</code></pre>
| gohm'c |
<p>I am trying to run ingress on minikube.
I am running on Ubuntu 18.04.
I am interested in nginx-ingress from:
<a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/</a></p>
<p>I have simple test service which is running on port 3000 inside docker container. This container is pushed to docker hub. I have simple get request there:</p>
<pre><code>app.get('/api/users/currentuser', (req, res) => {
res.send('Hi there!');
});
</code></pre>
<p>Steps I've done: <strong>minikube start</strong> then <strong>minikube addons enable ingress</strong> after that I got message from minikube:</p>
<pre><code>🔎 Verifying ingress addon...
🌟 The 'ingress' addon is enabled
</code></pre>
<p>but still, when I try to verify that it is running, it doesn't seem to be working fine:
Output from <strong>kubectl get pods -n kube-system</strong></p>
<pre><code>NAME READY STATUS RESTARTS AGE
coredns-66bff467f8-bhqnk 1/1 Running 4 2d8h
etcd-minikube 1/1 Running 4 2d8h
ingress-nginx-admission-create-676jc 0/1 Completed 0 168m
ingress-nginx-admission-patch-bwf7x 0/1 Completed 0 168m
ingress-nginx-controller-7bb4c67d67-x5qzl 1/1 Running 3 168m
kube-apiserver-minikube 1/1 Running 4 2d8h
kube-controller-manager-minikube 1/1 Running 4 2d8h
kube-proxy-jg2jz 1/1 Running 4 2d8h
kube-scheduler-minikube 1/1 Running 4 2d8h
storage-provisioner 1/1 Running 6 2d8h
</code></pre>
<p>There is also no information about ingress when doing <strong>kubectl get pods</strong></p>
<pre><code>NAME READY STATUS RESTARTS AGE
auth-depl-7dff4bb675-bpzfh 1/1 Running 0 4m58s
</code></pre>
<p>I am running services via skaffold using command <strong>skaffold dev</strong></p>
<p>configuration of ingress service looks like:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: ingress-service
annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
rules:
- host: kube-test.dev
http:
paths:
- path: /api/users/?(.*)
backend:
serviceName: auth-srv
servicePort: 3000
</code></pre>
<p>which points to the following deployment (the host <code>kube-test.dev</code> is just mapped to localhost in <code>/etc/hosts</code>):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: auth-depl
spec:
replicas: 1
selector:
matchLabels:
app: auth
template:
metadata:
labels:
app: auth
spec:
containers:
- name: auth
image: geborskimateusz/auth
---
apiVersion: v1
kind: Service
metadata:
name: auth-srv
spec:
selector:
app: auth
ports:
- name: auth
protocol: TCP
port: 3000
targetPort: 3000
</code></pre>
<p>What is more, in case it matters, the skaffold config looks like this:</p>
<pre><code>apiVersion: skaffold/v2alpha3
kind: Config
deploy:
kubectl:
manifests:
- ./infra/k8s/*
build:
local:
push: false
artifacts:
- image: geborskimateusz/auth
context: auth
docker:
dockerfile: Dockerfile
sync:
manual:
- src: 'src/**/*.ts'
dest: .
</code></pre>
<p>I was running a similar config on a Mac and it worked fine, so this looks more like an ingress addon problem. Any ideas?</p>
<p>when I hit <strong>kube-test.dev/api/users/currentuser</strong> I get:</p>
<pre><code>Error: connect ECONNREFUSED 127.0.0.1:80
</code></pre>
<p>and hosts file:</p>
<pre><code>127.0.0.1 localhost
127.0.1.1 mat-5474
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
127.0.0.1 kube-test.dev
</code></pre>
<p>EDIT</p>
<p><strong>kubectl describe pods auth-depl-6845657cbc-kqdm8</strong></p>
<pre><code>Name: auth-depl-6845657cbc-kqdm8
Namespace: default
Priority: 0
Node: minikube/192.168.99.100
Start Time: Tue, 14 Jul 2020 09:51:03 +0200
Labels: app=auth
app.kubernetes.io/managed-by=skaffold-v1.12.0
pod-template-hash=6845657cbc
skaffold.dev/builder=local
skaffold.dev/cleanup=true
skaffold.dev/deployer=kubectl
skaffold.dev/docker-api-version=1.40
skaffold.dev/run-id=fcdee662-da9c-48ab-aab0-a6ed0ecef301
skaffold.dev/tag-policy=git-commit
skaffold.dev/tail=true
Annotations: <none>
Status: Running
IP: 172.17.0.4
IPs:
IP: 172.17.0.4
Controlled By: ReplicaSet/auth-depl-6845657cbc
Containers:
auth:
Container ID: docker://674d4aae381431ff124c8533250a6206d044630135854e43ac70f2830764ce0a
Image: geborskimateusz/auth:2d55de4779465ed71686bffc403e6ad7cfef717e7d297ec90ef50a363dc5d3c7
Image ID: docker://sha256:2d55de4779465ed71686bffc403e6ad7cfef717e7d297ec90ef50a363dc5d3c7
Port: <none>
Host Port: <none>
State: Running
Started: Tue, 14 Jul 2020 09:51:04 +0200
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-pcj8j (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-pcj8j:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-pcj8j
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/auth-depl-6845657cbc-kqdm8 to minikube
Normal Pulled 47s kubelet, minikube Container image "geborskimateusz/auth:2d55de4779465ed71686bffc403e6ad7cfef717e7d297ec90ef50a363dc5d3c7" already present on machine
Normal Created 47s kubelet, minikube Created container auth
Normal Started 47s kubelet, minikube Started container auth
</code></pre>
<p>EDIT 2
<strong>kubectl logs ingress-nginx-controller-7bb4c67d67-x5qzl -n kube-system</strong></p>
<pre><code>W0714 07:49:38.776541 6 flags.go:249] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
W0714 07:49:38.776617 6 client_config.go:543] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0714 07:49:38.777097 6 main.go:220] Creating API client for https://10.96.0.1:443
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: 0.32.0
Build: git-446845114
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.17.10
-------------------------------------------------------------------------------
I0714 07:49:38.791783 6 main.go:264] Running in Kubernetes cluster version v1.18 (v1.18.3) - git (clean) commit 2e7996e3e2712684bc73f0dec0200d64eec7fe40 - platform linux/amd64
I0714 07:49:39.007305 6 main.go:105] SSL fake certificate created /etc/ingress-controller/ssl/default-fake-certificate.pem
I0714 07:49:39.008092 6 main.go:113] Enabling new Ingress features available since Kubernetes v1.18
W0714 07:49:39.010806 6 main.go:125] No IngressClass resource with name nginx found. Only annotation will be used.
I0714 07:49:39.022204 6 ssl.go:528] loading tls certificate from certificate path /usr/local/certificates/cert and key path /usr/local/certificates/key
I0714 07:49:39.058275 6 nginx.go:263] Starting NGINX Ingress controller
I0714 07:49:39.076400 6 event.go:278] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"kube-system", Name:"nginx-load-balancer-conf", UID:"3af0b029-24c9-4033-8d2a-de7a15b62464", APIVersion:"v1", ResourceVersion:"2007", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap kube-system/nginx-load-balancer-conf
I0714 07:49:39.076438 6 event.go:278] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"kube-system", Name:"tcp-services", UID:"bbd76f82-e3b3-42f8-8098-54a87beb34fe", APIVersion:"v1", ResourceVersion:"2008", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap kube-system/tcp-services
I0714 07:49:39.076447 6 event.go:278] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"kube-system", Name:"udp-services", UID:"21710ee0-4b23-4669-b265-8bf5be662871", APIVersion:"v1", ResourceVersion:"2009", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap kube-system/udp-services
I0714 07:49:40.260006 6 nginx.go:307] Starting NGINX process
I0714 07:49:40.261693 6 leaderelection.go:242] attempting to acquire leader lease kube-system/ingress-controller-leader-nginx...
I0714 07:49:40.262598 6 nginx.go:327] Starting validation webhook on :8443 with keys /usr/local/certificates/cert /usr/local/certificates/key
I0714 07:49:40.262974 6 controller.go:139] Configuration changes detected, backend reload required.
I0714 07:49:40.302595 6 leaderelection.go:252] successfully acquired lease kube-system/ingress-controller-leader-nginx
I0714 07:49:40.304129 6 status.go:86] new leader elected: ingress-nginx-controller-7bb4c67d67-x5qzl
I0714 07:49:40.437999 6 controller.go:155] Backend successfully reloaded.
I0714 07:49:40.438145 6 controller.go:164] Initial sync, sleeping for 1 second.
W0714 07:51:03.723044 6 controller.go:909] Service "default/auth-srv" does not have any active Endpoint.
I0714 07:51:03.765397 6 main.go:115] successfully validated configuration, accepting ingress ingress-service in namespace default
I0714 07:51:03.771212 6 event.go:278] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-service", UID:"96ec6e26-2354-46c9-be45-ca17a5f1a6f3", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"3991", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/ingress-service
I0714 07:51:07.032427 6 controller.go:139] Configuration changes detected, backend reload required.
I0714 07:51:07.115511 6 controller.go:155] Backend successfully reloaded.
I0714 07:51:40.319830 6 status.go:275] updating Ingress default/ingress-service status from [] to [{192.168.99.100 }]
I0714 07:51:40.332044 6 event.go:278] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-service", UID:"96ec6e26-2354-46c9-be45-ca17a5f1a6f3", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"4011", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress default/ingress-service
W0714 07:55:28.215453 6 controller.go:822] Error obtaining Endpoints for Service "default/auth-srv": no object matching key "default/auth-srv" in local store
I0714 07:55:28.215542 6 controller.go:139] Configuration changes detected, backend reload required.
I0714 07:55:28.234472 6 event.go:278] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-service", UID:"96ec6e26-2354-46c9-be45-ca17a5f1a6f3", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"4095", FieldPath:""}): type: 'Normal' reason: 'DELETE' Ingress default/ingress-service
I0714 07:55:28.297582 6 controller.go:155] Backend successfully reloaded.
I0714 07:55:31.549294 6 controller.go:139] Configuration changes detected, backend reload required.
I0714 07:55:31.653169 6 controller.go:155] Backend successfully reloaded.
W0714 08:25:53.145312 6 controller.go:909] Service "default/auth-srv" does not have any active Endpoint.
I0714 08:25:53.188326 6 main.go:115] successfully validated configuration, accepting ingress ingress-service in namespace default
I0714 08:25:53.191134 6 event.go:278] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-service", UID:"4ac33fc5-ae7a-4511-922f-7e6bdc1fe4d5", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"4124", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/ingress-service
I0714 08:25:54.270931 6 status.go:275] updating Ingress default/ingress-service status from [] to [{192.168.99.100 }]
I0714 08:25:54.278468 6 event.go:278] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"ingress-service", UID:"4ac33fc5-ae7a-4511-922f-7e6bdc1fe4d5", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"4136", FieldPath:""}): type: 'Normal' reason: 'UPDATE' Ingress default/ingress-service
I0714 08:25:56.460808 6 controller.go:139] Configuration changes detected, backend reload required.
I0714 08:25:56.530559 6 controller.go:155] Backend successfully reloaded.
</code></pre>
<p><strong>kubectl describe svc auth-srv</strong></p>
<pre><code>Name: auth-srv
Namespace: default
Labels: app.kubernetes.io/managed-by=skaffold-v1.12.0
skaffold.dev/builder=local
skaffold.dev/cleanup=true
skaffold.dev/deployer=kubectl
skaffold.dev/docker-api-version=1.40
skaffold.dev/run-id=19ab20fe-baa5-4faf-a478-c1bad98a22b1
skaffold.dev/tag-policy=git-commit
skaffold.dev/tail=true
Annotations: Selector: app=auth
Type: ClusterIP
IP: 10.99.52.173
Port: auth 3000/TCP
TargetPort: 3000/TCP
Endpoints: 172.17.0.4:3000
Session Affinity: None
Events: <none>
</code></pre>
| Mateusz Gebroski | <p>Run <code>minikube ip</code>. You will get an IP address; map <code>kube-test.dev</code> to that IP in your <code>/etc/hosts</code> file (instead of <code>127.0.0.1</code>), because the ingress is exposed on the minikube node's IP, not on localhost.</p>
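<p>For example (the IP below is the minikube node IP from the question's output; yours may differ):</p>
<pre><code>$ minikube ip
192.168.99.100

# /etc/hosts
192.168.99.100 kube-test.dev
</code></pre>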
| Yash Srivastava |
<p>I am running command <strong>kubectl top nodes</strong> and getting error : </p>
<pre><code>node@kubemaster:~/Desktop/metric$ kubectl top nodes
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)
</code></pre>
<p>Metric Server pod is running with following params : </p>
<pre><code> command:
- /metrics-server
- --metric-resolution=30s
- --requestheader-allowed-names=aggregator
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
</code></pre>
<p>Most of the answers I am finding suggest the above params, but I am still getting this error:</p>
<pre><code>E0601 18:33:22.012798 1 manager.go:111] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:kubemaster: unable to fetch metrics from Kubelet kubemaster (192.168.56.30): Get https://192.168.56.30:10250/stats/summary?only_cpu_and_memory=true: context deadline exceeded, unable to fully scrape metrics from source kubelet_summary:kubenode1: unable to fetch metrics from Kubelet kubenode1 (192.168.56.31): Get https://192.168.56.31:10250/stats/summary?only_cpu_and_memory=true: dial tcp 192.168.56.31:10250: i/o timeout]
</code></pre>
<p>I have deployed metric server using : </p>
<pre><code>kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
</code></pre>
<p>What am I missing?
Using Calico for Pod Networking</p>
<p>On the GitHub page of metrics-server, under FAQ: </p>
<pre><code>[Calico] Check whether the value of CALICO_IPV4POOL_CIDR in the calico.yaml conflicts with the local physical network segment. The default: 192.168.0.0/16.
</code></pre>
<p>Could this be the reason? Can someone explain this to me?</p>
<p>I have setup Calico using :
kubectl apply -f <a href="https://docs.projectcalico.org/v3.14/manifests/calico.yaml" rel="noreferrer">https://docs.projectcalico.org/v3.14/manifests/calico.yaml</a></p>
<p>My Node Ips are : 192.168.56.30 / 192.168.56.31 / 192.168.56.32</p>
<p>I have initiated the cluster with --pod-network-cidr=20.96.0.0/12. So my pods Ip are 20.96.205.192 and so on.</p>
<p>Also getting this in apiserver logs</p>
<pre><code>E0601 19:29:59.362627 1 available_controller.go:420] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.100.152.145:443/apis/metrics.k8s.io/v1beta1: Get https://10.100.152.145:443/apis/metrics.k8s.io/v1beta1: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
</code></pre>
<p>where 10.100.152.145 is IP of service/metrics-server(ClusterIP)</p>
<p>Surprisingly, it works on another cluster with node IPs in the 172.16.0.0 range.
Everything else is the same: set up using kubeadm, Calico, same pod CIDR.</p>
| Ankit Bansal | <p>It started working after I edited the metrics-server Deployment YAML to set the following in the pod spec:</p>
<p><strong>hostNetwork: true</strong></p>
<p><img src="https://i.stack.imgur.com/8nOBy.png" alt="Click here to view the image description " /></p>
<p>Refer to the link below:
<a href="https://www.linuxsysadmins.com/service-unavailable-kubernetes-metrics/" rel="noreferrer">https://www.linuxsysadmins.com/service-unavailable-kubernetes-metrics/</a></p>
| Damilola Abioye |
<p>I am a Kubernetes novice. I am trying to install a csi driver to a Kubernetes Namespace in a kubernetes cluster version 1.16.15.</p>
<p>I am using Helm 2.16 to do the install with the command below:</p>
<p><code>.\helm install --name csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver --namespace csi --debug</code></p>
<pre><code>[debug] Created tunnel using local port: '63250'
[debug] SERVER: "127.0.0.1:63250"
[debug] Original chart version: ""
[debug] Fetched secrets-store-csi-driver/secrets-store-csi-driver to C:\Users\XXX\.helm\cache\archive\secrets-store-csi-driver-0.0.19.tgz
[debug] CHART PATH: C:\Users\XXX\.helm\cache\archive\secrets-store-csi-driver-0.0.19.tgz
**Error: render error in "secrets-store-csi-driver/templates/csidriver.yaml": template: secrets-store-csi-driver/templates/_helpers.tpl:40:45: executing "csidriver.apiVersion" at <.Capabilities.KubeVersion.Version>: can't evaluate field Version in type *version.Info**
</code></pre>
<p>csidriver.yaml :</p>
<pre class="lang-yaml prettyprint-override"><code> apiVersion: {{ template "csidriver.apiVersion" . }}
kind: CSIDriver
metadata:
name: secrets-store.csi.k8s.io
spec:
podInfoOnMount: true
attachRequired: false
{{- if semverCompare ">=1.16-0" .Capabilities.KubeVersion.Version }}
# Added in Kubernetes 1.16 with default mode of Persistent. Secrets store csi driver needs Ephermeral to be set.
volumeLifecycleModes:
- Ephemeral
{{ end }}
</code></pre>
<p>Any help much appreciated</p>
| ptilloo | <p>The issue was caused by an old <code>Helm</code> version: <code>.Capabilities.KubeVersion.Version</code> is only exposed by <code>Helm v3</code> templates (Helm 2's <code>.Capabilities.KubeVersion</code> only has fields such as <code>GitVersion</code>, which is exactly what the "can't evaluate field Version" error points to).
The problem was resolved after upgrading to <code>Helm v3</code>.</p>
<p>There is a helpful <a href="https://helm.sh/docs/topics/v2_v3_migration/" rel="nofollow noreferrer">guide</a> on how to migrate from <code>Helm v2</code> to <code>v3</code>.</p>
| matt_j |
<p>I'm trying to start a minikube machine with <code>minikube start --driver=docker</code>. But I'm seeing the following error.</p>
<pre><code>😄 minikube v1.9.2 on Ubuntu 20.04
✨ Using the docker driver based on user configuration
👍 Starting control plane node m01 in cluster minikube
🚜 Pulling base image ...
🔥 Creating Kubernetes in docker container with (CPUs=6) (8 available), Memory=8192MB (15786MB available) ...
🤦 StartHost failed, but will try again: creating host: create host timed out in 120.000000 seconds
🔥 Deleting "minikube" in docker ...
🔥 Creating Kubernetes in docker container with (CPUs=6) (8 available), Memory=8192MB (15786MB available) ...
❗ Executing "docker inspect -f {{.State.Status}} minikube" took an unusually long time: 3.934644373s
💡 Restarting the docker service may improve performance.
❌ [CREATE_TIMEOUT] Failed to start docker container. "minikube start" may fix it. creating host: create host timed out in 120.000000 seconds
💡 Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
⁉️ Related issue: https://github.com/kubernetes/minikube/issues/7072
</code></pre>
<p><code>minikube status</code> returns</p>
<pre><code>E0702 08:25:03.817735 36017 status.go:233] kubeconfig endpoint: empty IP
m01
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Misconfigured
WARNING: Your kubectl is pointing to stale minikube-vm.
To fix the kubectl context, run `minikube update-context`
</code></pre>
<p>I've been using this driver for a few weeks now and it worked fine without any errors until yesterday. I tried restarting docker daemon and service but the issue is still there.</p>
<p>Docker version 19.03.8, build afacb8b7f0</p>
<p>minikube version: v1.9.2
commit: 93af9c1e43cab9618e301bc9fa720c63d5efa393</p>
<p>Ubuntu 20.04 LTS</p>
<p><strong>EDIT</strong>
I managed to start the machine without any changes on a later attempt, but it takes a considerable time to start (5-10 mins). Any ideas as to why this is happening?</p>
| RrR- | <p>The solution to this issue is to enable IOMMU in your GRUB boot parameters.</p>
<p>You can do this by setting the following in <code>/etc/default/grub</code>:</p>
<p><code>GRUB_CMDLINE_LINUX_DEFAULT="intel_iommu=on"</code></p>
<p>If you're using an AMD processor, append <code>amd_iommu=on</code> to the boot parameters instead:</p>
<p><code>GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amd_iommu=on"</code></p>
<p>Then run <code>update-grub</code> and reboot.</p>
| Fahd Rahali |
<p>I have created/provisioned a PVC and PV dynamically from a custom storage class that has a retain reclaimPolicy using the below k8s resource files.</p>
<pre><code># StorageClass yaml spec
# kubectl apply -f storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: fast-ssd
provisioner: kubernetes.io/gce-pd
parameters:
type: pd-ssd
fstype: ext4
replication-type: none
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: Immediate
</code></pre>
<pre><code># PersistentVolumeClaim yaml specs
# kubectl apply -f mysql-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
labels:
app: test-mysql
name: test-mysql-pv-claim
spec:
storageClassName: "fast-ssd"
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 51G
</code></pre>
<p>Below are the sequence of steps to followed to reproduce my scenario:</p>
<pre><code>ubuntu@ubuntu-ThinkPad-X230-Tablet:/home/ubuntu/test$ kubectl apply -f mysql-pvc.yaml
persistentvolumeclaim/test-mysql-pv-claim created
ubuntu@ubuntu-ThinkPad-X230-Tablet:/home/ubuntu/test$ kubectl get pvc,pv
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/test-mysql-pv-claim Bound pvc-a6bd789c-9e3c-43c8-8604-2e91b2fee616 48Gi RWO fast-ssd 7s
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-a6bd789c-9e3c-43c8-8604-2e91b2fee616 48Gi RWO Retain Bound default/test-mysql-pv-claim fast-ssd 6s
ubuntu@ubuntu-ThinkPad-X230-Tablet:/home/ubuntu/test$ kubectl delete -f mysql-pvc.yaml
persistentvolumeclaim "test-mysql-pv-claim" deleted
ubuntu@ubuntu-ThinkPad-X230-Tablet:/home/ubuntu/test$ kubectl get pvc,pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-a6bd789c-9e3c-43c8-8604-2e91b2fee616 48Gi RWO Retain Released default/test-mysql-pv-claim fast-ssd 44s
ubuntu@ubuntu-ThinkPad-X230-Tablet:/home/ubuntutest$ kubectl apply -f mysql-pvc.yaml
persistentvolumeclaim/test-mysql-pv-claim created
ubuntu@ubuntu-ThinkPad-X230-Tablet:/home/ubuntu/test$ kubectl get pvc,pv
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/test-mysql-pv-claim Bound pvc-fbc266ab-60b0-441b-a789-84f950071390 48Gi RWO fast-ssd 6m6s
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-a6bd789c-9e3c-43c8-8604-2e91b2fee616 48Gi RWO Retain Released default/test-mysql-pv-claim fast-ssd 7m6s
persistentvolume/pvc-fbc266ab-60b0-441b-a789-84f950071390 48Gi RWO Retain Bound default/test-mysql-pv-claim fast-ssd 6m5s
</code></pre>
<p>As we can see from the last part of the output, the new PVC is bound to a new PV, not to the old released PV.</p>
<p>So can we make some changes in the mysql-pvc.yaml file so that our PVC can be re-bound to the old released PV, since that PV holds important data that we need?</p>
| devops-admin | <p>When your reclaim policy is "retain", any change in the pvc YAML will not work.</p>
<p>As per Kubernetes doc:</p>
<blockquote>
<p>The Retain reclaim policy allows for manual reclamation of the resource. When the <code>PersistentVolumeClaim</code> is deleted, the <code>PersistentVolume</code> still exists and the volume is considered "released". But it is not yet available for another claim because the previous claimant's data remains on the volume. An administrator can manually reclaim the volume with the following steps.</p>
</blockquote>
<ul>
<li>Delete the <code>PersistentVolume</code>. The associated storage asset in external infrastructure (such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume) still exists after the PV is deleted.</li>
<li>Manually clean up the data on the associated storage asset accordingly.</li>
<li>Manually delete the associated storage asset, or if you want to reuse the same storage asset, create a new <code>PersistentVolume</code> with the storage asset definition.</li>
</ul>
<p>Ref:
<a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#retain" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/#retain</a></p>
| Pulak Kanti Bhowmick |
<p>I'm trying to deploy my frontend client application and backend API on the same domain and I want the frontend on the base path: /</p>
<p>However, I realized that the frontend and the backend need two different rewrite-target to accomplish this.</p>
<p>front-end works with:</p>
<pre><code>nginx.ingress.kubernetes.io/rewrite-target: /
</code></pre>
<p>while the backend works with:</p>
<pre><code>nginx.ingress.kubernetes.io/rewrite-target: /$2
</code></pre>
<p>I tried using two different ingress services in order to accommodate different rewrite-target, but that fails because the host was the same domain.</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: test-ingress
namespace: test
annotations:
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt-staging
nginx.ingress.kubernetes.io/use-regex: "true"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/add-base-url: "true"
nginx.ingress.kubernetes.io/service-upstream: "true"
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/proxy-buffer-size: 128k
nginx.ingress.kubernetes.io/proxy-buffering: "on"
nginx.ingress.kubernetes.io/proxy-buffers-number: "4"
nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
nginx.ingress.kubernetes.io/configuration-snippet: |
more_set_headers "server: hide";
more_set_headers "X-Content-Type-Options: nosniff";
more_set_headers "X-Xss-Protection: 1";
spec:
tls:
- hosts:
- test.eastus.cloudapp.azure.com
secretName: tls-secret
rules:
- host: test.eastus.cloudapp.azure.com
http:
paths:
- backend:
serviceName: webapp
servicePort: 80
path: /
- backend:
serviceName: api
servicePort: 80
path: /api(/|$)(.*)
</code></pre>
<p>I know I can make both work with the same rewrite-target <code>/$2</code> if I change the frontend path to path: /app(/|$)(.*), but I don't want to do that unless it is the only option.</p>
<p>Is there a way for me to better configure a single ingress to work for 2 different rewrite-target?</p>
| capiono | <p>Ingress with API version networking.k8s.io/v1 with k8s version 1.19+ can solve your problem. I have given an example below.</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: simple-fanout-example
spec:
rules:
- host: foo.bar.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: frontend-service
port:
number: 80
- path: /api
pathType: Prefix
backend:
service:
name: backend-service
port:
number: 8080
</code></pre>
<p>According to kubernetes <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#multiple-matches" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p>In some cases, multiple paths within an Ingress will match a request. In those cases precedence will be given first to the longest matching path. If two paths are still equally matched, precedence will be given to paths with an exact path type over prefix path type.</p>
</blockquote>
<p>So, when you are looking for a resource located at path "/home.html", it only matches your frontend service. But when you are looking for a resource located at path "/api/something", it matches both services; it will always go to the backend service because of the longest-path match stated above.</p>
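<p>For example, with the manifest above (hostnames as in the example):</p>
<pre><code>curl http://foo.bar.com/home.html      # matches only "/"       -> frontend-service:80
curl http://foo.bar.com/api/something  # longest match is "/api" -> backend-service:8080
</code></pre>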
| Pulak Kanti Bhowmick |
<p>I have an application with Pods that are not part of a Deployment, and I use NodePort services. I access my application through <strong>ipv4:nodePorts/url-microservice</strong>. When I want to scale my Pods, do I need a Deployment with replicas?
I tried using a Deployment with NodePorts, but accessing it this way no longer works: <strong>ipv4:nodePorts/url-microservice</strong></p>
<p>I'll post my deployments and service for someone to see if I'm wrong somewhere</p>
<p>Deployments:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-gateway
labels:
app: my-gateway
spec:
replicas: 1
selector:
matchLabels:
run: my-gateway
template:
metadata:
labels:
run: my-gateway
spec:
containers:
- name: my-gateway
image: rafaelribeirosouza86/shopping:api-gateway
imagePullPolicy: Always
ports:
- containerPort: 31534
protocol: TCP
imagePullSecrets:
- name: regcred
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-adm-contact
labels:
app: my-adm-contact
spec:
replicas: 1
selector:
matchLabels:
run: my-adm-contact
template:
metadata:
labels:
run: my-adm-contact
spec:
containers:
- name: my-adm-contact
image: rafaelribeirosouza86/shopping:my-adm-contact
imagePullPolicy: Always
ports:
- containerPort: 30001
protocol: TCP
imagePullSecrets:
- name: regcred
</code></pre>
<p>Services:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-adm-contact-service
namespace: default
spec:
# clusterIP: 10.99.233.224
ports:
- port: 30001
protocol: TCP
targetPort: 30001
nodePort: 30001
# externalTrafficPolicy: Local
selector:
app: my-adm-contact
# type: ClusterIP
# type: LoadBalancer
type: NodePort
---
apiVersion: v1
kind: Service
metadata:
name: my-gateway-service
namespace: default
spec:
# clusterIP: 10.99.233.224
# protocol: ##The default is TCP
# port: ##Exposes the service within the cluster. Also, other Pods use this to access the Service
# targetPort: ##The service sends request while containers accept traffic on this port.
ports:
- port: 31534
protocol: TCP
targetPort: 31534
nodePort: 31534
# externalTrafficPolicy: Local
selector:
app: my-gateway
# type: ClusterIP
# type: LoadBalancer
type: NodePort
</code></pre>
| Rafael Souza | <p>Try:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-gateway
...
spec:
...
template:
metadata:
labels:
run: my-gateway # <-- take note
...
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-adm-contact
...
spec:
...
template:
metadata:
labels:
run: my-adm-contact # <-- take note
...
---
apiVersion: v1
kind: Service
metadata:
name: my-adm-contact-service
...
selector:
run: my-adm-contact # <-- wrong selector, changed from 'app' to 'run'
---
apiVersion: v1
kind: Service
metadata:
name: my-gateway-service
...
selector:
run: my-gateway # <-- wrong selector, changed from 'app' to 'run'
</code></pre>
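<p>Once the selectors match the pod template labels, a quick way to confirm (assuming the names above) is to check that each Service now has endpoints:</p>
<pre><code>kubectl get endpoints my-gateway-service my-adm-contact-service
# Each Service should now list pod IP:port pairs instead of being empty
</code></pre>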
| gohm'c |
<p>In the following pod yaml, I cannot get <code>source</code> command to work. Initially I inserted the command under <code>args</code> between <a href="https://stackoverflow.com/questions/33887194/how-to-set-multiple-commands-in-one-yaml-file-with-kubernetes"><code>echo starting</code> and <code>echo done</code></a> and now I tried <a href="https://stackoverflow.com/questions/44140593/how-to-run-command-after-initialization"><code>{.lifecycle.postStart}</code></a> to no avail.</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mubu62
labels:
app: mubu62
spec:
containers:
- name: mubu621
image: dockreg:5000/mubu6:v6
imagePullPolicy: Always
ports:
- containerPort: 5021
command: ["/bin/sh","-c"]
args:
- echo starting;
echo CONT1=\"mubu621\" >> /etc/environment;
touch /mubu621;
sed -i 's/#Port 22/Port 5021/g' /etc/ssh/sshd_config;
sleep 3650d;
echo done;
lifecycle:
postStart:
exec:
command: ["/bin/bash","-c","source /etc/environment"]
- name: mubu622
image: dockreg:5000/mubu6:v6
imagePullPolicy: Always
ports:
- containerPort: 5022
imagePullSecrets:
- name: regcred
nodeName: spring
restartPolicy: Always
</code></pre>
<p><code>Kubectl apply</code> throws no errors, but <code>echo $CONT1</code> returns nothing! <code>mubu6</code> is a modified Ubuntu image.</p>
<p>The reason I am doing this is that when I <code>ssh</code> from another pod into this pod <code>(mubu621)</code>, Kubernetes environment variables set through <code>env</code> are not seen in the <code>ssh</code> session.</p>
<p>Any help would be much appreciated!</p>
| Alexander Sofianos | <p>After experimenting with the suggestions under <a href="https://unix.stackexchange.com/questions/101168/set-environment-variable-automatically-upon-ssh-login-no-root-access">set-environment-variable-automatically-upon-ssh-login</a>, what worked was to substitute</p>
<pre><code>echo CONT1=\"mubu621\" >> /etc/environment;
</code></pre>
<p>with</p>
<pre><code>echo CONT1=\"mubu621\" >> /root/.bashrc;
</code></pre>
<p>and delete</p>
<pre><code>lifecycle:
postStart:
exec:
command: ["/bin/bash","-c","source /etc/environment"]
</code></pre>
<p>that didn't work anyway.</p>
<p>Upon SSH-ing from <code>container mubu622</code> to <code>container mubu621</code>, I can now successfully execute <code>echo $CONT1</code> with <code>mubu621</code> output, <strong>without having to <code>source</code> <code>/root/.bashrc</code> first</strong>, which was initially the case with writing the <code>env_variable</code> in <code>/etc/environment</code>.</p>
<p><strong>In summary:</strong> when using a <code>bash shell</code> in <code>kubernetes containers</code>, you can <code>SSH</code> in from another container and <code>echo</code> variables written in <code>/root/.bashrc</code> without sourcing first (because <code>kubernetes env_variables</code> are not available in an SSH session).
This is very useful, e.g. in the case of <strong>multi-container pods</strong>, so you know, among other things, which container you are currently logged into.</p>
| Alexander Sofianos |
<p>I have a classic AWS Elastic Load Balancer deployed into my kubernetes cluster, and i am able to get to my UI app through the load balancer. However I know thet Network Load Balancers are the way forward, and i think i should be using that because I think it can issue an External IP which I can then use in my Godaddy account to map to my domain name.</p>
<p>However when i apply the k8s manifest for the NLB, it just stays in a pending state.</p>
<p>This is my configuration here.</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: <service name>
annotations:
service.beta.kubernetes.io/aws-load-balancer-type : external
service.beta.kubernetes.io/aws-load-balancer-subnets : <a public subnet>
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type : ip
service.beta.kubernetes.io/aws-load-balancer-scheme : internet-facing
spec:
selector:
app: <pod label>
type: LoadBalancer
ports:
- protocol: TCP
port: 85
targetPort: 80
</code></pre>
<p>Can anyone help in fixing this?</p>
| floormind | <p>Try updating your annotation: <code>service.beta.kubernetes.io/aws-load-balancer-type: "nlb"</code></p>
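<p>For context, a minimal Service sketch using that annotation could look like this (the name and selector are placeholders; the ports are taken from the question):</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  selector:
    app: my-pod-label
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 85
      targetPort: 80
</code></pre>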
| gohm'c |
<p>I am fairly new to K8 and am working on a setup on my sandbox environment to be replicated on some Ubuntu VMs at a client site.</p>
<p>I have a number of services running within my cluster and ingress rules set up to route to them. Ingress Add-on is enabled. I am now trying to expose the endpoints (via Ingress) outside of the machine on which MicroK8s is installed.</p>
<p>I have set up an nginx (edge) proxy server outside of my cluster and am looking for the MicroK8s IP address I need to proxy to. (In production I'll have an edge proxy that takes https and proxies to http)</p>
<p>I have had this working previously on minikube where I proxied to the IP address returned by
minikube ip, but I cannot find a corresponding command on microK8s</p>
<p>Can anyone advise how to do this routing? Thanks.</p>
| royneedshelp | <p>The problem was that I had learned Kubernetes using minikube, and it handles ingress differently.</p>
<p>Moving to MicroK8s, I had to add my own NodePort Service for the ingress controller in the ingress namespace and expose port 80. This exposed my ingress endpoints on all external network interfaces, and my self-provisioned edge proxy server was then able to redirect to port 80 on the MicroK8s host machine's public IP.</p>
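<p>For anyone looking for a concrete starting point, a NodePort Service along these lines can do the job (a sketch only; the namespace and pod labels of the MicroK8s ingress controller may differ between versions, so check them first with <code>microk8s kubectl -n ingress get pods --show-labels</code>):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: ingress-nodeport
  namespace: ingress
spec:
  type: NodePort
  selector:
    name: nginx-ingress-microk8s
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30443
</code></pre>
<p>The external edge proxy can then forward traffic to the chosen nodePort on the host's public IP.</p>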
| royneedshelp |
<p>Hello everyone, I have a problem with Apache Spark (version 3.3.1) on <code>k8s</code>.</p>
<p>In short:
When I run the statement</p>
<pre class="lang-py prettyprint-override"><code>print(sc.uiWebUrl)
</code></pre>
<p>within a pod, I would get a <code>URL</code> that is accessible from outside the <code>k8s</code> cluster.
Something like:</p>
<pre><code>http://{{my-ingress-host}}
</code></pre>
<p>Long story:
I want to create a workspace for Apache Spark on <code>k8s</code>, where the driver's pod is the <strong>workspace</strong> that I work on. I want to let the client run Apache Spark either with <code>pyspark-shell</code> or with the <code>pyspark</code> python library.</p>
<p>Either way, I want the UI's web URL to be one that is accessible from the outside world (outside the <code>k8s</code> cluster).
Why? Because of <code>UX</code>: I want to make my clients' lives easier.</p>
<p>Because I run on <code>k8s</code>, part of the configuration of my Apache Spark program is:</p>
<pre><code>spark.driver.host={{driver-service}}.{{drivers-namespace}}.svc.cluster.local
spark.driver.bindAddress=0.0.0.0
</code></pre>
<p>Because of that, the output of this code:</p>
<pre class="lang-py prettyprint-override"><code>print(sc.webUiUrl)
</code></pre>
<p>Would be:</p>
<pre><code>http://{{driver-service}}.{{drivers-namespace}}.svc.cluster.local:4040
</code></pre>
<p>Also in the pyspark-shell, the same address would be displayed.</p>
<p>So my question is, is there a way to change the ui web url's host to a host that I have defined in my <code>ingress</code> to make my client's life easier?
So the new output would be:</p>
<pre><code>http://{{my-defined-host}}
</code></pre>
<p>Other points I want to make sure to adjust the solution as much as possible:</p>
<ul>
<li>I don't have an <code>nginx</code> ingress in my k8s cluster. Maybe I have a <code>HAPROXY</code> ingress. But I want to be coupled to my ingress implementation as <strong>little</strong> as possible.</li>
<li>I would prefer that the client needs to configure Apache Spark as <strong>little</strong> as possible.</li>
<li>I would prefer that the ui web url of the Spark's context would be set when creating the context, meaning before the <code>pyspark-shell</code> displays the welcome screen.</li>
<li>I have tried messing with the <code>ui.proxy</code> configurations, and it hasn't helped; sometimes it made things worse.</li>
</ul>
<p>Thanks ahead for everyone, any help would be appreciated.</p>
| Avihai Shalom | <p>You can change your web UI's host to a host that you want by setting the <a href="https://spark.apache.org/docs/3.4.1/configuration.html#environment-variables" rel="nofollow noreferrer"><code>SPARK_PUBLIC_DNS</code></a> environment variable. This needs to be done on the driver, since the web UI runs on the driver.</p>
<p>To set the port for the web UI, you can do that using the <a href="https://spark.apache.org/docs/3.4.1/configuration.html#spark-ui" rel="nofollow noreferrer"><code>spark.ui.port</code></a> config parameter.</p>
<p>So putting both together using <code>spark-submit</code> for example, makes something like the following:</p>
<pre><code>bin/spark-submit \
--class ... \
--master k8s://... \
....
....
....
--conf spark.kubernetes.driverEnv.SPARK_PUBLIC_DNS=YOUR_VALUE_HERE
--conf spark.ui.port=YOUR_WANTED_PORT_HERE
...
</code></pre>
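<p>Since in your setup the driver is the workspace pod itself, the same idea can also be applied by setting <code>SPARK_PUBLIC_DNS</code> directly in that pod's spec; a hedged sketch, where the container name, image and host value are placeholders for your own ingress host:</p>
<pre><code>containers:
  - name: workspace
    image: my-spark-workspace:latest
    env:
      - name: SPARK_PUBLIC_DNS
        value: "my-ingress-host.example.com"
</code></pre>
<p>Combined with a fixed <code>spark.ui.port</code>, the printed web UI URL should then show the ingress host instead of the internal service name.</p>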
| Koedlt |
<p>Hello, I'm trying to launch my own deployment with my own container in minikube. Here's my YAML file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: wildboar-nginx-depl
labels:
app: services.nginx
spec:
replicas: 2
selector:
matchLabels:
app: services.nginx
template:
metadata:
labels:
app: services.nginx
spec:
containers:
- name: wildboar-nginx-pod
image: services.nginx
ports:
- containerPort: 80
- containerPort: 443
- containerPort: 22
imagePullPolicy: Never
---
apiVersion: v1
kind: Service
metadata:
name: wildboar-nginx-service
annotations:
metallb.universe.tf/allow-shared-ip: wildboar-key
spec:
type: LoadBalancer
loadBalancerIP: 192.168.1.101
selector:
app: services.nginx
ports:
- name: http
protocol: TCP
port: 80
targetPort: 80
nodePort: 30080
- name: https
protocol: TCP
port: 443
targetPort: 443
nodePort: 30443
- name: ssh
protocol: TCP
port: 22
targetPort: 22
nodePort: 30022
</code></pre>
<p>That's my Dockerfile</p>
<pre><code>FROM alpine:latest
RUN apk update && apk upgrade -U -a
RUN apk add nginx openssl openrc openssh supervisor
RUN mkdir /www/
RUN adduser -D -g 'www' www
RUN chown -R www:www /www
RUN chown -R www:www /var/lib/nginx
RUN openssl req -x509 -nodes -days 30 -newkey rsa:2048 -subj \
"/C=RU/ST=Moscow/L=Moscow/O=lchantel/CN=localhost" -keyout \
/etc/ssl/private/lchantel.key -out /etc/ssl/certs/lchantel.crt
COPY ./conf /etc/nginx/conf.d/default.conf
COPY ./nginx_conf.sh .
COPY ./supervisor.conf /etc/
RUN mkdir -p /run/nginx/
EXPOSE 80 443 22
RUN chmod 755 /nginx_conf.sh
CMD sh nginx_conf.sh
</code></pre>
<p>That's my nginx_conf.sh</p>
<pre><code>#!bin/sh
cp /var/lib/nginx/html/index.html /www/
rc default
rc-service sshd start
ssh-keygen -A
rc-service sshd stop
/usr/bin/supervisord -c /etc/supervisord.conf
</code></pre>
<p>I apply the YAML files successfully, but then I'm stuck with a CrashLoopBackOff error:</p>
<pre><code>$ kubectl get pod
NAME READY STATUS RESTARTS AGE
wildboar-nginx-depl-57d64f58d8-cwcnn 0/1 CrashLoopBackOff 2 40s
wildboar-nginx-depl-57d64f58d8-swmq2 0/1 CrashLoopBackOff 2 40s
</code></pre>
<p>I tried to reboot, but it doesn't help. I tried to describe the pod, but the information is not helpful:</p>
<pre><code>$ kubectl describe pod wildboar-nginx-depl-57d64f58d8-cwcnn
Name: wildboar-nginx-depl-57d64f58d8-cwcnn
Namespace: default
Priority: 0
Node: minikube/192.168.99.100
Start Time: Sun, 06 Dec 2020 17:49:19 +0300
Labels: app=services.nginx
pod-template-hash=57d64f58d8
Annotations: <none>
Status: Running
IP: 172.17.0.7
IPs:
IP: 172.17.0.7
Controlled By: ReplicaSet/wildboar-nginx-depl-57d64f58d8
Containers:
wildboar-nginx-pod:
Container ID: docker://6bd4ab3b08703293697d401e355d74d1ab09f938eb23b335c92ffbd2f8f26706
Image: services.nginx
Image ID: docker://sha256:a62f240db119e727935f072686797f5e129ca44cd1a5f950e5cf606c9c7510b8
Ports: 80/TCP, 443/TCP, 22/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 06 Dec 2020 17:52:13 +0300
Finished: Sun, 06 Dec 2020 17:52:15 +0300
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 06 Dec 2020 17:50:51 +0300
Finished: Sun, 06 Dec 2020 17:50:53 +0300
Ready: False
Restart Count: 5
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hr82j (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-hr82j:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hr82j
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m9s Successfully assigned default/wildboar-nginx-depl-57d64f58d8-cwcnn to minikube
Normal Pulled 98s (x5 over 3m9s) kubelet, minikube Container image "services.nginx" already present on machine
Normal Created 98s (x5 over 3m9s) kubelet, minikube Created container wildboar-nginx-pod
Normal Started 98s (x5 over 3m9s) kubelet, minikube Started container wildboar-nginx-pod
Warning BackOff 59s (x10 over 3m4s) kubelet, minikube Back-off restarting failed container
</code></pre>
<p>I have run out of ideas, what should I do? :(</p>
| WildBoar | <p>Well, I solved the issue with nginx. First of all, I rewrote supervisor.conf and now it looks something like this:</p>
<pre><code>[supervisord]
nodaemon=true
user = root
[program:nginx]
command=nginx -g 'daemon off;'
autostart=true
autorestart=true
startsecs=0
redirect_stderr=true
[program:ssh]
command=/usr/sbin/sshd -D
autostart=true
autorestart=true
</code></pre>
<p>Second, I had a problem with the LoadBalancer. I swapped the Service and Deployment configurations in the file and also added <code>spec.externalTrafficPolicy: Cluster</code> to the Service (for IP address sharing).</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: wildboar-nginx-service
labels:
app: nginx
annotations:
metallb.universe.tf/allow-shared-ip: minikube
spec:
type: LoadBalancer
loadBalancerIP: 192.168.99.105
selector:
app: nginx
externalTrafficPolicy: Cluster
ports:
- name: http
protocol: TCP
port: 80
targetPort: 80
- name: https
protocol: TCP
port: 443
targetPort: 443
- name: ssh
protocol: TCP
port: 22
targetPort: 22
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: wildboar-nginx-depl
labels:
app: nginx
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
restartPolicy: Always
containers:
- name: wildboar-nginx-pod
image: wildboar.nginx:latest
ports:
- containerPort: 80
name: http
- containerPort: 443
name: https
- containerPort: 22
name: ssh
imagePullPolicy: Never
</code></pre>
<p>Third, I rebuilt minikube and reapplied all configs with a script like this:</p>
<pre><code>#!/bin/bash
kubectl ns default
kubectl delete deployment --all
kubectl delete service --all
kubectl ns metallb-system
kubectl delete configmap --all
kubectl ns default
docker rmi -f <your_custom_docker_image>
minikube stop
minikube delete
minikube start --driver=virtualbox --disk-size='<your size>mb' --memory='<your_size>mb'
minikube addons enable metallb
eval $(minikube docker-env)
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.5/manifests/metallb.yaml
# next line is only when you use mettallb for first time
#kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
docker build -t <your_custom_docker_images> .
kubectl apply -f <mettalb_yaml_config>.yaml
kubectl apply -f <your_config_with_deployment_and_service>.yaml
</code></pre>
<p>I also noticed that YAML files are very sensitive to spaces and tabs, so I installed yamllint for basic debugging of YAML files. I want to thank confused genius and David Maze for their help!</p>
| WildBoar |
<p>I'm trying to get off the ground with Spark and Kubernetes but I'm facing difficulties. I used the helm chart here:</p>
<p><a href="https://github.com/bitnami/charts/tree/main/bitnami/spark" rel="nofollow noreferrer">https://github.com/bitnami/charts/tree/main/bitnami/spark</a></p>
<p>I have 3 workers and they all report running successfully. I'm trying to run the following program remotely:</p>
<pre><code>from pyspark.sql import SparkSession
spark = SparkSession.builder.master("spark://<master-ip>:<master-port>").getOrCreate()
df = spark.read.json('people.json')
</code></pre>
<p>Here's the part that's not entirely clear. Where should the file people.json actually live? I have it locally where I'm running the python code and I also have it on a PVC that the master and all workers can see at /sparkdata/people.json.</p>
<p>When I run the 3rd line as simply <code>'people.json'</code> then it starts running but errors out with:</p>
<p><code>WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources</code></p>
<p>If I run it as <code>'/sparkdata/people.json'</code> then I get</p>
<p><code>pyspark.sql.utils.AnalysisException: Path does not exist: file:/sparkdata/people.json</code></p>
<p>Not sure where I go from here. To be clear I want it to read files from the PVC. It's an NFS share that has the data files on it.</p>
| mxcolin | <p>Your <code>people.json</code> file needs to be accessible to your driver + executor pods. This can be achieved in multiple ways:</p>
<ul>
<li>having some kind of network/cloud drive that each pod can access</li>
<li>mounting volumes on your pods, and then uploading the data to those volumes using <code>--files</code> in your spark-submit.</li>
</ul>
<p>The latter option might be the simpler to set up. <a href="https://jaceklaskowski.github.io/spark-kubernetes-book/demo/spark-and-local-filesystem-in-minikube/#hostpath" rel="nofollow noreferrer">This page</a> discusses in more detail how you could do this, but we can shortly go to the point. If you add the following arguments to your spark-submit you should be able to get your <code>people.json</code> on your driver + executors (you just have to choose sensible values for the $VAR variables in there):</p>
<pre><code> --files people.json \
--conf spark.kubernetes.file.upload.path=$SOURCE_DIR \
--conf spark.kubernetes.driver.volumes.$VOLUME_TYPE.$VOLUME_NAME.mount.path=$MOUNT_PATH \
--conf spark.kubernetes.driver.volumes.$VOLUME_TYPE.$VOLUME_NAME.options.path=$MOUNT_PATH \
--conf spark.kubernetes.executor.volumes.$VOLUME_TYPE.$VOLUME_NAME.mount.path=$MOUNT_PATH \
--conf spark.kubernetes.executor.volumes.$VOLUME_TYPE.$VOLUME_NAME.options.path=$MOUNT_PATH \
</code></pre>
<p>You can always verify the existence of your data by going inside of the pods themselves like so:</p>
<pre><code>kubectl exec -it <driver/executor pod name> bash
(now you should be inside of a bash process in the pod)
cd <mount-path-you-chose>
ls -al
</code></pre>
<p>That last <code>ls -al</code> command should show you a <code>people.json</code> file in there (after having done your spark-submit of course).</p>
<p>Hope this helps!</p>
| Koedlt |
<p>I am learning k8s with eksctl and used this to create a loadbalancer:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: lb
spec:
type: LoadBalancer
selector:
app: lb
ports:
- protocol: TCP
port: 3000
targetPort: 3000
</code></pre>
<p>It was created OK, and <code>kubectl get service/lb</code> lists it, as well as a long AWS domain name representing the external IP (let's call this <code><awsdomain></code>).</p>
<p>I then deployed my app:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp-deployment
namespace: default
labels:
app: myapp
spec:
replicas: 2
selector:
matchLabels:
app: myapp
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: <account-id>.dkr.ecr.<region>.amazonaws.com/myapp:latest
ports:
- containerPort: 3000
</code></pre>
<p>I did <code>kubectl apply -f deployment.yml</code> and that also seems to have worked. However, when I go to my browser, <code>http://<awsdomain>:3000</code> doesn't return anything :(</p>
<p>Is there another resource I'm supposed to create? Thanks.</p>
| yen | <p>Your service selector will not select any pod. Try:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: lb
spec:
type: LoadBalancer
selector:
app: myapp # <-- change to match the pod template
...
</code></pre>
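<p>A quick way to verify the fix (assuming the names from the question) is to check whether the Service has picked up any endpoints; once the selector matches the pod labels, pod IPs will show up:</p>
<pre><code>kubectl get endpoints lb
kubectl describe service lb
</code></pre>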
| gohm'c |
<p>Here is my YAML config (the PV, StatefulSet, and Service all get created fine, no issues there). I tried a bunch of solutions for the Kubernetes Mongo connection string, but none of them worked.</p>
<p>Kubernetes version (minikube):
1.20.1</p>
<p>Storage for config:
NFS (working fine, tested)</p>
<p>OS:
Linux Mint 20</p>
<p>YAML CONFIG:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: auth-pv
spec:
capacity:
storage: 250Mi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: manual
nfs:
path: /nfs/auth
server: 192.168.10.104
---
apiVersion: v1
kind: Service
metadata:
name: mongo
labels:
name: mongo
spec:
ports:
- port: 27017
targetPort: 27017
selector:
role: mongo
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: mongo
spec:
selector:
matchLabels:
app: mongo
serviceName: mongo
replicas: 1
template:
metadata:
labels:
app: mongo
spec:
terminationGracePeriodSeconds: 10
containers:
- name: mongo
image: mongo
ports:
- containerPort: 27017
volumeMounts:
- name: mongo-persistent-storage
mountPath: /data/db
volumeClaimTemplates:
- metadata:
name: mongo-persistent-storage
spec:
storageClassName: manual
accessModes: ["ReadWriteMany"]
resources:
requests:
storage: 250Mi
</code></pre>
| Meet Patel | <p>I have found a few issues in your configuration file. In your Service manifest you use</p>
<pre><code> selector:
role: mongo
</code></pre>
<p>But in your StatefulSet pod template you are using</p>
<pre><code>labels:
app: mongo
</code></pre>
<p>One more thing: you should set <code>clusterIP: None</code> to make it a headless Service, which is recommended for a StatefulSet if you want to access the DB using a DNS name.</p>
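<p>Putting both fixes together, a corrected headless Service could look roughly like this (a sketch based on the manifests above):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: mongo
  labels:
    name: mongo
spec:
  clusterIP: None
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    app: mongo   # matches the StatefulSet pod template labels
</code></pre>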
| Pulak Kanti Bhowmick |
<p>I'm writing a helm chart by making use of a dependency chart available in a chart repository.
Below is my Chart.yaml</p>
<pre><code>appVersion: "1.2"
description: Helm chart for installation and maintenance of Test charts.
name: child-chart
version: 0.2.0
dependencies:
- name: parent-chart
version: 1.0.9
repository: https://testrepo.com/
</code></pre>
<p>The values.yaml of my parent-chart has the below entry:</p>
<pre><code>testboard:
enabled: true
replicaCount: "1"
appName: "test-operator"
</code></pre>
<p>I tried to use this name in my child chart by making use of {{.Values.testboard.appName}}, but this value comes out as <strong>null</strong>. When I use {{.Values.parent-chart.testboard.appName}}, it fails with the error bad character U+002D '-'.</p>
<p>How should I modify my Helm template to get the correct values from my dependency chart?</p>
| mystack | <p>First, I'd like to point out that <em>usually</em> we refer to <em>child</em> charts as the ones our <em>parent</em> chart depends on. To be clearer, let's call those <a href="https://helm.sh/docs/chart_template_guide/subcharts_and_globals/" rel="nofollow noreferrer">subcharts</a>.</p>
<p>To answer your question, if you have something like this in the subchart <code>values.yaml</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>testboard:
appName: something
</code></pre>
<p>this in the parent chart:</p>
<pre class="lang-yaml prettyprint-override"><code>dependencies:
- name: subchart
version: x.x.x
repository: https://does-not-matter
alias: alias-for-subchart # << this is optional!
</code></pre>
<p>you can override <code>testboard.appName</code> from the parent chart like this:</p>
<pre class="lang-yaml prettyprint-override"><code>alias-for-subchart:
testboard:
appName: something-else
</code></pre>
<p>if you provide an alias or</p>
<pre class="lang-yaml prettyprint-override"><code>subchart:
testboard:
appName: something-else
</code></pre>
<p>if you don't.</p>
<p>More explanations in the linked documentation.</p>
| Lebenitza |
<p>In the Kubernetes docs <a href="https://kubernetes.io/docs/concepts/workloads/pods/#working-with-pods" rel="nofollow noreferrer">here</a> I can see that the pod name must be a valid "DNS subdomain name", which means a 253-char limit, but in this article <a href="https://pauldally.medium.com/why-you-try-to-keep-your-deployment-names-to-47-characters-or-less-1f93a848d34c" rel="nofollow noreferrer">here</a> they mention that you should try to keep the deployment name below 47 chars because the pod name is limited to 63 chars. Also, I tried creating a pod with a deployment name of more than 63 chars, and the pod name got truncated to 63 chars.<br />
So, what is the correct char limit for the pod name?</p>
| Binshumesh sachan | <p>A domain name is a series of labels with a maximum total length of 255 bytes, where each label has a maximum length of 63 bytes. The K8s document is referring to the label. See <a href="https://www.freesoft.org/CIE/RFC/1035/9.htm" rel="nofollow noreferrer">here</a> for the RFC standard.</p>
| gohm'c |
<p>I have multiple pods, that scale up and down automatically.</p>
<p>I am using an ingress as entry point. I need to route external traffic to a specific pod base on some conditions (lets say path). At the point the request is made I am sure the specific pod is up.</p>
<p>For example lets say I have domain someTest.com, that normally routes traffic to pod 1, 2 and 3 (lets say I identify them by internal ips - 192.168.1.10, 192.168.1.11 and 192.168.1.13).</p>
<p>When I call someTest.com/specialRequest/12, I need to route the traffic to 192.168.1.12, when I call someTest.com/specialRequest/13, I want to route traffic to 192.168.1.13. For normal cases (someTest.com/normalRequest) I just want to do the lb do his epic job normally.</p>
<p>If pods scale up and 192.168.1.14 appears, I need to be able to call someTest.com/specialRequest/14 and be routed to the mentioned pod.</p>
<p>Is there anyway I can achieve this?</p>
| zozo | <p>Yes, you can easily achieve this using Kubernetes Ingress. Here is a sample code that might help:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: app-ingress
spec:
rules:
- host: YourHostName.com
http:
paths:
- path: /
backend:
serviceName: Service1
servicePort: 8000
- path: /api
backend:
serviceName: Service2
servicePort: 8080
- path: /admin
backend:
serviceName: Service3
servicePort: 80
</code></pre>
<p>Please note that the ingress rules have serviceNames and not pod names, so you will have to create services for your pods. Here is an example of a service which exposes nginx as a service in Kubernetes:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: nginx-service
labels:
io.kompose.service: nginx
spec:
ports:
- name: "80"
port: 80
targetPort: 80
selector:
io.kompose.service: nginx
</code></pre>
| Vikrant |
<p>I have a cron job with the below spec. I am sending a POST request to an endpoint at specific intervals. I need to change the URLs based on the environment, like <code>staging</code> or <code>production</code>.</p>
<p>Is there a way I can use an env variable in place of the domain name, so that I don't have to create two separate files just to use two different URLs?</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: testing
spec:
schedule: "*/20 * * * *"
suspend: false
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 3
failedJobsHistoryLimit: 1
jobTemplate:
spec:
template:
spec:
containers:
- name: testing
image: image
ports:
- containerPort: 3000
args:
- /bin/sh
- -c
- curl -X POST #{USE_ENV_VARIABLE_HERE}/api/v1/automated_tests/workflows #how to make this generic
restartPolicy: Never
</code></pre>
| opensource-developer | <p>You can pass the env variable via a Secret or a ConfigMap. Here I have given an example with a Secret.</p>
<pre><code># secrets.yaml
apiVersion: v1
kind: Secret
metadata:
name: demo-secret
type: Opaque
stringData:
BASE_URL: "example.com"
</code></pre>
<p>Then you can use that Secret as env in the container:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: testing
spec:
schedule: "*/20 * * * *"
suspend: false
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 3
failedJobsHistoryLimit: 1
jobTemplate:
spec:
template:
spec:
containers:
- name: testing
image: image
ports:
- containerPort: 3000
envFrom:
- secretRef:
name: demo-secret
args:
- /bin/sh
- -c
- curl -X POST ${BASE_URL}/api/v1/automated_tests/workflows
restartPolicy: Never
</code></pre>
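<p>If the base URL is not sensitive, the same pattern works with a ConfigMap instead of a Secret; keep one ConfigMap per environment and reference it with <code>configMapRef</code> under <code>envFrom</code>. A minimal sketch:</p>
<pre><code># configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  BASE_URL: "staging.example.com"
</code></pre>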
| Pulak Kanti Bhowmick |
<p>I am using an external TCP/UDP network load balancer (Fortigate), Kubernetes 1.20.6 and Istio 1.9.4.
I have set set externalTrafficPolicy: Local and need to run ingress gateway on every node (as said <a href="https://istio.io/latest/docs/tasks/security/authorization/authz-ingress/#source-ip-address-of-the-original-client" rel="nofollow noreferrer">here</a> in network load balancer tab) . How do I do that?</p>
<p>This is my ingress gateway service:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: istio-ingressgateway
namespace: istio-system
uid: d1a86f50-ad14-415f-9c1e-d186fd72cb31
resourceVersion: '1063961'
creationTimestamp: '2021-04-28T19:25:37Z'
labels:
app: istio-ingressgateway
install.operator.istio.io/owning-resource: unknown
install.operator.istio.io/owning-resource-namespace: istio-system
istio: ingressgateway
istio.io/rev: default
operator.istio.io/component: IngressGateways
operator.istio.io/managed: Reconcile
operator.istio.io/version: 1.9.4
release: istio
annotations:
kubectl.kubernetes.io/last-applied-configuration: >
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"istio-ingressgateway","install.operator.istio.io/owning-resource":"unknown","install.operator.istio.io/owning-resource-namespace":"istio-system","istio":"ingressgateway","istio.io/rev":"default","operator.istio.io/component":"IngressGateways","operator.istio.io/managed":"Reconcile","operator.istio.io/version":"1.9.4","release":"istio"},"name":"istio-ingressgateway","namespace":"istio-system"},"spec":{"ports":[{"name":"status-port","port":15021,"protocol":"TCP","targetPort":15021},{"name":"http2","port":80,"protocol":"TCP","targetPort":8080},{"name":"https","port":443,"protocol":"TCP","targetPort":8443},{"name":"tcp-istiod","port":15012,"protocol":"TCP","targetPort":15012},{"name":"tls","port":15443,"protocol":"TCP","targetPort":15443}],"selector":{"app":"istio-ingressgateway","istio":"ingressgateway"},"type":"LoadBalancer"}}
managedFields:
- manager: istio-operator
      operation: Apply
apiVersion: v1
time: '2021-05-04T18:02:38Z'
fieldsType: FieldsV1
fieldsV1:
'f:metadata':
'f:annotations':
'f:kubectl.kubernetes.io/last-applied-configuration': {}
'f:labels':
'f:app': {}
'f:install.operator.istio.io/owning-resource': {}
'f:install.operator.istio.io/owning-resource-namespace': {}
'f:istio': {}
'f:istio.io/rev': {}
'f:operator.istio.io/component': {}
'f:operator.istio.io/managed': {}
'f:operator.istio.io/version': {}
'f:release': {}
'f:spec':
'f:ports':
'k:{"port":80,"protocol":"TCP"}':
.: {}
'f:name': {}
'f:port': {}
'f:protocol': {}
'f:targetPort': {}
'k:{"port":443,"protocol":"TCP"}':
.: {}
'f:name': {}
'f:port': {}
'f:protocol': {}
'f:targetPort': {}
'k:{"port":15012,"protocol":"TCP"}':
.: {}
'f:name': {}
'f:port': {}
'f:protocol': {}
'f:targetPort': {}
'k:{"port":15021,"protocol":"TCP"}':
.: {}
'f:name': {}
'f:port': {}
'f:protocol': {}
'f:targetPort': {}
'k:{"port":15443,"protocol":"TCP"}':
.: {}
'f:name': {}
'f:port': {}
'f:protocol': {}
'f:targetPort': {}
'f:selector':
'f:app': {}
'f:istio': {}
'f:type': {}
- manager: kubectl-patch
operation: Update
apiVersion: v1
time: '2021-05-04T18:01:23Z'
fieldsType: FieldsV1
fieldsV1:
'f:spec':
'f:externalIPs': {}
'f:externalTrafficPolicy': {}
'f:type': {}
selfLink: /api/v1/namespaces/istio-system/services/istio-ingressgateway
spec:
ports:
- name: status-port
protocol: TCP
port: 15021
targetPort: 15021
nodePort: 30036
- name: http2
protocol: TCP
port: 80
targetPort: 8080
nodePort: 32415
- name: https
protocol: TCP
port: 443
targetPort: 8443
nodePort: 32418
- name: tcp-istiod
protocol: TCP
port: 15012
targetPort: 15012
nodePort: 31529
- name: tls
protocol: TCP
port: 15443
targetPort: 15443
nodePort: 30478
selector:
app: istio-ingressgateway
istio: ingressgateway
clusterIP: 10.103.72.212
clusterIPs:
- 10.103.72.212
type: LoadBalancer
externalIPs:
- 10.43.34.38
- 10.43.34.77
sessionAffinity: None
externalTrafficPolicy: Local
healthCheckNodePort: 30788
status:
loadBalancer: {}
</code></pre>
<p>The firewall has these two addresses, 10.43.34.38 and 10.43.34.77, and relays requests to two K8S nodes on ports 32415 (http) and 32418 (https).</p>
| brgsousa | <p>As brgsousa mentioned in the comment, the solution was to redeploy the ingress gateway as a DaemonSet.</p>
<p>Here is a working YAML file:</p>
<pre><code>apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
meshConfig:
accessLogFile: /dev/stdout
components:
ingressGateways:
- name: istio-ingressgateway
enabled: true
k8s:
overlays:
- apiVersion: apps/v1
kind: Deployment
name: istio-ingressgateway
patches:
- path: kind
value: DaemonSet
- path: spec.strategy
- path: spec.updateStrategy
value:
rollingUpdate:
maxUnavailable: 50%
type: RollingUpdate
egressGateways:
- name: istio-egressgateway
enabled: true
</code></pre>
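<p>After applying it, you can confirm that one gateway pod is scheduled on every node (the label below matches the selector used by the gateway Service in the question):</p>
<pre><code>kubectl -n istio-system get daemonset istio-ingressgateway
kubectl -n istio-system get pods -l app=istio-ingressgateway -o wide
</code></pre>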
| Mikołaj Głodziak |
<p>I've created a secret and when I deploy an application intended to read the secret, the application complains that the secret is a directory.</p>
<p>What am I doing wrong? The file is intended to be read as, well, a file.</p>
<pre><code>kc logs <pod>
(error) /var/config/my-file.yaml: is a directory.
</code></pre>
<p>The secret is created like this.</p>
<pre><code>kubectl create secret generic my-file.yaml --from-file=my-file.yaml
</code></pre>
<p>And here is the deployment.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: a-name
spec:
replicas: 1
selector:
matchLabels:
name: a-name
template:
metadata:
labels:
name: a-name
spec:
volumes:
- name: my-secret-volume
secret:
secretName: my-file.yaml
containers:
- name: a-name
image: test/image:v1.0.0
volumeMounts:
- name: my-secret-volume
mountPath: /var/config/my-file.yaml
subPath: my-file.yaml
readOnly: true
ports:
- containerPort: 1234
- containerPort: 5678
imagePullPolicy: Always
args:
- run
- --config
- /var/config/my-file.yaml
revisionHistoryLimit: 1
</code></pre>
| Martin01478 | <p>You are using <code>subPath</code> in the volume mount section. According to the Kubernetes <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="nofollow noreferrer">volume doc</a>, you should use <code>subPath</code> when you need the same volume for different purposes in the same pod.</p>
<p>But here you are using the volume for only a single purpose. Still, I'll give you both YAML variants, with subPath and without subPath.</p>
<p><b>With SubPath</b></p>
<pre><code> volumeMounts:
- name: my-secret-volume
mountPath: /var/config
subPath: config
readOnly: true
</code></pre>
<p><b>WithOut SubPath</b></p>
<pre><code> volumeMounts:
- name: my-secret-volume
mountPath: /var/config
readOnly: true
</code></pre>
<p>The rest of the manifest file will be the same in both cases.</p>
| Pulak Kanti Bhowmick |
<p>I have a 3-node cluster. What I am going to do is create a persistent volume with <strong>ReadWriteMany</strong> access mode for a MySQL Deployment. Also, the volume type is GCEPersistentDisk.
My question is: if I use the ReadWriteMany access mode for the MySQL Deployment, will it be an issue? Because the volume can be mounted by many nodes. If I am wrong, please correct me.</p>
| YMA | <p>Yes, it can be an issue when the backend doesn't support ReadWriteMany; however, as far as I know, MySQL supports ReadWriteMany, so it should not be an issue in your case.</p>
| Vikrant |
<p>I have the following anti-affinity rule configured in my k8s Deployment:</p>
<pre><code>spec:
...
selector:
matchLabels:
app: my-app
environment: qa
...
template:
metadata:
labels:
app: my-app
environment: qa
version: v0
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- my-app
topologyKey: kubernetes.io/hostname
</code></pre>
<p>In which I say that I do not want any of the Pod replica to be scheduled on a node of my k8s cluster in which is already present a Pod of the same application. So, for instance, having:</p>
<pre><code>nodes(a,b,c) = 3
replicas(1,2,3) = 3
</code></pre>
<p><strong>replica_1</strong> scheduled in <strong>node_a</strong>, <strong>replica_2</strong> scheduled in <strong>node_b</strong> and <strong>replica_3</strong> scheduled in <strong>node_c</strong></p>
<p>As such, I have each Pod scheduled in different nodes.</p>
<p>However, I was wondering if there is a way to specify that: "I want to spread my Pods in at least 2 nodes" to guarantee high availability without spreading all the Pods to other nodes, for example:</p>
<pre><code>nodes(a,b,c) = 3
replicas(1,2,3) = 3
</code></pre>
<p><strong>replica_1</strong> scheduled in <strong>node_a</strong>, <strong>replica_2</strong> scheduled in <strong>node_b</strong> and <strong>replica_3</strong> scheduled (<strong>again</strong>) in <strong>node_a</strong></p>
<p>So, to sum up, I would like to have a softer constraint, that allow me to guarantee high availability spreading Deployment's replicas across at least 2 nodes, without having to launch a node for each Pod of a certain application.</p>
<p>Thanks!</p>
| Luca Tartarini | <p>I think I found a solution to your problem. Look at this example yaml file:</p>
<pre><code>spec:
topologySpreadConstraints:
- maxSkew: 1
topologyKey: kubernetes.io/hostname
whenUnsatisfiable: DoNotSchedule
labelSelector:
matchLabels:
example: app
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- worker-1
- worker-2
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 50
preference:
matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- worker-1
</code></pre>
<p><strong>Idea of this configuration:</strong>
I'm using nodeAffinity here to indicate on which nodes the pods can be placed:</p>
<pre><code>- key: kubernetes.io/hostname
</code></pre>
<p>and</p>
<pre><code>values:
- worker-1
- worker-2
</code></pre>
<p>It is important to set the following line:</p>
<pre><code>- maxSkew: 1
</code></pre>
<p>According to the <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/#spread-constraints-for-pods" rel="nofollow noreferrer">documentation</a>:</p>
<blockquote>
<p><strong>maxSkew</strong> describes the degree to which Pods may be unevenly distributed. It must be greater than zero.</p>
</blockquote>
<p>Thanks to this, the difference in the number of assigned pods between nodes will always be at most 1.</p>
<p>This section:</p>
<pre><code> preferredDuringSchedulingIgnoredDuringExecution:
- weight: 50
preference:
matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- worker-1
</code></pre>
<p>is optional; however, it will allow you to fine-tune the pod distribution across the available nodes even better. <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity" rel="nofollow noreferrer">Here</a> you can find a description of the differences between <code>requiredDuringSchedulingIgnoredDuringExecution</code> and <code>preferredDuringSchedulingIgnoredDuringExecution</code>:</p>
<blockquote>
<p>Thus an example of <code>requiredDuringSchedulingIgnoredDuringExecution</code> would be "only run the pod on nodes with Intel CPUs" and an example <code>preferredDuringSchedulingIgnoredDuringExecution</code> would be "try to run this set of pods in failure zone XYZ, but if it's not possible, then allow some to run elsewhere".</p>
</blockquote>
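<p>To see how the replicas actually ended up spread across nodes (label taken from the example above), something like this can be used:</p>
<pre><code>kubectl get pods -l example=app -o wide
# The NODE column shows how many replicas landed on each node
</code></pre>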
| Mikołaj Głodziak |
<p>I'm currently learning Kubernetes and recently learnt about using ConfigMaps for a Containers environment variables.</p>
<p>Let's say I have the following simple ConfigMap:</p>
<pre><code>apiVersion: v1
data:
MYSQL_ROOT_PASSWORD: password
kind: ConfigMap
metadata:
creationTimestamp: null
name: mycm
</code></pre>
<p>I know that a container of some deployment can consume this environment variable via:</p>
<p><code>kubectl set env deployment mydb --from=configmap/mycm</code></p>
<p>or by specifying it manually in the manifest like so:</p>
<pre><code>containers:
- env:
- name: MYSQL_ROOT_PASSWORD
valueFrom:
configMapKeyRef:
key: MYSQL_ROOT_PASSWORD
name: mycm
</code></pre>
<p>However, this isn't what I am after, since I'd have to manually change the environment variables each time the ConfigMap changes.</p>
<p>I am aware that mounting a ConfigMap to the Pod's volume allows for the auto-updating of ConfigMap values. I'm currently trying to find a way to set a Container's environment variables to those stored in the mounted config map.</p>
<p>So far I have the following YAML manifest:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: mydb
name: mydb
spec:
replicas: 1
selector:
matchLabels:
app: mydb
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: mydb
spec:
containers:
- image: mariadb
name: mariadb
resources: {}
args: ["export MYSQL_ROOT_PASSWORD=$(cat /etc/config/MYSQL_ROOT_PASSWORD)"]
volumeMounts:
- name: config-volume
mountPath: /etc/config
env:
- name: MYSQL_ROOT_PASSWORD
value: temp
volumes:
- name: config-volume
configMap:
name: mycm
status: {}
</code></pre>
<p>I'm attempting to set the <code>MYSQL_ROOT_PASSWORD</code> to some temporary value, and then update it to mounted value as soon as the container starts via <code>args: ["export MYSQL_ROOT_PASSWORD=$(cat /etc/config/MYSQL_ROOT_PASSWORD)"]</code></p>
<p>As I somewhat expected, this didn't work, resulting in the following error:</p>
<p><code>/usr/local/bin/docker-entrypoint.sh: line 539: /export MYSQL_ROOT_PASSWORD=$(cat /etc/config/MYSQL_ROOT_PASSWORD): No such file or directory</code></p>
<p>I assume this is because the volume is mounted after the entrypoint. I tried adding a readiness probe to wait for the mount but this didn't work either:</p>
<pre><code>readinessProbe:
exec:
command: ["sh", "-c", "test -f /etc/config/MYSQL_ROOT_PASSWORD"]
initialDelaySeconds: 5
periodSeconds: 5
</code></pre>
<p>Is there any easy way to achieve what I'm trying to do, or is it impossible?</p>
| Anthony Seager | <p>So I managed to find a solution, with a lot of inspiration from <a href="https://stackoverflow.com/a/75265250/13557241">this answer</a>.</p>
<p>Essentially, what I did was create a sidecar container based on the alpine K8s image that mounts the configmap and constantly watches for any changes, since the K8s API automatically updates the mounted configmap when the configmap is changed. This required the following script, <code>watch_passwd.sh</code>, which makes use of <code>inotifywait</code> to watch for changes and then uses the K8s API to rollout the changes accordingly:</p>
<pre class="lang-bash prettyprint-override"><code>update_passwd() {
kubectl delete secret mysql-root-passwd > /dev/null 2>&1
kubectl create secret generic mysql-root-passwd --from-file=/etc/config/MYSQL_ROOT_PASSWORD
}
update_passwd
while true
do
inotifywait -e modify "/etc/config/MYSQL_ROOT_PASSWORD"
update_passwd
kubectl rollout restart deployment $1
done
</code></pre>
<p>The Dockerfile is then:</p>
<pre><code>FROM docker.io/alpine/k8s:1.25.6
RUN apk update && apk add inotify-tools
COPY watch_passwd.sh .
</code></pre>
<p>After building the image (locally in this case) as <em>mysidecar</em>, I create the ServiceAccount, Role, and RoleBinding outlined <a href="https://stackoverflow.com/a/75265250/13557241">here</a>, adding rules for deployments so that they can be restarted by the sidecar.</p>
<p>After this, I piece it all together to create the following YAML Manifest (note that <code>imagePullPolicy</code> is set to <code>Never</code>, since I created the image locally):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: mydb
name: mydb
spec:
replicas: 3
selector:
matchLabels:
app: mydb
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: mydb
spec:
serviceAccountName: secretmaker
containers:
- image: mysidecar
name: mysidecar
imagePullPolicy: Never
command:
- /bin/sh
- -c
- |
./watch_passwd.sh $(DEPLOYMENT_NAME)
env:
- name: DEPLOYMENT_NAME
valueFrom:
fieldRef:
fieldPath: metadata.labels['app']
volumeMounts:
- name: config-volume
mountPath: /etc/config
- image: mariadb
name: mariadb
resources: {}
envFrom:
- secretRef:
name: mysql-root-passwd
volumes:
- name: config-volume
configMap:
name: mycm
status: {}
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: secretmaker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app: mydb
name: secretmaker
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create", "get", "delete", "list"]
- apiGroups: ["apps"]
resources: ["deployments"]
verbs: ["get", "list", "watch", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app: mydb
name: secretmaker
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: secretmaker
subjects:
- kind: ServiceAccount
name: secretmaker
namespace: default
---
</code></pre>
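<p>To try it out, the ConfigMap can be updated in place; after the kubelet refreshes the mounted file (which can take up to a minute or so), the sidecar recreates the secret and restarts the deployment. A quick test, assuming the names above:</p>
<pre><code>kubectl create configmap mycm \
  --from-literal=MYSQL_ROOT_PASSWORD=newpassword \
  --dry-run=client -o yaml | kubectl apply -f -

# Watch the restart triggered by the sidecar
kubectl rollout status deployment mydb
</code></pre>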
<p>It all works as expected! Hopefully this is able to help someone out in the future. Also, if anybody comes across this and has a better solution please feel free to let me know :)</p>
| Anthony Seager |
<p>I have the following <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">headless service</a> in my kubernetes cluster :</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
labels:
app: foobar
name: foobar
spec:
clusterIP: None
clusterIPs:
- None
internalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: foobar
sessionAffinity: ClientIP
sessionAffinityConfig:
clientIP:
timeoutSeconds: 10800
type: ClusterIP
</code></pre>
<p>Behind are running couple of pods managed by a statefulset.</p>
<p><strong>Lets try to reach my pods individually :</strong></p>
<ul>
<li>Running an alpine pod to contact my pods :</li>
</ul>
<pre class="lang-bash prettyprint-override"><code>> kubectl run alpine -it --tty --image=alpine -- sh
</code></pre>
<ul>
<li>Adding curl to fetch webpage :</li>
</ul>
<pre><code>alpine#> apk add curl
</code></pre>
<ul>
<li>I can curl into each of my pods :</li>
</ul>
<pre><code>alpine#> curl -s pod-1.foobar
hello from pod 1
alpine#> curl -s pod-2.foobar
hello from pod 2
</code></pre>
<p>It works just as expected.</p>
<p>Now I want to have a service that will loadbalance between my pods.
Let's try to use that same <code>foobar</code> service :</p>
<pre><code>alpine#> curl -s foobar
hello from pod 1
alpine#> curl -s foobar
hello from pod 2
</code></pre>
<p>It works just well. At least almost : In my headless service, I have specified <code>sessionAffinity</code>. As soon as I run a <code>curl</code> to a pod, I should stick to it.</p>
<p>I've tried the exact same test with a <em>normal</em> service (not headless) and this time it works as expected. It load balances between pods on the first run BUT then sticks to the same pod afterwards.</p>
<p><strong>Why sessionAffinity doesn't work on a headless service ?</strong></p>
| Will | <p>The affinity capability is provided by kube-proxy; only connections established through the proxy can have the client IP "stick" to a particular pod for a period of time. In the case of a headless service, your client is given a list of pod IPs and it is up to your client app to select which IP to connect to. Because the order of the IPs in the list is not always the same, a typical app that always picks the first IP will end up connecting to a backend pod more or less randomly.</p>
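<p>You can see this from inside the alpine pod: the headless Service resolves to one A record per ready pod rather than to a single virtual IP, for example:</p>
<pre><code>alpine#> nslookup foobar
# Returns one A record per ready pod; a ClusterIP service would return a single service IP
</code></pre>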
| gohm'c |
<p>I have an nginx deployment on k8s that is exposed via a nodeport service. Is it possible by using the GCP firewall to permit that only an application LB can talk with these nodeports?</p>
<p>I wouldn't like to leave these two NodePorts open to everyone.</p>
| Daniel Marques | <p>Surely you can control access traffic to your VM instances via the firewall.</p>
<p>That is why the firewall service exists.</p>
<p>If you created a VM in the default VPC with the default firewall settings, the firewall will deny all traffic from outside.</p>
<p>You just need to write a rule to allow traffic from the application LB.</p>
<p>According to the <a href="https://cloud.google.com/load-balancing/docs/https/ext-http-lb-simple#firewall" rel="nofollow noreferrer">Google document</a>, you need to allow traffic from the <code>130.211.0.0/22</code> and <code>35.191.0.0/16</code> IP ranges.</p>
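<p>A sketch of such a rule with <code>gcloud</code> (the rule name, network, target tag and port range are placeholders you would adapt to your cluster):</p>
<pre><code>gcloud compute firewall-rules create allow-lb-to-nodeports \
  --network=default \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:30000-32767 \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --target-tags=my-cluster-node
</code></pre>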
| SeungwooLee |
<p>I need to process multiple files in S3 with k8s, so I've created one Job on k8s that contains approximately 500 containers, each container with different envs. However, the job runs very slowly and it has failed multiple times.</p>
<p>I'm using the Kubernetes Python API to submit the job, like this (fs is an s3fs-style filesystem handle and path is the S3 prefix, both defined elsewhere):</p>
<pre><code>def read_path(path):
    files = []
    suffixes = ('_SUCCESS', 'referen/')
    files_path = fs.listdir(f"s3://{path}")
    if files_path is not None:
        for num, file in enumerate(files_path, start=1):
            if not file['Key'].endswith(suffixes):
                files.append(file['Key'])
    return files

def from_containers(path):
    # Build one container per file found under the S3 prefix
    containers = []
    for num, file in enumerate(read_path(path), start=1):
        containers.append(client.V1Container(name=f'hcm1-{num}', image='image-python',
                                             command=['python3', 'model.py'],
                                             env=[client.V1EnvVar(name='model', value='hcm1'),
                                                  client.V1EnvVar(name='referen', value=f"s3://{file}")]))
    return containers

template = client.V1PodTemplateSpec(metadata=client.V1ObjectMeta(name="hcm1"),
                                    spec=client.V1PodSpec(restart_policy="OnFailure",
                                                          containers=from_containers(path),
                                                          image_pull_secrets=[client.V1LocalObjectReference(name="secret")]))
spec = client.V1JobSpec(template=template, backoff_limit=20)
job = client.V1Job(api_version="batch/v1", kind="Job", metadata=client.V1ObjectMeta(name="hcm1"), spec=spec)
api_response = batch_v1.create_namespaced_job(body=job, namespace="default")
print("Job created. status='%s'" % str(api_response.status))
</code></pre>
<p>I tried to use some config, like completions=10, concurrency=2, but then my job executed far too many times: 10*500=5000.</p>
<p>What's the better way to create a job on k8s with multiple containers?</p>
<ol>
<li>1 Job -> 1 Pod -> 500 containers</li>
<li>1 Job -> 500 Pods (Each pod 1 container).</li>
<li>Is there another way?**</li>
</ol>
| Alan Miranda | <p>Your question is slightly <a href="https://english.meta.stackexchange.com/a/6528">opinion based</a>. You can find many solutions depending on your situation and approach.
However, <a href="https://stackoverflow.com/users/10008173/david-maze">David Maze</a> made a very good point:</p>
<blockquote>
<p>If you have this scripting setup already, why not 500 Jobs? (Generally you want to avoid multiple containers per pod.)</p>
</blockquote>
<p>Yes, if you are writing a script, this could be a very good solution to the problem. It definitely won't be that complicated, because you create one Job per Pod, with one container in each Pod.</p>
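<p>For illustration, each per-file Job can be as small as the following sketch (image, env names and pull secret are taken from the question; the S3 path and name suffix would be filled in by your script for each file):</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: hcm1-1            # one Job per file, e.g. suffixed with the file index
spec:
  backoffLimit: 20
  template:
    spec:
      restartPolicy: OnFailure
      imagePullSecrets:
        - name: secret
      containers:
        - name: hcm1
          image: image-python
          command: ["python3", "model.py"]
          env:
            - name: model
              value: hcm1
            - name: referen
              value: s3://bucket/path/to/file   # filled in per file
</code></pre>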
<p>Theoretically, you can also define your own <a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/" rel="nofollow noreferrer">Custom Resource</a> and write a controller, similar to the Job controller, which supports multiple pod templates.</p>
<p>See also <a href="https://stackoverflow.com/questions/63160405/can-a-single-kubernetes-job-contain-multiple-pods-with-different-parallelism-def">this question.</a></p>
| Mikołaj Głodziak |
<p>I am following Kubernetes documentations on <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">secret</a>. I have this <code>secret.yaml</code> file:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: mysecret
type: Opaque
data:
val1: YXNkZgo=
stringData:
val1: asdf
</code></pre>
<p>and <code>secret-pod.yaml</code>:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mysecretpod
spec:
containers:
- name: mypod
image: nginx
volumeMounts:
- name: myval
mountPath: /etc/secret
readOnly: true
volumes:
- name: myval
secret:
secretName: val1
items:
- key: val1
path: myval
</code></pre>
<p>I use <code>kubectl apply -f</code> on both of these files. Then using <code>kubectl exec -it mysecretpod -- cat /etc/secret/myval</code>, I can see the value <code>asdf</code> in the file <code>/etc/secret/myval</code> of <code>mysecretpod</code>.</p>
<p>However I want the mounted path to be <code>/etc/myval</code>. Thus I make the following change in <code>secret-pod.yaml</code>:</p>
<pre><code> volumeMounts:
- name: myval
mountPath: /etc
readOnly: true
</code></pre>
<p>After using <code>kubectl apply -f</code> on that file again, I check pod creation with <code>kubectl get pods --all-namespaces</code>. This is what I see:</p>
<pre><code>NAMESPACE NAME READY STATUS RESTARTS AGE
default mysecretpod 0/1 CrashLoopBackOff 2 (34s ago) 62s
</code></pre>
<p>Looking into that pod using <code>kubectl describe pods mysecretpod</code>, this is what I see:</p>
<pre><code>Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 35s default-scheduler Successfully assigned default/mysecretpod to minikube
Normal Pulled 32s kubelet Successfully pulled image "nginx" in 2.635766453s
Warning Failed 31s kubelet Error: failed to start container "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/var/lib/docker/containers/c84a8d278dc2f131daf9f322d26ff8c54d68cea8cd9c0ce209f68d7a9b677b3c/resolv.conf" to rootfs at "/etc/resolv.conf" caused: open /var/lib/docker/overlay2/4aaf54c61f7c80937a8edc094b27d6590538632e0209165e0b8c96e9e779a4b6/merged/etc/resolv.conf: read-only file system: unknown
Normal Pulled 28s kubelet Successfully pulled image "nginx" in 3.313846185s
Warning Failed 28s kubelet Error: failed to start container "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/var/lib/docker/containers/c84a8d278dc2f131daf9f322d26ff8c54d68cea8cd9c0ce209f68d7a9b677b3c/resolv.conf" to rootfs at "/etc/resolv.conf" caused: open /var/lib/docker/overlay2/34af5138f14d192ade7e53211476943ea82cd2c8186d69ca79a3adf2abbc0978/merged/etc/resolv.conf: read-only file system: unknown
Warning BackOff 24s kubelet Back-off restarting failed container
Normal Pulling 9s (x3 over 34s) kubelet Pulling image "nginx"
Normal Created 7s (x3 over 32s) kubelet Created container mypod
Normal Pulled 7s kubelet Successfully pulled image "nginx" in 2.73055072s
Warning Failed 6s kubelet Error: failed to start container "mypod": Error response from daemon: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:76: mounting "/var/lib/docker/containers/c84a8d278dc2f131daf9f322d26ff8c54d68cea8cd9c0ce209f68d7a9b677b3c/resolv.conf" to rootfs at "/etc/resolv.conf" caused: open /var/lib/docker/overlay2/01bfa6b2c35d5eb12ad7ad204a5acc58688c1e04d9b5891382e48c26d2e7077f/merged/etc/resolv.conf: read-only file system: unknown
</code></pre>
<p>Why does this fail? Is it possible to have a secret mounted at the <code>/etc</code> level instead of <code>/etc/something</code> level? If yes, how can I achieve that? Thank you so much!</p>
| CaTx | <pre><code>volumeMounts:
- name: myval
mountPath: /etc
readOnly: true
</code></pre>
<p>Instead of mounting over the whole <code>/etc</code> <strong>directory</strong> (which hides files the container needs, such as <code>resolv.conf</code>, as the error shows), try mounting the secret as a single file:</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: nginx
type: Opaque
stringData:
val1: asdf
---
apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- name: nginx
image: nginx:alpine
volumeMounts:
- name: myval
mountPath: /etc/myval
subPath: myval
volumes:
- name: myval
secret:
secretName: nginx
items:
- key: val1
path: myval
...
</code></pre>
| gohm'c |
<p>I am new to the world of Kubernetes and AWS EKS, and it would be great to get some support.</p>
<p>I am trying to deploy a node app. I have the correct IAM policies attached to my IAM role on EKS, and I have also set up the correct tags on the private and public subnets.</p>
<p>My Kubernetes yml looks like this.</p>
<pre><code>kind: Deployment
metadata:
name: test
spec:
replicas: 1
selector:
matchLabels:
app: test
template:
metadata:
labels:
app: test
spec:
containers:
- name: test
image: test:latest
ports:
- containerPort: 3000
imagePullPolicy: Always
---
kind: Service
apiVersion: v1
metadata:
name: test
spec:
type: LoadBalancer
selector:
app: test
ports:
- protocol: TCP
port: 80
targetPort: 9376
</code></pre>
<p>The service starts, but the external IP just keeps saying pending and no load balancer is provisioned.</p>
<p>Thanks</p>
| AC88 | <p>To troubleshoot this issue, you can find the related failure logs in the AWS EKS cluster control plane logs. Please refer to <a href="https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html#viewing-control-plane-logs" rel="nofollow noreferrer">this</a> AWS document for the steps to view (and enable) the EKS control plane logs.</p>
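<p>If the control plane logs are not enabled yet, a minimal sketch of turning them on with the AWS CLI could look like the following (the cluster name and region are placeholders):</p>
<pre><code># Enable all control plane log types for the cluster (adjust name/region)
aws eks update-cluster-config \
  --region us-east-1 \
  --name my-cluster \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'
</code></pre>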
<p>Once the control plane logs are available, you can run the following query in CloudWatch Logs Insights. For information about running a CloudWatch Logs Insights query, please refer to <a href="https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/CWL_AnalyzeLogData_RunSampleQuery.html#CWL_AnalyzeLogData_RunQuerySample" rel="nofollow noreferrer">this</a> AWS document. After the query completes, check the value of the <code>responseObject.reason</code> field, or expand the message to view the details.</p>
<pre><code>fields @timestamp, @message, requestObject.kind, requestObject.metadata.name,requestObject.spec.type,responseObject.status,responseObject.message,responseObject.reason,requestObject.spec.selector.app
| filter requestObject.spec.type='LoadBalancer'
| sort @timestamp desc
</code></pre>
| amitd |
<h3>Objective</h3>
<p>My objective is to connect to an RDS (Postgres) database from a pod running in an AWS EKS cluster. I am using Terraform for provisioning, but I am not necessarily looking for a Terraform code solution.</p>
<h2>What I have tried</h2>
<p>I have created the database (trimmed down settings like password, username...etc.) like:</p>
<pre class="lang-sh prettyprint-override"><code>resource "aws_db_instance" "rds-service" {
name = "serviceDB"
engine = "postgres"
db_subnet_group_name = aws_db_subnet_group.service_subnet_group.name
}
</code></pre>
<p>Then created a security group to allow traffic</p>
<pre class="lang-sh prettyprint-override"><code>resource "aws_security_group" "db" {
name = "service-rds-access"
vpc_id = module.vpc.vpc_id
ingress {
description = "Postgres from VPC"
from_port = 5432
to_port = 5432
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
}
</code></pre>
<p>Which uses a subnet group that I have created (<em><strong>Note that RDS is only deployed on private subnets, and I used the same subnets which are a subset of the subnets that I have used to deploy my service on EKS</strong></em>)</p>
<pre class="lang-sh prettyprint-override"><code>resource "aws_db_subnet_group" "service_subnet_group" {
name = "service-subnet-group"
subnet_ids = [module.vpc.subnet_a_private_id, module.vpc.subnet_b_private_id]
}
</code></pre>
<h2>What does not work</h2>
<p>Upon attempting to connect from the Pod, I can't reach the RDS. I have also tried getting a shell into the pod and attempting to manually connect to the RDS Instance like:</p>
<p>Running <code>psql --version</code> I got: <code>psql (PostgreSQL) 11.9</code>, and then when I tried to authenticate via <code>psql</code> like:</p>
<pre class="lang-sh prettyprint-override"><code>psql --host=$HOST_NAME --port=5432 --username=$USER_NAME --password --dbname=postgres
</code></pre>
<p>I got (edits are mine):</p>
<pre class="lang-sh prettyprint-override"><code>psql: could not connect to server: Connection timed out
Is the server running on host "<hostname>.amazonaws.com" <IP> and accepting
TCP/IP connections on port 5432?
</code></pre>
| alt-f4 | <p>You can use the following options to find the root cause:</p>
<ol>
<li>Use the <a href="https://aws.amazon.com/blogs/aws/new-vpc-insights-analyzes-reachability-and-visibility-in-vpcs/" rel="nofollow noreferrer">VPC Reachability Analyzer</a>: create and analyze a path with Source type "Network Interfaces" and the Source set to the network interface ID of the EC2 node where your pods are deployed. Select Destination type "Network Interfaces" and the Destination set to the network interface ID of the RDS DB instance. Set the Destination port to 5432, keep the Protocol as TCP, and then run the path analysis. If you have multiple EC2 nodes for your pods, I assume these nodes have the same setup.</li>
</ol>
<p>Note: it takes a few minutes for this path analysis to complete.</p>
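<p>As a rough sketch, the same check can also be started from the AWS CLI (the ENI IDs below are placeholders for the node's and the RDS instance's network interfaces; double-check the flag names against your CLI version):</p>
<pre><code># Create a path from the node's ENI to the RDS instance's ENI on 5432/TCP
aws ec2 create-network-insights-path \
  --source eni-0123456789abcdef0 \
  --destination eni-0fedcba9876543210 \
  --protocol tcp \
  --destination-port 5432

# Start the analysis using the path id returned by the previous command
aws ec2 start-network-insights-analysis \
  --network-insights-path-id nip-0123456789abcdef0
</code></pre>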
<p>OR</p>
<ol start="2">
<li>If you have enabled <strong>VPC Flow Logs</strong>, you can trace them to check which AWS resource is rejecting the network traffic. For more information on VPC flow logs, please refer to <a href="https://aws.amazon.com/blogs/aws/vpc-flow-logs-log-and-view-network-traffic-flows/" rel="nofollow noreferrer">this</a> document from AWS.</li>
</ol>
<p>Furthermore, check the VPC flow logs of the network interface(s) of the EC2 nodes that host your pods and of the RDS DB instance.</p>
| amitd |
<p>I have a Kubernetes cluster. Inside my cluster is a Django application which needs to connect to my Kubernetes cluster on GKE. Upon my Django start up (inside my Dockerfile), I authenticate with Google Cloud by using:</p>
<pre><code>gcloud auth activate-service-account $GKE_SERVICE_ACCOUNT_NAME --key-file=$GOOGLE_APPLICATION_CREDENTIALS
gcloud config set project $GKE_PROJECT_NAME
gcloud container clusters get-credentials $GKE_CLUSTER_NAME --zone $GKE_ZONE
</code></pre>
<p>I am not really sure if I need to do this every time my Django container starts, and I am not sure I understand how authentication to Google Cloud works. Could I perhaps just generate my Kubeconfig file, store it somewhere safe, and use it all the time instead of authenticating?
In other words, is a Kubeconfig file enough to connect to my GKE cluster?</p>
| s3nti3ntB | <p>If your service is running in a Pod inside the GKE cluster you want to connect to, use a Kubernetes service account to authenticate.</p>
<ol>
<li><p>Create a Kubernetes service account and attach it to your Pod. If your Pod already has a Kubernetes service account, you may skip this step.</p>
</li>
<li><p>Use Kubernetes RBAC to grant the Kubernetes service account the correct permissions.</p>
</li>
</ol>
<p>The following example grants <strong>edit</strong> permissions in the <strong>prod</strong> namespace:</p>
<pre><code>kubectl create rolebinding yourserviceaccount \
--clusterrole=edit \
--serviceaccount=yournamespace:yourserviceaccount\
--namespace=prod
</code></pre>
<ol start="3">
<li>At runtime, when your service invokes <code>kubectl</code>, it automatically receives the credentials you configured.</li>
</ol>
<p>You can also store the credentials as a <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">secret</a> and mount it on your pod so that it can read them from there</p>
<p>To use a Secret with your workloads, you can specify environment variables that reference the Secret's values, or mount a volume containing the Secret.</p>
<p>You can create a Secret using the command-line or a YAML file.</p>
<p>Here is an example using Command-line</p>
<pre><code>kubectl create secret SECRET_TYPE SECRET_NAME DATA
</code></pre>
<p><code>SECRET_TYPE:</code> the Secret type, which can be one of the following:</p>
<ul>
<li><code>generic:</code>Create a Secret from a local file, directory, or literal value.</li>
<li><code>docker-registry:</code>Create a <code>dockercfg</code> Secret for use with a Docker registry. Used to authenticate against Docker registries.</li>
<li><code>tls:</code>Create a TLS secret from the given public/private key pair. The public/private key pair must already exist. The public key certificate must be .PEM encoded and match the given private key.</li>
</ul>
<p>For most Secrets, you use the <code>generic</code> type.</p>
<p><code>SECRET_NAME:</code> the name of the Secret you are creating.</p>
<p><code>DATA:</code> the data to add to the Secret, which can be one of the following:</p>
<ul>
<li>A path to a directory containing one or more configuration files, indicated using the <code>--from-file</code> or <code>--from-env-file</code> flags.</li>
<li>Key-value pairs, each specified using <code>--from-literal</code> flags.</li>
</ul>
<p>If you need more information about <code>kubectl create</code> you can check the <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create" rel="nofollow noreferrer">reference documentation</a></p>
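<p>As a minimal sketch (the secret name, key file path, image and mount path are all placeholders), you could create the Secret with <code>kubectl create secret generic gke-sa-key --from-file=key.json=/path/to/key.json</code> and mount it into your Pod like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: django-app            # placeholder
spec:
  containers:
  - name: django
    image: my-django-image    # placeholder
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /secrets/key.json    # points at the mounted key file
    volumeMounts:
    - name: sa-key
      mountPath: /secrets
      readOnly: true
  volumes:
  - name: sa-key
    secret:
      secretName: gke-sa-key
</code></pre>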
| Jorge Navarro |
<p>I am trying to get the ingress EXTERNAL-IP in k8s. Is there any way to get the details from a Terraform data block, like using data "azurerm_kubernetes_cluster" or something?</p>
| iluv_dev | <p>Another solution would be to use the <a href="https://registry.terraform.io/providers/hashicorp/dns/latest/docs" rel="nofollow noreferrer">DNS provider's</a> <a href="https://registry.terraform.io/providers/hashicorp/dns/latest/docs/data-sources/dns_a_record_set" rel="nofollow noreferrer"><code>dns_a_record_set</code></a> data source to resolve the FQDN at build time. Unlike the IP, the FQDN is exported from the azurerm_kubernetes_cluster resource.</p>
<p>That would allow you to work with the resulting IP address via the <code>dns_a_record_set</code>'s <code>addrs</code> attribute, as shown below (an example where the IP was needed for the destination of a firewall rule):</p>
<pre><code># not shown:
# * declaring dns provider in required_providers block
# * supporting resources eg azurerm_resource_group, etc
resource "azurerm_kubernetes_cluster" "this" {
...
}
data "dns_a_record_set" "aks_api_ip" {
host = azurerm_kubernetes_cluster.this.fqdn
}
resource "azurerm_firewall_network_rule_collection" "firewall_network_rule_collection" {
name = "ip_based_network_rules"
azure_firewall_name = azurerm_firewall.this.name
resource_group_name = azurerm_resource_group.this.name
priority = 200
action = "Allow"
rule {
name = "aks-nodes-to-control-plane"
description = "Azure Global required network rules: https://learn.microsoft.com/en-us/azure/aks/limit-egress-traffic"
source_addresses = azurerm_subnet.this.address_prefixes
destination_ports = [ "443", "1194", "9000" ]
destination_addresses = data.dns_a_record_set.aks_api_ip.addrs
protocols = [
"UDP",
"TCP"
]
}
...
}
</code></pre>
<p>The above worked in my case, and successfully added the correct IP to the rule destination. No <code>depends_on</code> needed, Terraform thankfully is able to suss out build order.</p>
| epopisces |
<p>Memory and CPU resources of a container can be tracked using Prometheus. But can we track the I/O of a container? Are there any metrics available?</p>
| Manish Khandelwal | <p>If you are using Docker containers you can check the data with the <code>docker stats</code> command (as <a href="https://stackoverflow.com/users/6309601/p">P...</a> mentioned in the comment). <a href="https://docs.docker.com/engine/reference/commandline/stats/#extended-description" rel="nofollow noreferrer">Here</a> you can find more information about this command.</p>
<blockquote>
<p>If you want to check pods cpu/memory usage without installing any third party tool then you can get memory and cpu usage of pod from cgroup.</p>
<ol>
<li>Go to pod's exec mode <code>kubectl exec pod_name -- /bin/bash</code></li>
<li>Go to <code>cd /sys/fs/cgroup/cpu</code> for cpu usage run <code>cat cpuacct.usage</code></li>
<li>Go to <code>cd /sys/fs/cgroup/memory</code> for memory usage run <code>cat memory.usage_in_bytes</code></li>
</ol>
</blockquote>
<p>For more, look at this <a href="https://stackoverflow.com/questions/54531646/checking-kubernetes-pod-cpu-and-memory">similar question</a>.
<a href="https://stackoverflow.com/questions/51641310/kubernetes-top-vs-linux-top/51656039#51656039">Here</a> you can find another interesting question. You should also know that</p>
<blockquote>
<p>Containers inside pods partially share <code>/proc</code> with the host system, including paths with memory and CPU information.</p>
</blockquote>
<p>See also this article about <a href="https://fabiokung.com/2014/03/13/memory-inside-linux-containers/" rel="nofollow noreferrer">Memory inside Linux containers</a>.</p>
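<p>The same cgroup approach can be extended to block I/O. A quick sketch (the paths assume cgroup v1; the exact files can differ between kernel versions):</p>
<pre><code># Inside the container: cumulative bytes read/written per block device
cat /sys/fs/cgroup/blkio/blkio.throttle.io_service_bytes

# Number of I/O operations per block device
cat /sys/fs/cgroup/blkio/blkio.throttle.io_serviced
</code></pre>
<p>If you are scraping with Prometheus, cAdvisor (built into the kubelet) already exposes per-container filesystem I/O counters such as <code>container_fs_reads_bytes_total</code> and <code>container_fs_writes_bytes_total</code>, which you can query instead of reading the cgroup files by hand.</p>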
| Mikołaj Głodziak |
<p>When I created a Grafana pod using the official image and mounted /var/lib/grafana, the data is not hidden, and I don't know why.
According to what I have been studying, if a PVC is mounted at the /var/lib/grafana directory, every existing file should be hidden and inaccessible.</p>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: grafana-statefulset
spec:
serviceName: grafana-service
selector:
matchLabels:
app: grafana
replicas: 1
template:
metadata:
labels:
app: grafana
spec:
containers:
- name: grafana
image: grafana/grafana:latest
volumeMounts:
- mountPath: "/var/lib/grafana"
name: grafana-var
securityContext:
runAsUser: 472
fsGroup: 472
volumeClaimTemplates:
- metadata:
name: grafana-var
spec:
accessModes: ["ReadWriteMany"]
storageClassName: nks-nas-csi
resources:
requests:
storage: 2Gi
</code></pre>
<pre><code>[dev1-user@master-dev-kube-cluster migration]$ k exec -it grafana-statefulset-0 -- sh
/usr/share/grafana $
/usr/share/grafana $ ls -l /var/lib/grafana/
total 912
drwxr-x--- 3 grafana root 4096 Jan 2 08:00 alerting
drwx------ 2 grafana root 4096 Jan 2 08:00 csv
drwxr-x--- 2 grafana root 4096 Jan 2 08:00 file-collections
-rw-r----- 1 grafana root 909312 Jan 3 01:20 grafana.db
drwxr-xr-x 2 grafana root 4096 Jan 2 08:00 plugins
drwx------ 2 grafana root 4096 Jan 2 08:00 png
</code></pre>
<p>But I can see and access the /var/lib/grafana directory data just fine.</p>
<p>However, when I created the image separately, the files in the directories I mounted were hidden and inaccessible.</p>
<pre><code>### First Stage
FROM busybox:latest
RUN mkdir /var/aaaa
COPY ./main.go /
RUN mv main.go /var/aaaa
</code></pre>
<pre><code>apiVersion: apps/v1
kind: StatefulSet
metadata:
name: busybox
spec:
serviceName: busybox
selector:
matchLabels:
app: busybox
replicas: 1
template:
metadata:
labels:
app: busybox
spec:
containers:
- name: busybox
image: busy/test:busybox
imagePullPolicy: "Always"
command:
- sleep
- "86400"
volumeMounts:
- mountPath: /var/aaaa
name: www
volumeClaimTemplates:
- metadata:
name: www
spec:
accessModes: ["ReadWriteMany"]
storageClassName: nks-nas-csi
resources:
requests:
storage: 2Gi
</code></pre>
<pre><code>[dev1-user@master-dev-kube-cluster migration]$ k exec -it busybox-0 -- sh
/ #
/ #
/ # ls -l /var/aaaa/
total 0
/ #
</code></pre>
<p>There is no main.go file in the /var/aaaa directory.</p>
<p>The point of this post is not the StatefulSet; it's just a question that came up while testing.</p>
<p>How can I keep using all the existing directory data after mounting, like the official Grafana image does, and how does Grafana do that?</p>
| HHJ | <blockquote>
<p>How can I keep using all the existing directory data after mounting, like the official Grafana image does, and how does Grafana do that?</p>
</blockquote>
<p>The official image only ships the <code>plugins</code> directory in <code>/var/lib/grafana</code>. The additional directories that you saw are automatically created by Grafana during start-up if they do not exist.</p>
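<p>You can verify this yourself by listing what the image actually ships, for example with plain Docker (a quick sketch):</p>
<pre><code>docker run --rm --entrypoint /bin/sh grafana/grafana:latest -c 'ls -la /var/lib/grafana'
</code></pre>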
| gohm'c |
<p>I have an image, which is a simple web server running on port 80. When I run it locally, I get:</p>
<pre class="lang-sh prettyprint-override"><code>The app is listening at http://localhost:80
</code></pre>
<p>and everything is fine.</p>
<p>However, when I deploy the following to K8s, it crashes constantly.</p>
<p>Deployment:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
namespace: apps
labels:
app: my-app
spec:
replicas: 2
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: my-app
image: myimage:dev
imagePullPolicy: Always
ports:
- containerPort: 80
</code></pre>
<p>Logs of one of the pods:</p>
<pre><code>node:events:371
throw er; // Unhandled 'error' event
^
Error: listen EACCES: permission denied 0.0.0.0:80
at Server.setupListenHandle [as _listen2] (node:net:1298:21)
at listenInCluster (node:net:1363:12)
at Server.listen (node:net:1450:7)
at Function.listen (/app/node_modules/express/lib/application.js:618:24)
at Object.<anonymous> (/app/index.js:17:5)
at Module._compile (node:internal/modules/cjs/loader:1095:14)
at Object.Module._extensions..js (node:internal/modules/cjs/loader:1124:10)
at Module.load (node:internal/modules/cjs/loader:975:32)
at Function.Module._load (node:internal/modules/cjs/loader:816:12)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:79:12)
Emitted 'error' event on Server instance at:
at emitErrorNT (node:net:1342:8)
at processTicksAndRejections (node:internal/process/task_queues:83:21) {
code: 'EACCES',
errno: -13,
syscall: 'listen',
address: '0.0.0.0',
port: 80
}
</code></pre>
<p>Why my image is able to run successfully on my local machine, and it fails on the Kubernetes?</p>
| mnj | <p>Non-root users (non privileged) can't open a listening socket on ports below 1024.</p>
<p>You can find the solution <a href="https://www.digitalocean.com/community/tutorials/how-to-use-pm2-to-setup-a-node-js-production-environment-on-an-ubuntu-vps#give-safe-user-permission-to-use-port-80" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>Remember, we do NOT want to run your applications as the root user, but there is a hitch: <strong>your safe user does not have permission to use the default HTTP port (80)</strong>. Your goal is to be able to publish a website that visitors can use by navigating to an easy to use URL like <a href="http://example.com/" rel="nofollow noreferrer">http://example.com</a>.</p>
<p>Unfortunately, unless you sign on as root, you’ll normally have to use a URL like <a href="http://example.com:3000/" rel="nofollow noreferrer">http://example.com:3000</a> - notice the port number.</p>
<p>A lot of people get stuck here, but the solution is easy. There are a few options, but this is the one I like. Type the following commands:</p>
</blockquote>
<pre><code>sudo apt-get install libcap2-bin
sudo setcap cap_net_bind_service=+ep /usr/local/bin/node
</code></pre>
<p>You can also see <a href="https://stackoverflow.com/questions/60372618/nodejs-listen-eacces-permission-denied-0-0-0-080">this similar question</a>.</p>
| Mikołaj Głodziak |
<p>I am trying to move my services to K8s. I am using Helm for that, and I want to describe my example along with the issue:</p>
<p>I need to move a few services, each with its own Apache. Yes, I understand that this works something like:</p>
<p>external traffic -> K8s nginx ingress -> pod Apache</p>
<p>But I can't change that for now. Apache works with PHP, so I have a deployment with two images: <code>php-fpm</code> and <code>Apache</code>, and in this case I need to share the PHP files from my php-fpm container with Apache.</p>
<p>For now I am using a shared <code>volumeMounts</code> volume to copy files from one container to the other (all data is prepared/compiled in the first container, php-fpm, during the build):</p>
<pre><code>volumeMounts:
- name: shared-files
mountPath: /var/shared-www
</code></pre>
<p>And</p>
<pre><code>lifecycle:
postStart:
exec:
command: ["/bin/bash", "-c", "cp -r XXX /var/shared-www/"]
</code></pre>
<p>And it is working, but I would like to find a solution with one common place, like a symlink, to store the files. I want to be able to change files in one place and have the changed files visible in both containers.</p>
<p>Is it possible?</p>
<p>PS: I am using AWS EKS, and I don't want to use network storage (like EFS).</p>
| prosto.vint | <p>Why not use <a href="https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath" rel="nofollow noreferrer">subPath</a> as per official documentation:</p>
<blockquote>
<p>it is useful to share one volume for multiple uses in a single pod.
The volumeMounts.subPath property specifies a sub-path inside the
referenced volume instead of its root.</p>
</blockquote>
<p>So instead of copying files around with a postStart command, you can mount the shared volume with a <code>subPath</code> (to share <code>XXX</code>), something like:</p>
<pre><code>volumeMounts:
- name: shared-files
  mountPath: /var/shared-www
  subPath: XXX
</code></pre>
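<p>Putting it together, a minimal sketch of two containers sharing one volume could look like the following (image names are placeholders; note that an <code>emptyDir</code> starts out empty, so the php-fpm container or an init container still has to populate it once, e.g. with the copy you already run in <code>postStart</code>):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: php-apache              # placeholder
spec:
  containers:
  - name: php-fpm
    image: my-php-fpm:latest    # placeholder image containing the prepared files
    volumeMounts:
    - name: shared-files
      mountPath: /var/shared-www
  - name: apache
    image: httpd:2.4
    volumeMounts:
    - name: shared-files
      mountPath: /usr/local/apache2/htdocs
      subPath: XXX              # expose only the XXX sub-directory to Apache
  volumes:
  - name: shared-files
    emptyDir: {}
</code></pre>
<p>Because both mounts reference the same volume, a change made through one container is immediately visible in the other.</p>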
<p>Update:</p>
<p>Kubernetes subpath vulnerability <a href="https://kubernetes.io/blog/2018/04/04/fixing-subpath-volume-vulnerability/" rel="nofollow noreferrer">article</a></p>
| jmvcollaborator |
<p>How to upgrade an existing running deployment with yaml deployment file without changing the number of running replicas of that deployment?
So, I need to set the number of replicas on the fly without changing the yaml file.</p>
<p>It is like running kubectl apply -f deployment.yaml together with kubectl scale --replicas=3, or in other words, applying deployment YAML changes while keeping the number of running replicas the same as it is.</p>
<p>For example: I have a running deployment which has already scaled its pods to 5 replicas. I need to change deployment parameters within CD (like upgrading the container image or changing environment variables, etc.) without manually checking the number of running pods and updating the YAML with it. How can I achieve this?</p>
| mohsen | <p>Use the kubectl edit command</p>
<pre><code>kubectl edit (RESOURCE/NAME | -f FILENAME)
E.g. kubectl edit deployment.apps/webapp-deployment
</code></pre>
<p>It will open an editor. You can update the value for number of replicas in the editor and save.</p>
<p>Refer to the documentation section on editing resources:
<a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#editing-resources" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/kubectl/cheatsheet/#editing-resources</a></p>
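<p>If the change is just a new container image or an environment variable (typical in a CD pipeline), a non-interactive sketch (assuming a deployment called <code>webapp-deployment</code> with a container called <code>webapp</code>) would be:</p>
<pre><code># Updates only the image; the current replica count is left untouched
kubectl set image deployment/webapp-deployment webapp=myregistry/webapp:1.2.3

# Environment variables can be changed the same way
kubectl set env deployment/webapp-deployment MY_FLAG=true
</code></pre>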
| Prince Arora |
<p>I am trying to make a local volume to save logs from a pod to their node in an isolated environment. So, I am trying to make a PV and a PVC on the specific node that has the <code>tier=production</code> tag. I have labeled the node with the tag:</p>
<pre><code>$ k get nodes --show-labels
NAME STATUS ROLES AGE VERSION LABELS
xxx Ready Worker 12d v1.25.2 <lots of lables>,tier=production
</code></pre>
<p>Following the <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#local" rel="nofollow noreferrer">Local Volume</a> and the <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#local" rel="nofollow noreferrer">Storage Class</a> docs, I have created the following yaml to deploy the volume, claim, and my pod:</p>
<pre class="lang-yaml prettyprint-override"><code>---
# A storage class to define local storage
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
# Volume using a local filesystem
apiVersion: v1
kind: PersistentVolume
metadata:
name: volume-pv
spec:
capacity:
storage: 1Gi
volumeMode: Filesystem
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /tmp/nginx/
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: tier
operator: In
values:
- production
---
# Request a claim on the file system volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: volume-claim
spec:
accessModes:
- ReadWriteOnce
storageClassName: local-storage
volumeMode: Filesystem
resources:
requests:
storage: 1Gi
selector:
matchExpressions:
- key: tier
operator: In
values:
- production
---
# Make a pod that uses the volume
apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
affinity:
nodeAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: tier
operator: In
values:
- production
containers:
- name: nginx
image: nginx
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: "/var/www/html"
name: mypd
volumes:
- name: mypd
persistentVolumeClaim:
claimName: volume-claim
</code></pre>
<p>My Volume is available but the PVC is <code>waiting for first consumer to be created before binding</code> which is expected since the Storage Class is set to <code>WaitForFirstConsumer</code>. But my pod is never scheduled, it gives the following warning:</p>
<pre><code>Warning FailedScheduling 8m25s default-scheduler 0/1 nodes are available: 1 node(s) didn't find available persistent volumes to bind. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.
</code></pre>
<p>If I remove the volume information from the yaml above, it deploys just fine so I don't know if it is a problem with the pod or something else. How do I get the pod to use the volumes?</p>
| veda905 | <p>Try:</p>
<pre><code>...
# Volume using a local filesystem
apiVersion: v1
kind: PersistentVolume
metadata:
name: volume-pv
labels: # <-- add for your PVC selector to match
tier: production
...
</code></pre>
| gohm'c |
<p>I have requirement for two different version of elastic search, can I deploy two different pods of elastic search under same namespace in kubernetes cluster?</p>
<p>Dockerfile for elasticserach:</p>
<pre><code>FROM elasticsearch:7.12.0
USER elasticsearch
EXPOSE 9200
</code></pre>
<p>Second Dockerfile:</p>
<pre><code>FROM elasticsearch:7.5.1
USER elasticsearch
EXPOSE 9200
</code></pre>
| SVD | <p>Yes you can.</p>
<p>If you want to deploy a bare pod</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: elastic-search
  labels:
    app: "elastic-search"
spec:
  containers:
  - name: "elastic-search"
    image: elasticsearch:7.12.0
    ports:
    - name: es
      containerPort: 9200
      protocol: TCP
</code></pre>
<p>For the second one, you can use a different version, and you need to give it a different name.</p>
<p>You can also use a Deployment,
which will deploy 3 replicas of elasticsearch:7.12.0:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: elastic-search
  labels:
    app: "elastic-search"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: "elastic-search"
  template:
    metadata:
      labels:
        app: "elastic-search"
    spec:
      containers:
      - name: "elastic-search"
        image: elasticsearch:7.12.0
        ports:
        - containerPort: 9200
</code></pre>
<p>** Make sure the <code>metadata.name</code> values do not conflict with each other.
Both examples will be deployed in the default namespace;
you can define a namespace under the metadata section, as shown below.</p>
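<p>For example (the namespace name here is just an illustration and must exist or be created first):</p>
<pre class="lang-yaml prettyprint-override"><code>metadata:
  name: elastic-search-712
  namespace: elastic
  labels:
    app: "elastic-search"
</code></pre>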
<p>I would recommend going through the official Kubernetes docs:
<a href="https://kubernetes.io/docs/concepts/workloads/pods/" rel="nofollow noreferrer">pods</a>
<a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="nofollow noreferrer">deployments</a>
<a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">services</a></p>
| heheh |
<p>I am wondering if <code>systemd</code> could be used as the <strong>cgroup driver</strong> in <strong>cgroup v1</strong> environment.</p>
<p>NOTE: As mentioned in <a href="https://kubernetes.io/docs/setup/production-environment/container-runtimes/" rel="nofollow noreferrer">Kubernetes Container Runtimes Doc</a>, <code>cgroupfs</code> is preferred when the OS have <strong>cgroup v1</strong>.</p>
<p>I have tried to set up a Kubernetes cluster using <code>systemd</code> as <strong>cgroup driver</strong>, and it is working correctly for now.</p>
<p>The test env is:</p>
<ul>
<li>Kubelet: 1.23</li>
<li>OS: Ubuntu 20.04 (Kernel 5.4.0, cgroup v1)</li>
<li>CRI: containerd 1.5.9</li>
<li>Cgroup Driver: systemd</li>
</ul>
<p>Are there any risks by using <code>systemd</code> in <strong>cgroup v1</strong> env?</p>
| Wolphy | <blockquote>
<p>NOTE: As mentioned in Kubernetes Container Runtimes Doc, cgroupfs is preferred when the OS have cgroup v1.</p>
</blockquote>
<p>Can you specify which paragraph this is? If I'm not mistaken, the document doesn't state that cgroupfs is preferred over systemd for distros that use cgroup v1. systemd is widely accepted as the init system, but cgroup v2 is available only if you run a fairly new (>=5.8) kernel.</p>
<blockquote>
<p>Are there any risks by using systemd in cgroup v1 env?</p>
</blockquote>
<p>Cgroup v1 is still the most widely used to date, and systemd is designed to work with it. That being said, cgroupfs is the default cgroup driver for the kubelet at the time of writing. As kernels mature over time, systemd may one day become the default, and the backing CRIs will follow through.</p>
<p>As a side note, Docker defaults to cgroupfs on systems that only support cgroup v1 (regardless of whether systemd is present). It uses systemd on systems that use cgroup v2 where systemd is present. However, Kubernetes has dropped Docker as a CRI with the removal of dockershim starting in v1.24. You can continue using dockershim with <a href="https://www.mirantis.com/blog/mirantis-to-take-over-support-of-kubernetes-dockershim-2/" rel="nofollow noreferrer">Mirantis</a>.</p>
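<p>If you want to check which cgroup version a node is actually running, and what the matching driver settings look like, here is a quick sketch (the file locations assume a kubeadm-style node with containerd; adjust for your setup):</p>
<pre><code># "cgroup2fs" means cgroup v2, "tmpfs" means cgroup v1
stat -fc %T /sys/fs/cgroup/

# kubelet (KubeletConfiguration, e.g. /var/lib/kubelet/config.yaml)
cgroupDriver: systemd

# containerd (/etc/containerd/config.toml)
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
</code></pre>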
| gohm'c |
<p>Is there a way to combine <code>kubectl top pod</code> and <code>kubectl top nodes</code>?</p>
<p>Basically I want to know pods sorted by cpu/memory usage <strong>BY</strong> node.</p>
<p>I can only get pods sorted by memory/cpu for whole cluster with <code>kubectl top pod</code> or directly memory/cpu usage per whole node with <code>kubectl top nodes</code>.</p>
<p>I have been checking the documentation but couldnt find the exact command.</p>
| lapinkoira | <p>There is no built-in solution to achieve your expectations. <code>kubectl top pod</code> and <code>kubectl top node</code> are different commands and cannot be mixed each other. It is possible to <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#interacting-with-running-pods" rel="nofollow noreferrer">sort results</a> from <code>kubectl top pod</code> command only by <code>cpu</code> or <code>memory</code>:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl top pod POD_NAME --sort-by=cpu # Show metrics for a given pod and sort it by 'cpu' or 'memory'
</code></pre>
<p>If you want to "combine" <code>kubectl top pod</code> and <code>kubectl top node</code>, you need to write a custom solution, for example a script in Bash based on <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/" rel="nofollow noreferrer">these commands</a>.</p>
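<p>A rough sketch of such a script, which simply groups the <code>kubectl top pod</code> output by the node each pod is scheduled on (adjust to your needs):</p>
<pre class="lang-sh prettyprint-override"><code>#!/usr/bin/env bash
# List pod metrics grouped by node
for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
  echo "=== ${node} ==="
  kubectl get pods --all-namespaces --field-selector spec.nodeName="${node}" \
    -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' |
  while read -r ns pod; do
    # --no-headers keeps the output compact; metrics may be missing for very new pods
    kubectl top pod "${pod}" -n "${ns}" --no-headers 2>/dev/null
  done
done
</code></pre>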
| Mikołaj Głodziak |
<p>I use this manifest configuration to deploy a registry into a 3-node Kubernetes cluster:</p>
<pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
name: pv1
namespace: registry-space
spec:
capacity:
storage: 5Gi # specify your own size
volumeMode: Filesystem
persistentVolumeReclaimPolicy: Retain
local:
path: /opt/registry # can be any path
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- kubernetes2
accessModes:
- ReadWriteMany # only 1 node will read/write on the path.
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pv1-claim
namespace: registry-space
spec: # should match specs added in the PersistenVolume
accessModes:
- ReadWriteMany
volumeMode: Filesystem
resources:
requests:
storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: private-repository-k8s
namespace: registry-space
labels:
app: private-repository-k8s
spec:
replicas: 1
selector:
matchLabels:
app: private-repository-k8s
template:
metadata:
labels:
app: private-repository-k8s
spec:
volumes:
- name: certs-vol
hostPath:
path: /opt/certs
type: Directory
- name: task-pv-storage
persistentVolumeClaim:
claimName: pv1-claim # specify the PVC that you've created. PVC and Deployment must be in same namespace.
containers:
- image: registry:2
name: private-repository-k8s
imagePullPolicy: IfNotPresent
env:
- name: REGISTRY_HTTP_TLS_CERTIFICATE
value: "/opt/certs/registry.crt"
- name: REGISTRY_HTTP_TLS_KEY
value: "/opt/certs/registry.key"
ports:
- containerPort: 5000
volumeMounts:
- name: certs-vol
mountPath: /opt/certs
- name: task-pv-storage
mountPath: /opt/registry
</code></pre>
<p>I manually created directories on every node under <code>/opt/certs</code> and <code>/opt/registry</code>.</p>
<p>But when I try to deploy the manifest without the hardcoded <code>nodeSelectorTerms</code> on the control plane, I get an error:</p>
<pre><code>kubernetes@kubernetes1:/opt/registry$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-58dbc876ff-fsjd5 1/1 Running 1 (74m ago) 84m
kube-system calico-node-5brzt 1/1 Running 1 (73m ago) 84m
kube-system calico-node-nph9n 1/1 Running 1 (76m ago) 84m
kube-system calico-node-pcd74 1/1 Running 1 (74m ago) 84m
kube-system calico-node-ph2ht 1/1 Running 1 (76m ago) 84m
kube-system coredns-565d847f94-7pswp 1/1 Running 1 (74m ago) 105m
kube-system coredns-565d847f94-tlrfr 1/1 Running 1 (74m ago) 105m
kube-system etcd-kubernetes1 1/1 Running 2 (74m ago) 105m
kube-system kube-apiserver-kubernetes1 1/1 Running 2 (74m ago) 105m
kube-system kube-controller-manager-kubernetes1 1/1 Running 2 (74m ago) 105m
kube-system kube-proxy-4slm4 1/1 Running 1 (76m ago) 86m
kube-system kube-proxy-4tnx2 1/1 Running 2 (74m ago) 105m
kube-system kube-proxy-9dgsj 1/1 Running 1 (73m ago) 85m
kube-system kube-proxy-cgr44 1/1 Running 1 (76m ago) 86m
kube-system kube-scheduler-kubernetes1 1/1 Running 2 (74m ago) 105m
registry-space private-repository-k8s-6d5d954b4f-xkmj5 0/1 Pending 0 4m55s
kubernetes@kubernetes1:/opt/registry$
</code></pre>
<p>Do you know how I can let Kubernetes decide where to deploy the pod?</p>
| Peter Penzov | <p>Let's try the following (disregard the paths you currently have and use the ones in the example; you can change them later). We can adapt it to your needs once dynamic provisioning is working. At the very bottom there is a mysql image as an example; use busybox or leave it as it is to get a better understanding:</p>
<ol>
<li>NFS Server install. Create NFS Share on File Server (Usually master node)</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>
#Include prerequisites
sudo apt update -y # Run updates prior to installing
sudo apt install nfs-kernel-server # Install NFS Server
sudo systemctl enable nfs-server # Set nfs-server to load on startups
sudo systemctl status nfs-server # Check its status
# check server status
root@worker03:/home/brucelee# sudo systemctl status nfs-server
● nfs-server.service - NFS server and services
Loaded: loaded (/lib/systemd/system/nfs-server.service; enabled; vendor preset: enabled)
Active: active (exited) since Fri 2021-08-13 04:25:50 UTC; 18s ago
Process: 2731 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Process: 2732 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
Main PID: 2732 (code=exited, status=0/SUCCESS)
Aug 13 04:25:49 linux03 systemd[1]: Starting NFS server and services...
Aug 13 04:25:50 linux03 systemd[1]: Finished NFS server and services.
# Prepare an empty folder
sudo su # enter root
nfsShare=/nfs-share
mkdir $nfsShare # create folder if it doesn't exist
chown nobody: $nfsShare
chmod -R 777 $nfsShare # not recommended for production
# Edit the nfs server share configs
vim /etc/exports
# add these lines
/nfs-share x.x.x.x/24(rw,sync,no_subtree_check,no_root_squash,no_all_squash,insecure)
# Export directory and make it available
sudo exportfs -rav
# Verify nfs shares
sudo exportfs -v
# Enable ingress for subnet
sudo ufw allow from x.x.x.x/24 to any port nfs
# Check firewall status - inactive firewall is fine for testing
root@worker03:/home/brucelee# sudo ufw status
Status: inactive
</code></pre>
<ol start="2">
<li>NFS Client install (Worker nodes)</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code># Install prerequisites
sudo apt update -y
sudo apt install nfs-common
# Mount the nfs share
remoteShare=server.ip.here:/nfs-share
localMount=/mnt/testmount
sudo mkdir -p $localMount
sudo mount $remoteShare $localMount
# Unmount
sudo umount $localMount
</code></pre>
<ol start="3">
<li>Dinamic provisioning and Storage class defaulted</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code># Pull the source code
workingDirectory=~/nfs-dynamic-provisioner
mkdir $workingDirectory && cd $workingDirectory
git clone https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
cd nfs-subdir-external-provisioner/deploy
# Deploying the service accounts, accepting defaults
k create -f rbac.yaml
# Editing storage class
vim class.yaml
##############################################
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: managed-nfs-ssd # set this value
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner # or choose another name, must match deployment's env PROVISIONER_NAME'
parameters:
archiveOnDelete: "true" # value of true means retaining data upon pod terminations
allowVolumeExpansion: "true" # this attribute doesn't exist by default
##############################################
# Deploying storage class
k create -f class.yaml
# Sample output
stoic@masternode:~/nfs-dynamic-provisioner/nfs-subdir-external-provisioner/deploy$ k get storageclasses.storage.k8s.io
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
managed-nfs-ssd k8s-sigs.io/nfs-subdir-external-provisioner Delete Immediate false 33s
nfs-class kubernetes.io/nfs Retain Immediate true 193d
nfs-client (default) cluster.local/nfs-subdir-external-provisioner Delete Immediate true 12d
# Example of patching an applied object
kubectl patch storageclass managed-nfs-ssd -p '{"allowVolumeExpansion":true}'
kubectl patch storageclass managed-nfs-ssd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' # Set storage class as default
# Editing deployment of dynamic nfs provisioning service pod
vim deployment.yaml
##############################################
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-client-provisioner
labels:
app: nfs-client-provisioner
# replace with namespace where provisioner is deployed
namespace: default
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: nfs-client-provisioner
template:
metadata:
labels:
app: nfs-client-provisioner
spec:
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-client-provisioner
image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: k8s-sigs.io/nfs-subdir-external-provisioner
- name: NFS_SERVER
value: X.X.X.X # change this value
- name: NFS_PATH
value: /nfs-share # change this value
volumes:
- name: nfs-client-root
nfs:
server: 192.168.100.93 # change this value
path: /nfs-share # change this value
##############################################
# Creating nfs provisioning service pod
k create -f deployment.yaml
# Troubleshooting: example where the deployment was pending variables to be created by rbac.yaml
stoic@masternode: $ k describe deployments.apps nfs-client-provisioner
Name: nfs-client-provisioner
Namespace: default
CreationTimestamp: Sat, 14 Aug 2021 00:09:24 +0000
Labels: app=nfs-client-provisioner
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=nfs-client-provisioner
Replicas: 1 desired | 0 updated | 0 total | 0 available | 1 unavailable
StrategyType: Recreate
MinReadySeconds: 0
Pod Template:
Labels: app=nfs-client-provisioner
Service Account: nfs-client-provisioner
Containers:
nfs-client-provisioner:
Image: k8s.gcr.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
Port: <none>
Host Port: <none>
Environment:
PROVISIONER_NAME: k8s-sigs.io/nfs-subdir-external-provisioner
NFS_SERVER: X.X.X.X
NFS_PATH: /nfs-share
Mounts:
/persistentvolumes from nfs-client-root (rw)
Volumes:
nfs-client-root:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: X.X.X.X
Path: /nfs-share
ReadOnly: false
Conditions:
Type Status Reason
---- ------ ------
Progressing True NewReplicaSetCreated
Available False MinimumReplicasUnavailable
ReplicaFailure True FailedCreate
OldReplicaSets: <none>
NewReplicaSet: nfs-client-provisioner-7768c6dfb4 (0/1 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 3m47s deployment-controller Scaled up replica set nfs-client-provisioner-7768c6dfb4 to 1
# Get the default nfs storage class
echo $(kubectl get sc -o=jsonpath='{range .items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")]}{@.metadata.name}{"\n"}{end}')
</code></pre>
<ol start="4">
<li>PersistentVolumeClaim (Notice the storageClassName it is the one defined on the previous step)</li>
</ol>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: my-persistentvolume-claim
namespace: default
spec:
storageClassName: nfs-client
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
</code></pre>
<ol start="5">
<li>PersistentVolume</li>
</ol>
<p>It is created dynamically! Confirm it is there with the correct values by running this command:</p>
<blockquote>
<p>kubectl get pv -A</p>
</blockquote>
<ol start="6">
<li>Deployment</li>
</ol>
<p>On your deployment you need two things: volumeMounts (for each container) and volumes (for all containers).
Notice: volumeMounts->name=data and volumes->name=data, because they should match. And claimName is my-persistentvolume-claim, which is the same as your PVC.</p>
<pre class="lang-yaml prettyprint-override"><code> ...
spec:
containers:
- name: mysql
image: mysql:8.0.30
volumeMounts:
- name: data
mountPath: /var/lib/mysql
subPath: mysql
volumes:
- name: data
persistentVolumeClaim:
claimName: my-persistentvolume-claim
</code></pre>
| jmvcollaborator |
<p>I had a single-node Kubernetes cluster on my Windows 10 machine. Due to some errors I had to reinstall <code>Docker Desktop</code>, and since then the Kubernetes installation has failed while Docker installed successfully. All attempts to resolve it, e.g. deleting the <code>config</code> file in the <code>.kube</code> directory and a complete reinstallation, have failed. See the attached picture for details. The installed Docker version is <code>Docker version 18.09.2, build 6247962</code>. All my online searches have not yielded a possible solution. I would appreciate pointers to a solution or a workaround. </p>
<p><a href="https://i.stack.imgur.com/gR58f.png" rel="noreferrer"><img src="https://i.stack.imgur.com/gR58f.png" alt="enter image description here"></a></p>
| SyCode | <p>I was stuck on two kinds of errors:</p>
<ol>
<li>system pods running, found labels but still waiting for labels...</li>
<li>xxxx: EOF</li>
</ol>
<p>I finally solved it by following the advice from the following project:
<a href="https://github.com/AliyunContainerService/k8s-for-docker-desktop/" rel="noreferrer">https://github.com/AliyunContainerService/k8s-for-docker-desktop/</a>
Do as it tells you; if that does not work,
remove the ~/.kube and ~/Library/Group\ Containers/group.com.docker/pki directories, then restart Docker Desktop and wait about 5 minutes.
The Kubernetes status eventually becomes <em>running</em>.</p>
| luudis |
<p>I am attempting to set up an MSSQL server in my WSL2 Linux distro, where I mount a volume for my <code>.mdf</code>- and <code>.ldf</code>-files.
However, I can't get Kubernetes to see my folder with said files.</p>
<p>I have my files stored in <code>C:\WindowsFolder\data</code> on my host (Windows), which allows WSL2 to see them at the path <code>/mnt/c/WindowsFolder/data</code> (Linux).</p>
<p>If I run the following yaml-file, <code>kubectl</code> sets up everything <em>but</em> my data - the folder I mount it into (<code>/data</code>) is empty.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-database
labels:
app.kubernetes.io/name: my-deployment
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: label-identifier
template:
metadata:
labels:
app.kubernetes.io/name: label-identifier
spec:
hostname: "database"
securityContext:
fsGroup: 0
containers:
- name: database
image: "mcr.microsoft.com/mssql/server:2019-latest"
imagePullPolicy: IfNotPresent
ports:
- containerPort: 1433
protocol: TCP
env:
- name: "ACCEPT_EULA"
value: "Y"
- name: "MSSQL_DATA_DIR"
value: /data
- name: "MSSQL_PID"
value: "Developer"
- name: "SA_PASSWORD"
value: "SuperSecret123!"
volumeMounts:
- name: "myvolume"
mountPath: /data
volumes:
- name: "myvolume"
hostPath:
path: "/mnt/c/windowsFolder/Database"
</code></pre>
<p>Then I tried to spin up a docker container inside my WSL2 - it works as expected, but this is not a good solution in the long run:</p>
<pre><code>wsl.exe #Enter WSL2
docker run -d --name sql-t1 -e "ACCEPT_EULA=Y" \
-e "SA_PASSWORD=SuperSecret123!" -p 1433:1433 \
-v /mnt/c/windowsFolder/Database:/data \
mcr.microsoft.com/mssql/server:2019-latest
docker ps #find my containerID
docker exec -it <containerId> bash #step into docker container
> ls /data #shows my files correctly
</code></pre>
<p>WSL2 can mount through docker correctly on the same path specified with Kubernetes, but it doesn't work from Kubernetes.</p>
<p>Any suggestions why, or what I might try?</p>
<p><strong>Edit 1:</strong></p>
<p>I did a <code>docker inspect <Kubectl's WSL container></code> to see if it held any clues:</p>
<pre class="lang-json prettyprint-override"><code>[
{
"MountLabel": "",
"HostConfig": {
"Binds": [
"/mnt/c/windowsFolder/Database:/data",
...
],
},
"VolumeDriver": "",
"VolumesFrom": null,
"Isolation": ""
},
"Mounts": [
{
"Type": "bind",
"Source": "/mnt/c/windowsFolder/Database",
"Destination": "/data",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
],
"Config": {
...
"Volumes": null,
...
}
}
]
</code></pre>
<p><strong>Edit 2:</strong></p>
<p>I noticed the folder had permissions of 755 (chmod) instead of 777.
I fixed that by adding an <code>initContainer</code> and removing the security context, though it still did not help:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-database
labels:
app.kubernetes.io/name: my-deployment
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: label-identifier
template:
metadata:
labels:
app.kubernetes.io/name: label-identifier
spec:
hostname: "database"
containers:
- name: database
image: "mcr.microsoft.com/mssql/server:2019-latest"
imagePullPolicy: IfNotPresent
ports:
- containerPort: 1433
protocol: TCP
env:
- name: "ACCEPT_EULA"
value: "Y"
- name: "MSSQL_DATA_DIR"
value: /data
- name: "MSSQL_PID"
value: "Developer"
- name: "SA_PASSWORD"
value: "SuperSecret123!"
volumeMounts:
- name: "myvolume"
mountPath: /data
#this was added
initContainers:
- name: mssql-data-folder-permissions
image: "busybox:latest"
command: ["/bin/chmod","-R","777", "/data"]
volumeMounts:
- name: "myvolume"
mountPath: /data
volumes:
- name: "myvolume"
hostPath:
path: "/mnt/c/windowsFolder/Database"
</code></pre>
<p><strong>Edit 3:</strong></p>
<p>On request of @ovidiu-buligan:</p>
<p><code>kubectl get events -A</code> gives the following output:</p>
<pre class="lang-sh prettyprint-override"><code>NAMESPACE LAST SEEN TYPE REASON OBJECT MESSAGE
default 2m15s Normal Scheduled pod/myProject-database-7c477d65b8-mmh7h Successfully assigned default/myProject-database-7c477d65b8-mmh7h to docker-desktop
default 2m16s Normal Pulling pod/myProject-database-7c477d65b8-mmh7h Pulling image "mcr.microsoft.com/mssql/server:2019-latest"
default 88s Normal Pulled pod/myProject-database-7c477d65b8-mmh7h Successfully pulled image "mcr.microsoft.com/mssql/server:2019-latest" in 47.2350549s
default 88s Normal Created pod/myProject-database-7c477d65b8-mmh7h Created container database
default 87s Normal Started pod/myProject-database-7c477d65b8-mmh7h Started container database
default 2m16s Normal SuccessfulCreate replicaset/myProject-database-7c477d65b8 Created pod: myProject-database-7c477d65b8-mmh7h
default 2m16s Normal ScalingReplicaSet deployment/myProject-database Scaled up replica set myProject-database-7c477d65b8 to 1
</code></pre>
<p><code>kubectl describe pod myProject-database-7c477d65b8-mmh7h</code> gives the following output:</p>
<pre class="lang-sh prettyprint-override"><code>Name: myProject-database-7c477d65b8-mmh7h
Namespace: default
Priority: 0
Node: docker-desktop/192.168.65.4
Start Time: Tue, 06 Apr 2021 13:03:18 +0200
Labels: app.kubernetes.io/name=StatefulSet-database
pod-template-hash=7c477d65b8
Annotations: <none>
Status: Running
IP: 10.1.0.10
IPs:
IP: 10.1.0.10
Controlled By: ReplicaSet/myProject-database-7c477d65b8
Containers:
database:
Container ID: docker://f768710e7436d4c813913fa22a20091cb3fb77e1ecfbe2232b0ec6037eef3dbb
Image: mcr.microsoft.com/mssql/server:2019-latest
Image ID: docker-pullable://mcr.microsoft.com/mssql/server@sha256:ec5492b0b3f9c0707fddd37f0bd3d47d3ebea94a3054afb8b50e9e746d1e5f37
Port: 1433/TCP
Host Port: 0/TCP
State: Running
Started: Tue, 06 Apr 2021 13:04:07 +0200
Ready: True
Restart Count: 0
Environment:
ACCEPT_EULA: Y
MSSQL_DATA_DIR: /data
MSSQL_LOG_DIR: /log
MSSQL_PID: Developer
SA_PASSWORD: SuperSecret123!
Mounts:
/data from storage-volume-claim (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-gcd5j (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
storage-volume-claim:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: database-pvc
ReadOnly: false
default-token-gcd5j:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-gcd5j
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 3m6s default-scheduler Successfully assigned default/myProject-database-7c477d65b8-mmh7h to docker-desktop
Normal Pulling 3m6s kubelet Pulling image "mcr.microsoft.com/mssql/server:2019-latest"
Normal Pulled 2m18s kubelet Successfully pulled image "mcr.microsoft.com/mssql/server:2019-latest" in 47.2350549s
Normal Created 2m18s kubelet Created container database
Normal Started 2m17s kubelet Started container database
</code></pre>
| Rasmus Bækgaard | <p>Solution found!</p>
<p>Docker Desktop created a folder to store everything in.</p>
<p>The following path in explorer, <code>\\wsl$\docker-desktop-data\version-pack-data\community\kubelet\</code>, is equal to <code>/var/lib/</code> in a .yaml-file.</p>
<p>That means, you can write the following:</p>
<pre class="lang-yaml prettyprint-override"><code>...
hostPath:
path: "/var/lib/kubelet/myProject/"
type: DirectoryOrCreate
...
</code></pre>
<p>This will give a folder in <code>\\wsl$\docker-desktop-data\version-pack-data\community\kubelet\myProject</code>.
This will act as you want it to.</p>
<p>You might want to create a symlink with Windows Developer Mode to this place (Settings -> Updates & Security -> For Developers -> Developer Mode).</p>
| Rasmus Bækgaard |
<p>I have an application using Azure Kubernetes. Everything was working fine and the API gave me 200 response all the time, but last week I started receiving 500 internal server errors from the API management, and it indicated that its a backend error. I ran the server locally and sent requests to the API and it worked, so I figured the problem happens somewhere in Azure Kubernetes.</p>
<p>However, the logs were super cryptic and didn't add much info, so I never really found out what the problem was. I just ran my code to deploy the image again and it got fixed, but there was no way to tell that this was the problem.</p>
<p>This time I managed to fix the problem but I'm looking for a better way to troubleshoot 500 internal server error in Azure. I have looked all through the Azure documentation but haven't found anything other than the logs, which weren't really helpful in my case. How do you usually go about troubleshooting 500 errors in applications running in Kubernetes?</p>
| Wiz | <p>In general, it all depends specifically on the situation you are dealing with. Nevertheless, you should always start by looking at the logs (application event logs and server logs). Try to look for information about the error in them. Error 500 is actually the effect, not the cause. If you want to find out what may have caused the error, you need to look for this information in the logs. Often times, you can tell what went wrong and fix the problem right away.</p>
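<p>In a Kubernetes/AKS context, a few commands that usually surface the cause quickly (the deployment name and label below are placeholders for your own workload):</p>
<pre><code># Recent cluster events, newest last
kubectl get events --sort-by=.lastTimestamp

# Pod status, restart counts and reasons
kubectl describe pod -l app=my-backend

# Logs of the current container, and of the previous one if it restarted
kubectl logs deployment/my-backend
kubectl logs deployment/my-backend --previous
</code></pre>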
<p>If you want to reproduce the problem, check the comment of <a href="https://stackoverflow.com/users/10008173/david-maze" title="75,151 reputation">David Maze</a>:</p>
<blockquote>
<p>I generally try to figure out what triggers the error, reproduce it in a local environment (not Kubernetes, not Docker, no containers at all), debug, write a regression test, fix the bug, get a code review, redeploy. That process isn't especially unique to Kubernetes; it's the same way I'd debug an error in a customer environment where I don't have direct access to the remote systems, or in a production environment where I don't want to risk breaking things further.</p>
</blockquote>
<p>See also:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/6324463/how-to-debug-azure-500-internal-server-error">similar question</a></li>
<li><a href="https://support.cloudways.com/en/articles/5121238-how-to-resolve-500-internal-server-error" rel="nofollow noreferrer">how to solve 500 error</a></li>
</ul>
| Mikołaj Głodziak |
<p>I am trying to learn how to use Kubernetes and tried following the guide <a href="https://minikube.sigs.k8s.io/docs/start/" rel="nofollow noreferrer">here</a> to create a local Kubernetes cluster with Docker driver.</p>
<p>However, I'm stuck at step 3: Interact with your cluster. I tried to run <code>minikube dashboard</code> and I keep getting this error:</p>
<pre><code>Unknown error (404)
the server could not find the requested resource (get ingresses.extensions)
</code></pre>
<p>My Kubernetes version:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.2", GitCommit:"092fbfbf53427de67cac1e9fa54aaa09a28371d7", GitTreeState:"clean", BuildDate:"2021-06-16T12:59:11Z", GoVersion:"go1.16.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"clean", BuildDate:"2021-08-19T15:39:34Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>Can someone point out where the problem lies please?</p>
| zoobiedoobie | <p>I have installed minikube according to the same guide, both locally and with the help of a cloud provider, and it works for me :)
If you are just learning and starting your adventure with Kubernetes, try to install everything from scratch.</p>
<p>The error you are getting is related to the different versions of the client and server. When I installed according to the same guide, following the instructions, my versions looked like this:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"clean", BuildDate:"2021-08-19T15:45:37Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"clean", BuildDate:"2021-08-19T15:39:34Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>You have different versions: client <code>v1.21.2</code> and server <code>v1.22.1</code>. Additionally, your <code>Platform</code> values are not the same either. If you want to solve this, you need to upgrade your client version or downgrade the server version. For more, see <a href="https://stackoverflow.com/questions/51180147/determine-what-resource-was-not-found-from-error-from-server-notfound-the-se">this question</a>.</p>
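<p>Two quick ways to get a matching client, sketched below: either use the kubectl bundled with minikube, or download a client that matches the server version (the URL pattern follows the official install docs; adjust the version and platform as needed):</p>
<pre><code># Use the kubectl bundled with minikube (always matches the cluster version)
minikube kubectl -- get pods -A

# Or install a matching client, e.g. v1.22.1 on macOS (amd64)
curl -LO "https://dl.k8s.io/release/v1.22.1/bin/darwin/amd64/kubectl"
chmod +x kubectl && sudo mv kubectl /usr/local/bin/kubectl
</code></pre>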
| Mikołaj Głodziak |
<p>I am installing linkerd helm verison with flux and cert mananger for tls rotation</p>
<p>cert manager holds default config so there isnt much to talk there</p>
<p>flux and linkerd with this config:</p>
<p>release.yaml</p>
<pre><code>apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
name: linkerd
namespace: linkerd
spec:
interval: 5m
values:
identity.issuer.scheme: kubernetes.io/tls
installNamespace: false
valuesFrom:
- kind: Secret
name: linkerd-trust-anchor
valuesKey: tls.crt
targetPath: identityTrustAnchorsPEM
chart:
spec:
chart: linkerd2
version: "2.11.2"
sourceRef:
kind: HelmRepository
name: linkerd
namespace: linkerd
interval: 1m
</code></pre>
<p>source.yaml</p>
<pre><code>---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
name: linkerd
namespace: linkerd
spec:
interval: 5m0s
url: https://helm.linkerd.io/stable
</code></pre>
<p>linkerd-trust-anchor.yaml</p>
<pre><code>apiVersion: v1
data:
tls.crt: base64encoded
tls.key: base64encoded
kind: Secret
metadata:
name: linkerd-trust-anchor
namespace: linkerd
type: kubernetes.io/tls
</code></pre>
<p>which was created with:</p>
<pre><code>step certificate create root.linkerd.cluster.local ca.crt ca.key \
--profile root-ca --no-password --insecure
</code></pre>
<p>issuer.yaml</p>
<pre><code>---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
name: linkerd-trust-anchor
namespace: linkerd
spec:
ca:
secretName: linkerd-trust-anchor
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
name: linkerd-identity-issuer
namespace: linkerd
spec:
secretName: linkerd-identity-issuer
duration: 48h
renewBefore: 25h
issuerRef:
name: linkerd-trust-anchor
kind: Issuer
commonName: identity.linkerd.cluster.local
dnsNames:
- identity.linkerd.cluster.local
isCA: true
privateKey:
algorithm: ECDSA
usages:
- cert sign
- crl sign
- server auth
- client auth
</code></pre>
<p>Now, when the time comes to reconcile, I get this error in the HelmRelease:</p>
<pre><code>Helm install failed: execution error at (linkerd2/templates/identity.yaml:19:21): Please provide the identity issuer certificate
</code></pre>
<p>However, doing it manually works perfectly:</p>
<pre><code>helm install linkerd2 \
--set-file identityTrustAnchorsPEM=ca.crt \
--set identity.issuer.scheme=kubernetes.io/tls \
--set installNamespace=false linkerd/linkerd2 \
-n linkerd
</code></pre>
<p>It also works if I have the same setup but without cert-manager, with the certificates declared manually (using a different secret name, as linkerd will create it on its own), like this:</p>
<pre><code>valuesFrom:
- kind: Secret
name: linkerd-trust-anchor
valuesKey: tls.crt
targetPath: identityTrustAnchorsPEM
- kind: Secret
name: linkerd-identity-issuer-2
valuesKey: tls.crt
targetPath: identity.issuer.tls.crtPEM
- kind: Secret
name: linkerd-identity-issuer-2
valuesKey: tls.key
targetPath: identity.issuer.tls.keyPEM
</code></pre>
<p>Am I missing something?</p>
| Diego Alejandro Llanos Gareca | <p>The problem lies here:</p>
<pre><code>values:
identity.issuer.scheme: kubernetes.io/tls
</code></pre>
<p>It should be:</p>
<pre><code>values:
identity:
issuer:
scheme: kubernetes.io/tls
</code></pre>
<p>Otherwise, Helm won't recognize it and linkerd will assume the scheme is linkerd.io/tls, which doesn't match the structure of a kubernetes.io/tls secret.</p>
| Diego Alejandro Llanos Gareca |
<p>I'm deploying an EKS cluster and configuring managed node groups so that we have master and worker nodes,
following this doc:
<a href="https://docs.aws.amazon.com/eks/latest/userguide/cni-iam-role.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/cni-iam-role.html</a></p>
<p>while running this command :</p>
<pre><code>kubectl get pods -n kube-system -l k8s-app=aws-node
</code></pre>
<p>I don't see any pod with that label, and I don't know why.
Is it due to missing configuration, or did I miss something while deploying the EKS cluster?</p>
<p>please suggest</p>
<p><strong>UPDATE 1</strong></p>
<pre><code>kubectl describe daemonset aws-node -n kube-system
</code></pre>
<p>output</p>
<pre><code>Name:           aws-node
Selector:       k8s-app=aws-node
Node-Selector:  <none>
Labels:         app.kubernetes.io/instance=aws-vpc-cni
                app.kubernetes.io/name=aws-node
                app.kubernetes.io/version=v1.11.4
                k8s-app=aws-node
Annotations:    deprecated.daemonset.template.generation: 2
Desired Number of Nodes Scheduled: 0
Current Number of Nodes Scheduled: 0
Number of Nodes Scheduled with Up-to-date Pods: 0
Number of Nodes Scheduled with Available Pods: 0
Number of Nodes Misscheduled: 0
Pods Status:  0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:           app.kubernetes.io/instance=aws-vpc-cni
                    app.kubernetes.io/name=aws-node
                    k8s-app=aws-node
  Service Account:  aws-node
</code></pre>
| user2315104 | <blockquote>
<p>kubectl get nodes command says No resources found</p>
</blockquote>
<p>No pod will be running if you don't have any worker nodes. The easiest way to add a worker node is in the AWS console: go to Amazon Elastic Kubernetes Service, click on your cluster, go to the "Compute" tab, select the node group, click "Edit" and change "Desired size" to 1 or more.</p>
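<p>If you prefer the CLI instead of the console, a rough equivalent (a sketch; <code>eksctl</code> must be installed and the cluster/nodegroup names are placeholders) is:</p>
<pre><code># Scale the managed node group so at least one worker node joins the cluster
eksctl scale nodegroup --cluster <cluster-name> --name <nodegroup-name> --nodes 2

# Then verify the nodes registered and the aws-node DaemonSet pods got scheduled
kubectl get nodes
kubectl get pods -n kube-system -l k8s-app=aws-node
</code></pre>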
| gohm'c |
<p>I can create a rolebinding like this</p>
<pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: test
namespace: rolebinding-ns
subjects:
- kind: ServiceAccount
name: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: view
</code></pre>
<p>The subject defines a ServiceAccount without the namespace, and we have a "default" serviceaccount in this rolebinding-ns namespace but we have some other "default" serviceaccounts in other namespaces, included the system namespaces, are different serviceaccounts but with the same name</p>
<p>The question is. Which serviceaccount is used in this rolebinding? The one that is in the same namespace as the rolebinding or kube-system one or any other?</p>
<p>I just applied the yml of the rolebinding without error but I do not know which serviceaccount is being used.</p>
| Roberto | <p>There's <strong>no</strong> namespace specified for the service account in your question:</p>
<pre><code>kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: test
namespace: rolebinding-ns
subjects:
- kind: ServiceAccount
name: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: view
</code></pre>
<p><code>Which serviceaccount is used in this rolebinding? The one that is in the same namespace as the rolebinding or kube-system one or any other?</code></p>
<p>RoleBinding is a namespaced object, therefore in this case the one that is in the same namespace as the rolebinding and <strong>no</strong> other.</p>
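<p>If you ever need to bind a service account from a different namespace, you have to spell the namespace out on the subject explicitly, for example (a sketch):</p>
<pre><code>subjects:
- kind: ServiceAccount
  name: default
  namespace: some-other-namespace   # omit this and the RoleBinding's own namespace is assumed
</code></pre>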
| gohm'c |
<p>I created a docker image of my app which is running an internal server exposed at 8080.
Then I tried to create a local kubernetes cluster for testing, using the following set of commands.</p>
<pre><code>$ kubectl create deployment --image=test-image test-app
$ kubectl set env deployment/test-app DOMAIN=cluster
$ kubectl expose deployment test-app --port=8080 --name=test-service
</code></pre>
<p>I am using Docker Desktop on Windows to run Kubernetes. This exposes my cluster on the external IP <code>localhost</code>, but I cannot access my app. I checked the status of the pods and noticed this issue:</p>
<pre><code>$ kubectl get pods
NAME READY STATUS RESTARTS AGE
test-66-ps2 0/1 ImagePullBackOff 0 8h
test-6f-6jh 0/1 InvalidImageName 0 7h42m
</code></pre>
<p>May I know what could be causing this issue, and how can I make it work locally?
Thanks, I look forward to your suggestions!</p>
<p>My YAML file for reference:</p>
<pre class="lang-js prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "4"
creationTimestamp: "2021-10-13T18:00:15Z"
generation: 4
labels:
app: test-app
name: test-app
namespace: default
resourceVersion: "*****"
uid: ************
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: test-app
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: test-app
spec:
containers:
- env:
- name: DOMAIN
value: cluster
image: C:\Users\test-image
imagePullPolicy: Always
name: e20f23453f27
ports:
- containerPort: 8080
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
status:
conditions:
- lastTransitionTime: "2021-10-13T18:00:15Z"
lastUpdateTime: "2021-10-13T18:00:15Z"
message: Deployment does not have minimum availability.
reason: MinimumReplicasUnavailable
status: "False"
type: Available
- lastTransitionTime: "2021-10-13T18:39:51Z"
lastUpdateTime: "2021-10-13T18:39:51Z"
message: ReplicaSet "test-66" has timed out progressing.
reason: ProgressDeadlineExceeded
status: "False"
type: Progressing
observedGeneration: 4
replicas: 2
unavailableReplicas: 2
updatedReplicas: 1
---
apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2021-10-13T18:01:49Z"
labels:
app: test-app
name: test-service
namespace: default
resourceVersion: "*****"
uid: *****************
spec:
clusterIP: 10.161.100.100
clusterIPs:
- 10.161.100.100
externalTrafficPolicy: Cluster
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- nodePort: 41945
port: 80
protocol: TCP
targetPort: 8080
selector:
app: test-app
sessionAffinity: None
type: LoadBalancer
status:
loadBalancer:
ingress:
- hostname: localhost
</code></pre>
| Mohammad Saad | <p>The reason you are facing the <strong>ImagePullBackOff</strong> and <strong>InvalidImageName</strong> issues is that your app image does not exist on the Kubernetes cluster; it only exists on your local machine (and <code>C:\Users\test-image</code> is a file path, not a valid image reference).</p>
<p>To resolve this for testing, you can either build the image with Docker directly on the cluster's nodes (for example by mounting your project workspace there), or push your image to Docker Hub and point your deployment at the pushed image.</p>
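<p>For example, the registry route could look roughly like this (a sketch; <code><your-user></code> and <code><container-name></code> are placeholders for your Docker Hub account and the container name in your deployment):</p>
<pre><code># Tag and push the locally built image to a registry the cluster can pull from
docker tag test-image <your-user>/test-image:latest
docker push <your-user>/test-image:latest

# Point the deployment at the pushed image instead of the local path
kubectl set image deployment/test-app <container-name>=<your-user>/test-image:latest
</code></pre>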
| Chhavi Jajoo |
<p>Say you have 3 or more services that communicate with each other constantly, if they are deployed remotely to the same cluster all is good cause they can see each other.</p>
<p>However, I was wondering how could I deploy one of those locally, using minikube for instance, in a way that they are still able to talk to each other.</p>
<p>I am aware that I can port-forward the other two so that the one I have deployed locally can send calls to the others, but I am not sure how I could make it work the other way round, so that the other two are also able to send calls to the local one.</p>
| Marcelo Canaparro | <p><strong>TL;DR Yes, it is possible but not recommended, it is difficult and comes with a security risk.</strong></p>
<p><a href="https://stackoverflow.com/users/4185234/charlie" title="19,519 reputation">Charlie</a> wrote very well in the comment and is absolutely right:</p>
<blockquote>
<p>Your local service will not be discoverable by a remote service unless you have a direct IP. One other way is to establish RTC or Web socket connection between your local and remote services using an external server.</p>
</blockquote>
<p>As you can see, it is possible, but not recommended. Generally, both containerization and the use of Kubernetes tend to isolate environments. If you still want your services to communicate with each other while living in completely different clusters on different machines, you need to configure the appropriate network connections over the public internet, which also comes with a security risk.</p>
<p>If you want to set up the environment locally, it is a much better idea to run all three services together as an independent whole. Also take into account that Minikube is mainly designed for learning and testing certain solutions and is not entirely suitable for production.</p>
| Mikołaj Głodziak |
<p>Inside my container I have in my <code>spec.jobTemplate.spec.template.spec</code>:</p>
<pre><code> containers:
- name: "run"
env:
{{ include "schedule.envVariables" . | nindent 16 }}
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_CREATION_TIMESTAMP
valueFrom:
fieldRef:
fieldPath: metadata.creationTimestamp
</code></pre>
<p>However I just get the error:</p>
<blockquote>
<p>CronJob.batch "schedule-3eb71b12d3" is invalid: spec.jobTemplate.spec.template.spec.containers[0].env[19]</p>
</blockquote>
<p>When changing to:</p>
<pre><code>- name: POD_CREATION_TIMESTAMP
value: ""
</code></pre>
<p>I get no errors. Any idea?</p>
| maxisme | <p>The reason is that <code>fieldRef</code> doesn't support the use of <code>metadata.creationTimestamp</code>.</p>
<pre class="lang-bash prettyprint-override"><code>$ kubectl explain job.spec.template.spec.containers.env.valueFrom
...
fieldRef <Object>
Selects a field of the pod: supports metadata.name, metadata.namespace,
`metadata.labels['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,
spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.
...
</code></pre>
| Roman Geraskin |
<p>I have a major problem and will do my best to keep the explanation short. I am running a cron job against an API endpoint and would like to containerize that process. The endpoint needs some environment variables set and obviously a <code>TOKEN</code> for authentication, which is basically the login step before I can curl POST or GET requests to get what I want from the API. The tricky part is that the TOKEN is never the same, which means I need to run a curl command to fetch it at pod run time. To make sure those environment variables are there at run time, I injected a command in the cron job args field to keep it running. When I ssh into the pod, all my env vars are there ;) but the <code>TOKEN</code> is not :( . When I run <code>./run.sh</code> from inside the pod nothing happens, which is why the TOKEN isn't listed in <code>printenv</code>. However, when I manually run <code>command 1</code> and then <code>command 2</code> from inside the pod, it works fine. I am very confused; please help if you can. Instead of running my commands as a shell in the <code>cmd</code> at the Docker level, I have seen that I could possibly put <code>command 1</code> and <code>command 2</code> from <code>run.sh</code> into my cronjob.yaml with a multi-line block scalar, but I haven't figured out how, as the YAML format is a pain. Below is my code for more details:</p>
<p><strong>docker-entrypoint.sh</strong> --> removed</p>
<pre><code>#! /bin/sh
export AWS_REGION=$AWS_REGION
export API_VS_HD=$API_VS_HD
export CONTROLLER_IP=$CONTROLLER_IP
export PASSWORD=$PASSWORD
export UUID=$UUID
</code></pre>
<p><strong>Dockerfile</strong></p>
<pre><code>FROM harbor/ops5/private/:device-purge/python38:3.8
# Use root user for packages installation
USER root
# Install packages
RUN yum update -y && yum upgrade -y
# Install curl
RUN yum install curl -y \
&& curl --version
# Install zip/unzip/gunzip
RUN yum install zip unzip -y \
&& yum install gzip -y
# Install wget
RUN yum install wget -y
# Install jq
RUN wget -O jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
RUN chmod +x ./jq
RUN cp jq /usr/bin
# Install aws cli
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
RUN unzip awscliv2.zip
RUN ./aws/install
# Set working directory
WORKDIR /home/app
# Add user
RUN groupadd --system user && adduser --system user --no-create-home --gid user
RUN chown -R user:user /home/app && chmod -R 777 /home/app
# Copy app
COPY ./run.sh /home/app
RUN chmod +x /home/app/run.sh
# Switch to non-root user
USER user
# Run service
CMD ["sh", "-c", "./run.sh"]
</code></pre>
<p><strong>run.sh</strong></p>
<pre><code># Command 1
export TOKEN=`curl -H "Content-Type: application/json" -H "${API_VS_HD}" --request POST --data "{\"providerName\":\"local\",\"username\":\"admin\",\"password\":\"$PASSWORD\",\"deviceId\":\"$UUID\"}" https://$CONTROLLER_IP:444/admin/login --insecure | jq -r '.token'`
# Sleep
sleep 3
# Command 2
curl -k -H "Content-Type: application/json" \
-H "$API_VS_HD" \
-H "Authorization: Bearer $TOKEN" \
-X GET \
https://$CONTROLLER_IP:444/admin/license/users
</code></pre>
<p>cronjob.yaml</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: device-cron-job
namespace: device-purge
spec:
schedule: "*/2 * * * *" # test
jobTemplate:
spec:
template:
spec:
imagePullSecrets:
- name: cron
containers:
- name: device-cron-pod
image: harbor/ops5/private/:device-purge
env:
- name: AWS_REGION
value: "us-east-1"
- name: API_VS_HD
value: "Accept:application/vnd.appgate.peer-v13+json"
- name: CONTROLLER_IP
value: "52.61.245.214"
- name: UUID
value: "d2b78ec2-####-###-###-#########"
- name: PASSWORD
valueFrom:
secretKeyRef:
name: password
key: password
imagePullPolicy: Always
restartPolicy: OnFailure
backoffLimit: 3
</code></pre>
| kddiji | <p><code>run.sh</code> is never being called. <code>docker-entrypoint.sh</code> needs to exec <code>run.sh</code> by adding <code>exec $@</code> at the bottom. But you don't really need the entrypoint anyways, those environment variables are already being exported into your environment by docker. I'm also not sure why you are specifying <code>command</code> and <code>args</code> in your yaml spec but I would get rid of those.</p>
<p>When you provide both an <code>ENTRYPOINT</code> and a <code>CMD</code> command in this form, the <code>CMD</code> params are passed to the entrypoint file, which then has the responsibility of executing the necessary process. You can review the documentation <a href="https://docs.docker.com/engine/reference/builder/#cmd" rel="nofollow noreferrer">here</a>.</p>
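<p>For reference, if you keep docker-entrypoint.sh as the image's ENTRYPOINT, a minimal sketch with the missing hand-off added at the bottom looks like this:</p>
<pre><code>#! /bin/sh
export AWS_REGION=$AWS_REGION
# ... the other exports ...

# hand control over to whatever CMD was passed to the container (here: sh -c ./run.sh)
exec "$@"
</code></pre>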
| kthompso |
<p>I'm attempting to start my Express.js application on GKE, however no matter which port I specify, I always get an error like so:</p>
<pre><code>Error: listen EACCES: permission denied tcp://10.3.253.94:3000
at Server.setupListenHandle [as _listen2] (net.js:1296:21)
at listenInCluster (net.js:1361:12)
at Server.listen (net.js:1458:5)
at Function.listen (/srv/node_modules/express/lib/application.js:618:24)
at Object.<anonymous> (/srv/src/index.js:42:5)
at Module._compile (internal/modules/cjs/loader.js:1137:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1157:10)
at Module.load (internal/modules/cjs/loader.js:985:32)
at Function.Module._load (internal/modules/cjs/loader.js:878:14)
at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:71:12)
</code></pre>
<p>I've tried multiple ports (8080, 8000, 3000). I've set the user to <code>root</code> in the Docker image.</p>
<p>Here's my setup:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: api
name: api
namespace: default
spec:
replicas: 1
selector:
matchLabels:
app: api
template:
metadata:
labels:
app: api
spec:
dnsPolicy: ClusterFirst
restartPolicy: Always
containers:
- image: gcr.io/ellioseven-kbp/journal-api:1.0.14
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3000
name: api
</code></pre>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
app: api
name: api
namespace: default
spec:
ports:
- port: 3000
protocol: TCP
targetPort: 3000
selector:
app: api
type: NodePort
</code></pre>
<pre><code>FROM node:12-alpine
ENV PATH /srv/node_modules/.bin:$PATH
ENV API_PORT 3000
ENV REDIS_HOST redis
COPY . /srv
WORKDIR /srv
ENV PATH /srv/node_modules/.bin:$PATH
RUN yarn install
CMD yarn start
EXPOSE 3000
USER root
</code></pre>
<pre><code>const port = process.env.API_PORT || 3000
app.listen(port, () => console.log("Listening on " + port))
</code></pre>
<p>I'm at a complete loss trying to solve this, any help would be greatly appreciated.</p>
| ellioseven | <p>Your issue is an <a href="https://github.com/kubernetes/kubernetes/issues/18219" rel="nofollow noreferrer">environment variable conflicting with the service name</a>.</p>
<p><a href="https://v1-16.docs.kubernetes.io/docs/concepts/containers/container-environment-variables/#cluster-information" rel="nofollow noreferrer">According to Kubernetes docs</a></p>
<blockquote>
<p>For a service named <code>foo</code> that maps to a Container named <code>bar</code>, the
following variables are defined:</p>
<pre class="lang-sh prettyprint-override"><code>FOO_SERVICE_HOST=<the host the service is running on>
FOO_SERVICE_PORT=<the port the service is running on>
</code></pre>
</blockquote>
<p>Change either the service name or the environment variable</p>
<p>For example, use <code>API_PORT_NUMBER</code> instead of <code>API_PORT</code>.</p>
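<p>Concretely, because your Service is named <code>api</code> and listens on port 3000, every new Pod in the namespace gets variables like these injected — which is exactly the <code>tcp://10.3.253.94:3000</code> value showing up in your EACCES error:</p>
<pre><code>API_PORT=tcp://10.3.253.94:3000
API_SERVICE_HOST=10.3.253.94
API_SERVICE_PORT=3000
</code></pre>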
| Anonymous |
<p>There is only 1 version per object in k8s 1.20 as can be checked by command:</p>
<pre><code>kubectl api-resources
</code></pre>
<p>Also, creating custom objects with different versions is not allowed. <code>AlreadyExists</code> is thrown on trying.</p>
<p>In what use cases is providing the <code>--api-version</code> option useful, then?</p>
| Grzegorz Wilanowski | <p>Command:</p>
<pre class="lang-yaml prettyprint-override"><code>kubectl api-resources
</code></pre>
<p>Print the supported API resources on the server. You can read more about this command and allowed options <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#api-resources" rel="nofollow noreferrer">here</a>. Supported flags are:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Name</th>
<th>Shorthand</th>
<th>Default</th>
<th>Usage</th>
</tr>
</thead>
<tbody>
<tr>
<td>api-group</td>
<td></td>
<td></td>
<td>Limit to resources in the specified API group.</td>
</tr>
<tr>
<td>cached</td>
<td></td>
<td>false</td>
<td>Use the cached list of resources if available.</td>
</tr>
<tr>
<td>namespaced</td>
<td></td>
<td>true</td>
<td>If false, non-namespaced resources will be returned, otherwise returning namespaced resources by default.</td>
</tr>
<tr>
<td>no-headers</td>
<td></td>
<td>false</td>
<td>When using the default or custom-column output format, don't print headers (default print headers).</td>
</tr>
<tr>
<td>output</td>
<td>o</td>
<td></td>
<td>Output format. One of: wide</td>
</tr>
<tr>
<td>sort-by</td>
<td></td>
<td></td>
<td>If non-empty, sort list of resources using specified field. The field can be either 'name' or 'kind'.</td>
</tr>
<tr>
<td>verbs</td>
<td></td>
<td>[]</td>
<td>Limit to resources that support the specified verbs.</td>
</tr>
</tbody>
</table>
</div>
<p>You can use <code>--api-group</code> option to limit to resources in the specified API group.</p>
<p>There also exist the command:</p>
<pre><code>kubectl api-versions
</code></pre>
<p>and it prints the supported API versions on the server, in the form of "group/version". You can read more about it <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#api-versions" rel="nofollow noreferrer">here</a>.</p>
<p>You can also read more about <a href="https://kubernetes.io/docs/concepts/overview/kubernetes-api/#api-groups-and-versioning" rel="nofollow noreferrer">API groups and versioning</a>.</p>
<p><strong>EDIT:</strong>
In the comment:</p>
<blockquote>
<p>No, see example "kubectl explain deployment --api-version v1". In other words: when there can be more then one api version of a resource?</p>
</blockquote>
<p>you are referring to a completely different command which is <code>kubectl explain</code>. Option <code>--api-version</code> gets different explanations for particular API version (API group/version). You can read more about it <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#explain" rel="nofollow noreferrer">here</a>.</p>
| Mikołaj Głodziak |
<p>If I look at my logs in GCP logs, I see for instance that I got a request that gave 500</p>
<pre><code> log_message: "Method: some_cloud_goo.Endpoint failed: INTERNAL_SERVER_ERROR"
</code></pre>
<p>I would like to quickly go to that pod and do a <code>kubectl logs</code> on it. But I did not find a way to do this.</p>
<p>I am fairly new to k8s and GKE, any way to traceback the pod that handled that request?</p>
| Al Wld | <p>You could run <code>kubectl get pods</code> to check the status of all the pods, and then get a detailed description of a suspect pod by running <code>kubectl describe pod <pod-name></code> and look at its logs with <code>kubectl logs</code>.</p>
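<p>For example (the pod name is a placeholder):</p>
<pre><code>kubectl get pods -o wide            # list all pods and the nodes they run on
kubectl describe pod <pod-name>     # events and container state for the suspect pod
kubectl logs <pod-name>             # the pod's own logs around the 500 error
</code></pre>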
| Neelam |
<p>Currently we have a CronJob to clean pods deployed by airflow.
Cleanup cron job in airflow is defined as follows
<a href="https://i.stack.imgur.com/1ZsWx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1ZsWx.png" alt="enter image description here" /></a></p>
<p>This cleans all completed pods (both successful pods and pods that are marked as Error).</p>
<p>I have a requirement where the cleanup-pods CronJob shouldn't clean pods that are marked as <strong>ERROR</strong>.
I checked the Airflow docs but couldn't find anything. Is there any other way I can achieve this?</p>
| user3865748 | <p>There are 2 Airflow environment variables that might help.</p>
<p>AIRFLOW__KUBERNETES__DELETE_WORKER_PODS - If True, all worker pods will be deleted upon termination</p>
<p>AIRFLOW__KUBERNETES__DELETE_WORKER_PODS_ON_FAILURE - If False (and delete_worker_pods is True), failed worker pods will not be deleted so users can investigate them. This only prevents removal of worker pods where the worker itself failed, not when the task it ran failed</p>
<p>for more details see <a href="https://airflow.apache.org/docs/apache-airflow/stable/configurations-ref.html#delete-worker-pods-on-failure" rel="nofollow noreferrer">here</a></p>
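<p>If your Airflow runs on Kubernetes, a sketch of setting these on the scheduler (the surrounding Deployment/Helm values depend on your setup) could look like:</p>
<pre><code>env:
  - name: AIRFLOW__KUBERNETES__DELETE_WORKER_PODS
    value: "True"
  - name: AIRFLOW__KUBERNETES__DELETE_WORKER_PODS_ON_FAILURE
    value: "False"   # keep failed worker pods around so they can be investigated
</code></pre>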
| ozs |
<p>We are trying to analyze a specific requirement for a container implementation and would like to know the maximum number of labels that can be created for a given pod in Kubernetes.
Does such a limit exist, or is it not defined?</p>
<p>Thanks in advance.</p>
| Chota Bheem | <p>Based on the <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="nofollow noreferrer">official Kubernetes documentation</a>, there is no stated limit on the number of labels per pod. But you should know that each valid label value:</p>
<blockquote>
<ul>
<li>must be 63 characters or less (can be empty),</li>
<li>unless empty, must begin and end with an alphanumeric character (<code>[a-z0-9A-Z]</code>),</li>
<li>could contain dashes (<code>-</code>), underscores (<code>_</code>), dots (<code>.</code>), and alphanumerics between.</li>
</ul>
</blockquote>
<p>If you want to know where the 63-character limit comes from, I recommend <a href="https://stackoverflow.com/questions/50412837/">this thread</a>, <a href="https://datatracker.ietf.org/doc/html/rfc1123" rel="nofollow noreferrer">RFC-1223</a> and <a href="https://stackoverflow.com/questions/32290167/what-is-the-maximum-length-of-a-dns-name#32294443">explanation</a>.</p>
<p>And <a href="https://stackoverflow.com/users/2525872/leroy">Leroy</a> well mentioned in the comment:</p>
<blockquote>
<p>keep in mind that all this data is being retrieved by an api.</p>
</blockquote>
| Mikołaj Głodziak |
<p>I deployed 3 lighthouse pods and 3 crawler pods on my Kubernetes cluster, based on this <a href="https://github.com/petabridge/Cluster.WebCrawler" rel="nofollow noreferrer">example</a>.
Right now the cluster looks like this:</p>
<pre><code>akka.tcp://[email protected]:5213 | [crawler] | up |
akka.tcp://[email protected]:5213 | [crawler] | up |
akka.tcp://[email protected]:4053 | [lighthouse] | up |
akka.tcp://[email protected]:4053 | [lighthouse] | up |
akka.tcp://[email protected]:4053 | [lighthouse] | up |
</code></pre>
<p>As you can see, there's no <strong>crawler-0.crawler</strong> node. Let's look at the node's logs.</p>
<pre><code>[WARNING][05/26/2020 10:07:24][Thread 0011][[akka://webcrawler/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Fwebcrawler%40lighthouse-1.lighthouse%3A4053-940/endpointWriter#501112873]] AssociationError [akka.tcp://[email protected]:5213] -> akka.tcp://[email protected]:4053: Error [Association failed with akka.tcp://[email protected]:4053] []
[WARNING][05/26/2020 10:07:24][Thread 0009][[akka://webcrawler/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2Fwebcrawler%40lighthouse-2.lighthouse%3A4053-941/endpointWriter#592338082]] AssociationError [akka.tcp://[email protected]:5213] -> akka.tcp://[email protected]:4053: Error [Association failed with akka.tcp://[email protected]:4053] []
[WARNING][05/26/2020 10:07:24][Thread 0008][remoting] Tried to associate with unreachable remote address [akka.tcp://[email protected]:4053]. Address is now gated for 5000 ms, all messages to this address will be delivered to dead letters. Reason: [Association failed with akka.tcp://[email protected]:4053] Caused by: [System.AggregateException: One or more errors occurred. (No such device or address) ---> System.Net.Internals.SocketExceptionFactory+ExtendedSocketException: No such device or address
at System.Net.Dns.InternalGetHostByName(String hostName)
at System.Net.Dns.ResolveCallback(Object context)
--- End of stack trace from previous location where exception was thrown ---
at System.Net.Dns.HostResolutionEndHelper(IAsyncResult asyncResult)
at System.Net.Dns.EndGetHostEntry(IAsyncResult asyncResult)
at System.Net.Dns.<>c.b__27_1(IAsyncResult asyncResult)
at System.Threading.Tasks.TaskFactory`1.FromAsyncCoreLogic(IAsyncResult iar, Func`2 endFunction, Action`1 endAction, Task`1 promise, Boolean requiresSynchronization)
--- End of stack trace from previous location where exception was thrown ---
at Akka.Remote.Transport.DotNetty.DotNettyTransport.ResolveNameAsync(DnsEndPoint address, AddressFamily addressFamily)
at Akka.Remote.Transport.DotNetty.DotNettyTransport.DnsToIPEndpoint(DnsEndPoint dns)
at Akka.Remote.Transport.DotNetty.TcpTransport.MapEndpointAsync(EndPoint socketAddress)
at Akka.Remote.Transport.DotNetty.TcpTransport.AssociateInternal(Address remoteAddress)
at Akka.Remote.Transport.DotNetty.DotNettyTransport.Associate(Address remoteAddress)
--- End of inner exception stack trace ---
at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification)
at Akka.Remote.Transport.ProtocolStateActor.<>c.b__11_54(Task`1 result)
at System.Threading.Tasks.ContinuationResultTaskFromResultTask`2.InnerInvoke()
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
--- End of stack trace from previous location where exception was thrown ---
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot)
---> (Inner Exception #0) System.Net.Internals.SocketExceptionFactory+ExtendedSocketException (00000005, 6): No such device or address
at System.Net.Dns.InternalGetHostByName(String hostName)
at System.Net.Dns.ResolveCallback(Object context)
--- End of stack trace from previous location where exception was thrown ---
at System.Net.Dns.HostResolutionEndHelper(IAsyncResult asyncResult)
at System.Net.Dns.EndGetHostEntry(IAsyncResult asyncResult)
at System.Net.Dns.<>c.b__27_1(IAsyncResult asyncResult)
at System.Threading.Tasks.TaskFactory`1.FromAsyncCoreLogic(IAsyncResult iar, Func`2 endFunction, Action`1 endAction, Task`1 promise, Boolean requiresSynchronization)
--- End of stack trace from previous location where exception was thrown ---
at Akka.Remote.Transport.DotNetty.DotNettyTransport.ResolveNameAsync(DnsEndPoint address, AddressFamily addressFamily)
at Akka.Remote.Transport.DotNetty.DotNettyTransport.DnsToIPEndpoint(DnsEndPoint dns)
at Akka.Remote.Transport.DotNetty.TcpTransport.MapEndpointAsync(EndPoint socketAddress)
at Akka.Remote.Transport.DotNetty.TcpTransport.AssociateInternal(Address remoteAddress)
at Akka.Remote.Transport.DotNetty.DotNettyTransport.Associate(Address remoteAddress)<---
]
</code></pre>
<p>While this node keeps spamming this exception, the other 2 crawlers stay quiet and seem to do nothing.<br>
These are the 2 YAML files I used to deploy the services:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: crawler
labels:
app: crawler
spec:
clusterIP: None
ports:
- port: 5213
selector:
app: crawler
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: crawler
labels:
app: crawler
spec:
serviceName: crawler
replicas: 3
selector:
matchLabels:
app: crawler
template:
metadata:
labels:
app: crawler
spec:
terminationGracePeriodSeconds: 35
containers:
- name: crawler
image: myregistry.ru:443/crawler:3
lifecycle:
preStop:
exec:
command: ["/bin/sh", "-c", "pbm 127.0.0.1:9110 cluster leave"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: CLUSTER_IP
value: "$(POD_NAME).crawler"
- name: CLUSTER_SEEDS
value: akka.tcp://[email protected]:4053,akka.tcp://[email protected]:4053,akka.tcp://[email protected]:4053
livenessProbe:
tcpSocket:
port: 5213
ports:
- containerPort: 5213
protocol: TCP
</code></pre>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: lighthouse
labels:
app: lighthouse
spec:
clusterIP: None
ports:
- port: 4053
selector:
app: lighthouse
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: lighthouse
labels:
app: lighthouse
spec:
serviceName: lighthouse
replicas: 3
selector:
matchLabels:
app: lighthouse
template:
metadata:
labels:
app: lighthouse
spec:
terminationGracePeriodSeconds: 35
containers:
- name: lighthouse
image: myregistry.ru:443/lighthouse:1
lifecycle:
preStop:
exec:
command: ["/bin/sh", "-c", "pbm 127.0.0.1:9110 cluster leave"]
env:
- name: ACTORSYSTEM
value: webcrawler
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: CLUSTER_IP
value: "$(POD_NAME).lighthouse"
- name: CLUSTER_SEEDS
value: akka.tcp://[email protected]:4053,akka.tcp://[email protected]:4053,akka.tcp://[email protected]:4053
livenessProbe:
tcpSocket:
port: 4053
ports:
- containerPort: 4053
protocol: TCP
</code></pre>
<p>I assume that if the error above gets fixed, everything should work OK. Any ideas how to solve it?</p>
| r.slesarev | <p>OK, I managed to fix it. One of the Kubernetes nodes couldn't resolve DNS names. A simple reboot of the node solved the issue.</p>
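<p>If you hit this again, you can usually confirm the DNS problem before rebooting by running a throwaway pod and resolving the missing peer's headless-service name (a sketch; pin it to the suspect node with a nodeSelector/nodeName if needed):</p>
<pre><code>kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 \
  -- nslookup crawler-0.crawler
</code></pre>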
| r.slesarev |
<p>I have one question regarding migration from the Nginx controller to ALB. During the migration, will k8s create a new ingress controller and smoothly switch services to the new ingress, or will it delete the old one and only then create the new ingress? I ask because we want to change the ingress class and would like to minimize any downtime. Sorry for the newbie question, but I didn't find any answer in the docs.</p>
| Andrew Striletskyi | <ol>
<li>First, when transitioning from one infrastructure to another, it's best to pre-build the new infrastructure ahead of the transition so it will be ready to be changed.</li>
<li>In this specific example, you can set up the two IngressClasses to exist in parallel, and create the new ALB ingress with a different domain name.</li>
<li>In the transition moment, change the DNS alias record (directly or using annotations) to point at the new ALB ingress and delete the older Nginx ingress.</li>
<li>In general, I recommend managing the ALB not as ingress from K8s, but as an AWS resource in Terraform/CloudFormation or similar and using TargetGroupBindings to connect the ALB to the application using its K8s Services.
<a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/guide/targetgroupbinding/targetgroupbinding/" rel="nofollow noreferrer">https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/guide/targetgroupbinding/targetgroupbinding/</a></li>
</ol>
| Tamir |
<p>I'm working on migrating our services from an EKS 1.14 cluster to EKS 1.18. I see lots of errors on some of our deployments.</p>
<p>Can someone please let me know how I can solve this error?</p>
<pre><code>May 19th 2021, 10:56:30.297 io.fabric8.kubernetes.client.KubernetesClientException: too old resource version: 13899376 (13911551)
at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager$1.onMessage(WatchConnectionManager.java:259)
at okhttp3.internal.ws.RealWebSocket.onReadMessage(RealWebSocket.java:323)
at okhttp3.internal.ws.WebSocketReader.readMessageFrame(WebSocketReader.java:219)
at okhttp3.internal.ws.WebSocketReader.processNextFrame(WebSocketReader.java:105)
at okhttp3.internal.ws.RealWebSocket.loopReader(RealWebSocket.java:274)
at okhttp3.internal.ws.RealWebSocket$2.onResponse(RealWebSocket.java:214)
</code></pre>
| user6826691 | <p>This is standard behaviour of Kubernetes: you asked a watch to report changes from a <code>resourceVersion</code> that is too old, i.e. the API server can no longer tell you what has changed since that version because too many things have changed in the meantime. So, you should avoid upgrading several versions at once. Try to update your cluster from 1.14 to 1.15, then from 1.15 to 1.16, and so on.
You can also read more about a very similar problem <a href="https://stackoverflow.com/questions/61409596/kubernetes-too-old-resource-version">here</a>, where you can find another solution to your problem.</p>
<p>Secondary in the <a href="https://docs.aws.amazon.com/eks/latest/userguide/update-cluster.html" rel="nofollow noreferrer">documentation</a> of Amazon EKS, we can find:</p>
<blockquote>
<p>Updating a cluster from 1.16 to 1.17 will fail if you have any AWS Fargate pods that have a <code>kubelet</code> minor version earlier than 1.16. Before updating your cluster from 1.16 to 1.17, you need to recycle your Fargate pods so that their <code>kubelet</code> is 1.16 before attempting to update the cluster to 1.17.</p>
</blockquote>
<p>Based on this example and the huge amount of dependencies, it is a good idea to upgrade the cluster version by version.</p>
| Mikołaj Głodziak |
<p>I'm trying to deploy a custom pod on minikube and I'm getting the following message regardless of my tweaks:</p>
<pre><code>Failed to load logs: container "my-pod" in pod "my-pod-766c646c85-nbv4c" is waiting to start: image can't be pulled
Reason: BadRequest (400)
</code></pre>
<p>I did all sorts of experiments based on <a href="https://minikube.sigs.k8s.io/docs/handbook/pushing/" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/handbook/pushing/</a> and <a href="https://number1.co.za/minikube-deploy-a-container-using-a-private-image-registry/" rel="nofollow noreferrer">https://number1.co.za/minikube-deploy-a-container-using-a-private-image-registry/</a> without success.
I ended up trying to use <code>minikube image load myimage:latest</code> and reference it in the container spec as:</p>
<pre><code> ...
containers:
- name: my-pod
image: myimage:latest
ports:
- name: my-pod
containerPort: 8080
protocol: TCP
...
</code></pre>
<p>Should/can I use <code>minikube image</code>?
If so, should I use the full image name <code>docker.io/library/myimage:latest</code> or just the image suffix <code>myimage:latest</code>?
Is there anything else I need to do to make minikube locate the image?
Is there a way to get the logs of the bad request itself to see what is going on (I don't see anything in the api server logs)?</p>
<p>I also see the following error in the minikube system:</p>
<pre><code>Failed to load logs: container "registry-creds" in pod "registry-creds-6b884645cf-gkgph" is waiting to start: ContainerCreating
Reason: BadRequest (400)
</code></pre>
<p>Thanks!
Amos</p>
| amos | <p>You should set the <a href="https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy" rel="nofollow noreferrer">imagePullPolicy</a> to <code>IfNotPresent</code>. Changing that will tell kubernetes to not pull the image if it does not need to.</p>
<pre class="lang-yaml prettyprint-override"><code> ...
containers:
- name: my-pod
image: myimage:latest
imagePullPolicy: IfNotPresent
ports:
- name: my-pod
containerPort: 8080
protocol: TCP
...
</code></pre>
<p>A quirk of kubernetes is that if you specify an image with the <code>latest</code> tag as you have here, it will default to using <code>imagePullPolicy=Always</code>, which is why you are seeing this error.</p>
<p><a href="https://kubernetes.io/docs/concepts/containers/images/#imagepullpolicy-defaulting" rel="nofollow noreferrer">More on how kubernetes decides the default image pull policy</a></p>
<p>If you need your image to always be pulled in production, consider using <a href="https://helm.sh/docs/" rel="nofollow noreferrer">helm</a> to template your kubernetes yaml configuration.</p>
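<p>As for the <code>minikube image</code> part of the question: loading the image and referring to it by the short tag is fine. On recent minikube versions you can also double-check that the load actually landed inside the cluster (the listed name may show up with a <code>docker.io/library/</code> prefix):</p>
<pre><code>minikube image load myimage:latest
minikube image ls | grep myimage
</code></pre>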
| John Cunniff |
<p>database-deployment.yaml</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: postgres
name: postgres-db
spec:
replicas:
selector:
matchLabels:
app: postgres-db
template:
metadata:
labels:
app: postgres-db
spec:
containers:
- name: postgres-db
image: postgres:latest
ports:
- protocol: TCP
containerPort: 1234
env:
- name: POSTGRES_DB
value: "classroom"
- name: POSTGRES_USER
value: temp
- name: POSTGRES_PASSWORD
value: temp
</code></pre>
<p>database-service.yaml</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: database-service
spec:
selector:
app: postgres-db
ports:
- protocol: TCP
port: 1234
targetPort: 1234
</code></pre>
<p>I want to use this database-service URL in another deployment, so I tried to add it to a ConfigMap.</p>
<p>my-configMap.yaml</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: classroom-configmap
data:
database_url: database-service
</code></pre>
<p>[Not Working] Expected - database_url : database-service (will be replaced with corresponding service URL)</p>
<p><code>ERROR - Driver org.postgresql.Driver claims to not accept jdbcUrl, database-service</code></p>
<pre><code>$ kubectl describe configmaps classroom-configmap
</code></pre>
<p>Output :</p>
<pre><code>Name: classroom-configmap
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
database_url:
----
database-service
BinaryData
====
Events: <none>
</code></pre>
| Shiru99 | <p>Updated my-configMap.yaml (database_url):</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
name: classroom-configmap
data:
database_url: jdbc:postgresql://database-service.default.svc.cluster.local:5432/classroom
</code></pre>
<p>expected URL - jdbc:{DATABASE}://{DATABASE_SERVICE with NAMESPACE}:{DATABASE_PORT}/{DATABASE_NAME}</p>
<p>DATABASE_SERVICE - <code>database-service</code></p>
<p>NAMESPACE - <code>default</code></p>
<p>DATABASE_SERVICE with NAMESPACE - <code>database-service.default.svc.cluster.local</code></p>
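<p>The other deployment can then consume this key as an environment variable, for example (the variable name is whatever your app expects):</p>
<pre><code>env:
  - name: SPRING_DATASOURCE_URL
    valueFrom:
      configMapKeyRef:
        name: classroom-configmap
        key: database_url
</code></pre>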
| Shiru99 |
<p>I was trying to pull an image from docker.io, but I'm getting this error. I recently changed my DNS and I'm not sure whether that is the reason. I executed <code>minikube ssh</code>, then ran <code>docker pull</code>, and got this error:</p>
<pre><code>Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 10.0.2.3:53: read udp 10.0.2.15:32905->10.0.2.3:53: i/o timeout
</code></pre>
<p>My environment:</p>
<ul>
<li>Docker version: 19.03.1</li>
<li>minikube version: 1.2.0</li>
<li>Ubuntu version: 18.04</li>
</ul>
<p>This is the content of my resolv.conf.d --> head file:</p>
<p>nameserver 192.xxx.1x8.x</p>
| uvindu sri | <p>I was working on a side project and at some point needed to pull a new docker image (<code>docker pull nginx</code>) that wasn't on my machine. When I tried to pull it I got this error:</p>
<blockquote>
<p>Error response from daemon: Get <a href="https://registry-1.docker.io/v2/" rel="noreferrer">https://registry-1.docker.io/v2/</a>: dial tcp: lookup registry-1.docker.io on 10.0.0.1:53: read udp 10.0.0.30:55526->10.0.0.1:53: i/o timeout.</p>
</blockquote>
<p>I was surprised to see that, but I managed to find quick solution for this:</p>
<ul>
<li>Edit your DNS resolver config file: <code>sudo nano /etc/resolv.conf</code></li>
<li>Change or add nameserver <code>8.8.8.8</code> at the end of the file and you're good to go.</li>
</ul>
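<p>After the edit the file should contain a line like the following (keep any existing nameserver entries you still need):</p>
<pre><code># /etc/resolv.conf
nameserver 8.8.8.8
</code></pre>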
| Ibrahim Haruna |
<p>I am getting the following error:</p>
<p>"Specify a project or solution file. The current working directory does not contain a project or solution file."</p>
<p>This happens while trying to build the image, as shown in the screenshot below:</p>
<p><a href="https://i.stack.imgur.com/x0XU6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/x0XU6.png" alt="enter image description here" /></a></p>
<p>The following image shows my Docker commands.</p>
<p>The images also show the code I am trying to build: the structure is an src folder, with the solution file outside the src folder.</p>
<p>I strongly believe the issue is that the solution file is not in src.</p>
<p><a href="https://i.stack.imgur.com/GvFvh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GvFvh.png" alt="enter image description here" /></a></p>
<p>Please recommend what should change in the Docker files to make the build and push succeed.</p>
<p>I am also showing the deployment.yaml file with the build steps below:</p>
<pre><code> name: test run
jobs:
- job: Job_1
displayName: Agent job 1
pool:
vmImage: ubuntu-18.04
steps:
- checkout: self
- task: Docker@0
displayName: Build an image
inputs:
azureSubscription: 'sc-abc'
azureContainerRegistry:
loginServer: acr.azurecr.io
id: "/subscriptions/4f76bb2f-c521-45d1-b311-xxxxxxxxxx/resourceGroups/eus-abc-rg/providers/Microsoft.ContainerRegistry/registries/acr"
imageName: acr.azurecr.io/ims-abc/dotnetapi:jp26052022v8
- task: Docker@0
displayName: Push an image
inputs:
azureSubscription: 'sc-abc'
azureContainerRegistry: '{"loginServer":"acr.azurecr.io", "id" : "/subscriptions/4f76bb2f-c521-45d1-b311-xxxxxxxxxx/resourceGroups/eus-icndp-rg/providers/Microsoft.ContainerRegistry/registries/acr"}'
action: Push an image
</code></pre>
| SmartestVEGA | <p>Please move <code>WORKDIR /APP</code> before <code>COPY . .</code> so that the files are copied into the working directory and the build can find the solution file there.</p>
<p>You need to build the image in the pipeline first, then push the image to ACR. Please check this <a href="https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/build/docker?view=azure-devops#build-and-push" rel="nofollow noreferrer">link</a> for reference.</p>
| wade zhou - MSFT |
<p>Requirement: deploy multiple releases of chart A to single k8s namespace</p>
<p>I have a complex problem with subchart name configuration. The relations between my charts are:</p>
<pre><code>A
|- B
|- C
|- D - postgres
|- E - graphql-engine
|- F - postgres
|- G - graphql-engine
</code></pre>
<ul>
<li>A depends on B, F, G</li>
<li>B depends on C</li>
<li>C depends on D, E</li>
</ul>
<p>A chart of type graphql-engine can require zero or N variables depending on which application it serves (if you know this app, that means a backend URL, action URL, trigger URL, etc.). In instance E the variables should point to application C, and in instance G they should point to A.</p>
<p>I made Helm chart for graphql-engine with this part in Deployment`s container section:</p>
<pre><code> env:
{{- range $k, $v := .Values.environmentVariables }}
- name: {{ quote $k }}
value: {{ quote "$v" }}
{{- end }}
</code></pre>
<p>To get the right names for the subcharts, I am doing this in A`s variables.yaml file:</p>
<pre><code>B:
C:
nameOverride: A-B-C
D:
nameOverride: A-B-C-D
E:
nameOverride: A-B-C-E
F:
nameOverride: A-F
G:
nameOverride: A-G
</code></pre>
<p>Default chart`s _helpers.tpl file prefixs nameOverride variable with .Release.Name variable.
It is not nice and optimal but I did not find a way to made this process to be dynamically created. Is here someone who knows better way to do this naming? That is my first question.</p>
<p>To simplify my problem: I need to pass a variable list like this:</p>
<ul>
<li><strong>VAR1="http://{{ .Release.Name }}-A-B-C:8080/graphql"</strong></li>
<li><strong>VAR2="http://{{ .Release.Name }}-A-B-C:8080/actions"</strong></li>
</ul>
<p>into chart E from chart A. But I did not find a way to let Go templating expand the .Release.Name variable. I made this in A's variables.yaml:</p>
<pre><code>B:
C:
nameOverride: A-B-C
D:
nameOverride: A-B-C-D
E:
nameOverride: A-B-C-E
extraVariables:
VAR1: "http://{{ .Release.Name }}-A-B-C:8080/graphql"
VAR2: "http://{{ .Release.Name }}-A-B-C:8080/actions"
F:
nameOverride: A-F
G:
nameOverride: A-G
extraVariables:
VAR1: "http://{{ .Release.Name }}-A:8080/graphql"
</code></pre>
<p>But I did not find a way to use the Helm tpl function inside the range block with the dollar variable as input, or another possibility to accomplish this. I tried to just include a "template" which I can define in chart A, but it had the wrong variable context and is used in every graphql-engine chart instance, which is not right here.</p>
<p>The real A application has more levels of dependencies, but that is not important for this problem. Is this the wrong way to do it? How are you creating names for k8s objects, and how are you setting URL variables for your applications?</p>
<p>Thank you!</p>
| Radim | <p>The answer to the question about looping over the list while calling the tpl function is this: you just have to change the context passed to tpl.</p>
<pre><code>{{- range $k, $v := .Values.environmentVariables }}
- name: {{ quote $k }}
value: {{ tpl $v $ }}
{{- end }}
</code></pre>
| Radim |
<p>Apply the following YAML file into a Kubernetes cluster:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: freebox
spec:
containers:
- name: busybox
image: busybox:latest
imagePullPolicy: IfNotPresent
</code></pre>
<p>Could the status be "Running" if I run <code>kubectl get pod freebox</code>? Why?</p>
| qqwenwenti | <p>The busybox image needs a long-running command to keep the container (and therefore the Pod) in the Running state; otherwise its default shell exits immediately.</p>
<p>Add a command in the .spec.containers section under the busybox container:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: freebox
spec:
containers:
- name: busybox
command:
- sleep
      - "4800"
image: busybox:latest
imagePullPolicy: IfNotPresent
</code></pre>
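<p>Assuming the manifest is saved as <code>freebox.yaml</code>, you can verify the fix like this; the pod should now report <code>Running</code> for as long as the sleep lasts:</p>
<pre><code>kubectl apply -f freebox.yaml
kubectl get pod freebox
</code></pre>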
| SAEED mohassel |