Columns: Question (string), QuestionAuthor (string), Answer (string), AnswerAuthor (string)
<p>I am a beginner at K8s and I'm using GitHub Actions. I have 3 environments (dev, pre-prod, prod) and one namespace for each environment. I want to have a second environment (pre-prod-2) inside my pre-production namespace; is that possible, and what would the YAML file look like?</p> <p>Thank you</p>
dia
<p>To create another independent deployment in the same namespace, take your existing Deployment YAML and change the following fields:</p> <ul> <li>metadata.name</li> <li>spec.selector.matchLabels.app</li> <li>template.metadata.labels.app</li> </ul> <p>It will be sufficient to just append a &quot;2&quot; to each of these values.</p>
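<p>As a minimal sketch of that (the container image and names below are placeholders, not taken from the question), the second Deployment could look like this:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp2                 # was: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp2              # was: myapp
  template:
    metadata:
      labels:
        app: myapp2            # was: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest    # placeholder image
</code></pre>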
jwhb
<p>We get this error when uploading a large file (more than 10Mb but less than 100Mb):</p> <pre><code>403 POST https://www.googleapis.com/upload/storage/v1/b/dm-scrapes/o?uploadType=resumable: ('Response headers must contain header', 'location') </code></pre> <p>Or this error when the file is more than 5Mb</p> <pre><code>403 POST https://www.googleapis.com/upload/storage/v1/b/dm-scrapes/o?uploadType=multipart: ('Request failed with status code', 403, 'Expected one of', &lt;HTTPStatus.OK: 200&gt;) </code></pre> <p>It seems that this API is looking at the file size and trying to upload it via multi part or resumable method. I can't imagine that is something that as a caller of this API I should be concerned with. Is the problem somehow related to permissions? Does the bucket need special permission do it can accept multipart or resumable upload. </p> <pre><code>from google.cloud import storage try: client = storage.Client() bucket = client.get_bucket('my-bucket') blob = bucket.blob('blob-name') blob.upload_from_filename(zip_path, content_type='application/gzip') except Exception as e: print(f'Error in uploading {zip_path}') print(e) </code></pre> <p>We run this inside a Kubernetes pod so the permissions get picked up by storage.Client() call automatically. </p> <p>We already tried these:</p> <ul> <li><p>Can't upload with gsutil because the container is Python 3 and <a href="https://github.com/GoogleCloudPlatform/gsutil/issues/29" rel="noreferrer">gsutil does not run in python 3</a>.</p></li> <li><p><a href="https://dev.to/sethmichaellarson/python-data-streaming-to-google-cloud-storage-with-resumable-uploads-458h" rel="noreferrer">Tried this example</a>: but runs into the same error: <code>('Response headers must contain header', 'location')</code></p></li> <li><p><a href="https://googlecloudplatform.github.io/google-resumable-media-python/latest/google.resumable_media.requests.html#resumable-uploads" rel="noreferrer">There is also this library.</a> But it is basically alpha quality with little activity and no commits for a year. </p></li> <li>Upgraded to google-cloud-storage==1.13.0</li> </ul> <p>Thanks in advance</p>
David Dehghan
<p>The problem was indeed the credentials. Somehow the error message was very misleading. When we loaded the credentials explicitly, the problem went away.</p> <pre><code> # Explicitly use service account credentials by specifying the private key file. storage_client = storage.Client.from_service_account_json( 'service_account.json') </code></pre>
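<p>If you keep this approach inside Kubernetes, one common pattern (a sketch, assuming you have created a Secret named <code>gcs-credentials</code> from the JSON key file) is to mount the key into the pod and point the Google client libraries at it via the standard <code>GOOGLE_APPLICATION_CREDENTIALS</code> variable:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: uploader                     # placeholder pod name
spec:
  containers:
  - name: uploader
    image: my-uploader:latest        # placeholder image
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /secrets/gcp/service_account.json
    volumeMounts:
    - name: gcp-key
      mountPath: /secrets/gcp
      readOnly: true
  volumes:
  - name: gcp-key
    secret:
      secretName: gcs-credentials    # assumed Secret holding service_account.json
</code></pre>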
David Dehghan
<p>On my GCE Kubernetes cluster I can no longer create pods.</p> <pre><code>Warning FailedScheduling pod (www.caveconditions.com-f1be467e31c7b00bc983fbe5efdbb8eb-438ef) failed to fit in any node fit failure on node (gke-prod-cluster-default-pool-b39c7f0c-c0ug): Insufficient CPU </code></pre> <p>Looking at the allocated stats of that node</p> <pre><code>Non-terminated Pods: (8 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- default dev.caveconditions.com-n80z8 100m (10%) 0 (0%) 0 (0%) 0 (0%) default lamp-cnmrc 100m (10%) 0 (0%) 0 (0%) 0 (0%) default mongo-2-h59ly 200m (20%) 0 (0%) 0 (0%) 0 (0%) default www.caveconditions.com-tl7pa 100m (10%) 0 (0%) 0 (0%) 0 (0%) kube-system fluentd-cloud-logging-gke-prod-cluster-default-pool-b39c7f0c-c0ug 100m (10%) 0 (0%) 200Mi (5%) 200Mi (5%) kube-system kube-dns-v17-qp5la 110m (11%) 110m (11%) 120Mi (3%) 220Mi (5%) kube-system kube-proxy-gke-prod-cluster-default-pool-b39c7f0c-c0ug 100m (10%) 0 (0%) 0 (0%) 0 (0%) kube-system kubernetes-dashboard-v1.1.0-orphh 100m (10%) 100m (10%) 50Mi (1%) 50Mi (1%) Allocated resources: (Total limits may be over 100%, i.e., overcommitted. More info: http://releases.k8s.io/HEAD/docs/user-guide/compute-resources.md) CPU Requests CPU Limits Memory Requests Memory Limits ------------ ---------- --------------- ------------- 910m (91%) 210m (21%) 370Mi (9%) 470Mi (12%) </code></pre> <p>Sure I have 91% allocated and can not fit another 10% into it. But is it not possible to over commit resources?</p> <p>The usage of the server is at about 10% CPU average</p> <p><a href="https://i.stack.imgur.com/wBSuc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wBSuc.png" alt="enter image description here" /></a></p> <p>What changes do I need to make for my Kubernetes cluster to be able to create more pods?</p>
Chris
<p>I recently had this same issue. After some research, I found that GKE has a default <code>LimitRange</code> with CPU requests limit set to <code>100m</code>.</p> <p>You can validate this by running <code>kubectl get limitrange -o=yaml</code>. It's going to display something like this:</p> <pre><code>apiVersion: v1 items: - apiVersion: v1 kind: LimitRange metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {&quot;apiVersion&quot;:&quot;v1&quot;,&quot;kind&quot;:&quot;LimitRange&quot;,&quot;metadata&quot;:{&quot;annotations&quot;:{},&quot;name&quot;:&quot;limits&quot;,&quot;namespace&quot;:&quot;default&quot;},&quot;spec&quot;:{&quot;limits&quot;:[{&quot;defaultRequest&quot;:{&quot;cpu&quot;:&quot;100m&quot;},&quot;type&quot;:&quot;Container&quot;}]}} creationTimestamp: 2017-11-16T12:15:40Z name: limits namespace: default resourceVersion: &quot;18741722&quot; selfLink: /api/v1/namespaces/default/limitranges/limits uid: dcb25a24-cac7-11e7-a3d5-42010a8001b6 spec: limits: - defaultRequest: cpu: 100m type: Container kind: List metadata: resourceVersion: &quot;&quot; selfLink: &quot;&quot; </code></pre> <p>This limit is applied to every container. So, for instance, if you have a 4 cores node and each pod creates 2 containers, it will allow only for around ~20 pods to be created (4 cpus = 4000m -&gt; / 100m = 40 -&gt; / 2 = 20).</p> <p>The &quot;fix&quot; here is to change the default <code>LimitRange</code> to one that better fits your use-case and then remove old pods allowing them to be recreated with the updated values. Another (and probably better) option is to directly set the CPU limits on each deployment/pod definition you have.</p> <p>Some reading material:</p> <p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#specify-a-cpu-request-and-a-cpu-limit" rel="noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#specify-a-cpu-request-and-a-cpu-limit</a></p> <p><a href="https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/#create-a-limitrange-and-a-pod" rel="noreferrer">https://kubernetes.io/docs/tasks/administer-cluster/manage-resources/cpu-default-namespace/#create-a-limitrange-and-a-pod</a></p> <p><a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-limits-are-run" rel="noreferrer">https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#how-pods-with-resource-limits-are-run</a></p> <p><a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-resource-requests-and-limits" rel="noreferrer">https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-resource-requests-and-limits</a></p>
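<p>As a sketch of the second option (the numbers below are placeholders, size them for your workload), setting explicit resources on the container means the LimitRange default no longer applies:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: www-caveconditions         # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: www-caveconditions
  template:
    metadata:
      labels:
        app: www-caveconditions
    spec:
      containers:
      - name: web
        image: nginx:latest        # placeholder image
        resources:
          requests:
            cpu: 20m               # explicit request overrides the 100m LimitRange default
          limits:
            cpu: 200m
</code></pre>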
jonathancardoso
<p>I run a Kubernetes cluster with cert-manager installed for managing ACME (Let's Encrypt) certificates. I'm using DNS domain validation with Route 53 and it all works fine.</p> <p>The problem comes when I try to issue a certificate for a cluster-internal domain. In this case domain validation does not pass, since the validation challenge is presented on the external Route 53 zone only, while cert-manager tries to look up the domain name via cluster-internal DNS.</p> <p>Any hints on how this can be solved are welcome.</p>
roman
<p>Assuming that you don't control public DNS for your cluster internal domain, you will not be able to receive LetsEncrypt certificates for it.</p> <p>You may however set up another issuer that will grant you certificates for this domain, e.g. the SelfSigned issuer: <a href="https://cert-manager.io/docs/configuration/selfsigned/" rel="nofollow noreferrer">https://cert-manager.io/docs/configuration/selfsigned/</a> Then set the <code>issuerRef</code> of your certificate object to point to your SelfSigned issuer:</p> <pre class="lang-yaml prettyprint-override"><code>(...) issuerRef: name: selfsigned-issuer kind: ClusterIssuer group: cert-manager.io </code></pre>
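<p>A minimal sketch of such a ClusterIssuer (using the current <code>cert-manager.io/v1</code> API; the name just has to match the <code>issuerRef</code> above):</p> <pre><code>apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
</code></pre>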
jwhb
<p>We are enabling Google Cloud Groups RBAC in our existing GKE clusters.</p> <p>For that, we first created all the groups in Workspace, and also the required &quot;[email protected]&quot; according to documentation.</p> <p>Those groups are created in Workspace with an integration with Active Directory for Single Sign On.</p> <p>All groups are members of &quot;gke-security-groups@ourdomain&quot; as stated by documentation. And all groups can View members.</p> <p>The cluster was updated to enabled the flag for Google Cloud Groups RBAC and we specify the value to be &quot;[email protected]&quot;.</p> <p>We then Added one of the groups (let's called it [email protected]) to IAM and assigned a custom role which only gives access to:</p> <pre><code>&quot;container.apiServices.get&quot;, &quot;container.apiServices.list&quot;, &quot;container.clusters.getCredentials&quot;, &quot;container.clusters.get&quot;, &quot;container.clusters.list&quot;, </code></pre> <p>This is just the minimum for the user to be able to log into the Kubernetes cluster and from there being able to apply Kubernetes RBACs.</p> <p>In Kubernetes, we applied a Role, which provides list of pods in a specific namespace, and a role binding that specifies the group we just added to IAM.</p> <pre><code>kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: test-role namespace: custom-namespace rules: - apiGroups: [&quot;&quot;] resources: [&quot;pods&quot;] verbs: [&quot;get&quot;, &quot;list&quot;, &quot;watch&quot;] </code></pre> <hr /> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: test-rolebinding namespace: custom-namespace roleRef: kind: Role name: test-role apiGroup: rbac.authorization.k8s.io subjects: - kind: Group name: [email protected] </code></pre> <p>Everything looks good until now. But when trying to list the pods of this namespace with the user that belongs to the group &quot;[email protected]&quot;, we get:</p> <blockquote> <p>Error from server (Forbidden): pods is forbidden: User &quot;[email protected]&quot; cannot list resource &quot;pods&quot; in API group &quot;&quot; in the namespace &quot;custom-namespace&quot;: requires one of [&quot;container.pods.list&quot;] permission(s).</p> </blockquote> <p>Of course if I give container.pods.list to the group_a@ourdomain assigned role, I can list pods, but it opens for all namespaces, as this permission in GCloud is global.</p> <p>What am I missing here?</p> <p>Not sure if this is relevant, but our organisation in gcloud is called for example &quot;my-company.io&quot;, while the groups for SSO are named &quot;[email protected]&quot;, and the gke-security-groups group was also created with the &quot;groups.my-company.io&quot; domain.</p> <p>Also, if instead of a Group in the RoleBinding, I specify the user directly, it works.</p>
codiaf
<p>It turned out to be a case-sensitivity issue and nothing related to the actual rules defined in the RBACs, which were working as expected.</p> <p>The names of the groups were created in Azure AD in camel case. These group names were then shown in Google Workspace all lowercase.</p> <p><strong>Example in Azure AD:</strong> [email protected]</p> <p><strong>Example configured in the RBACs as shown in Google Workspace:</strong> [email protected]</p> <p>We copied the names from the Google Workspace UI all lowercase and put them in the bindings, and that caused the issue. Kubernetes GKE is case sensitive, and it didn't match the name configured in the binding with the email configured in Google Workspace.</p> <p>After changing the RBAC bindings to use the same format, everything worked as expected.</p>
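<p>In other words, the subject in the RoleBinding has to keep the original Azure AD casing (a sketch with a hypothetical group name, since the real ones are redacted above):</p> <pre><code>subjects:
- kind: Group
  name: [email protected]   # hypothetical; keep the Azure AD camel case, not the lowercased Workspace form
  apiGroup: rbac.authorization.k8s.io
</code></pre>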
codiaf
<p>I am trying to use sidecar mode in kubernetes to create a logs sidecar to expose specific container logs. And I am using kubernetes client to fetch logs from kubernetes api and send it out by websocket. The code shows below:</p> <pre><code>func serveWs(w http.ResponseWriter, r *http.Request) { w.Header().Set("Access-Control-Allow-Origin", "*") conn, err := upgrader.Upgrade(w, r, nil) if err != nil { if _, ok := err.(websocket.HandshakeError); !ok { log.Println(err) } return } defer conn.Close() logsClient, err := InitKubeLogsClient(config.InCluster) if err != nil { log.Fatalln(err) } stream, err := logsClient.GetLogs(config.Namespace, config.PodName, config.ContainerName) if err != nil { log.Fatalln(err) } defer stream.Close() reader := bufio.NewReader(stream) for { line, err := reader.ReadString('\n') if err != nil { log.Fatalln(err) } conn.WriteMessage(websocket.TextMessage, []byte(line)) } } </code></pre> <p>I am using <a href="https://github.com/gorilla/websocket" rel="nofollow noreferrer">https://github.com/gorilla/websocket</a> as the websocket lib. And on the browser</p> <p>Is this the best way to do what I want? Is there some better way to just expose the logs api from k8s to websocket?</p>
aisensiy
<p>Put my final code here, thanks for the tips from @Peter:</p> <pre><code>func serveWs(w http.ResponseWriter, r *http.Request) { w.Header().Set("Access-Control-Allow-Origin", "*") conn, err := upgrader.Upgrade(w, r, nil) if err != nil { if _, ok := err.(websocket.HandshakeError); !ok { log.Println(err) } return } log.Println("create new connection") defer func() { conn.Close() log.Println("connection close") }() logsClient, err := InitKubeLogsClient(config.InCluster) if err != nil { log.Println(err) return } stream, err := logsClient.GetLogs(config.Namespace, config.PodName, config.ContainerName) if err != nil { log.Println(err) return } defer stream.Close() reader := bufio.NewReaderSize(stream, 16) lastLine := "" for { data, isPrefix, err := reader.ReadLine() if err != nil { log.Println(err) return } lines := strings.Split(string(data), "\r") length := len(lines) if len(lastLine) &gt; 0 { lines[0] = lastLine + lines[0] lastLine = "" } if isPrefix { lastLine = lines[length-1] lines = lines[:(length - 1)] } for _, line := range lines { if err := conn.WriteMessage(websocket.TextMessage, []byte(line)); err != nil { log.Println(err) return } } } } </code></pre>
aisensiy
<p>Good day, i'm newby in kubernetes and try to setup my first environment. I want to following scheme:</p> <ul> <li>My organization has public IP (x.x.x.x)</li> <li>This IP routed to server in private LAN (i.e. <code>192.168.0.10</code>) with win server + IIS. On IIS i have URL rewrite module and it's act as reverse proxy</li> <li>I have kubernetes cluster</li> <li>I have some service deployed to k8s</li> <li>I want to access this service from the internet with SSL, gained from let's encrypt</li> </ul> <p>I have already setup k8s cluster, deploy traefik (<code>v1.7</code>) ingress and configure them for let's encrypt (setup <code>http-&gt;https</code> redirect, setup acme challenge). This works fine - I can observe it from LAN or WAN, and there is no warning about certificate - i see green lock. Now I deploy service (in my case this is graylog). Again - I can observe it from LAN and WAN, but in this case i see warning about certificate (it was issued by <code>TRAEFIK_DEFAULT_CERT</code>). After I saw this I try to search more information and found that I need cert-manager. I deploy cert-manager, create let's encrypt issuer (with ClusterIssuer role), but then I try to issue certificate I get following error (found in challenge description):</p> <pre><code>Waiting for http-01 challenge propagation: wrong status code '404', expected '200' </code></pre> <p>My traefik configmap:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: traefik-conf namespace: kube-system data: traefik.toml: | # traefik.toml defaultEntryPoints = ["http","https"] [entryPoints] [entryPoints.http] address = ":80" [entryPoints.http.redirect] regex = "^http://(.*)" replacement = "https://$1" [entryPoints.https] address = ":443" [entryPoints.https.tls] [acme] email = "[email protected]" storage = "/acme/acme.json" entryPoint = "https" onHostRule = true [acme.httpChallenge] entryPoint = "http" [[acme.domains]] main = "my-public-domain.com" </code></pre> <p>I also try to use wildcard there:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: traefik-conf namespace: kube-system data: traefik.toml: | # traefik.toml defaultEntryPoints = ["http","https"] [entryPoints] [entryPoints.http] address = ":80" [entryPoints.http.redirect] regex = "^http://(.*)" replacement = "https://$1" [entryPoints.https] address = ":443" [entryPoints.https.tls] [acme] email = "[email protected]" storage = "/acme/acme.json" entryPoint = "https" onHostRule = true [acme.httpChallenge] entryPoint = "http" [[acme.domains]] main = "*.my-public-domain.com" sans = ["my-public-domain.com"] </code></pre> <p>My cluster issuer:</p> <pre><code>apiVersion: certmanager.k8s.io/v1alpha1 kind: ClusterIssuer metadata: name: letsencrypt-dev spec: acme: email: [email protected] server: https://acme-staging-v02.api.letsencrypt.org/directory privateKeySecretRef: # Secret resource used to store the account's private key. 
name: example-issuer-account-key # Add a single challenge solver, HTTP01 using nginx solvers: - http01: ingress: class: traefik </code></pre> <p>And my test certificate:</p> <pre><code>apiVersion: certmanager.k8s.io/v1alpha1 kind: Certificate metadata: name: example-com namespace: default spec: secretName: example-com-tls renewBefore: 360h # 15d commonName: logger.my-public-domain.com dnsNames: - logger.my-public-domain.com issuerRef: name: letsencrypt-dev kind: ClusterIssuer </code></pre> <p>I have DNS entries for wildcard domain and can ping it</p> <p>My certificates stuck in state <code>OrderCreated</code>:</p> <pre><code>Status: Conditions: Last Transition Time: 2019-10-08T09:40:30Z Message: Certificate issuance in progress. Temporary certificate issued. Reason: TemporaryCertificate Status: False Type: Ready Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal OrderCreated 47m cert-manager Created Order resource "example-com-3213698372" </code></pre> <p>Order stuck in state <code>Created</code>:</p> <pre><code>Status: Challenges: Authz URL: &lt;url&gt; Dns Name: logger.my-public-domain.com Issuer Ref: Kind: ClusterIssuer Name: letsencrypt-dev Key: &lt;key&gt; Solver: http01: Ingress: Class: traefik Token: &lt;token&gt; Type: http-01 URL: &lt;url&gt; Wildcard: false Finalize URL: &lt;url&gt; State: pending URL: &lt;url&gt; Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Created 48m cert-manager Created Challenge resource "example-com-3213698372-0" for domain "logger.my-public-domain.com" </code></pre> <p>And, at last, by challenge:</p> <pre><code>Spec: Authz URL: &lt;url&gt; Dns Name: logger.my-public-domain.com Issuer Ref: Kind: ClusterIssuer Name: letsencrypt-dev Key: &lt;key&gt; Solver: http01: Ingress: Class: traefik Token: &lt;token&gt; Type: http-01 URL: &lt;url&gt; Wildcard: false Status: Presented: true Processing: true Reason: Waiting for http-01 challenge propagation: wrong status code '404', expected '200' State: pending Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Started 55m cert-manager Challenge scheduled for processing Normal Presented 55m cert-manager Presented challenge using http-01 challenge mechanism </code></pre> <p>I see that there is 404 response, but I can't understand the reason of it. On my IIS I have following rewrite rules: Let's encrypt bypass - all url matched by <code>.well-known/*</code> rewrited to kubernetes host. http to https - all url not matched by let's encrypt redirected to https sub-domain redirect - all subdomains request rewrited to kubernetes.</p> <p>In my LAN i have own DNS server, there all domains from <code>my-public-domain.com</code> has mapping to internal addresses, so I can redirect public hostname <code>logger.my-public-domain.com (x.x.x.x)</code> to internal <code>logger.my-public-domain.com (192.168.0.y)</code>.</p> <p>While challenge active I cat see new backend and frontend in traefik dashboard.</p> <p>Maybe I misunderstand how it should works, but I expected that cert-manager issue certificate with let's encrypt and I can observe my service.</p>
anatoly.kryzhanosky
<p>In your <code>traefik.toml</code> ConfigMap you're redirecting to HTTPS:</p> <pre><code>[entryPoints.http.redirect] regex = &quot;^http://(.*)&quot; replacement = &quot;https://$1&quot; </code></pre> <p>Remove that replacement using <code>kubectl edit configmap -n kube-system traefik</code>, save the changes then restart the Traefik pod. You should be good to go then. Use your Ingress to manage the redirects with annotations instead of putting them in the Traefik config.</p>
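<p>For example (a sketch for Traefik 1.7 — please double-check the annotation names against the version you run), a per-Ingress redirect can be requested with annotations instead of the global entrypoint rule:</p> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: graylog                    # placeholder name
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/redirect-entry-point: https   # assumed Traefik 1.7 annotation
spec:
  rules:
  - host: logger.my-public-domain.com
    http:
      paths:
      - path: /
        backend:
          serviceName: graylog     # placeholder service
          servicePort: 9000        # placeholder port
</code></pre>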
vhs
<p>I have an EKS cluster to which I've added support to work in hybrid mode (in other words, I've added Fargate profile to it). My intention is to run only specific workload on the AWS Fargate while keeping the EKS worker nodes for other kind of workload.</p> <p>To test this out, my Fargate profile is defined to be:</p> <ul> <li>Restricted to specific namespace (Let's say: <strong>mynamespace</strong>)</li> <li>Has specific label so that pods need to match it in order to be scheduled on Fargate (Label is: <strong>fargate: myvalue</strong>)</li> </ul> <p>For testing k8s resources, I'm trying to deploy simple nginx deployment which looks like this:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment namespace: mynamespace labels: fargate: myvalue spec: selector: matchLabels: app: nginx version: 1.7.9 fargate: myvalue replicas: 1 template: metadata: labels: app: nginx version: 1.7.9 fargate: myvalue spec: containers: - name: nginx image: nginx:1.7.9 ports: - containerPort: 80 </code></pre> <p>When I try to apply this resource, I get following:</p> <pre><code>$ kubectl get pods -n mynamespace -o wide NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES nginx-deployment-596c594988-x9s6n 0/1 Pending 0 10m &lt;none&gt; &lt;none&gt; 07c651ad2b-7cf85d41b2424e529247def8bda7bf38 &lt;none&gt; </code></pre> <p>Pod stays in the Pending state and it is never scheduled to the AWS Fargate instances.</p> <p>This is a pod describe output:</p> <pre><code>$ kubectl describe pod nginx-deployment-596c594988-x9s6n -n mynamespace Name: nginx-deployment-596c594988-x9s6n Namespace: mynamespace Priority: 2000001000 PriorityClassName: system-node-critical Node: &lt;none&gt; Labels: app=nginx eks.amazonaws.com/fargate-profile=myprofile fargate=myvalue pod-template-hash=596c594988 version=1.7.9 Annotations: kubernetes.io/psp: eks.privileged Status: Pending IP: Controlled By: ReplicaSet/nginx-deployment-596c594988 NominatedNodeName: 9e418415bf-8259a43075714eb3ab77b08049d950a8 Containers: nginx: Image: nginx:1.7.9 Port: 80/TCP Host Port: 0/TCP Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-784d2 (ro) Volumes: default-token-784d2: Type: Secret (a volume populated by a Secret) SecretName: default-token-784d2 Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: &lt;none&gt; </code></pre> <p>One thing that I can conclude from this output is that correct Fargate profile was chosen:</p> <pre><code>eks.amazonaws.com/fargate-profile=myprofile </code></pre> <p>Also, I see that some value is added to NOMINATED NODE field but not sure what it represents.</p> <p>Any ideas or usual problems that happen and that might be worth troubleshooting in this case? Thanks</p>
Bakir Jusufbegovic
<p>It turns out the problem was in the networking setup of the private subnets associated with the Fargate profile all along.</p> <p>To give more info, here is what I initially had:</p> <ol> <li>An EKS cluster with several worker nodes, where I had assigned only public subnets to the EKS cluster itself</li> <li>When I tried to add a Fargate profile to the EKS cluster, because of the current limitation on Fargate, it was not possible to associate the profile with public subnets. To solve this, I created private subnets with the same tag as the public ones so that the EKS cluster is aware of them</li> <li><p>What I forgot was that I needed to enable connectivity from the VPC private subnets to the outside world (I was missing a NAT gateway). So I created a NAT gateway in the public subnet that is associated with EKS and added an additional entry to the private subnets' associated routing tables that looks like this:</p> <p>0.0.0.0/0 nat-xxxxxxxx</p></li> </ol> <p>This solved the problem I had above, although I'm not sure about the real reason why an AWS Fargate profile needs to be associated only with private subnets.</p>
Bakir Jusufbegovic
<p>The Istio docs <a href="https://istio.io/latest/docs/setup/install/istioctl/#check-what-s-installed" rel="nofollow noreferrer">here</a> have the following information:</p> <blockquote> <p>The istioctl command saves the IstioOperator CR that was used to install Istio in a copy of the CR named installed-state. You can inspect this CR if you lose track of what is installed in a cluster.</p> <p>The installed-state CR is also used to perform checks in some istioctl commands and should therefore not be removed.</p> </blockquote> <p>Now, I would like to know: what is a &quot;CR&quot;, and how do I inspect this &quot;CR&quot;?</p>
Sibi
<h2>Short answer</h2> <p>This will give you all deployed objects belonging to Istio CRs in all namespaces:</p> <pre><code>kubectl api-resources | grep -i istio | awk '{print $4}' | while read cr; do kubectl get $(echo $cr | tr '[:upper:]' '[:lower:]') --all-namespaces; done </code></pre> <h2>Details:</h2> <p>CR is general k8s terminology and it means <code>Custom Resource</code>. Its definition is called a CRD: Custom Resource Definition.</p> <p>So we have two categories of resources:</p> <ul> <li><p>Built-in Resources: Pod, Service, Deployment, Ingress, ReplicaSet, StatefulSet,...</p> </li> <li><p>Custom Resources (CR): which depend on how you have customized your cluster.</p> <ul> <li>For example, if you install Istio, you will get CRs like: IstioOperator, ...</li> <li>If you install Prometheus-Operator, you will get CRs like: Alertmanager, PrometheusRule, ...</li> </ul> </li> </ul> <p>Now, to get the list of resources, whether built-in or custom (CR), run:</p> <pre><code>kubectl api-resources | awk '{print $4}' </code></pre> <p>Filter them to the resources belonging to Istio:</p> <pre><code>kubectl api-resources | grep -i istio | awk '{print $4}' </code></pre> <p>Now, because <code>IstioOperator</code> (for example) is a resource, you can run the following:</p> <pre><code>kubectl get istiooperator </code></pre> <p>Check objects belonging to this CR in all namespaces:</p> <pre><code>kubectl get istiooperator --all-namespaces </code></pre> <p>All the commands above help you build a YAML object based on the existing resources. In particular, they help you with the <code>kind: ???</code> field.</p> <p>If you also want to get the suitable <code>apiVersion: ???</code>, check <code>kubectl api-versions</code>.</p>
Abdennour TOUMI
<p>I have a brand new (so empty) AKS cluster. I want to install two instances of the nginx ingress controller, in different namespaces and with different ingress class, using helm.</p> <p>I start with the first:</p> <pre><code>helm install ingress1 ingress-nginx/ingress-nginx --namespace namespace1 --set controller.ingressClass=class1 NAME: ingress1 LAST DEPLOYED: Fri Sep 24 20:46:28 2021 NAMESPACE: namespace1 STATUS: deployed REVISION: 1 TEST SUITE: None NOTES: The ingress-nginx controller has been installed. It may take a few minutes for the LoadBalancer IP to be available. </code></pre> <p>All good</p> <p>Now I go with the second:</p> <pre><code>helm install ingress2 ingress-nginx/ingress-nginx --namespace namespace2 --set controller.ingressClass=class2 Error: rendered manifests contain a resource that already exists. Unable to continue with install: IngressClass &quot;nginx&quot; in namespace &quot;&quot; exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key &quot;meta.helm.sh/release-name&quot; must equal &quot;ingress2&quot;: current value is &quot;ingress1&quot;; annotation validation error: key &quot;meta.helm.sh/release-namespace&quot; must equal &quot;namespace2&quot;: current value is &quot;namespace1&quot; </code></pre> <p>What is the correct way to install multiple nginx ingress controller instances in the same cluster?</p>
Franco Tiveron
<p>I think that you're setting the wrong values, thus the class name is <code>nginx</code> in both installs. Take a look at the template here: <a href="https://github.com/kubernetes/ingress-nginx/blob/3ae09fd1fac381fce9f5066febf172a4a70c10a9/charts/ingress-nginx/templates/controller-ingressclass.yaml#L13" rel="noreferrer">controller-ingressclass</a></p> <p>If you're using the official ingress-nginx Helm repository (<code>https://kubernetes.github.io/ingress-nginx</code>), then try setting <code>controller.ingressClassResource.name=class1|class2</code> instead:</p> <pre class="lang-sh prettyprint-override"><code>helm install ingress1 ingress-nginx/ingress-nginx --namespace namespace1 --set controller.ingressClassResource.name=class1 helm install ingress2 ingress-nginx/ingress-nginx --namespace namespace2 --set controller.ingressClassResource.name=class2 </code></pre> <p>Depending on your needs, you may also need to change the other values <a href="https://github.com/kubernetes/ingress-nginx/blob/3ae09fd1fac381fce9f5066febf172a4a70c10a9/charts/ingress-nginx/values.yaml#L98" rel="noreferrer">of the ingressClassResource</a></p>
iska
<p>I'm trying to create an ingress rule for a backend service. The ingress controller is the Microk8s Nginx ingress. If I set the host, the ingress stops matching the backend, resulting in a 404 when I visit <a href="https://my-host.com" rel="nofollow noreferrer">https://my-host.com</a></p> <p>Here is my code:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: annotations: nginx.ingress.kubernetes.io/backend-protocol: &quot;HTTPS&quot; nginx.ingress.kubernetes.io/rewrite-target: / kubernetes.io/ingress.class: public name: http-ingress spec: tls: - hosts: - &quot;my-host.com&quot; secretName: nginx-ingress rules: - host: &quot;my-host.com&quot; - http: paths: - pathType: Prefix path: / backend: service: name: some-service port: number: 80 </code></pre>
Nash
<p>You have created 2 rules, one with only <code>host</code> and a second with <code>http: ...</code>. It should be</p> <pre><code>rules: - host: &quot;my-host.com&quot; http: paths: </code></pre> <p>Yes, YAML is evil.</p>
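<p>Putting it together, the corrected <code>rules</code> section from the question would read:</p> <pre><code>rules:
- host: "my-host.com"
  http:
    paths:
    - pathType: Prefix
      path: /
      backend:
        service:
          name: some-service
          port:
            number: 80
</code></pre>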
e.dan
<p>I have the following configmap spec:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 data: MY_NON_SECRET: foo MY_OTHER_NON_SECRET: bar kind: ConfigMap metadata: name: web-configmap namespace: default </code></pre> <pre><code>$ kubectl describe configmap web-configmap Name: web-configmap Namespace: default Labels: &lt;none&gt; Annotations: &lt;none&gt; Data ==== MY_NON_SECRET: ---- foo MY_OTHER_NON_SECRET: ---- bar Events: &lt;none&gt; </code></pre> <p>And the following pod spec:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: name: web-pod spec: containers: - name: web image: kahunacohen/hello-kube:latest envFrom: - configMapRef: name: web-configmap ports: - containerPort: 3000 </code></pre> <pre><code>$ kubectl describe pod web-deployment-5bb9d846b6-8k2s9 Name: web-deployment-5bb9d846b6-8k2s9 Namespace: default Priority: 0 Node: minikube/192.168.49.2 Start Time: Mon, 12 Jul 2021 12:22:24 +0300 Labels: app=web-pod pod-template-hash=5bb9d846b6 service=web-service Annotations: &lt;none&gt; Status: Running IP: 172.17.0.5 IPs: IP: 172.17.0.5 Controlled By: ReplicaSet/web-deployment-5bb9d846b6 Containers: web: Container ID: docker://8de5472c9605e5764276c345865ec52f9ec032e01ed58bc9a02de525af788acf Image: kahunacohen/hello-kube:latest Image ID: docker-pullable://kahunacohen/hello-kube@sha256:930dc2ca802bff72ee39604533342ef55e24a34b4a42b9074e885f18789ea736 Port: 3000/TCP Host Port: 0/TCP State: Running Started: Mon, 12 Jul 2021 12:22:27 +0300 Ready: True Restart Count: 0 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tcqwz (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-tcqwz: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: &lt;nil&gt; DownwardAPI: true QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 19m default-scheduler Successfully assigned default/web-deployment-5bb9d846b6-8k2s9 to minikube Normal Pulling 19m kubelet Pulling image &quot;kahunacohen/hello-kube:latest&quot; Normal Pulled 19m kubelet Successfully pulled image &quot;kahunacohen/hello-kube:latest&quot; in 2.3212119s Normal Created 19m kubelet Created container web Normal Started 19m kubelet Started container web </code></pre> <p>The pod has container that is running expressjs with this code which is trying to print out the env vars set in the config map:</p> <pre class="lang-js prettyprint-override"><code>const process = require(&quot;process&quot;); const express = require(&quot;express&quot;); const app = express(); app.get(&quot;/&quot;, (req, res) =&gt; { res.send(`&lt;h1&gt;Kubernetes Expressjs Example 0.3&lt;/h2&gt; &lt;h2&gt;Non-Secret Configuration Example&lt;/h2&gt; &lt;p&gt;This uses ConfigMaps as env vars.&lt;/p&gt; &lt;ul&gt; &lt;li&gt;MY_NON_SECRET: &quot;${process.env.MY_NON_SECRET}&quot;&lt;/li&gt; &lt;li&gt;MY_OTHER_NON_SECRET: &quot;${process.env.MY_OTHER_NON_SECRET}&quot;&lt;/li&gt; &lt;/ul&gt; `); }); app.listen(3000, () =&gt; { console.log(&quot;Listening on http://localhost:3000&quot;); }) </code></pre> <p>When I deploy these pods, the env vars are <code>undefined</code></p> <p>When I do <code>$ 
kubectl exec {POD_NAME} -- env</code></p> <p>I don't see my env vars.</p> <p>What am I doing wrong? I've tried killing the pods, waiting till they restart then check again to no avail.</p>
Aaron
<p>It looks like your pods are managed by the <code>web-deployment</code> Deployment. You cannot patch such pods directly.</p> <p>If you run <code>kubectl get pod &lt;pod-name&gt; -n &lt;namespace&gt; -oyaml</code>, you'll see a block called <code>ownerReferences</code> under the <code>metadata</code> section. This tells you who is the owner/manager of this pod.</p> <p>In the case of a deployment, here is the ownership hierarchy:</p> <p><strong>Deployment</strong> -&gt; <strong>ReplicaSet</strong> -&gt; <strong>Pod</strong></p> <p>i.e. a Deployment creates a ReplicaSet, and the ReplicaSet in turn creates the Pods.</p> <p>So, if you want to change anything in the pod spec, you should make that change in the <em>deployment</em>, not in the replicaset or the pod directly, as those will get overwritten.</p> <p>Patch your deployment either by running the following and editing the environment field there:</p> <pre><code>kubectl edit deployment.apps &lt;deployment-name&gt; -n &lt;namespace&gt; </code></pre> <p>or update the deployment YAML with your changes and run</p> <pre><code>kubectl apply -f &lt;deployment-yaml-file&gt; </code></pre>
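<p>Concretely (a sketch reusing the names from the question; the selector/labels shown are illustrative), the <code>envFrom</code> block belongs inside the Deployment's pod template:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-pod
  template:
    metadata:
      labels:
        app: web-pod
    spec:
      containers:
      - name: web
        image: kahunacohen/hello-kube:latest
        envFrom:
        - configMapRef:
            name: web-configmap
        ports:
        - containerPort: 3000
</code></pre>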
Raghwendra Singh
<p>I'm having troubles setting up kubernetes ingress-nginx in order to expose my app externally. Here are the steps that I did:</p> <p><strong>Application deployment:</strong></p> <ol> <li>Created namespace called ingress</li> <li>Deployed statefulset set resource that describes my application (let's call it testapp) in ingress namespace</li> <li>Created ClusterIP service that makes my app available in the kube cluster (testapp) in ingress namespace</li> </ol> <p><strong>Ingress nginx setup:</strong></p> <ol> <li>Created ingress-nginx controller in namespace ingress</li> <li>Created ingress-nginx service in namespace ingress </li> </ol> <blockquote> <pre><code>ingress-nginx NodePort 10.102.152.58 &lt;none&gt; 80:30692/TCP,443:32297/TCP 6d2h </code></pre> </blockquote> <ol start="3"> <li>Created ingress resource type in ingress namespace that looks like this</li> </ol> <blockquote> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: / name: testappingress spec: rules: - host: testapp.k8s.myorg.io http: paths: - backend: serviceName: testapp servicePort: 80 path: / </code></pre> </blockquote> <p>If I do describe of ingress resource, I get this:</p> <pre><code>ubuntu@ip-10-0-20-81:~/ingress$ kubectl describe ingress testappingress -n ingress Name: testappingress Namespace: ingress Address: Default backend: default-http-backend:80 (&lt;none&gt;) Rules: Host Path Backends ---- ---- -------- testapp.k8s.myorg.io / testapp:80 (&lt;none&gt;) Annotations: kubernetes.io/ingress.class: nginx nginx.ingress.kubernetes.io/rewrite-target: / Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal CREATE 15m nginx-ingress-controller Ingress ingress/testappingress Normal CREATE 15m nginx-ingress-controller Ingress ingress/testappingress Normal UPDATE 14m nginx-ingress-controller Ingress ingress/testappingress Normal UPDATE 14m nginx-ingress-controller Ingress ingress/testappingress </code></pre> <p>If I check logs from ingress-nginx controller, I get following:</p> <pre><code>I0317 15:06:00.029706 6 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ingress", Name:"testappingress", UID:"2c8db73f-48c6-11e9-ae46-12bdb9ac3010", APIVersion:"extensions/v1beta1", ResourceVersion:"1185441", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ingress/testappingress I0317 15:06:00.039419 6 controller.go:177] Configuration changes detected, backend reload required. I0317 15:06:00.508433 6 controller.go:195] Backend successfully reloaded. I0317 15:06:00.568448 6 controller.go:212] Dynamic reconfiguration succeeded. 
</code></pre> <p><strong>Route53/ELB classic setup:</strong></p> <ol> <li>Created ELB classic which points to kubernetes cluster instances and exposes port 80</li> <li>Created Route53 entry in existing hosted zone (CNAME) which points to ELB classic from above.</li> <li>Setup Health Check on ELB to point to ingress-nginx NodePort service on port: <strong>30692</strong> which works.</li> </ol> <p>When I do:</p> <pre><code>curl http://testapp.k8s.myorg.io </code></pre> <p>I don't get any response.</p> <p><strong>Here is what I've tried to troubleshoot the problem:</strong></p> <p>If I do:</p> <pre><code>telnet testapp.k8s.myorg.io 80 </code></pre> <p>It will resolve to my ELB classic DNS name</p> <p>If I go into any container/pod that exist inside ingress namespace and do following:</p> <pre><code>curl http://testapp </code></pre> <p>I will get appropriate response.</p> <p>From this I can conclude following:</p> <ol> <li><p>Application is correctly deployed and available through the service that is exposed (ClusterIP) from the inside of kubernetes cluster</p></li> <li><p>DNS gets resolved properly to the ELB </p></li> <li><p>HealthCheck on ELB works</p></li> </ol> <p>Not sure what else could I do to troubleshoot why I'm unable to access my service through Ingress? </p>
Bakir Jusufbegovic
<p>It was a mistake on my end. The missing part was the following:</p> <p>On the ELB, I didn't set the listeners correctly. Basically, what was needed was to point ports 80/443 from the ELB to the NodePorts of the Ingress service.</p> <pre><code>ingress-nginx ingress-nginx NodePort 10.96.249.168 &lt;none&gt; 80:32327/TCP,443:30313/TCP 25h </code></pre>
Bakir Jusufbegovic
<p>I am just setting up two simple services on Mac using minikube.</p> <p>I have the service set up and I can access it via ingress / minikube tunnel, so I know the service works.</p> <p>I am using Spring Boot 3, so I need to use the <code>spring-cloud-starter-kubernetes-all</code> package. This means I need to specify a URL for <code>spring.cloud.kubernetes.discovery.discovery-server-url</code>.</p> <p>When I try to make the simple call</p> <p><code>discoveryClient.getServices()</code></p> <p>I get the error &quot;Connection refused https://kubernetes.docker.internal:6443/apps&quot;</p> <p>&quot;apps&quot; is my second service.</p> <p>It is refusing the connection to the value of <code>spring.cloud.kubernetes.discovery.discovery-server-url</code>.</p> <p>At the moment I have this set to <code>spring.cloud.kubernetes.discovery.discovery-server-url=https://kubernetes.docker.internal:6443</code></p> <p>I am assuming this is incorrect and I need some help as to what the correct URL is to set this to, or the correct place to find it. I thought this would be the internal URL.</p>
duckdivesurvive
<p>You are trying to configure your discovery client with the Kubernetes API server URL, which is incorrect. Your client application needs to be connected to <strong>Spring Cloud Kubernetes Discovery Server</strong>. It's an independent application that will work like a <strong>proxy</strong> between your client SpringBoot apps and Kubernetes. You can find its images here: <a href="https://hub.docker.com/r/springcloud/spring-cloud-kubernetes-discoveryserver/tags" rel="nofollow noreferrer">https://hub.docker.com/r/springcloud/spring-cloud-kubernetes-discoveryserver/tags</a> And it should be deployed to Kubernetes via yaml file.</p> <p>Then you can configure <code>spring.cloud.kubernetes.discovery.discovery-server-url</code> with this discovery server URL. That URL will most likely come from a Kubernetes service that you will create for the discovery server application.</p> <p>Please, find the full deployment YAML and the related documentation here: <a href="https://spring.io/blog/2021/10/26/new-features-for-spring-cloud-kubernetes-in-spring-cloud-2021-0-0-m3" rel="nofollow noreferrer">https://spring.io/blog/2021/10/26/new-features-for-spring-cloud-kubernetes-in-spring-cloud-2021-0-0-m3</a></p> <p>Please, let us know how that goes</p>
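<p>Once the discovery server is deployed and exposed through a Kubernetes Service, the client-side property would look something like the sketch below. The Service name, namespace and port here are assumptions for illustration; take the real values from the deployment YAML in the linked blog post.</p> <pre><code>spring:
  cloud:
    kubernetes:
      discovery:
        # assumed Service name/namespace/port for the discovery server
        discovery-server-url: http://spring-cloud-kubernetes-discoveryserver.default.svc.cluster.local:8761
</code></pre>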
Igor Kanshyn
<p>I am quite confused about readiness probe. Suppose I use httpGet with /health as the probing endpoint. Once the readiness check returns 500, the server will stop serving traffic. Then how can the /health endpoint work? In other words, once a readiness check fails, how can it ever work again since it can no longer answer to future /health checks?</p> <p>I guess one valid explanation is that the path is invoked locally? (i.e. not through the https:${ip and port}/health)</p>
Brian Shih
<p>You have a typo. You said:</p> <blockquote> <p>Once the readiness check returns 500, the <strong>server</strong> will stop serving traffic.</p> </blockquote> <p>However, it should be:</p> <blockquote> <p>Once the readiness check returns 500, the <strong>k8s service</strong> will stop serving traffic.</p> </blockquote> <p><a href="https://i.stack.imgur.com/RZqkN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RZqkN.png" alt="enter image description here" /></a></p> <p>A k8s Service behaves like a load balancer in front of multiple pods.</p> <ul> <li>If a pod is ready, an endpoint is created for it and it receives traffic.</li> <li>If a pod is not ready, its endpoint is removed and it no longer receives traffic.</li> </ul> <p>While the <strong>Readiness Probe</strong> decides whether to forward traffic or not, the <strong>Liveness Probe</strong> decides whether to restart the Pod or not.</p> <p>If you want to get rid of an unhealthy Pod, you also have to specify a <strong>Liveness Probe</strong>.</p> <h2>So let's summarize:</h2> <p>To get a fully HA deployment you need 3 things together:</p> <ul> <li>Pods are managed by a <strong>Deployment</strong>, which maintains the number of replicas.</li> <li>The <strong>Liveness Probe</strong> helps remove/restart an unhealthy pod. After some time (6 restarts), the Pod will be unhealthy and the Deployment will take care of bringing up a new one.</li> <li>The <strong>Readiness Probe</strong> helps forward traffic only to ready pods: either at the beginning of a run, or at the end of a run (graceful shutdown).</li> </ul>
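<p>A minimal sketch of both probes on a container (the path, port and timings below are placeholders, tune them for your app):</p> <pre><code>containers:
- name: web
  image: my-app:latest          # placeholder image
  ports:
  - containerPort: 8080
  readinessProbe:               # gates whether the Service sends traffic to this pod
    httpGet:
      path: /health
      port: 8080
    periodSeconds: 5
    failureThreshold: 3
  livenessProbe:                # restarts the container when the check keeps failing
    httpGet:
      path: /health
      port: 8080
    periodSeconds: 10
    failureThreshold: 3
</code></pre>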
Abdennour TOUMI
<p>So when creating secrets I often will use:</p> <pre class="lang-sh prettyprint-override"><code>kubectl create secret generic super-secret --from-env-file=secrets </code></pre> <p>However, I wanted to move this to a dedicated secrets.yaml file, of kind &quot;Secret&quot; as per the documentation: <a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-secret-generic-em-" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-secret-generic-em-</a></p> <p>However, to do this, I need to base64 encode all of the secrets. Wha! This is bad imho. Why can't kubectl handle this for us? (Another debate for another day).</p> <p>So, I am back to using the <code>kubectl create secret</code> command which works really well for ease of deployment - however, I am wondering what the process is to apply said creation of secret to a specific namespace? (Again, this probably would involve something tricky to get things to work - but why it's not a standard feature yet is a little worrying?)</p>
Micheal J. Roberts
<p>You can use <code>--dry-run</code> and <code>-oyaml</code> flags.</p> <p>Use this command to generate your secrets.yaml file</p> <pre class="lang-sh prettyprint-override"><code>kubectl create secret generic super-secret \ --from-env-file=secrets --namespace &lt;your-namespace&gt; \ --dry-run=client -oyaml &gt; secrets.yaml </code></pre> <hr /> <p>The above is pretty standard in the k8s community.</p>
Raghwendra Singh
<p>I was following a guide to connect a database to kubernetes: <a href="https://itnext.io/basic-postgres-database-in-kubernetes-23c7834d91ef" rel="nofollow noreferrer">https://itnext.io/basic-postgres-database-in-kubernetes-23c7834d91ef</a></p> <p>after installing Kubernetes (minikube) on Windows 10 64 bit: <a href="https://minikube.sigs.k8s.io/docs/start/" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/start/</a></p> <p>I am encountering an issue with 'base64' where the DB is trying to connect and store the password. As PowerShell doesn't recognise it. I was wondering if anyone has any ideas how I could either fix this and still use windows or an alternative means that would enable me to continue with the rest of the guide?</p> <p>Error Code:</p> <pre><code>base64 : The term 'base64' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:1 char:131 + ... postgresql -o jsonpath=&quot;{.data.postgresql-password}&quot; | base64 --decod ... + ~~~~~~ + CategoryInfo : ObjectNotFound: (base64:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException export : The term 'export' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:1 char:1 + export POSTGRES_PASSWORD=$(kubectl get secret --namespace default pos ... + ~~~~~~ + CategoryInfo : ObjectNotFound: (export:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException </code></pre> <p><a href="https://i.stack.imgur.com/veiUJ.png" rel="nofollow noreferrer">Windows Powershell Error message</a></p>
Tanik
<p>The <code>base64</code> cli found in Mac OS and some *nix distros is not available on Windows.</p> <p>You <em>could</em> write a small function named <code>base64</code> that mimics the behavior of the <code>base64</code> unix tool though:</p> <pre><code>function base64 { # enumerate all pipeline input $input |ForEach-Object { if($MyInvocation.UnboundArguments -contains '--decode'){ # caller supplied `--decode`, so decode $bytes = [convert]::FromBase64String($_) [System.Text.Encoding]::ASCII.GetString($bytes) } else { # default mode, encode ascii text as base64 $bytes = [System.Text.Encoding]::ASCII.GetBytes($_) [convert]::ToBase64String($bytes) } } } </code></pre> <p>This should work as a drop-in replacement for conversion between ASCII/UTF7 text and base64:</p> <pre><code>PS ~&gt; 'Hello, World!' |base64 --encode SGVsbG8sIFdvcmxkIQ== PS ~&gt; 'Hello, World!' |base64 --encode |base64 --decode Hello, World! </code></pre> <hr /> <p>To use with your existing scripts, simple dot-source a script with the function definition in your shell before executing the others:</p> <pre><code>PS ~&gt; . .\path\to\base64.ps1 </code></pre> <p>The above will work from a script as well. If you have a multi-line paste-aware shell (Windows' default Console Host with PSReadLine should be okay), you can also just paste the function definition directly into the prompt :)</p>
Mathias R. Jessen
<p>due to company policies I have to replace my Kyverno rules by OPA ones. One of my rule is, that I want to add all pods of a specific namespace to our service-mesh (we're using Kuma) So for this I have to add the following annotations/labels</p> <pre><code>metadata: labels: kuma.io/mesh: mesh annotations: kuma.io/sidecar-injection: enabled </code></pre> <p>so my gatekeeper rule looks the following (it is WIP ;) )</p> <pre><code>apiVersion: mutations.gatekeeper.sh/v1beta1 kind: AssignMetadata metadata: name: demo-annotation-owner spec: match: scope: Namespaced kinds: - apiGroups: [&quot;&quot;] kinds: [&quot;Pod&quot;] location: &quot;metadata.annotations.kuma.io/sidecar-injection&quot; parameters: assign: value: &quot;enabled&quot; </code></pre> <p>the request gets rejected with the following error in the kube-apiserver</p> <pre><code> rejected by webhook &quot;validation.gatekeeper.sh&quot;: &amp;errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:&quot;&quot;, APIVersion:&quot;&quot;}, ListMeta:v1.ListMeta{SelfLink:&quot;&quot;, ResourceVersion:&quot;&quot;, Continue:&quot;&quot;, RemainingItemCount:(*int64)(nil)}, Status:&quot;Failure&quot;, Message:&quot;admission webhook \&quot;validation.gatekeeper.sh\&quot; denied the request: invalid location format for AssignMetadata demo-annotation-owner: metadata.annotations.kuma.io/sidecar-injection: unexpected token: expected '.' or eof, got: ERROR: \&quot;/\&quot;&quot;, Reason:&quot;&quot;, Details:(*v1.StatusDetails)(nil), Code:422}} </code></pre> <p>Replacing the location by metadata.annotations.test is accepted by the apiserver, but that does not help me much as you can imagine.</p> <p>So my question is - did I do a big flaw or what is the way of creating annotations/labels in OPA by the mutating webhook with a slash in it's name?</p> <p>Many thanks</p>
Zwelch
<p>Just replace the slash <code>/</code> with <code>~1</code>:</p> <pre class="lang-yaml prettyprint-override"><code> location: &quot;metadata.annotations.kuma.io~1sidecar-injection&quot; </code></pre> <p>Or wrap it in <code>&quot;&quot;</code>:</p> <pre class="lang-yaml prettyprint-override"><code> location: 'metadata.annotations.&quot;kuma.io/sidecar-injection&quot;' </code></pre>
Abdennour TOUMI
<p>In my k8s environment, where Spring Boot applications run, I checked the log locations <code>/var/log</code> and <code>/var/lib</code>, but both are empty. Then I found the logs in <code>/tmp/spring.log</code>. It seems this is the default log location. My problems are:</p> <ol> <li>How does <code>kubectl logs</code> know it should read logs from the <code>/tmp</code> location? I do get log output from the <code>kubectl logs</code> command.</li> <li>I have fluent-bit configured with the following input:</li> </ol> <pre><code> [INPUT] Name tail Tag kube.dev.* Path /var/log/containers/*dev*.log DB /var/log/flb_kube_dev.db </code></pre> <p>This suggests it should read logs from <code>/var/log/containers/</code>, but that path does not have the logs. However, I am getting fluent-bit logs successfully. What am I missing here?</p>
Viraj
<p>Docker logs only contain the logs that are dumped to STDOUT by your container's process with PID 1 (your container's <code>entrypoint</code> or <code>cmd</code> process).</p> <p>If you want to see the logs via <code>kubectl logs</code> or <code>docker logs</code>, you should redirect your application logs to STDOUT instead of the file <code>/tmp/spring.log</code>. <a href="https://serverfault.com/a/634296">Here's</a> an excellent example of how this can be achieved with minimal effort.</p> <hr /> <p>Alternatively, you can also use a <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath</a> volumeMount. This way, you can directly access the log from the path on the host.</p> <h5>Warning when using hostPath volumeMount</h5> <p>If the pod is shifted to another host for some reason, your logs will not move along with it. A new log file will be created on the new host at the same path.</p>
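<p>For a Spring Boot app in particular, one option (a sketch — adjust to your own logging setup) is simply not to configure a log file at all: with no <code>logging.file.name</code> or <code>logging.file.path</code> set, Spring Boot's default Logback configuration writes to the console, i.e. STDOUT, which the container runtime captures:</p> <pre><code># application.yml — console-only logging so `kubectl logs` sees everything
logging:
  pattern:
    console: "%d{yyyy-MM-dd HH:mm:ss.SSS} %-5level [%thread] %logger{36} - %msg%n"
  # intentionally no logging.file.name / logging.file.path here
</code></pre>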
Raghwendra Singh
<p>We run Couchbase in Kubernetes platform in AWS cloud. As per the 'Couchbase on AWS' best practices, it is suggested to use EBS 'gp3' or EBS 'io1' based on the following link. (<a href="https://docs.couchbase.com/server/current/cloud/couchbase-cloud-deployment.html#aws-deployment-methods" rel="nofollow noreferrer">https://docs.couchbase.com/server/current/cloud/couchbase-cloud-deployment.html#aws-deployment-methods</a>)</p> <p>But it seems AWS has introduced a new EFS storage type, known as, &quot;Amazon EFS Elastic Throughput&quot; (<a href="https://aws.amazon.com/blogs/aws/new-announcing-amazon-efs-elastic-throughput/" rel="nofollow noreferrer">https://aws.amazon.com/blogs/aws/new-announcing-amazon-efs-elastic-throughput/</a>). It gives much better throughput. Is it suggested to use EFS with Elastic Throughput for Couchbase storage?</p> <p><em>Throughput claimed by AWS: Elastic Throughput allows you to drive throughput up to a limit of 3 GiB/s for read operations and 1 GiB/s for write operations per file system in all Regions.</em></p>
Ganesh N
<p>Apologies for the delay here. In short, no, we do not recommend using EFS with Couchbase, as it is a file share rather than a block device.</p>
Perry Krug
<p>I've created a Switch in <code>Hyper-V Manager</code> : </p> <pre><code>Virtual Switch Manager (on the right) =&gt; New Virtual Network Switch =&gt; External =&gt; MinikubeSwitch </code></pre> <p><a href="https://i.stack.imgur.com/nMTDf.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nMTDf.gif" alt="enter image description here"></a></p> <p>And then : </p> <p><a href="https://i.stack.imgur.com/GKY8I.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GKY8I.gif" alt="enter image description here"></a></p> <p>After hitting in CLI (under Administrator in Windows 10) the line : </p> <pre><code> minikube start --driver=hyperv --hyperv-virtual-switch=MinikubeSwitch I got : PS C:\bin&gt; minikube start --driver=hyperv --hyperv-virtual-switch=MinikubeSwitch * minikube v1.9.2 on Microsoft Windows 10 Pro 10.0.18363 Build 18363 * Using the hyperv driver based on user configuration * Starting control plane node m01 in cluster minikube * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ... ! StartHost failed, but will try again: creating host: create host timed out in 120.000000 seconds * Stopping "minikube" in hyperv ... * Powering off "minikube" via SSH ... * Deleting "minikube" in hyperv ... E0421 12:59:59.746863 2736 main.go:106] libmachine: [stderr =====&gt;] : Hyper-V\Get-VM : Hyper-V was unable to find a virtual machine with name "minikube". At line:1 char:3 + ( Hyper-V\Get-VM minikube ).state + ~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : InvalidArgument: (minikube:String) [Get-VM], VirtualizationException + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.GetVM E0421 13:00:01.624914 2736 main.go:106] libmachine: [stderr =====&gt;] : Hyper-V\Get-VM : Hyper-V was unable to find a virtual machine with name "minikube". At line:1 char:3 + ( Hyper-V\Get-VM minikube ).state + ~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : InvalidArgument: (minikube:String) [Get-VM], VirtualizationException + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.GetVM E0421 13:00:03.443467 2736 main.go:106] libmachine: [stderr =====&gt;] : Hyper-V\Get-VM : Hyper-V was unable to find a virtual machine with name "minikube". At line:1 char:3 + ( Hyper-V\Get-VM minikube ).state + ~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : InvalidArgument: (minikube:String) [Get-VM], VirtualizationException + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.GetVM * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ... E0421 13:00:05.635939 2736 main.go:106] libmachine: [stderr =====&gt;] : Hyper-V\Get-VM : Hyper-V was unable to find a virtual machine with name "minikube". At line:1 char:3 + ( Hyper-V\Get-VM minikube ).state + ~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : InvalidArgument: (minikube:String) [Get-VM], VirtualizationException + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.GetVM E0421 13:00:07.748572 2736 main.go:106] libmachine: [stderr =====&gt;] : Hyper-V\Get-VM : Hyper-V was unable to find a virtual machine with name "minikube". At line:1 char:3 + ( Hyper-V\Get-VM minikube ).state + ~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : InvalidArgument: (minikube:String) [Get-VM], VirtualizationException + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.GetVM E0421 13:00:09.940572 2736 main.go:106] libmachine: [stderr =====&gt;] : Hyper-V\Get-VM : Hyper-V was unable to find a virtual machine with name "minikube". 
At line:1 char:3 + ( Hyper-V\Get-VM minikube ).state + ~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : InvalidArgument: (minikube:String) [Get-VM], VirtualizationException + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.GetVM E0421 13:00:11.850044 2736 main.go:106] libmachine: [stderr =====&gt;] : Hyper-V\Get-VM : Hyper-V was unable to find a virtual machine with name "minikube". At line:1 char:3 + ( Hyper-V\Get-VM minikube ).state + ~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : InvalidArgument: (minikube:String) [Get-VM], VirtualizationException + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.GetVM E0421 13:00:13.887769 2736 main.go:106] libmachine: [stderr =====&gt;] : Hyper-V\Get-VM : Hyper-V was unable to find a virtual machine with name "minikube". At line:1 char:3 + ( Hyper-V\Get-VM minikube ).state + ~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : InvalidArgument: (minikube:String) [Get-VM], VirtualizationException + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.GetVM E0421 13:00:16.088700 2736 main.go:106] libmachine: [stderr =====&gt;] : Hyper-V\Get-VM : Hyper-V was unable to find a virtual machine with name "minikube". At line:1 char:3 + ( Hyper-V\Get-VM minikube ).state + ~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : InvalidArgument: (minikube:String) [Get-VM], VirtualizationException + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.GetVM PS C:\bin&gt; Get-Vm Name State CPUUsage(%) MemoryAssigned(M) Uptime Status Version ---- ----- ----------- ----------------- ------ ------ ------- DockerDesktopVM Running 0 2048 02:16:40.4550000 Operating normally 9.0 PS C:\bin&gt; </code></pre> <p>However it keeps failing , even though I've tried to delete the old minikube and reinstall it.</p> <p>Any idea how to fix it ? </p>
JAN
<p>OK, found the problem. I put it below for other people who might encounter the same problem:</p> <p>Replace the CLI command</p> <pre><code>minikube start --driver=hyperv --hyperv-virtual-switch=MinikubeSwitch </code></pre> <p>with:</p> <pre><code>minikube start --driver=hyperv MinikubeSwitch </code></pre> <p>The param <code>--hyperv-virtual-switch</code> is not relevant anymore.</p>
JAN
<p>I have started a pod with an Angular app in it, and it is healthy with no errors when I do kubectl describe pod.</p> <p>However, when I curl the ingress route I get a 502 Bad Gateway.</p> <p>I tried exec-ing into the Angular pod and doing a curl localhost:4200, and I get: Failed to connect to localhost port 4200: Connection refused.</p> <p>Weirdly enough, when I do kubectl logs podname I see nothing, and I thought I always see something here, not sure.</p> <p>The question now is: have I overlooked something?</p> <ul> <li>How do I check if the Angular app is working inside the pod?</li> <li>What is the location of the logfile you normally see when you npm start an app?</li> </ul> <p>I haven't posted any YAML files, because I think the problem lies with the Angular app itself (the curl localhost:4200 hinted at that).</p>
furion2000
<p>Perhaps your container configuration is not set up correctly to start the service.</p> <p>If you have a config like:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: my-angular-app
spec:
  containers:
    - name: my-angular-app
      image: nginx:latest
      ports:
        - containerPort: 80
</code></pre> <p>it should also have a command like <code>command: [&quot;nginx&quot;, &quot;-g&quot;, &quot;daemon off;&quot;]</code> to actually start the service.</p> <p>If it does, the logs will indicate whether it is starting, and the pod description should show the correct port.</p> <p>You can check the logs at the container level by using <code>kubectl logs -f podname -c containername</code> to see if there are any errors or issues that are preventing the app from starting.</p> <p>You can check the port by running <code>kubectl describe pod podname</code> and looking at the &quot;Containers&quot; section.</p>
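<p>As a minimal sketch only: assuming the image actually bundles the Angular dev server (the image name and npm start arguments here are placeholders, not taken from the question), the pod spec would explicitly start the server and expose port 4200:</p> <pre><code># Hypothetical example; image name and start arguments are assumptions
apiVersion: v1
kind: Pod
metadata:
  name: my-angular-app
spec:
  containers:
    - name: my-angular-app
      image: my-registry/my-angular-app:latest   # placeholder image
      # start the dev server and bind to 0.0.0.0 so it is reachable from outside the container
      command: [&quot;npm&quot;, &quot;start&quot;, &quot;--&quot;, &quot;--host&quot;, &quot;0.0.0.0&quot;, &quot;--port&quot;, &quot;4200&quot;]
      ports:
        - containerPort: 4200
</code></pre> <p>With the server bound to 0.0.0.0:4200, <code>curl localhost:4200</code> from inside the pod should respond, and the Service/Ingress targetPort must point at 4200 as well.</p>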
roberto tomás
<p>I am attempting to create a HA Kubernetes cluster in Azure using <code>kubeadm</code> as documented here <code>https://kubernetes.io/docs/setup/independent/high-availability/</code></p> <p>I have everything working when using only 1 master node but when changing to 3 master nodes kube-dns keeps crashing with apiserver issues</p> <p>I can see when running <code>kubectl get nodes</code> that the 3 master nodes are ready</p> <pre><code>NAME STATUS ROLES AGE VERSION k8s-master-0 Ready master 3h v1.9.3 k8s-master-1 Ready master 3h v1.9.3 k8s-master-2 Ready master 3h v1.9.3 </code></pre> <p>but the dns and dashboard pod keep crashing</p> <pre><code>NAME READY STATUS RESTARTS AGE kube-apiserver-k8s-master-0 1/1 Running 0 3h kube-apiserver-k8s-master-1 1/1 Running 0 2h kube-apiserver-k8s-master-2 1/1 Running 0 3h kube-controller-manager-k8s-master-0 1/1 Running 0 3h kube-controller-manager-k8s-master-1 1/1 Running 0 3h kube-controller-manager-k8s-master-2 1/1 Running 0 3h kube-dns-6f4fd4bdf-rmqbf 1/3 CrashLoopBackOff 88 3h kube-proxy-5phhf 1/1 Running 0 3h kube-proxy-h5rk8 1/1 Running 0 3h kube-proxy-ld9wg 1/1 Running 0 3h kube-proxy-n947r 1/1 Running 0 3h kube-scheduler-k8s-master-0 1/1 Running 0 3h kube-scheduler-k8s-master-1 1/1 Running 0 3h kube-scheduler-k8s-master-2 1/1 Running 0 3h kubernetes-dashboard-5bd6f767c7-d8kd7 0/1 CrashLoopBackOff 42 3h </code></pre> <p>The logs <code>kubectl -n kube-system logs kube-dns-6f4fd4bdf-rmqbf -c kubedns</code> indicate there is an api server issue</p> <pre><code>I0521 14:40:31.303585 1 dns.go:48] version: 1.14.6-3-gc36cb11 I0521 14:40:31.304834 1 server.go:69] Using configuration read from directory: /kube-dns-config with period 10s I0521 14:40:31.304989 1 server.go:112] FLAG: --alsologtostderr="false" I0521 14:40:31.305115 1 server.go:112] FLAG: --config-dir="/kube-dns-config" I0521 14:40:31.305164 1 server.go:112] FLAG: --config-map="" I0521 14:40:31.305233 1 server.go:112] FLAG: --config-map-namespace="kube-system" I0521 14:40:31.305285 1 server.go:112] FLAG: --config-period="10s" I0521 14:40:31.305332 1 server.go:112] FLAG: --dns-bind-address="0.0.0.0" I0521 14:40:31.305394 1 server.go:112] FLAG: --dns-port="10053" I0521 14:40:31.305454 1 server.go:112] FLAG: --domain="cluster.local." I0521 14:40:31.305531 1 server.go:112] FLAG: --federations="" I0521 14:40:31.305596 1 server.go:112] FLAG: --healthz-port="8081" I0521 14:40:31.305656 1 server.go:112] FLAG: --initial-sync-timeout="1m0s" I0521 14:40:31.305792 1 server.go:112] FLAG: --kube-master-url="" I0521 14:40:31.305870 1 server.go:112] FLAG: --kubecfg-file="" I0521 14:40:31.305960 1 server.go:112] FLAG: --log-backtrace-at=":0" I0521 14:40:31.306026 1 server.go:112] FLAG: --log-dir="" I0521 14:40:31.306109 1 server.go:112] FLAG: --log-flush-frequency="5s" I0521 14:40:31.306160 1 server.go:112] FLAG: --logtostderr="true" I0521 14:40:31.306216 1 server.go:112] FLAG: --nameservers="" I0521 14:40:31.306267 1 server.go:112] FLAG: --stderrthreshold="2" I0521 14:40:31.306324 1 server.go:112] FLAG: --v="2" I0521 14:40:31.306375 1 server.go:112] FLAG: --version="false" I0521 14:40:31.306433 1 server.go:112] FLAG: --vmodule="" I0521 14:40:31.306510 1 server.go:194] Starting SkyDNS server (0.0.0.0:10053) I0521 14:40:31.306806 1 server.go:213] Skydns metrics enabled (/metrics:10055) I0521 14:40:31.306926 1 dns.go:146] Starting endpointsController I0521 14:40:31.306996 1 dns.go:149] Starting serviceController I0521 14:40:31.307267 1 logs.go:41] skydns: ready for queries on cluster.local. 
for tcp://0.0.0.0:10053 [rcache 0] I0521 14:40:31.307350 1 logs.go:41] skydns: ready for queries on cluster.local. for udp://0.0.0.0:10053 [rcache 0] I0521 14:40:31.807301 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... I0521 14:40:32.307629 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... E0521 14:41:01.307985 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:147: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout E0521 14:41:01.308227 1 reflector.go:201] k8s.io/dns/pkg/dns/dns.go:150: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout I0521 14:41:01.807271 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... I0521 14:41:02.307301 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... I0521 14:41:02.807294 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... I0521 14:41:03.307321 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... I0521 14:41:03.807649 1 dns.go:173] Waiting for services and endpoints to be initialized from apiserver... </code></pre> <p>The output from <code>kubectl -n kube-system logs kube-apiserver-k8s-master-0</code> looks relatively normal, except for all the TLS errors</p> <pre><code> I0521 11:09:53.982465 1 server.go:121] Version: v1.9.7 I0521 11:09:53.982756 1 cloudprovider.go:59] --external-hostname was not specified. Trying to get it from the cloud provider. I0521 11:09:55.934055 1 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated I0521 11:09:55.935038 1 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated I0521 11:09:55.938929 1 feature_gate.go:190] feature gates: map[Initializers:true] I0521 11:09:55.938945 1 initialization.go:90] enabled Initializers feature as part of admission plugin setup I0521 11:09:55.942042 1 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated I0521 11:09:55.948001 1 master.go:225] Using reconciler: lease W0521 11:10:01.032046 1 genericapiserver.go:342] Skipping API batch/v2alpha1 because it has no resources. W0521 11:10:03.333423 1 genericapiserver.go:342] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources. W0521 11:10:03.340119 1 genericapiserver.go:342] Skipping API storage.k8s.io/v1alpha1 because it has no resources. W0521 11:10:04.188602 1 genericapiserver.go:342] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources. 
[restful] 2018/05/21 11:10:04 log.go:33: [restful/swagger] listing is available at https://10.240.0.231:6443/swaggerapi [restful] 2018/05/21 11:10:04 log.go:33: [restful/swagger] https://10.240.0.231:6443/swaggerui/ is mapped to folder /swagger-ui/ [restful] 2018/05/21 11:10:06 log.go:33: [restful/swagger] listing is available at https://10.240.0.231:6443/swaggerapi [restful] 2018/05/21 11:10:06 log.go:33: [restful/swagger] https://10.240.0.231:6443/swaggerui/ is mapped to folder /swagger-ui/ I0521 11:10:06.424379 1 logs.go:41] warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated I0521 11:10:10.910296 1 serve.go:96] Serving securely on [::]:6443 I0521 11:10:10.919244 1 crd_finalizer.go:242] Starting CRDFinalizer I0521 11:10:10.919835 1 apiservice_controller.go:112] Starting APIServiceRegistrationController I0521 11:10:10.919940 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller I0521 11:10:10.920028 1 controller.go:84] Starting OpenAPI AggregationController I0521 11:10:10.921417 1 available_controller.go:262] Starting AvailableConditionController I0521 11:10:10.922341 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller I0521 11:10:10.927021 1 logs.go:41] http: TLS handshake error from 10.240.0.231:49208: EOF I0521 11:10:10.932960 1 logs.go:41] http: TLS handshake error from 10.240.0.231:49210: EOF I0521 11:10:10.937813 1 logs.go:41] http: TLS handshake error from 10.240.0.231:49212: EOF I0521 11:10:10.941682 1 logs.go:41] http: TLS handshake error from 10.240.0.231:49214: EOF I0521 11:10:10.945178 1 logs.go:41] http: TLS handshake error from 127.0.0.1:56640: EOF I0521 11:10:10.949275 1 logs.go:41] http: TLS handshake error from 127.0.0.1:56642: EOF I0521 11:10:10.953068 1 logs.go:41] http: TLS handshake error from 10.240.0.231:49442: EOF --- I0521 11:10:19.912989 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/admin I0521 11:10:19.941699 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/edit I0521 11:10:19.957582 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/view I0521 11:10:19.968065 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-admin I0521 11:10:19.998718 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-edit I0521 11:10:20.015536 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:aggregate-to-view I0521 11:10:20.032728 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:heapster I0521 11:10:20.045918 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:node I0521 11:10:20.063670 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:node-problem-detector I0521 11:10:20.114066 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:node-proxier I0521 11:10:20.135010 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:node-bootstrapper I0521 11:10:20.147462 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:auth-delegator I0521 11:10:20.159892 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:kube-aggregator I0521 11:10:20.181092 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:kube-controller-manager I0521 11:10:20.197645 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:kube-scheduler I0521 
11:10:20.219016 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:kube-dns I0521 11:10:20.235273 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:persistent-volume-provisioner I0521 11:10:20.245893 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:aws-cloud-provider I0521 11:10:20.257459 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:nodeclient I0521 11:10:20.269857 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:certificates.k8s.io:certificatesigningrequests:selfnodeclient I0521 11:10:20.286785 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:attachdetach-controller I0521 11:10:20.298669 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller I0521 11:10:20.310573 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:cronjob-controller I0521 11:10:20.347321 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:daemon-set-controller I0521 11:10:20.364505 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:deployment-controller I0521 11:10:20.365888 1 trace.go:76] Trace[1489234739]: "Create /api/v1/namespaces/kube-system/configmaps" (started: 2018-05-21 11:10:15.961686997 +0000 UTC m=+22.097873350) (total time: 4.404137704s): Trace[1489234739]: [4.000707016s] [4.000623216s] About to store object in database Trace[1489234739]: [4.404137704s] [403.430688ms] END E0521 11:10:20.366636 1 client_ca_hook.go:112] configmaps "extension-apiserver-authentication" already exists I0521 11:10:20.391784 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:disruption-controller I0521 11:10:20.404492 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:endpoint-controller W0521 11:10:20.405827 1 lease.go:223] Resetting endpoints for master service "kubernetes" to [10.240.0.231 10.240.0.233] I0521 11:10:20.423540 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:generic-garbage-collector I0521 11:10:20.476466 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler I0521 11:10:20.495934 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:job-controller I0521 11:10:20.507318 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:namespace-controller I0521 11:10:20.525086 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:node-controller I0521 11:10:20.538631 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:persistent-volume-binder I0521 11:10:20.558614 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:pod-garbage-collector I0521 11:10:20.586665 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:replicaset-controller I0521 11:10:20.600567 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:replication-controller I0521 11:10:20.617268 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:resourcequota-controller I0521 11:10:20.628770 1 storage_rbac.go:208] created 
clusterrole.rbac.authorization.k8s.io/system:controller:route-controller I0521 11:10:20.655147 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:service-account-controller I0521 11:10:20.672926 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:service-controller I0521 11:10:20.694137 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:statefulset-controller I0521 11:10:20.718936 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:ttl-controller I0521 11:10:20.731868 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:certificate-controller I0521 11:10:20.752910 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:pvc-protection-controller I0521 11:10:20.767297 1 storage_rbac.go:208] created clusterrole.rbac.authorization.k8s.io/system:controller:pv-protection-controller I0521 11:10:20.788265 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/cluster-admin I0521 11:10:20.801791 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:discovery I0521 11:10:20.815924 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:basic-user I0521 11:10:20.828531 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:node-proxier I0521 11:10:20.854715 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-controller-manager I0521 11:10:20.864554 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-dns I0521 11:10:20.875950 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:kube-scheduler I0521 11:10:20.900809 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:aws-cloud-provider I0521 11:10:20.913751 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:node I0521 11:10:20.924284 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:attachdetach-controller I0521 11:10:20.940075 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:clusterrole-aggregation-controller I0521 11:10:20.969408 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:cronjob-controller I0521 11:10:20.980017 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:daemon-set-controller I0521 11:10:21.016306 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:deployment-controller I0521 11:10:21.047910 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:disruption-controller I0521 11:10:21.058829 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller I0521 11:10:21.083536 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector I0521 11:10:21.100235 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler I0521 11:10:21.127927 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller I0521 11:10:21.146373 1 storage_rbac.go:236] created 
clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller I0521 11:10:21.160099 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller I0521 11:10:21.184264 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder I0521 11:10:21.204867 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector I0521 11:10:21.224648 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller I0521 11:10:21.742427 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller I0521 11:10:21.758948 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller I0521 11:10:21.801182 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller I0521 11:10:21.832962 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller I0521 11:10:21.860369 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller I0521 11:10:21.892241 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller I0521 11:10:21.931450 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller I0521 11:10:21.963364 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller I0521 11:10:21.980748 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller I0521 11:10:22.003657 1 storage_rbac.go:236] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller I0521 11:10:22.434855 1 controller.go:538] quota admission added evaluator for: { endpoints} ... 
I0521 11:12:06.609728 1 logs.go:41] http: TLS handshake error from 168.63.129.16:64981: EOF I0521 11:12:21.611308 1 logs.go:41] http: TLS handshake error from 168.63.129.16:65027: EOF I0521 11:12:36.612129 1 logs.go:41] http: TLS handshake error from 168.63.129.16:65095: EOF I0521 11:12:51.612245 1 logs.go:41] http: TLS handshake error from 168.63.129.16:65141: EOF I0521 11:13:06.612118 1 logs.go:41] http: TLS handshake error from 168.63.129.16:65177: EOF I0521 11:13:21.612170 1 logs.go:41] http: TLS handshake error from 168.63.129.16:65235: EOF I0521 11:13:36.612218 1 logs.go:41] http: TLS handshake error from 168.63.129.16:65305: EOF I0521 11:13:51.613097 1 logs.go:41] http: TLS handshake error from 168.63.129.16:65354: EOF I0521 11:14:06.613523 1 logs.go:41] http: TLS handshake error from 168.63.129.16:65392: EOF I0521 11:14:21.614148 1 logs.go:41] http: TLS handshake error from 168.63.129.16:65445: EOF I0521 11:14:36.614143 1 logs.go:41] http: TLS handshake error from 168.63.129.16:65520: EOF I0521 11:14:51.614204 1 logs.go:41] http: TLS handshake error from 168.63.129.16:49193: EOF I0521 11:15:06.613995 1 logs.go:41] http: TLS handshake error from 168.63.129.16:49229: EOF I0521 11:15:21.613962 1 logs.go:41] http: TLS handshake error from 168.63.129.16:49284: EOF I0521 11:15:36.615026 1 logs.go:41] http: TLS handshake error from 168.63.129.16:49368: EOF I0521 11:15:51.615991 1 logs.go:41] http: TLS handshake error from 168.63.129.16:49413: EOF I0521 11:16:06.616993 1 logs.go:41] http: TLS handshake error from 168.63.129.16:49454: EOF I0521 11:16:21.616947 1 logs.go:41] http: TLS handshake error from 168.63.129.16:49510: EOF I0521 11:16:36.617859 1 logs.go:41] http: TLS handshake error from 168.63.129.16:49586: EOF I0521 11:16:51.618921 1 logs.go:41] http: TLS handshake error from 168.63.129.16:49644: EOF I0521 11:17:06.619768 1 logs.go:41] http: TLS handshake error from 168.63.129.16:49696: EOF I0521 11:17:21.620123 1 logs.go:41] http: TLS handshake error from 168.63.129.16:49752: EOF I0521 11:17:36.620814 1 logs.go:41] http: TLS handshake error from 168.63.129.16:49821: EOF </code></pre> <p>The output from a second api server however looks at lot more broken </p> <pre><code>E0521 11:11:15.035138 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]] E0521 11:11:15.040764 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]] E0521 11:11:15.717294 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]] E0521 11:11:15.721875 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]] E0521 11:11:15.728534 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]] E0521 11:11:15.734572 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]] E0521 11:11:16.036398 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]] E0521 11:11:16.041735 1 authentication.go:64] Unable to authenticate the request due to an error: 
[invalid bearer token, [invalid bearer token, crypto/rsa: verification error]] E0521 11:11:16.730094 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]] E0521 11:11:16.736057 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]] E0521 11:11:16.741505 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]] E0521 11:11:16.741980 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]] E0521 11:11:17.037722 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]] E0521 11:11:17.042680 1 authentication.go:64] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, crypto/rsa: verification error]] </code></pre>
Mike Norgate
<p>I eventually got to the bottom of this. I had not copied the same Service Account signing keys onto each master node (<code>sa.key</code>, <code>sa.pub</code>).</p> <p>These keys are documented here: <a href="https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.7.md" rel="nofollow noreferrer">https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.7.md</a></p> <p><code>a private key for signing ServiceAccount Tokens (sa.key) along with its public key (sa.pub)</code></p> <p>And the step that I had missed is documented here: <a href="https://kubernetes.io/docs/setup/independent/high-availability/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/independent/high-availability/</a></p> <p><code>Copy the contents of /etc/kubernetes/pki/ca.crt, /etc/kubernetes/pki/ca.key, /etc/kubernetes/pki/sa.key and /etc/kubernetes/pki/sa.pub and create these files manually on master1 and master2</code></p>
Mike Norgate
<p>We are trying to deploy a Kubernetes cluster with the help of Azure Kubernetes Service (<strong>AKS</strong>) into our <strong>existing</strong> virtual network. This virtual network has <strong>custom route tables</strong>.</p> <p>The deployment process is done via an external application. Permissions should be given to this application with the help of a Service Principal. The <a href="https://learn.microsoft.com/en-us/azure/aks/configure-kubenet#bring-your-own-subnet-and-route-table-with-kubenet" rel="nofollow noreferrer">documentation</a> says under the <em>Limitations</em> section:</p> <ul> <li><em>Permissions must be assigned before cluster creation, ensure you are using a service principal with write permissions to your custom subnet and custom route table.</em></li> </ul> <p>We have a security team that is responsible for <strong>giving permissions to service principals</strong> and managing networking. Without knowing exactly what rules will be written into the route tables <strong>by</strong> <strong>AKS</strong>, they won't give the permission to the proper service principal.</p> <p><strong>Does somebody know what rules AKS wants to write into those route tables?</strong></p>
Robert
<p>The documentation you are pointing to is for a cluster using <strong>Kubenet</strong> networking. Is there a reason why you don't want to use <a href="https://learn.microsoft.com/en-us/azure/aks/configure-azure-cni" rel="nofollow noreferrer">Azure CNI</a> instead? If you are using Azure CNI, you will of course consume more IP addresses, but AKS will not need to write into the route table.</p> <p>That said, if you really want to use Kubenet, the rules that will be written to the route table depend on what you deploy inside your cluster, since Kubenet uses the route table to route the traffic. It adds rules throughout the cluster lifecycle as you add Pods, Services, etc.</p> <p><a href="https://i.stack.imgur.com/oXJVx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oXJVx.png" alt="enter image description here" /></a></p>
Jean-Philippe Bond
<p>We're trying to use nextflow on a k8s namespace other than our default, the namespace we're using is <code>nextflownamespace</code>. We've created our PVC and ensured the default service account has an admin rolebinding. We're getting an error that nextflow can't access the PVC:</p> <pre><code>&quot;message&quot;: &quot;persistentvolumeclaims \&quot;my-nextflow-pvc\&quot; is forbidden: User \&quot;system:serviceaccount:mynamespace:default\&quot; cannot get resource \&quot;persistentvolumeclaims\&quot; in API group \&quot;\&quot; in the namespace \&quot;nextflownamespace\&quot;&quot;,
</code></pre> <p>In that error we see that <code>system:serviceaccount:mynamespace:default</code> is incorrectly pointing to our default namespace, <code>mynamespace</code>, not <code>nextflownamespace</code>, which we created for nextflow use.</p> <p>We tried adding <code>debug.yaml = true</code> to our <code>nextflow.config</code> but couldn't find the YAML it submits to k8s to validate the error. Our config file looks like this:</p> <pre><code>profiles {
    standard {
        k8s {
            executor = &quot;k8s&quot;
            namespace = &quot;nextflownamespace&quot;
            cpus = 1
            memory = 1.GB
            debug.yaml = true
        }
        aws {
            endpoint = &quot;https://s3.nautilus.optiputer.net&quot;
        }
    }
}
</code></pre> <p>We did verify that when we change the namespace to another arbitrary value the error message used the new arbitrary namespace, but the service account name continued to point to the user's default namespace erroneously.</p> <p>We've tried every variant of <code>profiles.standard.k8s.serviceAccount = &quot;system:serviceaccount:nextflownamespace:default&quot;</code> that we could think of but didn't get any change with those attempts.</p>
David Parks
<p>I think it's best to avoid using nested <a href="https://www.nextflow.io/docs/latest/config.html#config-profiles" rel="nofollow noreferrer">config profiles</a> with Nextflow. I would either remove the 'standard' layer from your profile or just make 'standard' a separate profile:</p> <pre><code>profiles {
    standard {
        process.executor = 'local'
    }
    k8s {
        executor = &quot;k8s&quot;
        namespace = &quot;nextflownamespace&quot;
        cpus = 1
        memory = 1.GB
        debug.yaml = true
    }
    aws {
        endpoint = &quot;https://s3.nautilus.optiputer.net&quot;
    }
}
</code></pre>
Steve
<p>On Cloud Composer I have long-running DAG tasks, each of them running for 4 to 6 hours. The task ends with an error which is raised by the Kubernetes API. The error message states 401 Unauthorized.</p> <p>The error message:</p> <pre><code>kubernetes.client.rest.ApiException: (401)
Reason: Unauthorized
HTTP response headers: HTTPHeaderDict({'Audit-Id': 'e1a37278-0693-4f36-8b04-0a7ce0b7f7a0', 'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'Date': 'Fri, 07 Jul 2023 08:10:15 GMT', 'Content-Length': '129'})
HTTP response body: {&quot;kind&quot;:&quot;Status&quot;,&quot;apiVersion&quot;:&quot;v1&quot;,&quot;metadata&quot;:{},&quot;status&quot;:&quot;Failure&quot;,&quot;message&quot;:&quot;Unauthorized&quot;,&quot;reason&quot;:&quot;Unauthorized&quot;,&quot;code&quot;:401}
</code></pre> <p>The Kubernetes API token has an expiry of 1 hour, and Composer is not renewing the token before it expires. This issue never happened with Composer 1; it started showing only when I migrated from Composer 1 to Composer 2.</p> <p>Additional details: there is an option in GKEStartPodOperator called is_delete_operator_pod that is set to true. This option deletes the pod from the cluster after the job is done. So, after the task is completed in about 4 hours, Composer tries to delete the pod, and at that point this 401 Unauthorized error is shown.</p> <p>I have checked some Airflow configs like kubernetes.enable_tcp_keepalive, which enables the TCP keepalive mechanism for Kubernetes clusters, but it doesn't help resolve the problem.</p> <p>What can be done to prevent this error?</p>
Kavya
<p>After experiencing the same issue, I found a fix in the latest version of the Google provider for Airflow, which is currently not yet available in Cloud Composer. However, you can manually override this by adding the release candidate package to your Cloud Composer instance.</p> <p>You can use the release candidate for version <code>10.5.0</code> of the <code>apache-airflow-providers-google</code> Python package. It can be found <a href="https://pypi.org/project/apache-airflow-providers-google/10.5.0rc1/" rel="nofollow noreferrer">here</a>.</p> <p>The override can be accomplished by either <a href="https://cloud.google.com/composer/docs/how-to/using/installing-python-dependencies#install-pypi" rel="nofollow noreferrer">manually adding a PyPI package</a> in the Cloud Composer environment's settings, or by adding the package to the <a href="https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/composer_environment.html#with-software-airflow-config" rel="nofollow noreferrer">Terraform resource</a>. The update takes about 15-30 minutes.</p> <p>I tested this and can confirm it works. Tasks can again run longer than 1h.</p>
Jonny5
<p>I have created an EKS cluster with Fargate. I deployed two microservices. Everything is working properly with an ingress and two separate application load balancers. I am trying to create an ingress with one ALB which will route the traffic to both services. The potential problem is that both services use the same port (8080). How do I create an ingress for this? I also have a registered domain in Route 53.</p>
Bakula33
<p>I believe you can accomplish this using the <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/" rel="nofollow noreferrer">ALB Ingress Controller</a>.</p>
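<p>A rough sketch of what the Ingress could look like, assuming the AWS Load Balancer Controller is installed; the hostnames and service names below are placeholders, and <code>target-type</code> must be <code>ip</code> on Fargate because there are no node instances to register:</p> <pre><code># Hypothetical example; hostnames and service names are assumptions
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip   # required for Fargate pods
spec:
  rules:
    - host: app1.example.com        # Route 53 record pointing at the ALB
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-one
                port:
                  number: 8080
    - host: app2.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-two
                port:
                  number: 8080
</code></pre> <p>Both backends can listen on 8080 because the ALB routes by host (or path), not by port, so a single load balancer can front both services.</p>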
grahamlyons
<p><a href="https://i.stack.imgur.com/A2G76.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/A2G76.png" alt="enter image description here" /></a></p> <p>The nodeport takes in 3 parameters in the service yaml.</p> <ul> <li>port</li> <li>targetPort</li> <li>nodePort</li> </ul> <p>Since all you wanted was to map a port on the node(nodeport) to the port on the container(targetPort), why do we need to provide the port of the service?</p> <p>Is this because Nodeport is internally implemented <strong>on top of</strong> ClusterIP?</p>
D.B.K
<p>&quot;Nodeport is internally implemented on top of ClusterIP&quot; - correct.</p> <p>The port in the Kubernetes Service definition is used to specify the port on which the service will listen for traffic within the Kubernetes cluster. This is the port that will be exposed to other pods in the cluster as an endpoint for the service. When a request is made to this port by a client within the cluster, the traffic will be routed to one of the pods selected by the Service based on its load balancing algorithm.</p> <p>The nodePort is used to expose the service on a port on the node itself, which allows the service to be accessed from outside the cluster.</p>
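<p>For illustration only (the names and port numbers here are arbitrary, not taken from the question), the three fields sit together in a single Service manifest like this:</p> <pre><code># Hypothetical example
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80          # service port: other pods reach the service at &lt;ClusterIP&gt;:80
      targetPort: 8080  # container port the traffic is forwarded to
      nodePort: 30080   # port opened on every node for external access
</code></pre> <p>So clients inside the cluster use <code>port</code>, the pod listens on <code>targetPort</code>, and external clients use <code>&lt;NodeIP&gt;:nodePort</code>.</p>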
HoaPhan
<p>I have created an AKS cluster using the following Terraform code</p> <pre><code>resource &quot;azurerm_virtual_network&quot; &quot;test&quot; { name = var.virtual_network_name location = azurerm_resource_group.rg.location resource_group_name = azurerm_resource_group.rg.name address_space = [var.virtual_network_address_prefix] subnet { name = var.aks_subnet_name address_prefix = var.aks_subnet_address_prefix } subnet { name = &quot;appgwsubnet&quot; address_prefix = var.app_gateway_subnet_address_prefix } tags = var.tags } data &quot;azurerm_subnet&quot; &quot;kubesubnet&quot; { name = var.aks_subnet_name virtual_network_name = azurerm_virtual_network.test.name resource_group_name = azurerm_resource_group.rg.name depends_on = [azurerm_virtual_network.test] } resource &quot;azurerm_kubernetes_cluster&quot; &quot;k8s&quot; { name = var.aks_name location = azurerm_resource_group.rg.location dns_prefix = var.aks_dns_prefix resource_group_name = azurerm_resource_group.rg.name http_application_routing_enabled = false linux_profile { admin_username = var.vm_user_name ssh_key { key_data = file(var.public_ssh_key_path) } } default_node_pool { name = &quot;agentpool&quot; node_count = var.aks_agent_count vm_size = var.aks_agent_vm_size os_disk_size_gb = var.aks_agent_os_disk_size vnet_subnet_id = data.azurerm_subnet.kubesubnet.id } service_principal { client_id = local.client_id client_secret = local.client_secret } network_profile { network_plugin = &quot;azure&quot; dns_service_ip = var.aks_dns_service_ip docker_bridge_cidr = var.aks_docker_bridge_cidr service_cidr = var.aks_service_cidr } # Enabled the cluster configuration to the Azure kubernets with RBAC azure_active_directory_role_based_access_control { managed = var.azure_active_directory_role_based_access_control_managed admin_group_object_ids = var.active_directory_role_based_access_control_admin_group_object_ids azure_rbac_enabled = var.azure_rbac_enabled } oms_agent { log_analytics_workspace_id = module.log_analytics_workspace[0].id } timeouts { create = &quot;20m&quot; delete = &quot;20m&quot; } depends_on = [data.azurerm_subnet.kubesubnet,module.log_analytics_workspace] tags = var.tags } resource &quot;azurerm_role_assignment&quot; &quot;ra1&quot; { scope = data.azurerm_subnet.kubesubnet.id role_definition_name = &quot;Network Contributor&quot; principal_id = local.client_objectid depends_on = [data.azurerm_subnet.kubesubnet] } </code></pre> <p>and followed the below steps to install the ISTIO as per the <a href="https://istio.io/latest/docs/setup/install/helm/" rel="nofollow noreferrer">ISTIO documentation</a></p> <pre><code>#Prerequisites helm repo add istio https://istio-release.storage.googleapis.com/charts helm repo update #create namespace kubectl create namespace istio-system # helm install istio-base and istiod helm install istio-base istio/base -n istio-system helm install istiod istio/istiod -n istio-system --wait # Check the installation status helm status istiod -n istio-system #create namespace and enable istio-injection for envoy proxy containers kubectl create namespace istio-ingress kubectl label namespace istio-ingress istio-injection=enabled ## helm install istio-ingress for traffic management helm install istio-ingress istio/gateway -n istio-ingress --wait ## Mark the default namespace as istio-injection=enabled kubectl label namespace default istio-injection=enabled ## Install the App and Gateway kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.16/samples/bookinfo/platform/kube/bookinfo.yaml 
kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.16/samples/bookinfo/networking/bookinfo-gateway.yaml # Check the Services, Pods and Gateway kubectl get services kubectl get pods kubectl get gateway # Ensure the app is running kubectl exec &quot;$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')&quot; -c ratings -- curl -sS productpage:9080/productpage | grep -o &quot;&lt;title&gt;.*&lt;/title&gt;&quot; </code></pre> <p>and it is responding as shown below</p> <p><a href="https://i.stack.imgur.com/5EHbb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5EHbb.png" alt="enter image description here" /></a></p> <p><a href="https://i.stack.imgur.com/MfKvS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MfKvS.png" alt="enter image description here" /></a></p> <pre><code># Check the $INGRESS_NAME=&quot;istio-ingress&quot; $INGRESS_NS=&quot;istio-ingress&quot; kubectl get svc &quot;$INGRESS_NAME&quot; -n &quot;$INGRESS_NS&quot; </code></pre> <p>it returns the external IP as shown below</p> <p><a href="https://i.stack.imgur.com/AMU9t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AMU9t.png" alt="enter image description here" /></a></p> <p>However, I am not able to access the application</p> <p><a href="https://i.stack.imgur.com/97RHe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/97RHe.png" alt="enter image description here" /></a></p> <p>Also I am getting an error while trying to run the following commands to find the ports</p> <pre><code>kubectl -n &quot;$INGRESS_NS&quot; get service &quot;$INGRESS_NAME&quot; -o jsonpath='{.spec.ports[?(@.name==&quot;http2&quot;)].port}' kubectl -n &quot;$INGRESS_NS&quot; get service &quot;$INGRESS_NAME&quot; -o jsonpath='{.spec.ports[?(@.name==&quot;https&quot;)].port}' kubectl -n &quot;$INGRESS_NS&quot; get service &quot;$INGRESS_NAME&quot; -o jsonpath='{.spec.ports[?(@.name==&quot;tcp&quot;)].port}' </code></pre>
One Developer
<p>This is because you have hit the <a href="https://artifacthub.io/packages/helm/istio-official/gateway#general-concerns" rel="nofollow noreferrer">general concerns</a> of the chart: the <code>istio-</code> prefix is stripped from the release name. With the step-by-step installation above, the release <code>istio-ingress</code> is stripped to <code>ingress</code>, so the gateway pods do not carry the label the Bookinfo Gateway selector expects. Either install the chart as <code>istio-ingressgateway</code> so that it matches the app selector, or change the app selector to match the installed gateway.</p>
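<p>For example (a sketch based on the Bookinfo sample; the exact label value should be verified with <code>kubectl get pods -n istio-ingress --show-labels</code>), the Gateway selector can be pointed at the chart's stripped label instead of the default <code>istio: ingressgateway</code>:</p> <pre><code># Hypothetical adjustment of the Bookinfo Gateway; the selector value is an assumption
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingress        # matches a gateway installed as &quot;istio-ingress&quot;
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - &quot;*&quot;
</code></pre>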
Turbot
<p>I am trying to use Hazelcast on Kubernetes. For that the Docker is installed on Windows and Kubernetes environment is simulate on the Docker. Here is the config file <code>hazelcast.xml</code></p> <pre class="lang-xml prettyprint-override"><code>&lt;?xml version=&quot;1.0&quot; encoding=&quot;UTF-8&quot;?&gt; &lt;hazelcast xsi:schemaLocation=&quot;http://www.hazelcast.com/schema/config hazelcast-config-3.7.xsd&quot; xmlns=&quot;http://www.hazelcast.com/schema/config&quot; xmlns:xsi=&quot;http://www.w3.org/2001/XMLSchema-instance&quot;&gt; &lt;properties&gt; &lt;property name=&quot;hazelcast.discovery.enabled&quot;&gt;true&lt;/property&gt; &lt;/properties&gt; &lt;network&gt; &lt;join&gt; &lt;multicast enabled=&quot;false&quot; /&gt; &lt;tcp-ip enabled=&quot;false&quot;/&gt; &lt;discovery-strategies&gt; &lt;discovery-strategy enabled=&quot;true&quot; class=&quot;com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy&quot;&gt; &lt;!-- &lt;properties&gt; &lt;property name=&quot;service-dns&quot;&gt;cobrapp.default.endpoints.cluster.local&lt;/property&gt; &lt;property name=&quot;service-dns-timeout&quot;&gt;10&lt;/property&gt; &lt;/properties&gt; --&gt; &lt;/discovery-strategy&gt; &lt;/discovery-strategies&gt; &lt;/join&gt; &lt;/network&gt; &lt;/hazelcast&gt; </code></pre> <p>The problem is that it is unable to create cluster on the simulated environment. According to my deploment file it should create three clusters. Here is the deployment config file</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: test-deployment labels: app: test spec: replicas: 3 selector: matchLabels: app: test template: metadata: labels: app: test spec: containers: - name: test imagePullPolicy: Never image: testapp:latest ports: - containerPort: 5701 - containerPort: 8085 --- apiVersion: v1 kind: Service metadata: name: test-service spec: selector: app: test type: LoadBalancer ports: - name: hazelcast port: 5701 - name: test protocol: TCP port: 8085 targetPort: 8085 </code></pre> <p>The output upon executing the deployment file</p> <pre><code>Members [1] { Member [10.1.0.124]:5701 this } </code></pre> <p>However the expected output is, it should have three clusters in it as per the deployment file. If anybody can help?</p>
nee nee
<p>Hazelcast's default multicast discovery doesn't work on Kubernetes out of the box. You need an additional discovery plugin for that. Two modes are available: Kubernetes API and DNS lookup.</p> <p>Please check <a href="https://github.com/hazelcast/hazelcast-kubernetes" rel="nofollow noreferrer">the relevant documentation</a> for more information.</p>
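<p>One detail that is easy to miss with the Kubernetes API mode: the service account the pods run under needs read access to the API, or the members will not find each other. A sketch of the RBAC objects the plugin's documentation describes (the role and binding names are placeholders, and the binding assumes the pods use the <code>default</code> service account in the <code>default</code> namespace):</p> <pre><code># Hypothetical example following the hazelcast-kubernetes docs; names are assumptions
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: hazelcast-cluster-role
rules:
  - apiGroups: [&quot;&quot;]
    resources: [&quot;endpoints&quot;, &quot;pods&quot;, &quot;nodes&quot;, &quot;services&quot;]
    verbs: [&quot;get&quot;, &quot;list&quot;]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: hazelcast-cluster-role-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: hazelcast-cluster-role
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
</code></pre>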
Nicolas
<p>I am trying to get a certificate issued from Let's Encrypt, and it has been 3 and a half hours.</p> <p>I accidentally originally set my secretName as "echo-tls" before switching it to the correct "pandaist-tls" that I want to use instead.</p> <p>I currently have this:</p> <pre><code>kubectl get CertificateRequest -o wide NAME READY ISSUER STATUS AGE pandaist-tls-1926992011 False letsencrypt-prod Waiting on certificate issuance from order default/pandaist-tls-1926992011-2163900139: "pending" 3h26m </code></pre> <p>When I describe the certificate, I get this:</p> <pre><code>Deployment kubectl describe CertificateRequest pandaist-tls-1926992011 Name: pandaist-tls-1926992011 Namespace: default Labels: &lt;none&gt; Annotations: cert-manager.io/certificate-name: pandaist-tls cert-manager.io/private-key-secret-name: pandaist-tls API Version: cert-manager.io/v1alpha2 Kind: CertificateRequest Metadata: Creation Timestamp: 2020-04-07T15:41:13Z Generation: 1 Owner References: API Version: cert-manager.io/v1alpha2 Block Owner Deletion: true Controller: true Kind: Certificate Name: pandaist-tls UID: 25c3ff31-447f-4abf-a23e-ec48f5a591a9 Resource Version: 500795 Self Link: /apis/cert-manager.io/v1alpha2/namespaces/default/certificaterequests/pandaist-tls-1926992011 UID: 8295836d-fb99-4ebf-8803-a344d6edb574 Spec: Csr: ABUNCHOFVALUESTHATIWILLNOTDESCRIBE Issuer Ref: Group: cert-manager.io Kind: ClusterIssuer Name: letsencrypt-prod Status: Conditions: Last Transition Time: 2020-04-07T15:41:13Z Message: Waiting on certificate issuance from order default/pandaist-tls-1926992011-2163900139: "pending" Reason: Pending Status: False Type: Ready Events: &lt;none&gt; </code></pre> <p>And then I look at my logs for my cert-manager pods - here are small slices of each:</p> <pre><code>I0407 19:01:35.499469 1 service.go:43] cert-manager/controller/challenges/http01/selfCheck/http01/ensureService "level"=0 "msg"="found one existing HTTP01 solver Service for challenge resource" "dnsName"="pandaist.com" "related_resource_kind"="Service" "related_resource_name"="cm-acme-http-solver-2fp58" "related_resource_namespace"="default" "resource_kind"="Challenge" "resource_name"="pandaist-tls-1926992011-2163900139-2157075729" "resource_namespace"="default" "type"="http-01" I0407 19:01:35.499513 1 service.go:43] cert-manager/controller/challenges/http01/selfCheck/http01/ensureService "level"=0 "msg"="found one existing HTTP01 solver Service for challenge resource" "dnsName"="auth.pandaist.com" "related_resource_kind"="Service" "related_resource_name"="cm-acme-http-solver-xhjsr" "related_resource_namespace"="default" "resource_kind"="Challenge" "resource_name"="pandaist-tls-1926992011-2163900139-832917849" "resource_namespace"="default" "type"="http-01" I0407 19:01:35.499534 1 ingress.go:91] cert-manager/controller/challenges/http01/selfCheck/http01/ensureIngress "level"=0 "msg"="found one existing HTTP01 solver ingress" "dnsName"="pandaist.com" "related_resource_kind"="Ingress" "related_resource_name"="cm-acme-http-solver-pd9fh" "related_resource_namespace"="default" "resource_kind"="Challenge" "resource_name"="pandaist-tls-1926992011-2163900139-2157075729" "resource_namespace"="default" "type"="http-01" I0407 19:01:35.499578 1 ingress.go:91] cert-manager/controller/challenges/http01/selfCheck/http01/ensureIngress "level"=0 "msg"="found one existing HTTP01 solver ingress" "dnsName"="auth.pandaist.com" "related_resource_kind"="Ingress" "related_resource_name"="cm-acme-http-solver-b6zr2" "related_resource_namespace"="default" 
"resource_kind"="Challenge" "resource_name"="pandaist-tls-1926992011-2163900139-832917849" "resource_namespace"="default" "type"="http-01" E0407 19:03:46.571074 1 sync.go:184] cert-manager/controller/challenges "msg"="propagation check failed" "error"="failed to perform self check GET request 'http://pandaist.com/.well-known/acme-challenge/6Wduj2Ejr59OZ9SFy_Rw4jnozE50xspK-a5OIvCwYsc': Get http://pandaist.com/.well-known/acme-challenge/6Wduj2Ejr59OZ9SFy_Rw4jnozE50xspK-a5OIvCwYsc: dial tcp 178.128.132.218:80: connect: connection timed out" "dnsName"="pandaist.com" "resource_kind"="Challenge" "resource_name"="pandaist-tls-1926992011-2163900139-2157075729" "resource_namespace"="default" "type"="http-01" E0407 19:03:46.571109 1 sync.go:184] cert-manager/controller/challenges "msg"="propagation check failed" "error"="failed to perform self check GET request 'http://auth.pandaist.com/.well-known/acme-challenge/gO91--fK0SGG15aS3ALOHXXYtCSly2Q9pbVO8OJW2aE': Get http://auth.pandaist.com/.well-known/acme-challenge/gO91--fK0SGG15aS3ALOHXXYtCSly2Q9pbVO8OJW2aE: dial tcp 178.128.132.218:80: connect: connection timed out" "dnsName"="auth.pandaist.com" "resource_kind"="Challenge" "resource_name"="pandaist-tls-1926992011-2163900139-832917849" "resource_namespace"="default" "type"="http-01" I0407 19:03:46.571382 1 controller.go:135] cert-manager/controller/challenges "level"=0 "msg"="finished processing work item" "key"="default/pandaist-tls-1926992011-2163900139-832917849" I0407 19:03:46.571528 1 controller.go:129] cert-manager/controller/challenges "level"=0 "msg"="syncing item" "key"="default/pandaist-tls-1926992011-2163900139-832917849" I0407 19:03:46.571193 1 controller.go:135] cert-manager/controller/challenges "level"=0 "msg"="finished processing work item" "key"="default/pandaist-tls-1926992011-2163900139-2157075729" I0407 19:03:46.572009 1 controller.go:129] cert-manager/controller/challenges "level"=0 "msg"="syncing item" "key"="default/pandaist-tls-1926992011-2163900139-2157075729" I0407 19:03:46.572338 1 pod.go:58] cert-manager/controller/challenges/http01/selfCheck/http01/ensurePod "level"=0 "msg"="found one existing HTTP01 solver pod" "dnsName"="auth.pandaist.com" "related_resource_kind"="Pod" "related_resource_name"="cm-acme-http-solver-scqtx" "related_resource_namespace"="default" "resource_kind"="Challenge" "resource_name"="pandaist-tls-1926992011-2163900139-832917849" "resource_namespace"="default" "type"="http-01" I0407 19:03:46.572600 1 service.go:43] cert-manager/controller/challenges/http01/selfCheck/http01/ensureService "level"=0 "msg"="found one existing HTTP01 solver Service for challenge resource" "dnsName"="auth.pandaist.com" "related_resource_kind"="Service" "related_resource_name"="cm-acme-http-solver-xhjsr" "related_resource_namespace"="default" "resource_kind"="Challenge" "resource_name"="pandaist-tls-1926992011-2163900139-832917849" "resource_namespace"="default" "type"="http-01" I0407 19:03:46.572860 1 ingress.go:91] cert-manager/controller/challenges/http01/selfCheck/http01/ensureIngress "level"=0 "msg"="found one existing HTTP01 solver ingress" "dnsName"="auth.pandaist.com" "related_resource_kind"="Ingress" "related_resource_name"="cm-acme-http-solver-b6zr2" "related_resource_namespace"="default" "resource_kind"="Challenge" "resource_name"="pandaist-tls-1926992011-2163900139-832917849" "resource_namespace"="default" "type"="http-01" I0407 19:03:46.573128 1 pod.go:58] cert-manager/controller/challenges/http01/selfCheck/http01/ensurePod "level"=0 "msg"="found one existing 
HTTP01 solver pod" "dnsName"="pandaist.com" "related_resource_kind"="Pod" "related_resource_name"="cm-acme-http-solver-jn65v" "related_resource_namespace"="default" "resource_kind"="Challenge" "resource_name"="pandaist-tls-1926992011-2163900139-2157075729" "resource_namespace"="default" "type"="http-01" I0407 19:03:46.573433 1 service.go:43] cert-manager/controller/challenges/http01/selfCheck/http01/ensureService "level"=0 "msg"="found one existing HTTP01 solver Service for challenge resource" "dnsName"="pandaist.com" "related_resource_kind"="Service" "related_resource_name"="cm-acme-http-solver-2fp58" "related_resource_namespace"="default" "resource_kind"="Challenge" "resource_name"="pandaist-tls-1926992011-2163900139-2157075729" "resource_namespace"="default" "type"="http-01" I0407 19:03:46.573749 1 ingress.go:91] cert-manager/controller/challenges/http01/selfCheck/http01/ensureIngress "level"=0 "msg"="found one existing HTTP01 solver ingress" "dnsName"="pandaist.com" "related_resource_kind"="Ingress" "related_resource_name"="cm-acme-http-solver-pd9fh" "related_resource_namespace"="default" "resource_kind"="Challenge" "resource_name"="pandaist-tls-1926992011-2163900139-2157075729" "resource_namespace"="default" "type"="http-01" </code></pre> <p>And then here, where I still see echo-tls, despite the fact that I changed my ingress to use pandaist-tls:</p> <pre><code>I0407 15:34:37.115159 1 controller.go:242] cert-manager/controller-runtime/controller "level"=1 "msg"="Successfully Reconciled" "controller"="validatingwebhookconfiguration" "request"={"Namespace":"","Name":"cert-manager-webhook"} I0407 15:34:37.118246 1 controller.go:170] cert-manager/inject-controller "level"=1 "msg"="updated object" "resource_kind"="ValidatingWebhookConfiguration" "resource_name"="cert-manager-webhook" "resource_namespace"="" I0407 15:34:37.118520 1 controller.go:242] cert-manager/controller-runtime/controller "level"=1 "msg"="Successfully Reconciled" "controller"="validatingwebhookconfiguration" "request"={"Namespace":"","Name":"cert-manager-webhook"} I0407 15:34:37.119415 1 sources.go:176] cert-manager/inject-controller "level"=0 "msg"="Extracting CA from Secret resource" "resource_kind"="ValidatingWebhookConfiguration" "resource_name"="cert-manager-webhook" "resource_namespace"="" "secret"="cert-manager/cert-manager-webhook-tls" I0407 15:34:37.120959 1 controller.go:170] cert-manager/inject-controller "level"=1 "msg"="updated object" "resource_kind"="MutatingWebhookConfiguration" "resource_name"="cert-manager-webhook" "resource_namespace"="" I0407 15:34:37.121399 1 controller.go:242] cert-manager/controller-runtime/controller "level"=1 "msg"="Successfully Reconciled" "controller"="mutatingwebhookconfiguration" "request"={"Namespace":"","Name":"cert-manager-webhook"} I0407 15:34:37.124545 1 controller.go:170] cert-manager/inject-controller "level"=1 "msg"="updated object" "resource_kind"="ValidatingWebhookConfiguration" "resource_name"="cert-manager-webhook" "resource_namespace"="" I0407 15:34:37.125160 1 controller.go:242] cert-manager/controller-runtime/controller "level"=1 "msg"="Successfully Reconciled" "controller"="validatingwebhookconfiguration" "request"={"Namespace":"","Name":"cert-manager-webhook"} E0407 16:19:36.762436 1 indexers.go:93] cert-manager/secret-for-certificate-mapper "msg"="unable to fetch certificate that owns the secret" "error"="Certificate.cert-manager.io \"echo-tls\" not found" "certificate"={"Namespace":"default","Name":"echo-tls"} 
"secret"={"Namespace":"default","Name":"echo-tls"} E0407 16:19:36.762573 1 indexers.go:93] cert-manager/secret-for-certificate-mapper "msg"="unable to fetch certificate that owns the secret" "error"="Certificate.cert-manager.io \"echo-tls\" not found" "certificate"={"Namespace":"default","Name":"echo-tls"} "secret"={"Namespace":"default","Name":"echo-tls"} E0407 16:19:36.762753 1 indexers.go:93] cert-manager/secret-for-certificate-mapper "msg"="unable to fetch certificate that owns the secret" "error"="Certificate.cert-manager.io \"echo-tls\" not found" "certificate"={"Namespace":"default","Name":"echo-tls"} "secret"={"Namespace":"default","Name":"echo-tls"} E0407 16:19:36.762766 1 indexers.go:93] cert-manager/secret-for-certificate-mapper "msg"="unable to fetch certificate that owns the secret" "error"="Certificate.cert-manager.io \"echo-tls\" not found" "certificate"={"Namespace":"default","Name":"echo-tls"} "secret"={"Namespace":"default","Name":"echo-tls"} </code></pre> <p>My ingress:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: pandaist-ingress annotations: kubernetes.io/ingress.class: "nginx" cert-manager.io/cluster-issuer: "letsencrypt-prod" spec: tls: - hosts: - pandaist.com - auth.pandaist.com secretName: pandaist-tls rules: - host: pandaist.com http: paths: - backend: serviceName: pandaist-main servicePort: 80 - host: auth.pandaist.com http: paths: - backend: serviceName: pandaist-keycloak servicePort: 80 </code></pre> <p>This ingress was absolutely applied after the echo one.</p> <p>Is this just normal certificate approval time (3.5 hours) or did the accidental inclusion of echo-tls mess up my certificate issuance? If so, how do I fix it?</p>
Steven Matthews
<p>Due to a bug in how load balancers work on Digital Ocean:</p> <p><a href="https://www.digitalocean.com/community/questions/how-do-i-correct-a-connection-timed-out-error-during-http-01-challenge-propagation-with-cert-manager" rel="nofollow noreferrer">https://www.digitalocean.com/community/questions/how-do-i-correct-a-connection-timed-out-error-during-http-01-challenge-propagation-with-cert-manager</a></p> <p>This will solve the problem:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: ingress-nginx annotations: # See https://github.com/digitalocean/digitalocean-cloud-controller-manager/blob/master/docs/controllers/services/examples/README.md#accessing-pods-over-a-managed-load-balancer-from-inside-the-cluster service.beta.kubernetes.io/do-loadbalancer-hostname: "kube.mydomain.com" namespace: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx spec: externalTrafficPolicy: Local type: LoadBalancer selector: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx ports: - name: http port: 80 targetPort: http - name: https port: 443 targetPort: https </code></pre>
Steven Matthews
<p>This is what I keep getting:</p> <pre><code>[root@centos-master ~]# kubectl get pods NAME READY STATUS RESTARTS AGE nfs-server-h6nw8 1/1 Running 0 1h nfs-web-07rxz 0/1 CrashLoopBackOff 8 16m nfs-web-fdr9h 0/1 CrashLoopBackOff 8 16m </code></pre> <p>Below is output from <code>describe pods</code> <a href="https://i.stack.imgur.com/qUtPV.png" rel="noreferrer">kubectl describe pods</a></p> <pre><code>Events: FirstSeen LastSeen Count From SubobjectPath Type Reason Message --------- -------- ----- ---- ------------- -------- ------ ------- 16m 16m 1 {default-scheduler } Normal Scheduled Successfully assigned nfs-web-fdr9h to centos-minion-2 16m 16m 1 {kubelet centos-minion-2} spec.containers{web} Normal Created Created container with docker id 495fcbb06836 16m 16m 1 {kubelet centos-minion-2} spec.containers{web} Normal Started Started container with docker id 495fcbb06836 16m 16m 1 {kubelet centos-minion-2} spec.containers{web} Normal Started Started container with docker id d56f34ae4e8f 16m 16m 1 {kubelet centos-minion-2} spec.containers{web} Normal Created Created container with docker id d56f34ae4e8f 16m 16m 2 {kubelet centos-minion-2} Warning FailedSync Error syncing pod, skipping: failed to &quot;StartContainer&quot; for &quot;web&quot; with CrashLoopBackOff: &quot;Back-off 10s restarting failed container=web pod=nfs-web-fdr9h_default(461c937d-d870-11e6-98de-005056040cc2)&quot; </code></pre> <p>I have two pods: <code>nfs-web-07rxz</code>, <code>nfs-web-fdr9h</code>, but if I do <code>kubectl logs nfs-web-07rxz</code> or with <code>-p</code> option I don't see any log in both pods.</p> <pre><code>[root@centos-master ~]# kubectl logs nfs-web-07rxz -p [root@centos-master ~]# kubectl logs nfs-web-07rxz </code></pre> <p>This is my replicationController yaml file: <a href="https://i.stack.imgur.com/kSbnv.png" rel="noreferrer">replicationController yaml file</a></p> <pre><code>apiVersion: v1 kind: ReplicationController metadata: name: nfs-web spec: replicas: 2 selector: role: web-frontend template: metadata: labels: role: web-frontend spec: containers: - name: web image: eso-cmbu-docker.artifactory.eng.vmware.com/demo-container:demo-version3.0 ports: - name: web containerPort: 80 securityContext: privileged: true </code></pre> <p>My Docker image was made from this simple docker file:</p> <pre><code>FROM ubuntu RUN apt-get update RUN apt-get install -y nginx RUN apt-get install -y nfs-common </code></pre> <p>I am running my kubernetes cluster on CentOs-1611, kube version:</p> <pre><code>[root@centos-master ~]# kubectl version Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;3&quot;, GitVersion:&quot;v1.3.0&quot;, GitCommit:&quot;86dc49aa137175378ac7fba7751c3d3e7f18e5fc&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2016-12-15T16:57:18Z&quot;, GoVersion:&quot;go1.6.3&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} Server Version: version.Info{Major:&quot;1&quot;, Minor:&quot;3&quot;, GitVersion:&quot;v1.3.0&quot;, GitCommit:&quot;86dc49aa137175378ac7fba7751c3d3e7f18e5fc&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2016-12-15T16:57:18Z&quot;, GoVersion:&quot;go1.6.3&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} </code></pre> <p>If I run the docker image by <code>docker run</code> I was able to run the image without any issue, only through kubernetes I got the crash.</p> <p>Can someone help me out, how can I debug without seeing any log?</p>
Lucifer
<p>As @Sukumar commented, your Dockerfile needs a <a href="https://docs.docker.com/engine/reference/builder/#/cmd" rel="noreferrer">Command</a> to run, or your ReplicationController must specify a command for the container. </p> <p>The pod is crashing because it starts up and then immediately exits; Kubernetes restarts it and the cycle continues. </p>
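<p>As a minimal sketch based on the Dockerfile in the question, adding a foreground command keeps the container alive (the nginx flags are the usual way to run it in the foreground):</p>
<pre><code>FROM ubuntu
RUN apt-get update
RUN apt-get install -y nginx
RUN apt-get install -y nfs-common
# Run nginx in the foreground so the container does not exit immediately
CMD ["nginx", "-g", "daemon off;"]
</code></pre>
<p>Alternatively, the same effect can be achieved in the pod spec by adding <code>command: ["nginx", "-g", "daemon off;"]</code> to the container.</p>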
Steve Sloka
<p>I have multiple micro services in my project. I want to dynamically pause and resume them without losing the data.</p> <p>For example: I am deploying an theia ide and user created a folder. I want to down this service for sometime and resume again with the data.</p> <p>References: <a href="https://github.com/theia-ide/theia" rel="nofollow noreferrer">https://github.com/theia-ide/theia</a></p> <p>I have already tried with reducing replicas to 0 and 1. It removes the data. I want the data to be persistent.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: servicetest spec: progressDeadlineSeconds: 600 replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: run: servicetest template: metadata: labels: run: servicetest spec: containers: - image: gcr.io/YYYY-ZZZZ-249311/test imagePullPolicy: IfNotPresent name: servicetest terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - name: data mountPath: /data/ volumes: - name: data persistentVolumeClaim: claimName: service-pv-claim --- apiVersion: v1 kind: Service metadata: labels: run: servicetest name: servicetest spec: ports: - name: web port: 80 protocol: TCP targetPort: 3000 - name: deployport port: 8080 protocol: TCP targetPort: 8080 selector: run: servicetest type: LoadBalancer kind: PersistentVolumeClaim apiVersion: v1 metadata: name: service-pv-claim spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 3Gi </code></pre>
Imrahamed
<p>Whether your underlying storage gets deleted depends on the persistent volume's <a href="https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/#why-change-reclaim-policy-of-a-persistentvolume" rel="nofollow noreferrer">reclaim policy</a>. If you set the policy to <code>Retain</code>, it should keep your pod's PV around for later rather than deleting its contents and purging the volume.</p> <p>It's also worth looking into a <a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#deployment-and-scaling-guarantees" rel="nofollow noreferrer">statefulset</a> if you're using this deployment of size 1, because deployments are "at least N" as opposed to statefulsets being "at most N" replicas. Statefulsets also let you have a different volume associated with each replica.</p>
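<p>For example, the reclaim policy of an existing PV can be switched to <code>Retain</code> with a patch (the PV name is a placeholder; look yours up first):</p>
<pre><code># Find the PV bound to service-pv-claim
kubectl get pv

# Keep the underlying storage even when the claim/pod goes away
kubectl patch pv &lt;your-pv-name&gt; \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
</code></pre>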
Anirudh Ramanathan
<p>I am using GCP Container Engine in my project and now I am facing some issue that I don't know if it can be solved via secrets.</p> <p>One of my deployments is node-js app server, there I use some npm modules which require my GCP service account key (.json file) as an input.</p> <p>The input is the path where this json file is located. Currently I managed to make it work by providing this file as part of my docker image and then in the code I put the path to this file and it works as expected. The problem is that I think that it is not a good solution because I want to decouple my nodejs image from the service account key because the service account key may be changed (e.g. dev,test,prod) and I will not be able to reuse my existing image (unless I will build and push it to a different registry).</p> <p>So how could I upload this service account json file as secret and then consume it inside my pod? I saw it is possible to create secrets out of files but I don't know if it is possible to specify the path to the place where this json file is stored. If it is not possible with secrets (because maybe secrets are not saved in files...) so how (and if) it can be done?</p>
Ran Hassid
<p>You can make your json file a secret and consume in your pod. See the following link for secrets (<a href="http://kubernetes.io/docs/user-guide/secrets/" rel="nofollow noreferrer">http://kubernetes.io/docs/user-guide/secrets/</a>), but I'll summarize next:</p> <p>First create a secret from your json file:</p> <pre><code>kubectl create secret generic nodejs-key --from-file=./key.json </code></pre> <p>Now that you've created the secret, you can consume in your pod (in this example as a volume):</p> <pre><code>{ "apiVersion": "v1", "kind": "Pod", "metadata": { "name": "nodejs" }, "spec": { "containers": [{ "name": "nodejs", "image": "node", "volumeMounts": [{ "name": "foo", "mountPath": "/etc/foo", "readOnly": true }] }], "volumes": [{ "name": "foo", "secret": { "secretName": "nodejs-key" } }] } } </code></pre> <p>So when your pod spins up the file will be dropped in the "file system" in /etc/foo/key.json</p>
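<p>Since the Google client libraries look at <code>GOOGLE_APPLICATION_CREDENTIALS</code>, you can also point that variable at the mounted file so the Node.js code doesn't need a hard-coded path — a sketch reusing the same names as above:</p>
<pre><code>"containers": [{
  "name": "nodejs",
  "image": "node",
  "env": [{
    "name": "GOOGLE_APPLICATION_CREDENTIALS",
    "value": "/etc/foo/key.json"
  }],
  "volumeMounts": [{
    "name": "foo",
    "mountPath": "/etc/foo",
    "readOnly": true
  }]
}]
</code></pre>
<p>This way you can swap the dev/test/prod key by changing only the secret, never the image.</p>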
Steve Sloka
<p>I am having a problem with a helm deployment of GitLab onto my kubernetes cluster. Everything was working except for the GitLab Runner that was throwing the error:</p> <pre><code>ERROR: Registering runner... forbidden (check registration token) PANIC: Failed to register this runner. Perhaps you are having network problems </code></pre>
Antebios
<p>The solution to this was the <code>gitlab-gitlab-runner-secret</code>.</p> <p>Data element: runner-registration-token</p> <p>value: FGvuUvo0aAce2xkLjvWxj1ktGHD8NAzWY4sLYDVvD3Z56JXh2E7rwfaTvKGkRlUJ</p> <p>It was pre-populated with an invalid <code>runner-registration-token</code>. I solved this by:</p> <ol> <li>Going to GitLab Admin Area --&gt; Runners</li> <li>In the section &quot;Set up a shared Runner manually&quot;, copy the <code>registration token</code>.</li> <li>On a bash command line, encode the string to base64 with the command below (<code>-n</code> keeps a trailing newline out of the encoded value):</li> </ol> <pre><code>echo -n &lt;my_registration_token&gt; | base64 </code></pre> <ol start="4"> <li>Copy the output value and edit the Kubernetes secret for <code>gitlab-runner-secret</code>.</li> <li>Paste the encoded value OVER the existing value, then click the Update button to save changes.</li> <li>Now stop/destroy the runner pod, or scale the deployment to zero and then back up to 1.</li> <li>Now you will see the GitLab Runner pod finally running properly with the log showing:</li> </ol> <pre><code>Registering runner... succeeded Runner registered successfully. </code></pre>
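<p>If you prefer doing the same from the command line instead of editing the secret by hand, something like this should work (the secret and deployment names are assumptions based on the Helm release name above):</p>
<pre><code># Re-encode the new registration token and patch the secret in place
TOKEN_B64=$(echo -n '&lt;my_registration_token&gt;' | base64)
kubectl patch secret gitlab-gitlab-runner-secret \
  -p "{\"data\":{\"runner-registration-token\":\"${TOKEN_B64}\"}}"

# Restart the runner so it re-registers with the new token
kubectl rollout restart deployment/gitlab-gitlab-runner
</code></pre>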
Antebios
<p>I'm trying to update an image in Kubernetes by using the following command:</p> <pre><code>kubectl set image deployment/ms-userservice ms-userservice=$DOCKER_REGISTRY_NAME/$BITBUCKET_REPO_SLUG:$BITBUCKET_COMMIT --insecure-skip-tls-verify </code></pre> <p>But when I receive the following error:</p> <pre><code>error: the server doesn't have a resource type "deployment" </code></pre> <p>I have checked that i am in the right namespace, and that the pod with the name exists and is running.</p> <p>I can't find any meaningful resources in regards to this error.</p> <p>Sidenote: I'm doing this through Bitbucket and a pipeline, which also builds the image i want to use. </p> <p>Any suggestions?</p>
TietjeDK
<p>I've had this error fixed by explicitly setting the namespace as an argument, e.g.:</p> <pre><code>kubectl set image -n foonamespace deployment/ms-userservice..... </code></pre> <p><a href="https://www.mankier.com/1/kubectl-set-image#--namespace" rel="nofollow noreferrer">https://www.mankier.com/1/kubectl-set-image#--namespace</a></p>
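<p>If you don't want to pass <code>-n</code> on every call (for example in a Bitbucket pipeline step), you can also set the namespace on the current context once:</p>
<pre><code># All subsequent kubectl commands default to this namespace
kubectl config set-context --current --namespace=foonamespace
</code></pre>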
Darren Rogers
<p>I have the following values.yaml</p> <pre><code>documentStorage: folders: TEST1: permissions: read: "[email protected]" write: "[email protected]" TEST2: permissions: read: "[email protected]" write: "[email protected]" </code></pre> <p>And, I want to move that to my config map, but, since the keys under folder can be extended, I would like to use the range functionality. And also, I would like to copy the exact structure under it, if possible:</p> <pre><code>documentStorage: folders: {{- range $folder, $structure := .Values.global.documentStorage.folders }} {{ $folder }}: {{ $structure }} {{- end}} </code></pre> <p>But, it isn't working, and I get this:</p> <pre><code>folders: TEST1: permissions: map[read:[email protected] write:[email protected]] TEST2: permissions: map[read:[email protected] write:[email protected]] </code></pre> <p>What am I missing?</p>
Manuelarte
<p>Use the following snippet. You'd need to change the value of indent depending on what nesting level you are setting documentStorage at</p> <pre><code> documentStorage: folders: {{ .Values.documentStorage.folders | toYaml | indent 6 }} </code></pre>
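<p>If your Helm version ships <code>nindent</code> (Sprig), the same thing can be written without counting the leading spaces by hand, e.g.:</p>
<pre><code>documentStorage:
  folders: {{- .Values.documentStorage.folders | toYaml | nindent 4 }}
</code></pre>
<p>Both forms render the nested maps back as proper YAML instead of the Go <code>map[...]</code> representation.</p>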
mukesh
<p>I'm trying to install the cert-manager ClusterIssuer on a AKS, and because the cluster is behind Azure Application Gateway I've gone down the route of using a DNS solver rather the HTTP. However, the challenge fails with an error calling the Cloudflare API. I've redacted emails and domains through the code snippets, the output of <code>kubectl describe challenge rabt-cert-tls-g4mcl-1991965707-2468967546</code> is:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Started 72s cert-manager Challenge scheduled for processing Warning PresentError 3s (x5 over 71s) cert-manager Error presenting challenge: Cloudflare API Error for GET &quot;/zones?name=&lt;domain&gt;&quot; Error: 6003: Invalid request headers&lt;- 6103: Invalid format for X-Auth-Key header </code></pre> <p>I have followed the guide at <a href="https://blog.darkedges.com/2020/05/04/cert-manager-kubernetes-cloudflare-dns-update/" rel="nofollow noreferrer">https://blog.darkedges.com/2020/05/04/cert-manager-kubernetes-cloudflare-dns-update/</a> and the issues at <a href="https://github.com/jetstack/cert-manager/issues/3021" rel="nofollow noreferrer">https://github.com/jetstack/cert-manager/issues/3021</a> and <a href="https://github.com/jetstack/cert-manager/issues/2384" rel="nofollow noreferrer">https://github.com/jetstack/cert-manager/issues/2384</a> but can't see any differences beyond the apiVersion of the issuer. I've checked this against the official documentation and there are no changes from what appears in these guides.</p> <p>The relationship between ingress and cluster issuer seems fine; if I delete and recreate the ingress a new certificate, order and challenge are created. I've verified the secret is populated and I can print it to console, so it shouldn't be sending a blank string in the header. The token is valid, I can use the example CURL request from CloudFlare to check its validity.</p> <p>Is there somewhere I can see logs and find out exactly what is being sent?</p> <p>ClusterIssuer</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: cloudflare-api-token-secret namespace: cert-manager type: Opaque stringData: api-token: ${CLOUDFLARE_API_TOKEN} --- apiVersion: cert-manager.io/v1 kind: ClusterIssuer metadata: name: rabt-letsencrypt spec: acme: # You must replace this email address with your own. # Let's Encrypt will use this to contact you about expiring # certificates, and issues related to your account. email: &lt;email&gt; # ACME server URL for Let’s Encrypt’s staging environment. # The staging environment will not issue trusted certificates but is # used to ensure that the verification process is working properly # before moving to production server: https://acme-staging-v02.api.letsencrypt.org/directory privateKeySecretRef: # Secret resource used to store the account's private key. 
name: rabt-letsencrypt-key # Enable the HTTP-01 challenge provider # you prove ownership of a domain by ensuring that a particular # file is present at the domain solvers: - dns01: cloudflare: email: &lt;email&gt; apiTokenSecretRef: name: cloudflare-api-token-secret key: api-key </code></pre> <p>Ingress</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: rabt-ingress namespace: default annotations: kubernetes.io/ingress.class: azure/application-gateway appgw.ingress.kubernetes.io/backend-protocol: https appgw.ingress.kubernetes.io/ssl-redirect: &quot;true&quot; cert-manager.io/cluster-issuer: rabt-letsencrypt cert-manager.io/acme-challenge-type: dns01 appgw.ingress.kubernetes.io/backend-path-prefix: &quot;/&quot; spec: tls: - hosts: - &quot;*.rabt.&lt;domain&gt;&quot; secretName: rabt-cert-tls rules: - host: &quot;mq.rabt.&lt;domain&gt;&quot; http: paths: - path: / pathType: Prefix backend: service: name: rabt-mq port: number: 15672 - host: es.rabt.&lt;domain&gt; http: paths: - path: / pathType: Prefix backend: service: name: rabt-db-es-http port: number: 9200 - host: &quot;kibana.rabt.&lt;domain&gt;&quot; http: paths: - path: / pathType: Prefix backend: service: name: rabt-kb-http port: number: 5601 </code></pre>
Mark
<p>As <a href="https://stackoverflow.com/users/5525824/harsh-manvar">Harsh Manvar</a> guessed, it was an issue with the secret. I wasn't running the <code>kubectl apply</code> command through <a href="https://www.gnu.org/software/gettext/manual/html_node/envsubst-Invocation.html" rel="nofollow noreferrer">envsubst</a> so it was encoding the literal string &quot;${CLOUDFLARE_API_TOKEN}&quot;</p>
Mark
<p>If I want to run multiple replicas of some container that requires a one off initialisation task, is there a standard or recommended practice?</p> <p>Possibilities:</p> <ul> <li>Use a StatefulSet even if it isn't necessary after initialisation, and have init containers which check to see if they are on the first pod in the set and do nothing otherwise. (If a StatefulSet is needed for other reasons anyway, this is almost certainly the simplest answer.)</li> <li>Use init containers which use leader election or some similar method to pick only one of them to do the initialisation.</li> <li>Use init containers, and make sure that multiple copies can safely run in parallel. Probably ideal, but not always simple to arrange. (Especially in the case where a pod fails randomly during a rolling update, and a replacement old pod runs its init at the same time as a new pod is being started.)</li> <li>Use a separate Job (or a separate Deployment) with a single replica. Might make the initialisation easy, but makes managing the dependencies between it and the main containers in a CI/CD pipeline harder (we're not using Helm, but this would be something roughly comparable to a post-install/post-upgrade hook).</li> </ul>
armb
<p>We effectively ended up with a Job that does the initialization task and creates a secret that the Deployment replicas have mounted as a volume, blocking them until the Job has completed. We're using ArgoCD without sync waves. (There are complications with patching the Job name whenever its spec is updated because Jobs are immutable, but they aren't directly relevant to the original question.)</p>
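<p>A minimal sketch of the pattern, with placeholder names: the init Job writes the secret, and the Deployment pods simply cannot start until that secret exists because the volume is required.</p>
<pre><code>---
# One-off initialisation; creates the secret when it succeeds
apiVersion: batch/v1
kind: Job
metadata:
  name: app-init
spec:
  template:
    spec:
      serviceAccountName: app-init   # needs RBAC to create the secret
      restartPolicy: OnFailure
      containers:
        - name: init
          image: registry.example.com/app-init:latest
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  replicas: 3
  selector:
    matchLabels: { app: app }
  template:
    metadata:
      labels: { app: app }
    spec:
      containers:
        - name: app
          image: registry.example.com/app:latest
          volumeMounts:
            - name: init-result
              mountPath: /etc/app-init
      volumes:
        - name: init-result
          secret:
            secretName: app-init-result
            optional: false   # containers wait until the Job has created this secret
</code></pre>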
armb
<p>I would like to use kubernetes on any IaaS cloud (e.g. OpenStack, AWS, etc.) and have it scale up the pool of worker instances when it can no longer bin-pack new workloads.</p> <p>I hope there is an IaaS-independent integration/API to allow this. If not, an integration with a specific cloud is good too.</p>
want-to-be-algorist
<p><a href="https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler" rel="nofollow noreferrer">Kubernetes cluster autoscaler</a> is what you are looking for. It works with multiple cloud providers including AWS</p>
mukesh
<p>I have 2 services running on AKS (v1.16.13) and deployed the following istio (v1.7.3) configuration. First one is a UI where I invoke the OIDC flow and get JWT token, second one is a backend service which should require a valid JWT token.</p> <pre><code>apiVersion: networking.istio.io/v1beta1 kind: Gateway metadata: name: myapp-gateway namespace: &quot;istio-system&quot; spec: selector: istio: ingressgateway servers: - hosts: - myapp.com port: name: http-myapp number: 80 protocol: HTTP tls: httpsRedirect: true - hosts: - myapp.com port: name: https-myapp number: 443 protocol: HTTPS tls: credentialName: myapp-credential mode: SIMPLE --- apiVersion: networking.istio.io/v1beta1 kind: VirtualService metadata: name: myapp namespace: myapp spec: gateways: - istio-system/myapp-gateway hosts: - myapp.com http: - match: - uri: prefix: /ui route: - destination: host: myapp-ui.myapp.svc.cluster.local port: number: 4200 - match: - uri: prefix: /backend/ rewrite: uri: / route: - destination: host: myapp-service-backend.myapp.svc.cluster.local port: number: 8080 --- apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: myapp-jwt-backend namespace: myapp spec: jwtRules: - issuer: https://oktapreview.com selector: matchLabels: app: myapp-service-backend </code></pre> <p>With that config I would expect to get 401 if I invoke <strong>myapp.com/backend</strong> but that's not the case. Request authentication doesn't kick in.</p> <p>After some further research (<a href="https://discuss.istio.io/t/cannot-use-jwt-policy-with-an-externalname-virtualservice-target/2794/3" rel="nofollow noreferrer">https://discuss.istio.io/t/cannot-use-jwt-policy-with-an-externalname-virtualservice-target/2794/3</a>), I found out that I can't apply RequestAuthentication on a VirtualService but only on a Gateway which is quite odd to me but ok. I've changed the RequestAuthentication to the following but still nothing has changed after invoking backend:</p> <pre><code>apiVersion: security.istio.io/v1beta1 kind: RequestAuthentication metadata: name: myapp-jwt-backend namespace: istio-system spec: jwtRules: - issuer: https://oktapreview.com selector: matchLabels: istio: myapp-gateway </code></pre> <p>Do you have any idea how can I setup istio for my use case? Assuming the RequestAuthentication would work on a gateway, do I need 2 gateway? 1 for UI and the second for backend? Seems like an overkill.</p>
Blink
<p>Thanks to sachin's comment and going through the documentation again, I realized that I need an AuthorizationPolicy on top of the RequestAuthentication:</p> <pre><code>apiVersion: security.istio.io/v1beta1 kind: AuthorizationPolicy metadata: name: myapp-require-jwt-backend spec: action: ALLOW rules: - from: - source: requestPrincipals: - https://xxx/* selector: matchLabels: app: myapp-service-backend </code></pre> <p>The RequestAuthentication only makes sure that, when a JWT token is provided, it is a valid one. If there is no token, the request is just passed through.</p>
Blink
<p>I try to set up an infinispan cache in my application that is running on several nodes on <code>google-cloud-platform</code> with Kubernetes and Docker.</p> <p>Each of these caches shall share their data with the other node chaches so they all have the same data available.</p> <p>My problem is that the JGroups configuration doesn't seem to work the way I want and the nodes don't see any of their siblings.</p> <p>I tried several configurations but the nodes always see themselves and do not build up a cluster with the other ones.</p> <p>I've tried some configurations from GitHub examples like <a href="https://github.com/jgroups-extras/jgroups-kubernetes" rel="nofollow noreferrer">https://github.com/jgroups-extras/jgroups-kubernetes</a> or <a href="https://github.com/infinispan/infinispan-simple-tutorials" rel="nofollow noreferrer">https://github.com/infinispan/infinispan-simple-tutorials</a></p> <p>Here my jgroups.xml</p> <pre><code>&lt;config xmlns="urn:org:jgroups" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="urn:org:jgroups http://www.jgroups.org/schema/jgroups-4.0.xsd"&gt; &lt;TCP bind_addr="${jgroups.tcp.address:127.0.0.1}" bind_port="${jgroups.tcp.port:7800}" enable_diagnostics="false" thread_naming_pattern="pl" send_buf_size="640k" sock_conn_timeout="300" bundler_type="no-bundler" logical_addr_cache_expiration="360000" thread_pool.min_threads="${jgroups.thread_pool.min_threads:0}" thread_pool.max_threads="${jgroups.thread_pool.max_threads:200}" thread_pool.keep_alive_time="60000" /&gt; &lt;org.jgroups.protocols.kubernetes.KUBE_PING port_range="1" namespace="${KUBERNETES_NAMESPACE:myGoogleCloudPlatformNamespace}" /&gt; &lt;MERGE3 min_interval="10000" max_interval="30000" /&gt; &lt;FD_SOCK /&gt; &lt;!-- Suspect node `timeout` to `timeout + timeout_check_interval` millis after the last heartbeat --&gt; &lt;FD_ALL timeout="10000" interval="2000" timeout_check_interval="1000" /&gt; &lt;VERIFY_SUSPECT timeout="1000"/&gt; &lt;pbcast.NAKACK2 xmit_interval="100" xmit_table_num_rows="50" xmit_table_msgs_per_row="1024" xmit_table_max_compaction_time="30000" resend_last_seqno="true" /&gt; &lt;UNICAST3 xmit_interval="100" xmit_table_num_rows="50" xmit_table_msgs_per_row="1024" xmit_table_max_compaction_time="30000" /&gt; &lt;pbcast.STABLE stability_delay="500" desired_avg_gossip="5000" max_bytes="1M" /&gt; &lt;pbcast.GMS print_local_addr="false" join_timeout="${jgroups.join_timeout:5000}" /&gt; &lt;MFC max_credits="2m" min_threshold="0.40" /&gt; &lt;FRAG3 frag_size="8000"/&gt; &lt;/config&gt; </code></pre> <p>And how I initalize the Infinispan Cache (Kotlin)</p> <pre><code>import org.infinispan.configuration.cache.CacheMode import org.infinispan.configuration.cache.ConfigurationBuilder import org.infinispan.configuration.global.GlobalConfigurationBuilder import org.infinispan.manager.DefaultCacheManager import java.util.concurrent.TimeUnit class MyCache&lt;V : Any&gt;(private val cacheName: String) { companion object { private var cacheManager = DefaultCacheManager( GlobalConfigurationBuilder() .transport().defaultTransport() .addProperty("configurationFile", "jgroups.xml") .build() ) } private val backingCache = buildCache() private fun buildCache(): org.infinispan.Cache&lt;CacheKey, V&gt; { val cacheConfiguration = ConfigurationBuilder() .expiration().lifespan(8, TimeUnit.HOURS) .clustering().cacheMode(CacheMode.REPL_ASYNC) .build() cacheManager.defineConfiguration(this.cacheName, cacheConfiguration) log.info("Started cache with name $cacheName. 
Found cluster members are ${cacheManager.clusterMembers}") return cacheManager.getCache(this.cacheName) } } </code></pre> <p>Here what the logs says</p> <pre><code>INFO o.i.r.t.jgroups.JGroupsTransport - ISPN000078: Starting JGroups channel ISPN INFO o.j.protocols.kubernetes.KUBE_PING - namespace myNamespace set; clustering enabled INFO org.infinispan.CLUSTER - ISPN000094: Received new cluster view for channel ISPN: [myNamespace-7d878d4c7b-cks6n-57621|0] (1) [myNamespace-7d878d4c7b-cks6n-57621] INFO o.i.r.t.jgroups.JGroupsTransport - ISPN000079: Channel ISPN local address is myNamespace-7d878d4c7b-cks6n-57621, physical addresses are [127.0.0.1:7800] </code></pre> <p>I expect that on startup a new node finds the already existing ones and gets the date from them.</p> <p>Currently, on startup every node only sees themselves and nothing is shared</p>
HuMa
<p>Usually the first thing to do when you need help with JGroups/Infinispan is to enable trace-level logging.</p> <p>The problem with KUBE_PING might be that the pod does not run under the proper service account, and therefore does not have the authorization token to access the Kubernetes master API. That's why the currently preferred way is to use DNS_PING and register a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#headless-services" rel="nofollow noreferrer">headless service</a>. See <a href="https://github.com/slaskawi/jgroups-dns-ping-example" rel="nofollow noreferrer">this example</a>.</p>
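<p>A rough sketch of what that looks like — the namespace and service names are placeholders, and the DNS_PING protocol ships with recent JGroups versions:</p>
<pre><code>&lt;!-- jgroups.xml: replace KUBE_PING with DNS_PING --&gt;
&lt;dns.DNS_PING dns_query="infinispan-ping.my-namespace.svc.cluster.local" /&gt;
</code></pre>
<pre><code># Headless service backing the DNS query above
apiVersion: v1
kind: Service
metadata:
  name: infinispan-ping
spec:
  clusterIP: None
  selector:
    app: my-infinispan-app
  ports:
    - name: ping
      port: 7800
</code></pre>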
Radim Vansa
<p>I have all sorts of problems with Kubernetes/helm, but I'm really new to it and so I'm not sure what I'm doing at all, despite spending a day trying to work it out.</p> <p>I have a pod that's in a CrashLoopBackOff situation as I entered an incorrect port number in the Dockerfile. When I do a <code>kubectl -n dev get pods</code>, I can see it in the crash loop. I tried to kill it with <code>helm delete --purge emails</code> but I get the error <code>Error: unknown flag: --purge</code>. I tried to edit the chart with <code>kubectl edit pod emails -n dev</code>, but I get an error saying that the field cannot be changed.</p> <p>But I can't delete the pod, so I'm not quite sure where to go from here. I've tried without the --purge flag and I get the error <code>Error: uninstall: Release not loaded: emails: release: not found</code>. I get the same if I try <code>helm uninstall emails</code> or pretty much anything.</p> <p>To get to the crux of the matter, I believe it's because there's been an upgrade to the helm client to version v3.1.0 but the pods were created with v2.11.0. But I don't know how to roll back the client to this version. I've downloaded it via <code>curl -L https://git.io/get_helm.sh | bash -s -- --version v2.11.0</code> but I can't run <code>helm init</code> and so I'm still on v3.1.0</p> <p>If I run <code>helm list</code>, I get an empty list. I have 16 running pods I can see through <code>kubectl -n dev get pods</code> but I don't seem to be able to do anything to any of them.</p> <p>Is this likely to be because my helm client is the wrong version and, if so, how do I roll it back?</p> <p>Thanks for any suggestions.</p>
EricP
<p>The problem is that you mixed helm 2 and helm 3.</p> <p>The release was created by helm v2, so you need helm v2 to delete it; helm v3 won't be able to see releases created by helm v2.</p> <p>You could do the following: </p> <blockquote> <ol> <li>Download helm v2 and delete the release (I normally have both helm 2/3 in one folder and rename helm v2 to helm2). </li> <li>Optionally, delete tiller, as helm v3 won't need it anymore. Just make sure no other release is deployed by helm v2.</li> <li>Update your helm chart to use the correct port.</li> <li>Use helm v3 to deploy your updated chart.</li> </ol> </blockquote>
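<p>Step 1 might look something like this (the version number and release name are examples; no full <code>helm init</code> is needed if tiller is already running in the cluster):</p>
<pre><code># Fetch a v2 client binary alongside the v3 one
curl -sSL https://get.helm.sh/helm-v2.16.12-linux-amd64.tar.gz | tar xz
mv linux-amd64/helm /usr/local/bin/helm2

# Point the v2 client at the existing tiller and delete the release
helm2 init --client-only
helm2 list
helm2 delete --purge emails
</code></pre>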
EricZ
<p>I'm running jenkins jobs to build our maven application and deploy into kubernetes cluster. in-order do that i have created pod container template to deploy my modules.</p> <p>When i am running to build my jenkins job, my build got failed with below error,</p> <pre><code>Still waiting to schedule task ‘Jenkins’ doesn’t have label ‘Angular_71-bt2f0’ </code></pre> <p>At the same time when i am checking kubernetes master, i can able to see pods is trying to schedule and its going back to terminating state after few seconds.</p> <pre><code>root@poc:/var/run# kubectl get pods NAME READY STATUS RESTARTS AGE angular-71-bt2f0-ns772-4rmrq 0/3 ContainerCreating 0 1s root@poc:/var/run# kubectl get pods NAME READY STATUS RESTARTS AGE angular-71-bt2f0-ns772-4rmrq 2/3 Terminating 0 28s angular-71-bt2f0-ns772-mcv9z 2/3 Error 0 8s </code></pre> <p>pipeline script</p> <pre><code>def label = "worker-${UUID.randomUUID().toString()}" podTemplate(containers: [ containerTemplate(name: 'maven', image: 'maven:3.3.9-jdk-8-alpine', ttyEnabled: true, command: 'cat'), containerTemplate(name: 'golang', image: 'golang:1.8.0', ttyEnabled: true, command: 'cat') ]) { node(POD_LABEL) { stage('Get a Maven project') { git 'https://github.com/jenkinsci/kubernetes-plugin.git' container('maven') { stage('Build a Maven project') { sh 'mvn -B clean install' } } } } } </code></pre> <p>Please find the below master machine configuration</p> <pre><code>root@poc:~# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 2d root@poc:~# kubectl get nodes NAME STATUS ROLES AGE VERSION poc-worker2 Ready worker 6m3s v1.17.0 poc.com Ready master 2d v1.17.0 root@poc:~# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 2d root@poc:~# kubectl cluster-info Kubernetes master is running at https://10.0.0.4:6443 KubeDNS is running at https://10.0.0.4:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy </code></pre> <p>kubectl error log</p> <pre><code>kubectl logs angular-71-bt2f0-ns772-4rmrq error: a container name must be specified for pod angular-77-l32fr-sfxqk-qhdgf, choose one of: [maven golang jnlp] </code></pre> <pre><code>Below is the kubectl logs while running jenkins job root@poc:~# kubectl logs angular103f578nkcrnfx69fk c maven Error from server (NotFound): pods "angular103f578nkcrnfx69fk" not found root@poc:~# kubectl logs angular103f578nkcrnfx69fk c golang Error from server (NotFound): pods "angular103f578nkcrnfx69fk" not found root@poc:~# kubectl logs angular103f578nkcrnfx69fk c jnlp Error from server (NotFound): pods "angular103f578nkcrnfx69fk" not found </code></pre> <p>can you please some one help me how to fix this issue, i am not sure where i am doing wrong here.</p>
Anonymuss
<p>In this example, <code>kubectl -n jenkins-namespace get services</code> shows: </p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE cicd-jenkins ClusterIP 172.20.120.227 &lt;none&gt; 8080/TCP 128d cicd-jenkins-agent ClusterIP 172.20.105.189 &lt;none&gt; 50000/TCP 128d </code></pre> <p>You have to go to Jenkins &gt; Manage Jenkins &gt; Configure System (<a href="http://jenkins:8080/configure" rel="nofollow noreferrer">http://jenkins:8080/configure</a>). Then configure the <strong>Jenkins URL</strong> and <strong>Jenkins tunnel</strong> accordingly (see the screenshot below).</p> <p><strong>CREDITS</strong> to <a href="https://youtu.be/MkzCVvlpiaM" rel="nofollow noreferrer">https://youtu.be/MkzCVvlpiaM</a></p> <p><a href="https://i.stack.imgur.com/sys1u.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sys1u.png" alt="enter image description here"></a></p> <p>If you're using the Jenkins Configuration as Code (JCasC) plugin, this is configured via the <code>jenkinsUrl</code> and <code>jenkinsTunnel</code> keys: </p> <pre><code>jenkins: clouds: - kubernetes: name: cluster serverUrl: https://kubernetes.default # .... jenkinsUrl: http://cicd-jenkins:8080/ jenkinsTunnel: cicd-jenkins-agent:50000 # .... </code></pre>
Abdennour TOUMI
<p>When rolling out my AKS cluster</p> <pre><code>az deployment group create --template-file template.json --parameters parameters.d.json -g &lt;resource_group&gt; </code></pre> <p>three managed identities are created. These resources are located, by default, in the Managed Cluster resource group (with prefix <code>MC_</code>). I would like the <code>agentpool</code> managed identity (for vmss) to reside in another resource group.</p> <p>Ideally I'd like to keep these managed identities when redeploying the cluster, however, the <code>_MC</code> resource groups are deleted and created again. Is there an option to deploy these managed identities in another resource group where they have a longer lifetime? Can this be configured in the <code>template.json</code>?</p>
Casper Dijkstra
<p>I don't know which managed identities you have because it depends on which add-ons are enabled. Most add-on identities can't be managed outside the cluster, but you can use a user-assigned identity for the kubelet and the agentpool (preview) identity. Here is the <a href="https://learn.microsoft.com/en-us/azure/aks/use-managed-identity#summary-of-managed-identities" rel="nofollow noreferrer">documentation</a> of the identities for AKS; it tells you for which features you can bring your own identity.</p> <p>Since the agentpool identity is still in preview, I'm not sure that this can be done via an ARM template yet. There is an identity field, but I suspect it's for the kubelet identity and not the agentpool. I might be wrong though, so you should give it a try.</p>
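<p>For reference, with the Azure CLI the bring-your-own identities are passed at cluster creation time, along the lines of the sketch below (the resource IDs are placeholders, and at the time of writing the kubelet identity flag may require the <code>aks-preview</code> extension):</p>
<pre><code>az aks create -g my-rg -n my-aks \
  --enable-managed-identity \
  --assign-identity /subscriptions/&lt;sub&gt;/resourcegroups/&lt;rg&gt;/providers/Microsoft.ManagedIdentity/userAssignedIdentities/control-plane-id \
  --assign-kubelet-identity /subscriptions/&lt;sub&gt;/resourcegroups/&lt;rg&gt;/providers/Microsoft.ManagedIdentity/userAssignedIdentities/kubelet-id
</code></pre>
<p>Keeping the user-assigned identities in your own resource group this way means they live outside the <code>MC_</code> group and survive a cluster redeployment.</p>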
Jean-Philippe Bond
<p>I created the below ns.yaml file:</p> <pre><code>apiVersion: v1 kind: Namespace metadata: Name: testns </code></pre> <p>I am getting the below error.</p> <pre><code>error: error validating "ns.yaml": error validating data: ValidationError(Namespace.metadata): unknown field "Name" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta; if you choose to ignore these errors, turn validation off with --validate=false </code></pre>
Raghu
<p>The root cause is clear in the error logs: <code>unknown field "Name" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta;</code></p> <p>This means you need to use <code>name</code> instead of <code>Name</code>.</p> <p>For more info about YAML format of Kubernetes object Namespace metadata, run the following command :</p> <pre><code>kubectl explain namespace.metadata </code></pre> <p>And you will get amazing documentation.</p>
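<p>The corrected file, for reference:</p>
<pre><code>apiVersion: v1
kind: Namespace
metadata:
  name: testns
</code></pre>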
Abdennour TOUMI
<p>I want to define a Kubernetes secrets map as part of my deployment pipeline. According to the Kubernetes documentation, there are two ways to define a secret.</p> <ol> <li>Declarative Using a .yml with the Secret Object</li> <li>Imperative Using <code>kubectl create secret generic</code></li> </ol> <p>The declarative approach requires writing a YAML similar to the one below.</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: test-secret data: username: bXktYXBw password: Mzk1MjgkdmRnN0pi </code></pre> <p>I want to be able to check in all the Kubernetes YAML files to git so I can run them on a CI server. Checking in the YAML means that the secret is stored in git which is not good for security. I can put the secret into my CI systems secret's store but then how do I create a secrets YAML that references the OS environment variable at the time that <code>kubectl</code> is called.</p> <p><strong>Questions:</strong></p> <ul> <li>How to define a Kubernetes Secret from within a CI pipeline without having to commit the secrets into source control?</li> <li>Is there a best practice approach for defining secrets as part of CI for K8s?</li> </ul>
ams
<p>There is no really good way to manage secrets securely with vanilla Kubernetes. If you decrypt the secret or inject an unencrypted secret in your CI/CD pipeline and create a Kubernetes Secret, you'll end up with a Base64-encoded (not encrypted) string stored in your Kubernetes cluster (etcd).</p> <p>Most companies I've worked with recently decide to either keep the secret in their Key Vault and use a Kubernetes controller to inject it at runtime, or use a controller that can manage encrypted secrets, like <a href="https://github.com/bitnami-labs/sealed-secrets" rel="nofollow noreferrer">sealed-secrets</a> or <a href="https://github.com/Soluto/kamus" rel="nofollow noreferrer">Kamus</a>. Using encrypted secrets might be a good option if you want to keep your secrets in Git.</p> <p>First-class support for HashiCorp Vault and Kubernetes: <a href="https://github.com/hashicorp/vault-k8s" rel="nofollow noreferrer">https://github.com/hashicorp/vault-k8s</a></p> <p>Take a look at this blog post from Banzai Cloud for a more detailed explanation: <a href="https://banzaicloud.com/blog/inject-secrets-into-pods-vault-revisited/" rel="nofollow noreferrer">Inject secrets directly into Pods from Vault revisited</a></p>
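<p>To give an idea of the sealed-secrets flow (file and secret names are placeholders): the pipeline only ever handles the sealed, public-key-encrypted form, which is safe to commit to Git, and the controller in the cluster decrypts it into a normal Secret.</p>
<pre><code># Generate the Secret manifest locally (never committed)
kubectl create secret generic test-secret \
  --from-literal=username=my-app \
  --from-literal=password='39528$vdg7Jb' \
  --dry-run=client -o yaml &gt; secret.yaml

# Encrypt it against the cluster's sealed-secrets controller
kubeseal --format yaml &lt; secret.yaml &gt; sealed-secret.yaml

# sealed-secret.yaml is safe to commit; CI just applies it
kubectl apply -f sealed-secret.yaml
</code></pre>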
Jean-Philippe Bond
<p>KEDA scaler not scales with scaled object defined with trigger using pod identity for authentication for service bus queue. I'm following <a href="https://github.com/kedacore/sample-dotnet-worker-servicebus-queue/blob/main/pod-identity.md" rel="nofollow noreferrer">this</a> KEDA service bus triggered scaling project.<br /> The scaling works fine with the connection string, but when I try to scale using the pod identity for KEDA scaler the keda operator fails to get the azure identity bound to it with the following keda operator error message log:</p> <pre><code>github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).isScaledObjectActive /workspace/pkg/scaling/scale_handler.go:228 github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).checkScalers /workspace/pkg/scaling/scale_handler.go:211 github.com/kedacore/keda/v2/pkg/scaling.(*scaleHandler).startScaleLoop /workspace/pkg/scaling/scale_handler.go:145 2021-10-10T17:35:53.916Z ERROR azure_servicebus_scaler error {&quot;error&quot;: &quot;failed to refresh token, error: adal: Refresh request failed. Status Code = '400'. Response body: {\&quot;error\&quot;:\&quot;invalid_request\&quot;,\&quot;error_description\&quot;:\&quot;Identity not found\&quot;}\n&quot;} </code></pre> <p>Edited on 11/09/2021 I opened a github issue at keda, and we did some troubleshoot. But it seems like an issue with AAD Pod Identity as @Tom suggests. The AD Pod Identity MIC pod gives logs like this:</p> <pre><code>E1109 03:15:34.391759 1 mic.go:1111] failed to update user-assigned identities on node aks-agentpool-14229154-vmss (add [2], del [0], update[0]), error: failed to update identities for aks-agentpool-14229154-vmss in MC_Arun_democluster_westeurope, error: compute.VirtualMachineScaleSetsClient#Update: Failure sending request: StatusCode=0 -- Original Error: Code=&quot;LinkedAuthorizationFailed&quot; Message=&quot;The client 'fe0d7679-8477-48e3-ae7d-43e2a6fdb957' with object id 'fe0d7679-8477-48e3-ae7d-43e2a6fdb957' has permission to perform action 'Microsoft.Compute/virtualMachineScaleSets/write' on scope '/subscriptions/f3786c6b-8dca-417d-af3f-23929e8b4129/resourceGroups/MC_Arun_democluster_westeurope/providers/Microsoft.Compute/virtualMachineScaleSets/aks-agentpool-14229154-vmss'; however, it does not have permission to perform action 'Microsoft.ManagedIdentity/userAssignedIdentities/assign/action' on the linked scope(s) '/subscriptions/f3786c6b-8dca-417d-af3f-23929e8b4129/resourcegroups/arun/providers/microsoft.managedidentity/userassignedidentities/autoscaler-id' or the linked scope(s) are invalid.&quot; </code></pre> <p>Any clues how to fix it?</p> <p>My scaler objects' definition is as below:</p> <pre><code>apiVersion: keda.sh/v1alpha1 kind: TriggerAuthentication metadata: name: trigger-auth-service-bus-orders spec: podIdentity: provider: azure --- apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: order-scaler spec: scaleTargetRef: name: order-processor # minReplicaCount: 0 Change to define how many minimum replicas you want maxReplicaCount: 10 triggers: - type: azure-servicebus metadata: namespace: demodemobus queueName: orders messageCount: '5' authenticationRef: name: trigger-auth-service-bus-orders </code></pre> <p>Im deploying the azure identity to the <code>namespace keda</code> where my keda deployment resides. 
And installs KEDA with the following command to set the <code>pod identity binding</code> using helm:</p> <pre><code>helm install keda kedacore/keda --set podIdentity.activeDirectory.identity=app-autoscaler --namespace keda </code></pre> <p><strong>Expected Behavior</strong> The KEDA scaler should have worked fine with the assigned pod identity and access token to perform scaling</p> <p><strong>Actual Behavior</strong> The KEDA operator could not be able to find the azure identity assigned and scaling fails</p> <p><strong>Scaler Used</strong> Azure Service Bus</p> <p><strong>Steps to Reproduce the Problem</strong></p> <ol> <li>Create the azure identity and bindings for the KEDA</li> <li>Install KEDA with the aadpodidentitybinding</li> <li>Create the scaledobject and triggerauthentication using KEDA pod identity</li> <li>The scaler fails to authenticate and scale</li> </ol>
iarunpaul
<p>Unfortunately this looks like an issue with the identity itself and with AD Pod Identity; in my experience it can be a bit flaky.</p>
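<p>Based on the MIC error in the question (LinkedAuthorizationFailed), one thing worth checking is whether the cluster/VMSS identity has the &quot;Managed Identity Operator&quot; role on the resource group that holds <code>autoscaler-id</code> — the command below is a sketch with placeholder IDs:</p>
<pre><code>az role assignment create \
  --role "Managed Identity Operator" \
  --assignee &lt;client-id-of-the-cluster-or-kubelet-identity&gt; \
  --scope /subscriptions/&lt;sub&gt;/resourceGroups/&lt;rg-containing-autoscaler-id&gt;
</code></pre>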
Tom Kerkhove
<p>I have created a Mutating WebHook that works fine when the resulting pods reach a healthy Running state. But when used with pods that ultimately fail (e.g. a bad image name), the scheduler keeps creating more and more, up to 4000 pods that all error out and retry. If I disable the webhook and the pod still fails for the same reason, then only 2 are attempted and everything fails normally. </p> <p>It's like my webhook is creating "new" pods and not just mutating the ones passed to it. This ONLY happens when the resulting pods fail to Run.</p> <p>So what is it about having the webhook in place that causes so many additional pods to be scheduled when pods fail? </p>
Jerico Sandhorn
<p>Turns out I had a mistake in the webhook: instead of just adding an additional label to indicate the mutation was done, it was removing the existing labels, including the ones Kubernetes uses to manage the pod. So when the pod got mutated, it erased the control labels, and consequently Kubernetes thought no pods had been created and kept creating new ones. Once fixed, all works normally.</p>
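<p>For anyone debugging something similar: the usual way to add a single marker label from a mutating webhook is an <code>add</code> JSON patch on just that key, rather than replacing <code>/metadata/labels</code> wholesale (the label name here is made up; note the <code>~1</code> escape for <code>/</code> in JSON Pointer paths):</p>
<pre><code>[
  {
    "op": "add",
    "path": "/metadata/labels/example.com~1mutated",
    "value": "true"
  }
]
</code></pre>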
Jerico Sandhorn
<p>A few of our Pods access the Kubernetes API via the &quot;kubernetes&quot; Service. We're in the process of applying Network Policies which allow access to the K8S API, but the only way we've found to accomplish this is to query for the &quot;kubernetes&quot; Service's ClusterIP, and include it as an ipBlock within an egress rule within the Network Policy.</p> <p>Specifically, this value:</p> <pre><code>kubectl get services kubernetes --namespace default -o jsonpath='{.spec.clusterIP}' </code></pre> <p>Is it possible for the &quot;kubernetes&quot; Service ClusterIP to change to a value other than what it was initialized with during cluster creation? If so, there's a possibility our configuration will break. Our hope is that it's not possible, but we're hunting for official supporting documentation.</p>
Puma
<p>The short answer is no.</p> <p>More details:</p> <ul> <li><p>You cannot change/edit the clusterIP because it's immutable... so <code>kubectl edit</code> will not work for this field.</p> </li> <li><p>However, the service cluster IP can easily be changed by running <code>kubectl delete -f svc.yaml</code>, then <code>kubectl apply -f svc.yaml</code> again.</p> </li> <li><p>Hence, never rely on a service IP, because services are designed to be referred to by DNS:</p> <ul> <li>Use <code>service-name</code> if the caller is inside the same namespace.</li> <li>Use <code>service-name.service-namespace</code> if the caller is inside or outside the same namespace.</li> <li>Use <code>service-name.service-namespace.svc.cluster.local</code> for the FQDN.</li> </ul> </li> </ul>
Abdennour TOUMI
<p>I need to share a directory between two containers: myapp and monitoring and to achieve this I created an emptyDir: {} and then volumeMount on both the containers.</p> <pre><code>spec: volumes: - name: shared-data emptyDir: {} containers: - name: myapp volumeMounts: - name: shared-data mountPath: /etc/myapp/ - name: monitoring volumeMounts: - name: shared-data mountPath: /var/read </code></pre> <p>This works fine as the data I write to the shared-data directory is visible in both containers. However, the config file that is created when creating the container under /etc/myapp/myapp.config is hidden as the shared-data volume is mounted over /etc/myapp path (overlap).</p> <p>How can I force the container to first mount the volume to /etc/myapp path and then cause the docker image to place the myapp.config file under the default path /etc/myapp except that it is the mounted volume thus allowing the config file to be accessible by the monitoring container under /var/read?</p> <p>Summary: let the monitoring container read the /etc/myapp/myapp.config file sitting on myapp container.</p> <p>Can anyone advice please?</p>
Buggy B
<p>Can you mount shared-data at /var/read in an init container and copy the config file from /etc/myapp/myapp.config to /var/read?</p>
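<p>A rough sketch of that idea, assuming the config file is baked into the myapp image so the init container can use the same image to copy it out (image names are placeholders):</p>
<pre><code>spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  initContainers:
    - name: copy-config
      image: myapp-image          # same image that contains /etc/myapp/myapp.config
      command: ["sh", "-c", "cp /etc/myapp/myapp.config /shared/"]
      volumeMounts:
        - name: shared-data
          mountPath: /shared      # mounted somewhere that does NOT hide /etc/myapp
  containers:
    - name: myapp
      image: myapp-image
    - name: monitoring
      image: monitoring-image
      volumeMounts:
        - name: shared-data
          mountPath: /var/read    # monitoring reads the copied myapp.config here
</code></pre>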
Sameer Naik
<p>I am using calico as my kubernetes CNI plugin, but when I ping service from kubernetes pod, it failed.First I find the service ip:</p> <pre><code> [root@localhost ~]# kubectl get svc -o wide NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR prometheus-1594471894-kube-state-metrics ClusterIP 10.20.39.193 &lt;none&gt; 8080/TCP 3h16m app.kubernetes.io/instance=prometheus-1594471894,app.kubernetes.io/name=kube-state-metrics </code></pre> <p>then ping this ip from any pods(already login into pods):</p> <pre><code>root@k8sslave1:/# ping 10.20.39.193 PING 10.20.39.193 (10.20.39.193) 56(84) bytes of data. </code></pre> <p>no response. and then using traceroute to check the path:</p> <pre><code>root@k8sslave1:/# traceroute 10.20.39.193 traceroute to 10.20.39.193 (10.20.39.193), 64 hops max 1 192.168.31.1 0.522ms 0.539ms 0.570ms 2 192.168.1.1 1.171ms 0.877ms 0.920ms 3 100.81.0.1 3.918ms 3.917ms 3.602ms 4 117.135.40.145 4.768ms 4.337ms 4.232ms 5 * * * 6 * * * </code></pre> <p>the package was route to internet, not forward to kubernetes service.Why would this happen? what should I do to fix it? the pod could access internet, and could successfully ping other pods ip.</p> <pre><code>root@k8sslave1:/# ping 10.11.157.67 PING 10.11.157.67 (10.11.157.67) 56(84) bytes of data. 64 bytes from 10.11.157.67: icmp_seq=1 ttl=64 time=0.163 ms 64 bytes from 10.11.157.67: icmp_seq=2 ttl=64 time=0.048 ms 64 bytes from 10.11.157.67: icmp_seq=3 ttl=64 time=0.036 ms 64 bytes from 10.11.157.67: icmp_seq=4 ttl=64 time=0.102 ms </code></pre> <p>this is my ip config when install the kubernetes cluster:</p> <pre><code>kubeadm init \ --apiserver-advertise-address 0.0.0.0 \ --apiserver-bind-port 6443 \ --cert-dir /etc/kubernetes/pki \ --control-plane-endpoint 192.168.31.29 \ --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \ --kubernetes-version 1.18.2 \ --pod-network-cidr 10.11.0.0/16 \ --service-cidr 10.20.0.0/16 \ --service-dns-domain cluster.local \ --upload-certs \ --v=6 </code></pre> <p>this is the dns resolv.conf:</p> <pre><code>cat /etc/resolv.conf nameserver 10.20.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5 </code></pre> <p>this is the pod's kernel route table:</p> <pre><code>[root@localhost ~]# kubectl exec -it shell-demo /bin/bash kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead. root@k8sslave1:/# route -n Kernel IP routing table Destination Gateway Genmask Flags Metric Ref Use Iface 0.0.0.0 192.168.31.1 0.0.0.0 UG 100 0 0 enp1s0 10.11.102.128 192.168.31.29 255.255.255.192 UG 0 0 0 tunl0 10.11.125.128 192.168.31.31 255.255.255.192 UG 0 0 0 tunl0 10.11.157.64 0.0.0.0 255.255.255.192 U 0 0 0 * 10.11.157.66 0.0.0.0 255.255.255.255 UH 0 0 0 cali4ac004513e1 10.11.157.67 0.0.0.0 255.255.255.255 UH 0 0 0 cali801b80f5d85 10.11.157.68 0.0.0.0 255.255.255.255 UH 0 0 0 caliaa7c2766183 10.11.157.69 0.0.0.0 255.255.255.255 UH 0 0 0 cali83957ce33d2 10.11.157.71 0.0.0.0 255.255.255.255 UH 0 0 0 calia012ca8e3b0 10.11.157.72 0.0.0.0 255.255.255.255 UH 0 0 0 cali3e6b175ded9 10.11.157.73 0.0.0.0 255.255.255.255 UH 0 0 0 calif042b3edac7 172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0 192.168.31.0 0.0.0.0 255.255.255.0 U 100 0 0 enp1s0 192.168.122.0 0.0.0.0 255.255.255.0 U 0 0 0 virbr0 </code></pre>
Dolphin
<p>This is a very common issue; in my case it required a full migration of the pod CIDR.</p> <p>Most probably, this issue is caused by an overlap between the pod CIDR (the IP pool used to assign IPs to services and pods) and the CIDR of your network.</p> <p>In that case, the route tables of each node (VM) will confirm it:</p> <pre><code>sudo route -n </code></pre> <p>Since you didn't provide enough logs, here is how to troubleshoot the issue. If it turns out to be the issue I guessed, you will need to change the pod CIDR range as explained starting from Step 3.</p> <p><strong>Step 1: Install calicoctl as a Kubernetes pod</strong></p> <pre><code> kubectl apply -f https://docs.projectcalico.org/manifests/calicoctl.yaml alias calicoctl=&quot;kubectl exec -i -n kube-system calicoctl -- /calicoctl&quot; </code></pre> <p><strong>Step 2: Check the status of the Calico instance</strong></p> <pre><code>calicoctl node status # Sample of output ################### Calico process is running. IPv4 BGP status +--------------+-------------------+-------+----------+-------------+ | PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO | +--------------+-------------------+-------+----------+-------------+ | 172.17.8.102 | node-to-node mesh | up | 23:30:04 | Established | +--------------+-------------------+-------+----------+-------------+ </code></pre> <p>If you have an issue at this step, stop here and fix it.</p> <p>Otherwise, you can proceed.</p> <p><strong>Step 3: List the existing pools</strong></p> <pre><code>calicoctl get ippool -o wide </code></pre> <p><strong>Step 4: Create a new pool</strong></p> <p>Make sure it does not overlap with your network CIDR.</p> <pre class="lang-sh prettyprint-override"><code>calicoctl create -f -&lt;&lt;EOF apiVersion: projectcalico.org/v3 kind: IPPool metadata: name: pool-c spec: cidr: 10.244.0.0/16 ipipMode: Always natOutgoing: true EOF </code></pre> <p>The new pool is named <strong>pool-c</strong>.</p> <p><strong>Step 5: Delete the current pool</strong></p> <pre><code># get all pools calicoctl get ippool -o yaml &gt; pools.yaml # edit the file pools.yaml and remove the current pool. # file editing ... save &amp; quit # then apply changes calicoctl apply -f -&lt;&lt;EOF # Here, paste the new content of the file pools.yaml EOF </code></pre> <p><strong>Step 6: Check the SDN IPs assigned to each workload (pod)</strong></p> <pre class="lang-sh prettyprint-override"><code>calicoctl get wep --all-namespaces </code></pre> <p>Keep restarting old pods and recreating old services until all resources are assigned IPs from the new pool.</p>
Abdennour TOUMI
<p>I'm struggling at having secrets read using application.yml.</p> <p>When I do:</p> <pre><code>quarkus: application: name: pi-quarkus-fund-persistence-service kubernetes-config: enabled: true fail-on-missing-config: false config-maps: pi-quarkus-fund-persistence-service-configmap secrets: pi-quarkus-fund-persistence-service-secrets enabled: true </code></pre> <p>The build fails with:</p> <pre><code>Failed to build quarkus application: mapping values are not allowed here in 'reader', line 16, column 20: enabled: true ^ </code></pre> <p>When:</p> <pre><code>quarkus: application: name: pi-quarkus-fund-persistence-service kubernetes-config: enabled: true secrets.enabled: true fail-on-missing-config: false config-maps: pi-quarkus-fund-persistence-service-configmap secrets: pi-quarkus-fund-persistence-service-secrets </code></pre> <p>The build fails with:</p> <pre><code>Unrecognized configuration key &quot;quarkus.kubernetes-config.&quot;secrets.enabled&quot;&quot; was provided; it will be ignored; verify that the dependency extension for this configuration is set or you did not make a typo </code></pre> <p>When:</p> <pre><code>quarkus.kubernetes-config.secrets.enabled: true quarkus: application: name: pi-quarkus-fund-persistence-service kubernetes-config: enabled: true fail-on-missing-config: false config-maps: pi-quarkus-fund-persistence-service-configmap secrets: pi-quarkus-fund-persistence-service-secrets </code></pre> <p>The build succeed but the service fails at startup with:</p> <p>Configuration is read from Secrets [pi-quarkus-fund-persistence-service-secrets], but <strong>quarkus.kubernetes-config.secrets.enabled is false</strong>. Check if your application's service account has enough permissions to read secrets.</p> <p>When I look at this commit: <a href="https://github.com/quarkusio/quarkus/commit/93f00af9444deafe950afa1fad60f56fceb81ca3" rel="nofollow noreferrer">https://github.com/quarkusio/quarkus/commit/93f00af9444deafe950afa1fad60f56fceb81ca3</a></p> <p>Line 48: // TODO: should probably use converter here</p> <p>Could it be because the property is not converted from yaml?</p>
Frédéric Thomas
<p>I think this is just about how to write the correct YAML. It should be:</p> <pre><code>quarkus: application: name: pi-quarkus-fund-persistence-service kubernetes-config: enabled: true fail-on-missing-config: false config-maps: pi-quarkus-fund-persistence-service-configmap secrets: ~: pi-quarkus-fund-persistence-service-secrets enabled: true </code></pre> <p>In retrospect, <code>quarkus.kubernetes-config.secrets.enabled</code> wasn't the best choice for this config property, sorry about that :-(</p>
Ladicek
<p>I've been reading about microservices and deploying educational projects with Spring Boot and Spring Cloud. Now I want to step up to another level and start using Docker and Kubernetes as the container runtime and orchestrator. My doubt is: most microservices tutorials for Java are about Spring Cloud with Eureka and Zuul, but when you move to Kubernetes, you don't really need Eureka and Zuul, do you? If so, is there an orchestrator that fully integrates the Spring Cloud ecosystem? Or is the best bet integrating Spring Cloud with Kubernetes and forgetting about Eureka and Zuul?</p>
didgewind
<p>Kubernetes provides native support for service discovery and API gateways, so the following technologies can be replaced:</p> <ul> <li>Netflix Eureka with a Kubernetes Service</li> <li>Spring Cloud Config Server with Kubernetes config maps and secrets</li> <li>Spring Cloud Gateway with a Kubernetes Ingress resource</li> </ul> <p>The blog post below provides more information on the above:</p> <p><a href="https://blog.christianposta.com/microservices/netflix-oss-or-kubernetes-how-about-both/" rel="nofollow noreferrer">https://blog.christianposta.com/microservices/netflix-oss-or-kubernetes-how-about-both/</a></p>
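<p>To make the first two replacements more concrete, here is a minimal sketch (the service name, labels and config values are purely illustrative, not taken from the answer above): a plain Kubernetes Service gives you the DNS-based discovery that Eureka provided, and a ConfigMap takes over the externalized configuration role of Spring Cloud Config Server.</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: orders-service        # other pods can reach it at http://orders-service
spec:
  selector:
    app: orders
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: orders-config         # externalized configuration, consumed e.g. as env vars
data:
  SPRING_PROFILES_ACTIVE: &quot;kubernetes&quot;
</code></pre>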
Girish
<p>We have an on-premise kubernetes deployment in our data center. I just finished deploying the pods for Dex, configured hooked up with our LDAP server to allow LDAP based authentication via Dex, ran tests and was able to retrieve the OpenID connect token for authentication.</p> <p>Now I would like to change our on-premise k8s API server startup parameters to enable OIDC and point it to the Dex container.</p> <p>How do I enable OIDC to the API server startup command without downtime to our k8s cluster? Was reading this doc <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/authentication/</a> but the site just says &quot;Enable the required flags&quot; without the steps</p> <p>Thanks!</p>
jrlonan
<p>I installed <strong>Dex</strong> + Active Directory integration a few months ago on a cluster installed by <strong>kubeadm</strong>.</p> <blockquote> <p>Let's assume that Dex is now running and is accessible through <a href="https://dex.example.com" rel="nofollow noreferrer">https://dex.example.com</a>.</p> </blockquote> <p>In that case:</p> <p>Enabling OIDC at the level of the API server takes 3 steps.</p> <p>These steps have to be done on each of your Kubernetes master nodes.</p> <p><strong>1- SSH to your master node.</strong></p> <pre class="lang-sh prettyprint-override"><code>$ ssh root@master-ip </code></pre> <p><strong>2- Edit the Kubernetes API configuration.</strong></p> <p>Add the OIDC parameters and modify the issuer URL accordingly.</p> <pre class="lang-yaml prettyprint-override"><code>$ sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml ... command: - /hyperkube - apiserver - --advertise-address=x.x.x.x ... - --oidc-issuer-url=https://dex.example.com # &lt;-- 🔴 Please focus here - --oidc-client-id=oidc-auth-client # &lt;-- 🔴 Please focus here - --oidc-username-claim=email # &lt;-- 🔴 Please focus here - --oidc-groups-claim=groups # &lt;-- 🔴 Please focus here ... </code></pre> <p><strong>3- The Kubernetes API will restart by itself.</strong></p> <p>I also recommend checking a full guide like this <a href="https://blog.inkubate.io/access-your-kubernetes-cluster-with-your-active-directory-credentials/" rel="nofollow noreferrer">tutorial</a>.</p>
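<p>To confirm that step 3 completed and the new flags were picked up (assuming a kubeadm cluster, where the control-plane static pods carry the <code>component</code> label), something like this should work:</p> <pre><code># wait for the kube-apiserver static pod to come back
kubectl -n kube-system get pods -l component=kube-apiserver

# verify the OIDC flags are present in the running pod spec
kubectl -n kube-system get pods -l component=kube-apiserver -o yaml | grep oidc
</code></pre>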
Abdennour TOUMI
<p>I am trying to build a docker image for dotnet core app on windows which I am planning to host it on Kubernetes </p> <p>with following details</p> <pre><code>#docker file FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base WORKDIR /app EXPOSE 8989 FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build WORKDIR /src COPY ["amazing-app.csproj", "amazing-app/"] RUN dotnet restore "amazing-app/amazing-app.csproj" COPY . . WORKDIR "/src/amazing-app" RUN dotnet build "amazing-app.csproj" -c Release -o /app/build FROM build AS publish RUN dotnet publish "amazing-app.csproj" -c Release -o /app/publish FROM base AS final WORKDIR /app COPY --from=publish /app/publish . ENTRYPOINT ["dotnet", "amazing-app.dll"] </code></pre> <p>directory Structure is >></p> <p><a href="https://i.stack.imgur.com/eB3af.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eB3af.png" alt="enter image description here"></a></p> <p>After running <code>docker build -t amazing-app-one .</code> I am getting following error at step 10/16. Application builds &amp; runs locally but docker is not able to build image out of it.</p> <pre><code>$&gt; &lt;Path&gt;\amazing-app&gt;docker build -t amazing-app-one . Sending build context to Docker daemon 5.741MB Step 1/16 : FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base ---&gt; 08096137b740 Step 2/16 : WORKDIR /app ---&gt; Running in 14f5f9b7b3e5 Removing intermediate container 14f5f9b7b3e5 ---&gt; ae4846eda3f7 Step 3/16 : EXPOSE 8989 ---&gt; Running in 0f464e383641 Removing intermediate container 0f464e383641 ---&gt; 6b855b84749e Step 4/16 : FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build ---&gt; 9817c25953a8 Step 5/16 : WORKDIR /src ---&gt; Running in 5a8fc99a3ecf Removing intermediate container 5a8fc99a3ecf ---&gt; 694d5063e8e6 Step 6/16 : COPY ["amazing-app.csproj", "amazing-app/"] ---&gt; 450921f630c3 Step 7/16 : RUN dotnet restore "amazing-app/amazing-app.csproj" ---&gt; Running in ddb564e875be Restore completed in 1.81 sec for /src/amazing-app/amazing-app.csproj. Removing intermediate container ddb564e875be ---&gt; b59e0c1dfb4d Step 8/16 : COPY . . ---&gt; 695977f3b543 Step 9/16 : WORKDIR "/src/amazing-app" ---&gt; Running in aa5575c99ce3 Removing intermediate container aa5575c99ce3 ---&gt; 781b4c552434 Step 10/16 : RUN dotnet build "amazing-app.csproj" -c Release -o /app/build ---&gt; Running in 3a602c34b5a9 Microsoft (R) Build Engine version 16.4.0+e901037fe for .NET Core Copyright (C) Microsoft Corporation. All rights reserved. Restore completed in 36.78 ms for /src/amazing-app/amazing-app.csproj. CSC : error CS5001: Program does not contain a static 'Main' method suitable for an entry point [/src/amazing-app/amazing-app.csproj] Build FAILED. CSC : error CS5001: Program does not contain a static 'Main' method suitable for an entry point [/src/amazing-app/amazing-app.csproj] 0 Warning(s) 1 Error(s) Time Elapsed 00:00:02.37 The command '/bin/sh -c dotnet build "amazing-app.csproj" -c Release -o /app/build' returned a non-zero code: 1 </code></pre> <p>find the source code on <a href="https://github.com/Kundan22/amazing-app/tree/master/amazing-app" rel="nofollow noreferrer">github link</a></p> <p>am I missing something here? any help much appreciated.</p>
Kundan
<p>The issue is in the folder structure. You copied your source to the <code>/src</code> folder (see lines 8 and 11 of the Dockerfile), but the *.csproj file is copied to the <code>amazing-app</code> subfolder. </p> <p>You can check this by running the "intermediate" image, e.g. <code>781b4c552434</code> (the hash of the image just before the crash), like this: <code>docker run -it --rm --name xxx 781b4c552434 bash</code>, and then inspecting the file system with the <code>ls</code> command to see that the source code and the csproj file are located in different places.</p> <p>So, I suggest moving the Dockerfile to the root directory (next to .dockerignore) and updating the path to the csproj file (you also need to run the <code>docker build</code> command from the root directory). This Dockerfile should work:</p> <pre><code>FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-buster-slim AS base WORKDIR /app EXPOSE 8989 FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build WORKDIR /src COPY ["amazing-app/amazing-app.csproj", "amazing-app/"] RUN dotnet restore "amazing-app/amazing-app.csproj" COPY . . WORKDIR "/src/amazing-app" RUN dotnet build "amazing-app.csproj" -c Release -o /app/build FROM build AS publish RUN dotnet publish "amazing-app.csproj" -c Release -o /app/publish FROM base AS final WORKDIR /app COPY --from=publish /app/publish . ENTRYPOINT ["dotnet", "amazing-app.dll"] </code></pre>
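<p>With the Dockerfile moved next to <code>.dockerignore</code>, the image is then built from the repository root, e.g. with the same tag as in the question:</p> <pre><code>docker build -t amazing-app-one .
</code></pre>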
Exploding Kitten
<p>I'm installing nginx onto my AKS cluster using these commands:</p> <p>kubectl create namespace hello-world</p> <p>helm3 repo add ingress-nginx <a href="https://kubernetes.github.io/ingress-nginx" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx</a></p> <p>helm3 install nginx-ingress ingress-nginx/ingress-nginx --namespace hello-world</p> <p>I want to specify a specific version of nginx-ingress to be installed onto my namespace. I can see the version when I run &quot;kubectl exec -it nginx-ingress-ingress-nginx-controller-6d4456f967-abc2fb -n hello-world -- /nginx-ingress-controller --version&quot;</p> <p>How would I be able to update it or configure it next time?</p>
Joby Santhosh
<p>List the available versions using</p> <pre><code>helm search repo nginx -l </code></pre> <p>Install a specific version using</p> <pre><code>helm3 install nginx-ingress ingress-nginx/ingress-nginx --namespace hello-world --version [theVersion] </code></pre>
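<p>Since the question also asks how to update it later, moving an existing release to another version afterwards is done with <code>helm upgrade</code> (same release name and namespace as above):</p> <pre><code>helm3 upgrade nginx-ingress ingress-nginx/ingress-nginx --namespace hello-world --version [theNewVersion]
</code></pre>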
CSharpRocks
<p>I installed kubernetes v1.11.5 with kubeadm using the flannel CNI plugin and everything was ok. But after I tried to switch to calico, I found that cross-machine pod communication was broken. So I switched back to flannel, but got this error message when creating a pod:</p> <p><a href="https://i.stack.imgur.com/V8dN5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/V8dN5.png" alt="enter image description here"></a></p> <p>It seems that I need to reset the cni network? But I don't know how to solve this problem. </p> <p>My flannel and calico installation follows the <a href="https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/" rel="nofollow noreferrer">kubeadm instructions</a> with zero config changes.</p>
aisensiy
<p>I used the following steps to remove the old calico configs from kubernetes without <code>kubeadm reset</code>:</p> <ol> <li>clear ip routes: <code>ip route flush proto bird</code></li> <li>remove all calico links on all nodes: <code>ip link list | grep cali | awk '{print $2}' | cut -c 1-15 | xargs -I {} ip link delete {}</code></li> <li>remove the ipip module: <code>modprobe -r ipip</code></li> <li>remove calico configs: <code>rm /etc/cni/net.d/10-calico.conflist &amp;&amp; rm /etc/cni/net.d/calico-kubeconfig</code></li> <li>restart kubelet: <code>service kubelet restart</code></li> </ol> <p>After those steps the already-running pods lose connectivity, so I had to delete all the pods; once recreated, they all work. This has little impact if you are using a <code>replicaset</code>.</p>
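<p>A sketch of that pod recreation step: if your workloads are managed by replicasets/deployments, deleting the pods is enough, since their controllers recreate them on the restored flannel network (repeat per namespace; the namespace name below is just an example):</p> <pre><code>kubectl delete pods --all -n default
</code></pre>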
aisensiy
<p>I am having an issue with the authentication operator not becoming stable (bouncing Between Avaialbe = True, and Degraded = True). The operator is trying to check the health using the endpoing <a href="https://oauth-openshift.apps.oc.sow.expert/healthz" rel="nofollow noreferrer">https://oauth-openshift.apps.oc.sow.expert/healthz</a>. and it sees it as not available (at least sometimes).</p> <p>Cluster version :</p> <pre><code>[root@bastion ~]# oc get clusterversion NAME VERSION AVAILABLE PROGRESSING SINCE STATUS version 4.7.1 True False 44h Error while reconciling 4.7.1: the cluster operator ingress is degraded </code></pre> <p>Cluster operator describe:</p> <pre><code>[root@bastion ~]# oc describe clusteroperator authentication Name: authentication Namespace: Labels: &lt;none&gt; Annotations: exclude.release.openshift.io/internal-openshift-hosted: true include.release.openshift.io/self-managed-high-availability: true include.release.openshift.io/single-node-developer: true API Version: config.openshift.io/v1 Kind: ClusterOperator Metadata: Creation Timestamp: 2021-03-15T19:54:21Z Generation: 1 Managed Fields: API Version: config.openshift.io/v1 Fields Type: FieldsV1 fieldsV1: f:metadata: f:annotations: .: f:exclude.release.openshift.io/internal-openshift-hosted: f:include.release.openshift.io/self-managed-high-availability: f:include.release.openshift.io/single-node-developer: f:spec: f:status: .: f:extension: Manager: cluster-version-operator Operation: Update Time: 2021-03-15T19:54:21Z API Version: config.openshift.io/v1 Fields Type: FieldsV1 fieldsV1: f:status: f:conditions: f:relatedObjects: f:versions: Manager: authentication-operator Operation: Update Time: 2021-03-15T20:03:18Z Resource Version: 1207037 Self Link: /apis/config.openshift.io/v1/clusteroperators/authentication UID: b7ca7d49-f6e5-446e-ac13-c5cc6d06fac1 Spec: Status: Conditions: Last Transition Time: 2021-03-17T11:42:49Z Message: OAuthRouteCheckEndpointAccessibleControllerDegraded: Get &quot;https://oauth-openshift.apps.oc.sow.expert/healthz&quot;: EOF Reason: AsExpected Status: False Type: Degraded Last Transition Time: 2021-03-17T11:42:53Z Message: All is well Reason: AsExpected Status: False Type: Progressing Last Transition Time: 2021-03-17T11:43:21Z Message: OAuthRouteCheckEndpointAccessibleControllerAvailable: Get &quot;https://oauth-openshift.apps.oc.sow.expert/healthz&quot;: EOF Reason: OAuthRouteCheckEndpointAccessibleController_EndpointUnavailable Status: False Type: Available Last Transition Time: 2021-03-15T20:01:24Z Message: All is well Reason: AsExpected Status: True Type: Upgradeable Extension: &lt;nil&gt; Related Objects: Group: operator.openshift.io Name: cluster Resource: authentications Group: config.openshift.io Name: cluster Resource: authentications Group: config.openshift.io Name: cluster Resource: infrastructures Group: config.openshift.io Name: cluster Resource: oauths Group: route.openshift.io Name: oauth-openshift Namespace: openshift-authentication Resource: routes Group: Name: oauth-openshift Namespace: openshift-authentication Resource: services Group: Name: openshift-config Resource: namespaces Group: Name: openshift-config-managed Resource: namespaces Group: Name: openshift-authentication Resource: namespaces Group: Name: openshift-authentication-operator Resource: namespaces Group: Name: openshift-ingress Resource: namespaces Group: Name: openshift-oauth-apiserver Resource: namespaces Versions: Name: oauth-apiserver Version: 4.7.1 Name: operator Version: 4.7.1 Name: oauth-openshift 
Version: 4.7.1_openshift Events: &lt;none&gt; </code></pre> <p>When I curl multiple times to the same endpoint from bastion server, it results in two different responses once with the error &quot;OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to oauth-openshift.apps.oc.sow.expert:443&quot; and the other seems to be successful as follows:</p> <pre><code>[root@bastion ~]# curl -vk https://oauth-openshift.apps.oc.sow.expert/healthz * Trying 192.168.124.173... * TCP_NODELAY set * Connected to oauth-openshift.apps.oc.sow.expert (192.168.124.173) port 443 (#0) * ALPN, offering h2 * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none * TLSv1.3 (OUT), TLS handshake, Client hello (1): * OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to oauth-openshift.apps.oc.sow.expert:443 * Closing connection 0 curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to oauth-openshift.apps.oc.sow.expert:443 [root@bastion ~]# curl -vk https://oauth-openshift.apps.oc.sow.expert/healthz * Trying 192.168.124.173... * TCP_NODELAY set * Connected to oauth-openshift.apps.oc.sow.expert (192.168.124.173) port 443 (#0) * ALPN, offering h2 * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: /etc/pki/tls/certs/ca-bundle.crt CApath: none * TLSv1.3 (OUT), TLS handshake, Client hello (1): * TLSv1.3 (IN), TLS handshake, Server hello (2): * TLSv1.3 (IN), TLS handshake, [no content] (0): * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8): * TLSv1.3 (IN), TLS handshake, [no content] (0): * TLSv1.3 (IN), TLS handshake, Request CERT (13): * TLSv1.3 (IN), TLS handshake, [no content] (0): * TLSv1.3 (IN), TLS handshake, Certificate (11): * TLSv1.3 (IN), TLS handshake, [no content] (0): * TLSv1.3 (IN), TLS handshake, CERT verify (15): * TLSv1.3 (IN), TLS handshake, [no content] (0): * TLSv1.3 (IN), TLS handshake, Finished (20): * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1): * TLSv1.3 (OUT), TLS handshake, [no content] (0): * TLSv1.3 (OUT), TLS handshake, Certificate (11): * TLSv1.3 (OUT), TLS handshake, [no content] (0): * TLSv1.3 (OUT), TLS handshake, Finished (20): * SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 * ALPN, server accepted to use http/1.1 * Server certificate: * subject: CN=*.apps.oc.sow.expert * start date: Mar 15 20:05:53 2021 GMT * expire date: Mar 15 20:05:54 2023 GMT * issuer: CN=ingress-operator@1615838672 * SSL certificate verify result: self signed certificate in certificate chain (19), continuing anyway. 
* TLSv1.3 (OUT), TLS app data, [no content] (0): &gt; GET /healthz HTTP/1.1 &gt; Host: oauth-openshift.apps.oc.sow.expert &gt; User-Agent: curl/7.61.1 &gt; Accept: */* &gt; * TLSv1.3 (IN), TLS handshake, [no content] (0): * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): * TLSv1.3 (IN), TLS app data, [no content] (0): &lt; HTTP/1.1 200 OK &lt; Cache-Control: no-cache, no-store, max-age=0, must-revalidate &lt; Content-Type: text/plain; charset=utf-8 &lt; Expires: 0 &lt; Pragma: no-cache &lt; Referrer-Policy: strict-origin-when-cross-origin &lt; X-Content-Type-Options: nosniff &lt; X-Dns-Prefetch-Control: off &lt; X-Frame-Options: DENY &lt; X-Xss-Protection: 1; mode=block &lt; Date: Wed, 17 Mar 2021 11:49:50 GMT &lt; Content-Length: 2 &lt; * Connection #0 to host oauth-openshift.apps.oc.sow.expert left intact ok </code></pre> <p>In the Bastion server, I am hosting the HAProxy load balancer and the squid proxy to allow internal instalnces to access the internet.</p> <p>HAProxy configurations is as follows:</p> <pre><code>[root@bastion ~]# cat /etc/haproxy/haproxy.cfg #--------------------------------------------------------------------- # Example configuration for a possible web application. See the # full configuration options online. # # https://www.haproxy.org/download/1.8/doc/configuration.txt # #--------------------------------------------------------------------- #--------------------------------------------------------------------- # Global settings #--------------------------------------------------------------------- global # to have these messages end up in /var/log/haproxy.log you will # need to: # # 1) configure syslog to accept network log events. This is done # by adding the '-r' option to the SYSLOGD_OPTIONS in # /etc/sysconfig/syslog # # 2) configure local2 events to go to the /var/log/haproxy.log # file. 
A line like the following can be added to # /etc/sysconfig/syslog # # local2.* /var/log/haproxy.log # log 127.0.0.1 local2 chroot /var/lib/haproxy pidfile /var/run/haproxy.pid maxconn 4000 user haproxy group haproxy daemon # turn on stats unix socket stats socket /var/lib/haproxy/stats # utilize system-wide crypto-policies #ssl-default-bind-ciphers PROFILE=SYSTEM #ssl-default-server-ciphers PROFILE=SYSTEM #--------------------------------------------------------------------- # common defaults that all the 'listen' and 'backend' sections will # use if not designated in their block #--------------------------------------------------------------------- defaults mode tcp log global option tcplog option dontlognull option http-server-close #option forwardfor except 127.0.0.0/8 option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 # Control Plane config - external frontend api bind 192.168.124.174:6443 mode tcp default_backend api-be # Control Plane config - internal frontend api-int bind 10.164.76.113:6443 mode tcp default_backend api-be backend api-be mode tcp balance roundrobin # server bootstrap 10.94.124.2:6443 check server master01 10.94.124.3:6443 check server master02 10.94.124.4:6443 check server master03 10.94.124.5:6443 check frontend machine-config bind 10.164.76.113:22623 mode tcp default_backend machine-config-be backend machine-config-be mode tcp balance roundrobin # server bootstrap 10.94.124.2:22623 check server master01 10.94.124.3:22623 check server master02 10.94.124.4:22623 check server master03 10.94.124.5:22623 check # apps config frontend https mode tcp bind 10.164.76.113:443 default_backend https frontend http mode tcp bind 10.164.76.113:80 default_backend http frontend https-ext mode tcp bind 192.168.124.173:443 default_backend https frontend http-ext mode tcp bind 192.168.124.173:80 default_backend http backend https mode tcp balance roundrobin server storage01 10.94.124.6:443 check server storage02 10.94.124.7:443 check server storage03 10.94.124.8:443 check server worker01 10.94.124.15:443 check server worker02 10.94.124.16:443 check server worker03 10.94.124.17:443 check server worker04 10.94.124.18:443 check server worker05 10.94.124.19:443 check server worker06 10.94.124.20:443 check backend http mode tcp balance roundrobin server storage01 10.94.124.6:80 check server storage02 10.94.124.7:80 check server storage03 10.94.124.8:80 check server worker01 10.94.124.15:80 check server worker02 10.94.124.16:80 check server worker03 10.94.124.17:80 check server worker04 10.94.124.18:80 check server worker05 10.94.124.19:80 check server worker06 10.94.124.20:80 check </code></pre> <p>And Here is the squid proxy configurations:</p> <pre><code>[root@bastion ~]# cat /etc/squid/squid.conf # # Recommended minimum configuration: # # Example rule allowing access from your local networks. 
# Adapt to list your (internal) IP networks from where browsing # should be allowed acl localnet src 0.0.0.1-0.255.255.255 # RFC 1122 &quot;this&quot; network (LAN) acl localnet src 10.0.0.0/8 # RFC 1918 local private network (LAN) acl localnet src 100.64.0.0/10 # RFC 6598 shared address space (CGN) acl localnet src 169.254.0.0/16 # RFC 3927 link-local (directly plugged) machines acl localnet src 172.16.0.0/12 # RFC 1918 local private network (LAN) acl localnet src 192.168.0.0/16 # RFC 1918 local private network (LAN) acl localnet src fc00::/7 # RFC 4193 local private network range acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines acl SSL_ports port 443 acl Safe_ports port 80 # http acl Safe_ports port 21 # ftp acl Safe_ports port 443 # https acl Safe_ports port 70 # gopher acl Safe_ports port 210 # wais acl Safe_ports port 1025-65535 # unregistered ports acl Safe_ports port 280 # http-mgmt acl Safe_ports port 488 # gss-http acl Safe_ports port 591 # filemaker acl Safe_ports port 777 # multiling http acl CONNECT method CONNECT # # Recommended minimum Access Permission configuration: # # Deny requests to certain unsafe ports #http_access deny !Safe_ports # Deny CONNECT to other than secure SSL ports #http_access deny CONNECT !SSL_ports # Only allow cachemgr access from localhost http_access allow localhost manager http_access deny manager # We strongly recommend the following be uncommented to protect innocent # web applications running on the proxy server who think the only # one who can access services on &quot;localhost&quot; is a local user #http_access deny to_localhost # # INSERT YOUR OWN RULE(S) HERE TO ALLOW ACCESS FROM YOUR CLIENTS # # Example rule allowing access from your local networks. # Adapt localnet in the ACL section to list your (internal) IP networks # from where browsing should be allowed http_access allow localnet http_access allow localhost # And finally deny all other access to this proxy http_access deny all # Squid normally listens to port 3128 http_port 3128 http_port 10.164.76.113:3128 # Uncomment and adjust the following to add a disk cache directory. #cache_dir ufs /var/spool/squid 100 16 256 # Leave coredumps in the first cache dir coredump_dir /var/spool/squid # # Add any of your own refresh_pattern entries above these. # refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 refresh_pattern . 0 20% 4320 </code></pre> <p>Can someone please help me resolve the connection problem when hitting the application endpoint?</p> <p>EDITED:</p> <p>I get the following error in the console pod logs:</p> <pre><code>[root@bastion cp]# oc logs -n openshift-console console-6697f85d68-p8jxf W0404 14:59:30.706793 1 main.go:211] Flag inactivity-timeout is set to less then 300 seconds and will be ignored! I0404 14:59:30.706887 1 main.go:288] cookies are secure! 
E0404 14:59:31.221158 1 auth.go:235] error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.oc.sow.expert/oauth/token failed: Head &quot;https://oauth-openshift.apps.oc.sow.expert&quot;: EOF E0404 14:59:41.690905 1 auth.go:235] error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.oc.sow.expert/oauth/token failed: Head &quot;https://oauth-openshift.apps.oc.sow.expert&quot;: EOF E0404 14:59:52.155373 1 auth.go:235] error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.oc.sow.expert/oauth/token failed: Head &quot;https://oauth-openshift.apps.oc.sow.expert&quot;: EOF E0404 15:00:02.618751 1 auth.go:235] error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.oc.sow.expert/oauth/token failed: Head &quot;https://oauth-openshift.apps.oc.sow.expert&quot;: EOF E0404 15:00:13.071041 1 auth.go:235] error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.oc.sow.expert/oauth/token failed: Head &quot;https://oauth-openshift.apps.oc.sow.expert&quot;: EOF E0404 15:00:23.531058 1 auth.go:235] error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.oc.sow.expert/oauth/token failed: Head &quot;https://oauth-openshift.apps.oc.sow.expert&quot;: EOF E0404 15:00:33.999953 1 auth.go:235] error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.oc.sow.expert/oauth/token failed: Head &quot;https://oauth-openshift.apps.oc.sow.expert&quot;: EOF E0404 15:00:44.455873 1 auth.go:235] error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.oc.sow.expert/oauth/token failed: Head &quot;https://oauth-openshift.apps.oc.sow.expert&quot;: EOF E0404 15:00:54.935240 1 auth.go:235] error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint https://oauth-openshift.apps.oc.sow.expert/oauth/token failed: Head &quot;https://oauth-openshift.apps.oc.sow.expert&quot;: EOF I0404 15:01:05.666751 1 main.go:670] Binding to [::]:8443... I0404 15:01:05.666776 1 main.go:672] using TLS </code></pre>
Abdullah Alsowaygh
<p>I just resolved this issue. To check whether you have the same issue:</p> <pre><code>oc logs -n openshift-console console-xxxxxxx-yyyyy </code></pre> <p>Check if you have messages like these:</p> <blockquote> <p>error contacting auth provider (retrying in 10s): request to OAuth issuer endpoint <a href="https://oauth-openshift.apps.oc4.tt.testing/oauth/token" rel="nofollow noreferrer">https://oauth-openshift.apps.oc4.tt.testing/oauth/token</a> failed: Head &quot;https://oauth-openshift.apps.oc4.tt.testing&quot;: dial tcp: lookup oauth-openshift.apps.oc4.tt.testing on 172.30.0.10:53: no such host</p> </blockquote> <p>In my case I'm deploying through libvirt, which does part of the DNS resolution. I had already added this entry to the libvirt network, but I had to delete and add it again.</p> <pre><code>WORKER_IP=192.168.126.51 virsh net-update oc4-xxxx delete dns-host &quot;&lt;host ip='$WORKER_IP'&gt;&lt;hostname&gt;oauth-openshift.apps.oc4.tt.testing&lt;/hostname&gt;&lt;/host&gt;&quot; virsh net-update oc4-xxxx add dns-host &quot;&lt;host ip='$WORKER_IP'&gt;&lt;hostname&gt;oauth-openshift.apps.oc4.tt.testing&lt;/hostname&gt;&lt;/host&gt;&quot; </code></pre>
th3penguinwhisperer
<p>I have a K8 cluster that has smb mounted drives connected to an AWS Storage Gateway / file share. We've recently undergone a migration of that SGW to another AWS account and while doing that the IP address and password for that SGW changed.</p> <p>I noticed that our existing setup has a K8 storage class that looks for a K8 secret called &quot;smbcreds&quot;. In that K8 secret they have keys &quot;username&quot; and &quot;password&quot;. I'm assuming it's in line with the <a href="https://github.com/kubernetes-csi/csi-driver-smb/tree/master/deploy/example/smb-provisioner" rel="nofollow noreferrer">setup guide</a> for the Helm chart we're using &quot;csi-driver-smb&quot;.</p> <p>I assumed changing the secret used for the storage class would update everything downstream that uses that storage class, but apparently it does not. I'm obviously a little cautious when it comes to potentially blowing away important data, what do I need to do to update everything to use the new secret and IP config?</p> <p>Here is a simple example of our setup in Terraform -</p> <pre><code>provider &quot;kubernetes&quot; { config_path = &quot;~/.kube/config&quot; config_context = &quot;minikube&quot; } resource &quot;helm_release&quot; &quot;container_storage_interface_for_aws&quot; { count = 1 name = &quot;local-filesystem-csi&quot; repository = &quot;https://raw.githubusercontent.com/kubernetes-csi/csi-driver-smb/master/charts&quot; chart = &quot;csi-driver-smb&quot; namespace = &quot;default&quot; } resource &quot;kubernetes_storage_class&quot; &quot;aws_storage_gateway&quot; { count = 1 metadata { name = &quot;smbmount&quot; } storage_provisioner = &quot;smb.csi.k8s.io&quot; reclaim_policy = &quot;Retain&quot; volume_binding_mode = &quot;WaitForFirstConsumer&quot; parameters = { source = &quot;//1.2.3.4/old-file-share&quot; &quot;csi.storage.k8s.io/node-stage-secret-name&quot; = &quot;smbcreds&quot; &quot;csi.storage.k8s.io/node-stage-secret-namespace&quot; = &quot;default&quot; } mount_options = [&quot;vers=3.0&quot;, &quot;dir_mode=0777&quot;, &quot;file_mode=0777&quot;] } resource &quot;kubernetes_persistent_volume_claim&quot; &quot;aws_storage_gateway&quot; { count = 1 metadata { name = &quot;smbmount-volume-claim&quot; } spec { access_modes = [&quot;ReadWriteMany&quot;] resources { requests = { storage = &quot;10Gi&quot; } } storage_class_name = &quot;smbmount&quot; } } resource &quot;kubernetes_deployment&quot; &quot;main&quot; { metadata { name = &quot;sample-pod&quot; } spec { replicas = 1 selector { match_labels = { app = &quot;sample-pod&quot; } } template { metadata { labels = { app = &quot;sample-pod&quot; } } spec { volume { name = &quot;shared-fileshare&quot; persistent_volume_claim { claim_name = &quot;smbmount-volume-claim&quot; } } container { name = &quot;ubuntu&quot; image = &quot;ubuntu&quot; command = [&quot;sleep&quot;, &quot;3600&quot;] image_pull_policy = &quot;IfNotPresent&quot; volume_mount { name = &quot;shared-fileshare&quot; read_only = false mount_path = &quot;/data&quot; } } } } } } </code></pre> <p>My original change was to change the K8 secret &quot;smbcreds&quot; and change <code>source = &quot;//1.2.3.4/old-file-share&quot;</code> to <code>source = &quot;//5.6.7.8/new-file-share&quot;</code></p>
Ryan Grush
<p>The solution I settled on was to create a second K8 storage class and persistent volume claim that's connected to the new AWS Storage Gateway. I then switched the K8 deployments to use the new PVC.</p>
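<p>For reference, a sketch of what that can look like in the same Terraform style as the question: a parallel storage class and PVC pointing at the new gateway and a new secret. The names <code>smbmount2</code>, <code>smbcreds2</code>, the claim name and the IP below are illustrative placeholders, not the real values.</p> <pre><code>resource &quot;kubernetes_storage_class&quot; &quot;aws_storage_gateway_v2&quot; {
  metadata {
    name = &quot;smbmount2&quot;
  }
  storage_provisioner = &quot;smb.csi.k8s.io&quot;
  reclaim_policy      = &quot;Retain&quot;
  volume_binding_mode = &quot;WaitForFirstConsumer&quot;
  parameters = {
    source                                           = &quot;//5.6.7.8/new-file-share&quot;
    &quot;csi.storage.k8s.io/node-stage-secret-name&quot;      = &quot;smbcreds2&quot;
    &quot;csi.storage.k8s.io/node-stage-secret-namespace&quot; = &quot;default&quot;
  }
  mount_options = [&quot;vers=3.0&quot;, &quot;dir_mode=0777&quot;, &quot;file_mode=0777&quot;]
}

resource &quot;kubernetes_persistent_volume_claim&quot; &quot;aws_storage_gateway_v2&quot; {
  metadata {
    name = &quot;smbmount-volume-claim-2&quot;
  }
  spec {
    access_modes = [&quot;ReadWriteMany&quot;]
    resources {
      requests = {
        storage = &quot;10Gi&quot;
      }
    }
    storage_class_name = &quot;smbmount2&quot;
  }
}
</code></pre> <p>The deployments were then switched to reference the new claim name, after which the old storage class and PVC could be retired.</p>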
Ryan Grush
<p>When a new <em>Kubernetes service</em> is created in Azure a new <em>resource group</em> with a similar name to the cluster is also created. In this &quot;resource group&quot; all the disks etc. which are created from Kubernetes are stored. Is there a way I can avoid that this sub-<em>resource group</em> is created and that all resources created from Kubernetes are put in the <em>resource group</em> where the cluster is located?</p> <p>The background is that our Ops team would like to structure the <em>resource groups</em> and therefore the generated <em>resource group</em> structure is not preferred. Note that the information that this is not possible or at least not without a significant amount of effort would also be very useful for me.</p>
Manuel
<p>According to the <a href="https://learn.microsoft.com/en-us/azure/aks/faq#can-i-provide-my-own-name-for-the-aks-node-resource-group" rel="nofollow noreferrer">documentation</a>, you can't create the cluster resources inside the resource group where the cluster is located. The only option available is to provide a name for the second resource group at cluster creation.</p>
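<p>For reference, the linked FAQ describes naming the node resource group at creation time via the Azure CLI; to the best of my knowledge the relevant flag is <code>--node-resource-group</code> (the resource group and cluster names below are placeholders):</p> <pre><code>az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-resource-group myNodeResourceGroup
</code></pre>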
CSharpRocks
<p><a href="https://marketplace.visualstudio.com/items?itemName=ballerina.ballerina" rel="nofollow noreferrer">Ballerina extension</a> was installed successfully in visual code. Also I configured <code>ballerina.home</code> to point to the installed package </p> <pre><code>ballerina.home = "/Library/Ballerina/ballerina-0.975.1" </code></pre> <p>Visual code is linting correctly. However, when I introduced <code>@kubernetes:*</code> annotations:</p> <pre><code>import ballerina/http; import ballerina/log; @kubernetes:Deployment { enableLiveness: true, image: "ballerina/ballerina-platform", name: "ballerina-abdennour-demo" } @kubernetes:Service { serviceType: "NodePort", name: "ballerina-abdennour-demo" } service&lt;http:Service&gt; hello bind { port: 9090 } { sayHello (endpoint caller, http:Request request) { http:Response res = new; res.setPayload("Hello World from Ballerina Service"); caller -&gt;respond(res) but { error e =&gt; log:printError("Error sending response", err = e)}; } } </code></pre> <p>VisualCode reports an error :</p> <pre><code>undefined package "kubernetes" undefined annotation "Deployment" </code></pre> <p>Nevertheless, I have minikube up and running, and I don't know if I need another extension, so VisualCode can detect running clusters? </p> <p>Or is it a package that is missing and should be installed inside Ballerina SDK/ Platform?</p> <h2>UPDATE</h2> <p>I am running <code>ballerina build file.bal</code>, and I can see this errors :</p> <p><a href="https://i.stack.imgur.com/atVV8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/atVV8.png" alt="enter image description here"></a></p> <p>Any thoughts ?</p>
Abdennour TOUMI
<p>Solved! Just add the <code>import</code> instruction at the beginning of the file</p> <pre><code>import ballerinax/kubernetes; </code></pre> <p>Note, it is <code>ballerinax/kubernetes</code> and not <code>ballerina/kubernetes</code> (add <code>x</code>)</p>
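<p>So the top of the file from the question would start like this (only the extra import changes):</p> <pre><code>import ballerina/http;
import ballerina/log;
import ballerinax/kubernetes;
</code></pre>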
Abdennour TOUMI
<p>We have a pod which needs a certificate file. We need to provide a path to a certificate file (we already have this certificate). How should we put this certificate file into k8s so that the pod has access to it, e.g. so that we can provide a path like the following to the pod: <code>&quot;/path/to/certificate_authority.crt&quot;</code>? Should we use a secret/configmap, and if yes, how?</p>
Jenney
<p>Create a TLS secret then mount it to the desired folder.</p> <pre><code>apiVersion: v1 kind: Secret metadata: name: secret-tls type: kubernetes.io/tls data: # the data is abbreviated in this example tls.crt: | MIIC2DCCAcCgAwIBAgIBATANBgkqh ... tls.key: | MIIEpgIBAAKCAQEA7yn3bRHQ5FHMQ ... </code></pre> <p><a href="https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets" rel="nofollow noreferrer">Documentation</a></p> <p>To mount the secret in a volume from your pod:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: mypod spec: containers: - name: mypod image: redis volumeMounts: - name: foo mountPath: &quot;/path/to/&quot; readOnly: true volumes: - name: foo secret: secretName: secret-tls </code></pre> <p><a href="https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod" rel="nofollow noreferrer">Documentation</a></p>
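<p>If you only have a single CA certificate file rather than a cert/key pair, a generic secret created directly from the file works the same way and can be mounted with the same kind of volume (the secret name and local path here are just examples):</p> <pre><code>kubectl create secret generic ca-cert \
  --from-file=certificate_authority.crt=/local/path/certificate_authority.crt
</code></pre> <p>Mounted at <code>mountPath: &quot;/path/to/&quot;</code>, the file then appears inside the container as <code>/path/to/certificate_authority.crt</code>.</p>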
CSharpRocks
<p>I am trying to configure kubernetes plugin in Jenkins. Here are the details I am putting in:</p> <p><a href="https://i.stack.imgur.com/OuRnk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OuRnk.png" alt="enter image description here"></a></p> <p>Now, when I click on test connection, I get the following error:</p> <pre><code>Error testing connection https://xx.xx.xx.xx:8001: Failure executing: GET at: https://xx.xx.xx.xx:8001/api/v1/namespaces/default/pods. Message: Unauthorized! Configured service account doesn't have access. Service account may have been revoked. Unauthorized. </code></pre> <p>After doing some google, I realized it might be because of role binding, so I create a role binding for my <code>default</code> service account:</p> <pre><code># kubectl describe rolebinding jenkins Name: jenkins Labels: &lt;none&gt; Annotations: &lt;none&gt; Role: Kind: ClusterRole Name: pod-reader Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount default default </code></pre> <p>Here is the pod-reader role:</p> <pre><code># kubectl describe role pod-reader Name: pod-reader Labels: &lt;none&gt; Annotations: &lt;none&gt; PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- pods [] [] [get watch list] </code></pre> <p>But I still get the same error. Is there anything else that needs to be done here? TIA.</p>
Pensu
<p>Figured it out, I was using credentials as plain text. I changed that to kubernetes secret, and it worked. </p>
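<p>On older clusters that still auto-create service-account token secrets, one way to obtain the token to store in such a Jenkins credential is a sketch like this (namespace and service account names are examples):</p> <pre><code>kubectl -n default get secret \
  $(kubectl -n default get serviceaccount default -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 --decode
</code></pre>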
Pensu
<p><strong>The context</strong></p> <p>Let me know if I've gone down a rabbit hole here.</p> <p>I have a simple web app with a frontend and backend component, deployed using Docker/Helm inside a Kubernetes cluster. The frontend is servable via nginx, and the backend component will be running a NodeJS microservice.</p> <p>I had been thinking to have both run on the same pod inside Docker, but ran into some problems getting both nginx and Node to run in the background. I could try having a startup script that runs both, but <a href="https://runnable.com/docker/rails/run-multiple-processes-in-a-container" rel="nofollow noreferrer">the Internet</a> says it's a best practice to have different containers each be responsible for only running one service - so one container to run nginx and another to run the microservice.</p> <p><strong>The problem</strong></p> <p>That's fine, but then say the nginx server's HTML pages need to know what to send a POST request to in the backend - how can the HTML pages know what IP to hit for the backend's Docker container? Articles like <a href="https://levelup.gitconnected.com/how-to-access-a-docker-container-from-another-container-656398c93576" rel="nofollow noreferrer">this one</a> come up talking about manually creating a Docker network for the two containers to speak to one another, but how can I configure this with Helm so that the frontend container knows how to hit the backend container each time a new container is deployed, without having to manually configure any network service each time? I want the deployments to be automated.</p>
A. Duff
<p>You mention that your frontend is based on Nginx.</p> <p>Accordingly, the frontend must hit the <strong>public</strong> URL of the backend.</p> <p>Thus, the backend must be exposed by choosing a suitable service type, either:</p> <ul> <li><strong>NodePort</strong> -&gt; the frontend will reach the backend at <code>http://&lt;any-node-ip&gt;:&lt;node-port&gt;</code></li> <li>or <strong>LoadBalancer</strong> -&gt; the frontend will reach the backend at the <code>http://loadbalancer-external-IP:service-port</code> of the service.</li> <li>or keep it <strong>ClusterIP</strong>, but add an <strong>Ingress</strong> resource on top of it -&gt; the frontend will reach the backend at its ingress host <code>http://ingress.host.com</code>.</li> </ul> <p>We recommend the last option, but it requires an ingress controller (a minimal example is sketched below).</p> <p>Once you have tested one of them and it works, you can extend your helm chart to update the service and add the ingress resource if needed.</p>
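<p>As an illustration of the recommended (Ingress) option, a minimal Ingress for the backend could look like this; the host, service name and port are placeholders, not values from the question:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: backend-ingress
spec:
  rules:
    - host: ingress.host.com        # the URL the frontend will call
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend-service   # the ClusterIP service of the NodeJS microservice
                port:
                  number: 80
</code></pre>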
Abdennour TOUMI
<p>I am using the <code>flannel</code> network plugin in my k8s cluster. There is one special node which has one internal IP address and one public IP address, which makes it possible to ssh into it. </p> <p>After I added the node using <code>kubeadm</code> I found that <code>k get node xx -o yaml</code> returns the <code>flannel.alpha.coreos.com/public-ip</code> annotation with the public IP address, <strong>which makes the internal Kubernetes pods inaccessible from other nodes</strong>.</p> <pre><code>apiVersion: v1 kind: Node metadata: annotations: flannel.alpha.coreos.com/backend-data: '{"VtepMAC":"xxxxxx"}' flannel.alpha.coreos.com/backend-type: vxlan flannel.alpha.coreos.com/kube-subnet-manager: "true" flannel.alpha.coreos.com/public-ip: &lt;the-public-ip, not the internal one&gt; kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: "0" volumes.kubernetes.io/controller-managed-attach-detach: "true" </code></pre> <p>I tried to use <code>k edit node xxx</code> to change the <code>public-ip</code> in the annotation; it works for just a minute and then changes back to the original one.</p> <p>So... my question is just like the title: how can I change the Kubernetes node annotation <code>flannel.alpha.coreos.com/public-ip</code> without it being changed back?</p>
aisensiy
<p>Do the modification using <code>kubectl</code> and you will have two ways:</p> <ul> <li><p><strong>kubectl annotate</strong>: </p> <pre><code>kubectl annotate node xx --overwrite flannel.alpha.coreos.com/public-ip=new-value </code></pre></li> <li><p>or <strong>kubectl patch</strong> : </p> <pre><code>kubectl patch node xx -p '{"metadata":{"annotations":{"flannel.alpha.coreos.com/public-ip":"new-value"}}}' </code></pre></li> </ul>
Abdennour TOUMI
<p>I just created a namespace, have done nothing with it and now deleted it. However, when I list contexts I can still see it there. It seems to have been deleted as I can't delete it again. Why can I still see it listed when I get contexts?</p> <pre><code>kubectl config get-contexts CURRENT NAME CLUSTER AUTHINFO NAMESPACE * dev minikube minikube dev minikube minikube minikube kubectl delete namespace dev namespace "dev" deleted kubectl config get-contexts CURRENT NAME CLUSTER AUTHINFO NAMESPACE * dev minikube minikube dev minikube minikube minikube </code></pre> <p>I switched contexts just in case but still get the same problem. E.g.</p> <pre><code>kubectl delete namespace dev Error from server (NotFound): namespaces "dev" not found kubectl config get-contexts CURRENT NAME CLUSTER AUTHINFO NAMESPACE dev minikube minikube dev * minikube minikube minikube </code></pre> <p>Interestingly, I don't see it when I list namespaces. E.g.</p> <pre><code>kubectl get namespaces </code></pre>
Snowcrash
<p>A <code>context</code> in <code>kubectl</code> is just a local config that contains details (metadata) about a particular cluster or a namespace. This is the config that is needed for cluster/namespace management using the <code>kubectl</code> client.</p> <p>So, when you type <code>kubectl config &lt;any_command&gt;</code>, it's just doing a lookup in a file stored locally on your computer. Run the following to learn more about this command and how to control the location of the config file:</p> <pre><code>kubectl config --help </code></pre> <p>Deleting a cluster or a namespace does not delete the associated context. The reason is that deleting a cluster or a namespace is an asynchronous operation that runs on the cluster. This operation may take longer than a few seconds to actually finish. Thus, <code>kubectl</code> cannot immediately delete the context from the config file after you issue the delete to the cluster master.</p> <p>To answer your question, you have to manually delete the context using:</p> <pre><code>kubectl config delete-context dev </code></pre>
Ashu Pachauri
<p>I have a ReactJS application and I'm deploying it using Kubernetes.</p> <p>I'm trying to wrap my head around how to inject environment variables into my <code>config.js</code> file from within the Kubernetes deployment file.</p> <p>I currently have these: <code>config.js</code> file:</p> <pre><code>export const CLIENT_API_ENDPOINT = { default:process.env.URL_TO_SERVICE, }; </code></pre> <p>and here's my Kubernetes deployment variables:</p> <pre><code>"spec": { "containers": [ { "name": "container_name", "image": "image_name", "env": [ { "name": "URL_TO_SERVICE", "value": "https://www.myurl.com" } ] </code></pre> <p>Kinda clueless of why I can't see the environment variable in my <code>config.js</code> file. Any help would be highly appreciated.</p> <p>Here's my dockerfile:</p> <pre><code># Dockerfile (tag: v3) FROM node:9.3.0 RUN npm install webpack -g WORKDIR /tmp COPY package.json /tmp/ RUN npm config set registry http://registry.npmjs.org/ &amp;&amp; npm install WORKDIR /usr/src/app COPY . /usr/src/app/ RUN cp -a /tmp/node_modules /usr/src/app/ #RUN webpack ENV NODE_ENV=production ENV PORT=4000 #CMD [ "/usr/local/bin/node", "./index.js" ] ENTRYPOINT npm start EXPOSE 4000 </code></pre>
Jonathan Perry
<p>The kubernetes environment variables are available in your container. So you would think the task here is a version of getting server side configuration variables shipped to your client side code.</p> <p>But, if your react application is running in a container, you are most likely running your javascript build pipeline when you build the docker image. Something like this:</p> <pre><code>RUN npm run build # Run app using nodemon CMD [ "npm", "start" ] </code></pre> <p>When docker is building your container, the environment variables injected by kubernetes aren't yet available. They won't exist until you run the built container on a cluster.</p> <p>One solution, and this is maybe your shortest path, is to stop building your client side code in the docker file and combine the build and run steps in the npm start command. Something like this if you are using webpack:</p> <pre><code>"start": "webpack -p --progress --config webpack.production.config.js &amp;&amp; node index.js" </code></pre> <p>If you go this route, then you can use any of the well documented techniques for shipping server side environment variables to your client during the build step: <a href="https://stackoverflow.com/questions/30030031/passing-environment-dependent-variables-in-webpack">Passing environment-dependent variables in webpack</a>. There are similar techniques and tools for all other javascript build tools.</p> <p>A second option: if you are running node, you can continue building your client app in the container, but have the node app write a config.js to the file system on the startup of the node application.</p> <p>You could do even more complicated things like exposing your config via an api (a variation on the second approach), but this seems like throwing good money after bad. </p> <p>I wonder if there isn't an easier way. If you have a purely client side app, why not just deploy it as a static site to, say, an amazon or gcloud bucket, firebase, or netlify? This way you just run the build process and deploy to the correct environment; no container needed.</p>
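<p>For the first approach, the usual webpack technique is <code>DefinePlugin</code>, which bakes the value of the environment variable into the client bundle at build time. A sketch of the plugin block only, assuming the rest of <code>webpack.production.config.js</code> already exists:</p> <pre><code>// webpack.production.config.js (excerpt)
const webpack = require('webpack');

module.exports = {
  // ...existing entry/output/loader configuration...
  plugins: [
    new webpack.DefinePlugin({
      // replaces process.env.URL_TO_SERVICE in the client code with the
      // value present when `npm start` runs inside the Kubernetes pod
      'process.env.URL_TO_SERVICE': JSON.stringify(process.env.URL_TO_SERVICE),
    }),
  ],
};
</code></pre>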
Robert Moskal
<p>I created a nginx ingress using <a href="https://kubernetes.github.io/ingress-nginx/deploy/#quick-start" rel="nofollow noreferrer">this link</a> using docker-desktop.</p> <pre class="lang-sh prettyprint-override"><code>kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.0/deploy/static/provider/cloud/deploy.yaml </code></pre> <p>My service:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: be-assinaturas namespace: apps-space labels: tier: backend app: assinaturas spec: selector: matchLabels: name: be-assinaturas app: assinaturas strategy: type: Recreate replicas: 2 template: metadata: name: be-assinaturas labels: app: assinaturas name: be-assinaturas spec: containers: - name: be-assinaturas image: jedi31/assinaturas:latest imagePullPolicy: Always --- kind: Service apiVersion: v1 metadata: name: svc-assinaturas namespace: apps-space spec: selector: name: be-assinaturas app: assinaturas type: ClusterIP ports: - name: be-assinaturas-http port: 80 targetPort: 80 </code></pre> <p>My ingress resource is defined as:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-be-assinaturas namespace: apps-space annotations: kubernetes.io/ingress.class: &quot;nginx&quot; spec: rules: - http: paths: - path: /assinaturas pathType: Prefix backend: service: name: svc-assinaturas port: number: 80 </code></pre> <p>Running a <code>kubectl get services --all-namespaces</code> I get</p> <pre class="lang-sh prettyprint-override"><code>NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE apps-space svc-assinaturas ClusterIP 10.107.188.28 &lt;none&gt; 80/TCP 12m default kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 66d ingress-nginx ingress-nginx-controller LoadBalancer 10.102.238.173 localhost 80:32028/TCP,443:30397/TCP 5h45m ingress-nginx ingress-nginx-controller-admission ClusterIP 10.98.148.190 &lt;none&gt; 443/TCP 5h45m kube-system kube-dns ClusterIP 10.96.0.10 &lt;none&gt; 53/UDP,53/TCP,9153/TCP 66d </code></pre> <p>If I do a port forward on service, like this: <code>kubectl port-forward -n apps-space service/svc-assinaturas 5274:80</code></p> <p>I can acess my app using curl, like this: <code>curl -v http://localhost:5274/hc/alive</code></p> <p>and the response is:</p> <pre class="lang-sh prettyprint-override"><code>* Trying 127.0.0.1:5274... * TCP_NODELAY set * Connected to localhost (127.0.0.1) port 5274 (#0) &gt; GET /hc/alive HTTP/1.1 &gt; Host: localhost:5274 &gt; User-Agent: curl/7.68.0 &gt; Accept: */* &gt; * Mark bundle as not supporting multiuse &lt; HTTP/1.1 200 OK &lt; Content-Type: application/json; charset=utf-8 &lt; Date: Mon, 06 Dec 2021 23:22:40 GMT &lt; Server: Kestrel &lt; Transfer-Encoding: chunked &lt; * Connection #0 to host localhost left intact {&quot;service&quot;:&quot;Catalogo&quot;,&quot;status&quot;:&quot;Alive&quot;,&quot;version&quot;:&quot;bab4653&quot;} </code></pre> <p>But if I try access the service using Ingress, it returns a 404. <code>curl -v http://localhost/assinaturas/hc/alive</code></p> <pre class="lang-sh prettyprint-override"><code>* Trying 127.0.0.1:80... 
* TCP_NODELAY set * Connected to localhost (127.0.0.1) port 80 (#0) &gt; GET /assinaturas/hc/alive HTTP/1.1 &gt; Host: localhost &gt; User-Agent: curl/7.68.0 &gt; Accept: */* &gt; * Mark bundle as not supporting multiuse &lt; HTTP/1.1 404 Not Found &lt; Date: Mon, 06 Dec 2021 23:22:51 GMT &lt; Content-Length: 0 &lt; Connection: keep-alive &lt; * Connection #0 to host localhost left intact </code></pre> <p>What I'm doing wrong here? Why I can acess the service, but the ingress do not find it?</p>
Jedi31
<p>This is because the <code>/assinaturas</code> prefix needs to be stripped by an Nginx <strong>rewrite</strong> before the request reaches your service, and that explains why you got a 404 (not found):</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: ingress-be-assinaturas namespace: apps-space annotations: kubernetes.io/ingress.class: &quot;nginx&quot; nginx.ingress.kubernetes.io/rewrite-target: /$1 # &lt;- 🔴 rewrite here spec: rules: - http: paths: - path: /assinaturas/(.*) # &lt;-- 🔴 Match the path to be rewritten pathType: Prefix backend: service: name: svc-assinaturas port: number: 80 </code></pre>
Abdennour TOUMI
<p>I have a container based application running node JS and my backend is a mongoDB container. </p> <p>Basically, what I am planning to do is to run this in kubernetes. </p> <p>I have deployed this as separate containers on my current environment and it works fine. I have a mongoDB container and a node JS container. </p> <p>To connect the two I would do </p> <pre><code>docker run -d --link=mongodb:mongodb -e MONGODB_URL='mongodb://mongodb:27017/user' -p 4000:4000 e922a127d049 </code></pre> <p>my connection.js runs as below where it would take the MONGODB_URL and pass into the process.env in my node JS container. My connection.js would then extract the MONGODB_URL into the mongoDbUrl as show below. </p> <pre><code>const mongoClient = require('mongodb').MongoClient; const mongoDbUrl = process.env.MONGODB_URL; //console.log(process.env.MONGODB_URL) let mongodb; function connect(callback){ mongoClient.connect(mongoDbUrl, (err, db) =&gt; { mongodb = db; callback(); }); } function get(){ return mongodb; } function close(){ mongodb.close(); } module.exports = { connect, get, close }; </code></pre> <p>To deploy on k8s, I have written a yaml file for </p> <p>1) web controller 2) web service 3) mongoDB controller 4) mongoDB service</p> <p>This is my current mongoDB controller </p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: mongo-deployment spec: replicas: 1 template: metadata: labels: name: mongo spec: containers: - image: mongo:latest name: mongo ports: - name: mongo containerPort: 27017 hostPort: 27017 </code></pre> <p>my mongoDB service</p> <pre><code>apiVersion: v1 kind: Service metadata: labels: name: mongodb name: mongodb spec: ports: - port: 27017 targetPort: 27017 selector: name: mongo </code></pre> <p>my web controller</p> <pre><code>apiVersion: v1 kind: ReplicationController metadata: labels: name: web name: web-controller spec: replicas: 1 selector: name: web template: metadata: labels: name: web spec: containers: - image: leexha/node_demo:21 env: - name: MONGODB_URL value: "mongodb://mongodb:27017/user" name: web ports: - containerPort: 4000 name: node-server </code></pre> <p>and my web service</p> <pre><code>apiVersion: v1 kind: Service metadata: name: web labels: name: web spec: type: NodePort ports: - port: 4000 targetPort: 4000 protocol: TCP selector: name: web </code></pre> <p>I was able to deploy all the services and pods on my local kubernetes cluster. </p> <p>However, when I tried to access the web application over a nodeport, it tells me that there is a connection error to my mongoDB. 
</p> <pre><code>TypeError: Cannot read property 'collection' of null at /app/app.js:24:17 at Layer.handle [as handle_request] </code></pre> <p>This is my node JS code for app.js</p> <pre><code>var bodyParser = require('body-parser') , MongoClient = require('mongodb').MongoClient , PORT = 4000 , instantMongoCrud = require('express-mongo-crud') // require the module , express = require('express') , app = express() , path = require('path') , options = { //specify options host: `localhost:${PORT}` } , db = require('./connection') // connection to database db.connect(() =&gt; { app.use(bodyParser.json()); // add body parser app.use(bodyParser.urlencoded({ extended: true })); //console.log('Hello ' + process.env.MONGODB_URL) // get function app.get('/', function(req, res) { db.get().collection('users').find({}).toArray(function(err, data){ if (err) console.log(err) else res.render('../views/pages/index.ejs',{data:data}); }); }); </code></pre> <p>Clearly, this is an error when my node JS application is unable to read the mongoDB service. </p> <p>I at first thought my MONGODB_URL was not set in my container. However, when I checked the nodeJS container using </p> <pre><code>kubectl exec -it web-controller-r269f /bin/bash </code></pre> <p>and echo my MONGODB_URL it returned me back mongodb://mongodb:27017/user which is correct. </p> <p>Im quite unsure what I am doing wrong as I am pretty sure I have done everything in order and my web deployment is communicating to mongoDB service. Any help? Sorry am still learning kubernetes and please pardon any mistakes</p>
adr
<p>[Edit] </p> <p>Sorry, my bad, the connection string <code>mongodb://mongodb:27017</code> would actually work. I tried dns querying that name, and it was able to resolve to the correct ip address even without specifying ".default.svc...". </p> <p><code>root@web-controller-mlplb:/app# host mongodb mongodb.default.svc.cluster.local has address 10.108.119.125</code></p> <p>@Anshul Jindal is correct that you have a race condition, where the web pods are loaded before the database pods. You were probably doing <code>kubectl apply -f .</code> Try doing a reset with <code>kubectl delete -f .</code> in the folder containing those yaml files. Then <code>kubectl apply</code> the database manifests first, then after a few seconds, <code>kubectl apply</code> the web manifests. You could also probably use Init Containers to check when the mongo service is ready, before running the pods. Or, you can also do that check in your node.js application. </p> <p><strong>Example of waiting for mongodb service in Node.js</strong></p> <p>In your connection.js file, you can change the connect function such that if it fails the first time (i.e. due to the mongodb service/pod not being available yet), it will retry every 3 seconds until a connection can be established. This way, you don't even have to worry about the load order of applying kubernetes manifests; you can just <code>kubectl apply -f .</code></p> <pre><code>let RECONNECT_INTERVAL = 3000 function connect(callback){ mongoClient.connect(mongoDbUrl, (err, db) =&gt; { if (err) { console.log("attempting to reconnect to " + mongoDbUrl) setTimeout(connect.bind(this, callback), RECONNECT_INTERVAL) return } else { mongodb = db; callback(); } }); } </code></pre>
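<p>A sketch of the init-container variant mentioned above: added under the pod template of the web controller, it blocks pod start until the <code>mongodb</code> service answers on port 27017 (the busybox image and the <code>nc</code> loop are just one common way to do this):</p> <pre><code># added under spec.template.spec of the web ReplicationController
initContainers:
  - name: wait-for-mongo
    image: busybox:1.31
    command: ['sh', '-c', 'until nc -z mongodb 27017; do echo waiting for mongodb; sleep 2; done']
</code></pre>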
redgetan
<p>I'm trying to construct a Kubernetes informer outside of the EKS cluster that it's watching. I'm using <a href="https://github.com/kubernetes-sigs/aws-iam-authenticator" rel="nofollow noreferrer">aws-iam-authenticator</a> plugin to provide the <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins" rel="nofollow noreferrer">exec-based credentials</a> to the EKS cluster. For the plugin to work, I'm assuming an IAM role and passing the AWS IAM credentials as environment variables.</p> <p>The problem is that these credentials expire after an hour and cause the informer to fail with</p> <blockquote> <p>E0301 23:34:22.167817 582 runtime.go:79] Observed a panic: &amp;errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:&quot;&quot;, APIVersion:&quot;&quot;}, ListMeta:v1.ListMeta{SelfLink:&quot;&quot;, ResourceVersion:&quot;&quot;, Continue:&quot;&quot;, RemainingItemCount:(*int64)(nil)}, Status:&quot;Failure&quot;, Message:&quot;the server has asked for the client to provide credentials (get pods)&quot;, Reason:&quot;Unauthorized&quot;, Details:(*v1.StatusDetails)(0xc0005b0300), Code:401}} (the server has asked for the client to provide credentials (get pods))</p> </blockquote> <p>Is there a better way of getting <code>ClientConfig</code> and <code>aws-iam-authenticator</code> to refresh the credentials?</p> <p>Here's a rough skeleton of my code:</p> <pre class="lang-golang prettyprint-override"><code>credentialsProvider := aws.NewCredentialsCache(stscreds.NewWebIdentityRoleProvider(...)) creds, err := credentialsProvider.Retrieve(ctx) config := clientcmdapi.NewConfig() // ... config.AuthInfos[&quot;eks&quot;] = &amp;clientcmdapi.AuthInfo{ Exec: &amp;clientcmdapi.ExecConfig{ Command: &quot;aws-iam-authenticator&quot;, Args: []string{ &quot;token&quot;, &quot;-i&quot;, clusterName, }, // These env vars are static! :( Env: []clientcmdapi.ExecEnvVar{ { Name: &quot;AWS_ACCESS_KEY_ID&quot;, Value: creds.AccessKeyID, }, { Name: &quot;AWS_SECRET_ACCESS_KEY&quot;, Value: creds.SecretAccessKey, }, { Name: &quot;AWS_SESSION_TOKEN&quot;, Value: creds.SessionToken, }, }, APIVersion: &quot;client.authentication.k8s.io/v1beta1&quot;, InteractiveMode: clientcmdapi.NeverExecInteractiveMode, }, } restConfig, err := config.ClientConfig() clientset, err = kubernetes.NewForConfig(restConfig) informerFactory := informers.NewSharedInformerFactory(clientset, time.Second*30) podInformer := cw.informerFactory.Core().V1().Pods().Informer() </code></pre> <p>Here are a couple similar threads I found:</p> <ul> <li><a href="https://stackoverflow.com/questions/74563117/kubernetes-client-go-informers-getting-unauthorized-error-after-15-mins">Kubernetes client-go informers getting &quot;Unauthorized&quot; error after 15 mins</a></li> <li><a href="https://github.com/kubernetes/client-go/issues/1189" rel="nofollow noreferrer">https://github.com/kubernetes/client-go/issues/1189</a></li> </ul>
tskuzzy
<p>My solution was to write the credentials to a file and create a background goroutine to refresh that file. I can then tell <code>aws-iam-authenticator</code> to read the credentials from the file via the <code>AWS_SHARED_CREDENTIALS_FILE</code> environment variable.</p> <p>This might also be possible using <code>AWS_WEB_IDENTITY_TOKEN_FILE</code> to save some steps, but I didn't look further.</p> <p>The updated code looks like this</p> <pre><code>func updateCredentials(ctx context.Context) error { creds, err := credentialsProvider.Retrieve(ctx) if err != nil { return err } s := fmt.Sprintf(`[default] aws_access_key_id=%s aws_secret_access_key=%s aws_session_token=%s`, creds.AccessKeyID, creds.SecretAccessKey, creds.SessionToken) return os.WriteFile(credentialsFile.Name(), []byte(s), 0666) } func updateCredentialsLoop(ctx context.Context) { for { if err := updateCredentials(ctx); err != nil { log.Println(&quot;failed to refresh AWS credentials:&quot;, err) } time.Sleep(5*time.Minute) } } credentialsProvider := aws.NewCredentialsCache(stscreds.NewWebIdentityRoleProvider(...)) credentialsFile, err := os.CreateTemp(&quot;&quot;, &quot;credentials&quot;) updateCredentials(ctx) go updateCredentialsLoop(ctx) config := clientcmdapi.NewConfig() // ... config.AuthInfos[&quot;eks&quot;] = &amp;clientcmdapi.AuthInfo{ Exec: &amp;clientcmdapi.ExecConfig{ Command: &quot;aws-iam-authenticator&quot;, Args: []string{ &quot;token&quot;, &quot;-i&quot;, clusterName, }, Env: []clientcmdapi.ExecEnvVar{ { Name: &quot;AWS_SHARED_CREDENTIALS_FILE&quot;, Value: credentialsFile.Name(), }, }, APIVersion: &quot;client.authentication.k8s.io/v1beta1&quot;, InteractiveMode: clientcmdapi.NeverExecInteractiveMode, }, } restConfig, err := config.ClientConfig() clientset, err = kubernetes.NewForConfig(restConfig) informerFactory := informers.NewSharedInformerFactory(clientset, time.Second*30) podInformer := cw.informerFactory.Core().V1().Pods().Informer() </code></pre>
tskuzzy
<p>Unable to access the Kubernetes dashboard. Executed below steps:</p> <ol> <li><p>kubectl apply -f <a href="https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml</a></p></li> <li><p>kubectl proxy --address="192.168.56.12" -p 8001 --accept-hosts='^*$'</p></li> <li>Now trying to access from url: <a href="http://192.168.56.12:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/" rel="nofollow noreferrer">http://192.168.56.12:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/</a></li> </ol> <pre><code>{ "kind": "Status", "apiVersion": "v1", "metadata": { }, "status": "Failure", "message": "no endpoints available for service \"https:kubernetes-dashboard:\"", "reason": "ServiceUnavailable", "code": 503 }``` Output of a few commands that will required: </code></pre> <p>[root@k8s-master ~]# kubectl logs kubernetes-dashboard-6bb65fcc49-zn2c2 --namespace=kubernetes-dashboard</p> <p>Error from server: Get <a href="https://192.168.56.14:10250/containerLogs/kubernetes-dashboard/kubernetes-dashboard-6bb65fcc49-7wz6q/kubernetes-dashboard" rel="nofollow noreferrer">https://192.168.56.14:10250/containerLogs/kubernetes-dashboard/kubernetes-dashboard-6bb65fcc49-7wz6q/kubernetes-dashboard</a>: dial tcp 192.168.56.14:10250: connect: no route to host [root@k8s-master ~]#</p> <pre><code>$kubectl get pods -o wide --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE ATES kube-system coredns-5c98db65d4-89c9p 1/1 Running 0 76m 10.244.0.14 k8s-master kube-system coredns-5c98db65d4-ggqfj 1/1 Running 0 76m 10.244.0.13 k8s-master kube-system etcd-k8s-master 1/1 Running 0 75m 192.168.56.12 k8s-master kube-system kube-apiserver-k8s-master 1/1 Running 0 75m 192.168.56.12 k8s-master kube-system kube-controller-manager-k8s-master 1/1 Running 1 75m 192.168.56.12 k8s-master kube-system kube-flannel-ds-amd64-74zrn 1/1 Running 1 74m 192.168.56.14 node1 kube-system kube-flannel-ds-amd64-hgcp8 1/1 Running 0 75m 192.168.56.12 k8s-master kube-system kube-proxy-2lczb 1/1 Running 0 74m 192.168.56.14 node1 kube-system kube-proxy-8dxdm 1/1 Running 0 76m 192.168.56.12 k8s-master kube-system kube-scheduler-k8s-master 1/1 Running 1 75m 192.168.56.12 k8s-master kubernetes-dashboard dashboard-metrics-scraper-fb986f88d-d49sw 1/1 Running 0 71m 10.244.1.21 node1 kubernetes-dashboard kubernetes-dashboard-6bb65fcc49-7wz6q 0/1 CrashLoopBackOff 18 71m 10.244.1.20 node1 ========================================= [root@k8s-master ~]# kubectl describe pod kubernetes-dashboard-6bb65fcc49-7wz6q -n kubernetes-dashboard Name: kubernetes-dashboard-6bb65fcc49-7wz6q Namespace: kubernetes-dashboard Priority: 0 Node: node1/192.168.56.14 Start Time: Mon, 23 Sep 2019 12:56:18 +0530 Labels: k8s-app=kubernetes-dashboard pod-template-hash=6bb65fcc49 Annotations: &lt;none&gt; Status: Running IP: 10.244.1.20 Controlled By: ReplicaSet/kubernetes-dashboard-6bb65fcc49 Containers: kubernetes-dashboard: Container ID: docker://2cbbbc9b95a43a5242abe13f8178dc589487abcfccaea06ff4be70781f4c3711 Image: kubernetesui/dashboard:v2.0.0-beta4 Image ID: docker-pullable://docker.io/kubernetesui/dashboard@sha256:a35498beec44376efcf8c4478eebceb57ec3ba39a6579222358a1ebe455ec49e Port: 8443/TCP Host Port: 0/TCP Args: --auto-generate-certificates --namespace=kubernetes-dashboard State: Waiting Reason: CrashLoopBackOff Last State: Terminated 
Reason: Error Exit Code: 2 Started: Mon, 23 Sep 2019 14:10:27 +0530 Finished: Mon, 23 Sep 2019 14:10:28 +0530 Ready: False Restart Count: 19 Liveness: http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3 Environment: &lt;none&gt; Mounts: /certs from kubernetes-dashboard-certs (rw) /tmp from tmp-volume (rw) /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-q7j4z (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: kubernetes-dashboard-certs: Type: Secret (a volume populated by a Secret) SecretName: kubernetes-dashboard-certs Optional: false tmp-volume: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: &lt;unset&gt; kubernetes-dashboard-token-q7j4z: Type: Secret (a volume populated by a Secret) SecretName: kubernetes-dashboard-token-q7j4z Optional: false QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node-role.kubernetes.io/master:NoSchedule node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning BackOff &lt;invalid&gt; (x354 over 63m) kubelet, node1 Back-off restarting failed container [root@k8s-master ~]# </code></pre>
muku
<p>After realizing that the chart <code>stable/kubernetes-dashboard</code> is outdated, I found that you need to apply this manifest:</p> <pre class="lang-sh prettyprint-override"><code>kubectl apply -f \ https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml </code></pre> <p>However, migrating from a Helm chart to hard-coded manifests is not acceptable. After some searching, the related chart now lives <a href="https://github.com/kubernetes/dashboard/tree/master/aio/deploy/helm-chart/kubernetes-dashboard" rel="nofollow noreferrer">under this Git repo subfolder</a>. It is no longer in the <code>stable</code> repo; use the following instead:</p> <pre class="lang-sh prettyprint-override"><code>helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/ helm install kubernetes-dashboard/kubernetes-dashboard --name my-release </code></pre> <p>Good luck! This will fix all your issues since this chart considers all dependencies.</p> <p>By the way:</p> <ul> <li>even the image repository is no longer <code>k8s.gcr.io/kubernetes-dashboard-amd64</code>; it is now <code>kubernetesui/dashboard</code> on Docker Hub</li> <li>There is a sidecar for the metrics <strong>scraper</strong> which is not defined in the stable chart.</li> </ul>
Abdennour TOUMI
<p>I'm having trouble getting my client container talking to the API container, I was hoping to use a fanout ingress as so:</p> <pre><code>foo.bar.com/api - routes to API container foo.bar.com - routes to client container </code></pre> <p>My setup does render the client no problem, but all calls to the API result in 404s - so it's obviously not working. I think the 404 behaviour is a red herring, it's probably looking for Angular routes that match <code>/api</code> and can't find any, I don't <em>think</em> the routing is even happening. My Ingress yaml is below, I can share any other parts of the config if needed. Any pointers much appreciated!</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: namespace: foo-bar name: foo-bar-ingress annotations: kubernetes.io/ingress.class: nginx certmanager.k8s.io/cluster-issuer: letsencrypt-prod nginx.ingress.kubernetes.io/from-to-www-redirect: "true" spec: tls: - hosts: - foo.bar.com secretName: tls-secret-prod rules: - host: foo-bar.com http: paths: - backend: serviceName: server servicePort: 3000 path: /api - backend: serviceName: client servicePort: 80 path: / </code></pre>
Mark
<p>As suggested by @HelloWorld in the comments, checking the API container's routes revealed that the issue was misconfigured routing in the server itself, not in the ingress rules. A quick way to confirm this is shown below.</p>
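<p>For anyone hitting the same symptom, a minimal way to separate ingress routing from application routing (assuming the <code>server</code> Service and port 3000 from the question, and a hypothetical <code>/api/health</code> endpoint) is to call the backend directly and then through the ingress:</p> <pre><code># talk to the API service directly, bypassing the ingress
kubectl port-forward svc/server 3000:3000 &amp;
curl -i http://localhost:3000/api/health

# same path, but through the ingress controller
curl -i https://foo.bar.com/api/health
</code></pre> <p>If the direct call also returns 404, the API's own routes are the problem rather than the ingress, which is what turned out to be the case here.</p>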
Mark
<p>I am using this command in Helm 3 to install kubernetes dashboard 2.2.0 in kubernetes v1.18,the OS is CentOS 8:</p> <pre><code>helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/ helm repo update helm install k8s-dashboard/kubernetes-dashboard --generate-name --version 2.2.0 </code></pre> <p>the installing is success,but when I check the pod status,it shows <code>CrashLoopBackOff</code> like this:</p> <pre><code>[root@localhost ~]# kubectl get pods --all-namespaces -o wide NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES default kubernetes-dashboard-1594440918-549c59c487-h8z9l 0/1 CrashLoopBackOff 15 87m 10.11.157.65 k8sslave1 &lt;none&gt; &lt;none&gt; default traefik-5f95ff4766-vg8gx 1/1 Running 0 34m 10.11.125.129 k8sslave2 &lt;none&gt; &lt;none&gt; kube-system calico-kube-controllers-75d555c48-lt4jr 1/1 Running 0 36h 10.11.102.134 localhost.localdomain &lt;none&gt; &lt;none&gt; kube-system calico-node-6rj58 1/1 Running 0 14h 192.168.31.30 k8sslave1 &lt;none&gt; &lt;none&gt; kube-system calico-node-czhww 1/1 Running 0 36h 192.168.31.29 localhost.localdomain &lt;none&gt; &lt;none&gt; kube-system calico-node-vwr5w 1/1 Running 0 36h 192.168.31.31 k8sslave2 &lt;none&gt; &lt;none&gt; kube-system coredns-546565776c-45jr5 1/1 Running 40 4d13h 10.11.102.132 localhost.localdomain &lt;none&gt; &lt;none&gt; kube-system coredns-546565776c-zjwg7 1/1 Running 0 4d13h 10.11.102.129 localhost.localdomain &lt;none&gt; &lt;none&gt; kube-system etcd-localhost.localdomain 1/1 Running 0 14h 192.168.31.29 localhost.localdomain &lt;none&gt; &lt;none&gt; kube-system kube-apiserver-localhost.localdomain 1/1 Running 0 14h 192.168.31.29 localhost.localdomain &lt;none&gt; &lt;none&gt; kube-system kube-controller-manager-localhost.localdomain 1/1 Running 0 14h 192.168.31.29 localhost.localdomain &lt;none&gt; &lt;none&gt; kube-system kube-proxy-8z9vs 1/1 Running 0 38h 192.168.31.31 k8sslave2 &lt;none&gt; &lt;none&gt; kube-system kube-proxy-dnpc6 1/1 Running 0 4d13h 192.168.31.29 localhost.localdomain &lt;none&gt; &lt;none&gt; kube-system kube-proxy-s5t5r 1/1 Running 0 14h 192.168.31.30 k8sslave1 &lt;none&gt; &lt;none&gt; kube-system kube-scheduler-localhost.localdomain 1/1 Running 0 14h 192.168.31.29 localhost.localdomain &lt;none&gt; &lt;none&gt; </code></pre> <p>so I just check the kubernetes dashboard pod logs and see what happen:</p> <pre><code>[root@localhost ~]# kubectl logs kubernetes-dashboard-1594440918-549c59c487-h8z9l 2020/07/11 05:44:13 Starting overwatch 2020/07/11 05:44:13 Using namespace: default 2020/07/11 05:44:13 Using in-cluster config to connect to apiserver 2020/07/11 05:44:13 Using secret token for csrf signing 2020/07/11 05:44:13 Initializing csrf token from kubernetes-dashboard-csrf secret panic: Get &quot;https://10.20.0.1:443/api/v1/namespaces/default/secrets/kubernetes-dashboard-csrf&quot;: dial tcp 10.20.0.1:443: i/o timeout goroutine 1 [running]: github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc0000a2080) /home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x446 github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...) 
/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66 github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc0005a4100) /home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:501 +0xc6 github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0xc0005a4100) /home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:469 +0x47 github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...) /home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:550 main.main() /home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:105 +0x20d </code></pre> <p>I am tried to access this resource using curl in host machine to see if the master server is response properly:</p> <pre><code>[root@localhost ~]# curl -k https://10.20.0.1:443/api/v1/namespaces/default/secrets/kubernetes-dashboard-csrf { &quot;kind&quot;: &quot;Status&quot;, &quot;apiVersion&quot;: &quot;v1&quot;, &quot;metadata&quot;: { }, &quot;status&quot;: &quot;Failure&quot;, &quot;message&quot;: &quot;secrets \&quot;kubernetes-dashboard-csrf\&quot; is forbidden: User \&quot;system:anonymous\&quot; cannot get resource \&quot;secrets\&quot; in API group \&quot;\&quot; in the namespace \&quot;default\&quot;&quot;, &quot;reason&quot;: &quot;Forbidden&quot;, &quot;details&quot;: { &quot;name&quot;: &quot;kubernetes-dashboard-csrf&quot;, &quot;kind&quot;: &quot;secrets&quot; }, &quot;code&quot;: 403 } </code></pre> <p>this is my master node and k8sslave1 firewalld status:</p> <pre><code>[root@localhost ~]# systemctl status firewalld ● firewalld.service - firewalld - dynamic firewall daemon Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled) Active: inactive (dead) Docs: man:firewalld(1) [root@k8sslave1 ~]# systemctl status firewalld ● firewalld.service - firewalld - dynamic firewall daemon Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor &gt; Active: inactive (dead) Docs: man:firewalld(1) lines 1-4/4 (END) </code></pre> <p>so where is the problem? what should I do to make the dashbord running success?</p>
Dolphin
<p>The problem is that you didn't specify a <strong>ClusterRole</strong> for the service account attached to the dashboard pod.</p> <p>I used this chart a few months ago and I had to provide a custom values.yaml like the following:</p> <pre><code># myvalues.yaml #these are mine rbac: clusterReadOnlyRole: true # &lt;--- 🔴 YOU NEED this one clusterAdminRole: false extraArgs: - --enable-skip-login - --enable-insecure-login - --system-banner=&quot;Welcome to Company.com Kubernetes Cluster&quot; </code></pre> <p>As you can see, <code>rbac.enabled</code> is not enough; you also need to specify <code>rbac.clusterReadOnlyRole=true</code>.</p> <p>Or, if you want to give the Dashboard more access, set <code>rbac.clusterAdminRole</code> to true.</p> <p>Now you can install (or upgrade) your Helm release using the values file above:</p> <pre><code>helm install &lt;generate-release-name&gt; k8s-dashboard/kubernetes-dashboard \ --version 2.2.0 \ -f myvalues.yaml </code></pre>
Abdennour TOUMI
<p>Here is my deployment &amp; service file for Django. The 3 pods generated from deployment.yaml works, but the resource request and limits are being ignored.</p> <p>I have seen a lot of tutorials about applying resource specifications on Pods but not on Deployment files, is there a way around it?</p> <p>Here is my yaml file:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: app: djangoapi type: web name: djangoapi namespace: "default" spec: replicas: 3 template: metadata: labels: app: djangoapi type: web spec: containers: - name: djangoapi image: wbivan/app:v0.8.1a imagePullPolicy: Always args: - gunicorn - api.wsgi - --bind - 0.0.0.0:8000 resources: requests: memory: "64Mi" cpu: "250m" limits: memory: "128Mi" cpu: "500m" envFrom: - configMapRef: name: djangoapi-config ports: - containerPort: 8000 resources: {} imagePullSecrets: - name: regcred restartPolicy: Always --- apiVersion: v1 kind: Service metadata: name: djangoapi-svc namespace: "default" labels: app: djangoapi spec: ports: - port: 8000 protocol: TCP targetPort: 8000 selector: app: djangoapi type: web type: NodePort </code></pre>
Ivan
<p>There is an extra <code>resources</code> attribute under your container definition, after <code>ports</code>.</p> <pre><code>resources: {} </code></pre> <p>This overrides the original resource definition. Remove it and apply the manifest again; the container section should end up looking like the sketch below.</p>
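<p>For reference, a trimmed-down version of the container spec with the duplicate attribute removed (image, ports and values taken from the question) would look roughly like this:</p> <pre><code>      containers:
      - name: djangoapi
        image: wbivan/app:v0.8.1a
        imagePullPolicy: Always
        args: ["gunicorn", "api.wsgi", "--bind", "0.0.0.0:8000"]
        resources:              # keep only this resources block
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        envFrom:
        - configMapRef:
            name: djangoapi-config
        ports:
        - containerPort: 8000
</code></pre> <p>After applying, <code>kubectl describe pod &lt;pod-name&gt;</code> should show the requests and limits being honoured.</p>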
‌‌R‌‌‌.
<p>We have a bunch of pods that use RabbitMQ. If the pods are shut down by K8S with SIGTERM, we have found that our RMQ client (Python Pika) has no time to close the connection to RMQ Server causing it to think those clients are still alive until 2 heartbeats are missed.</p> <p>Our investigation has turned up that on SIGTERM, K8S kills all in- and most importantly OUTbound TCP connections, among other things (removing endpoints, etc.) Tried to see if any connections were still possible during preStop hooks, but preStop seems very internally focused and no traffic got out.</p> <p>Has anybody else experienced this issue and solved it? All we need to do is be able to get a message out the door before kubelet slams the door. Our pods are not K8S &quot;Services&quot; so some <a href="https://stackoverflow.com/questions/62567844/kubernetes-graceful-shutdown-continue-to-serve-traffic-during-termination">suggestions</a> didn't help.</p> <p>Steps to reproduce:</p> <ol> <li>add preStop hook sleep 30s to Sender pod</li> <li>tail logs of Receiver pod to see inbound requests</li> <li>enter Sender container's shell &amp; loop curl Receiver - requests appear in the logs</li> <li><code>k delete pod</code> to start termination of Sender pod</li> <li>curl requests immediately begin to hang in Sender, nothing in the Receiver logs</li> </ol>
Spanky
<p>We tested this extensively and found that new EKS clusters, with Calico installed (see below), will experience this problem unless Calico is upgraded. Networking will be immediately killed when a pod is sent SIGTERM instead of waiting for the grace period. If you're experiencing this problem and are using Calico, please check the version of Calico against this thread:</p> <p><a href="https://github.com/projectcalico/calico/issues/4518" rel="nofollow noreferrer">https://github.com/projectcalico/calico/issues/4518</a></p> <p>If you're installing Calico using the AWS yaml found here: <a href="https://github.com/aws/amazon-vpc-cni-k8s/tree/master/config" rel="nofollow noreferrer">https://github.com/aws/amazon-vpc-cni-k8s/tree/master/config</a></p> <p>Be advised that the fixes have NOT landed in any of the released versions; we had to install from master, like so:</p> <pre><code> kubectl apply \ -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/master/calico-operator.yaml \ -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/config/master/calico-crs.yaml </code></pre> <p>and we also upgraded the AWS CNI for good measure, although that wasn't explicitly required to solve our issue:</p> <pre><code> kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/v1.8.0/config/v1.8/aws-k8s-cni.yaml kubectl apply -f https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/v1.9.1/config/v1.9/aws-k8s-cni.yaml </code></pre> <p>Here's a bunch of <a href="https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html" rel="nofollow noreferrer">confusing documentation from AWS</a> that makes it seem like you should switch to the new AWS &quot;add-ons&quot; to manage this stuff, but after an extensive discussion with support, we were advised against it.</p>
Spanky
<p>Today, it has been 366 days since the kubernetes 1.17 cluster started running. Accordingly, all PKI certificates have expired. Since we are using k8s 1.17, we are able to renew the certificates:</p> <pre class="lang-sh prettyprint-override"><code>kubeadm alpha certs renew all </code></pre> <p>Everything is OK, except Jenkins: it is still not able to spin up new pods as agents. :(</p> <ul> <li>no pipeline agent ever reaches Running; it stays Pending and is then recreated</li> </ul> <p><a href="https://i.stack.imgur.com/lQp5z.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lQp5z.png" alt="enter image description here" /></a></p> <ul> <li>The problem is that &quot;Test Connection&quot; works even though Jenkins cannot provision agents</li> </ul> <p><a href="https://i.stack.imgur.com/umUuF.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/umUuF.png" alt="enter image description here" /></a></p> <ul> <li>The other apps which use (<a href="https://kubernetes.default" rel="nofollow noreferrer">https://kubernetes.default</a>) internally are working fine (like ArgoCD, ...)</li> </ul> <p>I tried to:</p> <ul> <li>Restart kube-apiserver (delete the kube-apiserver pod in kube-system so it is recreated automatically)</li> <li>Safely restart Jenkins</li> </ul> <p>After all these trials, the same behavior in Jenkins:</p> <ul> <li>no pipeline agent ever reaches Running; it stays Pending and is then recreated</li> </ul>
Abdennour TOUMI
<p>Actually, the issue was not Jenkins but the Kubernetes master: the control plane had to be restarted. I rebooted the master to make sure that all control-plane components were refreshed, and the issue is fixed now. A lighter-weight alternative is sketched below.</p>
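<p>If a full reboot is not an option, one way to bounce the static control-plane pods after renewing the certificates (assuming a default kubeadm layout where the manifests live in <code>/etc/kubernetes/manifests</code>) is roughly:</p> <pre><code># run on the master node
mkdir -p /tmp/k8s-manifests
mv /etc/kubernetes/manifests/*.yaml /tmp/k8s-manifests/   # kubelet stops the static pods
sleep 30                                                  # give kubelet time to tear them down
mv /tmp/k8s-manifests/*.yaml /etc/kubernetes/manifests/   # kubelet recreates them with the renewed certs
systemctl restart kubelet
</code></pre> <p>This is only a sketch: the paths and the sleep are assumptions, so adapt them to your environment before relying on it.</p>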
Abdennour TOUMI
<p>I've build docker image locally:</p> <pre><code>docker build -t backend -f backend.docker </code></pre> <p>Now I want to create deployment with it:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: backend-deployment spec: selector: matchLabels: tier: backend replicas: 2 template: metadata: labels: tier: backend spec: containers: - name: backend image: backend imagePullPolicy: IfNotPresent # This should be by default so ports: - containerPort: 80 </code></pre> <p><code>kubectl apply -f file_provided_above.yaml</code> works, but then I have following pods statuses:</p> <pre><code>$ kubectl get pods NAME READY STATUS RESTARTS AGE backend-deployment-66cff7d4c6-gwbzf 0/1 ImagePullBackOff 0 18s </code></pre> <p>Before that it was <code>ErrImagePull</code>. So, my question is, how to tell it to use local docker images? Somewhere on the internet I read that I need to build images using <code>microk8s.docker</code> but it <a href="https://github.com/ubuntu/microk8s/issues/382" rel="noreferrer">seems to be removed</a>.</p>
Bunyk
<p>Found docs on how to use private registry: <a href="https://microk8s.io/docs/working" rel="noreferrer">https://microk8s.io/docs/working</a></p> <p>First it needs to be enabled:</p> <pre><code>microk8s.enable registry </code></pre> <p>Then images pushed to registry:</p> <pre><code>docker tag backend localhost:32000/backend docker push localhost:32000/backend </code></pre> <p>And then in above config <code>image: backend</code> needs to be replaced with <code>image: localhost:32000/backend</code></p>
Bunyk
<p>I am unable to scale my AKS cluster vertically. Currently, I have 3 nodes in my cluster with 2 cores and 8 GB RAM each, and I am trying to upgrade them to 16 cores and 64 GB RAM. How do I do it? I tried scaling the VM scale set; the Azure portal shows it as scaled, but when I do &quot;kubectl get nodes -o wide&quot; it still shows the old size.</p> <p>Any leads will be helpful. Thanks, Abhishek</p>
Abhishek Anvekar
<p>Vertical scaling or changing the node pool VM size is not supported. You need to create a new node pool and schedule your pods on the new nodes.</p> <p><a href="https://github.com/Azure/AKS/issues/1556#issuecomment-615390245" rel="nofollow noreferrer">https://github.com/Azure/AKS/issues/1556#issuecomment-615390245</a></p> <blockquote> <p>this UX issues is due to how the VMSS is managed by AKS. Since AKS is a managed service, we don't support operations done outside of the AKS API to the infrastructure resources. In this example you are using the VMSS portal to resize, which uses VMSS APIs to resize the resource and as a result has unexpected changes.</p> <p>AKS nodepools don't support resize in place, so the supported way to do this is to create a new nodepool with a new target and delete the previous one. This needs to be done through the AKS portal UX. This maintains the goal state of the AKS node pool, as at the moment the portal is showing the VMSize AKS knows you have because that is what was originally requested.</p> </blockquote>
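<p>In practice that usually looks something like the following (the resource group, cluster and pool names are placeholders; pick a VM size matching the 16-core / 64 GB target, e.g. Standard_D16s_v3):</p> <pre><code># add a new pool with the bigger VM size
az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name bigpool \
  --node-count 3 \
  --node-vm-size Standard_D16s_v3

# drain the old nodes so workloads move to the new pool
kubectl cordon &lt;old-node-name&gt;
kubectl drain &lt;old-node-name&gt; --ignore-daemonsets --delete-local-data

# finally remove the old pool
az aks nodepool delete \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name &lt;old-pool-name&gt;
</code></pre>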
CSharpRocks
<p>So before I used kubernetes the general rule I used for running multiple express instances on a VM was one per cpu. That seemed to give the best performance. For kubernetes, would it be wise to have a replica per node cpu? Or should I let the horizontalpodautoscaler decide? The cluster has a node autoscaler. Thanks for any advice!</p>
danthegoodman
<p>Good question!</p> <p>You need to consider 4 things:</p> <ol> <li><p>Run the pod using a <strong>Deployment</strong> so you enable replication, rolling updates, and so on.</p> </li> <li><p>Set <code>resources.limits</code> in your container definition. This is mandatory for autoscaling, because the HPA monitors the percentage of usage, and if there is <strong>NO limit</strong> there is <strong>never a percentage</strong>, so the HPA will never reach the <strong>threshold</strong>.</p> </li> <li><p>Set <code>resources.requests</code>. This helps the scheduler estimate how much the app needs, so the pod gets assigned to a suitable Node based on its current capacity.</p> </li> <li><p>Set the HPA threshold: the percentage of usage (CPU, memory) at which the HPA will trigger a scale out or scale in (a sample HPA manifest is sketched at the end of this answer).</p> </li> </ol> <p>For your situation, you said &quot;one per cpu&quot;; then it should be:</p> <pre><code> containers: - name: express image: myapp-node #..... resources: requests: memory: &quot;256Mi&quot; cpu: &quot;750m&quot; limits: memory: &quot;512Mi&quot; cpu: &quot;1000m&quot; # &lt;-- 🔴 match what you have in the legacy deployment </code></pre> <p>You may wonder why I put memory limits/requests without any input from your side? The answer is that I picked them arbitrarily. Your task is to monitor your application and adjust all these values accordingly.</p>
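<p>To make point 4 concrete, here is a minimal HPA manifest for such a Deployment (the names are assumptions based on the snippet above):</p> <pre><code>apiVersion: autoscaling/v2beta2   # use autoscaling/v2 on Kubernetes 1.23+
kind: HorizontalPodAutoscaler
metadata:
  name: express-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: express          # assumed Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # target 70% average CPU utilization
</code></pre> <p>Combined with the node autoscaler you already have, the HPA then decides the replica count while the cluster adds capacity when needed.</p>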
Abdennour TOUMI
<p>I know Dapr has support for service discovery built in, but how does that work when deployed to Kubernetes in a cross-cluster setup? I can't seem to find an example or docs.</p>
Dan Soltesz
<p>Dapr is not designed for that use case. It might be better to use something like <a href="https://skupper.io/" rel="nofollow noreferrer">https://skupper.io/</a> to create a flat cross-cluster network and then run Dapr on top. This is a valid scenario presented by a user in the July community call.</p>
Bilgin Ibryam
<p>I've created a Kubernetes cluster with AWS ec2 instances using kubeadm but when I try to create a service with type LoadBalancer I get an EXTERNAL-IP pending status</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 123m nginx LoadBalancer 10.107.199.170 &lt;pending&gt; 8080:31579/TCP 45m52s </code></pre> <p>My create command is</p> <pre><code>kubectl expose deployment nginx --port 8080 --target-port 80 --type=LoadBalancer </code></pre> <p>I'm not sure what I'm doing wrong.</p> <p>What I expect to see is an EXTERNAL-IP address given for the load balancer.</p> <p>Has anyone had this and successfully solved it, please?</p> <p>Thanks.</p>
Hammed
<p>You need to set up the <strong>interface</strong> between k8s and AWS, which is the <a href="https://github.com/kubernetes/cloud-provider-aws#readme" rel="noreferrer">aws-cloud-provider-controller</a>.</p> <pre><code>apiVersion: kubeadm.k8s.io/v1beta1 kind: InitConfiguration nodeRegistration: kubeletExtraArgs: cloud-provider: aws </code></pre> <p>More details can be found here:</p> <ul> <li><a href="https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/" rel="noreferrer">https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/</a></li> <li><a href="https://blog.heptio.com/setting-up-the-kubernetes-aws-cloud-provider-6f0349b512bd" rel="noreferrer">https://blog.heptio.com/setting-up-the-kubernetes-aws-cloud-provider-6f0349b512bd</a></li> <li><a href="https://blog.scottlowe.org/2019/02/18/kubernetes-kubeadm-and-the-aws-cloud-provider/" rel="noreferrer">https://blog.scottlowe.org/2019/02/18/kubernetes-kubeadm-and-the-aws-cloud-provider/</a></li> <li><a href="https://itnext.io/kubernetes-part-2-a-cluster-set-up-on-aws-with-aws-cloud-provider-and-aws-loadbalancer-f02c3509f2c2" rel="noreferrer">https://itnext.io/kubernetes-part-2-a-cluster-set-up-on-aws-with-aws-cloud-provider-and-aws-loadbalancer-f02c3509f2c2</a></li> </ul> <p>Once you finish this setup, you will not only control the creation of an AWS LB for each k8s Service of type LoadBalancer, but you will also be able to control many of its settings using <strong>annotations</strong>.</p> <pre><code>apiVersion: v1 kind: Service metadata: name: example namespace: kube-system labels: run: example annotations: service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:xx-xxxx-x:xxxxxxxxx:xxxxxxx/xxxxx-xxxx-xxxx-xxxx-xxxxxxxxx #replace this value service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http spec: type: LoadBalancer ports: - port: 443 targetPort: 5556 protocol: TCP selector: app: example </code></pre> <p><a href="https://kubernetes.io/docs/concepts/cluster-administration/cloud-providers/#aws" rel="noreferrer">Different settings can be applied to a load balancer service in AWS using <strong>annotations</strong>.</a></p>
Abdennour TOUMI
<p>As I understood from the documentation, if you use the Azure portal to create an AKS cluster, you can't use the Basic load balancer, which is free in my current subscription. So how can I use the Basic load balancer with AKS?</p>
Mou
<p>You must use the CLI to create an AKS with a Basic load balancer.</p> <pre><code>az aks create -g MyRG -n MyCluster --load-balancer-sku basic </code></pre> <p>It's clearly stated in the infobox in the Portal.</p> <p><a href="https://i.stack.imgur.com/66J01.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/66J01.png" alt="enter image description here" /></a></p>
CSharpRocks
<p>My requirement is to scale up pods on custom metrics: as the number of pending messages in the queue increases, the number of pods has to increase to process the jobs. In Kubernetes, scale-up is working fine with the prometheus adapter &amp; prometheus operator.</p> <p>I have long-running processes in the pods, but the HPA checks the custom metrics and tries to scale down; because of this it kills a pod in the middle of an operation and loses that message. How can I make the HPA kill only free pods where no process is running? </p> <h2>AdapterService to collect custom metrics</h2> <ul> <li>seriesQuery: '{namespace="default",service="hpatest-service"}' resources: overrides: namespace: resource: "namespace" service: resource: "service" name: matches: "msg_consumergroup_lag" metricsQuery: 'avg_over_time(msg_consumergroup_lag{topic="test",consumergroup="test"}[1m])'</li> </ul> <h2>HPA Configuration</h2> <ul> <li>type: Object object: describedObject: kind: Service name: custommetric-service metric: name: msg_consumergroup_lag target: type: Value value: 2</li> </ul>
Santhoo Kumar
<p>At present the HPA cannot be configured to accommodate workloads of this nature. The HPA simply sets the replica count on the deployment to a desired value according to the scaling algorithm, and the deployment chooses one or more pods to terminate.</p> <p>There is a lot of discussion on this topic in <a href="https://github.com/kubernetes/kubernetes/issues/45509" rel="nofollow noreferrer">this Kubernetes issue</a> that may be of interest to you. It is not solved by the HPA, and may never be. There may need to be a different kind of autoscaler for this type of workload. Some suggestions are given in the link that may help you in defining one of these.</p> <p>If I was to take this on myself, I would create a new controller, with corresponding CRD containing a job definition and the scaling requirements. Instead of scaling deployments, I would have it launch jobs. I would have the jobs do their work (process the queue) until they became idle (no items in the queue) then exit. The controller would only scale up, by adding jobs, never down. The jobs themselves would scale down by exiting when the queue is empty.</p> <p>This would require that your jobs be able to detect when they become idle, by checking the queue and exiting if there is nothing there. If your queue read blocks forever, this would not work and you would need a different solution.</p> <p>The <a href="https://book.kubebuilder.io" rel="nofollow noreferrer">kubebuilder project</a> has an excellent example of a job controller. I would start with that and extend it with the ability to check your published metrics and start the jobs accordingly.</p> <p>Also see <a href="https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/" rel="nofollow noreferrer">Fine Parallel Processing Using a Work Queue</a> in the Kubernetes documentation.</p>
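<p>To illustrate the kind of object such a controller would create, here is a rough sketch of a Job whose worker exits on its own once the queue is empty (the image name and the queue-handling behaviour are assumptions, not something the HPA or Kubernetes provides for you):</p> <pre><code>apiVersion: batch/v1
kind: Job
metadata:
  generateName: queue-worker-    # the controller creates one Job per scale-up decision
spec:
  parallelism: 1
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: registry.example.com/queue-worker:latest   # hypothetical worker image
        # the worker consumes messages until the consumer-group lag reaches 0,
        # then exits 0 so the Job completes instead of being killed mid-message
        env:
        - name: KAFKA_TOPIC
          value: "test"
        - name: CONSUMER_GROUP
          value: "test"
</code></pre> <p>Scaling down then becomes a non-event: no controller ever terminates a busy pod, because workers simply stop themselves when there is nothing left to do.</p>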
dlaidlaw
<p>I created registry credits and when I apply on pod like this:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: private-reg spec: containers: - name: private-reg-container image: registry.io.io/simple-node imagePullSecrets: - name: regcred </code></pre> <p>it works succesfly pull image</p> <p>But if I try to do this:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: node123 namespace: node123 spec: replicas: 5 strategy: type: RollingUpdate rollingUpdate: maxSurge: 2 maxUnavailable: 0 selector: matchLabels: name: node123 template: metadata: labels: name: node123 spec: containers: - name: node123 image: registry.io.io/simple-node ports: - containerPort: 3000 imagePullSecrets: - name: regcred </code></pre> <p>On pod will get error: ImagePullBackOff</p> <p>when I describe it getting</p> <blockquote> <p>Failed to pull image &quot;registry.io.io/simple-node&quot;: rpc error: code = Unknown desc = Error response from daemon: Get <a href="https://registry.io.io/v2/simple-node/manifests/latest" rel="nofollow noreferrer">https://registry.io.io/v2/simple-node/manifests/latest</a>: no basic auth credentials</p> </blockquote> <p>Anyone know how to solve this issue?</p>
Vladimir Djukic
<p>We always run images from a private registry, and this checklist might help you:</p> <ol> <li><p>Put your params in env variables in your terminal to have a single source of truth:</p> <pre><code>export DOCKER_HOST=registry.io.io export DOCKER_USER=&lt;your-user&gt; export DOCKER_PASS=&lt;your-pass&gt; </code></pre> </li> <li><p>Make sure that you can authenticate &amp; that the image really exists</p> <pre><code>echo $DOCKER_PASS | docker login -u$DOCKER_USER --password-stdin $DOCKER_HOST docker pull ${DOCKER_HOST}/simple-node </code></pre> </li> <li><p>Make sure that you created the Docker-registry secret in the same namespace as the pod/deployment;</p> <pre><code>namespace=mynamespace # default kubectl -n ${namespace} create secret docker-registry regcred \ --docker-server=${DOCKER_HOST} \ --docker-username=${DOCKER_USER} \ --docker-password=${DOCKER_PASS} \ [email protected] </code></pre> </li> <li><p>Patch the service account used by the Pod with the secret</p> <pre><code>namespace=mynamespace kubectl -n ${namespace} patch serviceaccount default \ -p '{&quot;imagePullSecrets&quot;: [{&quot;name&quot;: &quot;regcred&quot;}]}' # if the pod uses another service account, # replace &quot;default&quot; with the relevant service account </code></pre> <p>or</p> <p>Add <code>imagePullSecrets</code> to the pod spec:</p> <pre><code>imagePullSecrets: - name: regcred containers: - .... </code></pre> </li> </ol>
Abdennour TOUMI
<p>Created a cluster in EKS (Kubernetes 1.11.5) with multiple node groups however I'm noticing that in the <code>extension-apiserver-authentication</code> configmap that <code>client-ca-file</code> key is missing.</p> <p>I assume this is due to the way Kubernetes API service is initiated. Has anyone else come across this issue ?</p> <p>I came across this problem while deploying certificate manager which queries the api server with <code>GET https://10.100.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication</code>.</p> <p>In GKE this isnt a problem as <code>extension-apiserver-authentication</code> configmap already includes <code>client-ca-file</code>.</p> <p><code>extension-apiserver-authentication</code> configmap in AWS,</p> <pre><code>apiVersion: v1 data: requestheader-allowed-names: '["front-proxy-client"]' requestheader-client-ca-file: | &lt;certificate file&gt; requestheader-extra-headers-prefix: '["X-Remote-Extra-"]' requestheader-group-headers: '["X-Remote-Group"]' requestheader-username-headers: '["X-Remote-User"]' kind: ConfigMap metadata: creationTimestamp: 2019-01-14T04:56:51Z name: extension-apiserver-authentication namespace: kube-system resourceVersion: "39" selfLink: /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication uid: ce2b6f64-17b8-11e9-a6dd-021a269d3ce8 </code></pre> <p>However in GKE,</p> <pre><code>apiVersion: v1 data: client-ca-file: | &lt;client certificate file&gt; requestheader-allowed-names: '["aggregator"]' requestheader-client-ca-file: | &lt;certificate file&gt; requestheader-extra-headers-prefix: '["X-Remote-Extra-"]' requestheader-group-headers: '["X-Remote-Group"]' requestheader-username-headers: '["X-Remote-User"]' kind: ConfigMap metadata: creationTimestamp: 2018-05-24T12:06:33Z name: extension-apiserver-authentication namespace: kube-system resourceVersion: "32" selfLink: /api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication uid: e6c0c431-5f4a-11e8-8d8c-42010a9a0191 </code></pre>
nixgadget
<p>I've also run into this issue while trying to use cert-manager on an AWS EKS cluster. It is possible to inject the certificate yourself using the certificate obtained from the AWS CLI. Follow these steps to address this issue:</p> <p><strong>Obtain the Certificate</strong></p> <p>The certificate is stored Base64 encoded and can be retrieved using</p> <pre class="lang-sh prettyprint-override"><code>aws eks describe-cluster \ --region=${AWS_DEFAULT_REGION} \ --name=${CLUSTER_NAME} \ --output=text \ --query 'cluster.{certificateAuthorityData: certificateAuthority.data}' | base64 -D </code></pre> <p><strong>Inject the Certificate</strong></p> <p>Edit configMap/extension-apiserver-authentication under the kube-system namespace: <code>kubectl -n kube-system edit cm extension-apiserver-authentication</code></p> <p>Under the data section, add the CA under a new config entry named client-ca-file. For example:</p> <pre><code> client-ca-file: | -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- </code></pre>
Justin
<p>Minikube is not starting and prints several error messages. <code>kubectl version</code> gives the following port-related message:</p> <pre><code>iqbal@ThinkPad:~$ kubectl version Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-24T06:54:59Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"} The connection to the server localhost:8080 was refused - did you specify the right host or port? </code></pre>
Iqbal Khan
<p>You didn't give many details, but there are some concerns that I solved a few days ago about minikube issues with kubernetes <strong>1.12</strong>.</p> <p>Indeed, the compatibility matrix between kubernetes and docker recommends running Docker <strong>18.06</strong> with kubernetes <strong>1.12</strong> (Docker 18.09 is not supported yet).</p> <p>Thus, make sure your <code>docker version</code> is NOT above <strong>18.06</strong>. Then, run the following:</p> <pre><code># clean up minikube delete minikube start --vm-driver="none" kubectl get nodes </code></pre> <p>If you are still encountering issues, please give more details, namely the output of <code>minikube logs</code>.</p>
Abdennour TOUMI
<p>Have an <code>ingress-nginx-controller</code> Deployment in kubernetes cluster which passes requests to backend services within the cluster and this all currently works as expected.</p> <p>There is now a requirement within one of the backend services to get the caller's client IP address from within but, with the nginx controller in its default configuration, the backend service is only seeing the kubernetes cluster's network IP address when it calls <code>HttpServletRequest.getRemoteAddr()</code> and not the client caller's IP address.</p> <p>I understand that requests when proxied can have the client IP address overridden which I am assuming is what is happening here as the request goes through the nginx controller.</p> <p>I have added a debug log in the backend service to print all relevant headers within received requests and, with the nginx controller in its default configuration, I am seeing the following <code>X-</code> headers within each request received:</p> <pre><code>x-request-id:3821cea91ffdfd04bed8516586869bdd5 x-real-ip:100.10.75.1 x-forwarded-proto:https x-forwarded-host:www.myexample.com x-forwarded-port:443 x-scheme:https </code></pre> <p>I have read in various places that nginx can be configured to pass the client's IP address in <code>X-Forwarded-For</code> header for example (which as can be seen in debug log above it is not currently included in client requests).</p> <p>Looking at the <code>nginx.conf</code> in the ingress-nginx-controller Deployment, the backend's domain server configuration has the following set:</p> <pre><code> proxy_set_header X-Request-ID $req_id; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header X-Forwarded-Proto $pass_access_scheme; proxy_set_header X-Forwarded-Host $best_http_host; proxy_set_header X-Forwarded-Port $pass_port; proxy_set_header X-Scheme $pass_access_scheme; # Pass the original X-Forwarded-For proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for; </code></pre> <p>Doing a <code>kubectl describe deploy -n ingress-nginx ingress-nginx-controller</code> shows that the nginx controller has the following configmap argument: <code>--configmap=ingress-nginx/ingress-nginx-controller</code> so, using this information, what do I need to include in a custom yaml that I can then apply in order to override the nginx config settings to have it pass the client IP to the backend service?</p>
Going Bananas
<p>In order to have the nginx controller pass the client's ip address to the backend service I applied the following configmap yaml config:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: labels: helm.sh/chart: ingress-nginx-3.10.1 app.kubernetes.io/name: ingress-nginx app.kubernetes.io/instance: ingress-nginx app.kubernetes.io/version: 0.41.2 app.kubernetes.io/managed-by: Helm app.kubernetes.io/component: controller name: ingress-nginx-controller namespace: ingress-nginx data: compute-full-forwarded-for: &quot;true&quot; use-forwarded-headers: &quot;false&quot; real-ip-header: proxy_protocol </code></pre> <p>I believe the configuration section that matters in this config is the line: <code>real-ip-header: proxy_protocol</code></p> <p>With this <code>configmap</code> applied to the <code>nginx controller</code> I can now see the client's IP address (no longer the kubernetes cluster's network IP address) shown in the request's <code>x-real-ip</code> header.</p>
Going Bananas
<p>I have the following role:</p> <p><code>roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: admin</code> </p> <p>When I do a <code>kubectl proxy --port 8080</code> and then try doing</p> <p><code>http://127.0.0.1:8080/apis/extensions/v1beta1/namespaces/cdp/deployments/{deploymentname}</code></p> <p>I get a <code>200</code> and everything works fine. However when I do:</p> <p><code>http://127.0.0.1:8080/apis/extensions/v1beta1/namespaces/cdp/deployments/{deploymentname}/status</code></p> <p>I get forbidden and a <code>403</code> status back . </p> <p>I also am able to do <code>get</code>, <code>create</code>, <code>list</code>,<code>watch</code> on deployments with my <code>admin</code> role . </p> <p>Any idea as to why <code>/status</code> would give forbidden when I clearly have all the necessary permission as admin for my namespace. </p>
Dipayan
<p>You mentioned the role's verbs but not its resources and apiGroups. The <code>/status</code> endpoint is a subresource, so it must be granted explicitly. Make sure the following are set:</p> <pre><code> - apiGroups: - apps - extensions resources: - deployments/status </code></pre>
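<p>For reference, a complete namespaced Role covering both the resource and its status subresource might look like this (the role name is made up; bind it to your user or service account with a RoleBinding in the <code>cdp</code> namespace):</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-status-reader   # hypothetical name
  namespace: cdp
rules:
- apiGroups: ["apps", "extensions"]
  resources: ["deployments", "deployments/status"]
  verbs: ["get", "list", "watch"]
</code></pre>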
Abdennour TOUMI
<p>I'm trying to set up a MongoDB replica set on my Kubernetes cluster but the Secondary member keeps restarting after a few seconds.</p> <p>Here's a couple of things but might be useful to know:</p> <ul> <li>The Mongo server (and client) version is <code>4.0.6</code></li> <li>I'm using the official helm <a href="https://github.com/helm/charts/tree/master/stable/mongodb-replicaset" rel="nofollow noreferrer">mongodb-replicaset</a> chart to set up the replica set and the only custom setting I'm using is <code>enableMajorityReadConcern: false</code></li> <li>Oplog size is configured to ~1228MB (only 4.7 used)</li> <li>It happens with both a Primary-Secondary arch and with a PSA architecture where the Arbiter dies repeatedly like the Secondary member whilst the Primary is always up and running</li> <li>This happens both on my minikube and on a staging cluster on GCP with plenty of free resources (I'm deploying this with no resources limits, see right below for cluster status)</li> </ul> <p>Staging cluster status (4 nodes):</p> <pre><code>NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% gke-staging-pool-1-********-**** 551m 28% 3423Mi 60% gke-staging-pool-1-********-**** 613m 31% 3752Mi 66% gke-staging-pool-1-********-**** 960m 49% 2781Mi 49% gke-staging-pool-1-********-**** 602m 31% 3590Mi 63% </code></pre> <p>At the moment since the Primary seems to be able to stay up and running I managed to keep the cluster live by removing the <code>votes</code> to all members but the primary. This way Mongo doesn't relinquish the primary for not being able to see a majority of the set and my app can still do writes.</p> <p>If I turn the <code>logLevel</code> to <code>5</code> on the Secondary the only error I get is this:</p> <pre><code>2019-04-02T15:11:42.233+0000 D EXECUTOR [replication-0] Executing a task on behalf of pool replication 2019-04-02T15:11:42.233+0000 D EXECUTOR [replication-0] Not reaping because the earliest retirement date is 2019-04-02T15:12:11.980+0000 2019-04-02T15:11:42.233+0000 D NETWORK [RS] Timer received error: CallbackCanceled: Callback was canceled 2019-04-02T15:11:42.235+0000 D NETWORK [RS] Decompressing message with snappy 2019-04-02T15:11:42.235+0000 D ASIO [RS] Request 114334 finished with response: { cursor: { nextBatch: [], id: 46974224885, ns: "local.oplog.rs" }, ok: 1.0, operationTime: Timestamp(1554217899, 1), $replData: { term: 11536, lastOpCommitted: { ts: Timestamp(1554217899, 1), t: 11536 }, lastOpVisible: { ts: Timestamp(1554217899, 1), t: 11536 }, configVersion: 666752, replicaSetId: ObjectId('5c8a607380091703c787b3ff'), primaryIndex: 0, syncSourceIndex: -1 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1554217899, 1), t: 11536 }, lastOpApplied: { ts: Timestamp(1554217899, 1), t: 11536 }, rbid: 1, primaryIndex: 0, syncSourceIndex: -1 }, $clusterTime: { clusterTime: Timestamp(1554217899, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } 2019-04-02T15:11:42.235+0000 D EXECUTOR [RS] Received remote response: RemoteResponse -- cmd:{ cursor: { nextBatch: [], id: 46974224885, ns: "local.oplog.rs" }, ok: 1.0, operationTime: Timestamp(1554217899, 1), $replData: { term: 11536, lastOpCommitted: { ts: Timestamp(1554217899, 1), t: 11536 }, lastOpVisible: { ts: Timestamp(1554217899, 1), t: 11536 }, configVersion: 666752, replicaSetId: ObjectId('5c8a607380091703c787b3ff'), primaryIndex: 0, syncSourceIndex: -1 }, $oplogQueryData: { lastOpCommitted: { ts: Timestamp(1554217899, 1), t: 11536 }, lastOpApplied: { ts: Timestamp(1554217899, 
1), t: 11536 }, rbid: 1, primaryIndex: 0, syncSourceIndex: -1 }, $clusterTime: { clusterTime: Timestamp(1554217899, 1), signature: { hash: BinData(0, 0000000000000000000000000000000000000000), keyId: 0 } } } 2019-04-02T15:11:42.235+0000 D EXECUTOR [replication-5] Executing a task on behalf of pool replication 2019-04-02T15:11:42.235+0000 D REPL [replication-5] oplog fetcher read 0 operations from remote oplog 2019-04-02T15:11:42.235+0000 D EXECUTOR [replication-5] Scheduling remote command request: RemoteCommand 114336 -- target:foodchain-backend-mongodb-replicaset-0.foodchain-backend-mongodb-replicaset.foodchain.svc.cluster.local:27017 db:local expDate:2019-04-02T15:11:47.285+0000 cmd:{ getMore: 46974224885, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 50, term: 11536, lastKnownCommittedOpTime: { ts: Timestamp(1554217899, 1), t: 11536 } } 2019-04-02T15:11:42.235+0000 D ASIO [replication-5] startCommand: RemoteCommand 114336 -- target:foodchain-backend-mongodb-replicaset-0.foodchain-backend-mongodb-replicaset.foodchain.svc.cluster.local:27017 db:local expDate:2019-04-02T15:11:47.285+0000 cmd:{ getMore: 46974224885, collection: "oplog.rs", batchSize: 13981010, maxTimeMS: 50, term: 11536, lastKnownCommittedOpTime: { ts: Timestamp(1554217899, 1), t: 11536 } } 2019-04-02T15:11:42.235+0000 D EXECUTOR [replication-5] Not reaping because the earliest retirement date is 2019-04-02T15:12:11.980+0000 2019-04-02T15:11:42.235+0000 D NETWORK [RS] Timer received error: CallbackCanceled: Callback was canceled 2019-04-02T15:11:42.235+0000 D NETWORK [RS] Compressing message with snappy 2019-04-02T15:11:42.235+0000 D NETWORK [RS] Timer received error: CallbackCanceled: Callback was canceled 2019-04-02T15:11:42.235+0000 D NETWORK [RS] Timer received error: CallbackCanceled: Callback was canceled 2019-04-02T15:11:42.235+0000 D NETWORK [RS] Timer received error: CallbackCanceled: Callback was canceled </code></pre> <p>Given the network error I verified if all members could connect to each other and they can (it's explicitly showed in the logs of all three members).</p> <p><strong>ADDITIONAL INFO:</strong></p> <pre><code>foodchain_rs:PRIMARY&gt; rs.status() { "set" : "foodchain_rs", "date" : ISODate("2019-04-02T15:35:02.640Z"), "myState" : 1, "term" : NumberLong(11536), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "heartbeatIntervalMillis" : NumberLong(2000), "optimes" : { "lastCommittedOpTime" : { "ts" : Timestamp(1554219299, 1), "t" : NumberLong(11536) }, "readConcernMajorityOpTime" : { "ts" : Timestamp(1554219299, 1), "t" : NumberLong(11536) }, "appliedOpTime" : { "ts" : Timestamp(1554219299, 1), "t" : NumberLong(11536) }, "durableOpTime" : { "ts" : Timestamp(1554219299, 1), "t" : NumberLong(11536) } }, "members" : [ { "_id" : 0, "name" : "foodchain-backend-mongodb-replicaset-0.foodchain-backend-mongodb-replicaset.foodchain.svc.cluster.local:27017", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 4376, "optime" : { "ts" : Timestamp(1554219299, 1), "t" : NumberLong(11536) }, "optimeDate" : ISODate("2019-04-02T15:34:59Z"), "syncingTo" : "", "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "", "electionTime" : Timestamp(1554214927, 1), "electionDate" : ISODate("2019-04-02T14:22:07Z"), "configVersion" : 666752, "self" : true, "lastHeartbeatMessage" : "" }, { "_id" : 1, "name" : "foodchain-backend-mongodb-replicaset-1.foodchain-backend-mongodb-replicaset.foodchain.svc.cluster.local:27017", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 10, 
"optime" : { "ts" : Timestamp(1554219299, 1), "t" : NumberLong(11536) }, "optimeDurable" : { "ts" : Timestamp(1554219299, 1), "t" : NumberLong(11536) }, "optimeDate" : ISODate("2019-04-02T15:34:59Z"), "optimeDurableDate" : ISODate("2019-04-02T15:34:59Z"), "lastHeartbeat" : ISODate("2019-04-02T15:35:01.747Z"), "lastHeartbeatRecv" : ISODate("2019-04-02T15:35:01.456Z"), "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "", "syncingTo" : "foodchain-backend-mongodb-replicaset-0.foodchain-backend-mongodb-replicaset.foodchain.svc.cluster.local:27017", "syncSourceHost" : "foodchain-backend-mongodb-replicaset-0.foodchain-backend-mongodb-replicaset.foodchain.svc.cluster.local:27017", "syncSourceId" : 0, "infoMessage" : "", "configVersion" : 666752 } ], "ok" : 1, "operationTime" : Timestamp(1554219299, 1), "$clusterTime" : { "clusterTime" : Timestamp(1554219299, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } } foodchain_rs:PRIMARY&gt; rs.printReplicationInfo() configured oplog size: 1228.8701171875MB log length start to end: 1646798secs (457.44hrs) oplog first event time: Thu Mar 14 2019 14:08:51 GMT+0000 (UTC) oplog last event time: Tue Apr 02 2019 15:35:29 GMT+0000 (UTC) now: Tue Apr 02 2019 15:35:34 GMT+0000 (UTC) foodchain_rs:PRIMARY&gt; db.getReplicationInfo() { "logSizeMB" : 1228.8701171875, "usedMB" : 4.7, "timeDiff" : 1646838, "timeDiffHours" : 457.46, "tFirst" : "Thu Mar 14 2019 14:08:51 GMT+0000 (UTC)", "tLast" : "Tue Apr 02 2019 15:36:09 GMT+0000 (UTC)", "now" : "Tue Apr 02 2019 15:36:11 GMT+0000 (UTC)" } foodchain_rs:PRIMARY&gt; rs.conf() { "_id" : "foodchain_rs", "version" : 666752, "protocolVersion" : NumberLong(1), "writeConcernMajorityJournalDefault" : true, "members" : [ { "_id" : 0, "host" : "foodchain-backend-mongodb-replicaset-0.foodchain-backend-mongodb-replicaset.foodchain.svc.cluster.local:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 1, "tags" : { }, "slaveDelay" : NumberLong(0), "votes" : 1 }, { "_id" : 1, "host" : "foodchain-backend-mongodb-replicaset-1.foodchain-backend-mongodb-replicaset.foodchain.svc.cluster.local:27017", "arbiterOnly" : false, "buildIndexes" : true, "hidden" : false, "priority" : 0, "tags" : { }, "slaveDelay" : NumberLong(0), "votes" : 0 } ], "settings" : { "chainingAllowed" : true, "heartbeatIntervalMillis" : 2000, "heartbeatTimeoutSecs" : 10, "electionTimeoutMillis" : 100, "catchUpTimeoutMillis" : -1, "catchUpTakeoverDelayMillis" : 30000, "getLastErrorModes" : { }, "getLastErrorDefaults" : { "w" : 1, "wtimeout" : 0 }, "replicaSetId" : ObjectId("5c8a607380091703c787b3ff") } } </code></pre>
Francesco Casula
<p>The issue was a too-short <a href="https://docs.mongodb.com/manual/reference/replica-configuration/#rsconf.settings.electionTimeoutMillis" rel="nofollow noreferrer">electionTimeoutMillis</a> setting.</p> <blockquote> <p>Lower values result in faster failover, but increased sensitivity to primary node or network slowness or spottiness.</p> </blockquote> <p>In my case it was set to <code>100ms</code>, and that wasn't enough time for my Secondary to find the Primary member, so it was unable to sync and thus unavailable.</p> <p>I think it's also worth noting that the process was not being killed. The <code>mongod</code> PID was always <code>1</code>, and the uptime shown in <code>top</code> did not coincide with the uptime shown by <code>rs.status()</code> in the mongo shell.</p> <p>What I was doing was monitoring the Secondary uptime via the mongo shell like this:</p> <pre><code>watch -n 1.0 "kubectl -n foodchain exec -it foodchain-backend-mongodb-replicaset-0 -- mongo --eval='rs.status().members.map(m =&gt; m.uptime)'" </code></pre> <p>With that command I could see that the Secondary uptime was never longer than 10s, so I assumed it was restarting itself or being OOM killed. Instead, I think it was trying to fire an election but didn't have the votes to do so, and went silent while restarting. What really confused me was the lack of information in that regard despite having set the <code>logLevel</code> to <code>5</code>.</p>
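<p>For reference, a sketch of how the setting can be raised back to a saner value (MongoDB's default is <code>10000</code> ms) by reconfiguring the replica set from the Primary. The pod and namespace names below are the ones from my setup; adjust them to yours, and note that a chart-managed replica set may re-apply its own config on redeploy:</p> <pre><code># run rs.reconfig() against the current Primary to raise electionTimeoutMillis
kubectl -n foodchain exec foodchain-backend-mongodb-replicaset-0 -- \
  mongo --eval 'cfg = rs.conf(); cfg.settings.electionTimeoutMillis = 10000; rs.reconfig(cfg)'
</code></pre>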
Francesco Casula
<p>I've applied the yaml for the <a href="https://github.com/kubernetes/dashboard" rel="nofollow noreferrer">kubernetes dashboard</a>.</p> <p>Now I want to expose this service with the public IP of my server: <a href="https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/#objectives" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/#objectives</a></p> <p>But there is no service/deployment on my cluster:</p> <pre><code>$ sudo kubectl get services kubernetes NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 63d $ sudo kubectl get deployment NAME READY UP-TO-DATE AVAILABLE AGE </code></pre> <p>What did I do wrong?</p> <p>Thanks for the help</p>
Warok
<p>The command that you ran is fetching objects from the <strong>default</strong> namespace.</p> <p>However, the Dashboard is deployed in the <strong>kube-system</strong> namespace, so you have to query that namespace explicitly:</p> <pre><code>kubectl -n kube-system get services
kubectl -n kube-system get deployments
</code></pre> <p>I am basing this on the link that you shared, <a href="https://github.com/kubernetes/dashboard" rel="nofollow noreferrer">kubernetes dashboard</a>, and specifically its <a href="https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml" rel="nofollow noreferrer">YAML file</a>.</p>
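<p>Once you can see the Dashboard service in <strong>kube-system</strong>, a quick way to reach it, as a sketch (the service name below matches the v1.10.1 manifest; adjust it if yours differs), is either through <code>kubectl proxy</code> or by patching the service type, e.g. to <code>NodePort</code>, to expose it on the node's public IP:</p> <pre><code># option 1: access it locally through the API server proxy
kubectl proxy
# then open http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

# option 2: expose it on a node port (be aware of the security implications)
kubectl -n kube-system patch svc kubernetes-dashboard -p '{"spec": {"type": "NodePort"}}'
kubectl -n kube-system get svc kubernetes-dashboard
</code></pre>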
Abdennour TOUMI
<p>I am stuck with a helm install of jenkins </p> <p>:( </p> <p>please help!</p> <p>I have predefined a storage class via:</p> <pre><code>$ kubectl apply -f generic-storage-class.yaml </code></pre> <p>with generic-storage-class.yaml:</p> <pre><code>kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: generic provisioner: kubernetes.io/aws-ebs parameters: type: gp2 zones: us-east-1a, us-east-1b, us-east-1c fsType: ext4 </code></pre> <p>I then define a PVC via:</p> <pre><code>$ kubectl apply -f jenkins-pvc.yaml </code></pre> <p>with jenkins-pvc.yaml:</p> <pre><code>apiVersion: v1 kind: PersistentVolumeClaim metadata: name: jenkins-pvc namespace: jenkins-project spec: accessModes: - ReadWriteOnce resources: requests: storage: 20Gi </code></pre> <p>I can then see the PVC go into the BOUND status:</p> <pre><code>$ kubectl get pvc --all-namespaces NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE jenkins-project jenkins-pvc Bound pvc-a173294f-7cea-11e9-a90f-161c7e8a0754 20Gi RWO gp2 27m </code></pre> <p>But when I try to Helm install jenkins via:</p> <pre><code>$ helm install --name jenkins \ --set persistence.existingClaim=jenkins-pvc \ stable/jenkins --namespace jenkins-project </code></pre> <p>I get this output:</p> <pre><code>NAME: jenkins LAST DEPLOYED: Wed May 22 17:07:44 2019 NAMESPACE: jenkins-project STATUS: DEPLOYED RESOURCES: ==&gt; v1/ConfigMap NAME DATA AGE jenkins 5 0s jenkins-tests 1 0s ==&gt; v1/Deployment NAME READY UP-TO-DATE AVAILABLE AGE jenkins 0/1 1 0 0s ==&gt; v1/PersistentVolumeClaim NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE jenkins Pending gp2 0s ==&gt; v1/Pod(related) NAME READY STATUS RESTARTS AGE jenkins-6c9f9f5478-czdbh 0/1 Pending 0 0s ==&gt; v1/Secret NAME TYPE DATA AGE jenkins Opaque 2 0s ==&gt; v1/Service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE jenkins LoadBalancer 10.100.200.27 &lt;pending&gt; 8080:31157/TCP 0s jenkins-agent ClusterIP 10.100.221.179 &lt;none&gt; 50000/TCP 0s NOTES: 1. Get your 'admin' user password by running: printf $(kubectl get secret --namespace jenkins-project jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo 2. Get the Jenkins URL to visit by running these commands in the same shell: NOTE: It may take a few minutes for the LoadBalancer IP to be available. You can watch the status of by running 'kubectl get svc --namespace jenkins-project -w jenkins' export SERVICE_IP=$(kubectl get svc --namespace jenkins-project jenkins --template "{{ range (index .status.loadBalancer.ingress 0) }}{{ . }}{{ end }}") echo http://$SERVICE_IP:8080/login 3. Login with the password from step 1 and the username: admin For more information on running Jenkins on Kubernetes, visit: https://cloud.google.com/solutions/jenkins-on-container-engine </code></pre> <p>where I see helm creating a new PersistentVolumeClaim called jenkins.</p> <p>How come helm did not use the "exsistingClaim"</p> <p>I see this as the only helm values for the jenkins release</p> <pre><code>$ helm get values jenkins persistence: existingClaim: jenkins-pvc </code></pre> <p>and indeed it has just made its own PVC instead of using the pre-created one.</p> <pre><code>kubectl get pvc --all-namespaces NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE jenkins-project jenkins Bound pvc-a9caa3ba-7cf1-11e9-a90f-161c7e8a0754 8Gi RWO gp2 6m11s jenkins-project jenkins-pvc Bound pvc-a173294f-7cea-11e9-a90f-161c7e8a0754 20Gi RWO gp2 56m </code></pre> <p>I feel like I am close but missing something basic. 
Any ideas?</p>
gh4x
<p>So per Matthew L Daniel's comment I ran <code>helm repo update</code> and then re-ran the helm install command. This time it did not re-create the PVC but instead used the pre-made one. </p> <p>My previous jenkins chart version was "jenkins-0.35.0"</p> <p>For anyone wondering what the deployment looked like:</p> <pre><code>Name: jenkins Namespace: jenkins-project CreationTimestamp: Wed, 22 May 2019 22:03:33 -0700 Labels: app.kubernetes.io/component=jenkins-master app.kubernetes.io/instance=jenkins app.kubernetes.io/managed-by=Tiller app.kubernetes.io/name=jenkins helm.sh/chart=jenkins-1.1.21 Annotations: deployment.kubernetes.io/revision: 1 Selector: app.kubernetes.io/component=jenkins-master,app.kubernetes.io/instance=jenkins Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable StrategyType: Recreate MinReadySeconds: 0 Pod Template: Labels: app.kubernetes.io/component=jenkins-master app.kubernetes.io/instance=jenkins app.kubernetes.io/managed-by=Tiller app.kubernetes.io/name=jenkins helm.sh/chart=jenkins-1.1.21 Annotations: checksum/config: 867177d7ed5c3002201650b63dad00de7eb1e45a6622e543b80fae1f674a99cb Service Account: jenkins Init Containers: copy-default-config: Image: jenkins/jenkins:lts Port: &lt;none&gt; Host Port: &lt;none&gt; Command: sh /var/jenkins_config/apply_config.sh Limits: cpu: 2 memory: 4Gi Requests: cpu: 50m memory: 256Mi Environment: ADMIN_PASSWORD: &lt;set to the key 'jenkins-admin-password' in secret 'jenkins'&gt; Optional: false ADMIN_USER: &lt;set to the key 'jenkins-admin-user' in secret 'jenkins'&gt; Optional: false Mounts: /tmp from tmp (rw) /usr/share/jenkins/ref/plugins from plugins (rw) /usr/share/jenkins/ref/secrets/ from secrets-dir (rw) /var/jenkins_config from jenkins-config (rw) /var/jenkins_home from jenkins-home (rw) /var/jenkins_plugins from plugin-dir (rw) Containers: jenkins: Image: jenkins/jenkins:lts Ports: 8080/TCP, 50000/TCP Host Ports: 0/TCP, 0/TCP Args: --argumentsRealm.passwd.$(ADMIN_USER)=$(ADMIN_PASSWORD) --argumentsRealm.roles.$(ADMIN_USER)=admin Limits: cpu: 2 memory: 4Gi Requests: cpu: 50m memory: 256Mi Liveness: http-get http://:http/login delay=90s timeout=5s period=10s #success=1 #failure=5 Readiness: http-get http://:http/login delay=60s timeout=5s period=10s #success=1 #failure=3 Environment: JAVA_OPTS: JENKINS_OPTS: JENKINS_SLAVE_AGENT_PORT: 50000 ADMIN_PASSWORD: &lt;set to the key 'jenkins-admin-password' in secret 'jenkins'&gt; Optional: false ADMIN_USER: &lt;set to the key 'jenkins-admin-user' in secret 'jenkins'&gt; Optional: false Mounts: /tmp from tmp (rw) /usr/share/jenkins/ref/plugins/ from plugin-dir (rw) /usr/share/jenkins/ref/secrets/ from secrets-dir (rw) /var/jenkins_config from jenkins-config (ro) /var/jenkins_home from jenkins-home (rw) Volumes: plugins: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: &lt;unset&gt; tmp: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: &lt;unset&gt; jenkins-config: Type: ConfigMap (a volume populated by a ConfigMap) Name: jenkins Optional: false plugin-dir: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: &lt;unset&gt; secrets-dir: Type: EmptyDir (a temporary directory that shares a pod's lifetime) Medium: SizeLimit: &lt;unset&gt; jenkins-home: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: jenkins-pvc ReadOnly: false Conditions: Type Status Reason ---- ------ ------ Available False 
MinimumReplicasUnavailable Progressing True ReplicaSetUpdated OldReplicaSets: jenkins-86dcf94679 (1/1 replicas created) NewReplicaSet: &lt;none&gt; Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal ScalingReplicaSet 42s deployment-controller Scaled up replica set jenkins-86dcf94679 to 1 </code></pre>
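<p>For completeness, the re-run was just the same install after updating the local chart index; pinning the chart version explicitly also works and makes the fix reproducible (the version below is the one that ended up deployed in my case, treat it as an example):</p> <pre><code>helm repo update
helm install --name jenkins \
  --set persistence.existingClaim=jenkins-pvc \
  --version 1.1.21 \
  stable/jenkins --namespace jenkins-project
</code></pre>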
gh4x
<p>I am trying to configure an Ingress to map to 2 different services, but when I use anything other than <code>/</code> as the <code>path</code> I get a <code>Cannot GET /node-web-app-svc</code> error. I have tried using <code>Exact</code> as the <code>pathType</code> but it doesn't help. I am running this on a k3s cluster with Traefik.</p> <p><strong>EDIT</strong></p> <p>It seems like it's trying to hit the deployment at an undefined path, so I think it actually is hitting the expected service. Is there a way to rewrite <code>/node-web-app-svc</code> -&gt; <code>/</code> before it reaches the service?</p> <p>Here is the yaml:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: nginx-ingress annotations: kubernetes.io/ingress.class: traefik spec: rules: - http: paths: - path: /node-web-app-svc pathType: Prefix backend: service: name: node-web-app-svc port: number: 3000 - path: /node-web-app2-svc pathType: Prefix backend: service: name: node-web-app2-svc port: number: 3000 </code></pre> <p>Any help here would be appreciated. Let me know if more details would help.</p> <p>Thanks</p>
craigtb
<p>You need to leverage the <code>traefik.ingress.kubernetes.io/rewrite-target</code> annotation, e.g. <code>traefik.ingress.kubernetes.io/rewrite-target: /app-root</code>, so Traefik rewrites the matched path prefix before forwarding the request to the backend service.</p>
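<p>As a sketch, the annotation goes in the Ingress metadata alongside the ingress class annotation from the question; the rewrite value shown here (<code>/</code>) is an example, use whatever root path your app actually serves:</p> <pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
    # rewrite the matched prefix before the request reaches the service
    traefik.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /node-web-app-svc
        pathType: Prefix
        backend:
          service:
            name: node-web-app-svc
            port:
              number: 3000
</code></pre>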
craigtb
<p>In my dotnet app I need to monitor activity, and if there is no activity within 10 min I should kill the Kubernetes pod. Killing the process will not do the job, so is there any way to kill/delete a pod from within dotnet?</p>
faranak777
<p>I assume you want to kill pods using code within the k8s cluster. Have a look at the <a href="https://github.com/kubernetes-client/csharp/" rel="nofollow noreferrer">kubernetes client</a> for dotnet core. You can use the cluster config from within the cluster you are running.</p> <pre><code>// Load from the default kubeconfig on the machine. var config = KubernetesClientConfiguration.BuildConfigFromConfigFile(); IKubernetes client = new Kubernetes(config); var list = client.ListNamespacedPod("default"); </code></pre> <p>After that, you can lists pods, services etc, and kill them if you want to.</p> <p>However, <strong>keep in mind</strong> that in order to read the local cluster config, and access resources, you need to set up a service account with the correct rights to do so.</p> <p>The example below has permissions to list services for example within the k8s cluster. Adjust to your scenario accordingly. Change 'services' to 'Pods' for example, and assign the right verbs for your logic.</p> <pre><code>apiVersion: v1 kind: ServiceAccount metadata: name: service-discovery-account --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: namespace: default name: service-discovery-service-reader rules: - apiGroups: [""] resources: ["services"] verbs: ["get", "watch", "list"] --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: service-discovery-service-reader subjects: - kind: ServiceAccount # Reference to ServiceAccount kind's `metadata.name` name: service-discovery-account # Reference to ServiceAccount kind's `metadata.namespace` namespace: default roleRef: kind: ClusterRole name: service-discovery-service-reader apiGroup: rbac.authorization.k8s.io </code></pre> <p>Don't forget to assign that service account to the deployment, responsible for monitoring other pods within the cluster:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: your-deployment-app spec: selector: matchLabels: app: your-deployment-app template: metadata: labels: app: your-deployment-app spec: serviceAccountName: service-discovery-account containers: </code></pre>
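<p>As a concrete sketch of the adjustment mentioned above, a rule that would allow the monitoring deployment to delete pods (rather than just read services) could look like this; the resource and verb names are standard RBAC, everything else stays as in the example:</p> <pre><code>rules:
- apiGroups: [""]
  resources: ["pods"]
  # "delete" is the verb the client needs in order to kill a pod
  verbs: ["get", "list", "delete"]
</code></pre>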
Thomas Luijken
<p>I have a small on-prem Kubernetes cluster (Rancher 2.3.6) consisting of three nodes. The deployments inside the cluster are provisioned dynamically by an external application and always have their replica count set to 1, because these are stateful applications and high availability is not needed.</p> <p>The applications are exposed to the internet by NodePort services with a random port and ExternalTrafficPolicy set to Cluster. So if the user requests one of the three nodes, the k8s proxy will route and s-NAT the request to the correct node with the application pod.</p> <p>To this point, everything works fine.</p> <p>The problem started when we added applications that rely on the request's source IP. Since the s-NAT replaces the request IP with an internal IP, these applications don't work properly.</p> <p>I know that setting the service's ExternalTrafficPolicy to Local will disable s-natting. But this will also break the architecture, because not every node has an instance of the application running.</p> <p>Is there a way to preserve the original client IP and still make use of the internal routing, so I won't have to worry about which node the request lands on?</p>
frinsch
<p>It depends on how the traffic gets into your cluster. But let's break it down a little bit:</p> <p>Generally, there are two strategies for handling source IP preservation:</p> <ul> <li>SNAT (packet IP)</li> <li>proxy/header (passing the original IP in an additional header)</li> </ul> <h4>1) SNAT</h4> <p>By default, packets to <em>NodePort</em> and <em>LoadBalancer</em> are SourceNAT'd (to the IP of the node that received the request), while packets sent to <em>ClusterIP</em> are <strong>not</strong> SourceNAT'd.</p> <p>As you mentioned already, there is a way to turn off SNAT for <em>NodePort</em> and <em>LoadBalancer</em> Services by setting <code>service.spec.externalTrafficPolicy: Local</code>, which preserves the original source IP address, but with the undesired effect that kube-proxy only proxies requests to local endpoints and does not forward traffic to other nodes.</p> <h4>2) Header + Proxy IP preservation</h4> <p><em><strong>a) Nginx Ingress Controller and L7 LoadBalancer</strong></em></p> <ul> <li>When using L7 LoadBalancers which send an <code>X-Forwarded-For</code> header, Nginx by default evaluates the header containing the source IP if we have set the LB CIDR/address in <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#proxy-real-ip-cidr" rel="nofollow noreferrer"><code>proxy-real-ip-cidr</code></a>.</li> <li>You might need to set <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#use-forwarded-headers" rel="nofollow noreferrer"><code>use-forwarded-headers</code></a> explicitly to make nginx forward the header information.</li> <li>Additionally, you might want to enable <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#enable-real-ip" rel="nofollow noreferrer"><code>enable-real-ip</code></a> so the realip_module replaces the client address with the real IP that has been set in the <code>X-Forwarded-For</code> header by the trusted LB specified in <code>proxy-real-ip-cidr</code> (a ConfigMap sketch follows below).</li> </ul> <p><em><strong>b) Proxy Protocol and L4 LoadBalancer</strong></em></p> <ul> <li>With <code>use-proxy-protocol: &quot;true&quot;</code> enabled, the header is <strong>not</strong> evaluated and the connection details are sent before the actual TCP connection is forwarded. The LBs must support this.</li> </ul>
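<p>As a sketch of option 2a, the keys go into the nginx-ingress controller's ConfigMap; the ConfigMap name/namespace and the CIDR are assumptions that depend on how your ingress controller was installed and where your LB lives:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  # name/namespace must match what your ingress-nginx deployment consumes
  name: nginx-configuration
  namespace: ingress-nginx
data:
  use-forwarded-headers: "true"
  enable-real-ip: "true"
  # CIDR of the trusted load balancer / proxy in front of the controller
  proxy-real-ip-cidr: "10.0.0.0/8"
</code></pre>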
Martin Peter
<p>I have an ingress for my application:</p> <pre><code>apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: myapi-ingress annotations: nginx.ingress.kubernetes.io/ssl-redirect: &quot;true&quot; spec: ingressClassName: nginx rules: - host: mysite.com http: paths: - path: &quot;/posts&quot; pathType: Prefix backend: service: name: myservice port: number: 80 </code></pre> <p>When I run <code>kubectl describe ing myapi-ingress</code>, I can see that the ingress is stuck in the sync state:</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Sync 26m (x2 over 27m) nginx-ingress-controller Scheduled for sync </code></pre> <p>PS: Before this happened, I had tried to install another ingress for internal usage under another namespace and ingressClassName.</p> <p>I'm getting a 404 when I try to hit this endpoint. Nothing in the logs.</p> <p>What is the problem?</p>
Rodrigo
<p>The problem was the host name set on the Ingress: the rule only applies to requests whose <code>Host</code> header matches <code>mysite.com</code>, so hitting the controller under any other host (or the bare IP) falls through and returns a 404.</p>
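<p>A quick way to verify this kind of issue, as a sketch (the address below is a placeholder for the ingress controller's external IP):</p> <pre><code># no matching Host header -&gt; the rule doesn't match, nginx returns 404
curl http://203.0.113.10/posts

# explicitly send the host the rule expects
curl -H "Host: mysite.com" http://203.0.113.10/posts
</code></pre>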
Rodrigo
<p>I have created multiple stacks (node groups) within my <strong>EKS cluster</strong>, and each group runs on a <strong>different instance type</strong> (for example, one group runs on GPU instances). I have added an entry in <em>mapRoles</em> of the <em>aws-auth-cm.yaml</em> file for each of the node groups. Now I would like to deploy each of my <em>Deployments</em> onto a specific node group. The deployment files look something like this:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: deployment-1 spec: replicas: 1 selector: matchLabels: component: component-1 template: metadata: labels: component: component-1 spec: containers: - name: d1 image: docker-container ports: - containerPort: 83 </code></pre> <p>The documentation shows that I can run the standard command <strong>kubectl apply</strong>. Is there any way to specify the group? Maybe something like</p> <blockquote> <p>kubectl apply -f server-deployment.yaml -group node-group-1</p> </blockquote>
Alessandro
<p>You can use <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="nofollow noreferrer">taints and tolerations</a> to ensure that your pods end up on the right nodes. When you have heterogeneous nodes, this is good practice. </p> <p>For example, in my deployment, we have 2 classes of nodes, ones which have NVMe SSD attached and ones which don't. They're both tainted differently and the deployments that run on top specify tolerations which ensure that they end up only on the nodes that have that particular taint.</p> <p>For example, the node would have:</p> <pre><code>spec: ... taints: - effect: NoSchedule key: role value: gpu-instance </code></pre> <p>and a pod that must schedule on one of those nodes must have:</p> <pre><code>spec: tolerations: - effect: NoSchedule key: role operator: Equal value: gpu-instance </code></pre> <p>Once you have this setup, you can just do a regular <code>kubectl apply</code> and pods will get targeted onto nodes correctly. Note that this is a more flexible approach than node selectors and labels because it can give you more fine grained control and configurable eviction behavior.</p>
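<p>For completeness, the taint from the example can also be applied imperatively (the node name below is a placeholder); the toleration then goes into the pod spec of the deployment as shown above, and a regular <code>kubectl apply</code> is all that's needed:</p> <pre><code># taint a node of the GPU node group (repeat per node, or select them by label)
kubectl taint nodes ip-10-0-1-23.ec2.internal role=gpu-instance:NoSchedule

# verify the taint was set
kubectl describe node ip-10-0-1-23.ec2.internal | grep -i taint
</code></pre>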
Anirudh Ramanathan
<p>I am using Kubernetes on a CoreOS cluster hosted on DigitalOcean, and I'm using <a href="https://github.com/cescoferraro/kubernetes-do" rel="nofollow noreferrer">this</a> repo to set it up. I started the apiserver with the following line:</p> <pre><code> /opt/bin/kube-apiserver --runtime-config=api/v1 --allow-privileged=true \ --insecure-bind-address=0.0.0.0 --insecure-port=8080 \ --secure-port=6443 --etcd-servers=http://127.0.0.1:2379 \ --logtostderr=true --advertise-address=${COREOS_PRIVATE_IPV4} \ --service-cluster-ip-range=10.100.0.0/16 --bind-address=0.0.0.0 </code></pre> <p>The problem is that it accepts requests from anyone! I want to be able to provide a simple user/password authentication. I have been reading <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/admin/authentication.md" rel="nofollow noreferrer">this</a> and <a href="https://github.com/kubernetes/kubernetes/blob/master/docs/design/security.md" rel="nofollow noreferrer">this</a> and it seems that I have to do something like the below, but I cannot afford to take the cluster down for a long period of time, so I need you guys to help with this one. Btw, my pods do not create other pods, so I only need a few users, like 1 or 2 for devs and 1 for CI.</p> <p>I am thinking of doing something like including the authorization-mode and authorization-policy-file flags, as they seem to be required, and making the insecure-bind-address localhost so that it is only available locally. Am I missing something?</p> <pre><code> /opt/bin/kube-apiserver --runtime-config=api/v1 --allow-privileged=true \ --authorization-mode=ABAC --authorization-policy-file=/access.json \ --insecure-bind-address=127.0.0.1 --insecure-port=8080 \ --secure-port=6443 --etcd-servers=http://127.0.0.1:2379 \ --logtostderr=true --advertise-address=${COREOS_PRIVATE_IPV4} \ --service-cluster-ip-range=10.100.0.0/16 --bind-address=0.0.0.0 </code></pre> <p>###/access.json</p> <pre><code>{&quot;user&quot;:&quot;admin&quot;} {&quot;user&quot;:&quot;wercker&quot;} {&quot;user&quot;:&quot;dev1&quot;} {&quot;user&quot;:&quot;dev2&quot;} </code></pre> <p>But where are the passwords? How do I actually make the request with kubectl and curl or httpie?</p>
CESCO
<p>If you want your users to authenticate using HTTP Basic Auth (user:password), you can add:</p> <pre><code>--basic-auth-file=/basic_auth.csv </code></pre> <p>to your kube-apiserver command line, where each line of the file should be <code>password, user-name, user-id</code>. E.g.:</p> <pre><code>@dm1nP@ss,admin,admin w3rck3rP@ss,wercker,wercker etc... </code></pre> <p>If you'd rather use access tokens (HTTP Authentication: Bearer), you can specify:</p> <pre><code>--token-auth-file=/known-tokens.csv </code></pre> <p>where each line should be <code>token,user-name,user-id[,optional groups]</code>. E.g.:</p> <pre><code>@dm1nT0k3n,admin,admin,adminGroup,devGroup w3rck3rT0k3n,wercker,wercker,devGroup etc... </code></pre> <p>For more info, checkout the <a href="http://kubernetes.io/docs/admin/authentication/" rel="noreferrer">Authentication docs</a>. Also checkout <a href="https://github.com/kubernetes/kubernetes/blob/master/pkg/auth/authorizer/abac/example_policy_file.jsonl" rel="noreferrer">example_policy_file.jsonl</a> for an example ABAC file.</p>
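<p>As for actually making the requests, a sketch using the example credentials above (the master address is a placeholder; <code>-k</code> skips server certificate verification, use <code>--cacert</code> in a real setup):</p> <pre><code># curl with HTTP basic auth against the secure port
curl -k -u admin:@dm1nP@ss https://&lt;master-ip&gt;:6443/api/v1/namespaces/default/pods

# curl with a bearer token
curl -k -H "Authorization: Bearer @dm1nT0k3n" https://&lt;master-ip&gt;:6443/api/v1/namespaces/default/pods

# kubectl: store the credentials in a user entry and reference it from a context
kubectl config set-credentials admin --username=admin --password=@dm1nP@ss
kubectl config set-credentials wercker --token=w3rck3rT0k3n
</code></pre>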
CJ Cullen
<p>My DigitalOcean kubernetes cluster is unable to pull images from the DigitalOcean registry. I get the following error message:</p> <pre><code>Failed to pull image &quot;registry.digitalocean.com/XXXX/php:1.1.39&quot;: rpc error: code = Unknown desc = failed to pull and unpack image &quot;registry.digitalocean.com/XXXXXXX/php:1.1.39&quot;: failed to resolve reference &quot;registry.digitalocean.com/XXXXXXX/php:1.1.39&quot;: failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized </code></pre> <p>I have added the kubernetes cluster using DigitalOcean Container Registry Integration, which shows there successfully both on the registry and the settings for the kubernetes cluster.</p> <p><a href="https://i.stack.imgur.com/hOkVJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hOkVJ.png" alt="enter image description here" /></a></p> <p>I can confirm the above address `registry.digitalocean.com/XXXX/php:1.1.39 matches the one in the registry. I wonder if I’m misunderstanding how the token / login integration works with the registry, but I’m under the impression that this was a “one click” thing and that the cluster would automatically get the connection to the registry after that.</p> <p>I have tried by logging helm into a registry before pushing, but this did not work (and I wouldn't really expect it to, the cluster should be pulling the image).</p> <p>It's not completely clear to me how the image pull secrets are supposed to be used.</p> <p>My helm deployment chart is basically the default for API Platform:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: {{ include &quot;api-platform.fullname&quot; . }} labels: {{- include &quot;api-platform.labels&quot; . | nindent 4 }} spec: {{- if not .Values.autoscaling.enabled }} replicas: {{ .Values.replicaCount }} {{- end }} selector: matchLabels: {{- include &quot;api-platform.selectorLabels&quot; . | nindent 6 }} template: metadata: {{- with .Values.podAnnotations }} annotations: {{- toYaml . | nindent 8 }} {{- end }} labels: {{- include &quot;api-platform.selectorLabels&quot; . | nindent 8 }} spec: {{- with .Values.imagePullSecrets }} imagePullSecrets: {{- toYaml . | nindent 8 }} {{- end }} serviceAccountName: {{ include &quot;api-platform.serviceAccountName&quot; . }} securityContext: {{- toYaml .Values.podSecurityContext | nindent 8 }} containers: - name: {{ .Chart.Name }}-caddy securityContext: {{- toYaml .Values.securityContext | nindent 12 }} image: &quot;{{ .Values.caddy.image.repository }}:{{ .Values.caddy.image.tag | default .Chart.AppVersion }}&quot; imagePullPolicy: {{ .Values.caddy.image.pullPolicy }} env: - name: SERVER_NAME value: :80 - name: PWA_UPSTREAM value: {{ include &quot;api-platform.fullname&quot; . }}-pwa:3000 - name: MERCURE_PUBLISHER_JWT_KEY valueFrom: secretKeyRef: name: {{ include &quot;api-platform.fullname&quot; . }} key: mercure-publisher-jwt-key - name: MERCURE_SUBSCRIBER_JWT_KEY valueFrom: secretKeyRef: name: {{ include &quot;api-platform.fullname&quot; . 
}} key: mercure-subscriber-jwt-key ports: - name: http containerPort: 80 protocol: TCP - name: admin containerPort: 2019 protocol: TCP volumeMounts: - mountPath: /var/run/php name: php-socket #livenessProbe: # httpGet: # path: / # port: admin #readinessProbe: # httpGet: # path: / # port: admin resources: {{- toYaml .Values.resources | nindent 12 }} - name: {{ .Chart.Name }}-php securityContext: {{- toYaml .Values.securityContext | nindent 12 }} image: &quot;{{ .Values.php.image.repository }}:{{ .Values.php.image.tag | default .Chart.AppVersion }}&quot; imagePullPolicy: {{ .Values.php.image.pullPolicy }} env: {{ include &quot;api-platform.env&quot; . | nindent 12 }} volumeMounts: - mountPath: /var/run/php name: php-socket readinessProbe: exec: command: - docker-healthcheck initialDelaySeconds: 120 periodSeconds: 3 livenessProbe: exec: command: - docker-healthcheck initialDelaySeconds: 120 periodSeconds: 3 resources: {{- toYaml .Values.resources | nindent 12 }} volumes: - name: php-socket emptyDir: {} {{- with .Values.nodeSelector }} nodeSelector: {{- toYaml . | nindent 8 }} {{- end }} {{- with .Values.affinity }} affinity: {{- toYaml . | nindent 8 }} {{- end }} {{- with .Values.tolerations }} tolerations: {{- toYaml . | nindent 8 }} {{- end }} </code></pre> <p>How do I authorize the kubernetes cluster to pull from the registry? Is this a helm thing or a kubernetes only thing?</p> <p>Thanks!</p>
Brettins
<p>The issue was that API Platform automatically has a default value for imagePullSecrets in the helm chart, which is</p> <p><code>imagePullSecrets: []</code></p> <p>in <a href="https://github.com/api-platform/api-platform/blob/e7c3973a8bef114d9a618b0589a1ea34b21c5603/helm/api-platform/values.yaml#L60" rel="nofollow noreferrer">values.yaml</a></p> <p>This empty default seems to prevent Kubernetes from picking up the image pull secret the way that I expected. The solution was to pass the name of the image pull secret directly to the helm deployment command, like this:</p> <p><code>--set &quot;imagePullSecrets[0].name=registry-secret-name-goes-here&quot;</code></p> <p>You can view the name of your secret using <code>kubectl get secrets</code> like this:</p> <p><code>kubectl get secrets</code></p> <p>And the output should look something like this:</p> <pre><code>NAME TYPE DATA AGE default-token-lz2ck kubernetes.io/service-account-token 3 38d registry-secret-name-goes-here kubernetes.io/dockerconfigjson 1 2d16h </code></pre>
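<p>Equivalently (a sketch), the same override can live in a custom values file passed with <code>-f</code>, which avoids the somewhat awkward <code>--set</code> index syntax; the chart template shown in the question renders whatever list is set here:</p> <pre><code># my-values.yaml
imagePullSecrets:
  - name: registry-secret-name-goes-here
</code></pre>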
Brettins
<p>I'm new to the service mesh thing, so I did a PoC of a basic implementation of microservices in Kubernetes with Istio.</p> <p>I have 2 Deployments which are supposed to talk to each other using gRPC. When I call the gRPC server it returns the error <code>rpc error: code = Internal desc = server closed the stream without sending trailers</code></p> <p>This is my gRPC Service config:</p> <p><code> apiVersion: v1 kind: Service metadata: name: grpcserver labels: app: grpcserver spec: ports: - port: 8080 name: http selector: app: grpcserver </code></p>
Ahmad Muzakki
<p>Quoting Istio <a href="https://istio.io/docs/setup/kubernetes/spec-requirements/" rel="nofollow noreferrer">docs</a>, </p> <blockquote> <p>Service ports must be named. The port names must be of the form {protocol}[-{suffix}] with http, http2, grpc, mongo, or redis as the in order to take advantage of Istio’s routing features.</p> </blockquote> <p>So the Service configuration should be:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: grpcserver labels: app: grpcserver spec: ports: - port: 8080 name: grpc selector: app: grpcserver </code></pre>
Ahmad Muzakki