<p>In kubernetes on windows docker desktop when I try to mount an empty directory I get the following error:</p> <pre><code>error: error when retrieving current configuration of: Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod" Name: "", Namespace: "default" Object: &amp;{map["apiVersion":"v1" "kind":"Pod" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "namespace":"default"] "spec":map["containers":[map["image":"nginx:alpine" "name":"nginx" "volumeMounts":[map["mountPath":"/usr/share/nginx/html" "name":"html" "readOnly":%!q(bool=true)]]] map["args":["while true; do date &gt;&gt; /html/index.html; sleep 10; done"] "command":["/bin/sh" "-c"] "image":"alpine" "name":"html-updater" "volumeMounts":[map["mountPath":"/html" "name":"html"]]]] "volumes":[map["emptyDir":map[] "name":"html"]]]]} from server for: "nginx-alpine-emptyDir.pod.yml": resource name may not be empty </code></pre> <p>The error message seems a bit unclear and I cannot find what's going on. My yaml configuration is the following:</p> <pre><code>apiVersion: v1 kind: Pod spec: volumes: - name: html emptyDir: {} containers: - name: nginx image: nginx:alpine volumeMounts: - name: html mountPath: /usr/share/nginx/html readOnly: true - name: html-updater image: alpine command: ["/bin/sh", "-c"] args: - while true; do date &gt;&gt; /html/index.html; sleep 10; done volumeMounts: - name: html mountPath: /html </code></pre>
<p>Forgot to add metadata name</p> <pre><code>metadata: name: empty-dir-test </code></pre> <p>Code after change is:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: empty-dir-test spec: volumes: - name: html emptyDir: {} containers: - name: nginx image: nginx:alpine volumeMounts: - name: html mountPath: /usr/share/nginx/html readOnly: true - name: html-updater image: alpine command: ["/bin/sh", "-c"] args: - while true; do date &gt;&gt; /html/index.html; sleep 10; done volumeMounts: - name: html mountPath: /html </code></pre>
<p>I am trying to spin up a testing Pod with the KubernetesPodOperator. As an image I am using the hello-world example from Docker, which I pushed to the local registry of my MicroK8s installation.</p> <pre><code>from airflow import DAG from airflow.operators.dummy_operator import DummyOperator from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator from airflow.kubernetes.pod import Port from airflow.utils.dates import days_ago from datetime import timedelta ports = [Port('http', 80)] default_args = { 'owner': 'user', 'start_date': days_ago(5), 'email': ['user@mail'], 'email_on_failure': False, 'email_on_retry': False, 'retries': 0 } workflow = DAG( 'kubernetes_helloworld', default_args=default_args, description='Our first DAG', schedule_interval=None, ) op = DummyOperator(task_id='dummy', dag=workflow) t1 = KubernetesPodOperator( dag=workflow, namespace='default', image='localhost:32000/hello-world:registry', name='pod2', task_id='pod2', is_delete_operator_pod=True, hostnetwork=False, get_logs=True, do_xcom_push=False, in_cluster=False, ports=ports, ) op &gt;&gt; t1 </code></pre> <p>When I trigger the DAG it keeps running and reattempts to launch the pod indefinite times. This is the log output I get in Airflow:</p> <pre><code>Reading local file: /home/user/airflow/logs/kubernetes_helloworld/pod2/2021-03-17T16:25:11.142695+00:00/4.log [2021-03-17 16:30:00,315] {taskinstance.py:851} INFO - Dependencies all met for &lt;TaskInstance: kubernetes_helloworld.pod2 2021-03-17T16:25:11.142695+00:00 [queued]&gt; [2021-03-17 16:30:00,319] {taskinstance.py:851} INFO - Dependencies all met for &lt;TaskInstance: kubernetes_helloworld.pod2 2021-03-17T16:25:11.142695+00:00 [queued]&gt; [2021-03-17 16:30:00,319] {taskinstance.py:1042} INFO - -------------------------------------------------------------------------------- [2021-03-17 16:30:00,320] {taskinstance.py:1043} INFO - Starting attempt 4 of 1 [2021-03-17 16:30:00,320] {taskinstance.py:1044} INFO - -------------------------------------------------------------------------------- [2021-03-17 16:30:00,330] {taskinstance.py:1063} INFO - Executing &lt;Task(KubernetesPodOperator): pod2&gt; on 2021-03-17T16:25:11.142695+00:00 [2021-03-17 16:30:00,332] {standard_task_runner.py:52} INFO - Started process 9021 to run task [2021-03-17 16:30:00,335] {standard_task_runner.py:76} INFO - Running: ['airflow', 'tasks', 'run', 'kubernetes_helloworld', 'pod2', '2021-03-17T16:25:11.142695+00:00', '--job-id', '57', '--pool', 'default_pool', '--raw', '--subdir', 'DAGS_FOLDER/kubernetes_helloworld.py', '--cfg-path', '/tmp/tmp5ss4g6q4', '--error-file', '/tmp/tmp9t3l8emt'] [2021-03-17 16:30:00,336] {standard_task_runner.py:77} INFO - Job 57: Subtask pod2 [2021-03-17 16:30:00,357] {logging_mixin.py:104} INFO - Running &lt;TaskInstance: kubernetes_helloworld.pod2 2021-03-17T16:25:11.142695+00:00 [running]&gt; on host 05nclorenzvm01.internal.cloudapp.net [2021-03-17 16:30:00,369] {taskinstance.py:1255} INFO - Exporting the following env vars: AIRFLOW_CTX_DAG_EMAIL=user AIRFLOW_CTX_DAG_OWNER=user AIRFLOW_CTX_DAG_ID=kubernetes_helloworld AIRFLOW_CTX_TASK_ID=pod2 AIRFLOW_CTX_EXECUTION_DATE=2021-03-17T16:25:11.142695+00:00 AIRFLOW_CTX_DAG_RUN_ID=manual__2021-03-17T16:25:11.142695+00:00 [2021-03-17 16:32:09,805] {connectionpool.py:751} WARNING - Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('&lt;urllib3.connection.HTTPSConnection object at 0x7f812fc23eb0&gt;: Failed to 
establish a new connection: [Errno 110] Connection timed out')': /api/v1/namespaces/default/pods?labelSelector=dag_id%3Dkubernetes_helloworld%2Cexecution_date%3D2021-03-17T162511.1426950000-e549b02ea%2Ctask_id%3Dpod2 </code></pre> <p>When I launch the pod in Kubernetes itself, without Airflow, it runs fine. What am I doing wrong?</p> <p>I tried the following things:</p> <ul> <li>Prevent the container from exiting with sleep commands</li> <li>Try different images, e.g. pyspark</li> <li>Reinstall Airflow and MicroK8s</li> </ul> <p>Airflow v2.0.1, MicroK8s v1.3.7, Python 3.8, Ubuntu 18.04 LTS</p>
<p>Unfortunately I still haven't figured out the problem with microK8s.</p> <p>But I was able to use the KubernetesPodOperator in Airflow with minikube. The following code was able to run without any problems:</p> <pre><code>from airflow import DAG from datetime import datetime, timedelta from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator from airflow import configuration as conf from airflow.utils.dates import days_ago default_args = { 'owner': 'user', 'start_date': days_ago(5), 'email': ['[email protected]'], 'email_on_failure': False, 'email_on_retry': False, 'retries': 0 } namespace = conf.get('kubernetes', 'NAMESPACE') if namespace =='default': config_file = '/home/user/.kube/config' in_cluster=False else: in_cluster=True config_file=None dag = DAG('example_kubernetes_pod', schedule_interval='@once', default_args=default_args) with dag: k = KubernetesPodOperator( namespace=namespace, image=&quot;hello-world&quot;, labels={&quot;foo&quot;: &quot;bar&quot;}, name=&quot;airflow-test-pod&quot;, task_id=&quot;task-one&quot;, in_cluster=in_cluster, # if set to true, will look in the cluster, if false, looks for file cluster_context='minikube', # is ignored when in_cluster is set to True config_file=config_file, is_delete_operator_pod=True, get_logs=True) </code></pre>
<p>I want to change the Docker storage driver to overlay2 in order to use Kubernetes.</p> <p>daemon.json:</p> <pre><code>{ "exec-opts": ["native.cgroupdriver=systemd"], "log-driver": "json-file", "log-opts": { "max-size": "100m" }, "storage-driver": "overlay2" } </code></pre> <p>But the service cannot start.</p> <p>journalctl -b -u docker.service:</p> <pre><code>level=error msg="failed to mount overlay: permission denied" storage-drive May 05 05:35:32 master1 dockerd[492]: failed to start daemon: error initializing graphdriver: driver not supported </code></pre>
<p>I changed my dockerd version from 19.03.3 to 19.03.5 (Ubuntu) and that fixed the problem; Docker now runs with the overlay2 storage driver:</p> <pre><code>sudo apt-get install docker-ce=5:19.03.5~3-0~ubuntu-xenial </code></pre>
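<p>As a quick sanity check (a minimal sketch, assuming a standard Docker CE install), you can confirm which storage driver the daemon actually ended up using:</p> <pre><code># Print only the active storage driver; should output "overlay2"
docker info --format '{{.Driver}}'

# Or inspect the relevant line of the full daemon info
docker info | grep -i "storage driver"
</code></pre>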
<p>I am running minikube on Windows. I am starting a pod and NodePort like this:</p> <pre><code>minikube start kubectl apply -f pod.yml kubectl apply -f service.yml kubectl port-forward service/sample-web-service 31111:80 </code></pre> <p>At this point, I can access my sample web service in a browser using:</p> <pre><code>localhost:31111 </code></pre> <p>and</p> <pre><code>127.0.0.1:31111 </code></pre> <p>Note, I get no response trying to access the service using the ip returned by <code>minikube ip</code> described here:</p> <p><a href="https://minikube.sigs.k8s.io/docs/handbook/accessing/#getting-the-nodeport-using-kubectl" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/handbook/accessing/#getting-the-nodeport-using-kubectl</a></p> <p>Is it possible to also access my sample web service using a qualified hostname (i.e. the <code>Full Computer Name</code> found in <code>Control Panel\All Control Panel Items\System</code>)? I tried the following in a browser but didn't get a response:</p> <pre><code>my-windows-pc-name.mydomain.com:31111 </code></pre> <p>I am on a VPN and tried turning it off but to no avail.</p>
<p>By default <code>kubectl port-forward</code> binds only to localhost (127.0.0.1), so the forwarded port is not reachable through the machine's hostname. Adding <code>--address 0.0.0.0</code> makes it listen on all interfaces:</p> <pre><code>kubectl port-forward service/sample-web-service --address 0.0.0.0 31111:80 </code></pre>
<p>I've installed kong-ingress-controller using yaml file on a 3-nodes k8s cluster( bare metal ) (you can see the file at the bottom of question) and every thing is up and runnig:</p> <pre><code>$kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE default bar-deployment-64d5b5b5c4-99p4q 1/1 Running 0 12m default foo-deployment-877bf9974-xmpj6 1/1 Running 0 15m kong ingress-kong-5cd9db4db9-4cg4q 2/2 Running 0 79m kube-system calico-kube-controllers-5f6cfd688c-5njnn 1/1 Running 0 18h kube-system calico-node-5k9b6 1/1 Running 0 18h kube-system calico-node-jbb7k 1/1 Running 0 18h kube-system calico-node-mmmts 1/1 Running 0 18h kube-system coredns-74ff55c5b-5q5fn 1/1 Running 0 23h kube-system coredns-74ff55c5b-9bbbk 1/1 Running 0 23h kube-system etcd-kubernetes-master 1/1 Running 1 23h kube-system kube-apiserver-kubernetes-master 1/1 Running 1 23h kube-system kube-controller-manager-kubernetes-master 1/1 Running 1 23h kube-system kube-proxy-4h7hs 1/1 Running 0 20h kube-system kube-proxy-sd6b2 1/1 Running 0 20h kube-system kube-proxy-v9z8p 1/1 Running 1 23h kube-system kube-scheduler-kubernetes-master 1/1 Running 1 23h </code></pre> <p><strong>but the problem is here</strong>:</p> <p>the <strong><code>EXTERNAL_IP</code></strong> of <strong><code>kong-proxy service</code></strong> is <strong>pending</strong> so i'm not able to reach to my cluster from the outside</p> <pre><code>$kubectl get services --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default bar-service ClusterIP 10.103.49.102 &lt;none&gt; 5000/TCP 15m default foo-service ClusterIP 10.102.52.89 &lt;none&gt; 5000/TCP 19m default kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 23h kong kong-proxy LoadBalancer 10.104.79.161 &lt;pending&gt; 80:31583/TCP,443:30053/TCP 82m kong kong-validation-webhook ClusterIP 10.109.75.104 &lt;none&gt; 443/TCP 82m kube-system kube-dns ClusterIP 10.96.0.10 &lt;none&gt; 53/UDP,53/TCP,9153/TCP 23h </code></pre> <pre><code>$ kubectl describe service kong-proxy -n kong Name: kong-proxy Namespace: kong Labels: &lt;none&gt; Annotations: service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp service.beta.kubernetes.io/aws-load-balancer-type: nlb Selector: app=ingress-kong Type: LoadBalancer IP Families: &lt;none&gt; IP: 10.104.79.161 IPs: 10.104.79.161 Port: proxy 80/TCP TargetPort: 8000/TCP NodePort: proxy 31583/TCP Endpoints: 192.168.74.69:8000 Port: proxy-ssl 443/TCP TargetPort: 8443/TCP NodePort: proxy-ssl 30053/TCP Endpoints: 192.168.74.69:8443 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; </code></pre> <p>my k8s version is 1.20.1 and my docker version is 19.3.10 . 
If someone could help me to get a solution ,that would be awesome</p> <p>=============================================</p> <p><strong>kong-ingress-controller</strong> yaml file:</p> <pre><code>apiVersion: v1 kind: Namespace metadata: name: kong --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: kongclusterplugins.configuration.konghq.com spec: additionalPrinterColumns: - JSONPath: .plugin description: Name of the plugin name: Plugin-Type type: string - JSONPath: .metadata.creationTimestamp description: Age name: Age type: date - JSONPath: .disabled description: Indicates if the plugin is disabled name: Disabled priority: 1 type: boolean - JSONPath: .config description: Configuration of the plugin name: Config priority: 1 type: string group: configuration.konghq.com names: kind: KongClusterPlugin plural: kongclusterplugins shortNames: - kcp scope: Cluster subresources: status: {} validation: openAPIV3Schema: properties: config: type: object configFrom: properties: secretKeyRef: properties: key: type: string name: type: string namespace: type: string required: - name - namespace - key type: object type: object disabled: type: boolean plugin: type: string protocols: items: enum: - http - https - grpc - grpcs - tcp - tls type: string type: array run_on: enum: - first - second - all type: string required: - plugin version: v1 --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: kongconsumers.configuration.konghq.com spec: additionalPrinterColumns: - JSONPath: .username description: Username of a Kong Consumer name: Username type: string - JSONPath: .metadata.creationTimestamp description: Age name: Age type: date group: configuration.konghq.com names: kind: KongConsumer plural: kongconsumers shortNames: - kc scope: Namespaced subresources: status: {} validation: openAPIV3Schema: properties: credentials: items: type: string type: array custom_id: type: string username: type: string version: v1 --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: kongingresses.configuration.konghq.com spec: group: configuration.konghq.com names: kind: KongIngress plural: kongingresses shortNames: - ki scope: Namespaced subresources: status: {} validation: openAPIV3Schema: properties: proxy: properties: connect_timeout: minimum: 0 type: integer path: pattern: ^/.*$ type: string protocol: enum: - http - https - grpc - grpcs - tcp - tls type: string read_timeout: minimum: 0 type: integer retries: minimum: 0 type: integer write_timeout: minimum: 0 type: integer type: object route: properties: headers: additionalProperties: items: type: string type: array type: object https_redirect_status_code: type: integer methods: items: type: string type: array path_handling: enum: - v0 - v1 type: string preserve_host: type: boolean protocols: items: enum: - http - https - grpc - grpcs - tcp - tls type: string type: array regex_priority: type: integer request_buffering: type: boolean response_buffering: type: boolean snis: items: type: string type: array strip_path: type: boolean upstream: properties: algorithm: enum: - round-robin - consistent-hashing - least-connections type: string hash_fallback: type: string hash_fallback_header: type: string hash_on: type: string hash_on_cookie: type: string hash_on_cookie_path: type: string hash_on_header: type: string healthchecks: properties: active: properties: concurrency: minimum: 1 type: integer healthy: properties: http_statuses: items: type: integer type: array 
interval: minimum: 0 type: integer successes: minimum: 0 type: integer type: object http_path: pattern: ^/.*$ type: string timeout: minimum: 0 type: integer unhealthy: properties: http_failures: minimum: 0 type: integer http_statuses: items: type: integer type: array interval: minimum: 0 type: integer tcp_failures: minimum: 0 type: integer timeout: minimum: 0 type: integer type: object type: object passive: properties: healthy: properties: http_statuses: items: type: integer type: array interval: minimum: 0 type: integer successes: minimum: 0 type: integer type: object unhealthy: properties: http_failures: minimum: 0 type: integer http_statuses: items: type: integer type: array interval: minimum: 0 type: integer tcp_failures: minimum: 0 type: integer timeout: minimum: 0 type: integer type: object type: object threshold: type: integer type: object host_header: type: string slots: minimum: 10 type: integer type: object version: v1 --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: kongplugins.configuration.konghq.com spec: additionalPrinterColumns: - JSONPath: .plugin description: Name of the plugin name: Plugin-Type type: string - JSONPath: .metadata.creationTimestamp description: Age name: Age type: date - JSONPath: .disabled description: Indicates if the plugin is disabled name: Disabled priority: 1 type: boolean - JSONPath: .config description: Configuration of the plugin name: Config priority: 1 type: string group: configuration.konghq.com names: kind: KongPlugin plural: kongplugins shortNames: - kp scope: Namespaced subresources: status: {} validation: openAPIV3Schema: properties: config: type: object configFrom: properties: secretKeyRef: properties: key: type: string name: type: string required: - name - key type: object type: object disabled: type: boolean plugin: type: string protocols: items: enum: - http - https - grpc - grpcs - tcp - tls type: string type: array run_on: enum: - first - second - all type: string required: - plugin version: v1 --- apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: tcpingresses.configuration.konghq.com spec: additionalPrinterColumns: - JSONPath: .status.loadBalancer.ingress[*].ip description: Address of the load balancer name: Address type: string - JSONPath: .metadata.creationTimestamp description: Age name: Age type: date group: configuration.konghq.com names: kind: TCPIngress plural: tcpingresses scope: Namespaced subresources: status: {} validation: openAPIV3Schema: properties: apiVersion: type: string kind: type: string metadata: type: object spec: properties: rules: items: properties: backend: properties: serviceName: type: string servicePort: format: int32 type: integer type: object host: type: string port: format: int32 type: integer type: object type: array tls: items: properties: hosts: items: type: string type: array secretName: type: string type: object type: array type: object status: type: object version: v1beta1 status: acceptedNames: kind: &quot;&quot; plural: &quot;&quot; conditions: [] storedVersions: [] --- apiVersion: v1 kind: ServiceAccount metadata: name: kong-serviceaccount namespace: kong --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRole metadata: name: kong-ingress-clusterrole rules: - apiGroups: - &quot;&quot; resources: - endpoints - nodes - pods - secrets verbs: - list - watch - apiGroups: - &quot;&quot; resources: - nodes verbs: - get - apiGroups: - &quot;&quot; resources: - services verbs: - get - list - watch - apiGroups: - 
networking.k8s.io - extensions - networking.internal.knative.dev resources: - ingresses verbs: - get - list - watch - apiGroups: - &quot;&quot; resources: - events verbs: - create - patch - apiGroups: - networking.k8s.io - extensions - networking.internal.knative.dev resources: - ingresses/status verbs: - update - apiGroups: - configuration.konghq.com resources: - tcpingresses/status verbs: - update - apiGroups: - configuration.konghq.com resources: - kongplugins - kongclusterplugins - kongcredentials - kongconsumers - kongingresses - tcpingresses verbs: - get - list - watch - apiGroups: - &quot;&quot; resources: - configmaps verbs: - create - get - update --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: kong-ingress-clusterrole-nisa-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: kong-ingress-clusterrole subjects: - kind: ServiceAccount name: kong-serviceaccount namespace: kong --- apiVersion: v1 kind: Service metadata: annotations: service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp service.beta.kubernetes.io/aws-load-balancer-type: nlb name: kong-proxy namespace: kong spec: ports: - name: proxy port: 80 protocol: TCP targetPort: 8000 - name: proxy-ssl port: 443 protocol: TCP targetPort: 8443 selector: app: ingress-kong type: LoadBalancer --- apiVersion: v1 kind: Service metadata: name: kong-validation-webhook namespace: kong spec: ports: - name: webhook port: 443 protocol: TCP targetPort: 8080 selector: app: ingress-kong --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: ingress-kong name: ingress-kong namespace: kong spec: replicas: 1 selector: matchLabels: app: ingress-kong template: metadata: annotations: kuma.io/gateway: enabled prometheus.io/port: &quot;8100&quot; prometheus.io/scrape: &quot;true&quot; traffic.sidecar.istio.io/includeInboundPorts: &quot;&quot; labels: app: ingress-kong spec: containers: - env: - name: KONG_PROXY_LISTEN value: 0.0.0.0:8000, 0.0.0.0:8443 ssl http2 - name: KONG_PORT_MAPS value: 80:8000, 443:8443 - name: KONG_ADMIN_LISTEN value: 127.0.0.1:8444 ssl - name: KONG_STATUS_LISTEN value: 0.0.0.0:8100 - name: KONG_DATABASE value: &quot;off&quot; - name: KONG_NGINX_WORKER_PROCESSES value: &quot;2&quot; - name: KONG_ADMIN_ACCESS_LOG value: /dev/stdout - name: KONG_ADMIN_ERROR_LOG value: /dev/stderr - name: KONG_PROXY_ERROR_LOG value: /dev/stderr image: kong:2.5 lifecycle: preStop: exec: command: - /bin/sh - -c - kong quit livenessProbe: failureThreshold: 3 httpGet: path: /status port: 8100 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 name: proxy ports: - containerPort: 8000 name: proxy protocol: TCP - containerPort: 8443 name: proxy-ssl protocol: TCP - containerPort: 8100 name: metrics protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /status port: 8100 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 - env: - name: CONTROLLER_KONG_ADMIN_URL value: https://127.0.0.1:8444 - name: CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY value: &quot;true&quot; - name: CONTROLLER_PUBLISH_SERVICE value: kong/kong-proxy - name: POD_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace image: kong/kubernetes-ingress-controller:1.3 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 httpGet: path: /healthz port: 10254 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 
successThreshold: 1 timeoutSeconds: 1 name: ingress-controller ports: - containerPort: 8080 name: webhook protocol: TCP readinessProbe: failureThreshold: 3 httpGet: path: /healthz port: 10254 scheme: HTTP initialDelaySeconds: 5 periodSeconds: 10 successThreshold: 1 timeoutSeconds: 1 serviceAccountName: kong-serviceaccount </code></pre>
<p>the short answer is what @iglen_ said <a href="https://stackoverflow.com/a/69177863/12741668">in this answer</a> but I decided to explain the solution.</p> <p>When using a cloud provider the <code>LoadBalancer</code> type for Services will be managed and provisioned by the environment (see <a href="https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#external-load-balancer-providers" rel="nofollow noreferrer">k8s docs</a>) automatically, but when creating your own baremetal cluster you will need to add the service which will manage provisioning <code>IPs</code> for <code>LoadBalancer</code> type Services. One such service is <a href="https://metallb.universe.tf" rel="nofollow noreferrer">Metal-LB</a> which is built for this.</p> <p>Before installation MetalLB Check the <a href="https://metallb.universe.tf/#requirements" rel="nofollow noreferrer">requirements</a>.</p> <p>before we deploying MetalLB we need to do one step:</p> <blockquote> <p>If you’re using kube-proxy in IPVS mode, since Kubernetes v1.14.2 you have to enable strict ARP mode.</p> </blockquote> <p>Note, you don’t need this if you’re using kube-router as service-proxy because it is enabling strict ARP by default. enter this command:</p> <p><code>$ kubectl edit configmap -n kube-system kube-proxy</code></p> <p>in the opened up page search for <strong>mode</strong>, in my case the mode is equal to empty string so i don't need to change any thing but in the case that mode is set to <code>ipvs</code> as it said in the installation guide you need to set below configuration in this file:</p> <pre><code>apiVersion: kubeproxy.config.k8s.io/v1alpha1 kind: KubeProxyConfiguration mode: &quot;ipvs&quot; ipvs: strictARP: true </code></pre> <p>as the next step you need to run these commands:</p> <pre><code>$ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/namespace.yaml $ kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/metallb.yaml $ kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey=&quot;$(openssl rand -base64 128)&quot; </code></pre> <p>after running above commands we have this:</p> <pre><code>$ kubectl get all -n metallb-system NAME READY STATUS RESTARTS AGE pod/controller-6b78sff7d9-2rv2f 1/1 Running 0 3m pod/speaker-7bqev 1/1 Running 0 3m pod/speaker-txrg5 1/1 Running 0 3m pod/speaker-w7th5 1/1 Running 0 3m NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/speaker 3 3 3 3 3 kubernetes.io/os=linux 3m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/controller 1/1 1 1 3m NAME DESIRED CURRENT READY AGE replicaset.apps/controller-6b78sff7d9 1 1 1 3m </code></pre> <p>MetalLB needs some <code>IPv4 addresses</code> :</p> <pre><code>$ ip a s 1: ens160: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc mq state UP group default qlen 1000 [...] inet 10.240.1.59/24 brd 10.240.1.255 scope global dynamic noprefixroute ens160 valid_lft 425669sec preferred_lft 421269sec [...] 
</code></pre> <p>the <code>ens160</code> is my control-plane network interface and as you see its ip range is <code>10.240.1.59/24</code> so i'm going to assign a set of ip address in this network:</p> <pre><code>$ sipcalc 10.240.1.59/24 -[ipv4 : 10.240.1.59/24] - 0 [CIDR] Host address - 10.240.1.59 Host address (decimal) - 183500115 Host address (hex) - AF0031B Network address - 10.240.1.0 Network mask - 255.255.255.0 Network mask (bits) - 24 Network mask (hex) - FFFFF000 Broadcast address - 10.240.1.255 Cisco wildcard - 0.0.0.255 Addresses in network - 256 Network range - 10.240.1.0 - 10.240.1.255 Usable range - 10.240.1.1 - 10.240.1.254 </code></pre> <p>now i'm going to take <strong>10</strong> ip addresses from the <code>Usable range</code> and assign it to MetalLB. let's create a <code>configmap</code> for MetalLB :</p> <pre><code>$ sudo nano metallb-cm.yaml </code></pre> <p>paste the below configuration into metallb-cm.yaml:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: namespace: metallb-system name: config data: config: | address-pools: - name: default protocol: layer2 addresses: - 10.240.1.100-10.240.1.110 </code></pre> <p>then save the file and run this command:</p> <pre><code>$ kubectl create -f metallb-cm.yaml </code></pre> <p>now let's check our services again:</p> <pre><code>$ kubectl get services --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default bar-service ClusterIP 10.103.49.102 &lt;none&gt; 5000/TCP 15m default foo-service ClusterIP 10.102.52.89 &lt;none&gt; 5000/TCP 19m default kubernetes ClusterIP 10.96.0.1 &lt;none&gt; 443/TCP 23h kong kong-proxy LoadBalancer 10.104.79.161 10.240.1.100 80:31583/TCP,443:30053/TCP 82m kong kong-validation-webhook ClusterIP 10.109.75.104 &lt;none&gt; 443/TCP 82m kube-system kube-dns ClusterIP 10.96.0.10 &lt;none&gt; 53/UDP,53/TCP,9153/TCP 23h </code></pre> <p>as you can see the service of type <code>LoadBalancer</code> has an ip address now.</p>
<p>I am deploying a kubernetes app via github on GCP clusters. Everything works fine then.. I came across <code>cloud deploy delivery pipeline</code>..now I am stuck.</p> <p>Following the <a href="https://cloud.google.com/deploy/docs/quickstart-basic?_ga=2.141149938.-1343950568.1631260475&amp;_gac=1.47309141.1631868766.CjwKCAjw-ZCKBhBkEiwAM4qfF2mz0qQw_k68XtDo-SSlglr1_U2xTUO0C2ZF8zBOdMlnf_gQVwDi3xoCQ8IQAvD_BwE" rel="nofollow noreferrer">docs</a> here</p> <pre><code>apiVersion: skaffold/v2beta12 kind: Config build: artifacts: - image: skaffold-example deploy: kubectl: manifests: - k8s-* </code></pre> <p>In the <code>k8s</code> folder I have my deployment files like so</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: ixh-auth-depl labels: app: ixh-auth spec: replicas: 1 selector: matchLabels: app: ixh-auth template: metadata: labels: app: ixh-auth spec: containers: - name: ixh-auth image: mb/ixh-auth:latest ports: - containerPort: 3000 resources: requests: cpu: 100m memory: 500Mi </code></pre> <p>but it gives the error <code>invalid kubernetes manifest</code>. I cannot find anything to read on this and don't know how to proceed.</p>
<p>The correct way to declare the manifests was this. The wildcard probably didn't work. The folder name here would be <code>k8s-manifests</code>.</p> <pre><code>deploy: kubectl: manifests: - k8s-manifests/redis-deployment.yml - k8s-manifests/node-depl.yml - k8s-manifests/node-service.yml </code></pre>
<p>I want to give a person who doesn't work in IT the ability to trigger a K8s job. He doesn't have Docker or K8s installed.</p> <p>What are my options to grant him this possibility?</p> <p>I already thought I could create a custom service with a POST endpoint behind basic auth that would allow him to make the request with just curl / Postman, but I wonder if there is any "easier" and free alternative.</p> <p>PS: This person has an account on gitlab.com and our K8s cluster is integrated with GitLab.</p>
<p>Does the company use some CI/CD tool that requires company authentication, supports authorization, and has a browser-based GUI (e.g. Jenkins)? If "yes", then create a job in that CI/CD tool that connects to Kubernetes behind the scenes, using a Kubernetes service account, to trigger the Kubernetes Job.</p> <p>This way, the non-IT user won't have to know any of the Kubernetes details, will use their company login credentials, and will be restricted to triggering just this one job, all from a browser GUI.</p>
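<p>For illustration, here is a minimal sketch of the Kubernetes side of that setup: a service account the CI/CD job can authenticate as, limited to managing Jobs in a single namespace. All names below are hypothetical placeholders, not anything from the original question.</p> <pre><code>apiVersion: v1
kind: ServiceAccount
metadata:
  name: ci-job-trigger        # hypothetical name
  namespace: batch-jobs       # hypothetical namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: job-trigger-role
  namespace: batch-jobs
rules:
  # Only allow managing Jobs, nothing else
  - apiGroups: ["batch"]
    resources: ["jobs"]
    verbs: ["create", "get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: job-trigger-binding
  namespace: batch-jobs
subjects:
  - kind: ServiceAccount
    name: ci-job-trigger
    namespace: batch-jobs
roleRef:
  kind: Role
  name: job-trigger-role
  apiGroup: rbac.authorization.k8s.io
</code></pre> <p>The CI/CD job (e.g. a Jenkins job) would then use this service account's token and run something like <code>kubectl create job my-run --from=cronjob/my-cronjob -n batch-jobs</code>, so the non-IT user only ever clicks a button in the CI/CD GUI.</p>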
<p>Consider two or more applications "talking" to each other and deployed to the cloud (Cloud Foundry). What are the best practices for a team to follow to work (develop/test/debug) on the same instance of the applications, but in their "own" space, without creating another instance of the application in the cloud? Or should every developer have a local copy of those applications and run them in Docker/Kubernetes, for example?</p>
<p>The question is broad, but there are some directions that are worth mentioning here. So, a short answer might be:</p> <ol> <li>Run the collaborating apps whenever necessary alongside the app(s) you are developing.</li> <li>To ease this, prefer <a href="https://github.com/cloudfoundry-incubator/cflocal" rel="nofollow noreferrer">CF Local</a> (lightweight Docker containers) over <a href="https://github.com/cloudfoundry-incubator/cfdev" rel="nofollow noreferrer">CF Dev</a> (running a whole CF foundation).</li> <li>If running the other collaborating apps is too much of a challenge, create mocks that mimics their behaviors, for the interactions (or the tests scenarios) you need.</li> </ol> <p>Some words about CF Local: nowadays Cloud Foundry developers are no more recommended to run a whole Cloud Foundry foundation on their laptop anymore. When CF Dev arrived, it was already an improvement for running a whole foundation over <a href="https://bosh.io/docs/bosh-lite/" rel="nofollow noreferrer">BOSH-Lite</a> (that still has its use-cases, I use it everyday), but for a typical <code>cf push</code> developer experience, CF Local fits well and is even more lightweight.</p> <p>So, CF Local is now recommended instead. It should help you run a bunch of collaborating micro-services applications on your local machine, within standard Docker containers, running on top of a plain Docker engine. For more information, see the <a href="https://github.com/cloudfoundry-incubator/cflocal" rel="nofollow noreferrer">CF Local Github</a> page, and the <a href="https://pivotal.io/fr/cf-local#comparisonchart" rel="nofollow noreferrer">comparison chart with CF Dev</a> about use-cases for which CF Local is a good fit.</p> <p>For data services (e.g. MySQL or PostgreSQL database), CF Local already provides solutions to re-use the same services instances from your CF foundation. You may also run you own engine on your laptop, and find a way for your cf-local-pushed app to bind to these services in this context.</p> <p>Hope this might give you some interesting directions in which you can dig and find your way!</p>
<p>I have used the following configurations to deploy an app on minikube.</p> <p><strong>Deployment</strong>:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: angular-app spec: replicas: 2 selector: matchLabels: run: angular-app template: metadata: labels: run: angular-app spec: containers: - name: angular-app image: nheidloff/angular-app ports: - containerPort: 80 - containerPort: 443 </code></pre> <p><strong>Service:</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: angular-app labels: run: angular-app spec: type: NodePort ports: - port: 80 targetPort: 80 protocol: TCP name: http </code></pre> <p><strong>Service description:</strong></p> <pre><code>Name: angular-app Namespace: default Labels: run=angular-app Annotations: &lt;none&gt; Selector: &lt;none&gt; Type: NodePort IP Family Policy: SingleStack IP Families: IPv4 IP: 10.102.174.98 IPs: 10.102.174.98 Port: http 80/TCP TargetPort: 80/TCP NodePort: http 31503/TCP Endpoints: 172.17.0.3:80,172.17.0.4:80 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; </code></pre> <p>When i try to access the endpoints, the links are not responding. However after using <code>minikube service angular-app</code>. Following showed up:</p> <pre><code>|-----------|-------------|-------------|---------------------------| | NAMESPACE | NAME | TARGET PORT | URL | |-----------|-------------|-------------|---------------------------| | default | angular-app | http/80 | http://192.168.49.2:31503 | |-----------|-------------|-------------|---------------------------| 🏃 Starting tunnel for service angular-app. |-----------|-------------|-------------|------------------------| | NAMESPACE | NAME | TARGET PORT | URL | |-----------|-------------|-------------|------------------------| | default | angular-app | | http://127.0.0.1:60611 | |-----------|-------------|-------------|------------------------| </code></pre> <p>With this ip <code>http://127.0.0.1:60611</code> im able to access the app. What is the use of the endpoints given in the service description? How to access each replica? Say if i have 4 replicas, how do i access each one of them?</p>
<ul> <li><p>The endpoints provided in the service description are the endpoints for each of the pods <code>172.17.0.3:80,172.17.0.4:80</code>, when you deploy more replicas you will have more endpoints.</p> </li> <li><p>The angular-app service is bonded to port number <code>31503</code> and you can access your service on this port from cluster nodes (not your host machine).</p> </li> <li><p><code>minikube service angular-app</code> will create a tunnel between your host machine and the cluster nodes on port <code>60611</code>. This means anything that comes on <code>127.0.0.1:60611</code> will be redirected to <code>192.168.49.2:31503</code> and then one of the available endpoints.</p> </li> <li><p>The service will take care of balancing the load between all replicas automatically and you don't need to worry about it.</p> </li> <li><p>if you would like to access a specific pod you can use the below command:</p> </li> </ul> <pre><code>kubectl port-forward ${POD_NAME} 80:80 </code></pre> <p>you need to replace the pod name from the above command and the command assumes that port <code>80</code> is available on your machine.</p>
<p>Goal:</p> <p>I am trying to include a third-party module, Http_more_headers, to provide a custom name for the server in the headers.</p> <p>I am able to <a href="https://docs.nginx.com/nginx-ingress-controller/installation/building-ingress-controller-image/" rel="nofollow noreferrer">build a custom controller image</a> with the module installed, using a slightly tweaked Dockerfile.</p> <p>Issue:</p> <p>To enable this third-party module I need to add the load_module directive to nginx.conf, but I am confused about how the ingress controller interprets the nginx.conf file. If I add the load_module in the server-snippet annotation, will it work? Or do I have to modify the .tmpl file to enable the third-party module? Or should I just modify nginx.conf and COPY it during the image build itself? Which would be the best way to achieve the goal?</p>
<p>Use the &quot;main-snippets&quot; ConfigMap key under &quot;data&quot;, followed by the value &quot;load_module &lt;module_path&gt;&quot;.</p> <p>Example:</p> <pre><code>kind: ConfigMap apiVersion: v1 data: main-snippets: load_module /usr/lib/modules/xyz_module.so </code></pre> <p>Check the documentation <a href="https://docs.nginx.com/nginx-ingress-controller/configuration/global-configuration/configmap-resource/" rel="nofollow noreferrer">here</a>.</p>
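<p>If you want to confirm that the directive actually made it into the generated configuration, you can dump the running NGINX config from inside the controller pod (the namespace and pod name below are placeholders — substitute your own):</p> <pre><code># Dump the full generated config and look for the load_module line
kubectl exec -n nginx-ingress &lt;ingress-controller-pod&gt; -- nginx -T | grep load_module
</code></pre>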
<p>I'm trying to get a Kubernetes cluster running on AWS and it keeps failing validation. I am using the following command to update the cluster, <code>kops update cluster cluster.foo.com --yes</code>, and after running this, <code>kops validate cluster</code> reports:</p> <pre><code>Using cluster from kubectl context: cluster.foo.com Validating cluster cluster.api.com INSTANCE GROUPS NAME ROLE MACHINETYPE MIN MAX SUBNETS master-eu-west-2a Master t2.medium 1 1 eu-west-2a nodes Node t2.medium 2 2 eu-west-2a NODE STATUS NAME ROLE READY VALIDATION ERRORS KIND NAME MESSAGE dns apiserver Validation Failed The dns-controller Kubernetes deployment has not updated the Kubernetes cluster's API DNS entry to the correct IP address. The API DNS IP address is the placeholder address that kops creates: 203.0.113.123. Please wait about 5-10 minutes for a master to start, dns-controller to launch, and DNS to propagate. The protokube container and dns-controller deployment logs may contain more diagnostic information. Etcd and the API DNS entries must be updated for a kops Kubernetes cluster to start. Validation Failed </code></pre> <p>Please help in finding the root cause. <br> <br>1. I tried deleting and recreating the cluster multiple times but that did not help me. <br>2. I also tried manually placing the master public and private IPs in Route 53, but that broke everything.</p>
<p>Since the EC2 master node uses a dynamically assigned public IP (not an Elastic IP), each time you reboot the master node it can receive a new public IP. It happens that kops does not pick up the new IP for the Kube API. For example, if your cluster name was <code>kube.mydomain.com</code>, the API DNS would be <code>api.kube.mydomain.com</code>, as you can see in Route53.</p> <p>You'd see a timeout error when you try to reach your cluster:</p> <pre><code> $ kops rolling-update cluster Using cluster from kubectl context: kube.mydomain.com Unable to reach the kubernetes API. Use --cloudonly to do a rolling-update without confirming progress with the k8s API error listing nodes in cluster: Get &quot;https://api.kube.mydomain.com/api/v1/nodes&quot;: dial tcp 3.8.157.44:443: i/o timeout $ </code></pre> <p><strong>To fix this:</strong> Each time your EC2 master node receives a new public IP, you must manually update the DNS record for <code>api.kube.mydomain.com</code> in Route53 with that new public IP.</p> <p>Also ensure that the master's private IP is updated in the DNS record for <code>api.internal.kube.mydomain.com</code>. Otherwise, the nodes will go into a NetworkUnavailable state.</p>
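<p>For reference, one way to script that DNS update with the AWS CLI — a rough sketch only; the hosted zone ID, record name and IP below are placeholders taken from the example above, which you would replace with your own values:</p> <pre><code># Upsert the A record for the kOps API endpoint with the master's current public IP
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0000000EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "api.kube.mydomain.com",
        "Type": "A",
        "TTL": 60,
        "ResourceRecords": [{"Value": "3.8.157.44"}]
      }
    }]
  }'
</code></pre>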
<p>Is there a way to load a single image into a Kubernetes Cluster without going through Container Registries? </p> <p>For example, if I build an image locally on my laptop, can kubectl do something akin to docker save/load to make that image available in a remote Kubernetes cluster?</p>
<p>I don't think <code>kubectl</code> can make a node load an image that was not built on it, but I think you can achieve something similar with the <code>docker</code> daemon CLI (make the remote worker node build the image from your local environment).</p> <p>Something like:</p> <pre class="lang-sh prettyprint-override"><code>$ docker -H tcp://0.0.0.0:2375 build &lt;Dockerfile&gt; </code></pre> <p>Or set the Docker host as an environment variable in your local (laptop) environment:</p> <pre><code>$ export DOCKER_HOST="tcp://0.0.0.0:2375" $ docker ps </code></pre> <p>Keep in mind that your remote worker node needs access to all the dependencies required to build the image. See the <a href="https://docs.docker.com/engine/reference/commandline/dockerd/" rel="nofollow noreferrer">documentation</a>.</p> <p>Plus, I am not sure why you want to work around a remote repository, but if the reason is that you don't want to expose your image publicly, I suggest you set up a private Docker registry in the long term.</p>
<p>I've tried to create user accounts with a client certificate.</p> <p>I followed two tutorials but got stuck with both options on an error with the following message:</p> <p><a href="https://medium.com/better-programming/k8s-tips-give-access-to-your-clusterwith-a-client-certificate-dfb3b71a76fe" rel="nofollow noreferrer">https://medium.com/better-programming/k8s-tips-give-access-to-your-clusterwith-a-client-certificate-dfb3b71a76fe</a></p> <p><a href="https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/" rel="nofollow noreferrer">https://docs.bitnami.com/kubernetes/how-to/configure-rbac-in-your-kubernetes-cluster/</a></p> <p>I set the right user, the right server and the right context. I also set the namespace, but I still get the same error.</p> <pre><code>&gt; kubectl get pods You must be logged in to the server (Unauthorized) </code></pre> <p>Has someone already experienced something similar? Or does someone know what I'm doing wrong?</p> <p>My k3s cluster version is 1.15.4.</p>
<p>From the error you posted, your user is getting failed in Authentication phase only (HTTP error code: <strong>401</strong>), you can validate the same using:</p> <pre><code>$ k get pods -v=6 ... I0123 16:34:18.842853 29373 helpers.go:203] server response object: [{ ... "code": 401 }] F0123 16:34:18.842907 29373 helpers.go:114] error: You must be logged in to the server (Unauthorized) </code></pre> <p>Debug your setup using below steps:</p> <ol> <li><p>Verify you are using the correct context and correct user as you expected (with * in CURRENT column):</p> <pre><code>$ kubectl config get-contexts CURRENT NAME CLUSTER AUTHINFO NAMESPACE * context-user-ca-signed kubernetes user-user-ca-signed ns1 kubernetes-admin@kubernetes kubernetes kubernetes-admin </code></pre></li> <li><p>Verify the CA certificate for Kubernetes API Server (assuming API server running as a Pod):</p> <pre><code>$ sudo cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep -i "\-\-client-ca-file" - --client-ca-file=/etc/kubernetes/pki/ca.crt $ openssl x509 -in /etc/kubernetes/pki/ca.crt -text -noout | grep -i "Issuer:\|Subject:" Issuer: CN = kubernetes Subject: CN = kubernetes </code></pre></li> <li><p>Verify your user's certificate is signed by above CA (Issuer CN of user's cert is same as Subject CN of CA cert, "kubernetes" here), which is configured in API server:</p> <pre><code>$ kubectl config view --raw -o jsonpath="{.users[?(@.name == \"user-user-ca-signed\")].user.client-certificate-data}" | base64 -d &gt; client.crt $ openssl x509 -in client.crt -text -noout | grep -i "Issuer:\|Subject:" Issuer: CN = kubernetes Subject: C = IN, ST = Some-State, O = Some-Organization, CN = user-ca-signed </code></pre></li> </ol> <p>If the above steps are fine for the user you created, you shall pass <a href="https://kubernetes.io/docs/reference/access-authn-authz/authentication/" rel="nofollow noreferrer">Authentication</a> phase. But <a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/" rel="nofollow noreferrer">Authorization</a> phase still needs to be configured using RBAC, ABAC or any other supported authorization mode, else you may still get HTTP error code: <strong>403</strong></p> <pre><code>$ kubectl get pods -v=6 I0123 16:59:41.350501 28553 helpers.go:203] server response object: [{ ... "code": 403 }] F0123 16:59:41.351080 28553 helpers.go:114] Error from server (Forbidden): pods is forbidden: User "user-ca-signed" cannot list resource "pods" in API group "" in the namespace "ns1": No policy matched. </code></pre>
<p>I have a database (Oracle) on my server and I have multiple processes that use this database, running in Kubernetes in Google Cloud. To establish the connection to the database, I need to add the IP address of my application node to the database vault.</p> <p>I don't want to add 3 different IPs to the vault; instead I want a common IP address. Is there any way to do that? In the real environment, I have more than 100 processes which access the same database.</p>
<p>There are a lot of ways of doing that, but what you want is some kind of load balancer or proxy that presents all of your 3 IPs to the database as a single IP or DNS hostname.</p> <p>The easiest way of doing that is to create an NGINX instance (do that inside your Kubernetes cluster, it is very easy - <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/</a>) and then configure that NGINX instance to route your applications' traffic through it; you would then use the NGINX Service to reach your DB instance, as sketched below. There's an example here - <a href="http://nginx.org/en/docs/http/load_balancing.html" rel="nofollow noreferrer">http://nginx.org/en/docs/http/load_balancing.html</a></p> <p>But you can look at other great solutions too, such as <a href="https://istio.io/" rel="nofollow noreferrer">Istio</a> or <a href="https://traefik.io/" rel="nofollow noreferrer">Traefik</a>.</p>
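<p>To make the idea concrete, here is a rough sketch of what the NGINX side could look like for a TCP database such as Oracle. This uses the <code>stream</code> module rather than HTTP load balancing, and the hostname and port 1521 are assumptions, not values from the question:</p> <pre><code># nginx.conf fragment - proxy TCP connections to the Oracle listener
stream {
    upstream oracle_db {
        # The database host outside the cluster (placeholder address)
        server db.example.internal:1521;
    }

    server {
        listen 1521;
        proxy_pass oracle_db;
    }
}
</code></pre> <p>Your pods would then connect to the NGINX Service instead of the database directly, so the database vault only needs to allow the single address NGINX uses to reach it.</p>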
<p>I am unable to issue a working certificate for my ingress host in k8s. I use a ClusterIssuer to issue certificates and the same ClusterIssuer has issued certificates in the past for my ingress hosts under my domain name *xyz.com. But all of a sudden neither i can issue new Certificate with state 'True' for my host names nor a proper certificate secret (kubernetes.io/tls) gets created (but instead an Opaque secret gets created).</p> <pre><code> **strong text** **kubectl describe certificate ingress-cert -n abc** Name: ingress-cert Namespace: abc Labels: &lt;none&gt; Annotations: &lt;none&gt; API Version: cert-manager.io/v1beta1 Kind: Certificate Metadata: Creation Timestamp: 2021-09-08T07:48:32Z Generation: 1 Owner References: API Version: extensions/v1beta1 Block Owner Deletion: true Controller: true Kind: Ingress Name: test-ingress UID: c03ffec0-df4f-4dbb-8efe-4f3550b9dcc1 Resource Version: 146643826 Self Link: /apis/cert-manager.io/v1beta1/namespaces/abc/certificates/ingress-cert UID: 90905ab7-22d2-458c-b956-7100c4c77a8d Spec: Dns Names: abc.xyz.com Issuer Ref: Group: cert-manager.io Kind: ClusterIssuer Name: letsencrypt Secret Name: ingress-cert Status: Conditions: Last Transition Time: 2021-09-08T07:48:33Z Message: Issuing certificate as Secret does not exist Reason: DoesNotExist Status: False Type: Ready Last Transition Time: 2021-09-08T07:48:33Z Message: Issuing certificate as Secret does not exist Reason: DoesNotExist Status: True Type: Issuing Next Private Key Secret Name: ingress-cert-gdq7g Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Issuing 11m cert-manager Issuing certificate as Secret does not exist Normal Generated 11m cert-manager Stored new private key in temporary Secret resource &quot;ingress-cert-gdq7g&quot; Normal Requested 11m cert-manager Created new CertificateRequest resource &quot;ingress-cert-dp6sp&quot; </code></pre> <p>I checked the certificate request and it contains no events. Also i can see no challenges. I have added the logs below. 
Any help would be appreciated</p> <pre><code> kubectl describe certificaterequest ingress-cert-dp6sp -n abc Namespace: abc Labels: &lt;none&gt; Annotations: cert-manager.io/certificate-name: ingress-cert cert-manager.io/certificate-revision: 1 cert-manager.io/private-key-secret-name: ingress-cert-gdq7g API Version: cert-manager.io/v1beta1 Kind: CertificateRequest Metadata: Creation Timestamp: 2021-09-08T07:48:33Z Generate Name: ingress-cert- Generation: 1 Owner References: API Version: cert-manager.io/v1alpha2 Block Owner Deletion: true Controller: true Kind: Certificate Name: ingress-cert UID: 90905ab7-22d2-458c-b956-7100c4c77a8d Resource Version: 146643832 Self Link: /apis/cert-manager.io/v1beta1/namespaces/abc/certificaterequests/ingress-cert-dp6sp UID: fef72617-fc1d-4384-9f4b-a7e4502582d8 Spec: Issuer Ref: Group: cert-manager.io Kind: ClusterIssuer Name: letsencrypt Request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0KTUlJQ2Z6Q0NBV2NDQVFBd0FEQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxMNgphTGhZNjhuNnhmMUprYlF5ek9OV1J4dGtLOXJrbjh5WUtMd2l4ZEFMVUl0TERra0t6Uksyb3loZzRMMThSQmQvCkNJaGJ5RXBYNnlRditKclRTOC84T1A0MWdwTUxBLzROdVhXWWtyeWhtZFdNaFlqa21OOFpiTUk1SlZZcVV2cVkKRWQ1b2cydmVmSjU1QlJPRExsd0o3YjBZa3hXckUwMGJxQ1ExWER6ZzFhM08yQ2JWd1NQT29WV2x6Uy9CdzRYVgpMeVdMS3E4QU52b2dZMUxXRU8xcG9YelRObm9LK2U2YVZueDJvQ1ZLdGxPaG1iYXRHYXNSaTJKL1FKK0dOWHovCnFzNXVBSlhzYVErUzlxOHIvbmVMOXNPYnN2OWd1QmxCK09yQVg2eHhkNHZUdUIwVENFU00zWis2c2MwMFNYRXAKNk01RlY3dkFFeDQyTWpuejVoa0NBd0VBQWFBNk1EZ0dDU3FHU0liM0RRRUpEakVyTUNrd0p3WURWUjBSQkNBdwpIb0ljY25kemMyZHdMbU5zYjNWa1oyRjBaUzV0YVdOeWIyWnBiaTVrWlRBTkJna3Foa2lHOXcwQkFRc0ZBQU9DCkFRRUFTQ0cwTXVHMjZRbVFlTlBFdmphNHZqUUZOVFVINWVuMkxDcXloY2ZuWmxocWpMbnJqZURuL2JTV1hwdVIKTnhXTnkxS0EwSzhtMG0rekNPbWluZlJRS1k2eHkvZU1WYkw4dTgrTGxscDEvRHl3UGxvREE2TkpVOTFPaDM3TgpDQ0E4NWphLy9FYVVvK0p5aHBzaTZuS1d4UXRpYXdmYXhuNUN4SENPWGF5Qzg0Q0IzdGZ2WWp6YUF3Ykx4akxYCmxvd09LUHNxSE51ZktFM0NtcjZmWGgramd5VWhxamYwOUJHeGxCWEFsSVNBNkN5dzZ2UmpWamFBOW82TmhaTXUKbmdheWZON00zUzBBYnAzVFFCZW8xYzc3QlFGaGZlSUE5Sk51SWtFd3EvNXppYVY1RDErNUxSSnR5ZkVpdnJLTwpmVjQ5WkpCL1BGOTdiejhJNnYvVW9CSkc2Zz09Ci0tLS0tRU5EIENFUlRJRklDQVRFIFJFUVVFU1QtLS0tLQo= Status: Conditions: Last Transition Time: 2021-09-08T07:48:33Z Message: Waiting on certificate issuance from order abc/ingress-cert-dp6sp-3843501305: &quot;&quot; Reason: Pending Status: False Type: Ready Events: &lt;none&gt; </code></pre> <p>Here is the ingress.yaml</p> <pre><code>kind: Ingress apiVersion: extensions/v1beta1 metadata: name: test-ingress annotations: nginx.ingress.kubernetes.io/proxy-body-size: 20m kubernetes.io/ingress.class: &quot;nginx&quot; cert-manager.io/cluster-issuer: &quot;letsencrypt&quot; spec: rules: - host: abc.xyz.com http: paths: - path: /static backend: serviceName: app-service servicePort: 80 - path: / backend: serviceName: app-service servicePort: 8000 tls: - hosts: - abc.xyz.com secretName: ingress-cert </code></pre> <p>Here is the clusterissuer:</p> <pre><code>apiVersion: cert-manager.io/v1alpha2 kind: ClusterIssuer metadata: name: letsencrypt spec: acme: server: https://acme-v02.api.letsencrypt.org/directory email: [email protected] privateKeySecretRef: name: letsencrypt-key solvers: - http01: ingress: class: nginx </code></pre>
<p>Works only with Nginx Ingress Controller</p> <p>I was using ClusterIssuer but I changed it to Issuer and it works.</p> <p>-- Install cert-manager (Installed version 1.6.1) and be sure that the three pods are running</p> <p>-- Create an Issuer by appling this yml be sure that the issuer is running.</p> <pre><code>apiVersion: cert-manager.io/v1 kind: Issuer metadata: name: letsencrypt-nginx namespace: default spec: acme: server: https://acme-v02.api.letsencrypt.org/directory email: [email protected] privateKeySecretRef: name: letsencrypt-nginx-private-key solvers: - http01: ingress: class: nginx </code></pre> <p>-- Add this to your ingress annotations</p> <pre><code>cert-manager.io/issuer: letsencrypt-nginx </code></pre> <p>-- Add the secretName to your ingress spec.tls.hosts spec:</p> <pre><code> tls: - hosts: - yourdomain.com secretName: letsencrypt-nginx </code></pre> <p>Notice that the Nginx Ingress Controller is able to generate the Certificate CRD automatically via a special annotation: cert-manager.io/issuer. This saves work and time, because you don't have to create and maintain a separate manifest for certificates as well (only the Issuer manifest is required). For other ingresses you may need to provide the Certificate CRD as well.</p>
<p>I am following Cloud Guru K8S course and have issues with the template they provided. I can't see what’s wrong.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: blue-deployment spec: replicas: 1 selector: matchLabels: app: bluegreen-test color: blue template: metadata: labels: app: bluegreen-test color: blue spec: containers: - name: nginx image: linuxacademycontent/ckad-nginx:blue ports: - containerPort: 80 </code></pre> <p>When I run</p> <pre><code>kubectl apply -f my-deployment.yml </code></pre> <p>I get</p> <pre><code>error: error parsing blue-deployment.yml: error converting YAML to JSON: yaml: line 4: found character that cannot start any token </code></pre> <p>What's wrong with this template? It's nearly identical to the official example deployment definition <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#creating-a-deployment</a></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 </code></pre>
<p>Your template is correct, it's just YAML that's complaining. YAML doesn't accept tabs for indentation, only spaces (2 is the usual convention). If you copied + pasted, it could have introduced tab characters. If you want to stay in the terminal, run <code>vim my-deployment.yml</code> and make sure each &quot;tab&quot; is actually 2 spaces. This is quite time-consuming, especially in vim, so the alternative is to use a text editor like Sublime Text to update it.</p>
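<p>As a quick way to spot the offending characters without eyeballing the whole file (a small sketch, assuming GNU grep and coreutils are available):</p> <pre><code># List any lines that contain a tab character
grep -nP '\t' my-deployment.yml

# Or show invisible characters: tabs appear as ^I
cat -A my-deployment.yml
</code></pre>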
<h3>Implementation Goal</h3> <p>Expose Zookeeper instance, running on kubernetes, to the internet.</p> <p><em>(configuration &amp; version information provided at the bottom)</em></p> <h3>Implementation Attempt</h3> <p>I currently have a <code>minikube</code> cluster running on <code>ubuntu 14.04</code>, backed by <code>docker</code> containers. I'm running a bare metal k8s cluster, and I'm trrying to expose a zookeeper service to <em>the internet</em>. Seeing as my cluster is not running on a cloud provider, I set up <code>metallb</code>, in order to provide a network-loadbalancer implementation for my zookeeper service.</p> <p>On startup everything looks good, an external IP is assigned and I can access it from the same host via a curl command.</p> <pre><code>$ kubectl get pods -n metallb-system NAME READY STATUS RESTARTS AGE controller-5c9894b5cd-9gh8m 1/1 Running 0 5h59m speaker-j2z8q 1/1 Running 0 5h59m $ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.xxx.xxx.xxx &lt;none&gt; 443/TCP 6d19h zk-cs LoadBalancer 10.xxx.xxx.xxx 172.1.1.x 2181:30035/TCP 56m zk-hs LoadBalancer 10.xxx.xxx.xxx 172.1.1.x 2888:30664/TCP,3888:31113/TCP 6m15s </code></pre> <p>When I curl the above mentioned external IP's, I get a valid response</p> <pre><code>$ curl -D- &quot;http://172.1.1.x:2181&quot; curl: (52) Empty reply from server </code></pre> <p>So far it all looks good, I can access the LB from outside the cluster with no issues, but this is where my lack of Kubernetes/Networking knowledge gets me.I'm finding it impossible to expose this LB to the internet. I've tried running <code>minikube tunnel</code> which I had high hopes for, only to be deeply disappointed.</p> <p>Running a curl command from another node, whilst minikube tunnel is running will just see the request time out.</p> <pre><code>$ curl -D- &quot;http://172.1.1.x:2181&quot; curl: (28) Failed to connect to 172.1.1.x port 2181: Timed out </code></pre> <p>At this point, as I mentioned before, I'm stuck. 
Is there any way that I can get this service exposed to the internet without giving my soul to <code>AWS</code> or <code>GCP</code>?</p> <p>Any help will be greatly appreciated.</p> <h3>Service Configuration</h3> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: zk-hs labels: app: zk spec: selector: app: zk ports: - port: 2888 targetPort: 2888 name: server protocol: TCP - port: 3888 targetPort: 3888 name: leader-election protocol: TCP clusterIP: &quot;&quot; type: LoadBalancer --- apiVersion: v1 kind: Service metadata: name: zk-cs labels: app: zk spec: selector: app: zk ports: - name: client protocol: TCP port: 2181 targetPort: 2181 type: LoadBalancer --- apiVersion: policy/v1beta1 kind: PodDisruptionBudget metadata: name: zk-pdb spec: selector: matchLabels: app: zk maxUnavailable: 1 --- apiVersion: apps/v1 kind: StatefulSet metadata: name: zk spec: selector: matchLabels: app: zk serviceName: zk-hs replicas: 1 updateStrategy: type: RollingUpdate podManagementPolicy: OrderedReady template: metadata: labels: app: zk spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: &quot;app&quot; operator: In values: - zk topologyKey: &quot;kubernetes.io/hostname&quot; containers: - name: zookeeper imagePullPolicy: Always image: &quot;library/zookeeper:3.6&quot; resources: requests: memory: &quot;1Gi&quot; cpu: &quot;0.5&quot; ports: - containerPort: 2181 name: client - containerPort: 2888 name: server - containerPort: 3888 name: leader-election volumeMounts: - name: datadir mountPath: /var/lib/zookeeper - name: zoo-config mountPath: /conf volumes: - name: zoo-config configMap: name: zoo-config securityContext: fsGroup: 2000 runAsUser: 1000 runAsNonRoot: true volumeClaimTemplates: - metadata: name: datadir spec: accessModes: [ &quot;ReadWriteOnce&quot; ] resources: requests: storage: 10Gi --- apiVersion: v1 kind: ConfigMap metadata: name: zoo-config namespace: default data: zoo.cfg: | tickTime=10000 dataDir=/var/lib/zookeeper clientPort=2181 initLimit=10 syncLimit=4 </code></pre> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: ConfigMap metadata: namespace: metallb-system name: config data: config: | address-pools: - name: default protocol: layer2 addresses: - 172.1.1.1-172.1.1.10 </code></pre> <p>minikube: <code>v1.13.1</code><br /> docker: <code>18.06.3-ce</code></p>
<p>You can do it with minikube, but minikube is really meant for testing things in your local environment. By default it does not have the right iptables rules in place for this, and yes, you can adjust that, but if your goal is simply to run a cluster without any cloud provider, I highly recommend using <code>kubeadm</code> (<a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/" rel="nofollow noreferrer">https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/</a>).</p> <p>This tool gives you a very customizable cluster configuration, and you will be able to sort out your networking setup without headaches.</p>
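<p>For reference, bootstrapping a bare-metal cluster with kubeadm is roughly the following (a sketch based on the linked docs; the pod CIDR and the CNI plugin you choose are up to you, and MetalLB can then be installed on top exactly as you did with minikube):</p>

<pre><code># on the machine that will become the control plane
sudo kubeadm init --pod-network-cidr=192.168.0.0/16

# make kubectl work for your user
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# install a CNI plugin of your choice, then join the worker nodes
# using the "kubeadm join ..." command printed by kubeadm init
</code></pre>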
<p>Kubernetes readiness (http) probe is failing, however liveness (http) is working fine without readiness. Using the following, tested with different initialDelaySeconds. </p> <pre><code>readinessProbe: httpGet: path: /healthz port: 8080 initialDelaySeconds: 120 periodSeconds: 10 </code></pre> <pre><code>livenessProbe: httpGet: path: /healthz port: 8080 initialDelaySeconds: 120 periodSeconds: 10 </code></pre>
<p>It's working fine after increasing <code>initialDelaySeconds</code> to 150 seconds. The container sometimes takes longer than 120 seconds to come up and is only occasionally ready in under 120 seconds, so the 120-second delay was not always enough.</p>
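<p>For reference, the adjusted probe is the same as in the question with only the delay changed:</p>

<pre><code>readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 150
  periodSeconds: 10
</code></pre>

<p>On newer Kubernetes versions a <code>startupProbe</code> is arguably a cleaner way to handle a slow-starting container, since it holds off the liveness and readiness checks until the application has actually started, but the larger delay above is enough to solve the immediate problem.</p>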
<p>What do I need to run / install kubernetes on a node (I am referring to the linux-kernel level)? If I have a custom linux distribution with docker installed (from source), can I run k8s on it, or does it need specific kernel config/flags to be enabled?</p> <p>Is any linux kernel that is compatible with docker also compatible with k8s, or are there more modifications required at the kernel level (since k8s officially supports specific distros like Ubuntu, CentOS, Debian.... but not all)?</p>
<p>I have recently applied internal service deployment process in develop environment at work, using internal kubernetes cluster on top of Centos7. I am also a beginner but as far as I know,</p> <p>if I have a custom linux distribution with docker installed (from source), could I ran k8s on it??</p> <ul> <li>Yes. you can install and run kubernetes cluster on a custom linux distribution, but your linux distribution needs to meet the minimum requirements such as kernel version(3.10+). (ie Ubuntu16.04+ .. Centos 7)</li> </ul> <p>Any linux-kernel compatible with docker is compatible too with k8s, or there are more modifications at kernel level (since actually k8s supports sepecifics dists like Ubuntu, CentOS, Debian.... but not any)?</p> <ul> <li>Since kubernetes does not run any container but let containers communicate one another within the clustered hosts, I agree with the former (Any linux-kernel compatible with docker is compatible too with k8s). (Resource requirement is a different question.)</li> </ul> <p>FYI, My cluster uses:</p> <pre class="lang-sh prettyprint-override"><code>$ cat /etc/redhat-release CentOS Linux release 7.6.1810 (Core) $ uname -a Linux k8s-master.local 3.10.0-957.10.1.el7.x86_64 #1 SMP Mon Mar 18 15:06:45 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux $ docker --version Docker version 18.09.5, build e8ff056 $ kubectl version Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"} Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:16Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"} $ kubeadm version kubeadm version: &amp;version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:08:49Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"} $ kubelet --version Kubernetes v1.14.1 </code></pre>
<p>In my <code>kustomization.yaml</code> I have:</p> <pre class="lang-yaml prettyprint-override"><code>... secretGenerator: - name: db-env behavior: create envs: - my.env patchesStrategicMerge: - app.yaml </code></pre> <p>And then in my <code>app.yaml</code> (the patch) I have:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: app-deployment spec: template: spec: containers: - name: server envFrom: - secretRef: name: db-env </code></pre> <p>When I try build this via <code>kustomize build k8s/development</code> I get back out:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment ... spec: containers: - envFrom: - secretRef: name: db-env name: server </code></pre> <p>When it should be:</p> <pre class="lang-yaml prettyprint-override"><code> - envFrom: - secretRef: name: db-env-4g95hhmhfc </code></pre> <p>How do I get the <code>secretGenerator</code> name hashing to apply to <code>patchesStrategicMerge</code> too?</p> <p>Or alternatively, what's the proper way to inject some environment vars into a deployment for a specific overlay?</p> <p>This for development.</p> <hr /> <p>My file structure is like:</p> <pre><code>❯ tree k8s k8s ├── base │   ├── app.yaml │   └── kustomization.yaml ├── development │   ├── app.yaml │   ├── golinks.sql │   ├── kustomization.yaml │   ├── mariadb.yaml │   ├── my.cnf │   └── my.env └── production ├── ingress.yaml └── kustomization.yaml </code></pre> <p>Where <code>base/kustomization.yaml</code> is:</p> <pre class="lang-yaml prettyprint-override"><code>namespace: go-mpen resources: - app.yaml images: - name: server newName: reg/proj/server </code></pre> <p>and <code>development/kustomization.yaml</code> is:</p> <pre class="lang-yaml prettyprint-override"><code>resources: - ../base - mariadb.yaml configMapGenerator: - name: mariadb-config files: - my.cnf - name: initdb-config files: - golinks.sql # TODO: can we mount this w/out a config file? secretGenerator: - name: db-env behavior: create envs: - my.env patchesStrategicMerge: - app.yaml </code></pre>
<p>This works fine for me with <code>kustomize v3.8.4</code>. Can you please check your version and if <code>disableNameSuffixHash</code> is not perhaps set to you true.</p> <p>Here are the manifests used by me to test this:</p> <pre><code>➜ app.yaml deployment.yaml kustomization.yaml my.env </code></pre> <p>app.yaml</p> <pre><code>kind: Deployment metadata: name: app-deployment spec: template: spec: containers: - name: server envFrom: - secretRef: name: db-env </code></pre> <p>deplyoment.yaml</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: app-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 </code></pre> <p>and my kustomization.yaml</p> <pre><code>apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization secretGenerator: - name: db-env behavior: create envs: - my.env patchesStrategicMerge: - app.yaml resources: - deployment.yaml </code></pre> <p>And here is the result:</p> <pre><code>apiVersion: v1 data: ASD: MTIz kind: Secret metadata: name: db-env-f5tt4gtd7d type: Opaque --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: nginx name: app-deployment spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - image: nginx:1.14.2 name: nginx ports: - containerPort: 80 - envFrom: - secretRef: name: db-env-f5tt4gtd7d name: server </code></pre>
<p>I want to convert this file into json format, does anyone know how to do it?</p> <p>This is the yaml file :</p> <pre><code>apiVersion: v1 kind: Namespace metadata: name: theiaide --- apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: #name: rewrite name: theiaide namespace: theiaide annotations: #nginx.ingress.kubernetes.io/rewrite-target: /$2 kubernetes.io/ingress.class: nginx spec: rules: - host: ide.quantum.com http: paths: - path: / backend: serviceName: theiaide servicePort: 80 --- apiVersion: v1 kind: Service metadata: name: theiaide namespace: theiaide spec: ports: - port: 80 targetPort: 3000 selector: app: theiaide --- apiVersion: apps/v1 kind: Deployment metadata: labels: app: theiaide name: theiaide namespace: theiaide spec: selector: matchLabels: app: theiaide replicas: 1 template: metadata: labels: app: theiaide spec: containers: - image: theiaide/theia-python imagePullPolicy: IfNotPresent name: theiaide ports: - containerPort: 3000 </code></pre> <p><strong>code.py</strong></p> <pre><code>import json,yaml txt=&quot;&quot; with open(r&quot;C:\Users\77922\PycharmProjects\ide-ingress.yaml&quot;,'r') as f: for a in f.readlines(): txt=txt+a print(yaml.dump(yaml.load_all(txt),default_flow_style=False)) print(json.dumps(yaml.load_all(txt),indent=2,sort_keys=True)) </code></pre> <p>when I run python code.py ,and I got the error:</p> <pre><code>TypeError: can't pickle generator objects </code></pre> <p>I don’t know if it is the reason for this --- delimiter, because there are multiple --- delimiters in my yaml file</p> <p>Then I tried the following function:</p> <pre><code>def main(): # config.load_kube_config() f = open(r&quot;C:\Users\77922\PycharmProjects\ide-ingress.yaml&quot;,&quot;r&quot;) generate_dict = yaml.load_all(f,Loader=yaml.FullLoader) generate_json = json.dumps(generate_dict) print(generate_json) # dep = yaml.load_all(f) # k8s_apps_v1 = client.AppsV1Api() # resp = k8s_apps_v1.create_namespaced_deployment( # body=dep, namespace=&quot;default&quot;) # print(&quot;Deployment created. status='%s'&quot; % resp.metadata.name) if __name__ == '__main__': main() </code></pre> <pre><code> raise TypeError(f'Object of type {o.__class__.__name__} ' TypeError: Object of type generator is not JSON serializable </code></pre> <p>I just want to use this yaml file to call kubernetes api to generate namespace</p>
<p>Your file contains more than one document, so you should use the <code>safe_load_all</code> function rather than <code>yaml.load</code>. Note that <code>safe_load_all</code> returns a generator (which is exactly what <code>json.dumps</code> is complaining about), so wrap it in <code>list()</code> to materialize the documents before dumping or serializing them.</p> <pre><code>import json,yaml txt=&quot;&quot; with open(r&quot;C:\Users\77922\PycharmProjects\ide-ingress.yaml&quot;,'r') as f: for a in f.readlines(): txt=txt+a print(yaml.dump_all(yaml.safe_load_all(txt),default_flow_style=False)) print(json.dumps(list(yaml.safe_load_all(txt)),indent=2,sort_keys=True)) </code></pre>
<p>I am using this guide for deploying Spiffe on K8s Cluster &quot;https://spiffe.io/docs/latest/try/getting-started-k8s/&quot;</p> <p>One of the steps in this process is running the command &quot;kubectl apply -f client-deployment.yaml&quot; which deploys spiffe client agent.</p> <p>But the pods keeps on getting in the error state</p> <p><code>Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: &quot;sleep&quot;: executable file not found in $PATH: unknown</code></p> <p>Image used : ghcr.io/spiffe/spire-agent:1.5.1</p>
<p>It seems connected to <a href="https://github.com/spiffe/spire-tutorials/pull/98" rel="nofollow noreferrer">this PR</a> from 3 days ago (there is no longer a &quot;sleep&quot; executable in the image).</p> <blockquote> <p>SPIRE is moving away from the alpine Docker release images in favor of scratch images that contain only the release binary to minimize the size of the images and include only the software that is necessary to run in the container.</p> </blockquote> <p>You should report the issue and use</p> <blockquote> <p>gcr.io/spiffe-io/spire-agent:1.2.3</p> </blockquote> <p>(the last image they used) meanwhile.</p>
<p>I'm using Kubernetes with Istio which comes with traffic management. All backend api endpoints starts with <code>/api/**</code> followed by specific uri except frontend service. Frontend service has no any general uri prefix.</p> <p>What i want to achieve is in the istio <code>VirtualService</code> use a regular expression that basically says, if a requested uri does not start with <code>/api/</code>, let it be served by frontend-service.</p> <p>This is my <code>VirtualService</code></p> <pre><code>apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: {{ .Release.Name }}-frontend-ingress namespace: default spec: hosts: {{ include "application.domain" . }} gateways: - iprocure-gateway http: - match: - uri: regex: '^(?!\/api\/).*' route: - destination: host: {{ printf "%s.%s.svc.cluster.local" .Values.frontendService.serviceName .Release.Name }} port: number: {{ .Values.frontendService.service.port }} </code></pre> <p>What is the <code>regex</code> value that I can use to make all request that does not start with <code>/api/</code> be served with frontend-service</p>
<p>See <a href="https://istio.io/news/releases/1.4.x/announcing-1.4/upgrade-notes/" rel="nofollow noreferrer">Istio 1.4.x upgrade notes</a> under the heading "Regex engine changes". Envoy has moved to the Google Re2 "safe" regex engine, which doesn't support negative look-ahead. An opt-out via an environment variable to Pilot is possible but will be removed in future versions. I've yet to find a long-term solution to this other than writing long regexes.</p> <p>For your particular case, try</p> <pre><code>regex: "^/((a$)|(ap$)|(api$)|([^a].*)|(a[^p].*)|(ap[^i].*)|(api[^/].*))" </code></pre>
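<p>Plugged into the VirtualService from the question it would look roughly like this (a sketch; the destination host and port below are placeholders for your frontend service values):</p>

<pre><code>  http:
  - match:
    - uri:
        regex: "^/((a$)|(ap$)|(api$)|([^a].*)|(a[^p].*)|(ap[^i].*)|(api[^/].*))"
    route:
    - destination:
        host: frontend-service.default.svc.cluster.local
        port:
          number: 80
</code></pre>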
<p>I frequently do port-forward on my mac but when I hit the CMD + C on my terminal to quit the port-forward, the process didn't get killed. i have to kill it in order to do a new port-forward. Do you know how to solve this issue on mac?</p> <pre><code>[2] + 43679 suspended kubectl port-forward pod/pod-0 27017:27017 </code></pre> <p>Re-try</p> <pre><code>kubectl port-forward pod/pod-2 27017:27017 Unable to listen on port 27017: Listeners failed to create with the following errors: [unable to create listener: Error listen tcp4 127.0.0.1:27017: bind: address already in use unable to create listener: Error listen tcp6 [::1]:27017: bind: address already in use] error: unable to listen on any of the requested ports: [{27017 27017}] ➜ ~ kill -9 43679 </code></pre>
<p>On Mac, the key combination you want is <code>CTRL+C</code> (not CMD+C) to quit/kill the process; the port forwarding is only in effect while the kubectl process is running. The &quot;suspended&quot; message in your output is what <code>CTRL+Z</code> produces: it stops the job but keeps it alive, so the port stays bound and the next port-forward fails with &quot;address already in use&quot;.</p>
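<p>If a suspended or orphaned port-forward is already holding the port, you can clean it up from the shell, for example (using the job number and port from the question):</p>

<pre><code># bring the suspended job back to the foreground, then CTRL+C it
fg %2

# or find and kill whatever still has the port bound
lsof -nP -iTCP:27017 -sTCP:LISTEN
kill &lt;pid&gt;
</code></pre>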
<p>The log of istio-ingressgateway:</p> <pre><code>[2019-11-11T06:09:02.823Z] "GET /notebook/name/test-root1/ HTTP/2" 404 -... outbound|80||test-root.name.svc.cluster.local - ...- </code></pre> <p>My http request with uri <code>/notebook/name/test-root1/</code> was forwarded to the host <code>test-root.name.svc.cluster.local</code>, even though there are two VirtualServices named "test-root" and "test-root1" respectively. This leads to a 404 error for test-root1.</p> <p>Any ideas about how to fix it? Thanks a lot, XD.</p>
<p>I figured out yesterday how this problem came about: the Kubeflow notebook-controller uses the istio proxy and sets the match scheme to <code>prefix</code>, but it carelessly set the match uri to <code>xxx/xxx</code> (without a trailing slash), which causes a request like <code>xxx/xxxabc</code> to be mis-forwarded to the <code>xxx/xxx</code> route.</p> <p>They fixed this bug a few days ago, as per the PR mentioned in the comment.</p>
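<p>To illustrate the difference with a hypothetical VirtualService snippet (not the controller's actual manifest): a prefix match without a trailing slash also captures sibling paths that merely start with the same string, while one with the trailing slash does not.</p>

<pre><code># matches /notebook/name/test-root/... AND /notebook/name/test-root1/...
- match:
  - uri:
      prefix: /notebook/name/test-root

# matches only /notebook/name/test-root/...
- match:
  - uri:
      prefix: /notebook/name/test-root/
</code></pre>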
<p>I have upgraded my nginx controller from old stable repository to new ingress-nginx repository version 3.3.0. Upgrade was succeeded without an issue.</p> <p>My ingress resources stopped working after the upgrade and after annotating <code>kubernetes.io/ingress.class: nginx</code> to the existing resources, I could see the below message in the nginx pods. This is the output for my kiali ingress resources.</p> <pre><code>I1008 10:53:00.046817 9 event.go:278] Event(v1.ObjectReference{Kind:&quot;Ingress&quot;, Namespace:&quot;istio-system&quot;, Name:&quot;istio-kiali&quot;, UID:&quot;058a7b68-191a-4cdf-a0dd-023faffbb6a5&quot;, APIVersion:&quot;networking.k8s.io/v1beta1&quot;, ResourceVersion:&quot;26912&quot;, FieldPath:&quot;&quot;}): type: 'Normal' reason: 'UPDATE' Ingress istio-system/istio-kiali </code></pre> <p>Still I'm not able to access it. Does anyone have any idea about the issue?</p> <p>Your valuable thoughts and suggestions would be appreciated.</p>
<p>I'm using the upstream nginx ingress and installing it with a helm release. I have carefully gone through the chart values and overridden them as shown below, and now it is working fine: all of my ingresses came online to serve traffic, even without the annotation.</p> <p>No errors appeared in the logs. I suppose my previous values may have caused the issue. I'm sharing the updated and fixed values below; I hope it will help someone who hits a similar issue.</p> <pre><code>controller: kind: DaemonSet hostNetwork: true hostPort: enabled: true ports: http: 80 https: 443 dnsPolicy: ClusterFirstWithHostNet nodeSelector: role: minion extraArgs: &quot;default-server-port&quot;: 8182 service: enabled: false publishService: enabled: false </code></pre>
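<p>For completeness, these values can be applied with a plain helm upgrade (the release name, namespace and values file name below are just examples):</p>

<pre><code>helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  -f values.yaml
</code></pre>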
<p>I have a cluster on GKE and one of my deployments run tornado web app to receive http requests. This deployment is exposed by a LoadBalancer. I send a simple http request to the LoadBalancer ip, which must run on server side for ~10 minutes. After exactly 5 minutes, I get:</p> <pre><code>requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(104, 'Connection reset by peer')) </code></pre> <p>I tried:</p> <ul> <li>Using the communication locally on my computer (both client and server) and haven't got the reset.</li> <li>I made kubectl port-forward directly to the deployment (local client -&gt; kubectl port-forward -&gt; deployment -&gt; server) and haven't got the connection reset. So basically I'm pretty sure it's on the loadbalnacer side.</li> <li>I made a backend config with this configuration:</li> </ul> <pre><code>apiVersion: cloud.google.com/v1beta1 kind: BackendConfig metadata: name: my-bsc-backendconfig spec: timeoutSec: 3600 </code></pre> <p>and my loadbalancer is configured like this:</p> <pre><code>apiVersion: v1 kind: Service metadata: annotations: cloud.google.com/backend-config: '{&quot;ports&quot;: {&quot;5000&quot;:&quot;my-bsc-backendconfig&quot;}' creationTimestamp: &quot;2020-12-24T10:08:54Z&quot; finalizers: - service.kubernetes.io/load-balancer-cleanup labels: name: wesnapp-flask name: wesnapp-flask-service namespace: default resourceVersion: &quot;14652233&quot; selfLink: /api/v1/namespaces/default/services/wesnapp-flask-service uid: a922e9cb-4702-481f-b1a9-e09df1653ff7 spec: clusterIP: 10.64.9.113 externalTrafficPolicy: Cluster ports: - nodePort: 31429 port: 5000 protocol: TCP targetPort: 5000 selector: name: wesnapp-flask sessionAffinity: None type: LoadBalancer status: loadBalancer: ingress: - ip: x.x.x.x </code></pre> <p>Any ideas how to solve this problem? Thanks</p>
<p>Your assumption is right. GCE LoadBalancer kills the connection.<br /> As mentioned in <a href="https://cloud.google.com/load-balancing/docs/l7-internal" rel="nofollow noreferrer">this Google document</a>, there is a <code>Stream idle timeout</code> configured to 300 seconds (5 minutes) and can't be changed. HTTP streams become idle after 5 minutes without activity.</p>
<p>I'm doing a deployment on the GKE service, and when I try to access the page I get the message </p> <p>ERR_CONNECTION_REFUSED</p> <p>I have defined a load balancing service for the deployment, and the configuration is as follows.</p> <p>This is the .yaml for the deployment</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: bonsai-onboarding spec: selector: matchLabels: app: bonsai-onboarding replicas: 2 template: metadata: labels: app: bonsai-onboarding spec: containers: - name: bonsai-onboarding image: "eu.gcr.io/diaphanum/onboarding-iocash-master_web:v1" ports: - containerPort: 3000 </code></pre> <p>This is the service .yaml file.</p> <pre><code>apiVersion: v1 kind: Service metadata: name: lb-onboarding spec: type: LoadBalancer selector: app: bonsai-onboarding ports: - protocol: TCP port: 3000 targetPort: 3000 </code></pre> <p>This is working fine, and all is green in GKE :)</p> <pre><code>kubectl get pods,svc NAME READY STATUS RESTARTS AGE pod/bonsai-onboarding-8586b9b699-flhbn 1/1 Running 0 3h23m pod/bonsai-onboarding-8586b9b699-p9sn9 1/1 Running 0 3h23m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP XX.xx.yy.YY &lt;none&gt; 443/TCP 29d service/lb-onboarding LoadBalancer XX.xx.yy.YY XX.xx.yy.YY 3000:32618/TCP 3h </code></pre> <p>Then, when I try to connect, the error is ERR_CONNECTION_REFUSED</p> <p>I think it is about the network, because I did the following tests from my local machine</p> <pre><code>Ping [load balancer IP] ---&gt; Correct Telnet [Load Balancer IP] 3000 ---&gt; Correct </code></pre> <p>From Cloud Shell I forwarded port 3000 to 8080 and in another Cloud Shell made a curl to <a href="http://localhost:8080" rel="nofollow noreferrer">http://localhost:8080</a>, and it works fine.</p> <p>Any idea about the problem?</p> <p>Thanks in advance</p>
<p>I've changed your deployment a little bit to check it on my cluster, because your image was unreachable:</p> <ul> <li><p>deployment:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: bonsai-onboarding spec: selector: matchLabels: app: bonsai-onboarding replicas: 2 template: metadata: labels: app: bonsai-onboarding spec: containers: - name: bonsai-onboarding image: nginx:latest ports: - containerPort: 80 </code></pre></li> <li><p>service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: lb-onboarding spec: type: LoadBalancer selector: app: bonsai-onboarding ports: - protocol: TCP port: 3000 targetPort: 80 </code></pre></li> </ul> <p>and it works out of the box:</p> <pre><code>kubectl get pods,svc NAME READY STATUS RESTARTS AGE pod/bonsai-onboarding-7bdf584499-j2nv7 1/1 Running 0 6m58s pod/bonsai-onboarding-7bdf584499-vc7kh 1/1 Running 0 6m58s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.XXX.XXX.1 &lt;none&gt; 443/TCP 8m35s service/lb-onboarding LoadBalancer 10.XXX.XXX.230 35.XXX.XXX.235 3000:31637/TCP 67s </code></pre> <p>and I'm able to reach <code>35.XXX.XXX.235:3000</code> from any IP:</p> <pre><code>Welcome to nginx! ... Thank you for using nginx. </code></pre> <p>You can check if your app is reachable using this command:</p> <pre><code>nmap -Pn $(kubectl get svc lb-onboarding -o jsonpath='{.status.loadBalancer.ingress[*].ip}') </code></pre> <p>Maybe the cause of your "ERR_CONNECTION_REFUSED" problem is in the configuration of your image? I found no problem with your deployment and load balancer configuration.</p>
<p>I'm trying to make a development environment using minikube.<br> I'm using <code>minikube image load</code> to upload local images to the cluster.<br> here is an example deployment:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 kind: Deployment metadata: name: sso spec: selector: matchLabels: app: mm template: metadata: labels: app: mm name: sso spec: containers: - name: sso image: sso-service imagePullPolicy: Never resources: limits: memory: &quot;128Mi&quot; cpu: &quot;500m&quot; ports: - containerPort: 3000 imagePullSecrets: - name: pull-secret </code></pre> <p>The first time I run <code>minikube image load &quot;sso-service&quot;</code> the deployment restarts, but after that, loading a new image doesn't cause a rollout with the new image.<br> I also tried running <code>kubectl rollout restart</code>, did not help.<br> Is there any way to force the deployment to perform a rollout with the new image?</p>
<p>Managed to solve this myself. I made a script that would first generate a random number and use it as the image's tag, it would then use <code>kubectl set image</code> to update the image.</p> <pre class="lang-default prettyprint-override"><code>$tag = $(Get-Random) echo &quot;building image&quot; docker build --build-arg NPM_TOKEN=$(cat .token) -t &quot;sso-service:$tag&quot; . echo &quot;loading image&quot; minikube image load &quot;sso-service:$tag&quot; --daemon=true kubectl set image deployment/sso sso=&quot;sso-service:$tag&quot; </code></pre>
<p>I have deployed a Linkerd service mesh, and my Kubernetes cluster is configured with the Nginx ingress controller as a DaemonSet; all the ingresses are working fine, and so is Linkerd. Recently I added traffic split functionality to run my blue/green setup, and I can reach these services through separate ingress resources. I have created an apex-web service as described <a href="https://github.com/BuoyantIO/emojivoto/blob/linux-training/training/traffic-split/web-apex.yml" rel="nofollow noreferrer">here</a>. If I reach this service internally it works perfectly. I have created another ingress resource, but I'm not able to test the blue/green functionality outside of my cluster. I'd like to mention that I have meshed (injected the Linkerd proxy into) all my Nginx pods, but Nginx returns a &quot;<code>503 Service Temporarily Unavailable</code>&quot; message.</p> <p>I went through the documentation and created the ingress following <a href="https://linkerd.io/2/tasks/using-ingress/#nginx" rel="nofollow noreferrer">this</a>; I can confirm that the annotations below were added to the ingress resources.</p> <pre><code>annotations: kubernetes.io/ingress.class: &quot;nginx&quot; nginx.ingress.kubernetes.io/configuration-snippet: | proxy_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port; grpc_set_header l5d-dst-override $service_name.$namespace.svc.cluster.local:$service_port; </code></pre> <p>but still no luck from outside the cluster.</p> <p>I'm testing with the given emojivoto app, and all the traffic split and apex-web services are in <a href="https://github.com/BuoyantIO/emojivoto/tree/linux-training/training/traffic-split" rel="nofollow noreferrer">this</a> training repository.</p> <p>I'm not quite sure what went wrong or how to fix this from outside the cluster. I'd really appreciate it if anyone could assist me in fixing this Linkerd blue/green issue.</p>
<p>I raised this question in the Linkerd Slack channel and got it fixed with wonderful support from the community. It seems Nginx doesn't like a service that has no endpoints. My configuration was otherwise correct; I was asked to point the traffic split's apex service at a service that does have endpoints, and that fixed the issue.</p> <p>In a nutshell, my traffic split was configured with the web-svc and web-svc-2 services. I changed the traffic split's <code>spec.service</code> to the same web-svc and it worked.</p> <p>Here is the traffic split configuration after the update.</p> <pre><code>apiVersion: split.smi-spec.io/v1alpha1 kind: TrafficSplit metadata: name: web-svc-ts namespace: emojivoto spec: # The root service that clients use to connect to the destination application. service: web-svc # Services inside the namespace with their own selectors, endpoints and configuration. backends: - service: web-svc # Identical to resources, 1 = 1000m weight: 500m - service: web-svc-2 weight: 500m </code></pre> <p>Kudos to the Linkerd team who supported me in fixing this issue. It is working like a charm.</p>
<p>I've set up a skaffold project with a computer and 3 raspberry Pi's. 2 raspberry pi's form the kubernetes cluster and the third is running as an unsecured docker repository. The 4th computer is a PC I'm using to code on using skaffold to push to the repo and cluster. As I'm new to kubernetes and Skaffold I'm not sure how to configure my skaffold.yaml file to connect to the cluster since it's not on my local host.</p> <p>I believe that i'm meant to do something about the kubecontext but I'm not sure how I do this when the cluster is not running on the same system as skaffold. Would anyone be able to point me in the direction of some resources or explain to me how to set this up. I can currently push the images successfully to the repo but I just don't know where and what to put in the skaffold.yaml file to get it to do the final stage of creating the pods on the cluster from the images i've made.</p> <p>Thanks in advance and any questions please let me know. I'll leave the skaffold yaml file below in case that is needed.</p> <pre><code>apiVersion: skaffold/v2beta11 kind: Config metadata: name: webservice build: insecureRegistries: - 192.168.0.10:5000 artifacts: - image: node-webservice-app context: src/client docker: dockerfile: Dockerfile - image: node-webservice-server context: src/server docker: dockerfile: Dockerfile deploy: kubectl: manifests: - k8s/webservice.deployment.yaml - k8s/webservice.service.yaml </code></pre>
<p>Skaffold uses the &quot;kubectl&quot; binary to deploy to the cluster, and your skaffold yaml seems to be correct.</p> <p>In order to access your Kubernetes cluster, kubectl uses a configuration file. The default configuration file is located at ~/.kube/config and is referred to as the kubeconfig file.</p> <p>You will need the correct <code>kubeconfig</code> file on your system, with the correct context set. To check the current context use <code>kubectl config current-context</code>.</p>
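<p>In practice that means copying the kubeconfig from your Raspberry Pi control-plane node to the PC you run skaffold on and selecting its context, roughly like this (a sketch; the exact path depends on how the Pi cluster was set up, e.g. a kubeadm cluster keeps the admin config at /etc/kubernetes/admin.conf and you may need root to read it):</p>

<pre><code># on the PC: copy the cluster's kubeconfig from the Pi
scp pi@&lt;pi-ip&gt;:/etc/kubernetes/admin.conf ~/.kube/config

# list the available contexts and pick the right one
kubectl config get-contexts
kubectl config use-context &lt;context-name&gt;

# skaffold will now deploy against that context
skaffold dev
</code></pre>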
<p>I've been able to get two awesome technologies to work independently.</p> <ul> <li><a href="https://github.com/GoogleContainerTools/skaffold" rel="nofollow noreferrer">Skaffold</a></li> <li><a href="https://docs.docker.com/buildx/working-with-buildx/" rel="nofollow noreferrer">BuildX</a></li> </ul> <p>Unfortunatly I don't know how to use them both at the same time.</p> <p>I'm currently building and testing on my laptop (amd), then deploying to a Raspberri Pi 4 (arm64) running Kubernetes.</p> <p>To get this working I use something like:</p> <pre><code>docker buildx build --platform linux/amd64,linux/arm64 --tag my-registry/my-image:latest --push . </code></pre> <p>Before attempting to target an arm I was using skaffold.</p> <p>Is there any way to continue to target multi-playform whilst also using skaffold to build/deploy? If not, is there any recommendations for alternatives?</p> <p>Any advice/help is very appreciated, thank-you.</p>
<p>Found the missing piece. Skaffold has the ability to set a custom command, where I could use <code>buildx</code>.</p> <p><a href="https://github.com/GoogleContainerTools/skaffold/tree/master/examples/custom" rel="nofollow noreferrer">https://github.com/GoogleContainerTools/skaffold/tree/master/examples/custom</a></p> <pre><code>build: artifacts: - image: "foo/bar" context: . custom: buildCommand: ./custom-build.sh </code></pre> <p><strong>custom-build.sh</strong></p> <pre><code>docker buildx build \ --platform linux/arm64 \ --tag $IMAGE \ --push \ $BUILD_CONTEXT </code></pre>
<p>I have created 15 servers which contain the same machine learning program. However, each server gets different arguments at runtime, determined by its hostname. Each server also has a copy of 5 1GB pkl files which contain training data.</p> <p>So for example right now, I have created 15 servers in the cloud with the following names.</p> <pre><code>ml-node-1 ml-node-2 ml-node-3 .. ml-node-15 </code></pre> <p>So when my program runs on <code>ml-node-1</code> it looks like this, <code>python3 mlscript.py $HOSTNAME.csv</code>, and it will run <code>python3 mlscript.py ml-node-1.csv</code>. Each server runs the script that is meant for its hostname.</p> <p>My problem is that I have to create 15 copies of the 5GB of pkl data, one on each server, before they are run. I find this very inefficient and costly, therefore I am looking at Kubernetes as a solution. From the documentation, I can see that containers within a pod can share a persistent volume. This way I might be able to avoid copying the 5GB of pkl data 15 times.</p> <p>However, I am trying to figure out the naming of the servers/containers. I figure that I would need 1 pod with a shared persistent volume and 15 containers. According to what I can understand from the documentation, <a href="https://kubernetes.io/docs/concepts/containers/container-environment/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/containers/container-environment/</a>, all the containers within the pod will share the same hostname.</p> <p>How do I differentiate them and give them specific hostnames so that I can still run a different argument within each container? My docker images are the standard debian image with my machine learning script.</p>
<p>Rather than relying on custom hostnames for a scalable multi-host environment like Kubernetes (which is not designed to allow for this), as a more feasible solution (<a href="https://stackoverflow.com/questions/65576309#comment115941183_65576309">suggested</a> by <a href="https://stackoverflow.com/users/137650">MatsLindh</a>) you could write some code in your <code>mlscript.py</code> that generates a unique random key on startup and then watches an <code>input</code> directory for changes in a continuous loop: This directory would contain the available files to be picked up by the different containers, and the script would rename a given file with the generated key when it assigns it to the running server instance in the container, with &quot;unassigned&quot; files (not containing a key with the same format in the name) being considered available for other instances.</p> <p>This would allow you to scale the workload up or down from Kubernetes as well as have several files processed by the same container instance.</p>
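<p>A minimal sketch of that idea in Python (the <code>input</code> directory, the file naming and the key format are illustrative assumptions, not part of the original script):</p>

<pre><code>import os
import time
import uuid

INPUT_DIR = '/data/input'   # shared volume containing the per-node csv files
KEY = uuid.uuid4().hex      # unique key generated by this server instance

def claim_next_file():
    # rename an unassigned file so no other instance picks it up
    for name in os.listdir(INPUT_DIR):
        if name.endswith('.csv') and '.claimed-' not in name:
            src = os.path.join(INPUT_DIR, name)
            dst = os.path.join(INPUT_DIR, name + '.claimed-' + KEY)
            try:
                os.rename(src, dst)   # atomic on the same filesystem
                return dst
            except OSError:
                continue              # another instance claimed it first
    return None

while True:
    claimed = claim_next_file()
    if claimed:
        print('processing', claimed)
        # run the existing training logic against the claimed file here
    else:
        time.sleep(10)                # nothing available yet, keep watching
</code></pre>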
<p>I have 2 microservices, one preparing a file and another reading it to handle HTTP requests. So I'm going to create a PVC and two deployments, one for each microservice. The deployment for the &quot;writing&quot; microservice will consist of a single pod, another deployment will be horizontally scalable. <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">There are 3 access modes</a>, but none of them seems to fit my needs perfectly, and the docs are not clear to me. So which PVC access mode should I choose? It's very desirable to be able to keep those pods on different nodes.</p>
<p>You need a storage backend that supports the <code>ReadWriteMany</code> access mode, and then set an appropriate access mode for each deployment at the claim level (for the pod that generates the file you would use <code>ReadWriteOnce</code>, and for the second deployment you would use <code>ReadOnlyMany</code>).</p> <p>So in order for this to work you will have to use <code>nfs</code>, <code>cephfs</code> or another plugin that supports <code>ReadWriteMany</code>. A more detailed plugin list can be found <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">here</a>.</p>
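<p>As a rough sketch (the claim name and the storage class are assumptions; use whatever RWX-capable class your cluster provides, e.g. an NFS provisioner), the shared claim could look like this:</p>

<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-files
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-client   # any ReadWriteMany-capable storage class
  resources:
    requests:
      storage: 5Gi
</code></pre>

<p>Both deployments would then reference <code>claimName: shared-files</code> in their pod spec, with the horizontally scaled reader deployment setting <code>readOnly: true</code> on its volume so its replicas can safely run on different nodes.</p>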
<p>Reading Kubernetes documentation:</p> <p><a href="https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/</a></p> <pre><code>128974848, 129e6, 129M, 123Mi </code></pre> <p>What are the differences between M and Mi here?</p> <p>If I want to request 128GB of RAMs, how many Mi is the correct number? 128000Mi? Thank you!</p>
<p>&quot;MB&quot; is the <a href="https://en.wikipedia.org/wiki/Byte#Multiple-byte_units" rel="noreferrer">metric unit</a>, where 1 MB = 10<sup>6</sup> B and 1 GB = 10<sup>9</sup> B.</p> <p>&quot;MiB&quot; is the power 2 based unit, where 1 MiB = 1024<sup>2</sup> B = 1048576 B.</p> <p>Thus, 128 GB = 128 · 10<sup>9</sup> B = 122070.3 MiB.</p>
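<p>In a pod spec, a 128 GB request can therefore be written with the decimal suffix directly, or converted to the binary unit, for example:</p>

<pre><code>resources:
  requests:
    memory: "128G"        # 128 * 10^9 bytes
    # equivalently, in binary units:
    # memory: "122070Mi"  # 128 * 10^9 / 1024^2 is roughly 122070 MiB
</code></pre>

<p>Note that <code>128000Mi</code> would actually be 128000 * 1024<sup>2</sup> bytes, about 134 GB, slightly more than intended.</p>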
<p>Hi I am trying to start <code>minikube</code> that's why I ran </p> <pre><code>minikube start --vm-driver=none </code></pre> <p>But it shows in the console the below lines:</p> <blockquote> <p>minikube v1.9.2 on Amazon 2 (Xen/amd64) Using the none driver based on user configuration X Sorry, Kubernetes v1.18.0 requires conntrack to be installed in root's path</p> </blockquote> <p>Note that i have installed <code>kubectl minikube</code> and <code>docker</code>.</p> <p>Please help me to sort out this issues.</p>
<p>I had the same issue. Install 'conntrack' with</p> <pre><code>sudo apt install conntrack </code></pre> <p>Then continue to start your minikube:</p> <pre><code>sudo minikube start --vm-driver=none </code></pre>
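<p>The console output in the question shows Amazon Linux 2, which uses yum rather than apt; there the binary comes from the conntrack-tools package (the exact package name can vary slightly between RPM-based distros):</p>

<pre><code>sudo yum install -y conntrack-tools
sudo minikube start --vm-driver=none
</code></pre>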
<p>When I try to run this task, I get the following error:</p> <pre class="lang-py prettyprint-override"><code>from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator from airflow import DAG from datetime import datetime, timedelta default_args = { "owner": "airflow", "depends_on_past": False, "start_date": datetime(2015, 6, 1), "email": ["[email protected]"], "email_on_failure": False, "email_on_retry": False, "retries": 1, "retry_delay": timedelta(minutes=5), } dag = DAG("kubernetes", default_args=default_args, schedule_interval=None) k = KubernetesPodOperator( namespace='kubernetes', image="ubuntu:16.04", cmds=["bash", "-cx"], arguments=["echo", "10", "echo pwd"], labels={"foo": "bar"}, name="airflow-test-pod", is_delete_pod_operator=True, in_cluster=True, task_id="task-two", get_logs=True, dag=dag) </code></pre> <p>Error: </p> <pre><code> File "/usr/local/lib/python3.7/site-packages/kubernetes/config/kube_config.py", line 491, in safe_get key in self.value): TypeError: argument of type 'NoneType' is not iterable </code></pre> <p>What am I doing wrong? I'm using puckel/airflow and the correct dependencies. <code>&lt;https://github.com/puckel/docker-airflow&gt;</code> I need to edit something in airflow.cfg? I don't know where to search for this.</p>
<p>It seems you don't have the <code>config_file</code> parameter set, so KubernetesPodOperator falls back to its default kubeconfig location, which probably doesn't exist either.</p> <p>My suggestion would be to add <code>config_file=/path/to/kube_config.yaml</code>. That kubeconfig file is also where you provide your credentials/tokens.</p>
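<p>Applied to the task from the question, that would look roughly like this (a sketch; the kubeconfig path is a placeholder and must exist inside the Airflow worker container, e.g. mounted into the puckel/docker-airflow container, and <code>in_cluster</code> is set to False so the file is actually used instead of the in-cluster service-account config):</p>

<pre><code>k = KubernetesPodOperator(
    namespace='kubernetes',
    image="ubuntu:16.04",
    cmds=["bash", "-cx"],
    arguments=["echo", "10", "echo pwd"],
    labels={"foo": "bar"},
    name="airflow-test-pod",
    in_cluster=False,
    config_file="/usr/local/airflow/.kube/config",  # placeholder path
    task_id="task-two",
    get_logs=True,
    dag=dag)
</code></pre>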
<p>So Kubernetes events seems to have options for watch for all kinds of things like pod up/down creation/deletion etc.. - but I want to watch for a namespace creation/deletion itself - and I can't find an option to do that.</p> <p>I want to know if someone created a namespace or deleted one. Is this possible?</p> <p>Rgds, Gopa.</p>
<p>So I got it working, pasting the entire code below for anyone's future reference</p> <pre class="lang-golang prettyprint-override"><code>package main import ( &quot;os&quot; &quot;os/exec&quot; &quot;os/signal&quot; &quot;strings&quot; &quot;syscall&quot; &quot;time&quot; &quot;github.com/golang/glog&quot; v1 &quot;k8s.io/apimachinery/pkg/apis/meta/v1&quot; &quot;k8s.io/apimachinery/pkg/util/runtime&quot; &quot;k8s.io/client-go/informers&quot; &quot;k8s.io/client-go/kubernetes&quot; &quot;k8s.io/client-go/rest&quot; &quot;k8s.io/client-go/tools/cache&quot; &quot;k8s.io/client-go/tools/clientcmd&quot; ) func newNamespace(obj interface{}) { ns := obj.(v1.Object) glog.Error(&quot;New Namespace &quot;, ns.GetName()) } func modNamespace(objOld interface{}, objNew interface{}) { } func delNamespace(obj interface{}) { ns := obj.(v1.Object) glog.Error(&quot;Del Namespace &quot;, ns.GetName()) } func watchNamespace(k8s *kubernetes.Clientset) { // Add watcher for the Namespace. factory := informers.NewSharedInformerFactory(k8s, 5*time.Second) nsInformer := factory.Core().V1().Namespaces().Informer() nsInformerChan := make(chan struct{}) //defer close(nsInformerChan) defer runtime.HandleCrash() // Namespace informer state change handler nsInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{ // When a new namespace gets created AddFunc: func(obj interface{}) { newNamespace(obj) }, // When a namespace gets updated UpdateFunc: func(oldObj interface{}, newObj interface{}) { modNamespace(oldObj, newObj) }, // When a namespace gets deleted DeleteFunc: func(obj interface{}) { delNamespace(obj) }, }) factory.Start(nsInformerChan) //go nsInformer.GetController().Run(nsInformerChan) go nsInformer.Run(nsInformerChan) } func main() { kconfig := os.Getenv(&quot;KUBECONFIG&quot;) glog.Error(&quot;KCONFIG&quot;, kconfig) var config *rest.Config var clientset *kubernetes.Clientset var err error for { if config == nil { config, err = clientcmd.BuildConfigFromFlags(&quot;&quot;, kconfig) if err != nil { glog.Error(&quot;Cant create kubernetes config&quot;) time.Sleep(time.Second) continue } } // creates the clientset clientset, err = kubernetes.NewForConfig(config) if err != nil { glog.Error(&quot;Cannot create kubernetes client&quot;) time.Sleep(time.Second) continue } break } watchNamespace(clientset) glog.Error(&quot;Watch started&quot;) term := make(chan os.Signal, 1) signal.Notify(term, os.Interrupt) signal.Notify(term, syscall.SIGTERM) select { case &lt;-term: } } </code></pre>
<p>I have used the following commands to generate the root certificate and the actual certificate. It says that the certificate is not authorized to sign other certificates. Am I missing something? I am new to these TLS certificates.</p> <p>Root certificate:</p> <pre><code>openssl genrsa -out root-key.pem 2048 openssl req -new -key root-key.pem -subj &quot;/C=IN/ST=RJ/CN=cluster.local/[email protected]&quot; -out ca.csr openssl x509 -req -in ca.csr -signkey root-key.pem -CAcreateserial -out root-cert.pem -days 1000 </code></pre> <p>Actual certificate:</p> <pre><code>openssl genrsa -out ca-key.pem 2048 openssl req -new -key ca-key.pem -subj &quot;/C=IN/ST=RJ/CN=cluster.local/[email protected]&quot; -out ca-user-admin.csr openssl x509 -req -in ca-user-admin.csr -CA root-cert.pem -CAkey root-key.pem -CAcreateserial -out ca-cert.pem -days 1000 </code></pre> <p>And for the cert-chain.pem, I am doing <strong>cp root-cert.pem cert-chain.pem</strong></p> <pre><code>echo &quot;&quot; &gt;&gt; cert-chain.pem cat ca-cert.pem &gt;&gt; cert-chain.pem </code></pre> <p>I am following these steps, after creating the certificates above: <a href="https://istio.io/latest/docs/tasks/security/cert-management/plugin-ca-cert/" rel="nofollow noreferrer">https://istio.io/latest/docs/tasks/security/cert-management/plugin-ca-cert/</a></p> <p>Please help me out with this. Am I missing something here? Or should I get my certificates signed by some other trusted CA?</p>
<p>I was able to generate the certificates using these commands. I am not sure if this is the correct way to do them. After following these steps my istiod was able to sign the workloads with my own CA. After this, I followed the steps given in this page <a href="https://istio.io/latest/docs/tasks/security/cert-management/plugin-ca-cert/" rel="nofollow noreferrer">https://istio.io/latest/docs/tasks/security/cert-management/plugin-ca-cert/</a></p> <pre><code>CONFIG=&quot; [req] distinguished_name=dn [ dn ] [ ext ] basicConstraints=CA:TRUE,pathlen:0 &quot; openssl req -config &lt;(echo &quot;$CONFIG&quot;) -new -newkey rsa:2048 -nodes \ -subj &quot;/C=IN/O=dEVOPS/OU=DevOps/ST=RJ/CN=cluster.local/[email protected]&quot; -x509 -extensions ext -keyout root-key.pem -out root-cert.pem cp root-cert.pem ca-cert.pem cp root-key.pem ca-key.pem cp ca-cert.pem cert-chain.pem </code></pre>
<p>I am looking to replace my Nginx ingress with the Ambassador API gateway with minimal changes. Is that possible?</p> <p>What is the difference between Ambassador Edge Stack &amp; Ambassador API gateway? I have followed this document and found the AES configuration in the helm chart.</p> <p><a href="https://www.getambassador.io/docs/latest/topics/install/install-ambassador-oss/" rel="nofollow noreferrer">https://www.getambassador.io/docs/latest/topics/install/install-ambassador-oss/</a></p>
<p>It is possible, sure. </p> <p>According to [1] the difference between Ambassador Edge Stack &amp; Ambassador API gateway is the number of features. Edge Stack seems to pack more features together. Check the link for details.</p> <p>This should help too [2]</p> <p>[1] <a href="https://www.getambassador.io/docs/latest/tutorials/getting-started/" rel="nofollow noreferrer">https://www.getambassador.io/docs/latest/tutorials/getting-started/</a></p> <p>[2] <a href="https://cloud.google.com/solutions/exposing-grpc-services-on-gke-using-envoy-proxy#alternative_ways_to_route_grpc_traffic" rel="nofollow noreferrer">https://cloud.google.com/solutions/exposing-grpc-services-on-gke-using-envoy-proxy#alternative_ways_to_route_grpc_traffic</a></p>
<p>We have the version 1.12.10-gke.22 of kubernetes master, but we needed change to a 1.15.9-gke.24. </p> <p>We running the command to clusters upgrade : </p> <pre><code>gcloud container clusters upgrade &lt;cluster-name&gt; --master --cluster-version 1.15.9-gke.24 --zone us-central1-c </code></pre> <p>And we receive the response: </p> <pre><code>Master cannot be upgraded to "1.15.9-gke.24": cannot upgrade the master more than a minor version at a time. </code></pre> <p>In Google Cloud Platform console, we have the message: </p> <p><a href="https://i.stack.imgur.com/9u4Rd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9u4Rd.png" alt="enter image description here"></a></p> <p>Any ideas or solutions? Thanks</p>
<p>Some of our customers experienced the same error. They cannot upgrade from 1.12.10-gke.22 because the 1.13 upgrade is not available. </p> <p>The fix is in the works currently. Meanwhile to workaround the issue you can create a cluster with master version 1.15.9-gke.24 and migrate your workloads from the old cluster.</p>
<p><a href="https://i.stack.imgur.com/dMwWE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dMwWE.png" alt="enter image description here" /></a></p> <p>Error in text:</p> <blockquote> <p>Error from server: Get <a href="https://10.128.15.203:10250/containerLogs/default/postgres-54db6bdb8b-cmrsb/postgres" rel="nofollow noreferrer">https://10.128.15.203:10250/containerLogs/default/postgres-54db6bdb8b-cmrsb/postgres</a>: EOF</p> </blockquote> <p>How could I solve this issue? And what can be reason? I've used <a href="https://severalnines.com/database-blog/using-kubernetes-deploy-postgresql" rel="nofollow noreferrer">this tutorial</a> for configuring all stuff.</p> <p><strong>kubectl describe pods postgres-54db6bdb8b-cmrsb</strong></p> <pre><code>Name: postgres-54db6bdb8b-cmrsb Namespace: default Priority: 0 Node: gke-booknotes-pool-2-c1d23e62-r6nb/10.128.15.203 Start Time: Sat, 14 Dec 2019 23:27:20 +0700 Labels: app=postgres pod-template-hash=54db6bdb8b Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container postgres Status: Running IP: 10.56.1.3 IPs: &lt;none&gt; Controlled By: ReplicaSet/postgres-54db6bdb8b Containers: postgres: Container ID: docker://1a607cfb9a8968d708ff79419ec8bfc7233fb5ad29fb1055034ddaacfb793d6a Image: postgres:10.4 Image ID: docker-pullable://postgres@sha256:9625c2fb34986a49cbf2f5aa225d8eb07346f89f7312f7c0ea19d82c3829fdaa Port: 5432/TCP Host Port: 0/TCP State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: ContainerCannotRun Message: error while creating mount source path '/mnt/data': mkdir /mnt/data: read-only file system Exit Code: 128 Started: Sat, 14 Dec 2019 23:54:00 +0700 Finished: Sat, 14 Dec 2019 23:54:00 +0700 Ready: False Restart Count: 25 Requests: cpu: 100m Environment Variables from: postgres-config ConfigMap Optional: false Environment: &lt;none&gt; Mounts: /var/lib/postgresql/data from postgredb (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-t48dw (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: postgredb: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: postgres-pv-claim ReadOnly: false default-token-t48dw: Type: Secret (a volume populated by a Secret) SecretName: default-token-t48dw Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 32m default-scheduler Successfully assigned default/postgres-54db6bdb8b-cmrsb to gke-booknotes-pool-2-c1d23e62-r6nb Normal Pulled 28m (x5 over 30m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Container image &quot;postgres:10.4&quot; already present on machine Normal Created 28m (x5 over 30m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Created container postgres Warning Failed 28m (x5 over 30m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Error: failed to start container &quot;postgres&quot;: Error response from daemon: error while creating mount source path '/mnt/data': mkdir /mnt/data: read-only file system Warning BackOff 27m (x10 over 29m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Back-off restarting failed container Warning Failed 23m (x4 over 25m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Error: failed to start container &quot;postgres&quot;: Error response from daemon: error while creating mount source path 
'/mnt/data': mkdir /mnt/data: read-only file system Warning BackOff 22m (x11 over 25m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Back-off restarting failed container Normal Pulled 22m (x5 over 25m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Container image &quot;postgres:10.4&quot; already present on machine Normal Created 22m (x5 over 25m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Created container postgres Normal Pulled 19m (x4 over 20m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Container image &quot;postgres:10.4&quot; already present on machine Normal Created 19m (x4 over 20m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Created container postgres Warning Failed 19m (x4 over 20m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Error: failed to start container &quot;postgres&quot;: Error response from daemon: error while creating mount source path '/mnt/data': mkdir /mnt/data: read-only file system Warning BackOff 18m (x11 over 20m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Back-off restarting failed container Normal Created 15m (x4 over 17m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Created container postgres Warning Failed 15m (x4 over 17m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Error: failed to start container &quot;postgres&quot;: Error response from daemon: error while creating mount source path '/mnt/data': mkdir /mnt/data: read-only file system Normal Pulled 14m (x5 over 17m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Container image &quot;postgres:10.4&quot; already present on machine Warning BackOff 12m (x19 over 17m) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Back-off restarting failed container Normal Pulled 5m38s (x5 over 8m29s) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Container image &quot;postgres:10.4&quot; already present on machine Normal Created 5m38s (x5 over 8m27s) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Created container postgres Warning Failed 5m37s (x5 over 8m24s) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Error: failed to start container &quot;postgres&quot;: Error response from daemon: error while creating mount source path '/mnt/data': mkdir /mnt/data: read-only file system Warning BackOff 5m24s (x10 over 7m58s) kubelet, gke-booknotes-pool-2-c1d23e62-r6nb Back-off restarting failed container </code></pre> <p>Here is also my yaml files:</p> <p><strong>deployment.yaml</strong></p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: postgres spec: replicas: 1 template: metadata: labels: app: postgres spec: containers: - name: postgres image: postgres:10.4 imagePullPolicy: &quot;IfNotPresent&quot; ports: - containerPort: 5432 envFrom: - configMapRef: name: postgres-config volumeMounts: - mountPath: /var/lib/postgresql/data name: postgredb volumes: - name: postgredb persistentVolumeClaim: claimName: postgres-pv-claim </code></pre> <p><strong>postgres-configmap.yaml</strong></p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: postgres-config labels: app: postgres data: POSTGRES_DB: postgresdb POSTGRES_USER: postgresadmin POSTGRES_PASSWORD: some_password </code></pre> <p><strong>postgres-service.yaml</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: postgres labels: app: postgres spec: type: NodePort ports: - port: 5432 selector: app: postgres </code></pre> <p><strong>postgres-storage.yaml</strong></p> <pre><code>kind: PersistentVolume apiVersion: v1 metadata: name: postgres-pv-volume labels: type: local app: postgres spec: storageClassName: manual capacity: storage: 5Gi accessModes: - 
ReadWriteOnce hostPath: path: &quot;/mnt/data&quot; --- kind: PersistentVolumeClaim apiVersion: v1 metadata: name: postgres-pv-claim labels: app: postgres spec: storageClassName: manual accessModes: - ReadWriteOnce resources: requests: storage: 5Gi </code></pre> <p>After I've changed RWM to RWO - then I this (I've deleted old instances and have created new one):</p> <pre><code>Name: postgres-54db6bdb8b-wgvr2 Namespace: default Priority: 0 Node: gke-booknotes-pool-1-3e566443-dc08/10.128.15.236 Start Time: Sun, 15 Dec 2019 04:56:57 +0700 Labels: app=postgres pod-template-hash=54db6bdb8b Annotations: kubernetes.io/limit-ranger: LimitRanger plugin set: cpu request for container postgres Status: Running IP: 10.56.6.13 IPs: &lt;none&gt; Controlled By: ReplicaSet/postgres-54db6bdb8b Containers: postgres: Container ID: docker://1070018c2a670cc7e0248e6269c271c3cba022fdd2c9cc5099a8eb4da44f7d65 Image: postgres:10.4 Image ID: docker-pullable://postgres@sha256:9625c2fb34986a49cbf2f5aa225d8eb07346f89f7312f7c0ea19d82c3829fdaa Port: 5432/TCP Host Port: 0/TCP State: Waiting Reason: CrashLoopBackOff Last State: Terminated Reason: ContainerCannotRun Message: error while creating mount source path '/mnt/data': mkdir /mnt/data: read-only file system Exit Code: 128 Started: Sun, 15 Dec 2019 10:56:21 +0700 Finished: Sun, 15 Dec 2019 10:56:21 +0700 Ready: False Restart Count: 76 Requests: cpu: 100m Environment Variables from: postgres-config ConfigMap Optional: false Environment: &lt;none&gt; Mounts: /var/lib/postgresql/data from postgredb (rw) /var/run/secrets/kubernetes.io/serviceaccount from default-token-t48dw (ro) Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True Volumes: postgredb: Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace) ClaimName: postgres-pv-claim ReadOnly: false default-token-t48dw: Type: Secret (a volume populated by a Secret) SecretName: default-token-t48dw Optional: false QoS Class: Burstable Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning BackOff 81s (x1629 over 6h) kubelet, gke-booknotes-pool-1-3e566443-dc08 Back-off restarting failed container </code></pre>
<h3>1. How to fix CrashLoopBackOff (postgres) - GCP</h3> <p>The issue is here: error while creating mount source path '/mnt/data': mkdir /mnt/data: <strong>read-only</strong> file system.</p> <p>You need to make sure <code>postgres-pv-claim</code> is writable. Recreate the pv and pv claim with RWO access (you must have mistyped it as RO, which is why you ran into the issue) and then deploy the postgres pod again, which should fix the issue.</p> <h3>2. Fixing FailedScheduling 69s (x10 over 7m35s) default-scheduler pod has unbound immediate PersistentVolumeClaims (repeated 4 times)</h3> <p>When provisioning persistent volumes in GKE you don't need to create PersistentVolume objects; they are dynamically provisioned by GKE. To solve the <code>Warning FailedScheduling 69s (x10 over 7m35s)</code> issue:</p> <ol> <li>remove the <code>storageClassName</code> property from your pvc and</li> <li>delete the pv</li> </ol> <p>which should fix the issue. Please see the revised <code>postgres-storage.yaml</code> below.</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: postgres-pv-claim labels: app: postgres spec: accessModes: - ReadWriteOnce resources: requests: storage: 5Gi </code></pre>
<p>I’ve two <strong>different</strong> apps in k8s that need to read data from each other, e.g. <code>AppA</code> from <code>AppB</code>; both are deployed on the <strong>same cluster</strong>.</p> <p>The <code>tricky</code> part here is that both apps can be deployed to any cluster and need to know the host and port to connect to; I <strong>don't</strong> want to use hard-coded values.</p> <p>e.g. </p> <p>App A reads data from App B.</p> <p>App <code>B</code> is a web application with a rest API, hence app <code>A</code> needs to call something like <a href="http://10.26.131.136:9090/api/app/getconfig" rel="nofollow noreferrer">http://10.26.131.136:9090/api/app/getconfig</a></p> <p>App A knows the service path of App <code>B</code>, like <code>api/app/getconfig</code>, but how can it know the <strong>host and the port</strong> of App B?</p> <p>I cannot hardcode it. This works if I use <code>type:LoadBalancer</code>, but that gives a hard-coded host and port; I need to determine it somehow at run-time, maybe with the serviceName, etc.?</p>
<p>Note: The Kube-DNS naming convention is <code>service.namespace.svc.cluster-domain.tld</code> and the default cluster domain is <code>cluster.local</code></p> <p>So you can refer to your app as <code>&lt;service&gt;.&lt;namespace&gt;.svc</code> as long as the services are in the same cluster. You then need to check the ports on which the apps are listening by issuing: </p> <pre><code>kubectl -n &lt;namespace&gt; get svc </code></pre> <p>Note the service identifier and issue: </p> <pre><code>kubectl -n &lt;namespace&gt; get svc &lt;identifier&gt; -o yaml </code></pre> <p>This will show you the service manifest, where you can see which port the app is listening on. </p>
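<p>To make that concrete, here is a minimal Python sketch (regardless of the language App A is actually written in) of what the call could look like once the service DNS name is used instead of a hard-coded IP. The service name, namespace and port below are assumptions for illustration; substitute the values you find with <code>kubectl -n &lt;namespace&gt; get svc</code>.</p> <pre><code>import urllib.request

# Assumed values -- replace with the real service name, namespace and port of App B.
SERVICE = 'appb-service'
NAMESPACE = 'default'
PORT = 9090

# The in-cluster DNS name resolves from any pod in the same cluster.
url = f'http://{SERVICE}.{NAMESPACE}.svc.cluster.local:{PORT}/api/app/getconfig'
with urllib.request.urlopen(url, timeout=5) as resp:
    print(resp.status, resp.read()[:200])
</code></pre>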
<p>Kubernetes blue-green deployment: I am patching the Kubernetes application service to redirect the traffic from app-v1 to app-v2 (behind the load balancer). If any connection is ongoing during that "patching", will it be disconnected? And if not, how can I test this?</p> <p>What, in your view, is the best approach for version deployment with a warm handover (without any connection loss) from app-v1 to app-v2?</p>
<p>The question seems to be about supporting two versions at the same time. That is kind of a <a href="https://github.com/ContainerSolutions/k8s-deployment-strategies/tree/master/canary" rel="nofollow noreferrer">Canary deployment</a>, which gradually shifts production traffic from app-v1 to app-v2. </p> <p>This could be achieved with:</p> <ul> <li>Allow deployments to have an HPA with a custom metric based on the number of connections. That is, scale up/down when a certain number of connections is reached.</li> <li>Allow two deployments at the same time, <code>app-v1</code> and <code>app-v2</code>.</li> <li>Route new traffic to the new deployment via some Ingress annotation, while still keeping access to the old version, so no existing connection is dropped.</li> <li>Now all the new requests will be routed to the new version, and the HPA eventually scales down the pods from the old version. (You can even allow the old deployment to go to zero replicas.)</li> </ul> <p>In addition, regarding the blue-green deployments from your question: a blue-green deployment is about having two identical environments, where only one environment is active at a time; let's say <code>blue</code> is active on production now. Once you have a new version ready, say <code>green</code>, it is deployed and tested separately. Finally, you switch the traffic to the green environment when you are happy with the test results there. So <code>green</code> becomes active while <code>blue</code> becomes idle or is terminated later. (Referenced from Martin Fowler's <a href="https://martinfowler.com/bliki/BlueGreenDeployment.html" rel="nofollow noreferrer">article</a>.)</p> <p>In Kubernetes, this can be achieved by having two identical deployments. Here is a good <a href="https://kubernetes.io/blog/2018/04/30/zero-downtime-deployment-kubernetes-jenkins/" rel="nofollow noreferrer">reference</a>. </p> <p>Basically, you can have two identical deployments; assume your current deployment <code>my-deployment-blue</code> is on production. Once you are ready with the new version, you can deploy it as a completely new deployment, let's say <code>my-deployment-green</code>, and use a separate test service to test the <code>green</code> environment. Finally, switch the traffic to <code>my-deployment-green</code> when all tests have passed, as in the sketch below.</p>
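<p>As a rough illustration of that final switch step, here is a minimal sketch using the Kubernetes Python client that repoints a Service from the blue pods to the green pods by patching its label selector. The service name, namespace and label values are assumptions, not something taken from your setup.</p> <pre><code>from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
v1 = client.CoreV1Api()

# Repoint the Service selector from the 'blue' pods to the 'green' pods.
patch = {'spec': {'selector': {'app': 'my-app', 'version': 'green'}}}
v1.patch_namespaced_service(name='my-app-service', namespace='default', body=patch)
</code></pre>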
<p>I can't find where the kubelet logs are located for Docker Desktop (Windows). There's a similar question <a href="https://stackoverflow.com/questions/34113476/where-are-the-kubernetes-kubelet-logs-located">here</a>, but the answers all refer to linux/kind installs of kubernetes.</p>
<p>To get <code>kubelet</code> logs you need to get access to the virtual machine that the docker daemon runs in. Since there is no <code>ssh</code> available, there is a workaround for this:</p> <p>Here's how to log in to the VM:</p> <pre><code>docker run --privileged -it -v /:/host -v /var/run/docker.sock:/var/run/docker.sock jongallant/ubuntu-docker-client </code></pre> <p>and then use this command to find the kubelet logs:</p> <pre><code>ls /host/var/log/kubelet* </code></pre> <p>Please note that this is just a workaround for a tool that was designed for testing and it's not an officially supported way. This question also describes how to <a href="https://stackoverflow.com/questions/44370059/connect-with-ssh-to-docker-daemon-on-windows">ssh to the docker daemon</a>.</p>
<p>I have written an argo dag to trigger a spark job recursively until the condition is satisfied. I have a counter parameter which needs to be incremented by 1 after every successful completion of the spark job. But this isn't happening. Here is the snippet of my workflow.</p> <pre><code> templates: - name: test-dag dag: tasks: - name: test-spark-job template: test-spark-job - name: loop-it template: backfill-dag dependencies: [backfill-spark-job] when: &quot;{{=asInt(workflow.parameters.counter)}} &lt;= {{=asInt(workflow.parameters.batchsize)}}&quot; arguments: parameters: - name: counter value: {{=asInt(workflow.parameters.counter)}}+1 </code></pre>
<p>The <code>+1</code> should be part of the expression. Try:</p> <pre><code>arguments: parameters: - name: counter value: &quot;{{=asInt(workflow.parameters.counter) + 1}}&quot; </code></pre>
<p>Can anyone pls help me with Open-Shift Routes?</p> <p>I have set up a Route with Reencrypt TLS termination. Calls made to the service endpoint (<a href="https://openshift-pmi-dev-reencrypt-default.apps.vapidly.os.fyre.ibm.com" rel="nofollow noreferrer">https://openshift-pmi-dev-reencrypt-default.apps.vapidly.os.fyre.ibm.com</a>) results in:</p> <p><a href="https://i.stack.imgur.com/Ww8ST.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ww8ST.png" alt="enter image description here"></a></p> <p>Requests made to the URL does not seem to reach the pods, it is returning a 503 Application not available error. The liberty application is running fine on port 8543, application logs looks clean. </p> <p>I am unable to identify the root cause of this error, The requests made on external https URLs does not make it to the application pod. Any suggestions on how to get the endpoint url's working?</p> <p>Thanks for your help in advance!</p> <p>Openshift version 4.2 Liberty version 19</p> <p><strong>Route.yaml</strong></p> <pre><code>kind: Route apiVersion: route.openshift.io/v1 metadata: name: openshift-pmi-dev-reencrypt namespace: default selfLink: &gt;- /apis/route.openshift.io/v1/namespaces/default/routes/openshift-pmi-dev-reencrypt uid: 5de29e0d-16b6-11ea-a1ab-0a580afe00ab resourceVersion: '7059134' creationTimestamp: '2019-12-04T16:51:50Z' labels: app: apm-pm-api annotations: openshift.io/host.generated: 'true' spec: host: openshift-pmi-dev-reencrypt-default.apps.vapidly.os.fyre.ibm.com subdomain: '' path: /ibm/pmi/service to: kind: Service name: apm-pm-api-service weight: 100 port: targetPort: https tls: termination: reencrypt insecureEdgeTerminationPolicy: None wildcardPolicy: None status: ingress: - host: openshift-pmi-dev-reencrypt-default.apps.vapidly.os.fyre.ibm.com routerName: default conditions: - type: Admitted status: 'True' lastTransitionTime: '2019-12-04T16:51:50Z' wildcardPolicy: None routerCanonicalHostname: apps.vapidly.os.fyre.ibm.com </code></pre> <p><strong>Service.yaml</strong></p> <pre><code>kind: Service apiVersion: v1 metadata: name: apm-pm-api-service namespace: default selfLink: /api/v1/namespaces/default/services/apm-pm-api-service uid: 989040ed-166c-11ea-b792-00000a1003d7 resourceVersion: '7062857' creationTimestamp: '2019-12-04T08:03:46Z' labels: app: apm-pm-api spec: ports: - name: https protocol: TCP port: 443 targetPort: 8543 selector: app: apm-pm-api clusterIP: 172.30.122.233 type: ClusterIP sessionAffinity: None status: loadBalancer: {} </code></pre>
<p>Looking at the snapshot, the browser is stating "Not Secure" for the connection. Is this an attempt to access the application over HTTP, not HTTPS?</p> <p>Having <code>spec.tls.insecureEdgeTerminationPolicy: None</code> means that traffic on insecure schemes (HTTP) is disabled - see the "Re-encryption Termination" section in <a href="https://docs.openshift.com/container-platform/3.11/architecture/networking/routes.html" rel="nofollow noreferrer">this doc</a>.</p> <p>I'd also suggest using that documentation to determine whether you need to configure <code>spec.tls.destinationCACertificate</code>.</p>
<p>I am trying to create a copy of a file within a container using the kubernetes python api.</p> <p>Below is the function I want to create:</p> <pre><code>def create_file_copy(self, file_name, pod_name, pod_namespace=None): if pod_namespace is None: pod_namespace = self.svc_ns stream(self.v1.connect_get_namespaced_pod_exec, name = pod_name, namespace = self.svc_ns ,command=['/bin/sh', '-c', 'cp file_name file_name_og'], stderr=True, stdin=True, stdout=True, tty=True) </code></pre> <p>NOTE: self.v1 is a kubernetes client api object which can access the kubernetes api methods.</p> <p>My question is about how to parameterize file_name in "cp file_name file_name_og" in the command parameter.</p> <p>Not an expert in linux commands, so any help is appreciated. Thanks</p>
<p>Assuming that both <code>file_name</code> and <code>file_name_og</code> are to be parameterized, this constructs the <code>cp</code> copy command dynamically from the function's arguments (it also passes <code>pod_namespace</code> to the call, so the default-namespace handling actually takes effect):</p> <pre><code>def create_file_copy(self, file_name, file_name_og, pod_name, pod_namespace=None): if pod_namespace is None: pod_namespace = self.svc_ns stream(self.v1.connect_get_namespaced_pod_exec, name = pod_name, namespace = pod_namespace ,command=['/bin/sh', '-c', 'cp "' + file_name + '" "' + file_name_og + '"'], stderr=True, stdin=True, stdout=True, tty=True) </code></pre>
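<p>As a small follow-up (an assumption on my side, not part of the original question): if the file names can contain spaces or shell metacharacters, it is safer to escape them with <code>shlex.quote</code> before embedding them in the shell command string. A minimal sketch:</p> <pre><code>import shlex

def build_copy_command(file_name, file_name_og):
    # Each argument is shell-escaped before being placed in the command string.
    return ['/bin/sh', '-c',
            'cp %s %s' % (shlex.quote(file_name), shlex.quote(file_name_og))]

# Example: ['/bin/sh', '-c', &quot;cp 'my report.txt' 'my report_og.txt'&quot;]
print(build_copy_command('my report.txt', 'my report_og.txt'))
</code></pre>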
<p>I'm new to Kubernetes. Why does NodePort alone have a default port range of <strong>30000 - 32767</strong> in Kubernetes? And even if we change the default to a user-defined port range, why are only <em><strong>2767</strong></em> ports allowed?</p> <p>Please help me understand. Thanks in advance.</p>
<p>This range was picked to avoid conflicts with anything else on the host machine network, since in many cases the node port is assigned dynamically (a manual option is also possible). For example, if you set it up with the range 1-32767, your allocated <code>nodePort</code> might conflict with port 22.</p> <p>The reasons are pretty well covered <a href="https://github.com/kubernetes/kubernetes/issues/9995" rel="noreferrer">here</a> by @thockin:</p> <blockquote> <ol> <li>We don't want service node ports to tromp on real ports used by the node</li> <li>We don't want service node ports to tromp on pod host ports.</li> <li>We don't want to randomly allocate someone port 80 or 443 or 22.</li> </ol> </blockquote> <p>Looking at the code I see that the range itself is configurable rather than hard-limited. You can find code snippets <a href="https://github.com/kubernetes/kubernetes/blob/59876df736c41093363f4c198aeec05e29c9c902/cmd/kube-apiserver/app/server.go#L197" rel="noreferrer">here</a>, <a href="https://github.com/kubernetes/kubernetes/blob/59876df736c41093363f4c198aeec05e29c9c902/cmd/kube-apiserver/app/server.go#L93" rel="noreferrer">here</a> and in the godocs <a href="https://godoc.org/github.com/mdevilliers/kubernetes/pkg/util#PortRange" rel="noreferrer">here</a>.</p> <p>I've also performed a quick test: when I set a different, wider range it works fine for me:</p> <pre><code>➜ temp kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-service NodePort 10.100.214.233 &lt;none&gt; 80:14051/TCP 68s my-service2 NodePort 10.97.67.57 &lt;none&gt; 80:10345/TCP 6s </code></pre>
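<p>To illustrate the "manual option" mentioned above, here is a minimal sketch with the Kubernetes Python client that requests a fixed <code>nodePort</code> inside the default 30000-32767 range instead of letting the API server pick one. The names, labels and ports are assumptions for illustration only.</p> <pre><code>from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

service = client.V1Service(
    metadata=client.V1ObjectMeta(name='my-service-fixed'),
    spec=client.V1ServiceSpec(
        type='NodePort',
        selector={'app': 'MyApp'},
        # node_port must stay inside the configured service-node-port-range.
        ports=[client.V1ServicePort(port=80, target_port=8080, node_port=30080)]))

v1.create_namespaced_service(namespace='default', body=service)
</code></pre>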
<p>If I run the <code>kubectl create -f deployment.yaml</code> command with the following <code>deployment.yaml</code> file, everything succeeds.</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: my_app labels: app: my_app spec: containers: - name: my_app image: docker:5000/path_to_my_custom_image args: ["my_special_argument"] </code></pre> <p>However, now I want to have a custom "my_special_argument" as follows</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: my_app labels: app: my_app spec: containers: - name: my_app image: docker:5000/path_to_my_custom_image args: ["$(ARG)"] </code></pre> <p>and I want to somehow set the value of $ARG$ when I execute the <code>kubectl create -f deployment.yaml</code> command. How to do that?</p> <p>I am looking for something like: <code>kubectl create -f deployment.yaml --ARG=new_arg</code></p> <p>Can such command be executed?</p>
<p>You can use Environment variables in the <code>deployment.yaml</code></p> <pre><code>apiVersion: v1 kind: Pod metadata: name: my_app labels: app: my_app spec: containers: - name: my_app image: docker:5000/path_to_my_custom_image env: - name: SPECIAL_ARG_VAL value: "my_special_argument_val_for_my_app" args: ["$(SPECIAL_ARG_VAL)"] </code></pre> <p>Also, you can load the value for environment variables using Secrets or Configmaps.</p> <p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#configure-all-key-value-pairs-in-a-configmap-as-container-environment-variables" rel="nofollow noreferrer">Here</a> is an example loading value from configmap</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: my_app labels: app: my_app spec: containers: - name: my_app image: docker:5000/path_to_my_custom_image env: - name: SPECIAL_ARG_VAL valueFrom: configMapKeyRef: name: special-config key: SPECIAL_VAL_KEY args: ["$(SPECIAL_ARG_VAL)"] </code></pre> <p>You can create the configmap using <code>kubectl</code> as the following, but recommend to have a separate yaml file.</p> <pre><code>kubectl create configmap special-config --from-literal=SPECIAL_VAL_KEY=my_special_argument_val_for_my_app </code></pre> <p>You can even remove the <code>args</code> from the pod yaml above if you had the same environment variable defined in the Dockerfile for the image.</p>
<p>I have a GitHub Actions workflow that substitutes value in a deployment manifest. I use <code>kubectl patch --local=true</code> to update the image. This used to work flawlessly until now. Today the workflow started to fail with a <code>Missing or incomplete configuration info</code> error.</p> <p>I am running <code>kubectl</code> with <code>--local</code> flag so the config should not be needed. Does anyone know what could be the reason why <code>kubectl</code> suddenly started requiring a config? I can't find any useful info in Kubernetes GitHub issues and hours of googling didn't help.</p> <p>Output of the failed step in GitHub Actions workflow:</p> <pre><code>Run: kubectl patch --local=true -f authserver-deployment.yaml -p '{&quot;spec&quot;:{&quot;template&quot;:{&quot;spec&quot;:{&quot;containers&quot;:[{&quot;name&quot;:&quot;authserver&quot;,&quot;image&quot;:&quot;test.azurecr.io/authserver:20201230-1712-d3a2ae4&quot;}]}}}}' -o yaml &gt; temp.yaml &amp;&amp; mv temp.yaml authserver-deployment.yaml error: Missing or incomplete configuration info. Please point to an existing, complete config file: 1. Via the command-line flag --kubeconfig 2. Via the KUBECONFIG environment variable 3. In your home directory as ~/.kube/config To view or setup config directly use the 'config' command. Error: Process completed with exit code 1. </code></pre> <p>Output of <code>kubectl version</code>:</p> <pre><code>Client Version: version.Info{Major:&quot;1&quot;, Minor:&quot;19&quot;, GitVersion:&quot;v1.19.0&quot;, GitCommit:&quot;ffd68360997854d442e2ad2f40b099f5198b6471&quot;, GitTreeState:&quot;clean&quot;, BuildDate:&quot;2020-11-18T13:35:49Z&quot;, GoVersion:&quot;go1.15.0&quot;, Compiler:&quot;gc&quot;, Platform:&quot;linux/amd64&quot;} </code></pre>
<p>As a workaround I installed kind (it does take longer for the job to finish, but at least it's working and it can be used for e2e tests later).</p> <p>Added these steps (a single GitHub Actions step cannot combine <code>uses</code> and <code>run</code>, so they are split):</p> <pre><code>- name: Setup kind uses: engineerd/[email protected] - name: Check cluster run: kubectl version </code></pre> <p>Also use <code>--dry-run=client</code> as an option for your kubectl command.</p> <p>I do realize this is not the proper solution.</p>
<p>I wan to create service account with token in Kubernetes. I tried this:</p> <p>Full log:</p> <pre><code>root@vmi1026661:~# ^C root@vmi1026661:~# kubectl create sa cicd serviceaccount/cicd created root@vmi1026661:~# kubectl get sa,secret NAME SECRETS AGE serviceaccount/cicd 0 5s serviceaccount/default 0 16d NAME TYPE DATA AGE secret/repo-docker-registry-secret Opaque 3 16d secret/sh.helm.release.v1.repo.v1 helm.sh/release.v1 1 16d root@vmi1026661:~# cat &lt;&lt;EOF | kubectl apply -f - apiVersion: v1 kind: Pod metadata: name: cicd spec: serviceAccount: cicd containers: - image: nginx name: cicd EOF pod/cicd created root@vmi1026661:~# kubectl exec cicd cat /run/secrets/kubernetes.io/serviceaccount/token &amp;&amp; echo kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead. error: unable to upgrade connection: container not found (&quot;cicd&quot;) root@vmi1026661:~# kubectl exec cicd cat /run/secrets/kubernetes.io/serviceaccount/token &amp;&amp; echo kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead. error: unable to upgrade connection: container not found (&quot;cicd&quot;) root@vmi1026661:~# kubectl create token cicd eyJhbGciOiJSUzI1NiIsImtpZCI6IlUyQzNBcmx3RFhBeGdWRjlibEtfZkRPMC12Z0RpU1BHYjFLaWN3akViVVUifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jY WwiXSwiZXhwIjoxNjY2NzkyNTIxLCJpYXQiOjE2NjY3ODg5MjEsImlzcyI6Imh0dHBzOi8va3ViZXJuZ XRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiO iJkZWZhdWx0Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImNpY2QiLCJ1aWQiOiI3ODhmNzUwMS0xZ WFjLTQ0YzktOWQ3Ni03ZjVlN2FlM2Q4NzIifX0sIm5iZiI6MTY2Njc4ODkyMSwic3ViIjoic3lzdGVtO nNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6Y2ljZCJ9.iBkpVDQ_w_UZmbr3PnpouwtQlLz9FzJs_cJ7IYbY WUphBM4NO4o8gPgBfnHGPG3uFVbEDbgdY2TsuxHKss0FosiCdjYBiLn8dp_SQd1Rdk0TMYGCLAOWRgZE XjpmXMLBcHtC5TexJY-bIpvw7Ni4Xls5XPbGpfqL_fcPuUQR3Gurkmk7gPSly77jRKSaF-kzj0oq78MPtwHu92g5hnIZs7ZLaMLzo9EvDRT092RVZXiVF0FkmflnUPNiyKxainrfvWTiTAlYSZreX6JfGjimklTAKCue4w9CqWZGNyGGumqH02ucMQ xjAiHS6J_Goxyaho8QEvFsEhkVqNFndzbw root@vmi1026661:~# kubectl create token cicd --duration=999999h eyJhbGciOiJSUzI1NiIsImtpZCI6IlUyQzNBcmx3RFhBeGdWRjlibEtfZkRPMC12Z0RpU1BHYjFLaWN3akViVVUifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jY WwiXSwiZXhwIjo1MjY2Nzg1MzI2LCJpYXQiOjE2NjY3ODg5MjYsImlzcyI6Imh0dHBzOi8va3ViZXJuZ XRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiO iJkZWZhdWx0Iiwic2VydmljZWFjY291bnQiOnsibmFtZSI6ImNpY2QiLCJ1aWQiOiI3ODhmNzUwMS0xZ WFjLTQ0YzktOWQ3Ni03ZjVlN2FlM2Q4NzIifX0sIm5iZiI6MTY2Njc4ODkyNiwic3ViIjoic3lzdGVtO nNlcnZpY2VhY2NvdW50OmRlZmF1bHQ6Y2ljZCJ9.N1V7i0AgW3DihJDWcGbM0kDvFH_nWodPlqZjLSHM KvaRAfmujOxSk084mrmjkZwIzWGanA6pkTQHiBIAGh8UhR7ijo4J6S58I-5Dj4gu2UWVOpaBzDBrKqBD SapFw9PjKpZYCHjsXTCzx6Df8q-bAEk_lpc0CsfpbXQl2jpJm3TTtQp1GKuIc53k5VKz9ON8MXcHY8lEfNs78ew8GiaoX6M4_5LmjSNVMHtyRy-Z_oIH9yK8LcHLxh0wqMS7RyW9UKN_9-qH1h01NwrFFOQWpbstFVuQKAnI-RyNEZDc9FZMNwYd_n MwaKv54oNLx4TniOSOWxS7ZcEyP5b7U8mgBw root@vmi1026661:~# cat &lt;&lt;EOF | kubectl apply -f - apiVersion: v1 kind: Secret type: kubernetes.io/service-account-token metadata: name: cicd annotations: kubernetes.io/service-account.name: &quot;cicd&quot; EOF secret/cicd created root@vmi1026661:~# cat &lt;&lt;EOF | kubectl apply -f - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: ClusterRoleBind roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: 
cluster-admin subjects: - kind: ServiceAccount name: cicd namespace: default EOF clusterrolebinding.rbac.authorization.k8s.io/ClusterRoleBind created root@vmi1026661:~# kubectl get sa,secret NAME SECRETS AGE serviceaccount/cicd 0 60s serviceaccount/default 0 16d NAME TYPE DATA AGE secret/cicd kubernetes.io/service-account-token 3 12s secret/repo-docker-registry-secret Opaque 3 16d secret/sh.helm.release.v1.repo.v1 helm.sh/release.v1 1 16d root@vmi1026661:~# kubectl describe secret cicd Name: cicd Namespace: default Labels: &lt;none&gt; Annotations: kubernetes.io/service-account.name: cicd kubernetes.io/service-account.uid: 788f7501-1eac-44c9-9d76-7f5e7ae3d872 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1099 bytes namespace: 7 bytes token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlUyQzNBcmx3RFhBeGdWRjlibEtfZkRPMC12Z0RpU1BHYjFLaWN3akViVVUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZ XRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZ XJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImNpY2QiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2Nvd W50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiY2ljZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291b nQvc2VydmljZS1hY2NvdW50LnVpZCI6Ijc4OGY3NTAxLTFlYWMtNDRjOS05ZDc2LTdmNWU3YWUzZDg3M iIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmNpY2QifQ.Uqpr96YyYgdCHQ-GLP lDMYgF_kzO7LV5B92voDjIPlXa_IQxAL9BdQyFAQmSRS71tLxbm9dvQt8h6mCsfPE_-ixgcpStuNcPtw GLAvVqrALVW5Qb9e2o1oraMq2w9s1mNSF-J4UaaKvaWJY_2X7pYgSdiiWp7AZg6ygMsJEjVWg2-dLroM-lp1VDMZB_lJPjZ90-lkbsnxh7f_zUeI8GqSBXcomootRmDOZyCywFAeBeWqkLTb149VNPJpYege4nH7A1ASWg-_rCfxvrq_92V2vGFBSvQ T6-uzl_pOLZ452rZmCsd5fkOY17sbXXCOcesnQEQdRlw4-GENDcv7IA root@vmi1026661:~# kubectl describe sa cicd Name: cicd Namespace: default Labels: &lt;none&gt; Annotations: &lt;none&gt; Image pull secrets: &lt;none&gt; Mountable secrets: &lt;none&gt; Tokens: cicd Events: &lt;none&gt; root@vmi1026661:~# kubectl get sa cicd -oyaml apiVersion: v1 kind: ServiceAccount metadata: creationTimestamp: &quot;2022-10-26T12:54:45Z&quot; name: cicd namespace: default resourceVersion: &quot;2206462&quot; uid: 788f7501-1eac-44c9-9d76-7f5e7ae3d872 root@vmi1026661:~# kubectl get sa,secret NAME SECRETS AGE serviceaccount/cicd 0 82s serviceaccount/default 0 16d NAME TYPE DATA AGE secret/cicd kubernetes.io/service-account-token 3 34s secret/repo-docker-registry-secret Opaque 3 16d secret/sh.helm.release.v1.repo.v1 helm.sh/release.v1 1 16d root@vmi1026661:~# ^C root@vmi1026661:~# kubectl describe secret cicd Name: cicd Namespace: default Labels: &lt;none&gt; Annotations: kubernetes.io/service-account.name: cicd kubernetes.io/service-account.uid: 788f7501-1eac-44c9-9d76-7f5e7ae3d872 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1099 bytes namespace: 7 bytes token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlUyQzNBcmx3RFhBeGdWRjlibEtfZkRPMC12Z0RpU1BHYjFLaWN3akViVVUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW5 0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImNpY2QiLCJrdWJlc m5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiY2ljZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6Ijc4OG Y3NTAxLTFlYWMtNDRjOS05ZDc2LTdmNWU3YWUzZDg3MiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmNpY2QifQ.Uqpr96YyYgdCHQ-GLPlDMYgF_kzO7LV5-02voDjIP lXa_IQxAL9BdQyFAQmSRS71tLxbm9dvQt8h6mCsfPE_-ixgcpStuNcPtwGLAvVqrALVW5Qb9e2o1oraMq2w9s1mNSF-J4UaaKvaWJY_2X7pYgSdiiWp7AZg6ygMsJEjVWg2-dLroM-lp1VDMZ 
B_lJPjZ9DtBblkbsnxh7f_zUeI8GqSBXcomootRmDOZyCywFAeBeWqkLTb149VNPJpYege4nH7A1ASWg-_rCfxvrq_92V2vGFBSvQT6-uzl_pOLZ452rZmCsd5fkOY17sbXXCOcesnQEQdRlw4-GENDcv7IA root@vmi1026661:~# root@vmi1026661:~# </code></pre> <p>As you can see I get error:</p> <pre><code>root@vmi1026661:~# kubectl exec cicd cat /run/secrets/kubernetes.io/serviceaccount/token &amp;&amp; echo kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead. error: unable to upgrade connection: container not found (&quot;cicd&quot;) root@vmi1026661:~# kubectl exec cicd cat /run/secrets/kubernetes.io/serviceaccount/token &amp;&amp; echo kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead. error: unable to upgrade connection: container not found (&quot;cicd&quot;) root@vmi1026661:~# kubectl create token cicd </code></pre> <p>Do you kno0w what should be the appropriate command to complete this step?</p> <p>EDIT: here is the result from troubleshooting</p> <pre><code>root@vmi1026661:~# kubectl get pods NAME READY STATUS RESTARTS AGE cicd 1/1 Running 0 67m repo-docker-registry-78d6c5bdb5-r68jb 0/1 Pending 0 16d root@vmi1026661:~# kubectl describe pod cicd Name: cicd Namespace: default Priority: 0 Service Account: cicd Node: vmi1026660/38.242.240.39 Start Time: Wed, 26 Oct 2022 14:54:57 +0200 Labels: &lt;none&gt; Annotations: &lt;none&gt; Status: Running IP: 10.244.1.13 IPs: IP: 10.244.1.13 Containers: cicd: Container ID: containerd://ab44fc463f97316ba807efce0c82e276cf06326e1d03846c1f6186484ff9fcbb Image: nginx Image ID: docker.io/library/nginx@sha256:47a8d86548c232e44625d813b45fd92e81d07c639092cd1f9a49d98e1fb5f737 Port: &lt;none&gt; Host Port: &lt;none&gt; State: Running Started: Wed, 26 Oct 2022 14:55:22 +0200 Ready: True Restart Count: 0 Environment: &lt;none&gt; Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xggpn (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-xggpn: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: &lt;nil&gt; DownwardAPI: true QoS Class: BestEffort Node-Selectors: &lt;none&gt; Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: &lt;none&gt; root@vmi1026661:~# </code></pre>
<p>This is how you can complete your desired commands:</p> <pre><code>kubectl exec -it &lt;POD_NAME&gt; -c &lt;CONTAINER_NAME&gt; -- /bin/bash cat /run/secrets/kubernetes.io/serviceaccount/token &amp;&amp; echo </code></pre> <p>NOTE: if you only have 1 container in your pod, you can omit <code>-c &lt;CONTAINER_NAME&gt;</code></p> <p><code>-it</code> is short for <code>--stdin</code> and <code>--tty</code>, which keeps an interactive terminal attached</p> <p><code>/bin/bash</code> is your [COMMAND] here; you can pass arguments after it</p> <p><code>--</code> separates the command you want to run in the container from kubectl's own flags</p> <p>After the first command is run, you're inside a bash shell and can run whatever other commands you like inside the container.</p>
<p>I'm using the Java Fabric k8s client to get the list of PODs whose phase is equal to &quot;Running&quot;. I have 2 containers in the POD. What I noticed is that the getPods() method returns the POD as Running even when one of the 2 containers is still not in the READY state.</p> <p>Why is this happening when not all the containers in the pod are in the READY state?</p>
<p><code>Running</code> is a Pod <em>status</em> state that is defined as:</p> <blockquote> <p>&quot;The Pod has been bound to a node, and all of the containers have been created. At least one container is still running, or is in the process of starting or restarting.&quot;</p> </blockquote> <p>Just one of the Pod's containers needs to be booting up or running to sufficiently mark the whole Pod as <code>Running</code>. Possible states are <code>Pending, Running, Succeeded, Failed, Unknown</code>. Think of these as the highest-level description of the Pod state. Each of these states has a Pod <em>condition</em> array amongst other metadata that provide more details about the containers inside.</p> <p><code>Ready</code> is a Pod <em>condition</em> which is a sub-state of a Pod <code>status</code> (<code>status.condition</code>). In simplest terms, when a Pod is marked as <code>Ready</code>, it's fully booted up and able to accept <em>traffic</em>. Sometimes this does depend on how your Pod spec is set up; for example, if your Pod has a <code>readinessProbe</code>, it reaches <code>Ready</code> only if the <code>readinessProbe</code> succeeds.</p> <p>Example: <code>kubectl get po</code></p> <pre><code>NAME READY STATUS RESTARTS AGE nani-play-8b6b89455-9g674 1/1 Running 0 13s </code></pre> <p>If I explore deeper into the pod via <code>kubectl describe po nani-play-8b6b89455-9g674</code>, amongst the other information is</p> <pre><code>Conditions: Type Status Initialized True Ready True &lt;---- 👀 ContainersReady True PodScheduled True </code></pre>
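<p>As a sketch of how to act on this distinction programmatically (shown here with the Kubernetes Python client; the fabric8 Java client exposes the same fields on its pod status object), treat a pod as available only when its <code>Ready</code> condition is <code>True</code>, not merely when <code>status.phase</code> is <code>Running</code>. The namespace and label selector are assumptions.</p> <pre><code>from kubernetes import client, config

def is_pod_ready(pod):
    # Look for the 'Ready' condition instead of relying on the phase alone.
    for cond in (pod.status.conditions or []):
        if cond.type == 'Ready':
            return cond.status == 'True'
    return False

config.load_kube_config()
v1 = client.CoreV1Api()
for pod in v1.list_namespaced_pod('default', label_selector='app=my-app').items:
    print(pod.metadata.name, pod.status.phase, 'ready=%s' % is_pod_ready(pod))
</code></pre>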
<p>I'm using the client-go API in Go to access the list of Pods under a given controller (Deployment). While querying the list of pods belonging to it using the selector labels, you get an array of <code>PodConditions</code> - <a href="https://pkg.go.dev/k8s.io/api/core/v1?tab=doc#PodCondition" rel="noreferrer">https://pkg.go.dev/k8s.io/api/core/v1?tab=doc#PodCondition</a>.</p> <p>This is well aligned with the official documentation of pod conditions - <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-conditions</a>. But the documentation isn't clear how to access this array of entries. Is it sorted by most recent entry first? For e.g. if I want to access only the most recent status of the Pod, how should it be done? From one of the trials I did in my local cluster, I got updates (Pod Conditions array) for one of the controller's Pods as below</p> <pre><code>{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-29 08:01:15 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-07-29 08:01:22 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-07-29 08:01:22 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-29 08:01:15 +0000 UTC } </code></pre> <p>As you can see that the given Pod has transitioned from <code>ContainersReady</code> to <code>Ready</code> just about at the same time <code>08:01:22 +0000 UTC</code>. But neither of them are in the first or last index.</p> <p>So TLDR, the question is how to infer the latest Pod condition type and status, from this array of values?</p>
<p>The Pod didn't transition from <code>ContainersReady</code> to <code>Ready</code>; rather, the <code>ConditionStatus</code> of those <code>PodConditionTypes</code> changed from <code>False</code> to <code>True</code>.<br /> The <code>PodCondition</code> array holds the details about each ConditionType, but the entries are not correlated or sorted by recency, and you shouldn't rely on the order of the PodCondition updates.<br /> Instead, monitor the details of each PodCondition that interests you.</p> <p>If you just want to know whether the pod is running or not, take a look at <a href="https://pkg.go.dev/k8s.io/api/core/v1?tab=doc#PodPhase" rel="nofollow noreferrer"><code>PodPhase</code></a>. It's also part of the <code>PodStatus</code> struct.</p>
<p>I seem to have an issue with my C# applications that I containerize and put into pod, cant seem to reach services withing the cluster?</p> <p>I made a simple code example </p> <pre><code>using System; using System.Net.NetworkInformation; namespace pingMe { class Program { static void Main(string[] args) { Console.WriteLine("Hello World!"); Ping ping = new Ping(); PingReply pingresult = ping.Send("example-service.default.svc"); if (pingresult.Status.ToString() == "Success") { Console.WriteLine("I can reach"); } } } } </code></pre> <p>which should be able to ping this within the cluster </p> <pre><code>PS C:\Helm&gt; kubectl apply -f https://k8s.io/examples/service/access/hello-application.yaml deployment.apps/hello-world created PS C:\Helm&gt; kubectl expose deployment hello-world --type=ClusterIP --name=example-service service/example-service exposed PS C:\Helm&gt; kubectl get service/example-service NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE example-service ClusterIP 10.97.171.216 &lt;none&gt; 8080/TCP 14s PS C:\Helm&gt; kubectl get deployment.apps/hello-world NAME READY UP-TO-DATE AVAILABLE AGE hello-world 2/2 2 2 61s </code></pre> <p>but In the cluster I get this: </p> <pre><code>Hello World! Unhandled exception. System.Net.NetworkInformation.PingException: An exception occurred during a Ping request. ---&gt; System.Net.Internals.SocketExceptionFactory+ExtendedSocketException (00000005, 0xFFFDFFFF): Name or service not known at System.Net.Dns.InternalGetHostByName(String hostName) at System.Net.Dns.GetHostAddresses(String hostNameOrAddress) at System.Net.NetworkInformation.Ping.GetAddressAndSend(String hostNameOrAddress, Int32 timeout, Byte[] buffer, PingOptions options) --- End of inner exception stack trace --- at System.Net.NetworkInformation.Ping.GetAddressAndSend(String hostNameOrAddress, Int32 timeout, Byte[] buffer, PingOptions options) at System.Net.NetworkInformation.Ping.Send(String hostNameOrAddress, Int32 timeout, Byte[] buffer, PingOptions options) at System.Net.NetworkInformation.Ping.Send(String hostNameOrAddress) at pingMe.Program.Main(String[] args) in /src/pingMe/Program.cs:line 12 </code></pre> <p>How come... Adding the port number does not help..</p>
<p>On the surface it all looks OK. I would start debugging by eliminating possibilities. From the "worker" pod, can you ping the ClusterIP rather than the service name? Can you successfully do a DNS lookup on any service or pod? If you can access the Cluster IP, but pinging or resolving the service name continues to fail, then DNS seems more suspect. See the sketch below for a quick DNS check.</p>
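<p>For the DNS part of that checklist, a quick way to separate name-resolution failures from connectivity failures is to run a lookup from any pod in the cluster (for example via <code>kubectl exec</code>). This is only a sketch and assumes a pod with Python available; the C# application itself is not needed for the test.</p> <pre><code>import socket

name = 'example-service.default.svc.cluster.local'
try:
    # Resolves through the cluster DNS (CoreDNS/kube-dns) when run inside a pod.
    print('resolved', name, 'to', socket.gethostbyname(name))
except socket.gaierror as err:
    print('DNS lookup failed:', err)
</code></pre>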
<p>I'm using GKE and Helm v3 and I'm trying to create/reserve a static IP address using ComputeAddress and then to create DNS A record with the previously reserved IP address.</p> <p>Reserve IP address</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: compute.cnrm.cloud.google.com/v1beta1 kind: ComputeAddress metadata: name: ip-address annotations: cnrm.cloud.google.com/project-id: project-id spec: location: global </code></pre> <p>Get reserved IP address</p> <pre><code>kubectl get computeaddress ip-address -o jsonpath='{.spec.address}' </code></pre> <p>Create DNS A record</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: dns.cnrm.cloud.google.com/v1beta1 kind: DNSRecordSet metadata: name: dns-record-a annotations: cnrm.cloud.google.com/project-id: project-id spec: name: &quot;{{ .Release.Name }}.example.com&quot; type: &quot;A&quot; ttl: 300 managedZoneRef: external: example-com rrdatas: - **IP-ADDRESS-VALUE** &lt;---- </code></pre> <p>Is there a way to reference the IP address value, created by ComputeAddress, in the DNSRecordSet resource?</p> <p>Basically, I need something similar to the output values in Terraform.</p> <p>Thanks!</p>
<p>It's interesting that something similar exists for GKE Ingress where we can reference reserved IP address and managed SSL certificate using annotations:</p> <pre><code>annotations: kubernetes.io/ingress.global-static-ip-name: my-static-address </code></pre> <p>I have no idea why there is not something like this for DNSRecordSet resource. Hopefully, GKE will introduce it in the future.</p> <p>Instead of running two commands, I've found a workaround by using Helm's hooks.</p> <p>First, we need to define Job as <strong>post-install</strong> and <strong>post-upgrade</strong> hook which will pick up the reserved IP address when it becomes ready and then create appropriate DNSRecordSet resource with it. The script which retrieves the IP address, and manifest for DNSRecordSet are passed through ConfigMap and mounted to Pod.</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1 kind: Job metadata: name: &quot;{{ .Release.Name }}-dns-record-set-hook&quot; annotations: # This is what defines this resource as a hook. Without this line, the # job is considered part of the release. &quot;helm.sh/hook&quot;: post-install,post-upgrade &quot;helm.sh/hook-delete-policy&quot;: before-hook-creation,hook-succeeded spec: template: metadata: name: &quot;{{ .Release.Name }}-dns-record-set-hook&quot; spec: restartPolicy: OnFailure containers: - name: post-install-job image: alpine:latest command: ['sh', '-c', '/opt/run-kubectl-command-to-set-dns.sh'] volumeMounts: - name: volume-dns-record-scripts mountPath: /opt - name: volume-writable mountPath: /mnt volumes: - name: volume-dns-record-scripts configMap: name: dns-record-scripts defaultMode: 0777 - name: volume-writable emptyDir: {} </code></pre> <p>ConfigMap definition with the script and manifest file:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: ConfigMap metadata: creationTimestamp: null name: dns-record-scripts data: run-kubectl-command-to-set-dns.sh: |- # install kubectl command apk add curl &amp;&amp; \ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.1/bin/linux/amd64/kubectl &amp;&amp; \ chmod u+x kubectl &amp;&amp; \ mv kubectl /bin/kubectl # wait for reserved IP address to be ready kubectl wait --for=condition=Ready computeaddress/ip-address # get reserved IP address IP_ADDRESS=$(kubectl get computeaddress ip-address -o jsonpath='{.spec.address}') echo &quot;Reserved address: $IP_ADDRESS&quot; # update IP_ADDRESS in manifest sed &quot;s/##IP_ADDRESS##/$IP_ADDRESS/g&quot; /opt/dns-record.yml &gt; /mnt/dns-record.yml # create DNS record kubectl apply -f /mnt/dns-record.yml dns-record.yml: |- apiVersion: dns.cnrm.cloud.google.com/v1beta1 kind: DNSRecordSet metadata: name: dns-record-a annotations: cnrm.cloud.google.com/project-id: project-id spec: name: &quot;{{ .Release.Name }}.example.com&quot; type: A ttl: 300 managedZoneRef: external: example-com rrdatas: - &quot;##IP_ADDRESS##&quot; </code></pre> <p>And, finally, for (default) Service Account to be able to retrieve the IP address and create/update DNSRecordSet, we need to assign some roles to it:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: dnsrecord-setter rules: - apiGroups: [&quot;compute.cnrm.cloud.google.com&quot;] resources: [&quot;computeaddresses&quot;] verbs: [&quot;get&quot;, &quot;list&quot;] - apiGroups: [&quot;dns.cnrm.cloud.google.com&quot;] resources: [&quot;dnsrecordsets&quot;] verbs: [&quot;get&quot;, &quot;create&quot;, 
&quot;patch&quot;] --- apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: dnsrecord-setter subjects: - kind: ServiceAccount name: default roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: dnsrecord-setter </code></pre>
<p>I have a node.js API run inside a Docker container on Kubernetes cluster within a pod. The pod is connected to Kubernetes service of type LoadBalancer, so I can connect to it from outside, and also from the Swagger UI, by passing to the Swagger UI which is run as another Docker container on the same Kubernetes cluster an API IP address <code>http://&lt;API IP address&gt;:&lt;port&gt;/swagger.json.</code></p> <p>But in my case I would like to call the API endpoints via Swagger UI using the service name like this <code>api-service.default:&lt;port&gt;/swagger.json</code> instead of using an external API IP address.</p> <p>For Swagger UI I' am using the latest version of swaggerapi/swagger-ui docker image from here: <a href="https://hub.docker.com/r/swaggerapi/swagger-ui" rel="nofollow noreferrer">https://hub.docker.com/r/swaggerapi/swagger-ui</a></p> <p>If I try to assign the <code>api-service.default:&lt;port&gt;/swagger.json</code> to Swagger-UI container environment variable then the Swagger UI result is: <strong>Failed with load API definition</strong></p> <p><img src="https://i.stack.imgur.com/3jBD0.jpg" alt="swagger.screenshot" /></p> <p>Which I guess is obvious because the browser does not recognize the internal cluster service name.</p> <p>Is there any way to communicate Swagger UI and API in Kubernetes cluster using service names?</p> <p><strong>--- Additional notes ---</strong></p> <p>The Swagger UI CORS error is misleading in that case. I am using this API from many other services.</p> <p><img src="https://i.stack.imgur.com/X8gyr.jpg" alt="enter image description here" /></p> <p>I have also tested the API CORS using cURL.</p> <p><img src="https://i.stack.imgur.com/lc4dI.jpg" alt="enter image description here" /></p> <p>I assume that swagger-ui container inside a pod can resolve that internal cluster service name, but the browser cannot because the browser works out of my Kubernetes cluster.</p> <p>On my other web services running in the browser (out of my cluster) served on nginx which also consumes this API, I use the nginx reverse proxy mechanizm.</p> <p><a href="https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/" rel="nofollow noreferrer">https://docs.nginx.com/nginx/admin-guide/web-server/reverse-proxy/</a></p> <p>This mechanizm redirects my API request invoked from the browser level to the internal cluster service name: <code>api-service.default:8080</code> where the nginx server is actually running. 
I mean the nginx is runnig on the cluster, browser not.</p> <p>Unfortunately, I dont't how to achive this in that swagger ui case.</p> <p>Swagger mainfest file:</p> <pre><code># SERVICE apiVersion: v1 kind: Service metadata: name: swagger-service labels: kind: swagger-service spec: selector: tier: api-documentation ports: - protocol: 'TCP' port: 80 targetPort: 8080 type: LoadBalancer --- # DEPLOYMENT apiVersion: apps/v1 kind: Deployment metadata: name: swagger-deployment labels: kind: swagger-deployment spec: replicas: 1 selector: matchLabels: tier: api-documentation template: metadata: labels: tier: api-documentation spec: containers: - name: swagger image: swaggerapi/swagger-ui imagePullPolicy: Always env: - name: URL value: 'http://api-service.default:8080/swagger.json' </code></pre> <p>API manifest file:</p> <pre><code># SERVICE apiVersion: v1 kind: Service metadata: name: api-service labels: kind: api-service spec: selector: tier: backend ports: - protocol: 'TCP' port: 8080 targetPort: 8080 type: LoadBalancer --- # DEPLOYMENT apiVersion: apps/v1 kind: Deployment metadata: name: api-deployment labels: kind: api-deployment spec: replicas: 1 selector: matchLabels: tier: backend template: metadata: labels: tier: backend spec: containers: - name: api image: &lt;my-api-image&gt;:latest </code></pre>
<p>I solved it by adding an nginx reverse proxy to the /etc/nginx/nginx.conf file in the swagger UI container, which redirects all requests ending with /swagger.json to the API service.</p> <p>After this file change you need to reload the nginx server: <code>nginx -s reload</code></p> <pre><code>server { listen 8080; server_name localhost; index index.html index.htm; location /swagger.json { proxy_pass http://api-service.default:8080/swagger.json; } location / { absolute_redirect off; alias /usr/share/nginx/html/; expires 1d; location ~* \.(?:json|yml|yaml)$ { #SWAGGER_ROOT expires -1; include cors.conf; } include cors.conf; } } </code></pre> <p>The <strong>important</strong> part is to assign only <code>/swagger.json</code> to the URL ENV of the SwaggerUI container. It is mandatory because requests must be routed to nginx in order to be resolved.</p> <p>Swagger manifest</p> <pre><code># SERVICE apiVersion: v1 kind: Service metadata: name: swagger-service labels: kind: swagger-service spec: selector: tier: api-documentation ports: - protocol: 'TCP' port: 80 targetPort: 8080 type: LoadBalancer --- # DEPLOYMENT apiVersion: apps/v1 kind: Deployment metadata: name: swagger-deployment labels: kind: swagger-deployment spec: replicas: 1 selector: matchLabels: tier: api-documentation template: metadata: labels: tier: api-documentation spec: containers: - name: swagger image: swaggerapi/swagger-ui imagePullPolicy: Always env: - name: URL value: '/swagger.json' </code></pre>
<p>I am trying to attach an IAM role to a pod's service account from within the POD in EKS.</p> <pre><code>kubectl annotate serviceaccount -n $namespace $serviceaccount eks.amazonaws.com/role-arn=$ARN </code></pre> <p>The current role attached to the <code>$serviceaccount</code>is outlined below:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: common-role rules: - apiGroups: [&quot;&quot;] resources: - event - secrets - configmaps - serviceaccounts verbs: - get - create </code></pre> <p>However, when I execute the <code>kubectl</code> command I get the following:</p> <pre><code>error from server (forbidden): serviceaccounts $serviceaccount is forbidden: user &quot;system:servi....&quot; cannot get resource &quot;serviceaccounts&quot; in API group &quot;&quot; ... </code></pre> <p>Is my role correct? Why can't I modify the service account?</p>
<p>Kubernetes by default will run the pods with <code>service account: default</code> which don`t have the right permissions. Since I cannot determine which one you are using for your pod I can only assume that you are using either default or some other created by you. In both cases the error suggest that the service account your are using to run your pod does not have proper rights.</p> <p>If you run this pod with service account type default you will have add the appropriate rights to it. Alternative way is to run your pod with another service account created for this purpose. Here`s an example:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: ServiceAccount metadata: name: run-kubectl-from-pod </code></pre> <p>Then you will have to create appropriate role (you can find full list of verbs <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#role-v1-rbac-authorization-k8s-io" rel="nofollow noreferrer">here</a>):</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: modify-service-accounts rules: - apiGroups: [&quot;&quot;] resources: - serviceaccounts verbs: - get - create - patch - list </code></pre> <p>I'm using here more verbs as a test. <code>Get</code> and <code>Patch</code> would be enough for this use case. I`m mentioning this since its best practice to provide as minimum rights as possible.</p> <p>Then create your role accordingly:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: modify-service-account-bind subjects: - kind: ServiceAccount name: run-kubectl-from-pod roleRef: kind: Role name: modify-service-accounts apiGroup: rbac.authorization.k8s.io </code></pre> <p>And now you just have reference that service account when your run your pod:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Pod metadata: name: run-kubectl-in-pod spec: serviceAccountName: run-kubectl-from-pod containers: - name: kubectl-in-pod image: bitnami/kubectl command: - sleep - &quot;3600&quot; </code></pre> <p>Once that is done, you just exec into the pod:</p> <pre><code>➜ kubectl-pod kubectl exec -ti run-kubectl-in-pod sh </code></pre> <p>And then annotate the service account:</p> <pre><code>$ kubectl get sa NAME SECRETS AGE default 1 19m eks-sa 1 36s run-kubectl-from-pod 1 17m $ kubectl annotate serviceaccount eks-sa eks.amazonaws.com/role-arn=$ARN serviceaccount/eks-sa annotated $ kubectl describe sa eks-sa Name: eks-sa Namespace: default Labels: &lt;none&gt; Annotations: eks.amazonaws.com/role-arn: Image pull secrets: &lt;none&gt; Mountable secrets: eks-sa-token-sldnn Tokens: &lt;none&gt; Events: &lt;none&gt; </code></pre> <p>If you encounter any issues with request being refused please start with reviewing <a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/#review-your-request-attributes" rel="nofollow noreferrer">your request attributes</a> and <a href="https://kubernetes.io/docs/reference/access-authn-authz/authorization/#determine-the-request-verb" rel="nofollow noreferrer">determine the appropriate request verb</a>.</p> <p>You can also check your access with <code>kubectl auth can-i</code> command:</p> <pre><code>kubectl-pod kubectl auth can-i patch serviceaccount </code></pre> <p>API server will respond with simple <code>yes</code> or <code>no</code>.</p> <hr /> <p><strong>Please Note</strong> that If you want to patch a service account to use an 
IAM role, you will have to delete and re-create any existing pods that are associated with the service account to apply the credentials environment variables. You can read more about it <a href="https://docs.aws.amazon.com/eks/latest/userguide/specify-service-account-role.html" rel="nofollow noreferrer">here</a>.</p> <hr />
<p>Kubernetes v1.20.0 , on Ubuntu 20.04.1, docker 19.3.11</p> <p>Using the following configuration I am able to create a deployment in the namespace &quot;live&quot; using this serviceaccount's token, but I am unable to delete the deployment using the same token.</p> <p>Trying to find the correct Role to bind to my programmatic ServiceAccount user, but I am unable to get the correct role mapping for deployment deletion. I've tried with and without slashes, asterisks in the resource names, and pluralizing the Resources.</p> <p>Here's the error i get from the API:</p> <pre><code> warnings.warn( Exception (403) Reason: Forbidden HTTP response headers: HTTPHeaderDict({'Cache-Control': 'no-cache, private', 'Content-Type': 'application/json', 'X-Content-Type-Options': 'nosniff', 'X-Kubernetes-Pf-Flowschema-Uid': 'xxxx-bf2f-43b3-af5d-3ce15b086080', 'X-Kubernetes-Pf-Prioritylevel-Uid': 'xxxx-0d42-4b08-820e-396249f74107', 'Date': 'Mon, 28 Dec 2020 13:33:00 GMT', 'Content-Length': '408'}) HTTP response body: {&quot;kind&quot;:&quot;Status&quot;,&quot;apiVersion&quot;:&quot;v1&quot;,&quot;metadata&quot;:{},&quot;status&quot;:&quot;Failure&quot;,&quot;message&quot;:&quot;deployments.apps \&quot;live-stream-deploy-testing123\&quot; is forbidden: User \&quot;system:serviceaccount:live:live-serviceaccount\&quot; cannot delete resource \&quot;deployments\&quot; in API group \&quot;apps\&quot; in the namespace \&quot;live\&quot;&quot;,&quot;reason&quot;:&quot;Forbidden&quot;,&quot;details&quot;:{&quot;name&quot;:&quot;live-stream-deploy-testing123&quot;,&quot;group&quot;:&quot;apps&quot;,&quot;kind&quot;:&quot;deployments&quot;},&quot;code&quot;:403} </code></pre> <p>Here is the deployment i'm trying to delete:</p> <pre><code>$ kubectl -n live get deploy NAME READY UP-TO-DATE AVAILABLE AGE live-stream-deploy-testing123 1/1 1 1 76m </code></pre> <p>Here's where i'm currently at with my role definition ( I have tried with asterisks in Resource Names as well this did not work for create, many iterations later:):</p> <pre><code>$ kubectl -n live describe role Name: live-serviceaccount-role Labels: &lt;none&gt; Annotations: &lt;none&gt; PolicyRule: Resources Non-Resource URLs Resource Names Verbs --------- ----------------- -------------- ----- deployment/ [] [] [get watch create list delete] deployment [] [] [get watch create list delete] deployments/ [] [] [get watch create list delete] deployments [] [] [get watch create list delete] service/ [] [] [get watch create list delete] service [] [] [get watch create list delete] services/ [] [] [get watch create list delete] services [] [] [get watch create list delete] services [] [] [get watch create list delete] deployment.apps/ [] [] [get watch create list delete] deployment.apps [] [] [get watch create list delete] deployments.apps/ [] [] [get watch create list delete] deployments.apps [] [] [get watch create list delete] service.apps/ [] [] [get watch create list delete] service.apps [] [] [get watch create list delete] services.apps/ [] [] [get watch create list delete] services.apps [] [] [get watch create list delete] </code></pre> <p>Role binding:</p> <pre><code>$ kubectl -n live describe rolebindings Name: live-serviceaccount-rolebinding Labels: &lt;none&gt; Annotations: &lt;none&gt; Role: Kind: Role Name: live-serviceaccount-role Subjects: Kind Name Namespace ---- ---- --------- ServiceAccount live-serviceaccount live </code></pre> <p>pip package kubernetes==12.0.1. I have not tested it against a previous version. 
Script that results in error:</p> <pre><code>import kubernetes.client from kubernetes.client.rest import ApiException import logging logger = logging.getLogger(__name__) DEPLOYMENT_NAME = &quot;live-stream-deploy-testing123&quot; configuration = kubernetes.client.Configuration() configuration.api_key['authorization'] = &quot;service account token here...&quot; configuration.api_key_prefix['authorization'] = 'Bearer' configuration.verify_ssl = False configuration.host = &quot;https://my_k8s_API:6443&quot; with kubernetes.client.ApiClient(configuration) as api_client: try: api_instance = kubernetes.client.AppsV1Api(api_client) api_instance.delete_namespaced_deployment( name=DEPLOYMENT_NAME, namespace=&quot;live&quot;) logger.info(&quot;Deployment deleted. %s&quot; % DEPLOYMENT_NAME) except ApiException as e: logger.error(&quot;Exception %s\n&quot; % e) </code></pre> <p>Some additional &quot;can-i&quot; action:</p> <pre><code>$ kubectl -n live auth can-i --as=system:serviceaccount:live:live-serviceaccount delete deploy yes $ kubectl auth can-i --as=system:serviceaccount:live:live-serviceaccount --list -n live Resources Non-Resource URLs Resource Names Verbs selfsubjectaccessreviews.authorization.k8s.io [] [] [create] selfsubjectrulesreviews.authorization.k8s.io [] [] [create] deployment/ [] [] [get watch create list delete] deployment [] [] [get watch create list delete] deployments/ [] [] [get watch create list delete] deployments [] [] [get watch create list delete] service/ [] [] [get watch create list delete] service [] [] [get watch create list delete] services/ [] [] [get watch create list delete] services [] [] [get watch create list delete] deployment.apps/ [] [] [get watch create list delete] deployment.apps [] [] [get watch create list delete] deployments.apps/ [] [] [get watch create list delete] deployments.apps [] [] [get watch create list delete] service.apps/ [] [] [get watch create list delete] service.apps [] [] [get watch create list delete] services.apps/ [] [] [get watch create list delete] services.apps [] [] [get watch create list delete] [/.well-known/openid-configuration] [] [get] [/api/*] [] [get] [/api] [] [get] [/apis/*] [] [get] [/apis] [] [get] [/healthz] [] [get] [/healthz] [] [get] [/livez] [] [get] [/livez] [] [get] [/openapi/*] [] [get] [/openapi] [] [get] [/openid/v1/jwks] [] [get] [/readyz] [] [get] [/readyz] [] [get] [/version/] [] [get] [/version/] [] [get] [/version] [] [get] [/version] [] [get] </code></pre> <p>Here are the YAMLs that produced the above outputs. They are all on the live namespace. 
Forgive the mess on the roles, but I've been trying to iterate through to a solution:</p> <pre><code>$ cat role.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: live-serviceaccount-role namespace: live rules: - apiGroups: [&quot;&quot;,&quot;apps&quot;] resources: [&quot;services&quot;, &quot;services/&quot;, &quot;deployment&quot;, &quot;deployment/&quot;, &quot;service&quot;, &quot;service/&quot;, &quot;deployments&quot;, &quot;deployments/&quot;] resourceNames: [&quot;&quot;] verbs: [&quot;get&quot;, &quot;watch&quot;, &quot;create&quot;, &quot;list&quot;, &quot;delete&quot;] - apiGroups: [&quot;&quot;] resources: [&quot;services&quot;] resourceNames: [&quot;&quot;] verbs: [&quot;get&quot;, &quot;watch&quot;, &quot;create&quot;, &quot;list&quot;, &quot;delete&quot;] $ cat serviceaccount.yaml apiVersion: v1 kind: ServiceAccount metadata: name: live-serviceaccount namespace: live $ cat rolebinding.yaml apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: live-serviceaccount-rolebinding namespace: live subjects: - kind: ServiceAccount name: live-serviceaccount # &quot;name&quot; is case sensitive namespace: live roleRef: kind: Role #this must be Role or ClusterRole name: live-serviceaccount-role # this must match the name of the Role or ClusterRole you wish to bind to apiGroup: &quot;rbac.authorization.k8s.io&quot; </code></pre> <p>Here is a sample of the script that successfully lights up the deployment in the live namespace:</p> <pre><code>import kubernetes.client from kubernetes import client import logging logger = logging.getLogger(__name__) DEPLOYMENT_NAME = &quot;live-stream-deploy-testing123&quot; POD_APP_LABEL = &quot;lab&quot; configuration = kubernetes.client.Configuration() # Configure API key authorization: BearerToken configuration.api_key['authorization'] = &quot;service account token here...&quot; # Uncomment below to setup prefix (e.g. 
Bearer) for API key, if needed configuration.api_key_prefix['authorization'] = 'Bearer' configuration.verify_ssl = False # Defining host is optional and default to http://localhost configuration.host = &quot;https://&lt;myk8sapi&gt;:6443&quot; # Enter a context with an instance of the API kubernetes.client #with kubernetes.client.ApiClient(configuration) as api_client: with kubernetes.client.ApiClient(configuration) as api_client: api_instance = kubernetes.client.AppsV1Api(api_client) def create_deployment_object(): # Configureate Pod template container container = kubernetes.client.V1Container( name=&quot;lab-container&quot;, # redacted for privacy image=&quot;nginx:latest&quot;, termination_message_path=&quot;/var/log/messages&quot;, image_pull_policy=&quot;IfNotPresent&quot;, resources=client.V1ResourceRequirements( requests={&quot;memory&quot;: &quot;5Gi&quot;}, limits={&quot;memory&quot;: &quot;16Gi&quot;} ), volume_mounts=[ client.V1VolumeMount(mount_path=&quot;/etc/nginx/nginx.conf.template&quot;, name=&quot;nginxconftemplate&quot;, sub_path=&quot;nginx.conf&quot;) ] ) volume1 = client.V1Volume( name=&quot;nginxconftemplate&quot;, config_map=client.V1ConfigMapVolumeSource( name=&quot;live-nginx-conf&quot;, default_mode=0o0777 ) ) # Create and configurate a spec section template = client.V1PodTemplateSpec( metadata=client.V1ObjectMeta(labels={&quot;app&quot;: POD_APP_LABEL}), spec=client.V1PodSpec(containers=[container], volumes=[volume1], node_selector={&quot;backend&quot;: &quot;yes&quot;}, ) ) # Create the specification of deployment spec = client.V1DeploymentSpec( replicas=1, template=template, selector={'matchLabels': {'app': POD_APP_LABEL}}) # Instantiate the deployment object deployment = client.V1Deployment( api_version=&quot;apps/v1&quot;, kind=&quot;Deployment&quot;, metadata=client.V1ObjectMeta(name=DEPLOYMENT_NAME), spec=spec) return deployment def create_deployment(api_instance, deployment): # Create deployement api_response = api_instance.create_namespaced_deployment( body=deployment, namespace=&quot;live&quot;) logger.info(&quot;Deployment created. status='%s'&quot; % str(api_response.status)) deployment = create_deployment_object() create_deployment(api_instance, deployment) </code></pre>
<p>While your answer covers the solution it does not explain why the new set of roles that you provided works successfully in contrast to the previous ones.</p> <p>The <code>can-i</code> command was a bit misleading in this case. Its result was in fact correct, since with this role you were able to delete, but only the resources explicitly named in the role. Since you used <code>resourceNames</code> with an empty list <code>[&quot;&quot;]</code>, your role did not allow you to delete anything.</p> <p>To simply test this I placed the exact deployment name in the <code>resourceNames</code>:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: live-serviceaccount-role namespace: live rules: - apiGroups: [&quot;&quot;,&quot;apps&quot;] resources: [&quot;services&quot;, &quot;services/&quot;, &quot;deployment&quot;, &quot;deployment/&quot;, &quot;service&quot;, &quot;service/&quot;, &quot;deployments&quot;, &quot;deployments/&quot;] resourceNames: [&quot;live-stream-deploy-testing123&quot;] verbs: [&quot;get&quot;, &quot;watch&quot;, &quot;create&quot;, &quot;list&quot;, &quot;delete&quot;] </code></pre> <p>Once I did that I launched your code and the deployment was successfully deleted. So to sum up, you should either fill <code>resourceNames</code> with the names of the resources or omit it completely.</p>
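<p>For example, a minimal sketch of the same role with <code>resourceNames</code> omitted entirely, which allows those verbs on any deployment or service in the <code>live</code> namespace:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: live-serviceaccount-role
  namespace: live
rules:
# No resourceNames: the rule applies to every object of these kinds in the namespace
- apiGroups: [&quot;&quot;, &quot;apps&quot;]
  resources: [&quot;deployments&quot;, &quot;services&quot;]
  verbs: [&quot;get&quot;, &quot;watch&quot;, &quot;create&quot;, &quot;list&quot;, &quot;delete&quot;]
</code></pre>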
<p>I'm trying to upgrade my gke cluster from this command:</p> <pre><code>gcloud container clusters upgrade CLUSTER_NAME --cluster-version=1.15.11-gke.3 \ --node-pool=default-pool --zone=ZONE </code></pre> <p>I get the following output:</p> <pre><code>Upgrading test-upgrade-172615287... Done with 0 out of 5 nodes (0.0%): 2 being processed...done. Timed out waiting for operation &lt;Operation clusterConditions: [] detail: u'Done with 0 out of 5 nodes (0.0%): 2 being processed' name: u'operation-NUM-TAG' nodepoolConditions: [] operationType: OperationTypeValueValuesEnum(UPGRADE_NODES, 4) progress: &lt;OperationProgress metrics: [&lt;Metric intValue: 5 name: u'NODES_TOTAL'&gt;, &lt;Metric intValue: 0 name: u'NODES_FAILED'&gt;, &lt;Metric intValue: 0 name: u'NODES_COMPLETE'&gt;, &lt;Metric intValue: 0 name: u'NODES_DONE'&gt;] stages: []&gt; … status: StatusValueValuesEnum(RUNNING, 2) …&gt; ERROR: (gcloud.container.clusters.upgrade) Operation [DATA_SAME_AS_IN_TIMEOUT] is still running </code></pre> <p>I just discovered <code>gcloud config set builds/timeout 3600</code> so I hope this doesn't happen again, like in my CI. But if it does, is there a gcloud command that lets me know that the upgrade is still in progress? These two didn't provide that:</p> <pre><code>gcloud container clusters describe CLUSTER_NAME --zone=ZONE gcloud container node-pools describe default-pool --cluster=CLUSTER_NAME --zone=ZONE </code></pre> <p><strong>Note:</strong> Doing this upgrade in the console took <strong>2 hours</strong> so I'm not surprised the command-line attempt timed out. This is for a CI, so I'm fine looping and sleeping for 4 hours or so before giving up. But what's the command that will let me know when the cluster is being upgraded, and when it either finishes or fails? The UI is showing the cluster is still undergoing the upgrade, so I assume there is some command.</p> <p>TIA as usual</p>
<p>Bumped into the same issue.</p> <p>All <code>gcloud</code> commands, including <code>gcloud container operations wait OPERATION_ID</code>(<a href="https://cloud.google.com/sdk/gcloud/reference/container/operations/wait" rel="nofollow noreferrer">https://cloud.google.com/sdk/gcloud/reference/container/operations/wait</a>), have the same 1-hour timeout.</p> <p>At this point, there is no other way to wait for the upgrade to complete than to query <code>gcloud container operations list</code> and check the STATUS in a loop.</p>
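<p>For example, a rough sketch of such a loop using <code>gcloud container operations describe</code> (the operation name and zone are placeholders - take them from the error output of the original upgrade command):</p> <pre><code># Placeholders - copy the real values from the gcloud output
OPERATION=operation-NUM-TAG
ZONE=my-zone

# Poll until the upgrade operation is no longer RUNNING
while [ &quot;$(gcloud container operations describe $OPERATION --zone=$ZONE --format='value(status)')&quot; = RUNNING ]; do
  echo 'Upgrade still in progress...'
  sleep 60
done

# Print the final state of the operation
gcloud container operations describe $OPERATION --zone=$ZONE
</code></pre>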
<p>I have a minikube Kubernetes set up with two pods, each having one container. One for my Vue frontend and one for my backend API. I've also got two services attached to the pods.</p> <p>My understanding is because the frontend and backend IP addresses change when the Pod is restarted or moved to a different node, we shouldn't use the IP Addresses to link them but a Service instead.</p> <p>So in my case, my frontend would call my backend through the Service (which can also be used as the hostname) e.g. Service is called <code>myapi-service</code>, use <code>http://myapi-service</code></p> <p>My problem is after I launch my front end, any request it sends using the above hostname doesn't work, it's not able to connect to my backend.</p> <p><strong>app-deployment.yaml</strong></p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: myapi-deployment labels: app: myrapi spec: replicas: 1 selector: matchLabels: app: myapi template: metadata: labels: app: myapi spec: containers: - name: myapi image: myapi imagePullPolicy: Never ports: - containerPort: 80 env: - name: TZ value: America/Toronto - name: ASPNETCORE_ENVIRONMENT value: Development_Docker --- apiVersion: apps/v1 kind: Deployment metadata: name: myui-deployment labels: app: myui spec: replicas: 1 selector: matchLabels: app: myui template: metadata: labels: app: myui spec: containers: - name: myui image: myui imagePullPolicy: Never ports: - containerPort: 8080 env: - name: NODE_ENV value: Development </code></pre> <p><strong>app-service.yaml</strong></p> <pre><code>apiVersion: v1 kind: Service metadata: name: myapi-service labels: run: myapi-service spec: ports: - port: 80 protocol: TCP selector: app: myapi type: NodePort --- apiVersion: v1 kind: Service metadata: name: myui-service labels: run: myui-service spec: ports: - port: 8080 protocol: TCP selector: app: myui type: NodePort </code></pre> <p><a href="https://i.stack.imgur.com/GZYFA.png" rel="nofollow noreferrer">Kubernetes Service</a></p> <p>Am I missing a piece here/doing something wrong? Thanks so much.</p> <p><strong>UPDATE</strong>: If I go into my frontend container</p> <p><code>curl myapi-service/swagger/index.html</code></p> <p>It's able to pull up the API's swagger page</p> <p><strong>UPDATE 2, SOLUTION:</strong></p> <p>I refactored my <code>Dockerfile</code> to use NGINX to serve my front end Vue app</p> <p><strong>Dockerfile</strong></p> <pre><code>FROM node:14 as builder # make the 'app' folder the current working directory WORKDIR /app # copy both 'package.json' and 'package-lock.json' (if available) COPY package*.json ./ # install project dependencies RUN npm install # copy project files and folders to the current working directory (i.e. 'app' folder) COPY . . # build app RUN npm run build FROM nginx:alpine COPY ./.nginx/nginx.conf /etc/nginx/nginx.conf ## Remove default nginx index pagec RUN rm -rf /usr/share/nginx/html/* # Copy from the stage 1 COPY --from=builder /app/dist /usr/share/nginx/html EXPOSE 80 ENTRYPOINT [&quot;nginx&quot;, &quot;-g&quot;, &quot;daemon off;&quot;] </code></pre> <p>and created a folder called <code>.nginx</code> in my front end's root folder with the <code>nginx.conf</code> file inside it. <strong>nginx.conf</strong></p> <pre><code>worker_processes 4; events { worker_connections 1024; } http { server { listen 80; root /usr/share/nginx/html; include /etc/nginx/mime.types; location /appui { try_files $uri /index.html; } location /api/ { proxy_pass http://myapi-service; } } } </code></pre> <p>No Ingress controller required. 
The front end was able to talk to the backend as explained in Mikolaj's answer.</p> <p>Hope someone out there can find this useful!</p>
<p>You cannot reach your backend pod from your frontend pod using Kubernetes DNS names like http://myapi-service because your frontend is running in the browser - outside your cluster. The browser doesn't understand the Kubernetes DNS and therefore cannot resolve your http://myapi-service url.</p> <p>If you want the frontend to communicate with your backend using <code>K8S DNS</code> you need to use a web server like <code>nginx</code>. The web server that hosts your frontend app actually runs on the Kubernetes cluster, so it understands the <code>K8S DNS</code>.</p> <p>In your frontend code you need to change the api calls. Instead of calling the api directly you need to first call your web server.</p> <p>For example: replace <strong>http://api-service/api/getsomething</strong> with <strong>/api/getsomething</strong></p> <p><strong>/api/getsomething</strong> - this will tell the browser to send the request to the same server that served your frontend app (<code>nginx</code> in this case)</p> <p>Then via the <code>nginx</code> server the call can be forwarded to your api using the <code>K8S DNS</code>. (This is called a <strong>reverse proxy</strong>)</p> <p>To forward your requests to the api, add some code to the nginx config file:</p> <pre><code>location /api/ { proxy_pass http://api-service.default:port; } </code></pre> <p>*api-service - your k8s service name</p> <p>*default - name of the k8s api-service namespace</p> <p>*port - api-service port</p> <p>From now on, all your frontend requests that contain the /api/.. phrase will be forwarded to your api-service/api/..</p> <p>/api/getsomething -&gt; http://api-service/api/getsomething</p>
<p>When attempting to install the snap microk8s 1.19/stable on a Linux machine we got any of the following errors:</p> <pre><code>error: cannot perform the following tasks: - Run configure hook of &quot;microk8s&quot; snap if present (run hook &quot;configure&quot;: </code></pre> <p>or</p> <pre><code> - Mount snap &quot;microk8s&quot; (1769) ([stop snap-microk8s-1769.mount] failed with exit status -1: *** stack smashing detected ***: terminated </code></pre> <p>or</p> <pre><code>+ /snap/microk8s/1769/kubectl --kubeconfig=/var/snap/microk8s/1769/credentials/client.config apply -f /var/snap/microk8s/1769/args/cni-network/cni.yaml The connection to the server 127.0.0.1:16443 was refused - did you specify the right host or port? </code></pre> <p>or</p> <pre><code>- Fetch and check assertions for snap &quot;microk8s&quot; (1769) (cannot verify snap &quot;microk8s&quot;, no matching signatures found) </code></pre> <p>We had microk8s previously installed but had removed it some time ago, just mentioning it in case this could help. I doubt it is something to do with previous remnants, we also did temporarily disable the firewall just to be sure it is not a firewall issue.</p> <p><strong>EDIT</strong>: this is now a long-gone issue and I forgot to post an update since it happened. The core issue seemed to have happened because the OS had a broken third-party software/application - totally unrelated - that was generating a colossal amount of logs and we were temporarily running out of space - between the logs were sorted out - where there was also a task cleaning the logs. If anyone gets such an issue, you might consider also checking if you have stable enough space for the installation to succeed during the installation. The upgrade route in the accepted answer was quick enough to apply before we randomly ran out of space. Once we fixed the unrelated issue with the other software/application - it was possible to install microk8s 1.19/stable directly without going through the upgrade route.</p>
<p>Seems like it might be an issue with the latest branch (v1.19) or in some way related to your OS. I have deployed this on Ubuntu (20.04) and v1.19 worked fine for me.</p> <p>Installing version 1.18 from the stable snap channel solved the issue:</p> <pre><code>sudo snap install microk8s --classic --channel=1.18/stable </code></pre> <p>You may also want to try to update microk8s:</p> <pre><code>sudo snap refresh microk8s --classic --channel=1.19/stable </code></pre> <p><a href="https://microk8s.io/docs/setting-snap-channel" rel="noreferrer">Here</a> is more information on how to check/use microk8s channels.</p>
<p>How can i upgrade the Kubernetes node plane to the latest version? I tried to upgrade it using Kubeadm, but the developers assumed that all the people are going to be using Linux.</p> <pre><code>PS C:\Users\Taha.jo&gt; kubectl get nodes NAME STATUS ROLES AGE VERSION docker-desktop Ready control-plane 39h v1.25.9 </code></pre> <pre><code>PS C:\Users\Taha.jo\Desktop&gt; .\kubeadm.exe upgrade plan couldn't create a Kubernetes client from file &quot;\\etc\\kubernetes\\admin.conf&quot;: failed to load admin kubeconfig: open \etc\kubernetes\admin.conf: The system cannot find the path specified. To see the stack trace of this error execute with --v=5 or higher </code></pre>
<p>Upgrading the Kubernetes node plane to the latest version can be done using kubeadm, even if you are not using Linux as your primary operating system. Although the kubeadm tool is primarily designed for Linux-based environments, you can still perform the upgrade using a Linux virtual machine or a container.</p> <p>Here's a general outline of the steps involved in upgrading the Kubernetes node plane using kubeadm:</p> <p>1 - Prepare a Linux environment: Set up a Linux virtual machine (VM) or container on your non-Linux operating system. This will serve as your Linux-based environment for running the kubeadm commands.</p> <p>2 - Install the necessary components: In your Linux environment, install Docker and kubeadm, as these are the key components required for the upgrade process. Refer to the official Kubernetes documentation for the specific installation steps for your Linux distribution.</p> <p>3 - Drain and cordon the node: On your Kubernetes cluster, mark the node you want to upgrade as &quot;unschedulable&quot; and evict any running pods to other nodes. You can use the following command on your non-Linux operating system to access your cluster remotely:</p> <pre class="lang-bash prettyprint-override"><code>kubectl drain &lt;node-name&gt; --ignore-daemonsets </code></pre> <p>4 - Perform the upgrade: In your Linux environment, use kubeadm to upgrade the node. Connect to your Kubernetes cluster using the following command:</p> <pre class="lang-bash prettyprint-override"><code>kubeadm upgrade node </code></pre> <p>This command will fetch the necessary upgrade scripts and perform the upgrade on the node.</p> <p>5 - Uncordon the node: After the upgrade is complete, mark the node as &quot;schedulable&quot; again, allowing new pods to be scheduled on it:</p> <pre class="lang-bash prettyprint-override"><code>kubectl uncordon &lt;node-name&gt; </code></pre> <p>6 - Verify the upgrade: Run the following command to ensure that the node has been successfully upgraded:</p> <pre class="lang-bash prettyprint-override"><code>kubectl get nodes </code></pre> <p>Check that the node's status and version are updated to the latest version.</p> <p>7 - Repeat for other nodes: If you have multiple nodes in your cluster, repeat steps 3-6 for each node until all nodes in the cluster are upgraded.</p> <p>Remember to always take proper backups and follow best practices when upgrading your Kubernetes cluster to avoid any potential issues. Additionally, it's recommended to consult the official Kubernetes documentation and release notes for the specific version you are upgrading to, as there may be version-specific considerations or additional steps required.</p> <p>Note: If you encounter any compatibility issues or limitations with running Linux-based tools on your non-Linux operating system, you might consider using a Linux-based virtual machine or a container to perform the upgrade process.</p>
<p><a href="https://microk8s.io/docs/registry-private" rel="nofollow noreferrer">microk8s document &quot;Working with a private registry&quot;</a> leaves me unsure what to do. The <strong>Secure registry</strong> portion says <strong>Kubernetes</strong> does it one way (no indicating whether or not Kubernetes' way applies to microk8), and <strong>microk8s</strong> uses <strong>containerd</strong> inside its implementation.</p> <p>My YAML file contains a reference to a private container on dockerhub.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: blaw spec: replicas: 1 selector: matchLabels: app: blaw strategy: type: Recreate template: metadata: labels: app: blaw spec: containers: - image: johngrabner/py_blaw_service:v0.3.10 name: py-transcribe-service </code></pre> <p>When I <strong>microk8s kubectl apply</strong> this file and do a <strong>microk8s kubectl describe</strong>, I get:</p> <pre><code>Warning Failed 16m (x4 over 18m) kubelet Failed to pull image &quot;johngrabner/py_blaw_service:v0.3.10&quot;: rpc error: code = Unknown desc = failed to pull and unpack image &quot;docker.io/johngrabner/py_blaw_service:v0.3.10&quot;: failed to resolve reference &quot;docker.io/johngrabner/py_blaw_service:v0.3.10&quot;: pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed </code></pre> <p>I have verified that I can download this repo from a console doing a docker pull command.</p> <p>Pods using public containers work fine in microk8s.</p> <p>The file <strong>/var/snap/microk8s/current/args/containerd-template.toml</strong> already contains something to make dockerhub work since public containers work. Within this file, I found</p> <pre><code> # 'plugins.&quot;io.containerd.grpc.v1.cri&quot;.registry' contains config related to the registry [plugins.&quot;io.containerd.grpc.v1.cri&quot;.registry] # 'plugins.&quot;io.containerd.grpc.v1.cri&quot;.registry.mirrors' are namespace to mirror mapping for all namespaces. [plugins.&quot;io.containerd.grpc.v1.cri&quot;.registry.mirrors] [plugins.&quot;io.containerd.grpc.v1.cri&quot;.registry.mirrors.&quot;docker.io&quot;] endpoint = [&quot;https://registry-1.docker.io&quot;, ] [plugins.&quot;io.containerd.grpc.v1.cri&quot;.registry.mirrors.&quot;localhost:32000&quot;] endpoint = [&quot;http://localhost:32000&quot;] </code></pre> <p>The above does not appear related to authentication.</p> <p>On the internet, I found instructions to create a secret to store credentials, but this does not work either.</p> <pre><code>microk8s kubectl create secret generic regcred --from-file=.dockerconfigjson=/home/john/.docker/config.json --type=kubernetes.io/dockerconfigjson </code></pre>
<p>While you have created the secret, you then have to set up your deployment/pod to use that secret in order to download the image. This can be achieved with <code>imagePullSecrets</code> as described in the microk8s document you mentioned.</p> <p>Since you already created your secret you just have to reference it in your deployment:</p> <pre class="lang-yaml prettyprint-override"><code>... spec: containers: - image: johngrabner/py_blaw_service:v0.3.10 name: py-transcribe-service imagePullSecrets: - name: regcred ... </code></pre> <p>For more reading check how to <a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-pod-that-uses-your-secret" rel="nofollow noreferrer">Pull an Image from a Private Registry</a>.</p>
<p>So i have this project that i already deployed in GKE and i am trying to make the CI/CD from github action. So i added the workflow file which contains</p> <pre><code>name: Build and Deploy to GKE on: push: branches: - main env: PROJECT_ID: ${{ secrets.GKE_PROJECT }} GKE_CLUSTER: ${{ secrets.GKE_CLUSTER }} # Add your cluster name here. GKE_ZONE: ${{ secrets.GKE_ZONE }} # Add your cluster zone here. DEPLOYMENT_NAME: ems-app # Add your deployment name here. IMAGE: ciputra-ems-backend jobs: setup-build-publish-deploy: name: Setup, Build, Publish, and Deploy runs-on: ubuntu-latest environment: production steps: - name: Checkout uses: actions/checkout@v2 # Setup gcloud CLI - uses: google-github-actions/setup-gcloud@94337306dda8180d967a56932ceb4ddcf01edae7 with: service_account_key: ${{ secrets.GKE_SA_KEY }} project_id: ${{ secrets.GKE_PROJECT }} # Configure Docker to use the gcloud command-line tool as a credential # helper for authentication - run: |- gcloud --quiet auth configure-docker # Get the GKE credentials so we can deploy to the cluster - uses: google-github-actions/get-gke-credentials@fb08709ba27618c31c09e014e1d8364b02e5042e with: cluster_name: ${{ env.GKE_CLUSTER }} location: ${{ env.GKE_ZONE }} credentials: ${{ secrets.GKE_SA_KEY }} # Build the Docker image - name: Build run: |- docker build \ --tag &quot;gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA&quot; \ --build-arg GITHUB_SHA=&quot;$GITHUB_SHA&quot; \ --build-arg GITHUB_REF=&quot;$GITHUB_REF&quot; \ . # Push the Docker image to Google Container Registry - name: Publish run: |- docker push &quot;gcr.io/$PROJECT_ID/$IMAGE:$GITHUB_SHA&quot; # Set up kustomize - name: Set up Kustomize run: |- curl -sfLo kustomize https://github.com/kubernetes-sigs/kustomize/releases/download/v3.1.0/kustomize_3.1.0_linux_amd64 chmod u+x ./kustomize # Deploy the Docker image to the GKE cluster - name: Deploy run: |- ./kustomize edit set image LOCATION-docker.pkg.dev/PROJECT_ID/REPOSITORY/IMAGE:TAG=$GAR_LOCATION-docker.pkg.dev/$PROJECT_ID/$REPOSITORY/$IMAGE:$GITHUB_SHA ./kustomize build . 
| kubectl apply -k ./ kubectl rollout status deployment/$DEPLOYMENT_NAME kubectl get services -o wide </code></pre> <p>but when the workflow gets to the deploy part, it shows an error</p> <pre><code>The Service &quot;ems-app-service&quot; is invalid: metadata.resourceVersion: Invalid value: &quot;&quot;: must be specified for an update </code></pre> <p>Now i have searched that this is actually not true because the resourceVersion is supposed to change for every update so i just removed it</p> <p>Here is my kustomization.yaml</p> <pre><code>apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: - service.yaml - deployment.yaml </code></pre> <p>my deployment.yaml</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: &quot;1&quot; generation: 1 labels: app: ems-app name: ems-app namespace: default spec: progressDeadlineSeconds: 600 replicas: 3 revisionHistoryLimit: 10 selector: matchLabels: app: ems-app strategy: rollingUpdate: maxSurge: 25% maxUnavailable: 25% type: RollingUpdate template: metadata: labels: app: ems-app spec: containers: - image: gcr.io/ciputra-nusantara/ems@sha256:70c34c5122039cb7fa877fa440fc4f98b4f037e06c2e0b4be549c4c992bcc86c imagePullPolicy: IfNotPresent name: ems-sha256-1 resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst restartPolicy: Always schedulerName: default-scheduler securityContext: {} terminationGracePeriodSeconds: 30 </code></pre> <p>and my service.yaml</p> <pre><code>apiVersion: v1 kind: Service metadata: annotations: cloud.google.com/neg: '{&quot;ingress&quot;:true}' finalizers: - service.kubernetes.io/load-balancer-cleanup labels: app: ems-app name: ems-app-service namespace: default spec: clusterIP: 10.88.10.114 clusterIPs: - 10.88.10.114 externalTrafficPolicy: Cluster ipFamilies: - IPv4 ipFamilyPolicy: SingleStack ports: - nodePort: 30261 port: 80 protocol: TCP targetPort: 80 selector: app: ems-app sessionAffinity: None type: LoadBalancer status: loadBalancer: ingress: - ip: 34.143.255.159 </code></pre>
<p>As the title of this question is more Kubernetes related than GCP related, I will answer since I had this same problem using AWS EKS.</p> <p><code>How to fix metadata.resourceVersion: Invalid value: 0x0: must be specified for an update</code> is an error that may appear when using <code>kubectl apply</code>.</p> <p><code>Kubectl apply</code> makes a <a href="https://kubernetes.io/docs/tasks/manage-kubernetes-objects/declarative-config/" rel="noreferrer">three-way-merge</a> between your local file, the live kubernetes object manifest and the annotation <code>kubectl.kubernetes.io/last-applied-configuration</code> in that live object manifest.</p> <p>So, for some reason, the value <code>resourceVersion</code> managed to be written in your <code>last-applied-configuration</code>, probably because someone exported the live manifests to a file, modified it, and applied it back again.</p> <p>When you try to apply your new local file that doesn't have that value -and should not have it-, but the value is present in the <code>last-applied-configuration</code>, kubectl thinks it should be removed from the live manifest and specifically sends it in the subsequent <code>patch</code> operation as <code>resourceVersion: null</code>, which should get rid of it. But it won't work and the local file breaks the rules (out of my knowledge as of now) and becomes invalid.</p> <p>As <a href="https://feichashao.com/kubectl-apply-fail/" rel="noreferrer">feichashao</a> mentions, the way to solve it is to delete the <code>last-applied-configuration</code> annotation and apply your local file again.</p> <p>Once you have done that, your <code>kubectl apply</code> output will be like:</p> <pre><code>Warning: resource &lt;your_resource&gt; is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically. </code></pre> <p>And your live manifests will be updated.</p>
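<p>In practice, removing the annotation can look like this (a sketch - the service name and namespace are taken from the manifests in the question; the trailing dash tells <code>kubectl annotate</code> to delete the key):</p> <pre><code># Inspect the current annotation (it will contain the stale resourceVersion)
kubectl get service ems-app-service -n default \
  -o jsonpath='{.metadata.annotations.kubectl\.kubernetes\.io/last-applied-configuration}'

# Remove the annotation - the trailing '-' deletes it
kubectl annotate service ems-app-service -n default \
  kubectl.kubernetes.io/last-applied-configuration-

# Re-apply the local manifests
kubectl apply -k ./
</code></pre>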
<p>How can I get a particular Kubernetes annotation from a deployment resource using <code>kubectl</code>? I know I can dynamically set an annotation on a deployment using:</p> <pre><code>kubectl annotate deployment api imageTag=dev-ac5ba48.k1dp9 </code></pre> <p>Is there a single <code>kubectl</code> command to then read this deployments <code>imageTag</code> annotation?</p>
<p>You can use the following command to get the <code>imageTag</code> annotation (given that annotation exists):</p> <pre><code>kubectl get deploy DEPLOY_NAME -o jsonpath='{.metadata.annotations.imageTag}' </code></pre>
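<p>One hedged caveat: if the annotation key contains dots or slashes (as most built-in annotations do), the dots have to be escaped in the jsonpath expression. For example, for a hypothetical key <code>example.com/imageTag</code>:</p> <pre><code># Dots inside the annotation key are escaped with '\.'
kubectl get deploy DEPLOY_NAME -o jsonpath='{.metadata.annotations.example\.com/imageTag}'
</code></pre>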
<p>I am running a local deployment and trying to redirect HTTPS traffic to my backend pods. I don't want SSL termination at the Ingress level, which is why I didn't use any tls secrets.</p> <p>I am creating a self signed cert within the container, and Tomcat starts up by picking that and exposing on 8443.</p> <p>Here is my Ingress Spec</p> <pre><code>apiVersion: networking.k8s.io/v1beta1 kind: Ingress metadata: name: ingress-name annotations: kubernetes.io/ingress.class: &quot;nginx&quot; nginx.ingress.kubernetes.io/ssl-passthrough: &quot;true&quot; nginx.ingress.kubernetes.io/backend-protocol: &quot;HTTPS&quot; nginx.ingress.kubernetes.io/force-ssl-redirect: &quot;true&quot; #nginx.ingress.kubernetes.io/service-upstream: &quot;false&quot; kubernetes.io/ingress.class: {{ .Values.global.ingressClass }} nginx.ingress.kubernetes.io/affinity: &quot;cookie&quot; spec: rules: - http: paths: - path: /myserver backend: serviceName: myserver servicePort: 8443 </code></pre> <p>I used the above annotation in different combinations but I still can't reach my pod.</p> <p>My service routes</p> <pre><code># service information for myserver service: type: ClusterIP port: 8443 targetPort: 8443 protocol: TCP </code></pre> <p>I did see a few answers regarding this suggesting annotations, but that didn't seem to work for me. Thanks in advance!</p> <p>edit: The only thing that remotely worked was when I overwrote the ingress values as</p> <pre><code>nginx-ingress: controller: publishService: enabled: true service: type: NodePort nodePorts: https: &quot;40000&quot; </code></pre> <p>This does enable https, but it picks up kubernetes' fake certs, rather than my cert from the container</p> <p>Edit 2: For some reason, the ssl-passthrough is not working. I enforced it as</p> <pre><code>nginx-ingress: controller: extraArgs: enable-ssl-passthrough: &quot;&quot; </code></pre> <p>when I describe the deployment, I can see it in the args but when I check with <code>kubectl ingress-nginx backends</code> as described in <a href="https://kubernetes.github.io/ingress-nginx/kubectl-plugin/#backends" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/kubectl-plugin/#backends</a>, it says &quot;sslPassThrough:false&quot;</p>
<p>SSL Passthrough requires a specific flag to be passed to the nginx controller at startup since it is disabled by default.</p> <blockquote> <p>SSL Passthrough is <strong>disabled by default</strong> and requires starting the controller with the <a href="https://kubernetes.github.io/ingress-nginx/user-guide/cli-arguments/" rel="nofollow noreferrer"><code>--enable-ssl-passthrough</code></a> flag.</p> </blockquote> <p>Since <code>ssl-passthrough</code> works on layer 4 of the OSI model and not on layer 7 (HTTP), using it will invalidate all the other annotations that you set on the ingress object.</p> <p>So at your deployment level you have to specify this flag under <code>args</code>:</p> <pre><code>containers: - name: controller image: us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller:v0.34.1@sha256:0e072dddd1f7f8fc8909a2ca6f65e76c5f0d2fcfb8be47935ae3457e8bbceb20 imagePullPolicy: IfNotPresent lifecycle: preStop: exec: command: - /wait-shutdown args: - /nginx-ingress-controller - --enable-ssl-passthrough </code></pre>
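<p>One additional hedged note: because ssl-passthrough relies on SNI, the Ingress rule generally needs a <code>host</code> entry - path-based routing cannot work, since nothing above layer 4 is inspected. A minimal sketch, with <code>myserver.example.com</code> as a placeholder hostname:</p> <pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-name
  annotations:
    kubernetes.io/ingress.class: &quot;nginx&quot;
    nginx.ingress.kubernetes.io/ssl-passthrough: &quot;true&quot;
spec:
  rules:
  - host: myserver.example.com   # placeholder - passthrough routes by the SNI hostname
    http:
      paths:
      - backend:
          serviceName: myserver
          servicePort: 8443
</code></pre>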
<p>I must surely be missing something obvious. GCP provides me with all sorts of visible indications when a container has failed to start. For example:</p> <p><a href="https://i.stack.imgur.com/Iylay.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iylay.png" alt="Failed deployment" /></a></p> <p><a href="https://i.stack.imgur.com/YmLDq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YmLDq.png" alt="Container Status" /></a></p> <p>But I cannot for the life of me figure out how to make it issue an alert when the container status is not OK.</p> <p>How is it done?</p>
<ul> <li> <blockquote> <p><a href="https://cloud.google.com/kubernetes-engine/docs/troubleshooting#CrashLoopBackOff" rel="nofollow noreferrer"><code>CrashLoopBackOff</code></a> indicates that a container is repeatedly crashing after restarting. A container might crash for many reasons, and checking a Pod's logs might aid in troubleshooting the root cause.</p> </blockquote> </li> </ul> <p>Apart from the error text message <code>Does not have minimum availability</code>, there could be other error text messages such as <code>Failed to pull image</code>. However, I recommend you identify the error text messages which are appropriate for your environment. You can check with <code>kubectl logs &lt;pod_name&gt;</code> or on Log Viewer.</p> <p>For your reference, here are explanations for <a href="https://cloud.google.com/kubernetes-engine/docs/troubleshooting#workload_issues" rel="nofollow noreferrer">pod issues</a>:</p> <ol> <li><strong>CrashLoopBackOff</strong> means the container was downloaded but failed to run</li> <li><strong>ImagePullBackOff</strong> means the image was not downloaded</li> <li><strong>&quot;Does not have minimum availability&quot;</strong> means that there are no resources available on the cluster, but it is not specific to a lack of resources. For instance there may be nodes available but the pod is not schedulable on them per the deployment.</li> <li><strong>&quot;Insufficient cpu&quot;</strong> means there is insufficient cpu on the nodes.</li> <li><strong>&quot;Unschedulable&quot;</strong> indicates that your Pod cannot be scheduled because of insufficient resources or some configuration error.</li> </ol> <hr /> <p>With that in mind, here is the step-by-step for creating a Log-based Metric and later creating an alert based on it (a command-line sketch of step 1 follows at the end).</p> <ol> <li><p>Set up a <a href="https://cloud.google.com/logging/docs/logs-based-metrics#getting_started" rel="nofollow noreferrer">Logs-based Metric</a> using the parameters:</p> <pre><code>resource.type=&quot;k8s_pod&quot; severity&gt;=WARNING unschedulable </code></pre> <p>You can replace the filter with something that is more appropriate for your case.</p> </li> <li><p>Create a label in the metric that will allow you to identify the pod that was <code>unschedulable</code> (or other status). This will also help with grouping when you create the alert for a failing pod.</p> </li> <li><p>In Stackdriver Monitoring, <a href="https://cloud.google.com/monitoring/alerts/using-alerting-ui" rel="nofollow noreferrer">create an alert</a> with the following parameters.</p> <ul> <li>Set the resource type to <code>k8s_pod</code></li> <li>Set the metric to the one you created in step 1</li> <li>Set <code>Group By</code> to the <code>pod_name</code> (also created in step 1)</li> <li>In the advanced aggregation section set the aligner to <code>sum</code> and the Alignment Period to <code>5m</code> (or whatever you think is more appropriate).</li> <li>Configure the condition triggers <code>For</code> to more than 1 minute to prevent the alert from firing over and over. This can also be configured per your requirement.</li> </ul> </li> </ol> <p>I hope this information is helpful. If you have any questions let me know in the comments.</p>
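<p>For reference, step 1 can also be done from the command line - a rough sketch (the metric name is just an example; the filter is the one from step 1):</p> <pre><code>gcloud logging metrics create unschedulable-pods \
  --description='Pod log entries matching the unschedulable filter' \
  --log-filter='resource.type=&quot;k8s_pod&quot; severity&gt;=WARNING unschedulable'
</code></pre>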
<p>I'm new to Helm. I'm trying to deploy a simple server on the master node. When I do helm install and see the details using the command kubectl get po,svc I see lot of pods created other than the pods I intend to deploy.So, My precise questions are:</p> <ol> <li>Why so many pods got created?</li> <li>How do I delete all those pods? Below is the output of the command kubectl get po,svc:</li> </ol> <p>NAME READY STATUS RESTARTS AGE</p> <p>pod/altered-quoll-stx-sdo-chart-6446644994-57n7k 1/1 Running 0 25m</p> <p>pod/austere-garfish-stx-sdo-chart-5b65d8ccb7-jjxfh 1/1 Running 0 25m</p> <p>pod/bald-hyena-stx-sdo-chart-9b666c998-zcfwr 1/1 Running 0 25m</p> <p>pod/cantankerous-pronghorn-stx-sdo-chart-65f5699cdc-5fkf9 1/1 Running 0 25m</p> <p>pod/crusty-unicorn-stx-sdo-chart-7bdcc67546-6d295 1/1 Running 0 25m</p> <p>pod/exiled-puffin-stx-sdo-chart-679b78ccc5-n68fg 1/1 Running 0 25m</p> <p>pod/fantastic-waterbuffalo-stx-sdo-chart-7ddd7b54df-p78h7 1/1 Running 0 25m</p> <p>pod/gangly-quail-stx-sdo-chart-75b9dd49b-rbsgq 1/1 Running 0 25m</p> <p>pod/giddy-pig-stx-sdo-chart-5d86844569-5v8nn 1/1 Running 0 25m</p> <p>pod/hazy-indri-stx-sdo-chart-65d4c96f46-zmvm2 1/1 Running 0 25m</p> <p>pod/interested-macaw-stx-sdo-chart-6bb7874bbd-k9nnf 1/1 Running 0 25m</p> <p>pod/jaundiced-orangutan-stx-sdo-chart-5699d9b44b-6fpk9 1/1 Running 0 25m</p> <p>pod/kindred-nightingale-stx-sdo-chart-5cf95c4d97-zpqln 1/1 Running 0 25m</p> <p>pod/kissing-snail-stx-sdo-chart-854d848649-54m9w 1/1 Running 0 25m</p> <p>pod/lazy-tiger-stx-sdo-chart-568fbb8d65-gr6w7 1/1 Running 0 25m</p> <p>pod/nonexistent-octopus-stx-sdo-chart-5f8f6c7ff8-9l7sm 1/1 Running 0 25m</p> <p>pod/odd-boxer-stx-sdo-chart-6f5b9679cc-5stk7 1/1 Running 1 15h</p> <p>pod/orderly-chicken-stx-sdo-chart-7889b64856-rmq7j 1/1 Running 0 25m</p> <p>pod/redis-697fb49877-x5hr6 1/1 Running 0 25m</p> <p>pod/rv.deploy-6bbffc7975-tf5z4 1/2 CrashLoopBackOff 93 30h</p> <p>pod/sartorial-eagle-stx-sdo-chart-767d786685-ct7mf 1/1 Running 0 25m</p> <p>pod/sullen-gnat-stx-sdo-chart-579fdb7df7-4z67w 1/1 Running 0 25m</p> <p>pod/undercooked-cow-stx-sdo-chart-67875cc5c6-mwvb7 1/1 Running 0 25m</p> <p>pod/wise-quoll-stx-sdo-chart-5db8c766c9-mhq8v 1/1 Running 0 21m</p>
<p>You can run the command <code>helm ls</code> to see all the deployed helm releases in your cluster.<br /> To remove the release (and every resource it created, including the pods), run: <code>helm delete RELEASE_NAME --purge</code>.</p> <p>If you want to delete all the pods in your namespace without your Helm release (I DON'T think this is what you're looking for), you can run: <code>kubectl delete pods --all</code>.</p> <p>On a side note, if you're new to Helm, consider starting with Helm v3 since it has many improvements, and especially because the migration from v2 to v3 can become cumbersome, and if you can avoid it - you should.</p>
<p>I've deployed cert-manager on cluster. It was working. Recently i noticed that when i try open service which configured through ingress complaining about expired certificate. I checked in kubernetes certificate shown as its up to date. But when i open service via browser it says that cert expired.</p> <p>I recreated issuer and certificate again and noticed that cert-manager didnt created &quot;certificaterequest&quot;. and &quot;order&quot;. So the question is how can i force cert-manager to create cert-request and order.</p>
<p>The first thing I would check is the actual expiry date of the certificate that your browser is receiving while requesting the web page. You can check this by clicking the padlock near the address in your browser. If this looks correct, check whether your personal computer's time and date are showing the current one (not a future one).</p> <p>If this is not the case then you can force issuing the certificate with the following methods:</p> <ul> <li>Set the <code>renewBefore</code> field on the <code>certificate</code> resource to <code>1440h</code> (two months before the expiry date). It should be enough to trigger the certificate reissuing.</li> <li>Delete the <code>secret</code> first and then the <code>certificate</code>. This will cause <code>cert-manager</code> to issue a new certificate (see the example below).</li> </ul> <p>Also check out <a href="https://github.com/jetstack/cert-manager/issues/2641" rel="nofollow noreferrer">this</a> cert-manager issue on github.</p>
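<p>A hedged example of the second method (the resource names and namespace below are placeholders - use the name of your own <code>Certificate</code> and the <code>secretName</code> it references):</p> <pre><code># Delete the secret that holds the issued certificate, then the Certificate resource
kubectl delete secret my-cert-tls -n my-namespace
kubectl delete certificate my-cert -n my-namespace

# Re-apply your Certificate manifest and watch cert-manager create a new
# CertificateRequest and Order
kubectl apply -f certificate.yaml
kubectl get certificaterequests,orders -n my-namespace
</code></pre>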
<p>Used this values.yaml for Prometheus-operator helm chart</p> <pre><code>prometheus-operator: fullnameOverride: prometheus-operator prometheusOperator: resources: limits: memory: 192Mi requests: memory: 128Mi prometheus: prometheusSpec: fullnameOverride: prometheus routePrefix: /prometheus externalUrl: https://prometheus:8443/prometheus/ retention: 30d serviceMonitorSelectorNilUsesHelmValues: false resources: limits: memory: 2.0Gi requests: memory: 1.7Gi storageSpec: volumeClaimTemplate: spec: selector: matchLabels: app: my-example-prometheus resources: requests: storage: 1Gi volumes: - emptyDir: {} name: config-vol volumeMounts: - mountPath: /etc/prometheus/config_vol name: config-vol </code></pre> <p>volume and volumemount in Prometheus are not applying?Installed by providing the dependency in chart.yaml file provided below and by above values.yaml file.</p> <pre><code>dependencies: - name: prometheus-operator version: &quot;8.13.12&quot; condition: prometheus-operator.create repository: https://kubernetes-charts.storage.googleapis.com </code></pre> <p>To reproduce this issue install Prometheus-operator with this dependency in chart.yaml and values.yaml and see whether volume and volumeMount provided in values.yaml are applied or not. There is no such error prometheus-operator is in running state but volume and volumeMount are not get applied.</p>
<p>It looks like the volume feature has been available since version 8.13.13 of the prometheus operator. Here's the <a href="https://github.com/helm/charts/commit/ef0d749132ecfa61b2ea47ccacafeaf5cf1d3d77" rel="nofollow noreferrer">commit</a> reference:</p> <pre><code>`prometheus.prometheusSpec.volumes` | Additional Volumes on the output StatefulSet definition. | `[]` | `prometheus.prometheusSpec.volumeMounts` | Additional VolumeMounts on the output StatefulSet definition. | `[]` | </code></pre> <p>Please update your operator to a newer version.</p>
<p>I have made a minimal image of a fast-api program by building its Dockerfile. The imagename is <code>task1</code>. I'm trying to deploy it locally with minikube, kubectl by specifying a file named <code>task1-deployment.yaml</code> with the following structure:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: task1-deployment spec: replicas: 1 selector: matchLabels: app: task1 template: metadata: labels: app: task1 spec: containers: - name: task1 image: task1:latest imagePullPolicy: Never ports: - containerPort: 8080 </code></pre> <p>Whenever I run <code>kubectl -f apply task1-deployment.yaml</code> it shows a successful message, yet when running <code>kubectl get deployment</code>, task1 isn't ready and can't be accessed at localhost:8080. <br> The other official-pulled images work, what am I doing wrong?<br><br></p> <p>Note: &quot;imagePullPolicy&quot; doesn't change anything</p>
<p>Posting this as an answer instead of a comment:</p> <p>Minikube can't get images from your machine's docker - you need to load it into the minikube cluster by running the following command:</p> <pre><code>$ minikube image load &lt;image_name&gt; </code></pre>
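<p>For example, with the image from the question (a short sketch; <code>minikube image ls</code> just verifies that the image is now present inside the cluster):</p> <pre><code># Build the image on the host, then copy it into the minikube node
docker build -t task1:latest .
minikube image load task1:latest

# Verify the image is visible inside the cluster
minikube image ls | grep task1
</code></pre>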
<p>Consider a Dockerfile that uses the <a href="https://docs.docker.com/engine/reference/builder/#user" rel="nofollow noreferrer">USER</a> instruction for example</p> <pre><code>FROM adoptopenjdk:11-jre-hotspot as builder WORKDIR application ARG JAR_FILE=build/libs/*.jar COPY ${JAR_FILE} application.jar RUN java -Djarmode=layertools -jar application.jar extract FROM adoptopenjdk:11-jre-hotspot WORKDIR application COPY --from=builder application/dependencies/ ./ COPY --from=builder application/spring-boot-loader ./ COPY --from=builder application/snapshot-dependencies/ ./ COPY --from=builder application/application/ ./ VOLUME /tmp EXPOSE 8080 USER nobody ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"] </code></pre> <p>When Kuberenets runs the docker image does it respect the USER instruction or does it ignore it?</p> <p>If the user does not exist in the K8s cluster does K8s create the user? </p>
<p>Yes, it does. That's an important feature when securing containers (although using <code>gosu</code> or a similar tool would provide similar security).</p>
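<p>As for the second part of the question: Kubernetes does not create the user - <code>nobody</code> has to exist inside the image (it does in most base images). If you want to enforce or override the user at the Kubernetes level rather than in the Dockerfile, you can set a <code>securityContext</code>, which takes precedence over the image's <code>USER</code> instruction. A minimal sketch (names and image are placeholders):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: example-app              # placeholder
spec:
  securityContext:
    runAsNonRoot: true           # reject containers that would run as root
  containers:
  - name: app
    image: example/app:latest    # placeholder image
    securityContext:
      runAsUser: 65534           # numeric UID of 'nobody'; overrides the image USER
</code></pre>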
<p>I have a system where some long running tasks are done by processing messages from a message queue. The actual tasks are doing some significant processing on large videos.</p> <p>Here is the problem in the following steps:</p> <ol> <li>Process in pod takes message off queue, start processing video, this takes minutes.</li> <li>Developer makes change, releases, and a Kubernetes Deployment starts.</li> <li>During the deployment, the long running process gets killed and replaced by new node, which loses all work.</li> </ol> <p>Is there a mechanism to work around this in Kubernetes? Some kind of check to ensure that the worker in the pod is in a state that it can be destroyed safely? Almost something like a destroyProbe (the opposite of a readinessProbe)</p>
<p>Calling a <code>preStop</code> hook before the container is terminated should help you perform a graceful shutdown. The <code>preStop</code> hook is configured at the container level and allows you to run a custom command before the <code>SIGTERM</code> signal is sent (please note that the termination grace period countdown actually starts before invoking the preStop hook, not once the SIGTERM signal is sent).</p> <blockquote> <p>This hook is called immediately before a container is terminated due to an API request or management event such as liveness probe failure, preemption, resource contention and others. A call to the preStop hook fails if the container is already in terminated or completed state. It is blocking, meaning it is synchronous, so it must complete before the call to delete the container can be sent. No parameters are passed to the handler.</p> </blockquote> <p>Setting an appropriate <code>terminationGracePeriodSeconds</code> also matters since Kubernetes' management of the Container blocks until the preStop handler completes, unless the Pod's grace period expires.</p> <p>Check the <a href="https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/" rel="nofollow noreferrer">lifecycle hooks</a> and <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-termination" rel="nofollow noreferrer">pod termination</a> documents for more information.</p>
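<p>A minimal sketch of how this could look for a worker pod like the one described (the script name and timings are assumptions - the idea is to stop pulling new messages and wait for the in-flight video job to finish before the container exits):</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: video-worker                       # placeholder
spec:
  # Give the worker up to 30 minutes to finish the current job before SIGKILL
  terminationGracePeriodSeconds: 1800
  containers:
  - name: worker
    image: example/video-worker:latest     # placeholder image
    lifecycle:
      preStop:
        exec:
          # Hypothetical script: stop consuming new messages and block
          # until the current video has been processed
          command: [&quot;/bin/sh&quot;, &quot;-c&quot;, &quot;/app/drain-and-wait.sh&quot;]
</code></pre>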
<p>I have a pod that needs to save data persistently to a place outside the pod. I think a persistentVolume is a good idea for that. The pod called writerPod needs ReadWrite-access to that volume.</p> <p>Multiple other Pods (I'm calling them readingPods) need to read the files that writerPod saved.</p> <p>Is it possible to have two persistentVolumeClaims (PVC) (that differ only in the accessMode ReadWriteOnce and ReadOnlyMany) that both bind the same PersistentVolume?</p>
<p>A PVC can have more than one accessMode configured (both ReadOnlyMany and ReadWriteOnce):</p> <pre><code> accessModes: - ReadWriteOnce - ReadOnlyMany </code></pre> <p>However, as the names imply, you can mount the disk to many pods in ReadOnlyMany (AKA <code>ROX</code>) but only one pod at a time can use that disk in ReadWriteOnce mode (AKA <code>RWO</code>).</p> <p>If your readingPods should be up only after your writerPod has written its data - you can use the same PVC, just make sure you're mounting the PVC with the readOnly flag set to true, for example:</p> <pre><code>volumes: - name: test-volume persistentVolumeClaim: claimName: my-pvc readOnly: true </code></pre> <p>If you're using a Cloud provider that supports the ReadWriteMany access mode (unfortunately, Google is not one of them right now) it will of course suit you in all scenarios. Check the <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes" rel="nofollow noreferrer">official documentation</a> to see the supported modes on each platform.</p>
<p>I have a cluster set up in Google Kubernetes Engine (GKE), with preemptible instances, TPU support, and 1 container per node.</p> <p>About twice per container per day I get this error calling e.g. <code>tf.io.gfile.glob(...)</code>:</p> <blockquote> <p>tensorflow.python.framework.errors_impl.FailedPreconditionError: Error executing an HTTP request: libcurl code 6 meaning 'Couldn't resolve host name', error details: Couldn't resolve host 'www.googleapis.com' when reading gs://bucket/dir/dir</p> </blockquote> <p>It only happens in GKE, not when running directly on a compute VM in Google Cloud.</p> <p>Is this something I should just expect in GKE and handle (e.g. back off and retry), or is there some kind of networking config issue or other underlying problem that I can do something about?</p>
<p>Since your cluster has only preemptible instances, the system pods will also restart every 24 hours (at least), including <a href="https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/" rel="nofollow noreferrer"><code>kube-dns</code></a>.<br /> You have a couple of options:</p> <ol> <li>Create another node pool (of course, this node pool doesn't have to include TPU nodes, you can try and use <code>n1-standard-1</code>, or even cheaper instances) in that GKE cluster to host the critical pods (e.g. <code>kube-dns</code>), and add a <a href="https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector" rel="nofollow noreferrer"><code>nodeSelector</code></a> field with that node pool to the critical deployments. For example:</li> </ol> <pre><code>nodeSelector: cloud.google.com/gke-nodepool: critical-node-pool </code></pre> <ol start="2"> <li>The number of <code>kube-dns</code> replicas is determined by the <code>kube-dns-autoscaler</code> deployment. Increasing the number of <code>kube-dns</code> pods can help you make your system more resilient to node preemption. You can increase it by editing the <code>kube-dns-autoscaler</code> ConfigMap in the <code>kube-system</code> namespace.<br /> Edit the ConfigMap with <code>kubectl edit configmap kube-dns-autoscaler --namespace=kube-system</code>. Look for this line: <code>linear: '{&quot;coresPerReplica&quot;:256,&quot;min&quot;:1,&quot;nodesPerReplica&quot;:16}'</code>.<br /> As explained <a href="https://kubernetes.io/docs/tasks/administer-cluster/dns-horizontal-autoscaling/#tuning-autoscaling-parameters" rel="nofollow noreferrer">here</a>, the &quot;min&quot; field indicates the minimal number of DNS backends. You can increase the minimum number of replicas, or tweak the other parameters to satisfy your needs.<br /> If you use Auto-Scaling, don't set the minimum number of <code>kube-dns</code> replicas to a higher number than your minimal node count in that cluster, as it'll keep the nodes up just due to the <code>kube-dns</code> deployment, with no relation to your application workload.</li> </ol>
<p>Hey trying to install jenkins on GKE cluster with this command </p> <p><code>helm install stable/jenkins -f test_values.yaml --name myjenkins</code> </p> <p>My version of helm and kubectl if matters</p> <pre><code>helm version Client: &amp;version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"} Server: &amp;version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"} </code></pre> <pre><code>kubectl version Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-13T11:51:44Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"darwin/amd64"} Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.11-gke.14", GitCommit:"56d89863d1033f9668ddd6e1c1aea81cd846ef88", GitTreeState:"clean", BuildDate:"2019-11-07T19:12:22Z", GoVersion:"go1.12.11b4", Compiler:"gc", Platform:"linux/amd64"} </code></pre> <p>Values downloaded with this command <code>helm inspect values stable/jenkins &gt; test_values.yaml</code> and modified:</p> <pre><code>cat test_values.yaml Master: adminPassword: 34LbGfq5LWEUgw // local testing resources: limits: cpu: '500m' memory: '1024' podLabels: nodePort: 32323 serviceType: ClusterIp Persistence: storageClass: 'managed-nfs-storage' size: 5Gi rbac: create: true </code></pre> <p>and some weird new error after update</p> <pre><code>$ helm install stable/jekins --name myjenkins -f test_values.yaml Error: failed to download "stable/jekins" (hint: running `helm repo update` may help) $ helm repo update Hang tight while we grab the latest from your chart repositories... ...Skip local chart repository ...Successfully got an update from the "stable" chart repository Update Complete. ⎈ Happy Helming!⎈ $ helm install stable/jekins --name myjenkins -f test_values.yaml Error: failed to download "stable/jekins" (hint: running `helm repo update` may help) </code></pre>
<p>As I can see you're trying to install <code>stable/jekins</code> which isn't in the helm repo instead of <code>stable/je</code><strong>n</strong><code>kins</code>. Please update your question if it's just misspelling and I'll update my answer , but I've tried your command:</p> <pre><code>$helm install stable/jekins --name myjenkins -f test_values.yaml </code></pre> <p>and got the same error:</p> <pre><code>Error: failed to download "stable/jekins" (hint: running `helm repo update` may help) </code></pre> <p><strong>EDIT</strong> To solve next errors like:</p> <pre><code>Error: render error in "jenkins/templates/deprecation.yaml": template: jenkins/templates/deprecation.yaml:258:11: executing "jenkins/templates/deprecation.yaml" at &lt;fail "Master.* values have been renamed, please check the documentation"&gt;: error calling fail: Master.* values have been renamed, please check the documentation </code></pre> <p>and </p> <pre><code>Error: render error in "jenkins/templates/deprecation.yaml": template: jenkins/templates/deprecation.yaml:354:10: executing "jenkins/templates/deprecation.yaml" at &lt;fail "Persistence.* values have been renamed, please check the documentation"&gt;: error calling fail: Persistence.* values have been renamed, please check the documentation </code></pre> <p>and so on you also need to edit <code>test_values.yaml</code></p> <pre><code>master: adminPassword: 34LbGfq5LWEUgw resources: limits: cpu: 500m memory: 1Gi podLabels: nodePort: 32323 serviceType: ClusterIP persistence: storageClass: 'managed-nfs-storage' size: 5Gi rbac: create: true </code></pre> <p>And after that it's deployed successfully:</p> <pre><code>$helm install stable/jenkins --name myjenkins -f test_values.yaml NAME: myjenkins LAST DEPLOYED: Wed Jan 8 15:14:51 2020 NAMESPACE: default STATUS: DEPLOYED RESOURCES: ==&gt; v1/ConfigMap NAME AGE myjenkins 1s myjenkins-tests 1s ==&gt; v1/Deployment NAME AGE myjenkins 0s ==&gt; v1/PersistentVolumeClaim NAME AGE myjenkins 1s ==&gt; v1/Pod(related) NAME AGE myjenkins-6c68c46b57-pm5gq 0s ==&gt; v1/Role NAME AGE myjenkins-schedule-agents 1s ==&gt; v1/RoleBinding NAME AGE myjenkins-schedule-agents 0s ==&gt; v1/Secret NAME AGE myjenkins 1s ==&gt; v1/Service NAME AGE myjenkins 0s myjenkins-agent 0s ==&gt; v1/ServiceAccount NAME AGE myjenkins 1s NOTES: 1. Get your 'admin' user password by running: printf $(kubectl get secret --namespace default myjenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo 2. Get the Jenkins URL to visit by running these commands in the same shell: export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/component=jenkins-master" -l "app.kubernetes.io/instance=myjenkins" -o jsonpath="{.items[0].metadata.name}") echo http://127.0.0.1:8080 kubectl --namespace default port-forward $POD_NAME 8080:8080 3. Login with the password from step 1 and the username: admin For more information on running Jenkins on Kubernetes, visit: https://cloud.google.com/solutions/jenkins-on-container-engine </code></pre>
<p>I have started using Cass-Operator and the setup worked like a charm! <a href="https://github.com/datastax/cass-operator" rel="nofollow noreferrer">https://github.com/datastax/cass-operator</a>.</p> <p>I have an issue though. My cluster is up and running on GCP. But how do I access it from my laptop (basically from outside)? Sorry, I'm new to Kubernetes so I do not know how to access the cluster from outside?</p> <p>I can see the nodes are up on the GCP dashboard. I can ping the external IP of the nodes from my laptop but when I run <code>cqlsh external_ip 9042</code> then the connection fails.</p> <p>How do I go about connecting the K8s/Cassandra cluster to outside work so that my web application can access it?</p> <p>I would like to:</p> <ol> <li>have a url so that my web application uses that URL to connect to the cassandra/K8s cluster instead of IP address. Thus, I need a dns. Does it come by default in K8S? Would would be the url? Would K8s managing the dns mapping for me in some nodes get restarted?</li> <li>My web application should be able to reach Cassandra on 9042. It seems load balancing is done for http/https. The Cassandra application is not a http/https request. So I don't need port 80 or 443</li> </ol> <p>I have read few tutorials which talk about Service, Loadbalancer and Ingress. But I am unable to make a start.</p> <p>I created a service like this</p> <pre><code>kind: Service apiVersion: v1 metadata: name: cass-operator-service spec: type: LoadBalancer ports: - port: 9042 selector: name: cass-operator </code></pre> <p>Then created the service - <code>kubectl apply -f ./cass-operator-service.yaml</code></p> <p>I checked if the service was created using <code>kubectl get svc</code> and got output</p> <pre><code>NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE cass-operator-service LoadBalancer 10.51.249.224 34.91.214.233 9042:30136/TCP 4m17s kubernetes ClusterIP 10.51.240.1 &lt;none&gt; 443/TCP 10h. </code></pre> <p>But when I run <code>cqlsh 34.91.214.233 9042</code> then the connection fails</p> <p>It seems that the requests to port 9042 would be forwarded to 30136. 
But they should be forwarded to 9042, as that is where the Cassandra image in the pods is listening for incoming requests.</p> <p>UPDATE</p> <p>Tried targetPort but still no luck:</p> <pre><code>manuchadha25@cloudshell:~ (copper-frame-262317)$ cat cass-operator-service.yaml
kind: Service
apiVersion: v1
metadata:
  name: cass-operator-service
spec:
  type: LoadBalancer
  ports:
  - port: 9042
    targetPort: 9042
  selector:
    name: cass-operator
manuchadha25@cloudshell:~ (copper-frame-262317)$ kubectl get service
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.51.240.1   &lt;none&gt;        443/TCP   11h
manuchadha25@cloudshell:~ (copper-frame-262317)$ kubectl apply -f ./cass-operator-service.yaml
service/cass-operator-service created
manuchadha25@cloudshell:~ (copper-frame-262317)$ kubectl get service
NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
cass-operator-service   LoadBalancer   10.51.255.184   &lt;pending&gt;     9042:30024/TCP   12s
kubernetes              ClusterIP      10.51.240.1     &lt;none&gt;        443/TCP          11h
manuchadha25@cloudshell:~ (copper-frame-262317)$ kubectl get service
NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
cass-operator-service   LoadBalancer   10.51.255.184   &lt;pending&gt;     9042:30024/TCP   37s
kubernetes              ClusterIP      10.51.240.1     &lt;none&gt;        443/TCP          11h
manuchadha25@cloudshell:~ (copper-frame-262317)$ kubectl get service
NAME                    TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
cass-operator-service   LoadBalancer   10.51.255.184   34.91.214.233   9042:30024/TCP   67s
kubernetes              ClusterIP      10.51.240.1     &lt;none&gt;          443/TCP          11h
manuchadha25@cloudshell:~ (copper-frame-262317)$ ping 34.91.214.233
PING 34.91.214.233 (34.91.214.233) 56(84) bytes of data.
64 bytes from 34.91.214.233: icmp_seq=1 ttl=109 time=7.89 ms </code></pre> <p><a href="https://i.stack.imgur.com/ab0X6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ab0X6.png" alt="enter image description here" /></a></p> <p>Querying all namespaces reveals the following</p> <p><a href="https://i.stack.imgur.com/jjH5E.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jjH5E.png" alt="enter image description here" /></a></p> <p>But querying pods with namespace cass-operator returns an empty result</p> <pre><code>manuchadha25@cloudshell:~ (copper-frame-262317)$ kubectl get pods -l name=cass-operator
No resources found in default namespace. </code></pre>
<ul> <li>Since you are new to Kubernetes, you probably are not familiar with <strong>StatefulSets</strong>:</li> </ul> <blockquote> <p>StatefulSet is the workload API object used to manage stateful applications.</p> <p>Manages the deployment and scaling of a set of <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/" rel="noreferrer">Pods</a>, <em>and provides guarantees about the ordering and uniqueness</em> of these Pods.</p> <p>Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. <strong>Unlike a Deployment</strong>, a StatefulSet maintains a sticky identity for each of their Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.</p> </blockquote> <ul> <li>I recommend you to read these articles to learn more about it's mechanisms: <ul> <li><a href="https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/" rel="noreferrer">Kubernetes.io - Statefulsets</a></li> <li><a href="https://www.magalix.com/blog/kubernetes-statefulsets-101-state-of-the-pods" rel="noreferrer">Megalix - Statefulsets 101</a></li> <li><a href="https://itnext.io/exposing-statefulsets-in-kubernetes-698730fb92a1" rel="noreferrer">ITNext - Exposing Statefulsets in Kubernetes</a></li> </ul> </li> </ul> <hr /> <blockquote> <p>How do I go about connecting the K8s/Cassandra cluster to outside work so that my web application can access it?</p> </blockquote> <ul> <li>I found out that datastax/cass-operator is still developing their documentation, I found <a href="https://github.com/datastax/cass-operator/tree/f395fe44a36a9d4e7c26026618093ee134ebedf1/docs/proxy" rel="noreferrer">this document</a> that is not merged to master yet, but it explains very well about how to connect to Cassandra, I strongly recommend reading.</li> <li>There are several <a href="https://github.com/datastax/cass-operator/issues" rel="noreferrer">open issues</a> for documenting methods for connection from outside the cluster.</li> </ul> <p>I followed the guide in <a href="https://github.com/datastax/cass-operator" rel="noreferrer">https://github.com/datastax/cass-operator</a> to deploy the cass-operator + Cassandra Datacenter Example as from your images I believe you followed as well:</p> <pre><code>$ kubectl create -f https://raw.githubusercontent.com/datastax/cass-operator/v1.2.0/docs/user/cass-operator-manifests-v1.15.yaml namespace/cass-operator created serviceaccount/cass-operator created secret/cass-operator-webhook-config created customresourcedefinition.apiextensions.k8s.io/cassandradatacenters.cassandra.datastax.com created clusterrole.rbac.authorization.k8s.io/cass-operator-cluster-role created clusterrolebinding.rbac.authorization.k8s.io/cass-operator created role.rbac.authorization.k8s.io/cass-operator created rolebinding.rbac.authorization.k8s.io/cass-operator created service/cassandradatacenter-webhook-service created deployment.apps/cass-operator created validatingwebhookconfiguration.admissionregistration.k8s.io/cassandradatacenter-webhook-registration created $ kubectl create -f https://raw.githubusercontent.com/datastax/cass-operator/v1.2.0/operator/k8s-flavors/gke/storage.yaml storageclass.storage.k8s.io/server-storage created $ kubectl -n cass-operator create -f https://raw.githubusercontent.com/datastax/cass-operator/v1.2.0/operator/example-cassdc-yaml/cassandra-3.11.6/example-cassdc-minimal.yaml cassandradatacenter.cassandra.datastax.com/dc1 created $ kubectl get 
all -n cass-operator NAME READY STATUS RESTARTS AGE pod/cass-operator-78c6469c6-6qhsb 1/1 Running 0 139m pod/cluster1-dc1-default-sts-0 2/2 Running 0 138m pod/cluster1-dc1-default-sts-1 2/2 Running 0 138m pod/cluster1-dc1-default-sts-2 2/2 Running 0 138m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/cass-operator-metrics ClusterIP 10.21.5.65 &lt;none&gt; 8383/TCP,8686/TCP 138m service/cassandradatacenter-webhook-service ClusterIP 10.21.0.89 &lt;none&gt; 443/TCP 139m service/cluster1-dc1-all-pods-service ClusterIP None &lt;none&gt; &lt;none&gt; 138m service/cluster1-dc1-service ClusterIP None &lt;none&gt; 9042/TCP,8080/TCP 138m service/cluster1-seed-service ClusterIP None &lt;none&gt; &lt;none&gt; 138m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/cass-operator 1/1 1 1 139m NAME DESIRED CURRENT READY AGE replicaset.apps/cass-operator-78c6469c6 1 1 1 139m NAME READY AGE statefulset.apps/cluster1-dc1-default-sts 3/3 138m $ CASS_USER=$(kubectl -n cass-operator get secret cluster1-superuser -o json | jq -r '.data.username' | base64 --decode) $ CASS_PASS=$(kubectl -n cass-operator get secret cluster1-superuser -o json | jq -r '.data.password' | base64 --decode) $ echo $CASS_USER cluster1-superuser $ echo $CASS_PASS _5ROwp851l0E_2CGuN_n753E-zvEmo5oy31i6C0DBcyIwH5vFjB8_g </code></pre> <ul> <li>From the <code>kubectl get all</code> command above we can see there is an statefulset called <code>statefulset.apps/cluster1-dc1-default-sts</code> which controls the cassandra pods.</li> <li>In order to create a LoadBalancer service that makes available all the pods managed by this <code>statefulset</code> we need to use the same labels assigned to them:</li> </ul> <pre><code>$ kubectl describe statefulset cluster1-dc1-default-sts -n cass-operator Name: cluster1-dc1-default-sts Namespace: cass-operator CreationTimestamp: Tue, 30 Jun 2020 12:24:34 +0200 Selector: cassandra.datastax.com/cluster=cluster1,cassandra.datastax.com/datacenter=dc1,cassandra.datastax.com/rack=default Labels: app.kubernetes.io/managed-by=cass-operator cassandra.datastax.com/cluster=cluster1 cassandra.datastax.com/datacenter=dc1 cassandra.datastax.com/rack=default </code></pre> <ul> <li>Now let's create the LoadBalancer service yaml and use the labels as <code>selectors</code> for the service:</li> </ul> <pre><code>apiVersion: v1 kind: Service metadata: name: cassandra-loadbalancer namespace: cass-operator labels: cassandra.datastax.com/cluster: cluster1 cassandra.datastax.com/datacenter: dc1 cassandra.datastax.com/rack: default spec: type: LoadBalancer ports: - port: 9042 protocol: TCP selector: cassandra.datastax.com/cluster: cluster1 cassandra.datastax.com/datacenter: dc1 cassandra.datastax.com/rack: default </code></pre> <blockquote> <p><em>&quot;My web application should be able to reach Cassandra on 9042. It seems load balancing is done for http/https. The Cassandra application is not a http/https request. So I don't need port 80 or 443.&quot;</em></p> </blockquote> <ul> <li><p>When you create a Service of type <code>LoadBalancer</code>, a Google Cloud controller wakes up and configures a <a href="https://cloud.google.com/load-balancing/docs/network" rel="noreferrer">network load balancer</a> in your project. The load balancer has a stable IP address that is accessible from outside of your project.</p> </li> <li><p><strong>The network load balancer supports any and all ports</strong>. You can use Network Load Balancing to load balance TCP and UDP traffic. 
Because the load balancer is a pass-through load balancer, your backends terminate the load-balanced TCP connection or UDP packets themselves.</p> </li> <li><p>Now let's apply the yaml and note the Endpoint IPs of the pods being listed:</p> </li> </ul> <pre><code>$ kubectl apply -f cassandra-loadbalancer.yaml service/cassandra-loadbalancer created $ kubectl get service cassandra-loadbalancer -n cass-operator NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE cassandra-loadbalancer LoadBalancer 10.21.4.253 146.148.89.7 9042:30786/TCP 5m13s $ kubectl describe svc cassandra-loadbalancer -n cass-operator Name: cassandra-loadbalancer Namespace: cass-operator Labels: cassandra.datastax.com/cluster=cluster1 cassandra.datastax.com/datacenter=dc1 cassandra.datastax.com/rack=default Annotations: Selector: cassandra.datastax.com/cluster=cluster1,cassandra.datastax.com/datacenter=dc1,cassandra.datastax.com/rack=default Type: LoadBalancer IP: 10.21.4.253 LoadBalancer Ingress: 146.148.89.7 Port: &lt;unset&gt; 9042/TCP TargetPort: 9042/TCP NodePort: &lt;unset&gt; 30786/TCP Endpoints: 10.24.0.7:9042,10.24.2.7:9042,10.24.3.9:9042 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; </code></pre> <ul> <li>To test it, I'll use my cloud shell with a cassandra container to emulate your notebook using the <code>LoadBalancer</code> IP provided above:</li> </ul> <pre class="lang-sh prettyprint-override"><code>$ docker run -it cassandra /bin/sh # cqlsh -u cluster1-superuser -p _5ROwp851l0E_2CGuN_n753E-zvEmo5oy31i6C0DBcyIwH5vFjB8_g 146.148.89.7 9042 Connected to cluster1 at 146.148.89.7:9042. [cqlsh 5.0.1 | Cassandra 3.11.6 | CQL spec 3.4.4 | Native protocol v4] Use HELP for help. cluster1-superuser@cqlsh&gt; select * from system.peers; peer | data_center | host_id | preferred_ip | rack | release_version | rpc_address | schema_version | tokens -----------+-------------+--------------------------------------+--------------+---------+-----------------+-------------+--------------------------------------+-------------------------- 10.24.3.9 | dc1 | bcec6c12-49a1-41d5-be58-5150e99f5dfb | null | default | 3.11.6 | 10.24.3.9 | e84b6a60-24cf-30ca-9b58-452d92911703 | {'2248175870989649036'} 10.24.0.7 | dc1 | 68409f08-9d6e-4e40-91ff-f43581c8b6f3 | null | default | 3.11.6 | 10.24.0.7 | e84b6a60-24cf-30ca-9b58-452d92911703 | {'-1105923522927946373'} (2 rows) </code></pre> <hr /> <blockquote> <p><em>&quot;have a url so that my web application uses that URL to connect to the cassandra/K8s cluster instead of IP address. So I need a dns. Does it come by default in K8S? Would would be the url? Would K8s managing the dns mapping for me in some nodes get restarted?&quot;</em></p> </blockquote> <ul> <li>That documentation on cassandra-operator also has a section about <a href="https://github.com/datastax/cass-operator/tree/f395fe44a36a9d4e7c26026618093ee134ebedf1/docs/proxy#ingresses" rel="noreferrer">Ingress</a>, I recommend reading as well.</li> <li>Kubernetes does not come with a default DNS name.</li> <li>You will have to register a domain, point the DNS to the IP of the load balancer this way it will resolve the IP of the Network LoadBalancer.</li> <li>The Network LoadBalancer is bound to a Static Public IP, any changes in Kubernetes nodes will not cause service unavailability.</li> </ul> <p>If you have any question, let me know in the comments.</p>
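<p>For the DNS part, a minimal sketch of pointing a name at the LoadBalancer IP with Cloud DNS (the zone <code>my-zone</code> and the host name <code>cassandra.example.com</code> are placeholders, not something created by the operator):</p> <pre><code>$ gcloud dns record-sets transaction start --zone=my-zone
$ gcloud dns record-sets transaction add 146.148.89.7 --name=cassandra.example.com. --ttl=300 --type=A --zone=my-zone
$ gcloud dns record-sets transaction execute --zone=my-zone </code></pre> <p>Your web application would then connect to <code>cassandra.example.com:9042</code> instead of the raw IP, and the record only needs to change if you ever recreate the LoadBalancer service.</p>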
<p>I have the following list of pods:</p> <pre><code>user@compute:~&gt; kubectl get pods -n test --template '{{range .items}}{{.metadata.name}}{{&quot;\n&quot;}}{{end}}'
test-1
test-2
test-3
abc
cdf
dfg </code></pre> <p>I would like to get just the pods which include <strong>test</strong> in the name. Is there any way to accomplish that without using grep or for, but using a template or any Kubernetes way of doing it?</p> <p>Thank you</p>
<p>The majority of Kubernetes maintainers (as of now) do not intend to make the kubectl command more advanced than external tools made especially for data set parsing (e.g. grep, jq), and still suggest using external tools in use cases similar to yours. There is a PR planned for making this limitation more visible in the docs for end-users.</p> <p>Switching the output format from --template (go-template format) to --jsonpath would theoretically make it possible to achieve your goal, but as it happens a regex filter is not supported, and such a command results in the following error:</p> <pre><code>error: error parsing jsonpath {.items[?(@.metadata.name=~/^test$/)].metadata.name}, unrecognized character in action: U+007E '~' </code></pre> <p>You can check and follow the issue discussed here: <a href="https://github.com/kubernetes/kubernetes/issues/72220" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/72220</a></p> <p>Having said that, for now you can give jq a try instead of grep:</p> <pre><code>k get po -n monitoring -o json | jq '.items[] | select(.metadata.name|test(&quot;prom&quot;))| .metadata.name'
&quot;prometheus-k8s-0&quot;
&quot;prometheus-k8s-1&quot;
&quot;prometheus-operator-65c77bdd6c-lmrlf&quot; </code></pre> <p>or use the only workaround known to me that achieves a similar result w/o grep, this very unattractive command:</p> <pre><code>k set env po/prometheus-k8s-0 -n monitoring -c prometheus-config* --dry-run=client --list
# Pod prometheus-k8s-0, container prometheus-config-reloader
# POD_NAME from field path metadata.name </code></pre> <p>*this command uses the fact that '-c' accepts wildcards; moreover I'm assuming that container names are consistent with Pod names</p>
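<p>Applied to the pods in your question (assuming <code>jq</code> is available), the same approach would look like this:</p> <pre><code>kubectl get pods -n test -o json | jq -r '.items[] | select(.metadata.name | test(&quot;test&quot;)) | .metadata.name' </code></pre> <p>The <code>-r</code> flag prints the names without the surrounding quotes.</p>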
<p>In the spirit to make the wording of my question in a clean and simplistic fashion, I try to avoid using word such as 'redundancy', 'distributed', 'clustering', 'orchestrating', 'fault tolerant', 'container'</p> <p>If I am to write</p> <ul> <li>java program A </li> <li>java program B</li> </ul> <p>, two very similar programs, each multi-threaded </p> <p>My goal is to write A and B in such way that when A terminates unexpectedly, B will run instead. Since A and B share a centralized database, integrity of data is not the main concern here.</p> <p><strong>Question 1: what is the simplest and most elegant approach to enable the 'monitoring' mechanism needed to detect the termination of A and 'wake up' B, at each of the following scalability levels?</strong> </p> <p><strong>Level 1</strong>: one instance of each of A and B running on the same processor and RAM (e.g. should I use JMXConnector?)</p> <p><strong>Level 2</strong>: one instance of each of A and B running on different sets of processor and RAM within a LAN, e.g. two laptops at home. (e.g. use RMI, Docker, Kubernetes?)</p> <p><strong>Level 3</strong>: one instance of each of A and B running on different sets of processor and RAM on WAN, e.g. my laptop and a PC at a remote location</p> <p><strong>Level 4 <em>(yes, may overlap with Level 2 and 3, conceptually)</em></strong>: multiple instances of A and B running on different nodes of cloud service such as AWS Cloud and Google Cloud. (e.g. use RMI, Docker, Kubernetes?) </p> <p><strong>Question 2: if my ultimate goal is as per Level 4, but I'll start development of A and B on my laptop first, similar to level 1, what could be an overall good approach to do this whole development/deployment cycle?</strong> </p>
<p>Kubernetes is a good option for its elasticity: from a single-host, single-node setup to huge deployments.</p> <blockquote> <ul> <li><p>For your Level 1 scenario you can run Kubernetes using virtualization on <a href="https://kubernetes.io/docs/tasks/tools/install-minikube/" rel="nofollow noreferrer"><strong>Minikube</strong></a>.</p> </li> <li><p>For your Level 2, 3 and 4 scenarios you can set up your Kubernetes Control Plane with <a href="https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/" rel="nofollow noreferrer"><strong>KubeAdm</strong></a>: it creates the Master Node and provides you with a join key, so you can add more host computers (nodes) as you need.</p> </li> </ul> </blockquote> <p>After your initial stage on Minikube you can easily export the <code>yaml</code> of your configurations, in order to run on a bigger local cluster, a hybrid one, or 100% on the Cloud.</p> <p><strong>Edit:</strong></p> <ul> <li><p>Whether the Kubernetes platform is suitable or not for running a High-Frequency Trading (HFT)-like application within it is yet another topic, and requires <strong>opening a separate thread on SO.</strong></p> </li> <li><p>I can only remind here that Kubernetes' design was influenced by Google's Borg system, which in turn was made, among other things, to handle short-lived latency-sensitive requests (e.g. web search). Check Wikipedia to learn more about this topic.</p> </li> <li><p>Nowadays applications running on Kubernetes can natively use the underlying hardware. For instance, local storage directly attached to worker nodes (like NVMe SSD drives) or GPU resources, which respectively offer consistently high disk I/O performance for your applications and can accelerate compute-intensive tasks.</p> </li> </ul>
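<p>As an illustration only (the image name and the health endpoint below are placeholders, not taken from your programs), a Deployment with two replicas and a liveness probe is one way to get the failover behaviour without writing the monitoring yourself:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: program-a
spec:
  replicas: 2
  selector:
    matchLabels:
      app: program-a
  template:
    metadata:
      labels:
        app: program-a
    spec:
      containers:
      - name: program-a
        image: myrepo/program-a:1.0   # placeholder image
        livenessProbe:
          httpGet:
            path: /healthz            # assumes the app exposes a health endpoint
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5 </code></pre> <p>If one instance terminates or stops answering the probe, Kubernetes restarts or reschedules it automatically, and the same manifest works unchanged from Minikube (Level 1) up to a cloud cluster (Level 4).</p>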
<p>I'm trying to provision an encrypted disk for GKE dynamically, but I really don't understand the part below.</p> <pre><code>Grant permission to use the key You must assign the Compute Engine service account used by nodes in your cluster the Cloud KMS CryptoKey Encrypter/Decrypter role. This is required for GKE Persistent Disks to access and use your encryption key. The Compute Engine service account's name has the following format: service-[PROJECT_NUMBER]@compute-system.iam.gserviceaccount.com </code></pre> <p>Is it really necessary to grant "Cloud KMS CryptoKey Encrypter/Decrypter" to the Compute Engine service account? Can I create a new SA and grant this role to it? The description says it is the SA used by the nodes. So I'm wondering if I can create a new SA, grant it the Cloud KMS role, and then use this SA to spin up the GKE cluster. Then I think it should be possible to provision encrypted disks for GKE.</p> <p>Official document below:</p> <p><a href="https://cloud.google.com/kubernetes-engine/docs/how-to/using-cmek#dynamically_provision_an_encrypted" rel="nofollow noreferrer">dynamically_provision_an_encrypted</a></p>
<p>I tried to follow this <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/dynamic-provisioning-cmek" rel="nofollow noreferrer">documentation</a> step by step: </p> <ol> <li><p>create gke cluster (<a href="https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver" rel="nofollow noreferrer">check</a> Kubernetes Compatibility compatibility, I decided to stick with 1.14 this time), key-ring and key</p></li> <li><p>deploy CSI driver to the cluster</p> <p>2.1. download driver <code>$git clone https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver /PATH/gcp-compute-persistent-disk-csi-driver</code></p> <p>2.2. configure variables for your project in <code>/PATH/gcp-compute-persistent-disk-csi-driver/deploy/setup-project.sh</code></p> <p>2.3. create service account with <code>/PATH/gcp-compute-persistent-disk-csi-driver/deploy/setup-project.sh</code></p></li> </ol> <p>2.4. configure variables for driver deployment in <code>/PATH/gcp-compute-persistent-disk-csi-driver/deploy/kubernetes/deploy-driver.sh</code> and <code>/PATH/gcp-compute-persistent-disk-csi-driver/deploy/kubernetes/deploy-driver.shinstall-kustomize.sh</code></p> <p>2.5. deploy CSI driver (I stick with stable version)</p> <pre><code>$./deploy-driver.sh </code></pre> <ol start="3"> <li><p>enable the Cloud KMS API</p></li> <li><p>assign the <code>Cloud KMS CryptoKey Encrypter/Decrypter</code> role (<code>roles/cloudkms.cryptoKeyEncrypterDecrypter</code>) to the <code>Compute Engine Service Agent</code> (<code>service-[PROJECT_NUMBER]@compute-system.iam.gserviceaccount.com</code>)</p></li> <li><p>create StorageClass</p> <pre><code>$cat storage.yaml apiVersion: storage.k8s.io/v1beta1 kind: StorageClass metadata: name: csi-gce-pd provisioner: pd.csi.storage.gke.io parameters: type: pd-standard disk-encryption-kms-key: projects/test-prj/locations/europe-west3/keyRings/TEST-KEY-RING/cryptoKeys/TEST-KEY $kubectl describe storageclass csi-gce-pd Name: csi-gce-pd IsDefaultClass: No Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1beta1","kind":"StorageClass","metadata":{"annotations":{},"name":"csi-gce-pd"},"parameters":{"disk-encryption-kms-key":"projects/test-prj/locations/europe-west3/keyRings/TEST-KEY-RING/cryptoKeys/TEST-KEY","type":"pd-standard"},"provisioner":"pd.csi.storage.gke.io"} Provisioner: pd.csi.storage.gke.io Parameters: disk-encryption-kms-key=projects/test-prj/locations/europe-west3/keyRings/TEST-KEY-RING/cryptoKeys/TEST-KEY,type=pd-standard AllowVolumeExpansion: &lt;unset&gt; MountOptions: &lt;none&gt; ReclaimPolicy: Delete VolumeBindingMode: Immediate Events: &lt;none&gt; </code></pre></li> <li><p>create persistent volume</p> <p>$kubectl apply -f pvc.yaml<br> persistentvolumeclaim/podpvc created</p> <p>$kubectl describe pvc podpvc Name: podpvc Namespace: default StorageClass: csi-gce-pd Status: Bound Volume: pvc-b383584a-32c5-11ea-ad6e-42010a9c007d Labels: Annotations:</p> <pre><code> kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"podpvc","namespace":"default"},"spec":{"accessModes... 
pv.kubernetes.io/bind-completed: yes pv.kubernetes.io/bound-by-controller: yes volume.beta.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io Finalizers: [kubernetes.io/pvc-protection] Capacity: 6Gi Access Modes: RWO VolumeMode: Filesystem Mounted By: &lt;none&gt; Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Provisioning 31m pd.csi.storage.gke.io_gke-test-cluster-default-pool-cd22e088-t1h0_c158f4fc-07ba-411e-8a94-74595f2b2f1d External provisioner is provisioning volume for claim "default/podpvc" Normal ExternalProvisioning 31m (x2 over 31m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "pd.csi.storage.gke.io" or manually created by system administrator Normal ProvisioningSucceeded 31m pd.csi.storage.gke.io_gke-test-cluster-default-pool-cd22e088-t1h0_c158f4fc-07ba-411e-8a94-74595f2b2f1d Successfully provisioned volume pvc-b383584a-32c5-11ea-ad6e-42010a9c007d </code></pre></li> </ol> <p>And it's successfully provisioned.</p> <p>Then I removed <code>Cloud KMS CryptoKey Encrypter/Decrypter</code> role from the <code>Compute Engine Service Agent</code> and persistent volume created at step 6 and tried again:</p> <pre><code>$kubectl apply -f pvc.yaml persistentvolumeclaim/podpvc created $kubectl describe pvc podpvc Name: podpvc Namespace: default StorageClass: csi-gce-pd Status: Pending Volume: Labels: &lt;none&gt; Annotations: kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"podpvc","namespace":"default"},"spec":{"accessModes... volume.beta.kubernetes.io/storage-provisioner: pd.csi.storage.gke.io Finalizers: [kubernetes.io/pvc-protection] Capacity: Access Modes: VolumeMode: Filesystem Mounted By: &lt;none&gt; Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Provisioning 2m15s (x10 over 11m) pd.csi.storage.gke.io_gke-serhii-test-cluster-default-pool-cd22e088-t1h0_c158f4fc-07ba-411e-8a94-74595f2b2f1d External provisioner is provisioning volume for claim "default/podpvc" Warning ProvisioningFailed 2m11s (x10 over 11m) pd.csi.storage.gke.io_gke-serhii-test-cluster-default-pool-cd22e088-t1h0_c158f4fc-07ba-411e-8a94-74595f2b2f1d failed to provision volume with StorageClass "csi-gce-pd": rpc error: code = Internal desc = CreateVolume failed to create single zonal disk "pvc-b1a238b5-35fa-11ea-bec8-42010a9c01e6": failed to insert zonal disk: unkown Insert disk error: googleapi: Error 400: Cloud KMS error when using key projects/serhii-test-prj/locations/europe-west3/keyRings/SERHII-TEST-KEY-RING/cryptoKeys/SERHII-TEST-KEY: Permission 'cloudkms.cryptoKeyVersions.useToEncrypt' denied on resource 'projects/serhii-test-prj/locations/europe-west3/keyRings/SERHII-TEST-KEY-RING/cryptoKeys/SERHII-TEST-KEY' (or it may not exist)., kmsPermissionDenied Normal ExternalProvisioning 78s (x43 over 11m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "pd.csi.storage.gke.io" or manually created by system administrator </code></pre> <p>and persistent volume stayed in pending status.</p> <p>And, as you can see, in the <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/dynamic-provisioning-cmek" rel="nofollow noreferrer">documentation</a> it's necessary:</p> <blockquote> <p><strong>Grant permission to use the key</strong></p> <p>You must assign the Compute Engine service account used by nodes in your cluster the Cloud KMS CryptoKey Encrypter/Decrypter role. 
This is required for GKE Persistent Disks to access and use your encryption key.</p> </blockquote> <p>and it's not enough to create the service account with <code>/PATH/gcp-compute-persistent-disk-csi-driver/deploy/setup-project.sh</code> provided by the CSI driver. </p> <p><strong>EDIT</strong> Please notice that:</p> <blockquote> <p>For CMEK-protected node boot disks, this Compute Engine service account is the account which requires permissions to do encryption using your Cloud KMS key. This is true even if you are using a custom service account on your nodes.</p> </blockquote> <p>So, there's no way to use only a custom service account in this case without the Compute Engine service account, because CMEK-protected persistent volumes are managed by GCE, not by GKE. Meanwhile, you can grant only the necessary permissions to your custom service account to improve the security of your project. </p>
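<p>For reference, a sketch of the grant itself using the key and key ring from the example above (the project number is a placeholder):</p> <pre><code>$gcloud kms keys add-iam-policy-binding TEST-KEY \
  --keyring TEST-KEY-RING \
  --location europe-west3 \
  --member serviceAccount:service-[PROJECT_NUMBER]@compute-system.iam.gserviceaccount.com \
  --role roles/cloudkms.cryptoKeyEncrypterDecrypter </code></pre> <p>Re-adding that binding should let the pending PVC be provisioned again on the next retry.</p>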
<p>I'm trying to deploy <a href="https://github.com/bitnami/charts/tree/master/bitnami/mysql" rel="noreferrer">bitnami/mysql</a> chart inside my <a href="https://minikube.sigs.k8s.io/docs/" rel="noreferrer">minikube</a>. I'm using Kubernetes v1.19, Minikube v1.17.1 and <a href="https://helm.sh/" rel="noreferrer">Helm 3</a></p> <p>I've created a PVC and PV as follow:</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: mysql-pvc spec: storageClassName: standard accessModes: - ReadWriteOnce resources: requests: storage: 3Gi selector: matchLabels: id: mysql-pv ---- kind: PersistentVolume apiVersion: v1 metadata: name: mysql-pv labels: type: local id: mysql-pv spec: storageClassName: standard capacity: storage: 8Gi accessModes: - ReadWriteOnce hostPath: path: /var/lib/mysql </code></pre> <p>I've created the directory <code>/var/lib/mysql</code> by doing <code>sudo mkdir -p /var/lib/mysql</code> And this is how I create my PVC and PC:</p> <pre><code>kubectl apply -f mysql-pv-dev.yaml kubectl apply -f mysql-pvc-dev.yaml </code></pre> <p>Which seems to work:</p> <pre><code>NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE mysql-pvc Bound mysql-pv 8Gi RWO standard 59s </code></pre> <p>I am deploying my <code>mysql</code> with: <code>helm upgrade --install dev-mysql -f mysql-dev.yaml bitnami/mysql</code></p> <p>Custom value file - <code>mysql-dev.yaml</code>:</p> <pre><code>auth: database: dev_db username: dev_user password: passworddev rootPassword: rootpass image: debug: true primary: persistence: existingClaim: mysql-pvc extraVolumeMounts: | - name: init mountPath: /docker-entrypoint-initdb.d extraVolumes: | - name: init hostPath: path: /home/dev/init_db_scripts/ type: Directory volumePermissions: enabled: true </code></pre> <p>The deployement works:</p> <pre><code>NAME READY STATUS RESTARTS AGE dev-mysql-0 0/1 Running 0 8s </code></pre> <p>the problem is that the pod never gets ready because:</p> <pre><code> Warning Unhealthy 0s (x2 over 10s) kubelet Readiness probe failed: mysqladmin: [Warning] Using a password on the command line interface can be insecure. mysqladmin: connect to server at 'localhost' failed error: 'Access denied for user 'root'@'localhost' (using password: YES)' </code></pre> <p><code>mysqld</code> is running inside the pod but for some reasons the root password isn't properly set because when I exec to the pod and try to connect to <code>mysql</code> I get:</p> <pre><code>$ kubectl exec -ti dev-mysql bash I have no name!@dev-mysql-0:/$ mysql -u root -prootpass mysql: [Warning] Using a password on the command line interface can be insecure. ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES) I have no name!@dev-mysql-0:/$ </code></pre> <p>Instead it's using the <a href="https://github.com/bitnami/charts/blob/master/bitnami/mysql/values.yaml" rel="noreferrer">default values</a> so if I try: <code>mysql -u root -p</code> without password it works great.</p> <p>Thanks</p>
<p>A Bitnami engineer here. I was able to reproduce the issue and I'm going to create an internal task to resolve it. We will update this thread when we have more information.</p>
<p>I'm new to Kubernetes and Helm. I have installed k3d and helm: k3d version v1.7.0 k3s version v1.17.3-k3s1</p> <pre><code>helm version version.BuildInfo{Version:&quot;v3.2.4&quot;, GitCommit:&quot;0ad800ef43d3b826f31a5ad8dfbb4fe05d143688&quot;, GitTreeState:&quot;clean&quot;, GoVersion:&quot;go1.13.12&quot;} </code></pre> <p>I do have a cluster created with 10 worker nodes. When I try to install stackstorm-ha on the cluster I see the following issues:</p> <pre><code>helm install stackstorm/stackstorm-ha --generate-name --debug client.go:534: [debug] stackstorm-ha-1592860860-job-st2-apikey-load: Jobs active: 1, jobs failed: 0, jobs succeeded: 0 Error: failed post-install: timed out waiting for the condition helm.go:84: [debug] failed post-install: timed out waiting for the condition njbbmacl2813:~ gangsh9$ kubectl get pods Unable to connect to the server: net/http: TLS handshake timeout </code></pre> <p>kubectl describe pods either shows :</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled &lt;unknown&gt; default-scheduler Successfully assigned default/stackstorm-ha-1592857897-st2api-7f6c877b9c-dtcp5 to k3d-st2hatest-worker-5 Warning Failed 23m kubelet, k3d-st2hatest-worker-5 Error: context deadline exceeded Normal Pulling 17m (x5 over 37m) kubelet, k3d-st2hatest-worker-5 Pulling image &quot;stackstorm/st2api:3.3dev&quot; Normal Pulled 17m (x5 over 28m) kubelet, k3d-st2hatest-worker-5 Successfully pulled image &quot;stackstorm/st2api:3.3dev&quot; Normal Created 17m (x5 over 28m) kubelet, k3d-st2hatest-worker-5 Created container st2api Normal Started 17m (x4 over 28m) kubelet, k3d-st2hatest-worker-5 Started container st2api Warning BackOff 53s (x78 over 20m) kubelet, k3d-st2hatest-worker-5 Back-off restarting failed container </code></pre> <p>or</p> <pre><code>Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled &lt;unknown&gt; default-scheduler Successfully assigned default/stackstorm-ha-1592857897-st2timersengine-c847985d6-74h5k to k3d-st2hatest-worker-2 Warning Failed 6m23s kubelet, k3d-st2hatest-worker-2 Failed to pull image &quot;stackstorm/st2timersengine:3.3dev&quot;: rpc error: code = Unknown desc = failed to pull and unpack image &quot;docker.io/stackstorm/st2timersengine:3.3dev&quot;: failed to resolve reference &quot;docker.io/stackstorm/st2timersengine:3.3dev&quot;: failed to authorize: failed to fetch anonymous token: Get https://auth.docker.io/token?scope=repository%3Astackstorm%2Fst2timersengine%3Apull&amp;service=registry.docker.io: net/http: TLS handshake timeout Warning Failed 6m23s kubelet, k3d-st2hatest-worker-2 Error: ErrImagePull Normal BackOff 6m22s kubelet, k3d-st2hatest-worker-2 Back-off pulling image &quot;stackstorm/st2timersengine:3.3dev&quot; Warning Failed 6m22s kubelet, k3d-st2hatest-worker-2 Error: ImagePullBackOff Normal Pulling 6m10s (x2 over 6m37s) kubelet, k3d-st2hatest-worker-2 Pulling image &quot;stackstorm/st2timersengine:3.3dev&quot; </code></pre> <p>Kind of stuck here.</p> <p>Any help would be greatly appreciated.</p>
<p>The <code>TLS handshake timeout</code> error is very common when the machine that you are running your deployment on is running out of resources. Alternatively, the issue can be caused by a slow internet connection or some proxy settings, but we ruled that out since you can pull and run docker images locally and deploy a small nginx webserver in your cluster.</p> <p>As you may notice in the <code>stackstorm</code> helm chart, it installs a large number of services/pods inside your cluster, which can take up a lot of resources.</p> <blockquote> <p>It will install 2 replicas for each component of StackStorm microservices for redundancy, as well as backends like RabbitMQ HA, MongoDB HA Replicaset and etcd cluster that st2 relies on for MQ, DB and distributed coordination respectively.</p> </blockquote> <p>I deployed <code>stackstorm</code> on both k3d and GKE, but I had to use fast machines in order to deploy this quickly and successfully.</p> <pre><code>NAME: stackstorm
LAST DEPLOYED: Mon Jun 29 15:25:52 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
Congratulations! You have just deployed StackStorm HA! </code></pre>
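<p>A quick way to confirm that the nodes are the bottleneck (assuming metrics-server is available in the cluster, which k3s usually ships with) is to check resource usage before installing the chart:</p> <pre><code>kubectl top nodes
kubectl describe nodes | grep -A 5 &quot;Allocated resources&quot; </code></pre> <p>If CPU or memory is already close to the limit there, the TLS handshake timeouts will keep coming back no matter how many times the pods are restarted.</p>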
<p>I have a question, similar to the one described here: <a href="https://stackoverflow.com/questions/53305955/gke-kubernetes-container-stdout-logs-format-changed">GKE kubernetes container stdout logs format changed</a></p> <p>In the old version of Stackdriver I had 1 sink with a filter like this:</p> <pre><code>resource.type=container,
resource.namespace_id=[NAMESPACE_NAME]
resource.pod_id=[POD_NAME] </code></pre> <p>and logs were stored in the bucket pretty well, like this:</p> <pre><code>logName=projects/[PROJECT-NAME]/logs/[CONTAINER-NAME] </code></pre> <p>...so I had folders with logs for each container.</p> <p>But now I have updated my Stackdriver logging+monitoring to the latest version and now I have 2 folders stdout\stderr which contain all logs for all containers!</p> <pre><code>logName=projects/[PROJECT-NAME]/logs/stdout
logName=projects/[PROJECT-NAME]/logs/stderr </code></pre> <p>All logs from many containers are stored in these two folders! This is pretty uncomfortable =(</p> <p>I've read about this in the docs: <a href="https://cloud.google.com/monitoring/kubernetes-engine/migration#changes_in_log_entry_contents" rel="nofollow noreferrer">https://cloud.google.com/monitoring/kubernetes-engine/migration#changes_in_log_entry_contents</a></p> <blockquote> <p>The logName field might change. Stackdriver Kubernetes Engine Monitoring log entries use stdout or stderr in their log names whereas Legacy Stackdriver used a wider variety of names, including the container name. The container name is still available as a resource label.</p> </blockquote> <p>...but I can't find a solution! Please help me: how can I get per-container folder logging, like it was in the old version of Stackdriver?</p>
<p>Here is a workaround that has been suggested:</p> <ol> <li>Create a different sink for each of your containers filtered by resource.labels.container_name </li> <li>Export each sink to a different bucket</li> </ol> <p><strong>Note</strong>: If you configure each separate sink to the same bucket the logs will be combined.</p> <p>More details at <a href="https://b.corp.google.com/issues/149300373" rel="nofollow noreferrer">Google Issue Tracker</a> </p>
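<p>A sketch of such a sink (the bucket <code>my-logs-bucket</code> and the container name <code>my-container</code> are placeholders):</p> <pre><code>gcloud logging sinks create my-container-sink \
  storage.googleapis.com/my-logs-bucket \
  --log-filter='resource.type=&quot;k8s_container&quot; AND resource.labels.container_name=&quot;my-container&quot;' </code></pre> <p>Repeating this per container, each with its own bucket, gives you back the per-container separation you had with the legacy log names.</p>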
<p>I have a pod running in kubernetes and I need to run two commands in one line.</p> <p>Say,</p> <pre><code>kubectl exec -it &lt;pod name&gt; -n &lt;namespace&gt; -- bash -c redis-cli </code></pre> <p>The above command will open redis-cli.</p> <p>I want to run one more command after exec in the same line, i.e. info. I am trying the below, which is not working:</p> <pre><code>kubectl exec -it &lt;pod name&gt; -n &lt;namespace&gt; -- bash -c redis-cli -- info </code></pre>
<p><strong>You have to put your command and all the parameters between apostrophes.</strong></p> <p>in your example it would be:</p> <pre><code>kubectl exec -it &lt;pod_name&gt; -n &lt;namespace&gt; -- bash -c 'redis-cli info' </code></pre> <blockquote> <p><strong>From <code>Bash manual</code>: bash -c:</strong> If the -c option is present, then commands are read from the first non-option argument commaqnd_string.</p> </blockquote> <p><strong>Other option <em>(which in my opinion is a better approach)</em> is to get the output from the command with an instant pod, which creates, runs and deletes the pod right after that, like this:</strong></p> <pre><code>kubectl run --namespace &lt;YOUR_NAMESPACE&gt; &lt;TEMP_RANDOM_POD_NAME&gt; --rm --tty -i --restart='Never' --env REDIS_PASSWORD=$REDIS_PASSWORD --image docker.io/bitnami/redis:5.0.7-debian-10-r0 -- bash -c 'redis-cli -h redis-master -a $REDIS_PASSWORD info' </code></pre> <p><em>in my case the password was stored in a envvar called $REDIS_PASSWORD and I'm connecting to a server in a pod called redis-master. I let it as I runned it to show that you can use as much parameters as needed.</em></p> <p>POC:</p> <pre><code>user@minikube:~$ kubectl run --namespace default redis-1580466120-client --rm --tty -i --restart='Never' --env REDIS_PASSWORD=$REDIS_PASSWORD --image docker.io/bitnami/redis:5.0.7-debian-10-r0 -- bash -c 'redis-cli -h redis-master -a $REDIS_PASSWORD info' 10:41:10.65 10:41:10.66 Welcome to the Bitnami redis container 10:41:10.66 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-redis 10:41:10.66 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-redis/issues 10:41:10.67 Send us your feedback at [email protected] 10:41:10.67 Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe. # Server redis_version:5.0.7 redis_git_sha1:00000000 redis_git_dirty:0 ... {{{suppressed output}}} ... # CPU used_cpu_sys:1.622434 used_cpu_user:1.313600 used_cpu_sys_children:0.013942 used_cpu_user_children:0.008014 # Cluster cluster_enabled:0 # Keyspace pod "redis-1580466120-client" deleted </code></pre>
<p>So Ceph has a user-space page cache implementation in librbd. Does it allow users to specify how much page cache to allocate to each pod? If yes, can we dynamically change the allocations? </p>
<p>There is no reference to page cache allocation at the POD level according to documentation and issues in the project github.</p> <blockquote> <p>Ceph supports write-back caching for RBD. To enable it, add rbd cache = true to the [client] section of your <code>ceph.conf</code> file. By default <code>librbd</code> does not perform any caching. Writes and reads go directly to the storage cluster, and writes return only when the data is on disk on all replicas. With caching enabled, writes return immediately, unless there are more than <code>rbd cache max dirty</code> unflushed bytes. In this case, the write triggers writeback and blocks until enough bytes are flushed.</p> </blockquote> <p>These are the currently supported RBD cache parameters, and they must be inserted in the [client] section of your <code>ceph.conf</code> file:</p> <p><strong><code>rbd cache</code></strong> = Enable caching for RADOS Block Device (RBD). | <em>Type: Boolean, Required: No, Default: false</em></p> <p><strong><code>rbd cache size</code></strong> = The RBD cache size in bytes. | <em>Type: 64-bit Integer, Required: No, Default: 32 MiB</em></p> <p><strong><code>rbd cache max dirty</code></strong> = The <code>dirty</code> limit in bytes at which the cache triggers write-back. If <code>0</code>, uses write-through caching. | <em>Type: 64-bit Integer, Required: No, Constraint: Must be less than <code>rbd cache size</code>, Default: 24 MiB</em></p> <p><strong><code>rbd cache target dirty</code></strong> = The <code>dirty target</code> before the cache begins writing data to the data storage. Does not block writes to the cache. | <em>Type: 64-bit Integer, Required: No, Constraint: Must be less than <code>rbd cache max dirty</code>, Default: 16 MiB</em></p> <p><strong><code>rbd cache max dirty age</code></strong> = The number of seconds dirty data is in the cache before writeback starts. | <em>Type: Float, Required: No, Default: 1.0</em></p> <p><strong><code>rbd cache writethrough until flush</code></strong> = Start out in write-through mode, and switch to write-back after the first flush request is received. Enabling this is a conservative but safe setting in case VMs running on rbd are too old to send flushes, like the virtio driver in Linux before 2.6.32. | <em>Type: Boolean, Required: No, Default: false</em></p>
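<p>Putting the parameters together, a minimal <code>ceph.conf</code> sketch (the sizes below are only example values, not recommendations):</p> <pre><code>[client]
    rbd cache = true
    rbd cache size = 67108864              # 64 MiB per librbd client
    rbd cache max dirty = 50331648         # must be less than rbd cache size
    rbd cache target dirty = 33554432      # must be less than rbd cache max dirty
    rbd cache max dirty age = 2.0
    rbd cache writethrough until flush = true </code></pre> <p>Note that these settings apply per librbd client (i.e. per RBD volume consumer), not per Kubernetes pod, and they cannot be changed per pod at runtime.</p>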
<p>When I run this command:</p> <blockquote> <p>minikube start --vm-driver=hyperv</p> </blockquote> <p>minikube cannot start and displays the following error:</p> <blockquote> <p>minikube v1.7.2 on Microsoft Windows 10 Enterprise </p> <p>Using the hyperv driver based on user configuration</p> <p>! 'hyperv' driver reported an issue: C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe Get-WindowsOptionalFeature -FeatureName Microsoft-Hyper-V-All -Online failed:</p> <p>Suggestion: Start PowerShell as Administrator, and run: 'Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All'</p> <p>X hyperv does not appear to be installed</p> </blockquote> <p>so I followed the message displayed and I launched the command:</p> <blockquote> <p>Get-WindowsOptionalFeature -FeatureName Microsoft-Hyper-V-All -Online</p> </blockquote> <p>and it shows me the following result:</p> <blockquote> <p>FeatureName : Microsoft-Hyper-V-All DisplayName : Hyper-V Description : Provides management services and tools for creating and running virtual machines and their resources virtuels et de leurs ressources. RestartRequired : Possible State : Enabled CustomProperties :</p> </blockquote> <p>Also, I have verified that Hyper-v is installed correctly. I have minikube 1.7.2 installed. Any idea how to solve this issues ?</p> <p>Thanks for your help.</p>
<p>Finally, I was able to launch minikube with the --force flag. There is an issue with minikube 1.7.2 described here: <a href="https://github.com/kubernetes/minikube/issues/6579" rel="noreferrer">#6579</a></p>
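<p>In other words, with the Hyper-V driver from the question the command that works is along the lines of:</p> <pre><code>minikube start --vm-driver=hyperv --force </code></pre>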
<p>I have provided an API gateway microservice at Azure Kubernetes. When I log in to get a token, it works, but if I want to access the resources with the Bearer Token, it does not allow me to do so.</p> <pre><code>@Override public void configure(HttpSecurity http) throws Exception { http.csrf().disable().authorizeRequests().antMatchers("/authenticate").permitAll(). antMatchers("/users").hasRole("ADMIN") .anyRequest().authenticated() .and().sessionManagement() .sessionCreationPolicy(SessionCreationPolicy.STATELESS); http.addFilterBefore(jwtRequestFilter, UsernamePasswordAuthenticationFilter.class); } </code></pre> <p>my login data are from an admin but he still does not allow me access from /users. The error code that comes to Postman is 403.</p> <p><a href="https://i.stack.imgur.com/o2Ung.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/o2Ung.jpg" alt="enter image description here"></a></p> <p>I suspect it is due to the Azure SQL firewall.</p> <p><a href="https://i.stack.imgur.com/S3V8C.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S3V8C.jpg" alt="enter image description here"></a></p> <p>Can someone tell me why I can start a post request to my API gateway and a JWT but do not get a GetRequest for my users' data?</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: apigateway-front spec: replicas: 1 selector: matchLabels: app: apigateway-front template: metadata: labels: app: apigateway-front spec: nodeSelector: "beta.kubernetes.io/os": linux containers: - name: apigateway-front image: containerregistry.azurecr.io/apigateway:11 resources: requests: cpu: 100m memory: 128Mi limits: cpu: 250m memory: 512Mi ports: - containerPort: 8800 name: apigateway --- apiVersion: v1 kind: Service metadata: name: apigateway-front spec: type: LoadBalancer ports: - port: 8800 selector: app: apigateway-front --- apiVersion: apps/v1 kind: Deployment metadata: name: contacts spec: replicas: 1 selector: matchLabels: app: contacts template: metadata: labels: app: contacts spec: nodeSelector: "beta.kubernetes.io/os": linux containers: - name: contacts image: containerregistry.azurecr.io/contacts:12 resources: requests: cpu: 100m memory: 128Mi limits: cpu: 250m memory: 512Mi ports: - containerPort: 8100 name: contacts --- apiVersion: v1 kind: Service metadata: name: contacts spec: ports: - port: 8100 selector: app: contacts --- apiVersion: apps/v1 kind: Deployment metadata: name: templates spec: replicas: 1 selector: matchLabels: app: templates template: metadata: labels: app: templates spec: nodeSelector: "beta.kubernetes.io/os": linux containers: - name: templates image: containerregistry.azurecr.io/templates:13 resources: requests: cpu: 100m memory: 128Mi limits: cpu: 250m memory: 512Mi ports: - containerPort: 8200 name: templates --- apiVersion: v1 kind: Service metadata: name: templates spec: ports: - port: 8200 selector: app: templates </code></pre> <p>Logs from API-Gateway</p> <pre><code>2020-06-08 07:59:36.097 INFO 1700 --- [ main] s.ApiGateway.ApiGatewayApplication : No active profile set, falling back to default profiles: default 2020-06-08 07:59:37.115 INFO 1700 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data JPA repositories in DEFAULT mode. 2020-06-08 07:59:37.200 INFO 1700 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 73ms. Found 1 JPA repository interfaces. 
2020-06-08 07:59:37.673 WARN 1700 --- [ main] o.s.boot.actuate.endpoint.EndpointId : Endpoint ID 'hystrix.stream' contains invalid characters, please migrate to a valid format. 2020-06-08 07:59:37.924 INFO 1700 --- [ main] o.s.cloud.context.scope.GenericScope : BeanFactory id=1f96386b-fb6d-3ddd-bccb-9a4c4b64c2fd 2020-06-08 07:59:39.047 INFO 1700 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8800 (http) 2020-06-08 07:59:39.062 INFO 1700 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat] 2020-06-08 07:59:39.062 INFO 1700 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.35] 2020-06-08 07:59:39.338 INFO 1700 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext 2020-06-08 07:59:39.338 INFO 1700 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 3192 ms 2020-06-08 07:59:39.484 WARN 1700 --- [ main] c.n.c.sources.URLConfigurationSource : No URLs will be polled as dynamic configuration sources. 2020-06-08 07:59:39.484 INFO 1700 --- [ main] c.n.c.sources.URLConfigurationSource : To enable URLs as dynamic configuration sources, define System property archaius.configurationSource.additionalUrls or make config.properties available on classpath. 2020-06-08 07:59:39.513 INFO 1700 --- [ main] c.netflix.config.DynamicPropertyFactory : DynamicPropertyFactory is initialized with configuration sources: com.netflix.config.ConcurrentCompositeConfiguration@77bc2e16 2020-06-08 07:59:39.599 WARN 1700 --- [ main] JpaBaseConfiguration$JpaWebConfiguration : spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 2020-06-08 07:59:39.939 INFO 1700 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting... 2020-06-08 07:59:40.688 INFO 1700 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Start completed. 
2020-06-08 07:59:40.776 INFO 1700 --- [ main] o.hibernate.jpa.internal.util.LogHelper : HHH000204: Processing PersistenceUnitInfo [name: default] 2020-06-08 07:59:40.881 INFO 1700 --- [ main] org.hibernate.Version : HHH000412: Hibernate ORM core version 5.4.15.Final 2020-06-08 07:59:41.143 INFO 1700 --- [ main] o.hibernate.annotations.common.Version : HCANN000001: Hibernate Commons Annotations {5.1.0.Final} 2020-06-08 07:59:41.385 INFO 1700 --- [ main] org.hibernate.dialect.Dialect : HHH000400: Using dialect: org.hibernate.dialect.SQLServer2012Dialect 2020-06-08 07:59:42.377 INFO 1700 --- [ main] o.h.e.t.j.p.i.JtaPlatformInitiator : HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform] 2020-06-08 07:59:42.388 INFO 1700 --- [ main] j.LocalContainerEntityManagerFactoryBean : Initialized JPA EntityManagerFactory for persistence unit 'default' 2020-06-08 07:59:43.793 INFO 1700 --- [ main] o.s.s.web.DefaultSecurityFilterChain : Creating filter chain: any request, [org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@498b611e, org.springframework.security.web.context.SecurityContextPersistenceFilter@47fca3cc, org.springframework.security.web.header.HeaderWriterFilter@6c2dd88b, org.springframework.security.web.authentication.logout.LogoutFilter@3909a854, sendMessage.ApiGateway.JwtRequestFilter@1b98355f, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@6a0c7af6, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@3d7b3b18, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@6dde1bf5, org.springframework.security.web.session.SessionManagementFilter@484b5a21, org.springframework.security.web.access.ExceptionTranslationFilter@5bccaedb, org.springframework.security.web.access.intercept.FilterSecurityInterceptor@1e000a17] 2020-06-08 07:59:43.838 WARN 1700 --- [ main] c.n.c.sources.URLConfigurationSource : No URLs will be polled as dynamic configuration sources. 2020-06-08 07:59:43.838 INFO 1700 --- [ main] c.n.c.sources.URLConfigurationSource : To enable URLs as dynamic configuration sources, define System property archaius.configurationSource.additionalUrls or make config.properties available on classpath. 
2020-06-08 07:59:44.010 INFO 1700 --- [ main] o.s.s.concurrent.ThreadPoolTaskExecutor : Initializing ExecutorService 'applicationTaskExecutor' 2020-06-08 07:59:44.219 WARN 1700 --- [ main] ion$DefaultTemplateResolverConfiguration : Cannot find template location: classpath:/templates/ (please add some templates or check your Thymeleaf configuration) 2020-06-08 07:59:44.672 INFO 1700 --- [ main] o.s.c.n.zuul.ZuulFilterInitializer : Starting filter initializer 2020-06-08 07:59:44.689 INFO 1700 --- [ main] o.s.b.a.e.web.EndpointLinksResolver : Exposing 2 endpoint(s) beneath base path '/actuator' 2020-06-08 07:59:44.769 INFO 1700 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8800 (http) with context path '' 2020-06-08 07:59:44.916 INFO 1700 --- [ main] s.ApiGateway.ApiGatewayApplication : Started ApiGatewayApplication in 10.045 seconds (JVM running for 15.368) 2020-06-08 08:19:19.354 INFO 1700 --- [nio-8800-exec-2] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet' 2020-06-08 08:19:19.355 INFO 1700 --- [nio-8800-exec-2] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet' 2020-06-08 08:19:19.395 INFO 1700 --- [nio-8800-exec-2] o.s.web.servlet.DispatcherServlet : Completed initialization in 40 ms 2020-06-08 08:19:19.450 WARN 1700 --- [nio-8800-exec-2] o.s.c.n.zuul.web.ZuulHandlerMapping : No routes found from RouteLocator </code></pre>
<p>Problem solved. I forgot to include the Authorization header in Postman.</p>
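<p>For anyone hitting the same 403: the request has to carry the token returned by <code>/authenticate</code>, for example (gateway IP and token are placeholders):</p> <pre><code>curl -H &quot;Authorization: Bearer &lt;JWT_TOKEN&gt;&quot; http://&lt;EXTERNAL_IP&gt;:8800/users </code></pre> <p>In Postman this is the <code>Authorization</code> header, or the Bearer Token option on the Authorization tab.</p>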
<p>I am using below manifest to run some k8s Job, However i am not able to submit job successfully due to below error.</p> <pre><code>apiVersion: batch/v1 kind: Job metadata: name: spark-on-eks spec: template: spec: imagePullSecrets: - name: mycreds containers: - name: spark image: repo:buildversion command: - &quot;/bin/sh&quot; - &quot;-c&quot; - '/opt/spark/bin/spark-submit \ --master k8s://EKSEndpoint \ --deploy-mode cluster \ --name spark-luluapp \ --class com.ll.jsonclass \ --conf spark.jars.ivy=/tmp/.ivy \ --conf spark.kubernetes.container.image=repo:buildversion \ --conf spark.kubernetes.namespace=spark-pi \ --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark-sa \ --conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \ --conf spark.kubernetes.authenticate.executor.serviceAccountName=spark-sa \ --conf spark.kubernetes.driver.pod.name=spark-job-driver \ --conf spark.executor.instances=4 \ local:///opt/spark/examples/App-buildversion-SNAPSHOT.jar \ [mks,env,reg,&quot;dd.mm.yyyy&quot;,&quot;true&quot;,&quot;off&quot;,&quot;db-comp-results&quot;,&quot;true&quot;,&quot;XX&quot;,&quot;XXX&quot;,&quot;XXXXX&quot;,&quot;XXX&quot;,$$,###] ' serviceAccountName: spark-pi restartPolicy: Never backoffLimit: 4 </code></pre> <p>Error: Error: ImagePullBackOff Normal Pulling Pulling image &quot;repo/buildversion&quot; Warning Failed Failed to pull image &quot;repo/buildversion&quot;: rpc error: code = Unknown desc = Error response from daemon: unauthorized: The client does not have permission for manifest</p> <p>i checked the secrets which i have listed, is already created and in use with already deployed applications.</p> <p>Is this issue is related to init containers which are being used as secret injection for pods/jobs, or something i am missing in my manifest, also, i am running above step as apart of Auotmation on one of the Jenkins Slave, and it works fine for other application-pods ( Not sure of k8s jobs )</p>
<p>Are you using port, docker path, or reverse proxy configuration in Artifactory?</p> <p>Validate first on another machine you can pull the image.</p> <p>i.e. (docker path)</p> <pre><code>docker login ${ARTIFACTORY_URL} docker pull ${ARTIFACTORY_URL}/repo/image:tag </code></pre> <p>I believe you may be using a reverse proxy config given the naming convention:</p> <pre><code>&quot;repo:buildversion&quot; </code></pre> <p>In this scenario you need to do a docker login to the repo:</p> <pre><code>docker login repo docker push repo:buildversion </code></pre> <p>What this means for k8s is you likely used the wrong docker-server URL and this is why authentication won't work even with a valid API key.</p> <p>If you are using reverse proxy try this:</p> <pre><code>kubectl create secret docker-registry mycred \ --docker-server=repo \ --docker-username=&lt;your-name&gt; \ --docker-password=&lt;your-api-key&gt; \ --docker-email=&lt;your-email&gt; </code></pre>
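<p>It is also worth double-checking that the secret referenced by the Job (<code>mycreds</code>) exists in the same namespace and points at the same registry host, for example:</p> <pre><code>kubectl get secret mycreds --output=&quot;jsonpath={.data.\.dockerconfigjson}&quot; | base64 --decode </code></pre> <p>The <code>auths</code> key in the decoded JSON must match the registry prefix used in <code>image: repo:buildversion</code>; otherwise the kubelet falls back to an anonymous pull and fails with an unauthorized error like the one above.</p>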
<p>I am new to kubernetes and want to follow the logs for two pods at the same time. But to do that I have to open two terminals and use the following command:</p> <pre class="lang-sh prettyprint-override"><code>kubectl logs -f &lt;POD_NAME&gt; -n namespace </code></pre> <p>I was wondering if there is a better way to get the logs without opening multiple terminals, as this becomes a serious problem as the number of pods increases.</p> <p>I am not looking for a logging tool, just an easy-to-set-up way which can help me achieve this.</p> <p>Thanks in advance.</p>
<p>As already explained in the previous answer, you can use labels if the pods share any. If you have multiple containers in a pod you can use this command:</p> <pre><code>kubectl logs -n &lt;namespace&gt; -f deployment/&lt;app-name&gt; --all-containers=true </code></pre> <p>If you are looking for a simple tool, you have a couple of options:</p> <ul> <li><a href="https://github.com/stern/stern" rel="nofollow noreferrer">Stern</a></li> </ul> <p>Stern allows you to tail multiple pods on Kubernetes and multiple containers within the pod. Each result is color coded for quicker debugging. The query is a regular expression, so the pod name can easily be filtered and you don't need to specify the exact id (for instance omitting the deployment id). If a pod is deleted it gets removed from the tail, and if a new pod is added it automatically gets tailed.</p> <p>With a simple command like this:</p> <pre><code>stern -n &lt;namespace&gt; &lt;app-name&gt; -t --since 30m </code></pre> <p>Stern will tail logs from the given namespace for that app name from the last 30 minutes.</p> <ul> <li><a href="https://github.com/johanhaleby/kubetail" rel="nofollow noreferrer">Kubetail</a></li> </ul> <p>It's a bash script that enables you to aggregate (tail/follow) logs from multiple pods into one stream. This is the same as running &quot;kubectl logs -f&quot; but for multiple pods.</p> <ul> <li><a href="https://github.com/boz/kail" rel="nofollow noreferrer">Kail</a></li> </ul> <p>Streams logs from all containers of all matched pods. Match pods by service, replicaset, deployment, and others. Adjusts to a changing cluster - pods are added and removed from logging as they fall in or out of the selection.</p> <ul> <li><a href="https://github.com/dtan4/k8stail" rel="nofollow noreferrer">k8stail</a></li> </ul> <p>It's a tail -f experience for Kubernetes Pods.</p>
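<p>For completeness, the plain <code>kubectl</code> variant with a label selector (assuming both pods share the label <code>app=my-app</code>) looks like this:</p> <pre><code>kubectl logs -f -l app=my-app -n &lt;namespace&gt; --max-log-requests=10 </code></pre> <p><code>--max-log-requests</code> only matters when you follow more than 5 pods at once, which is its default limit.</p>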
<p>We've been setting up a number of private GCP GKE clusters which work quite well. Each currently has a single node pool of 2 ContainerOS nodes.</p> <p>We also have a non-K8s Compute Engine in the network that is a FreeBSD NFS server and is configured for 10Gbps networking.</p> <p>When we log in to the K8s nodes, it appears that they do not support 10Gbps networking out of the box. We suspect this, because "large-receive-offload" seems to be turned off in the network interface(s).</p> <p>We have created persistent storage claims inside the Kubernetes clusters for shares from this fileserver, and we would like them to support the 10Gbps networking but worry that it is limited to 1Gbps by default.</p> <p>Google only seems to offer a few options for the image of its node-pools (either ContainerOS or Ubuntu). This is limited both through their GCP interface as well as the cluster creation command.</p> <p>My question is:</p> <ul> <li>Is it at all possible to support 10Gbps networking somehow in GCP GKE clusters?</li> </ul> <p>Any help would be much appreciated.</p>
<blockquote> <ul> <li>Is it at all possible to support 10Gbps networking somehow in GCP GKE clusters?</li> </ul> </blockquote> <p><strong>Yes, GKE natively supports 10GE connections out-of-the-box</strong>, just like Compute Engine Instances, but it <strong>does not support custom node images.</strong></p> <p>A good way to test your speed limits is using <strong><a href="https://iperf.fr/" rel="nofollow noreferrer">iperf3</a></strong>.</p> <p>I created a GKE instance with default settings to test the connectivity speed.</p> <p>I also created a Compute Engine VM named <em>Debian9-Client</em> which will host our test, as you see below:</p> <p><a href="https://i.stack.imgur.com/F28l9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F28l9.png" alt="Cloud Console"></a></p> <ul> <li><strong>First we set up our VM with iperf3 server running:</strong></li> </ul> <pre><code>❯ gcloud compute ssh debian9-client-us --zone "us-central1-a" user@debian9-client-us:~$ iperf3 -s -p 7777 ----------------------------------------------------------- Server listening on 7777 ----------------------------------------------------------- </code></pre> <ul> <li><strong>Then we move to our GKE to run the test from a POD:</strong></li> </ul> <pre><code>❯ k get nodes NAME STATUS ROLES AGE VERSION gke-cluster-1-pool-1-4776b3eb-16t7 Ready &lt;none&gt; 16m v1.15.7-gke.23 gke-cluster-1-pool-1-4776b3eb-mp84 Ready &lt;none&gt; 16m v1.15.7-gke.23 ❯ kubectl run -i --tty --image ubuntu test-shell -- /bin/bash root@test-shell-845c969686-6h4nl:/# apt update &amp;&amp; apt install iperf3 -y root@test-shell-845c969686-6h4nl:/# iperf3 -c 10.128.0.5 -p 7777 Connecting to host 10.128.0.5, port 7777 [ 4] local 10.8.0.6 port 60946 connected to 10.128.0.5 port 7777 [ ID] Interval Transfer Bandwidth Retr Cwnd [ 4] 0.00-1.00 sec 661 MBytes 5.54 Gbits/sec 5273 346 KBytes [ 4] 1.00-2.00 sec 1.01 GBytes 8.66 Gbits/sec 8159 290 KBytes [ 4] 2.00-3.00 sec 1.08 GBytes 9.31 Gbits/sec 6381 158 KBytes [ 4] 3.00-4.00 sec 1.00 GBytes 8.62 Gbits/sec 9662 148 KBytes [ 4] 4.00-5.00 sec 1.08 GBytes 9.27 Gbits/sec 8892 286 KBytes [ 4] 5.00-6.00 sec 1.11 GBytes 9.51 Gbits/sec 6136 532 KBytes [ 4] 6.00-7.00 sec 1.09 GBytes 9.32 Gbits/sec 7150 755 KBytes [ 4] 7.00-8.00 sec 883 MBytes 7.40 Gbits/sec 6973 177 KBytes [ 4] 8.00-9.00 sec 1.04 GBytes 8.90 Gbits/sec 9104 212 KBytes [ 4] 9.00-10.00 sec 1.08 GBytes 9.29 Gbits/sec 4993 594 KBytes - - - - - - - - - - - - - - - - - - - - - - - - - [ ID] Interval Transfer Bandwidth Retr [ 4] 0.00-10.00 sec 9.99 GBytes 8.58 Gbits/sec 72723 sender [ 4] 0.00-10.00 sec 9.99 GBytes 8.58 Gbits/sec receiver iperf Done. </code></pre> <p>The average transfer rate was 8.58 Gbits/sec on this test, proving that the cluster node is, by default, running with 10Gbps Ethernet.</p> <p>If I can help you further, just let me know in the comments.</p>
<p>I know this is somewhat specific of a question, but I'm having a problem I can't seem to track down. I have a single pod deployed to EKS - the pod contains a python app, and a varnish reverse caching proxy. I'm serving chunked json (that is, streaming lines of json, a la <a href="http://jsonlines.org/" rel="nofollow noreferrer">http://jsonlines.org/</a>), and it can be multiple GB of data.</p> <p>The first time I make a request, and it hits the python server, everything acts correctly. It takes (much) longer than the cached version, but the entire set of json lines is downloaded. However, now that it's cached in varnish, if I use curl, I get:</p> <pre><code>curl: (56) GnuTLS recv error (-110): The TLS connection was non-properly terminated. </code></pre> <p>or</p> <pre><code>curl: (56) GnuTLS recv error (-9): A TLS packet with unexpected length was received. </code></pre> <p>The SSL is terminated at the ELB, and when I use curl from the proxy container itself (using <code>curl http://localhost?....</code>), there is no problem.</p> <p>The hard part of this is that the problem is somewhat intermittent.</p> <p>If there is any advice in terms of clever <code>varnishlog</code> usage, or anything of the same ilk on AWS, I'd be much obliged.</p> <p>Thanks!</p>
<p>Because <em>TLS</em> is terminated on your <em>ELB</em> loadbalancers, the connection between the ELB and Varnish should be plain HTTP.</p> <p>The error is probably not coming from Varnish, because Varnish currently doesn't handle TLS natively. I'm not sure if <code>varnishlog</code> can give you better insight into what is actually happening.</p> <h2>Checklist</h2> <p>The only checklist I can give you is the following:</p> <ul> <li>Make sure the certificate you're using is valid</li> <li>Make sure you're connecting to your target group over HTTP, not HTTPS</li> <li>If you enable the PROXY protocol on your ELB, make sure Varnish has a <code>-a</code> listener that listens for <code>PROXY</code> protocol requests, on top of regular HTTP requests (see the sketch below).</li> </ul> <h2>Debugging</h2> <p>Perform top-down debugging:</p> <ul> <li>Increase the verbosity of your <em>cURL</em> calls and try to get more information about the error</li> <li>Try accessing the logs of your <em>ELB</em> and get more details there</li> <li>Get more information from your <em>EKS</em> logs</li> <li>And finally, perform a <code>varnishlog -g request -q "ReqUrl eq '/your-url'"</code> to get a full Varnishlog for a specific URL</li> </ul>
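<p>To illustrate the PROXY protocol point from the checklist — the ports below are examples, not taken from the question — a <code>varnishd</code> start-up with both a plain HTTP listener and a PROXY listener could look roughly like this:</p> <pre><code># Plain HTTP listener for normal traffic, plus a listener that accepts
# the PROXY protocol (point the ELB at 6086 only if PROXY is enabled there)
varnishd -a :6081 -a :6086,PROXY -f /etc/varnish/default.vcl -s malloc,256m
</code></pre>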
<p>Here is my problem: I have 3 services defined in a kubernetes yaml file:</p> <ul> <li>one front-end (website)</li> <li>one back-end : stateful, for user sessions</li> <li>one back-end : stateless</li> </ul> <p>I need session affinity on the stateful service, but not on the stateless nor front-end service. I need the session affinity to be cookie-based, not clientIP based.</p> <pre><code>mydomain/stateful ===&gt; Front-End Service (3 pods) ===&gt; Stateful Service (3 pods, need session affinity) mydomain/stateless ===&gt; Front-End Service (3 pods) ===&gt; Stateless Service (3 pods, do not need session affinity) </code></pre> <p>I tried to use Ingress service, but I fail to see how I can use it as a proxy in-between 2 services inside the Kubernetes Cluster. All the examples I see show how to use Ingress as a router for request coming from outside the Cluster.</p> <p>Here is my poc.yaml so far:</p> <pre><code>#################################################################### ######################### STATEFUL BACKEND ######################### # Deployment for pocbackend containers, listening on port 3000 apiVersion: apps/v1 kind: Deployment metadata: name: stateful-deployment spec: replicas: 3 selector: matchLabels: app: stateful-backend tier: backend template: metadata: labels: app: stateful-backend tier: backend spec: containers: - name: pocbackend image: pocbackend:2.0 ports: - name: http containerPort: 3000 --- # Service for Stateful containers, listening on port 3000 apiVersion: v1 kind: Service metadata: name: api-stateful spec: selector: app: stateful-backend tier: backend ports: - protocol: TCP port: 3002 targetPort: http #sessionAffinity: ClientIP --- ##################################################################### ######################### STATELESS BACKEND ######################### # Deployment for pocbackend containers, listening on port 3000 apiVersion: apps/v1 kind: Deployment metadata: name: stateless-backend spec: replicas: 3 selector: matchLabels: app: stateless-backend tier: backend template: metadata: labels: app: stateless-backend tier: backend spec: containers: - name: pocbackend image: pocbackend:2.0 ports: - name: http containerPort: 3000 --- # Service for Stateless containers, listening on port 3000 apiVersion: v1 kind: Service metadata: name: api-stateless spec: selector: app: stateless-backend tier: backend ports: - protocol: TCP port: 3001 targetPort: http --- ############################################################# ######################### FRONT END ######################### # deployment of the container pocfrontend listening to port 3500 apiVersion: apps/v1 kind: Deployment metadata: name: front-deployment spec: replicas: 1 selector: matchLabels: app: frontend tier: frontend template: metadata: labels: app: frontend tier: frontend spec: containers: - name: pocfrontend image: pocfrontend:2.0 ports: - name: http containerPort: 3500 --- # Service exposing frontend on node port 85 apiVersion: v1 kind: Service metadata: name: frontend-service spec: type: LoadBalancer selector: app: frontend tier: frontend ports: - protocol: TCP port: 85 targetPort: http </code></pre> <p>Do you know how to solve my problem?</p> <p>Thanks!</p>
<p>Natively, Kubernetes does not provide cookie-based session affinity at the Service level; a Service only supports <code>sessionAffinity: ClientIP</code>. </p> <p>The only way that comes to my mind is to use Istio and its <code>Destination Rules</code>. Taken from the istio manual:</p> <blockquote> <p><code>DestinationRule</code> defines policies that apply to traffic intended for a service after routing has occurred. These rules specify configuration for load balancing, connection pool size from the sidecar, and outlier detection settings to detect and evict unhealthy hosts from the load balancing pool.</p> </blockquote> <p><a href="https://dev.to/peterj/what-are-sticky-sessions-and-how-to-configure-them-with-istio-1e1a" rel="noreferrer">This document</a> shows how to configure <code>sticky sessions</code> with Istio. </p>
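<p>For illustration, here is a minimal sketch of a cookie-based sticky-session <code>DestinationRule</code> for the stateful backend from the question — the cookie name and TTL are made up, and it assumes the pods are already part of the Istio mesh:</p> <pre><code>apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: api-stateful-sticky
spec:
  host: api-stateful            # the stateful backend Service from the question
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpCookie:
          name: session-cookie   # hypothetical cookie name
          ttl: 3600s             # Istio sets the cookie with this TTL if it is missing
</code></pre> <p>Requests carrying the same cookie value are then consistently hashed to the same <code>api-stateful</code> pod, while the stateless Service is left untouched.</p>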
<p>I have setup a <code>kubernetes</code> cluster with <code>kubeamd</code>; One control-plane and a worker node.</p> <p>Everything worked fine. Then I setup a Squid proxy on the worker node and in the <code>kubelet</code> config I have set <code>http_proxy=http://127.0.0.1:3128</code> essentially asking <code>kubelet</code> to use the proxy to communicate to the control-plane.</p> <p>I see, using tcpdump, network packets landing on the control plane from worker node, and I am able to issue the following command from worker as well;</p> <pre><code>kubectl get no --server=https://10.128.0.63:6443 NAME STATUS ROLES AGE VERSION k8-cp Ready master 6d6h v1.17.0 k8-worker NotReady &lt;none&gt; 6d6h v1.17.2 </code></pre> <p>but the worker status always remains NotReady. What might I be doing wrong?</p> <p>I am using Flannel here for networking.</p> <p>P.S. I have exported <code>http_proxy=http://127.0.0.1:3128</code> as an env variable as well before issuing</p> <pre><code>kubectl get no --server=https://10.128.0.63:6443 </code></pre> <p>from the worker node.</p> <p>If it matters here is the node status;</p> <pre><code>kubectl describe no k8-worker Name: k8-worker Roles: &lt;none&gt; Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=k8-worker kubernetes.io/os=linux Annotations: flannel.alpha.coreos.com/backend-data: {"VtepMAC":"fe:04:d6:53:ef:cc"} flannel.alpha.coreos.com/backend-type: vxlan flannel.alpha.coreos.com/kube-subnet-manager: true flannel.alpha.coreos.com/public-ip: 10.128.0.71 kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 29 Jan 2020 08:08:33 +0000 Taints: node.kubernetes.io/unreachable:NoExecute node.kubernetes.io/unreachable:NoSchedule Unschedulable: false Lease: HolderIdentity: k8-worker AcquireTime: &lt;unset&gt; RenewTime: Thu, 30 Jan 2020 11:51:24 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure Unknown Thu, 30 Jan 2020 11:48:25 +0000 Thu, 30 Jan 2020 11:52:08 +0000 NodeStatusUnknown Kubelet stopped posting node status. DiskPressure Unknown Thu, 30 Jan 2020 11:48:25 +0000 Thu, 30 Jan 2020 11:52:08 +0000 NodeStatusUnknown Kubelet stopped posting node status. PIDPressure Unknown Thu, 30 Jan 2020 11:48:25 +0000 Thu, 30 Jan 2020 11:52:08 +0000 NodeStatusUnknown Kubelet stopped posting node status. Ready Unknown Thu, 30 Jan 2020 11:48:25 +0000 Thu, 30 Jan 2020 11:52:08 +0000 NodeStatusUnknown Kubelet stopped posting node status. 
Addresses: InternalIP: 10.128.0.71 Hostname: k8-worker Capacity: cpu: 2 ephemeral-storage: 104844988Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7493036Ki pods: 110 Allocatable: cpu: 2 ephemeral-storage: 96625140781 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7390636Ki pods: 110 System Info: Machine ID: 3221f625fa75d20f08bceb4cacf74e20 System UUID: 6DD87A9F-7F72-5326-5B84-1B3CBC4D9DBE Boot ID: 7412bb51-869f-40de-8b37-dcbad6bf84b4 Kernel Version: 3.10.0-1062.9.1.el7.x86_64 OS Image: CentOS Linux 7 (Core) Operating System: linux Architecture: amd64 Container Runtime Version: docker://1.13.1 Kubelet Version: v1.17.2 Kube-Proxy Version: v1.17.2 PodCIDR: 10.244.1.0/24 PodCIDRs: 10.244.1.0/24 Non-terminated Pods: (3 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE --------- ---- ------------ ---------- --------------- ------------- --- default nginx-86c57db685-fvh28 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6d20h kube-system kube-flannel-ds-amd64-b8vbr 100m (5%) 100m (5%) 50Mi (0%) 50Mi (0%) 6d23h kube-system kube-proxy-rsr7l 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6d23h Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 100m (5%) 100m (5%) memory 50Mi (0%) 50Mi (0%) ephemeral-storage 0 (0%) 0 (0%) Events: &lt;none&gt; </code></pre> <p>Link to kubelet logs on worker:</p> <p><a href="https://pastebin.com/E90FNEXR" rel="nofollow noreferrer">https://pastebin.com/E90FNEXR</a></p>
<p>The Kube-controller-manager/node-controller is responsible for monitoring the health of the nodes by polling the endpoint <strong>&quot;/healthz&quot;</strong> exposed by kubelet.</p> <p>So far you have configured one-way communication over the proxy (from Node to Master).</p> <p>You need to do it for the other components as well, especially Kube-controller-manager. This way you enable two-way communication over the HTTP Proxy.</p> <blockquote> <p>This is achievable by specifying <strong>HTTP_PROXY</strong> on <strong>KUBEADM INIT:</strong></p> <p><code>$ sudo http_proxy=192.168.1.20:3128 kubeadm init</code></p> </blockquote> <p>Learn more here: <a href="https://github.com/kubernetes/kubeadm/issues/182" rel="nofollow noreferrer"> Kubeadm Issue 182</a></p> <ul> <li>It creates a one-time variable which is read in by kubeadm and then re-created as an environment variable inside all control-plane components.</li> </ul> <p>You will see some output like this:</p> <pre><code>kubeadm@lab-1:~$ sudo http_proxy=192.168.1.20:3128 kubeadm init [init] Using Kubernetes version: v1.17.0 [preflight] Running pre-flight checks [WARNING HTTPProxy]: Connection to &quot;https://10.156.0.6&quot; uses proxy &quot;http://192.168.1.20:3128&quot;. If that is not intended, adjust your proxy settings [WARNING HTTPProxyCIDR]: connection to &quot;10.96.0.0/12&quot; uses proxy &quot;http://192.168.1.20:3128&quot;. This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration </code></pre> <ul> <li>Optionally you can do this manually through environment variables, like you did for kubelet, by adjusting kube-controller-manager's pod spec (a sketch is shown below).</li> </ul> <p>Learn more here: <a href="https://github.com/kubernetes/kubeadm/issues/324" rel="nofollow noreferrer">Kubeadm Issue 324</a>.</p>
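<p>If you go the manual route instead, a rough sketch of the static pod manifest change on the control-plane node could look like this (the proxy address is the example one from above; make sure <code>NO_PROXY</code> covers your node, pod and service CIDRs, otherwise in-cluster traffic would also be sent through the proxy):</p> <pre><code># /etc/kubernetes/manifests/kube-controller-manager.yaml (excerpt)
spec:
  containers:
  - name: kube-controller-manager
    # ... existing command/flags stay as they are ...
    env:
    - name: HTTP_PROXY
      value: http://192.168.1.20:3128
    - name: HTTPS_PROXY
      value: http://192.168.1.20:3128
    - name: NO_PROXY
      value: 127.0.0.1,10.128.0.0/16,10.96.0.0/12,10.244.0.0/16
</code></pre> <p>The kubelet watches the static manifests directory, so saving the file is enough to restart kube-controller-manager with the new environment.</p>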
<p>I have a test cluster in GKE with several apps. Some of them must be exposed on a single IP as a service of <code>type: LoadBalancer</code>.</p> <p>I've reserved a static external address and used it in the YAMLs of my services as <code>loadBalancerIP</code>. Everything is OK except one service. It's an FTP server with ports <code>20-21</code>, and <code>30000-30005</code> for passive mode. GKE automatically configures the load balancer for each service with a port range spanning from the lowest port to the greatest, so this service ends up with the range <code>20-30005</code>, which obviously overlaps every other service of my cluster, and its external IP stays in the pending state.</p> <p>Is there any solution to this problem? My thoughts bring me to using the <code>externalIPs</code> field with a manually created load balancer (forwarding rules and targets in the GCP network services console), or maybe both <code>loadBalancerIP</code> and <code>externalIPs</code> with the same IP, but I am not sure about that. Will it work correctly? Are there other solutions?</p>
<p>After trying almost everything, I've just realized that, with such GKE LB implementation behaviour, I can actually create two services: the first for the active-mode port range, the second for the passive range, both with a selector pointing to the FTP app (sketched below). It doesn't sound perfect, but this is the only correctly working solution I've found so far.</p>
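<p>For illustration, here is a minimal sketch of that split — the Service names and selector label are placeholders, and sharing one reserved <code>loadBalancerIP</code> between the two Services is an assumption taken from the answer above rather than something guaranteed on every GKE setup:</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: ftp-active
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # the reserved static IP (placeholder)
  selector:
    app: ftp-server              # placeholder label on the FTP pods
  ports:
  - name: ftp-data
    port: 20
    targetPort: 20
    protocol: TCP
  - name: ftp-cmd
    port: 21
    targetPort: 21
    protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: ftp-passive
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # same reserved IP (assumption, see note above)
  selector:
    app: ftp-server
  ports:
  - name: pasv-30000
    port: 30000
    targetPort: 30000
    protocol: TCP
  # ... one entry per passive port up to 30005
</code></pre> <p>This keeps each Service's lowest-to-highest port range narrow, so the provisioned forwarding rules no longer overlap the other Services in the cluster.</p>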
<p>I am trying to create a Helm chart for varnish to be deployed/run on Kubernetes cluster. While running the helm package which has varnish image from Docker community its throwing error</p> <pre><code>Readiness probe failed: HTTP probe failed with statuscode: 503 Liveness probe failed: HTTP probe failed with statuscode: 503 </code></pre> <p>Have shared <code>values.yaml</code>, <code>deployment.yaml</code>, <code>varnish-config.yaml</code>, <code>varnish.vcl</code>.</p> <p>Any solution approached would be welcomed....</p> <p><strong>Values.yaml:</strong></p> <pre class="lang-yaml prettyprint-override"><code> # Default values for tt. # This is a YAML-formatted file. # Declare variables to be passed into your templates. replicaCount: 1 #vcl 4.0; #import std; #backend default { # .host = "www.varnish-cache.org"; # .port = "80"; # .first_byte_timeout = 60s; # .connect_timeout = 300s; #} varnishBackendService: "www.varnish-cache.org" varnishBackendServicePort: "80" image: repository: varnish tag: 6.0.6 pullPolicy: IfNotPresent nameOverride: "" fullnameOverride: "" service: type: ClusterIP port: 80 #probes: # enabled: true ingress: enabled: false annotations: {} # kubernetes.io/ingress.class: nginx # kubernetes.io/tls-acme: "true" path: / hosts: - chart-example.local tls: [] # - secretName: chart-example-tls # hosts: # - chart-example.local resources: limits: memory: 128Mi requests: memory: 64Mi #resources: {} # We usually recommend not to specify default resources and to leave this as a conscious # choice for the user. This also increases chances charts run on environments with little # resources, such as Minikube. If you do want to specify resources, uncomment the following # lines, adjust them as necessary, and remove the curly braces after 'resources:'. # limits: # cpu: 100m # memory: 128Mi # requests: # cpu: 100m # memory: 128Mi nodeSelector: {} tolerations: [] affinity: {} </code></pre> <p><strong>Deployment.yaml:</strong></p> <pre class="lang-yaml prettyprint-override"><code> apiVersion: apps/v1beta2 kind: Deployment metadata: name: {{ include "varnish.fullname" . }} labels: app: {{ include "varnish.name" . }} chart: {{ include "varnish.chart" . }} release: {{ .Release.Name }} heritage: {{ .Release.Service }} spec: replicas: {{ .Values.replicaCount }} selector: matchLabels: app: {{ include "varnish.name" . }} release: {{ .Release.Name }} template: metadata: labels: app: {{ include "varnish.name" . }} release: {{ .Release.Name }} # annotations: # sidecar.istio.io/rewriteAppHTTPProbers: "true" spec: volumes: - name: varnish-config configMap: name: {{ include "varnish.fullname" . }}-varnish-config items: - key: default.vcl path: default.vcl containers: - name: {{ .Chart.Name }} image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" imagePullPolicy: {{ .Values.image.pullPolicy }} env: - name: VARNISH_VCL value: /etc/varnish/default.vcl volumeMounts: - name: varnish-config mountPath : /etc/varnish/ ports: - name: http containerPort: 80 protocol: TCP targetPort: 80 livenessProbe: httpGet: path: /healthcheck port: http port: 80 failureThreshold: 3 initialDelaySeconds: 45 timeoutSeconds: 10 periodSeconds: 20 readinessProbe: httpGet: path: /healthcheck port: http port: 80 initialDelaySeconds: 10 timeoutSeconds: 15 periodSeconds: 5 restartPolicy: "Always" resources: {{ toYaml .Values.resources | indent 12 }} {{- with .Values.nodeSelector }} nodeSelector: {{ toYaml . | indent 8 }} {{- end }} {{- with .Values.affinity }} affinity: {{ toYaml . 
| indent 8 }} {{- end }} {{- with .Values.tolerations }} tolerations: {{ toYaml . | indent 8 }} {{- end }} </code></pre> <p><strong>vanrnish-config.yaml:</strong></p> <pre class="lang-yaml prettyprint-override"><code> apiVersion: v1 kind: ConfigMap metadata: name: {{ template "varnish.fullname" . }}-varnish-config labels: app: {{ template "varnish.fullname" . }} chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" release: "{{ .Release.Name }}" heritage: "{{ .Release.Service }}" data: default.vcl: |- {{ $file := (.Files.Get "config/varnish.vcl") }} {{ tpl $file . | indent 4 }} </code></pre> <p><strong>varnish.vcl:</strong></p> <pre><code> # VCL version 5.0 is not supported so it should be 4.0 or 4.1 even though actually used Varnish version is 6 vcl 4.1; import std; # The minimal Varnish version is 5.0 # For SSL offloading, pass the following header in your proxy server or load balancer: 'X-Forwarded-Proto: https' backend default { #.host = "{{ default "google.com" .Values.varnishBackendService }}"; .host = "{{ .Values.varnishBackendService }}"; .port = "{{ .Values.varnishBackendServicePort }}"; #.port = "{{ default "80" .Values.varnishBackendServicePort }}"; .first_byte_timeout = 60s; .connect_timeout = 300s ; .probe = { .url = "/"; .timeout = 1s; .interval = 5s; .window = 5; .threshold = 3; } } backend server2 { .host = "74.125.24.105:80"; .probe = { .url = "/"; .timeout = 1s; .interval = 5s; .window = 5; .threshold = 3; } } import directors; sub vcl_init { new vdir = directors.round_robin(); vdir.add_backend(default); vdir.add_backend(server2); } #sub vcl_recv { # if (req.url ~ "/healthcheck"){ # error 200 "imok"; # set req.http.Connection = "close"; # } #} </code></pre>
<p>The fact that Kubernetes returns an <code>HTTP 503</code> error for both the readiness &amp; the liveness probes means that there's probably something wrong with the connection to your backend.</p> <p>Interestingly, that's beside the point. Those probes aren't there to perform an end-to-end test of your HTTP flow. The probes are only there to verify that the service they are monitoring is responding.</p> <p>That's why you can just return a <em>synthetic</em> HTTP response when capturing requests that point to <code>/healthcheck</code>.</p> <p>Here's the VCL code to do it:</p> <pre><code>sub vcl_recv { if(req.url == "/healthcheck") { return(synth(200,"OK")); } } </code></pre> <p>That doesn't explain why you're getting an <code>HTTP 503</code> error, but at least the probes will work.</p>
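<p>As a side note — this is an observation about the chart above, not part of the original answer — the probe definitions in the deployment template declare <code>port</code> twice (<code>port: http</code> and <code>port: 80</code>), which is at best redundant and may be rejected by stricter YAML parsers. A cleaned-up version of the probes could look like this:</p> <pre><code>livenessProbe:
  httpGet:
    path: /healthcheck
    port: http
  failureThreshold: 3
  initialDelaySeconds: 45
  timeoutSeconds: 10
  periodSeconds: 20
readinessProbe:
  httpGet:
    path: /healthcheck
    port: http
  initialDelaySeconds: 10
  timeoutSeconds: 15
  periodSeconds: 5
</code></pre>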
<p>I want to list all nodes which are in the ready state, except the ones which have any kind of taint on them. How can I achieve this using jsonpath?</p> <p>I tried the statement below, taken from the k8s docs, but it doesn't print what I want. I am looking for output such as -- <code>node01 node02</code>. There is no master node in the output as it has a taint on it. The kind of taint is not really significant here.</p> <pre><code>JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}' \ &amp;&amp; kubectl get nodes -o jsonpath="$JSONPATH" | grep "Ready=True" </code></pre>
<p>I have successfully listed my nodes that are <code>ready</code> and <code>not tainted</code> using <code>jq</code>.</p> <p>Here you have all the nodes: </p> <pre><code>$ kubectl get nodes gke-standard-cluster-1-default-pool-9c101360-9lvw Ready &lt;none&gt; 31s v1.13.11-gke.9 gke-standard-cluster-1-default-pool-9c101360-fdhr Ready &lt;none&gt; 30s v1.13.11-gke.9 gke-standard-cluster-1-default-pool-9c101360-gq9c Ready &lt;none&gt; 31s v1.13.11-gke. </code></pre> <p>Here I have tainted one node: </p> <pre><code>$ kubectl taint node gke-standard-cluster-1-default-pool-9c101360-9lvw key=value:NoSchedule node/gke-standard-cluster-1-default-pool-9c101360-9lvw tainted </code></pre> <p>And finally a command that lists the <code>not tainted</code> and <code>ready</code> nodes: </p> <pre><code>$ kubectl get nodes -o json | jq -r '.items[] | select(.spec.taints|not) | select(.status.conditions[].reason=="KubeletReady" and .status.conditions[].status=="True") | .metadata.name' gke-standard-cluster-1-default-pool-9c101360-fdhr gke-standard-cluster-1-default-pool-9c101360-gq9c </code></pre>
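<p>If you want to double-check why a particular node was filtered out, its taints can also be inspected directly with jsonpath (the node name is the one tainted above):</p> <pre><code>$ kubectl get node gke-standard-cluster-1-default-pool-9c101360-9lvw -o jsonpath='{.spec.taints}'
</code></pre>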
<p>Error while trying to connect React frontend web to nodejs express api server into kubernetes cluster.</p> <p>Can navigate in browser to <code>http:localhost:3000</code> and web site is ok.</p> <p>But can't navigate to <code>http:localhost:3008</code> as expected (should not be exposed)</p> <p><strong>My goal is to pass <em>REACT_APP_API_URL</em> environment variable to frontend in order to set axios <code>baseURL</code> and be able to establish communication between front and it's api server.</strong></p> <p>deploy-front.yml</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 kind: Deployment metadata: name: gbpd-front spec: selector: matchLabels: app: gbpd-api tier: frontend track: stable replicas: 2 template: metadata: labels: app: gbpd-api tier: frontend track: stable spec: containers: - name: react image: binomio/gbpd-front:k8s-3 ports: - containerPort: 3000 resources: limits: memory: "150Mi" requests: memory: "100Mi" imagePullPolicy: Always </code></pre> <p>service-front.yaml</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: gbpd-front spec: selector: app: gbpd-api tier: frontend ports: - protocol: "TCP" port: 3000 targetPort: 3000 type: LoadBalancer </code></pre> <p>Deploy-back.yaml</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 kind: Deployment metadata: name: gbpd-api spec: selector: matchLabels: app: gbpd-api tier: backend track: stable replicas: 3 # tells deployment to run 2 pods matching the template template: metadata: labels: app: gbpd-api tier: backend track: stable spec: containers: - name: gbpd-api image: binomio/gbpd-back:dev ports: - name: http containerPort: 3008 </code></pre> <p>service-back.yaml</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: v1 kind: Service metadata: name: gbpd-api spec: selector: app: gbpd-api tier: backend ports: - protocol: "TCP" port: 3008 targetPort: http </code></pre> <p>I tried many combinations, also tried adding "LoadBalancer" to backservice but nothing...</p> <p>I can connect perfecto to localhost:3000 and use frontend but frontend can't connect to backend service.</p> <p><strong>Question 1</strong>: What's is the ip/name to use in order to pass REACT_APP_API_URL to fronten correctly? <strong>Question 2</strong>: Why is curl localhost:3008 not answering?</p> <p>After 2 days trying almost everything in k8s official docs... can't figure out what's happening here, so any help will be much appreciated.</p> <p>kubectl describe svc gbpd-api Response:</p> <pre><code>kubectl describe svc gbpd-api Name: gbpd-api Namespace: default Labels: &lt;none&gt; Annotations: kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"gbpd-api","namespace":"default"},"spec":{"ports":[{"port":3008,"p... Selector: app=gbpd-api,tier=backend Type: LoadBalancer IP: 10.107.145.227 LoadBalancer Ingress: localhost Port: &lt;unset&gt; 3008/TCP TargetPort: http/TCP NodePort: &lt;unset&gt; 31464/TCP Endpoints: 10.1.1.48:3008,10.1.1.49:3008,10.1.1.50:3008 Session Affinity: None External Traffic Policy: Cluster Events: &lt;none&gt; </code></pre>
<p>I tested your environment, and it worked when using a Nginx image, let's review the environment:</p> <ul> <li>The front-deployment is correctly described.</li> <li>The front-service exposes it as <a href="https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types" rel="nofollow noreferrer">loadbalancer</a>, meaning your frontend is accessible from outside, perfect.</li> <li>The back deployment is also correctly described.</li> <li>The backend-service stays with as ClusterIP in order to be only accessible from inside the cluster, great.</li> </ul> <p>Below I'll demonstrate the communication between front-end and back end.</p> <ul> <li>I'm using the same yamls you provided, just changed the image to Nginx for example purposes, and since it's a http server I'm changing containerport to 80.</li> </ul> <blockquote> <p>Question 1: What's is the ip/name to use in order to pass REACT_APP_API_URL to fronten correctly?</p> </blockquote> <ul> <li>I added the ENV variable to the front deploy as requested, and I'll use it to demonstrate also. You must use the service name to curl, I used the short version because we are working in the same namespace. you can also use the full name: <a href="http://gbpd-api.default.svc.cluster.local:3008" rel="nofollow noreferrer">http://gbpd-api.default.svc.cluster.local:3008</a></li> </ul> <hr> <p><strong>Reproduction:</strong></p> <ul> <li>Create the yamls and applied them:</li> </ul> <pre><code>$ cat deploy-front.yaml apiVersion: apps/v1 kind: Deployment metadata: name: gbpd-front spec: selector: matchLabels: app: gbpd-api tier: frontend track: stable replicas: 2 template: metadata: labels: app: gbpd-api tier: frontend track: stable spec: containers: - name: react image: nginx env: - name: REACT_APP_API_URL value: http://gbpd-api:3008 ports: - containerPort: 80 resources: limits: memory: "150Mi" requests: memory: "100Mi" imagePullPolicy: Always $ cat service-front.yaml cat: cat: No such file or directory apiVersion: v1 kind: Service metadata: name: gbpd-front spec: selector: app: gbpd-api tier: frontend ports: - protocol: "TCP" port: 3000 targetPort: 80 type: LoadBalancer $ cat deploy-back.yaml apiVersion: apps/v1 kind: Deployment metadata: name: gbpd-api spec: selector: matchLabels: app: gbpd-api tier: backend track: stable replicas: 3 template: metadata: labels: app: gbpd-api tier: backend track: stable spec: containers: - name: gbpd-api image: nginx ports: - name: http containerPort: 80 $ cat service-back.yaml apiVersion: v1 kind: Service metadata: name: gbpd-api spec: selector: app: gbpd-api tier: backend ports: - protocol: "TCP" port: 3008 targetPort: http $ kubectl apply -f deploy-front.yaml deployment.apps/gbpd-front created $ kubectl apply -f service-front.yaml service/gbpd-front created $ kubectl apply -f deploy-back.yaml deployment.apps/gbpd-api created $ kubectl apply -f service-back.yaml service/gbpd-api created </code></pre> <ul> <li>Remember, in Kubernetes the communication is <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/" rel="nofollow noreferrer">designed to be made between services</a>, because the pods are always recreated when there is a change in the deployment or when the pod fail.</li> </ul> <pre><code>$ kubectl get all NAME READY STATUS RESTARTS AGE pod/gbpd-api-dc5b4b74b-kktb9 1/1 Running 0 41m pod/gbpd-api-dc5b4b74b-mzpbg 1/1 Running 0 41m pod/gbpd-api-dc5b4b74b-t6qxh 1/1 Running 0 41m pod/gbpd-front-66b48f8b7c-4zstv 1/1 Running 0 30m 
pod/gbpd-front-66b48f8b7c-h58ds 1/1 Running 0 31m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/gbpd-api ClusterIP 10.0.10.166 &lt;none&gt; 3008/TCP 40m service/gbpd-front LoadBalancer 10.0.11.78 35.223.4.218 3000:32411/TCP 42m </code></pre> <ul> <li>The pods are the workers, and since they are replaceable by nature, we will connect to a frontend pod to simulate his behaviour and try to connect to the backend service (which is the network layer that will direct the traffic to one of the backend pods).</li> <li>The nginx image does not come with <code>curl</code> preinstalled, so I will have to install it for demonstration purposes:</li> </ul> <pre><code>$ kubectl exec -it pod/gbpd-front-66b48f8b7c-4zstv -- /bin/bash root@gbpd-front-66b48f8b7c-4zstv:/# apt update &amp;&amp; apt install curl -y done. root@gbpd-front-66b48f8b7c-4zstv:/# curl gbpd-api:3008 &lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;Welcome to nginx!&lt;/title&gt; ... </code></pre> <ul> <li>Now let's try using the environment variable that was defined:</li> </ul> <pre><code>root@gbpd-front-66b48f8b7c-4zstv:/# printenv | grep REACT REACT_APP_API_URL=http://gbpd-api:3008 root@gbpd-front-66b48f8b7c-4zstv:/# curl $REACT_APP_API_URL &lt;!DOCTYPE html&gt; &lt;html&gt; &lt;head&gt; &lt;title&gt;Welcome to nginx!&lt;/title&gt; ... </code></pre> <hr> <p><strong>Considerations:</strong></p> <blockquote> <p>Question 2: Why is curl localhost:3008 not answering?</p> </blockquote> <ul> <li>Since all yamls are correctly described you must check if <code>image: binomio/gbpd-back:dev</code> is correctly serving on port 3008 as intended.</li> <li>Since it's not a public image, I can't test it, so I'll give you troubleshooting steps: <ul> <li>just like we logged inside the front-end pod you will have to log into this backend-pod and test <code>curl localhost:3008</code>.</li> <li>If it's based on a linux distro with apt-get, you can run the commands just like I did on my demo:</li> <li>get the pod name from backend deploy (example: <code>gbpd-api-6676c7695c-6bs5n</code>)</li> <li>run <code>kubectl exec -it pod/&lt;POD_NAME&gt; -- /bin/bash</code></li> <li>then run <code>apt update &amp;&amp; apt install curl -y</code></li> <li>and test <code>curl localhost:3008</code></li> <li>if no answer run `apt update &amp;&amp; apt install net-tools</li> <li>and test <code>netstat -nlpt</code>, it will have to show you the output of the services running and the respective port, example:</li> </ul></li> </ul> <pre><code>root@gbpd-api-585df9cb4d-xr6nk:/# netstat -nlpt Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 1/nginx: master pro </code></pre> <ul> <li>If the pod does not return nothing even on this approach, you will have to check the code in the image.</li> </ul> <p>Let me know if you need help after that!</p>
<p>I'm experimenting with spinning up a Varnish instance to speed up a slow but static endpoint of a service. The service is already running in a Kubernetes cluster so I'm basing the setup on the official docker image and following the advice of baking the VCL file into the image:</p> <pre><code>FROM varnish:6.2 COPY default.vcl /etc/varnish/ </code></pre> <p>I'm starting with a small amount of configuration in <code>default.vcl</code>:</p> <pre><code>vcl 4.0; backend default { .host = &quot;172.17.0.1:8018&quot;; # IP for local experimentation } # only cache '/v1/xyz/...' sub vcl_recv { if (req.url ~ &quot;^/v1/xyz/&quot;) { return (hash); } return (pass); } </code></pre> <p>I would like to be able to declare the target backend in the deployment file, either using env vars or cli args.</p> <p>The <code>-b</code> flag seemed perfect for the job, but fails with <code>Error: Only one of -b or -f can be specified</code>.</p> <p>And using <code>std.getenv</code> in the <code>backend</code> block doesn't work either:</p> <pre><code>import std; backend default { .host = std.getenv(&quot;VARNISH_TAGET_HOST&quot;); } </code></pre> <p>results in</p> <pre><code>Expected CSTR got 'std' (program line 369), at ('/etc/varnish/default.vcl' Line 6 Pos 17) .host = std.getenv(&quot;VARNISH_TAGET_HOST&quot;); ----------------###------------------------------ </code></pre> <p>Is there some way (not including <code>sed</code>-like hacks) by which I can configure the backend without hardcoding it into the VCL?</p>
<h2>Varnish Enterprise has dynamic backends</h2> <p><em>Varnish Cache</em>, the open source version of Varnish, only allows static backend definitions.</p> <p>The only way you can define backends <em>on-the-fly</em>, is by using <em>Varnish Enterprise</em>, the commercial version of the software.</p> <p>See <a href="https://docs.varnish-software.com/varnish-cache-plus/vmods/goto/" rel="nofollow noreferrer">https://docs.varnish-software.com/varnish-cache-plus/vmods/goto/</a> for more information about the <em>dynamic backends</em> feature.</p> <h2>Why -b &amp; -f cannot be combined</h2> <p>Apparently the <code>-b</code> parameter is a shorthand for the following command:</p> <pre><code>varnishadm vcl.inline boot &lt;&lt; EOF vcl 4.1; backend default { .host = &quot;&lt;addr&gt;&quot;; } EOF </code></pre> <p>So in fact <code>-b</code> already creates and loads <em>VCL</em> in the background, which makes this option mutually exclusive with <code>-f</code></p>
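<p>For completeness, the Varnish Enterprise feature referenced above is <em>vmod_goto</em>; based on the linked documentation, defining the backend at runtime looks roughly like this (the hostname is a placeholder):</p> <pre><code>vcl 4.1;

import goto;

backend default none;

sub vcl_backend_fetch {
    # Resolve the backend dynamically instead of hardcoding it in the VCL
    set bereq.backend = goto.dns_backend("backend.example.com");
}
</code></pre>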
<p>I have a digital ocean kubernetes and an ingress controller routing traffic. but one of the pods needs to accept TCP traffic; so i would like to make the ingress to accept the TCP traffic and route to the pod. i followed this</p> <p><a href="https://minikube.sigs.k8s.io/docs/tutorials/nginx_tcp_udp_ingress/" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/tutorials/nginx_tcp_udp_ingress/</a></p> <p>and</p> <p><a href="https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/</a></p> <p>after following, i still cannot connect to the port.</p> <p>Below is what i have:</p> <p>Load. balancer:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: ingress-nginx namespace: ingress-nginx labels: app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx spec: selector: # app: speed-transmission-app app.kubernetes.io/name: ingress-nginx app.kubernetes.io/part-of: ingress-nginx type: LoadBalancer ports: - name: http port: 80 targetPort: 80 protocol: TCP - name: https port: 443 targetPort: 443 protocol: TCP - name: transmission-port port: 9000 targetPort: 8998 protocol: TCP </code></pre> <p>config map</p> <pre><code> apiVersion: v1 kind: ConfigMap metadata: name: tcp-services namespace: ingress-nginx data: 9000: &quot;staging/speed-transmission-service:9000&quot; </code></pre> <p>Now when i try to connect to the load balancer external IP at port 9000, i get connection lost.</p> <p>I will really appreciate help on how to configure this. thanks.</p>
<p>After searching everywhere for how to do this, I came across another Stack Overflow answer that explains how this can work. It worked well and it's what I still use now. The only issue is that it's not a long-term solution: my load balancer usually goes down whenever any of the services listening on the TCP ports goes down, which affects the whole cluster. This is on DOKS however; I am not sure how it will behave on another platform. <a href="https://stackoverflow.com/questions/61430311/exposing-multiple-tcp-udp-services-using-a-single-loadbalancer-on-k8s">Here</a> is the link to the answer. I will update this answer if I find a more stable solution.</p>
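<p>For reference, the piece that is most often missing with the <code>tcp-services</code> approach from the question — this is a generic ingress-nginx sketch, not taken from the linked answer — is pointing the controller at the ConfigMap and exposing the extra port on the controller pod:</p> <pre><code># ingress-nginx controller Deployment (excerpt)
spec:
  template:
    spec:
      containers:
      - name: controller
        args:
        - /nginx-ingress-controller
        - --tcp-services-configmap=ingress-nginx/tcp-services
        # ... keep the other existing args ...
        ports:
        - name: transmission
          containerPort: 9000   # should match the port key used in the tcp-services ConfigMap
</code></pre> <p>Also note that the LoadBalancer Service in the question forwards port 9000 to <code>targetPort: 8998</code> while the ConfigMap maps port 9000; those numbers need to line up with whatever port the controller actually listens on.</p>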