<p>I have been researching Kubernetes and saw that it can load balance across pods on the same node. If I'm not wrong, one node means one server machine, so what good is load balancing on the same server machine? It will use the same CPU and RAM to handle the requests. I first thought load balancing would be done across separate machines to share CPU and RAM resources, so I want to know the point of doing load balancing on the same server.</p>
<p>Just because you can do it on one node doesn't mean that you should, especially in a production environment.</p> <ul> <li>a production cluster will have at least 3 to 5 nodes</li> <li>Kubernetes spreads the replicas across the cluster nodes to balance node workload, so pods end up on different nodes</li> <li>you can also configure on which nodes your pods land</li> <li>you can use advanced scheduling: pod affinity and anti-affinity (see the example manifest below)</li> <li>you can also plug in your own scheduler that will not allow placing replica pods of the same app on the same node</li> <li>then you define a Service to load balance across the pods on the different nodes</li> <li>kube-proxy will do the rest</li> </ul> <p>Here is a useful read:</p> <p><a href="https://itnext.io/keep-you-kubernetes-cluster-balanced-the-secret-to-high-availability-17edf60d9cb7" rel="nofollow noreferrer">https://itnext.io/keep-you-kubernetes-cluster-balanced-the-secret-to-high-availability-17edf60d9cb7</a></p> <blockquote> <p>So you generally need to choose a level of availability you are comfortable with. For example, if you are running three nodes in three separate availability zones, you may choose to be resilient to a single node failure. Losing two nodes might bring your application down but the odds of losing two data centres in separate availability zones are low.</p> <p>The bottom line is that there is no universal approach; only you can know what works for your business and the level of risk you deem acceptable.</p> </blockquote>
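<p>As an illustration of the anti-affinity point above, a minimal sketch of a Deployment that keeps replicas of the same app off the same node might look like this — the <code>my-app</code> name, label and image are assumed placeholders, not something from the question:</p> <pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: my-app
            topologyKey: kubernetes.io/hostname   # at most one replica per node
      containers:
      - name: my-app
        image: nginx
</code></pre> <p>With the <code>required...</code> form the scheduler will refuse to co-locate two replicas; <code>preferredDuringSchedulingIgnoredDuringExecution</code> is the softer variant if you have fewer nodes than replicas.</p>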
<p>I am trying to create a sample Kubernetes pod manifest.</p> <pre><code>cat &lt;&lt; EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
name: nginx
spec:
containers:
- name: nginx
  image: nginx
EOF
</code></pre> <p>But on executing this I am getting the below error.</p> <blockquote> <p>error: error validating "pod.yaml": error validating data: [ValidationError(Pod): unknown field "containers" in io.k8s.api.core.v1.Pod, ValidationError(Pod): unknown field "name" in io.k8s.api.core.v1.Pod]; if you choose to ignore these errors, turn validation off with --validate=false</p> </blockquote>
<p>I am not sure about the exact root cause, but it got resolved with proper indentation:</p> <pre><code>---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
</code></pre> <p>It works for me now with proper spaces. My bad.</p>
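<p>As a side note, you can catch this kind of indentation mistake before anything is created by asking the API server to validate the manifest in a dry run — the file name here is just an example, and on newer kubectl versions the flag is spelled <code>--dry-run=client</code>:</p> <pre><code>kubectl create -f pod.yaml --dry-run -o yaml
</code></pre>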
<p>I am using <code>Kops</code> to set up my Kubernetes cluster.</p> <p>This is how I installed kops:</p> <pre><code>wget https://github.com/kubernetes/kops/releases/download/1.6.1/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops
</code></pre> <p>My <code>Kubernetes</code> cluster is pretty old. Now I want to upgrade it to the latest version. I know that <code>kops upgrade cluster</code> works to upgrade <code>Kubernetes</code>. But before upgrading <code>Kubernetes</code>, I want to make sure that my <code>Kops</code> version is the latest.</p> <p>Should I just remove the running <code>kops</code> binary, that is</p> <pre><code>rm -rf /usr/local/bin/kops
</code></pre> <p>then download the latest release and place it in <code>/usr/local/bin/</code></p> <pre><code>wget https://github.com/kubernetes/kops/releases/download/1.11.0/kops-darwin-amd64
chmod +x kops-darwin-amd64
sudo mv kops-darwin-amd64 /usr/local/bin/kops
</code></pre> <p>Is the above procedure correct? If not, then what is the recommended way to upgrade <code>Kops</code>?</p>
<p>Since you're using a Mac, try:</p> <pre><code>brew upgrade kops
</code></pre>
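<p>Whichever way you upgrade (Homebrew or replacing the binary by hand), it is worth confirming which binary is now on your <code>PATH</code> and what version it reports:</p> <pre><code>which kops
kops version
</code></pre>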
<p>I want to install Helm in Gitlab's k8s integration with reference to <a href="https://docs.gitlab.com/ee/user/project/clusters/#adding-an-existing-kubernetes-cluster" rel="nofollow noreferrer">https://docs.gitlab.com/ee/user/project/clusters/#adding-an-existing-kubernetes-cluster</a></p> <p>but responses is 401 when I clicked Helm Tiler's <code>Install</code> button.</p> <p>My process is below.</p> <ol> <li>deploy k8s in gcp</li> <li>to get <code>API_URL</code> run this </li> </ol> <pre><code>$ kubectl cluster-info | grep 'Kubernetes master' | awk '/http/ {print $NF}' https://xx.xxx.xx.xx // set this `API_URL` </code></pre> <ol start="3"> <li>create gitlab's service account</li> </ol> <pre><code>$ kubectl create -f - &lt;&lt;EOF apiVersion: v1 kind: ServiceAccount metadata: name: gitlab namespace: default EOF $ kubectl create -f - &lt;&lt;EOF kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: gitlab-cluster-admin subjects: - kind: ServiceAccount name: gitlab namespace: default roleRef: kind: ClusterRole name: cluster-admin apiGroup: rbac.authorization.k8s.io EOF $ kubectl get secrets default-token-xxxx kubernetes.io/service-account-token 3 25d gitlab-token-xxxx kubernetes.io/service-account-token 3 21h tls-sample kubernetes.io/tls 2 24d </code></pre> <p>so, I choice <code>gitlab-token-xxxx</code>.</p> <ol start="4"> <li>to get <code>CA Certificate</code> run this </li> </ol> <pre><code>$ kubectl get secret gitlab-token-xxxx -o jsonpath="{['data']['ca\.crt']}" | base64 --decode -----BEGIN CERTIFICATE----- MIIDDDCCAfSgAwIBAgIRAJ0S/Fsf1dDFRZP9TCnby60wDQYJKoZIhvcNAQELBQAw ...... ..... FZ1tsRI3EbTNuKsyKtvjwg== -----END CERTIFICATE----- </code></pre> <p>I used this as <code>CA Certificate</code></p> <ol start="5"> <li>to get <code>Token</code> run this </li> </ol> <pre><code>$ kubectl get secret &lt;secret name&gt; -o jsonpath="{['data']['token']}" | base64 --decode eyJhbGciOi......... </code></pre> <p>I used this as <code>Token</code></p> <ol start="6"> <li>I filled out in <a href="https://i.stack.imgur.com/2DTFw.png" rel="nofollow noreferrer">this page.</a></li> </ol> <p>Please teach me correct way!</p>
<p>Thank you for looking at this question.</p> <p>I was able to install it without any error using the following command. Thank you very much!</p> <pre><code>kubectl create clusterrolebinding gitlab-internal-cluster-rule --clusterrole=cluster-admin --serviceaccount=gitlab-managed-apps:default
</code></pre>
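<p>If you want to double-check that the service account GitLab uses really ended up with cluster-admin rights after creating the binding, a quick check (assuming the <code>gitlab-managed-apps</code> namespace already exists) is:</p> <pre><code>kubectl auth can-i '*' '*' --as=system:serviceaccount:gitlab-managed-apps:default
</code></pre> <p>which should print <code>yes</code>.</p>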
<p><strong>Problem statement:</strong><br> Currently we are running Kubernetes in multiple environments, e.g. dev, UAT, staging. It is very difficult for us to tell which environment we are looking at just from the Kubernetes dashboard UI. Is there any facility to customize the dashboard so that it indicates, somewhere in the header or footer, which cluster or environment we are using?</p>
<p>Since K8S is open source, you have the ability to do whatever you want. You will of course need to play with the code and build your own custom dashboard image.</p> <p>You can start off from here:</p> <blockquote> <p><a href="https://github.com/kubernetes/dashboard/tree/master/src/app/frontend" rel="nofollow noreferrer">https://github.com/kubernetes/dashboard/tree/master/src/app/frontend</a> </p> </blockquote>
<p>I'm trying to set up a kubernetes cluster with a couple backend services, which are served through an ingress instance.</p> <p>I've set up my Deployment, Services and Ingress in kubernetes. Yet, due to an unknown error, I can't get the ingress working and act as a load balancer for my backend services.</p> <pre><code>Name | Status | Type | Endpoints | Pods | Namespace | Cluster ev-ingress | OK | Ingress | */evauth | 0 / 0 | default |standard-cluster-1 ev-auth-service | OK | Node port | &lt;NODE_PORT_IP&gt;:80 TCP| 1 / 1 |default | standard-cluster-1 </code></pre> <p>backend.yml</p> <pre><code>--- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: ev-auth spec: selector: matchLabels: app: ev-auth replicas: 1 template: metadata: labels: app: ev-auth spec: containers: - name: ev-auth image: private_repository/ev-auth readinessProbe: httpGet: path: /health port: 3000 livenessProbe: httpGet: path: /health port: 3000 ports: - containerPort: 3000 env: - name: PORT value: "3000" - name: AMQP_CONNECTION value: amqp://xxxxxxx - name: CALLBACK value: "CALLBACK" - name: CONSUMER_KEY value: xxxxxxxxx - name: CONSUMER_SECRET value: xxxxxxxx --- apiVersion: v1 kind: Service metadata: name: ev-auth-service labels: app: ev-auth spec: type: NodePort selector: app: ev-auth ports: - name: normal port: 80 targetPort: 3000 protocol: TCP </code></pre> <p>ingress.yml</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ev-ingress spec: rules: - http: paths: - path: /evauth backend: serviceName: ev-auth-service servicePort: 80 </code></pre> <p>What am I missing here? I made sure /evauth indeed works, (I'm not sure if that's even necessary to match but, anyway). Still, the Ingress mapping shows "0/0" for pods. When I call the "<a href="http://cluster_ip/evauth" rel="nofollow noreferrer">http://cluster_ip/evauth</a>", I get "default backend - 404"</p> <p>Any help is appreciated. </p> <p>Thanks.</p>
<p>Turns out, I was hasty. Apparently I had to wait for a while. </p> <p>After 10 minutes, things were working as expected.</p>
<p>I'v enabled heapster on minikube</p> <pre><code>minikube addons start heapster </code></pre> <p>And custom metrics with</p> <pre><code>minikube start --extra-config kubelet.EnableCustomMetrics=true </code></pre> <p>My deployment looks like</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: kubia spec: replicas: 1 template: metadata: name: kubia labels: app: kubia annotations: pod.beta.kubernetes.io/init-containers: '[ { "name": "setup", "image": "busybox", "imagePullPolicy": "IfNotPresent", "command": ["sh", "-c", "echo \"{\\\"endpoint\\\": \\\"http://$POD_IP:8080/metrics\\\"}\" &gt; /etc/custom-metrics/definition.json"], "env": [{ "name": "POD_IP", "valueFrom": { "fieldRef": { "apiVersion": "v1", "fieldPath": "status.podIP" } } }], "volumeMounts": [ { "name": "config", "mountPath": "/etc/custom-metrics" } ] } ]' spec: containers: - image: luksa/kubia:qps name: nodejs ports: - containerPort: 8080 volumeMounts: - name: config mountPath: /etc/custom-metrics resources: requests: cpu: 100m volumes: - name: config emptyDir: </code></pre> <p>My hpa looks like</p> <pre><code>apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: kubia annotations: alpha/target.custom-metrics.podautoscaler.kubernetes.io: '{"items":[{"name":"qps", "value": "20"}]}' spec: maxReplicas: 5 minReplicas: 1 scaleTargetRef: apiVersion: extensions/v1beta1 kind: Deployment name: kubia targetCPUUtilizationPercentage: 1000000 </code></pre> <p>However I get target unknown</p> <pre><code>jonathan@ubuntu ~&gt; kubectl get hpa NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE kubia Deployment/kubia &lt;unknown&gt; / 1000000% 1 5 1 31m </code></pre> <p>And the following warnings from the hpa</p> <pre><code> Warning FailedGetResourceMetric 27m (x12 over 33m) horizontal-pod-autoscaler unable to get metrics for resource cpu: no metrics returned from heapster Warning FailedComputeMetricsReplicas 27m (x12 over 33m) horizontal-pod-autoscaler failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from heapster </code></pre>
<blockquote> <p><code>metrics-server</code> monitoring needs to be deployed in the cluster to provide metrics via the resource metrics API, as Horizontal Pod Autoscaler uses this API to collect metrics.</p> </blockquote> <p>Thus, enable the <code>metrics-server</code> addon via:</p> <pre><code>$ minikube addons enable metrics-server
</code></pre> <p>Refer - <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#before-you-begin" rel="nofollow noreferrer">Horizontal Pod Autoscaler Walkthrough - Before you begin</a></p>
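<p>Once the addon is running, you can confirm that metrics are actually being served before re-checking the HPA, for example:</p> <pre><code>kubectl get pods -n kube-system | grep metrics-server
kubectl top pods
kubectl get hpa
</code></pre> <p>If <code>kubectl top pods</code> returns numbers instead of an error, the HPA should stop reporting <code>&lt;unknown&gt;</code> for the CPU target after a minute or two.</p>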
<p>I have a configmap where I have defined the following key-value mapping in the <code>data</code> section:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: namespace: test name: test-config data: TEST: "CONFIGMAP_VALUE" </code></pre> <p>then in the definition of my container (in the deployment / statefulset manifest) I have the following:</p> <pre><code> env: - name: TEST value: "ANOTHER_VALUE" envFrom: - configMapRef: name: test-config </code></pre> <p>When doing this I was expecting that the value from the configmap (TEST="CONFIGMAP_VALUE") will override the (default) value specified in the container spec (TEST="ANOTHER_VALUE"), but this is not the case (TEST always gets the value from the container spec). I couldn't find any relevant documentation about this - is it possible to achieve such env variable value overriding?</p>
<p>From <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.19/#container-v1-core" rel="noreferrer">Kubernetes API reference</a>:</p> <blockquote> <p><code>envFrom</code> : List of sources to populate environment variables in the container. The keys defined within a source must be a C_IDENTIFIER. All invalid keys will be reported as an event when the container is starting. When a key exists in multiple sources, the value associated with the last source will take precedence. Values defined by an Env with a duplicate key will take precedence. Cannot be updated.</p> </blockquote> <p>So the above clearly states that <strong>env</strong> takes precedence over <strong>envFrom</strong>.</p> <blockquote> <p>When a key exists in multiple sources, the value associated with the last source will take precedence.</p> </blockquote> <p>So, for overriding, don't use <code>envFrom</code>; instead define the value twice within <code>env</code>, see below:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  namespace: default
  name: test-config
data:
  TEST: &quot;CONFIGMAP_VALUE&quot;
---
apiVersion: v1
kind: Pod
metadata:
  name: busy
  namespace: default
spec:
  containers:
  - name: busybox
    image: busybox
    env:
    - name: TEST
      value: &quot;DEFAULT_VAULT&quot;
    - name: TEST
      valueFrom:
        configMapKeyRef:
          name: test-config
          key: TEST
    command:
    - &quot;sh&quot;
    - &quot;-c&quot;
    - &gt;
      while true; do
        echo &quot;$(TEST)&quot;;
        sleep 3600;
      done
</code></pre> <p>Check:</p> <pre><code>kubectl logs busy -n default
CONFIGMAP_VALUE
</code></pre>
<p>Situation:<br> - users A, B, C, D<br> - team 1: user A, user B<br> - team 2: user C, user D </p> <p>Desired:<br> - each user has private volume<br> - each team has a shared volume --> users in team can see shared volume<br> - some users, based on permission, can see <em>both</em> shared volumes</p> <p>Searched for quite some time now, do not see a solution in the Docs.</p> <p>Ideas:<br> - Use Namespaces! problem --> can no longer see shared volume of other Namespace</p>
<p>This is an example of how you would do it. You can use namespaces for the different teams.</p> <p>Then you can use a <a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/#default-roles-and-role-bindings" rel="nofollow noreferrer"><code>Role</code></a> for each volume and assign to users accordingly. (Roles are namespaced). A sample Role would be:</p> <pre><code>kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: namespace: team1 name: volume-access rules: - apiGroups: [""] resources: ["persistentvolume", "persistentvolumeclaims"] resourceNames: ["my-volume"] verbs: ["update", "get", "list", "patch", "watch"] </code></pre> <p>Then your binding would be something like:</p> <pre><code>apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: pv-binding namespace: team1 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: volume-access subjects: - apiGroup: rbac.authorization.k8s.io kind: User name: usera - apiGroup: rbac.authorization.k8s.io kind: User name: userb </code></pre> <p>The above would be shared by user A and user B. You can create separate roles for the volume that is private.</p>
<p>How to get resources (CPU and Memory) consumed by a kubernetes job <strong>at the end of job's lifecycle</strong>? Is this out of kubernetes job implementation's scope?</p> <p>Notes:</p> <ul> <li><code>kubectl describe job</code> provides only the limit/request specified.</li> <li>I am aware of external tools to capture the resource consumption. I'm looking for something that could be stored along with job metadata without using any external monitoring tools like prometheus.</li> </ul>
<p>I would not encourage you to restrict yourself to <code>kubectl top pod</code>; it is only good for a quick troubleshoot or a sneak peek.</p> <p>In production you should have a more concrete framework for resource usage monitoring, and I have found Prometheus very useful. Of course, when you are working on GCP, you may also choose its native monitoring toolset.</p>
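<p>For the quick-look case while the Job's pods are still running, note that the Job controller labels its pods with <code>job-name</code>, so you can scope <code>kubectl top</code> to them — the job name here is a placeholder:</p> <pre><code>kubectl top pod -l job-name=&lt;your-job-name&gt;
</code></pre> <p>This only shows point-in-time usage, though; it is not stored with the Job metadata after the pods terminate, which is why a scraping system such as Prometheus is the usual answer.</p>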
<p>I have set up a Kubernetes cluster with kubeadm. Now I want to set up Istio, but I found that the Istio documentation does not include a guide for kubeadm. <a href="https://i.stack.imgur.com/VpYnD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VpYnD.png" alt="enter image description here"></a></p> <p>It includes instructions for minikube and various cloud providers. How do I set up Istio on a kubeadm cluster?</p>
<p>The titles in your screenshot are not instructions for installing Istio on those platforms; they are just Kubernetes installation guides for different platforms, which are prerequisites for installing Istio. If you already have a Kubernetes cluster installed (even via kubeadm), just follow the <a href="https://istio.io/docs/setup/kubernetes/quick-start/#installation-steps" rel="nofollow noreferrer">Installation Steps</a> for Istio.</p>
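<p>In practice, on a kubeadm cluster this just means downloading an Istio release and applying its manifests. A rough sketch for the 1.0.x-era releases is shown below — the download URL and file paths are from memory of that documentation, so treat them as assumptions and follow the linked quick-start for your exact version:</p> <pre><code># download and unpack the latest Istio release
curl -L https://git.io/getLatestIstio | sh -
cd istio-*

# install the CRDs, then the demo profile
kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
kubectl apply -f install/kubernetes/istio-demo.yaml
</code></pre>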
<p>My company runs an OpenShift v3.10 cluster consisting of 3 masters and 4 nodes. We would like to change the URL of the OpenShift API and also the URL of the OpenShift web console. Which steps do we need to take to do so successfully?</p> <p>We have already tried to update the <code>openshift_master_cluster_hostname</code> and <code>openshift_master_cluster_public_hostname</code> variables to new DNS names, which resolve to our F5 virtual hosts that load balance the traffic between our masters, and then started the upgrade Ansible playbook, but the upgrade fails. We have also tried to run the Ansible playbook that redeploys the cluster certificates, but after that step the OpenShift node status changes to <code>NotReady</code>.</p>
<p>We have solved this issue. What we had to do was change the URLs defined in the variables in the inventory file and then run the Ansible playbook that updates the master configuration. The process of running that playbook is described in the official <a href="https://docs.openshift.com/container-platform/3.10/install_config/master_node_configuration.html" rel="nofollow noreferrer">documentation</a>.</p> <p>After that we also had to update the OpenShift web console configuration map with the new URLs and then scale the web-console deployment down and back up. How to update the web console configuration is described <a href="https://docs.openshift.com/container-platform/3.10/install_config/web_console_customization.html" rel="nofollow noreferrer">here</a>.</p>
<p>I'm setting up Airflow in Kubernetes Engine, and I now have the following (running) pods:</p> <ul> <li>postgres (with a mounted <code>PersistentVolumeClaim</code>)</li> <li>flower</li> <li>web (airflow dashboard)</li> <li>rabbitmq</li> <li>scheduler</li> <li>worker</li> </ul> <p>From Airflow, I'd like to run a task starting a pod which - in this case - downloads some file from an SFTP server. However, the <code>KubernetesPodOperator</code> in Airflow which should start this new pod can't run, because the kubeconfig cannot be found.</p> <p>The Airflow worker is configured as below. The other Airflow pods are exactly the same apart from different <code>args</code>.</p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: worker spec: replicas: 1 template: metadata: labels: app: airflow tier: worker spec: restartPolicy: Always containers: - name: worker image: my-gcp-project/kubernetes-airflow-in-container-registry:v1 imagePullPolicy: IfNotPresent env: - name: AIRFLOW_HOME value: "/usr/local/airflow" args: ["worker"] </code></pre> <p>The <code>KubernetesPodOperator</code> is configured as follows:</p> <pre class="lang-py prettyprint-override"><code>maybe_download = KubernetesPodOperator( task_id='maybe_download_from_sftp', image='some/image:v1', namespace='default', name='maybe-download-from-sftp', arguments=['sftp_download'], image_pull_policy='IfNotPresent', dag=dag, trigger_rule='dummy', ) </code></pre> <p>The following error shows there's no kubeconfig on the pod.</p> <pre><code>[2019-01-24 12:37:04,706] {models.py:1789} INFO - All retries failed; marking task as FAILED [2019-01-24 12:37:04,722] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp Traceback (most recent call last): [2019-01-24 12:37:04,722] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp File "/usr/local/bin/airflow", line 32, in &lt;module&gt; [2019-01-24 12:37:04,722] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp args.func(args) [2019-01-24 12:37:04,722] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp File "/usr/local/lib/python3.6/site-packages/airflow/utils/cli.py", line 74, in wrapper [2019-01-24 12:37:04,722] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp return f(*args, **kwargs) [2019-01-24 12:37:04,722] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp File "/usr/local/lib/python3.6/site-packages/airflow/bin/cli.py", line 490, in run [2019-01-24 12:37:04,722] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp _run(args, dag, ti) [2019-01-24 12:37:04,722] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp File "/usr/local/lib/python3.6/site-packages/airflow/bin/cli.py", line 406, in _run [2019-01-24 12:37:04,722] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp pool=args.pool, [2019-01-24 12:37:04,722] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp File "/usr/local/lib/python3.6/site-packages/airflow/utils/db.py", line 74, in wrapper [2019-01-24 12:37:04,722] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp return func(*args, **kwargs) [2019-01-24 12:37:04,722] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp File "/usr/local/lib/python3.6/site-packages/airflow/models.py", line 1659, in _run_raw_task [2019-01-24 12:37:04,723] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp result = 
task_copy.execute(context=context) [2019-01-24 12:37:04,723] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp File "/usr/local/lib/python3.6/site-packages/airflow/contrib/operators/kubernetes_pod_operator.py", line 90, in execute [2019-01-24 12:37:04,723] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp config_file=self.config_file) [2019-01-24 12:37:04,723] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp File "/usr/local/lib/python3.6/site-packages/airflow/contrib/kubernetes/kube_client.py", line 51, in get_kube_client [2019-01-24 12:37:04,723] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp return _load_kube_config(in_cluster, cluster_context, config_file) [2019-01-24 12:37:04,723] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp File "/usr/local/lib/python3.6/site-packages/airflow/contrib/kubernetes/kube_client.py", line 38, in _load_kube_config [2019-01-24 12:37:04,723] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp config.load_kube_config(config_file=config_file, context=cluster_context) [2019-01-24 12:37:04,723] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp File "/usr/local/airflow/.local/lib/python3.6/site-packages/kubernetes/config/kube_config.py", line 537, inload_kube_config [2019-01-24 12:37:04,723] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp config_persister=config_persister) [2019-01-24 12:37:04,723] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp File "/usr/local/airflow/.local/lib/python3.6/site-packages/kubernetes/config/kube_config.py", line 494, in_get_kube_config_loader_for_yaml_file [2019-01-24 12:37:04,723] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp with open(filename) as f: [2019-01-24 12:37:04,723] {base_task_runner.py:101} INFO - Job 8: Subtask maybe_download_from_sftp FileNotFoundError: [Errno 2] No such file or directory: '/usr/local/airflow/.kube/config' [2019-01-24 12:37:08,300] {logging_mixin.py:95} INFO - [2019-01-24 12:37:08,299] {jobs.py:2627} INFO - Task exited with return code 1 </code></pre> <p>I'd like the pod to start and "automatically" contain the context of the Kubernetes cluster it's in - if that makes sense. I feel like I'm missing something fundamental. Could anyone help?</p>
<p>As is described in <a href="https://airflow.apache.org/kubernetes.html#airflow.contrib.operators.kubernetes_pod_operator.KubernetesPodOperator" rel="nofollow noreferrer">The Fine Manual</a>, you will want <code>in_cluster=True</code> to advise KPO that it is, in fact, in-cluster.</p> <p>I would actually recommend filing a bug with Airflow because Airflow can <em>trivially</em> detect the fact that it is running inside the cluster, and should have a much more sane default than your experience.</p>
<p>I am on Jenkins 2.73.2.1 and using the Jenkins Kubernetes plugin 1.4 to spin up dynamic slaves. However, I am not able to start parallel builds: Jenkins always puts them in the build queue and executes one at a time.</p> <p>I have tried setting this while starting Jenkins but it doesn't help either:</p> <pre><code>-Dhudson.slaves.NodeProvisioner.initialDelay=0
-Dhudson.slaves.NodeProvisioner.MARGIN=50
-Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
</code></pre> <p>Is there any other setting we have to configure on the plugin side for parallel pods to run?</p>
<p>You have to configure the following parameters correctly in order to run slave concurrently and as per your expectation :</p> <p>Under <code>Kubernetes Pod Template</code>,</p> <p>1) Set <code>Labels</code> for Pod template correctly. </p> <ul> <li>Make sure you have <code>Jenkins Job</code> with the same label configured.</li> <li>In that jenkins job's configuration, mark <code>Restrict where this project can be run</code> and provide the same label as you provided in <code>Labels</code> field of Jenkins Configuration.</li> </ul> <p>2) Set <code>Max number of instances</code>. This parameter will tell Jenkins <code>How many maximum slaves it can launch with the given label</code> </p> <p>3) Set <code>Time in minutes to retain agent when idle</code>. This parameter will tell Jenkins <code>Till how much time to retain slave (with the given label) on which no build is running</code>. </p> <ul> <li>Correctly configuring this will save you from <code>Kubernetes Pod Creation time</code>. </li> <li>Make sure <code>Pod Retention</code> policy is <code>Default</code>.</li> </ul> <p>Under <code>Cloud</code> section,</p> <p>1) Set <code>Container Cap</code>. This parameter will tell Jenkins <code>How many slaves can be spawned on Kubernetes</code>.</p> <ul> <li>This is limit on the total number of Pods that Jenkins can create on <code>Kubernetes cluster</code>.</li> <li>This limit applies cumulatively to all labels.</li> <li>Hence, if <code>Max number of instances</code> is greater than <code>Container Cap</code>. Jenkins will only be able to create slaves equal to <code>Container Cap</code> for the label at best.</li> <li>So ideally keep <code>Container Cap</code> equal to <code>Sum of (Max number of instances) of all labels</code></li> </ul> <p>While <code>starting Jenkins</code>,</p> <ul> <li>By default, Jenkins spawns agents conservatively. Say, if there are 2 builds in queue, it won't spawn 2 executors immediately. It will spawn one executor and wait for sometime for the first executor to be freed before deciding to spawn the second executor. Jenkins makes sure every executor it spawns is utilized to the maximum. </li> <li>If you want to override this behaviour and spawn an executor for each build in queue immediately without waiting, you can use these flags during Jenkins startup:</li> </ul> <blockquote> <pre><code>-Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85 </code></pre> </blockquote> <p><a href="https://github.com/jenkinsci/kubernetes-plugin" rel="nofollow noreferrer">Jenkins Kuberenetes Plugin' Github</a> repo has good one-line description of all parameters</p>
<h2>Question</h2> <p>Can I get nginx to call another microservice inside of AKS k8s prior to it routing to the requested api? - the goal being to speed up requests (fewer hops) and simplify build and deployment (fewer services).</p> <h2>Explanation</h2> <p>In our currently deployed Azure AKS (Kubernetes) cluster, we have an additional service I was hoping to replace with nginx. It's a routing microservice that calls out to a identity API prior to doing the routing.</p> <p>The reason is a common one I'd imagine, we recieve some kind of authentication token via some pre-defined header(s) (the standard <code>Authorization</code> header, or sometimes some bespoke ones used for debug tokens, and impersonation), we call from the routing API into the identity API with those pre-defined headers and get a user identity object in return.</p> <p>We then pass on this basic user identity object into the microservices so they have quick and easy access to the user and roles.</p> <p>A brief explanation would be:</p> <ul> <li>Nginx receives a request, off-loads SSL and route to the requested service.</li> <li>Routing API takes the authorization headers and makes a call to the Identity API.</li> <li>Identity API validations the authorization information and returns either an authorization error (when auth fails), or a serialized user identity object.</li> <li>Router API either returns there and then, for failure, or routes to the requested microservice (by cracking the request path), and attaches the user identity object as a header.</li> <li>Requested microservice can then turn that user identity object into a Claims Principal in the case of .NET Core for example.</li> </ul> <p><a href="https://i.stack.imgur.com/C7vwo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C7vwo.png" alt="k8s with our own Router API"></a></p> <p>There are obviously options for merging the Router.API and the UserIdentity.API, but keeping the separation of concerns seems like a better move. I'd just to remove the Route.API, in-order to maintain that separation, but get nginx to do that work for me.</p>
<p>ProxyKit (<a href="https://github.com/damianh/ProxyKit" rel="nofollow noreferrer">https://github.com/damianh/ProxyKit</a>) could be a good alternative to nginx - it allows you to easily add custom logic to certain requests (for example I lookup API keys based on a tenant in URL) and you can cache the responses using CacheCow (see a recipe in ProxyKit source)</p>
<p>I am trying to write a CronJob for executing a shell script within a ConfigMap for Kafka.</p> <p>My intention is to reassign partitions at specific intervals of time.</p> <p>However, I am facing issues with it. I am very new to it. Any help would be appreciated.</p> <p>cron-job.yaml</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: partition-cron spec: schedule: "*/10 * * * *" startingDeadlineSeconds: 20 successfulJobsHistoryLimit: 5 jobTemplate: spec: completions: 2 template: spec: containers: - name: partition-reassignment image: busybox command: ["/configmap/runtimeConfig.sh"] volumeMounts: - name: configmap mountPath: /configmap restartPolicy: Never volumes: - name: configmap configMap: name: configmap-config </code></pre> <p>configmap-config.yaml</p> <pre><code>{{- if .Values.topics -}} {{- $zk := include "zookeeper.url" . -}} apiVersion: v1 kind: ConfigMap metadata: labels: app: {{ template "kafka.fullname" . }} chart: "{{ .Chart.Name }}-{{ .Chart.Version }}" heritage: "{{ .Release.Service }}" release: "{{ .Release.Name }}" name: {{ template "kafka.fullname" . }}-config data: runtimeConfig.sh: | #!/bin/bash set -e cd /usr/bin until kafka-configs --zookeeper {{ $zk }} --entity-type topics --describe || (( count++ &gt;= 6 )) do echo "Waiting for Zookeeper..." sleep 20 done until nc -z {{ template "kafka.fullname" . }} 9092 || (( retries++ &gt;= 6 )) do echo "Waiting for Kafka..." sleep 20 done echo "Applying runtime configuration using {{ .Values.image }}:{{ .Values.imageTag }}" {{- range $n, $topic := .Values.topics }} {{- if and $topic.partitions $topic.replicationFactor $topic.reassignPartitions }} cat &lt;&lt; EOF &gt; {{ $topic.name }}-increase-replication-factor.json {"version":1, "partitions":[ {{- $partitions := (int $topic.partitions) }} {{- $replicas := (int $topic.replicationFactor) }} {{- range $i := until $partitions }} {"topic":"{{ $topic.name }}","partition":{{ $i }},"replicas":[{{- range $j := until $replicas }}{{ $j }}{{- if ne $j (sub $replicas 1) }},{{- end }}{{- end }}]}{{- if ne $i (sub $partitions 1) }},{{- end }} {{- end }} ]} EOF kafka-reassign-partitions --zookeeper {{ $zk }} --reassignment-json-file {{ $topic.name }}-increase-replication-factor.json --execute kafka-reassign-partitions --zookeeper {{ $zk }} --reassignment-json-file {{ $topic.name }}-increase-replication-factor.json --verify {{- end }} {{- end -}} </code></pre> <p>My intention is to run the runtimeConfig.sh script as a cron job at regular intervals for partition reassignment in Kafka.</p> <p>I am not sure if my approach is correct.</p> <p>Also, I have randomly put <strong>image: busybox</strong> in the cron-job.yaml file. 
I am not sure about what should I be putting in there.</p> <p>Information Part</p> <pre><code>$ kubectl get cronjobs NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE partition-cron */10 * * * * False 1 5m 12m $ kubectl get pods NAME READY STATUS RESTARTS AGE elegant-hedgehog-metrics-server-58995fcf8b-2vzg6 1/1 Running 0 5d my-kafka-0 1/1 Running 1 12m my-kafka-1 1/1 Running 0 10m my-kafka-2 1/1 Running 0 9m my-kafka-config-644f815a-pbpl8 0/1 Completed 0 12m my-kafka-zookeeper-0 1/1 Running 0 12m partition-cron-1548672000-w728w 0/1 ContainerCreating 0 5m $ kubectl logs partition-cron-1548672000-w728w Error from server (BadRequest): container "partition-reassignment" in pod "partition-cron-1548672000-w728w" is waiting to start: ContainerCreating </code></pre> <p>Modified Cron Job YAML</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: partition-cron spec: schedule: "*/5 * * * *" startingDeadlineSeconds: 20 successfulJobsHistoryLimit: 5 jobTemplate: spec: completions: 1 template: spec: containers: - name: partition-reassignment image: busybox command: ["/configmap/runtimeConfig.sh"] volumeMounts: - name: configmap mountPath: /configmap restartPolicy: Never volumes: - name: configmap configMap: name: {{ template "kafka.fullname" . }}-config </code></pre> <p>Now, I am getting Status of Cron Job pods as <strong>ContainerCannotRun</strong>.</p>
<p>You've set the ConfigMap to <code>name: {{ template "kafka.fullname" . }}-config</code> but in the job you are mounting <code>configmap-config</code>. Unless you installed the Helm chart using <code>configmap</code> as the name of the release, that Job will never start. </p> <p>One way to fix it would be to define the volume as:</p> <pre><code> volumes: - name: configmap configMap: name: {{ template "kafka.fullname" . }}-config </code></pre>
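<p>A quick way to confirm this kind of mount problem is to describe the stuck pod and list the ConfigMaps that actually exist — a missing ConfigMap shows up in the pod's events:</p> <pre><code>kubectl describe pod partition-cron-1548672000-w728w
kubectl get configmaps
</code></pre>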
<p>I have a pod running my application. The pod also contains my secret, which is mapped to <code>/secret/mysecret.json</code>. I connect to my pod with ssh and try to remove the secret from this pod instance:</p> <pre><code>rm /secret/mysecret.json
</code></pre> <p>I am getting the error:</p> <pre><code>rm: cannot remove 'mysecret.json': Read-only file system
</code></pre> <p>According to <a href="https://kubernetes.io/docs/concepts/configuration/secret/" rel="nofollow noreferrer">this article</a>, I tried to change the <code>readOnly</code> setting to <code>False</code>. No success.</p> <p>I also tried to unmount it, but got errors:</p> <pre><code>$ umount /secret/mysecret.json
umount: /app/secrets/app-specific: must be superuser to unmount
</code></pre> <p>How can I delete a secret from a pod?</p>
<p>The Kubernetes way to handle this is to delete the Secret object itself:</p> <pre><code>kubectl delete secret &lt;&lt;secret name goes here&gt;&gt;
</code></pre>
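<p>Note that a secret mounted as a volume stays projected (read-only) into pods that are already running, so deleting the file from inside the container is not possible. After deleting the Secret you would also remove the corresponding <code>volume</code>/<code>volumeMount</code> from the pod spec and recreate the pods — the names below are placeholders:</p> <pre><code>kubectl delete secret mysecret
# edit the Deployment/Pod spec to drop the secret volume, then:
kubectl delete pod &lt;pod-name&gt;
</code></pre>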
<p>Is there any way to know that the number of pods has scaled up or down as a result of Horizontal Pod Autoscaling, apart from the <code>kubectl get hpa</code> command?</p> <p>I want to trigger a particular script on every scale up or scale down of pods.</p>
<p>You can use the <code>status</code> field of the HPA to know when the HPA last scaled. Details about this can be found with the command below:</p> <pre><code>kubectl explain hpa.status
</code></pre> <p>From this status you can use the <code>lastScaleTime</code> field for your problem.</p> <pre><code>lastScaleTime  &lt;string&gt;
  last time the HorizontalPodAutoscaler scaled the number of pods; used by the
  autoscaler to control how often the number of pods is changed.
</code></pre>
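<p>For scripting, you can read those status fields directly with a JSONPath query instead of parsing <code>kubectl describe</code> output — the HPA name is a placeholder:</p> <pre><code>kubectl get hpa &lt;hpa-name&gt; -o jsonpath='{.status.lastScaleTime}'
kubectl get hpa &lt;hpa-name&gt; -o jsonpath='{.status.currentReplicas}'
</code></pre> <p>Polling these values (or watching the HPA's events) from a small cron script is one way to trigger your own action on each scale event.</p>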
<p>So I know how to check the yaml file of my service and deployment which is </p> <pre><code>$ kubectl get service/helloworld -o yaml $ kubectl get deployment/helloworld -o yaml </code></pre> <p>How do I find these files so I could edit them?</p> <p>I am using minikube if that helps</p>
<p>I would <strong><em>highly recommend</em></strong> changing the .yaml files and applying the resources again.</p> <p>But if for some reason you want to do it on the fly, you can go with:</p> <pre><code>$ kubectl edit service/helloworld -o yaml
$ kubectl edit deployment/helloworld -o yaml
</code></pre>
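<p>Note that the live objects are stored in the cluster (in etcd), not as files on your machine, so the declarative workflow recommended above looks roughly like this: dump the object to a file, edit it, and re-apply.</p> <pre><code>kubectl get deployment/helloworld -o yaml &gt; helloworld-deployment.yaml
# edit helloworld-deployment.yaml in your editor of choice, then:
kubectl apply -f helloworld-deployment.yaml
</code></pre>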
<p>I just setup Istio on EKS. I noticed that the gateway controller (is that what I should call it?) creates an ELB and a corresponding security group that allows incoming traffic on a few different ports:</p> <p><a href="https://i.stack.imgur.com/hLn3C.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hLn3C.png" alt="Security Group Rules"></a></p> <p>Right now, all of these rules allow traffic from everywhere (0.0.0.0/0), but I'd like to be able to restrict this to my VPN server. Is there a way to specify a security group id (ideally), or at least an IP for these rules?</p>
<p>There is a way to restrict the source IPs for the inbound rules of Istio's default ingress gateway during installation/upgrade of Istio via Helm.</p> <p>You do this by adjusting the default values of the Service object associated with your istio-ingressgateway pod.</p> <p>Here is how I'm doing it via Helm install:</p> <ol> <li>Install with Helm, using the --set option to override the default values (here the default values of the 'gateways' subchart):</li> </ol> <blockquote> <pre><code>$ helm install install/kubernetes/helm/istio --name istio-maxi --namespace istio-system \
    --set gateways.istio-ingressgateway.loadBalancerSourceRanges=143.231.0.0/16
</code></pre> </blockquote> <ol start="2"> <li>Here are the resulting inbound rules of the ELB standing in front of the Istio ingress gateway, as seen in the AWS console:</li> </ol> <p><a href="https://i.stack.imgur.com/ftIUX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ftIUX.png" alt="enter image description here"></a></p> <p><strong>Important note:</strong></p> <p>The <a href="https://github.com/istio/istio/commit/9708b61413eb07d3f5801a56811368a726d13d45" rel="nofollow noreferrer">loadBalancerSourceRanges</a> field is, as of now, only available in a <strong>pre-release</strong> state (1.1.0-snapshot.5) of the Istio Helm chart.</p>
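<p>Under the hood that Helm value ends up as the standard <code>loadBalancerSourceRanges</code> field on the generated <code>istio-ingressgateway</code> Service, so the relevant fragment of the rendered Service spec looks roughly like this (shown only as an illustration of where the restriction lives, not as a manifest to apply on its own):</p> <pre><code>apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 143.231.0.0/16
  # ports, selector, etc. omitted
</code></pre>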
<p>I've set up my Kubernetes cluster, and as part of that set up have set up an ingress rule to forward traffic to a web server.</p> <pre><code>--- apiVersion: extensions/v1beta1 kind: Ingress metadata: name: alpha-ingress annotations: kubernetes.io/ingress.class: nginx certmanager.k8s.io/cluster-issuer: letsencrypt-prod spec: tls: - hosts: - alpha.example.com secretName: letsencrypt-prod rules: - host: alpha.example.com http: paths: - backend: serviceName: web servicePort: 80 </code></pre> <p>Eventually the browser times out with a 504 error and in the Ingress log I see </p> <blockquote> <p>2019/01/27 23:45:38 [error] 41#41: *4943 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.131.24.163, server: alpha.example.com, request: "GET / HTTP/2.0", upstream: "<a href="http://10.244.93.12:80/" rel="nofollow noreferrer">http://10.244.93.12:80/</a>", host: "alpha.example.com"</p> </blockquote> <p>I don't have any services on that IP address ...</p> <pre><code>╰─$ kgs --all-namespaces 130 ↵ NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default database ClusterIP 10.245.181.187 &lt;none&gt; 5432/TCP 4d8h default kubernetes ClusterIP 10.245.0.1 &lt;none&gt; 443/TCP 9d default user-api ClusterIP 10.245.41.8 &lt;none&gt; 9000/TCP 4d8h default web ClusterIP 10.245.145.213 &lt;none&gt; 80/TCP,443/TCP 34h ingress-nginx ingress-nginx LoadBalancer 10.245.25.107 &lt;external-ip&gt; 80:31680/TCP,443:32324/TCP 50m kube-system grafana ClusterIP 10.245.81.91 &lt;none&gt; 80/TCP 6d1h kube-system kube-dns ClusterIP 10.245.0.10 &lt;none&gt; 53/UDP,53/TCP,9153/TCP 9d kube-system prometheus-alertmanager ClusterIP 10.245.228.165 &lt;none&gt; 80/TCP 6d2h kube-system prometheus-kube-state-metrics ClusterIP None &lt;none&gt; 80/TCP 6d2h kube-system prometheus-node-exporter ClusterIP None &lt;none&gt; 9100/TCP 6d2h kube-system prometheus-pushgateway ClusterIP 10.245.147.195 &lt;none&gt; 9091/TCP 6d2h kube-system prometheus-server ClusterIP 10.245.202.186 &lt;none&gt; 80/TCP 6d2h kube-system tiller-deploy ClusterIP 10.245.11.85 &lt;none&gt; 44134/TCP 9d </code></pre> <p>If I view the resolv.conf file on the ingress pod, it returns what it should ...</p> <pre><code>╰─$ keti -n ingress-nginx nginx-ingress-controller-c595c6896-klw25 -- cat /etc/resolv.conf 130 ↵ nameserver 10.245.0.10 search ingress-nginx.svc.cluster.local svc.cluster.local cluster.local options ndots:5 </code></pre> <p>dig/nslookup/host aren't available on that container, but if I create a simple busybox instance it gets the right IP with that same config:</p> <pre><code>╰─$ keti busybox -- nslookup web Server: 10.245.0.10 Address 1: 10.245.0.10 kube-dns.kube-system.svc.cluster.local Name: web Address 1: 10.245.145.213 web.default.svc.cluster.local </code></pre> <p>Can anyone give me any ideas what to try next?</p> <p><strong>Update #1</strong></p> <p>Here is the config for <code>web</code>, as requested in the comments. 
I'm also investigating why I can't directly <code>wget</code> anything from <code>web</code> using a busybox inside the cluster.</p> <pre><code>apiVersion: v1 kind: Service metadata: labels: io.kompose.service: web app: web name: web spec: ports: - name: "80" port: 80 targetPort: 80 - name: "443" port: 443 targetPort: 443 selector: io.kompose.service: web status: loadBalancer: {} --- apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: app: web name: web spec: replicas: 1 strategy: type: RollingUpdate selector: matchLabels: app: web template: metadata: labels: io.kompose.service: web app: web spec: containers: - image: &lt;private docker repo&gt; imagePullPolicy: IfNotPresent name: web resources: {} imagePullSecrets: - name: gcr status: {} </code></pre> <p><strong>Update 2</strong></p> <p>As per Michael's comment below, the IP address that it has resolved for <code>web</code> is one of it's endpoints:</p> <pre><code>╰─$ k get endpoints web 130 ↵ NAME ENDPOINTS AGE web 10.244.93.12:443,10.244.93.12:80 2d </code></pre>
<p>So, this all boiled down to the php-fpm service not having any endpoints, because I'd misconfigured the service selector!</p> <p>Some of the more eagle eyed readers might have spotted that my config began life as a conversion from a docker-compose config file (my dev environment), and I've built on it from there.</p> <p>The problem came because I changed the labels &amp; selector for the deployment, but not the service itself.</p> <pre><code>apiVersion: v1 kind: Service metadata: name: user-api labels: io.kompose.service: user-api app: user-api spec: ports: - name: "9000" port: 9000 targetPort: 9000 selector: io.kompose.service: user-api status: loadBalancer: {} --- apiVersion: extensions/v1beta1 kind: Deployment metadata: labels: app: user-api name: user-api spec: replicas: 1 selector: matchLabels: app: user-api template: metadata: labels: app: user-api spec: ... etc </code></pre> <p>You can see I was still using the old selector that kompose created for me, <code>io.kompose.service: user-api</code> instead of the newer <code>app: user-api</code></p> <p>I followed the advice from @coderanger, while the nginx service was responding, the php-fpm one wasn't.</p> <p>A quick look at the documentation for <a href="https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/" rel="nofollow noreferrer">Connecting Applications With Services</a> says :</p> <blockquote> <p>As mentioned previously, a Service is backed by a group of Pods. These Pods are exposed through endpoints. The Service’s selector will be evaluated continuously and the results will be POSTed to an Endpoints object also named my-nginx.</p> </blockquote> <p>When I checked the selector of both the service &amp; deployment template I saw they were different, now they match and everything works as expected.</p>
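<p>A quick way to catch this class of mistake in the future is to compare the Service's selector with the Pod labels and check whether the Service has any endpoints at all:</p> <pre><code>kubectl describe service user-api      # check the Selector and Endpoints lines
kubectl get endpoints user-api
kubectl get pods -l app=user-api --show-labels
</code></pre> <p>An empty <code>Endpoints</code> list almost always means the selector does not match the pod labels (or the pods are not Ready).</p>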
<p>I have a set of services that i want to expose as an ingress load balancer. I select nginx to be the ingress because of the ability to force http to https redirects.</p> <p>Having an ingress config like </p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: api-https annotations: nginx.ingress.kubernetes.io/ssl-redirect: true nginx.ingress.kubernetes.io/force-ssl-redirect: true nginx.org/ssl-services: "api,spa" kubernetes.io/ingress.class: nginx spec: tls: - hosts: - api.some.com - www.some.com secretName: secret rules: - host: api.some.com http: paths: - path: / backend: serviceName: api servicePort: 8080 - host: www.some.com http: paths: - path: / backend: serviceName: spa servicePort: 8081 </code></pre> <p>gke creates the nginx ingress load balancer but also another load balancer with backends and everything like if where not nginx selected but gcp as ingress.</p> <p>below screenshot shows in red the two unexpected LB and in blue the two nginx ingress LB one for our qa and prod env respectively.</p> <p><a href="https://i.stack.imgur.com/8uiT7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8uiT7.png" alt="gcplb"></a></p> <p>output from kubectl get services</p> <pre><code>xyz@cloudshell:~ (xyz)$ kubectl get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE api NodePort 1.2.3.4 &lt;none&gt; 8080:32332/TCP,4433:31866/TCP 10d nginx-ingress-controller LoadBalancer 1.2.6.9 12.13.14.15 80:32321/TCP,443:32514/TCP 2d nginx-ingress-default-backend ClusterIP 1.2.7.10 &lt;none&gt; 80/TCP 2d spa NodePort 1.2.8.11 &lt;none&gt; 8082:31847/TCP,4435:31116/TCP 6d </code></pre> <p>screenshot from gcp gke services view of the ingress with wrong info</p> <p><a href="https://i.stack.imgur.com/vz2Zz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vz2Zz.png" alt="ingress"></a></p> <p>Is this expected? </p> <p>Did i miss any configuration to prevent this extra load balancer for been created?</p>
<p>On GCP GKE the GCE ingress controller is enabled by default and will always lead to a new load balancer for any Ingress definition, even if the <code>ingress.class</code> annotation is specified.</p> <p><a href="https://github.com/kubernetes/ingress-nginx/issues/3703" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/issues/3703</a></p> <p>So to fix it, we should remove the GCE ingress controller from the cluster, as mentioned in <a href="https://github.com/kubernetes/ingress-gce/blob/master/docs/faq/gce.md#how-do-i-disable-the-gce-ingress-controller" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-gce/blob/master/docs/faq/gce.md#how-do-i-disable-the-gce-ingress-controller</a></p>
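<p>On GKE the built-in controller is part of the <code>HttpLoadBalancing</code> add-on, so disabling it cluster-wide (per the FAQ linked above) looks roughly like this — cluster name and zone are placeholders:</p> <pre><code>gcloud container clusters update my-cluster --zone us-central1-a \
    --update-addons=HttpLoadBalancing=DISABLED
</code></pre> <p>Be aware this disables GCE Ingress handling for the whole cluster, so only do it if every Ingress is meant to go through the nginx controller.</p>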
<p><strong>Background</strong><br> I am testing the Kubernetes setting on <code>Minikube</code>. I have two simple services successfully setup and they are backed by simple docker image. Below is an example of my service configuration. I use <code>NodePort</code> to expose the service on port 80. </p> <pre><code># service 1 kind: Service apiVersion: v1 metadata: name: service1 spec: selector: app: service1 ports: - name: http protocol: TCP port: 80 targetPort: 8080 type: NodePort --- apiVersion: apps/v1 kind: Deployment metadata: name: service1-deployment labels: app: service1 spec: replicas: 1 selector: matchLabels: app: service1 template: metadata: labels: app: service1 spec: containers: - name: service1 image: service1 imagePullPolicy: Never ports: - containerPort: 8080 --- # service 2 kind: Service apiVersion: v1 metadata: name: service2 spec: selector: app: service2 ports: - name: http protocol: TCP port: 80 targetPort: 8080 type: NodePort --- apiVersion: apps/v1 kind: Deployment metadata: name: service2-deployment labels: app: service2 spec: replicas: 1 selector: matchLabels: app: service2 template: metadata: labels: app: service2 spec: containers: - name: service2 image: service2 imagePullPolicy: Never ports: - containerPort: 8080 </code></pre> <p><strong>Issue</strong><br> I use <code>docker exec -it</code> to go inside docker container. I can <code>curl service1</code> from <code>service2</code> container without any issue. However, if I try to <code>curl service2</code> from <code>service2</code> container, it gets a timeout connection error. </p> <p>Results from <code>curl -v service2</code></p> <blockquote> <p>Rebuilt URL to: service2/<br> Trying 10.101.116.46...<br> TCP_NODELAY set<br> connect to 10.101.116.46 port 80 failed: Connection timed out<br> Failed to connect to service2 port 80: Connection timed out<br> Closing connection 0<br> curl: (7) Failed to connect to service2 port 80: Connection timed out</p> </blockquote> <p>I guess the DNS records gets resolved correctly, because <code>10.101.116.46</code> is the correct IP attached to <code>service2</code>. Then what could be the issue cause this problem? </p> <p><strong>More Followup Tests</strong><br> From my understanding, the Kubernetes service internally maps the port to container port, so in my case it maps service port <code>80</code> to pod port <code>8080</code>. From <code>service2</code> container, I am able to <code>curl &lt;service2 pod ip&gt;:8080</code> successfully, but I am not able to <code>curl &lt;service2 ip&gt;</code>, which resolves connection time out error. And this happens exactly the same inside the <code>service1</code> container that it can access pod but no service. I do not understand is there any internal setting that I miss? </p>
<p>This could be any of these:</p> <ul> <li>The pod backing service2 has a process that is listening on <code>127.0.0.1</code> only, rather than on <code>0.0.0.0</code> (all addresses).</li> <li>service2 issues a redirect and your Service only listens on port <code>80</code>. You would have to expose the other port (possibly <code>443</code>) and run <code>curl</code> with the <code>-L</code> option to follow the redirect.</li> <li>The pod backing service2 is not even listening on the <code>targetPort</code> (8080).</li> </ul>
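<p>To check the first and third points, you can look at what the container is actually listening on from inside the pod — the pod name is a placeholder, and <code>netstat</code> is available in busybox-based images (other images may only have <code>ss</code>):</p> <pre><code>kubectl exec -it &lt;service2-pod&gt; -- netstat -tln
</code></pre> <p>You want to see the app bound to <code>0.0.0.0:8080</code> (or <code>:::8080</code>), not <code>127.0.0.1:8080</code>.</p>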
<p>I should be able to mount a local directory as a persistent volume data folder for a mysql docker container running under minikube/kubernetes. </p> <p>I don't have any problem achieving a shared volume running it with Docker directly, but running it under kubernetes, I'm not able to</p> <pre><code>osx 10.13.6 Docker Desktop Community version 2.0.0.2 (30215) Channel: stable 0b030e17ca Engine 18.09.1 Compose: 1.23.2 Machine 0.16.1 Kubernetes v1.10.11 minikube version: v0.33.1 </code></pre> <p>Steps to reproduce the behavior</p> <pre><code>install docker-for-mac and enable kubernetes </code></pre> <p>create a directory on the mac to be shared as the persistent volume storage, e.g.</p> <pre><code>sudo mkdir -m 777 -p /Users/foo/mysql </code></pre> <p>deployment.yml</p> <pre><code># For use on docker for mac kind: StorageClass apiVersion: storage.k8s.io/v1beta1 metadata: name: localstorage provisioner: docker.io/hostpath --- apiVersion: v1 kind: PersistentVolumeClaim metadata: labels: app: mysql name: mysql-pvc spec: storageClassName: localstorage accessModes: - ReadWriteOnce - ReadOnlyMany resources: requests: storage: 20Gi --- apiVersion: v1 kind: PersistentVolume metadata: name: mysql-pv labels: type: local spec: storageClassName: localstorage capacity: storage: 20Gi accessModes: - ReadWriteOnce - ReadOnlyMany hostPath: # this is the path on laptop? path: "/Users/foo/mysql" --- apiVersion: v1 kind: Service metadata: name: mysql-service spec: type: NodePort selector: app: mysql-service ports: - port: 3306 targetPort: 3306 --- apiVersion: apps/v1 kind: Deployment metadata: name: mysql-server labels: app: mysql-server spec: selector: matchLabels: app: mysql-server template: metadata: labels: app: mysql-server spec: containers: - name: mysql-server image: mysql:5.7 env: - name: MYSQL_ROOT_PASSWORD value: "" - name: MYSQL_ALLOW_EMPTY_PASSWORD value: "yes" ports: - containerPort: 3306 name: mysql volumeMounts: - name: mysql-pvc # this is the path on the pod container? mountPath: "/mnt/data" volumes: - name: mysql-pvc persistentVolumeClaim: claimName: mysql-pvc </code></pre> <p>I can start up the pod, connect through mysql client, create a database, but when pod shuts down, the data does not persist and there is nothing written to the mounted data folder</p> <pre><code>kubectl create -f deployment.yml kubectl port-forward mysql-server-6b64c4545f-kp7h9 3306:3306 mysql -h 127.0.0.1 -P 3306 -u root mysql&gt; create database foo; Query OK, 1 row affected (0.00 sec) mysql&gt; show databases; +--------------------+ | Database | +--------------------+ | information_schema | | foo | | mysql | | performance_schema | | sys | +--------------------+ 5 rows in set (0.00 sec) </code></pre> <p>....</p> <p>deleting the deployment:</p> <pre><code>kubectl delete sc "localstorage" kubectl delete persistentvolume "mysql-pv" kubectl delete persistentvolumeclaim "mysql-pvc" kubectl delete service "mysql-service" kubectl delete deployment.apps "mysql-server" kubectl delete events --all </code></pre> <p>re-create and connect again as above</p> <pre><code>mysql&gt; show databases; +--------------------+ | Database | +--------------------+ | information_schema | | mysql | | performance_schema | | sys | +--------------------+ 4 rows in set (0.01 sec) mysql&gt; </code></pre>
<p>You must create a <a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="nofollow noreferrer"><strong>Persistent Volume</strong></a>, defining the <a href="https://kubernetes.io/docs/concepts/storage/storage-classes/#local" rel="nofollow noreferrer"><strong>Storage Class as Local</strong></a>, then map it to a local path.</p> <p>Creating the Storage Class</p> <p><strong>storage-class.yml</strong></p> <pre><code>kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
</code></pre> <p>Then run <code>kubectl create -f storage-class.yml</code></p> <p>Creating the Persistent Volume</p> <p><strong>pv-local.yaml</strong></p> <pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - cka
</code></pre> <p>Create the persistent volume by running <code>kubectl create -f pv-local.yaml</code></p> <p>Last, create the persistent volume claim</p> <p><strong>pvc1.yml</strong></p> <pre><code>kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc1
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  resources:
    requests:
      storage: 10Gi
</code></pre> <p>Create the persistent volume claim by running <code>kubectl create -f pvc1.yml</code></p> <p>To list persistent volumes, run <code>kubectl get pv</code>. You should see some output like</p> <pre><code>NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS    REASON   AGE
local-pv   10Gi       RWO            Retain           Available           local-storage            10s
</code></pre> <p>The persistent volume will be bound as soon as a pod consumes the claim.</p> <p><a href="https://serverascode.com/2018/09/19/persistent-local-volumes-kubernetes.html" rel="nofollow noreferrer">This</a> post may help you a little bit more.</p>
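<p>To actually consume the claim (and let the <code>WaitForFirstConsumer</code> binding happen), reference it from a pod spec. A minimal sketch reusing the MySQL container from the question is below; note that MySQL keeps its data under <code>/var/lib/mysql</code>, so that is the path worth mounting if the goal is persisting databases:</p> <pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mysql-test
spec:
  containers:
  - name: mysql
    image: mysql:5.7
    env:
    - name: MYSQL_ALLOW_EMPTY_PASSWORD
      value: "yes"
    volumeMounts:
    - name: data
      mountPath: /var/lib/mysql
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: pvc1
</code></pre>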
<p>I have kibana deployed to kubernetes cluster as StatefulSet. However, when pointing my browser to the kibana, it returns {"statusCode":404,"error":"Not Found","message":"Not Found"}. Any advice and insight is appreciated. Here is the log that I see in the pod when accessing the application at the browser using <a href="http://app.domain.io/kibana" rel="nofollow noreferrer">http://app.domain.io/kibana</a></p> <pre><code>{"type":"response","@timestamp":"2019-01-29T04:18:50Z","tags":[],"pid":1,"method":"get","statusCode":404,"req":{"url":"/kibana","method":"get","headers":{"x-forwarded-for":"[IP]","x-forwarded-proto":"https","x-forwarded-port":"443","host":"[host]","x-amzn-trace-id":"Root=1-5c4fd42a-1261c1e0474144902a2d6840","cache-control":"max-age=0","upgrade-insecure-requests":"1","user-agent":"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36","accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8","accept-encoding":"gzip, deflate, br","accept-language":"en-US,en;q=0.9,zh-CN;q=0.8,zh-TW;q=0.7,zh;q=0.6,ko;q=0.5"},"remoteAddress":"[IP]","userAgent":"10.0.2.185"},"res":{"statusCode":404,"responseTime":19,"contentLength":9},"message":"GET /kibana 404 19ms - 9.0B"} </code></pre> <pre><code>apiVersion: v1 kind: Service metadata: name: svc-kibana labels: app: app-kibana spec: selector: app: app-kibana # tier: database ports: - name: kibana protocol: TCP port: 8080 targetPort: 5601 clusterIP: None # Headless --- apiVersion: apps/v1 kind: StatefulSet metadata: name: kibana spec: serviceName: "svc-kibana" podManagementPolicy: "Parallel" # Default is OrderedReady replicas: 1 # Default is 1 selector: matchLabels: app: app-kibana # Has to match .spec.template.metadata.labels template: metadata: labels: app: app-kibana # Has to match .spec.selector.matchLabels spec: terminationGracePeriodSeconds: 10 containers: - name: kibana securityContext: capabilities: add: - IPC_LOCK - SYS_RESOURCE image: kibana:6.5.4 imagePullPolicy: Always env: - name: ELASTICSEARCH_URL value: http://svc-elasticsearch:9200 - name: SERVER_BASEPATH value: /api/v1/namespaces/default/services/svc-kibana/proxy ports: - containerPort: 5601 name: kibana protocol: TCP </code></pre> <p>Here is the healthcheck from AWS ALB:</p> <pre><code>{"type":"response","@timestamp":"2019-01-29T06:30:53Z","tags":[],"pid":1,"method":"get","statusCode":200,"req":{"url":"/app/kibana","method":"get","headers":{"host":"[IP]:5601","connection":"close","user-agent":"ELB-HealthChecker/2.0","accept-encoding":"gzip, compressed"},"remoteAddress":"[IP]","userAgent":"[IP]"},"res":{"statusCode":200,"responseTime":27,"contentLength":9},"message":"GET /app/kibana 200 27ms - 9.0B"} </code></pre> <p>I tried to remove the ENV values and use ConfigMap mounted on /etc/kibana/kibana.yml with the following config but to no avail:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: creationTimestamp: "2019-01-29T02:12:55Z" name: kibana-config namespace: default resourceVersion: "4178388" selfLink: /api/v1/namespaces/default/configmaps/kibana-config uid: 63b10866-236b-11e9-a14d-482ae31e6a94 data: kibana.yml: |+ server.port: 5601 server.host: "0.0.0.0" elasticsearch.url: "http://svc-elasticsearch:9200" kibana.index: ".kibana" logging.silent: false logging.quiet: false logging.verbose: true </code></pre>
<p>It works now after I add the following to the Kibana config:</p> <pre><code> server.basePath: "/my-kibana" server.rewriteBasePath: true </code></pre> <p>Thanks to Matthew L Daniel, I have switched the healthcheck to <code>/my-kibana/api/status</code></p>
<p>I created kubernetes cluster on aws ec2 using kubeadm. Now I need to autoscale the K8's cluster when there are not enough resources on nodes to schedule new pods, How can I achieve autoscaling feature for my cluster?</p>
<p>Unfortunately there isn't a great answer if you mean you manually ran <code>kubeadm</code> on some EC2 instance. <code>cluster-autoscaler</code> is the thing to use, but it requires you deploy your nodes using Autoscaling Groups. It's possible to use ASGs and <code>kubeadm</code> but I don't know of anything off-the-shelf for it.</p>
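<p>If you do move the workers into an Auto Scaling Group, the usual pattern is then to run the cluster-autoscaler Deployment inside the cluster and point it at that ASG. A rough sketch of the relevant container spec follows; the ASG name, min/max sizes, region and image tag are placeholders, and the exact flags should be checked against the cluster-autoscaler release matching your Kubernetes version:</p> <pre><code>containers:
- name: cluster-autoscaler
  image: k8s.gcr.io/cluster-autoscaler:v1.12.x   # pick the tag matching your cluster version
  command:
  - ./cluster-autoscaler
  - --v=4
  - --cloud-provider=aws
  - --skip-nodes-with-local-storage=false
  # min:max:name of the Auto Scaling Group holding the worker nodes (placeholder name)
  - --nodes=1:10:my-worker-asg
  env:
  - name: AWS_REGION
    value: us-east-1   # your region
</code></pre> <p>The worker-node instance role also needs IAM permissions to describe and change the size of that Auto Scaling Group.</p>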
<p>Assume I have a cluster with 2 nodes and a Pod with 2 replicas. Can I have a guarantee that my 2 replicas are deployed on 2 different nodes, so that when a node is down, the application keeps running? By default, does the scheduler work in best-effort mode to assign the 2 replicas to distinct nodes?</p>
<h2>Pod AntiAffinity</h2> <p>Pod anti-affinity can be used to repel the pods from each other, so that no two of them are scheduled on the same node.</p> <p>Use the following configuration.</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx spec: replicas: 2 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - nginx topologyKey: "kubernetes.io/hostname" containers: - name: nginx image: nginx </code></pre> <p>This uses the anti-affinity feature, so as long as you have at least 2 nodes there is a guarantee that no two of these pods will be scheduled on the same node.</p>
<p>Im running <a href="https://github.com/EpistasisLab/tpot" rel="nofollow noreferrer">tpot</a> with dask running on kubernetes cluster on gcp, the cluster is 24 cores 120 gb memory with 4 nodes of kubernetes, my kubernetes yaml is </p> <pre><code>apiVersion: v1 kind: Service metadata: name: daskd-scheduler labels: app: daskd role: scheduler spec: ports: - port: 8786 targetPort: 8786 name: scheduler - port: 8787 targetPort: 8787 name: bokeh - port: 9786 targetPort: 9786 name: http - port: 8888 targetPort: 8888 name: jupyter selector: app: daskd role: scheduler type: LoadBalancer --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: daskd-scheduler spec: replicas: 1 template: metadata: labels: app: daskd role: scheduler spec: containers: - name: scheduler image: uyogesh/daskml-tpot-gcpfs # CHANGE THIS TO BE YOUR DOCKER HUB IMAGE imagePullPolicy: Always command: ["/opt/conda/bin/dask-scheduler"] resources: requests: cpu: 1 memory: 20000Mi # set aside some extra resources for the scheduler ports: - containerPort: 8786 --- apiVersion: extensions/v1beta1 kind: Deployment metadata: name: daskd-worker spec: replicas: 3 template: metadata: labels: app: daskd role: worker spec: containers: - name: worker image: uyogesh/daskml-tpot-gcpfs # CHANGE THIS TO BE YOUR DOCKER HUB IMAGE imagePullPolicy: Always command: [ "/bin/bash", "-cx", "env &amp;&amp; /opt/conda/bin/dask-worker $DASKD_SCHEDULER_SERVICE_HOST:$DASKD_SCHEDULER_SERVICE_PORT_SCHEDULER --nthreads 8 --nprocs 1 --memory-limit 5e9", ] resources: requests: cpu: 2 memory: 20000Mi </code></pre> <p>My data is 4 million rows and 77 columns, whenever i run fit on the tpot classifier, it runs on the dask cluster for a while then it crashes, the output log looks like </p> <pre><code>KilledWorker: ("('gradientboostingclassifier-fit-1c9d29ce92072868462946c12335e5dd', 0, 4)", 'tcp://10.8.1.14:35499') </code></pre> <p>I tried increasing threads per worker as suggested by the dask distributed docs, yet the problem persists. Some observations i have made are:</p> <ul> <li><p>It'll take longer time to crash if n_jobs is less (for n_jobs=4, it ran for 20 mins before crashing) where as crashes instantly for n_jobs=-1.</p></li> <li><p>It'll actually start working and get optimized model for fewer data, with 10000 data it works fine.</p></li> </ul> <p>So my question is, what changes and modifications do i need to make this work, I guess its doable as ive heard dask is capable of handling even bigger data than mine. </p>
<p>Best practices described on Dask's <a href="https://kubernetes.dask.org/en/latest/#best-practices" rel="nofollow noreferrer">official</a> documentation page say: </p> <blockquote> <p>Kubernetes resource limits and requests should match the --memory-limit and --nthreads parameters given to the dask-worker command. Otherwise your workers may get killed by Kubernetes as they pack into the same node and overwhelm that nodes’ available memory, leading to <strong>KilledWorker</strong> errors.</p> </blockquote> <p>In your case these configuration parameters' values don't match, from what I can see:</p> <p>Kubernetes' container limit is <strong>20</strong> GB vs. the dask-worker command limit of <strong>5</strong> GB.</p>
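<p>As a sketch of what a matched configuration could look like (the numbers below are illustrative, not a tuned recommendation), the dask-worker flags should describe the same budget as the Kubernetes resources of the worker container:</p> <pre><code>containers:
- name: worker
  image: uyogesh/daskml-tpot-gcpfs
  imagePullPolicy: Always
  command:
  - /bin/bash
  - -cx
  # 2 threads x 1 process, and a Dask memory limit equal to the container memory
  - "env &amp;&amp; /opt/conda/bin/dask-worker $DASKD_SCHEDULER_SERVICE_HOST:$DASKD_SCHEDULER_SERVICE_PORT_SCHEDULER --nthreads 2 --nprocs 1 --memory-limit 20e9"
  resources:
    requests:
      cpu: 2
      memory: 20000Mi
    limits:
      cpu: 2
      memory: 20000Mi
</code></pre> <p>With <code>--nthreads 8</code> the worker also advertises more CPU than the 2 cores it requests, so keeping <code>--nthreads</code> in line with the CPU request/limit is part of the same rule.</p>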
<p>Pod controlled by StatefulSet is stuck in <code>ContainerCreating</code> state</p> <p>kubectl get pods</p> <pre><code>md-0 1/1 Running 0 4h 10.242.208.59 node-5 md-1 1/1 Running 0 4h 10.242.160.36 node-6 md-2 0/1 ContainerCreating 0 4h &lt;none&gt; node-6 </code></pre> <p>kubectl describe pod md-2</p> <pre><code>Conditions: Type Status Initialized True Ready False ContainersReady False PodScheduled True ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedCreatePodSandBox 2m (x68 over 4h) kubelet, node-6 Failed create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded </code></pre> <p>kubectl describe statefulset md</p> <pre><code>Replicas: 3 desired | 3 total Pods Status: 2 Running / 1 Waiting / 0 Succeeded / 0 Failed ... Events: &lt;none&gt; </code></pre> <p>kubelet log from node-6</p> <pre><code>RunPodSandbox from runtime service failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded CreatePodSandbox for pod "md-2_exc(a995dd3d-158d-11e9-967b-6cb311235088)" failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded createPodSandbox for pod "md-2_exc(a995dd3d-158d-11e9-967b-6cb311235088)" failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded Error syncing pod a995dd3d-158d-11e9-967b-6cb311235088 ("md-2_exc(a995dd3d-158d-11e9-967b-6cb311235088)"), skipping: failed to "CreatePodSandbox" for "md-2_exc(a995dd3d-158d-11e9-967b-6cb311235088)" with CreatePodSandboxError: "CreatePodSandbox for pod \"md-2_exc(a995dd3d-158d-11e9-967b-6cb311235088)\" failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded" </code></pre> <p>I have two other StatefulSets and they works as expected. For some reason this one is broken. Also direct <code>kubectl run</code> and <code>docker run</code> works fine.</p> <p><strong>update 2019-01-18</strong></p> <p>After restoration of change's timeline I see that this specific pod was deleted with docker command bypassing kubernetes.</p> <p>Probably this somehow corrupted kubernetes' state or something.</p> <p>After numerous searching, asking and troubleshooting I still could not find what's exactly wrong. So I had to restart kubelet (<code>systemctl restart kubelet</code>) on node where pod was assigned. And the issue is gone.</p> <p>I hoped to understand how to check what exactly wrong with kubernetes (or kubelet?) but could not find any clues. And kubelet behavior remains black box in this case.</p>
<p>As <a href="https://stackoverflow.com/users/10901442/alexar">alexar</a> mentioned in update:</p> <p>After restoration of change's timeline I see that this specific pod was deleted with docker command bypassing kubernetes.</p> <p>Probably this somehow corrupted kubernetes' state or something.</p> <p>After numerous searching, asking and troubleshooting I still could not find what's exactly wrong. So I had to restart kubelet (systemctl restart kubelet) on node where pod was assigned. And the issue is gone.</p>
<p>I'm trying to mount a local directory to be used by a container in kubernetes, but getting this error:</p> <pre><code>$ kubectl logs mysql-pd chown: changing ownership of '/var/lib/mysql/': Input/output error </code></pre> <p>minikube version: v0.33.1</p> <p>docker for mac version: 2.0.0.2 (30215)</p> <p>Engine: 18.09.1</p> <p>Kubernetes: v1.10.11</p> <p>I'm starting up minikube with mounted directory:</p> <pre><code>minikube start --mount-string /Users/foo/mysql_data:/mysql_data --mount </code></pre> <p>deployment.yml</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: mysql-pd spec: containers: - image: mysql:5.7 name: mysql-container env: - name: MYSQL_ROOT_PASSWORD value: "" - name: MYSQL_ALLOW_EMPTY_PASSWORD value: "yes" ports: - containerPort: 3306 volumeMounts: - mountPath: "/var/lib/mysql" name: host-mount volumes: - name: host-mount hostPath: path: "/mysql_data" </code></pre>
<p>As @Matthew L Daniel mentioned in the comments, the main purpose of using <a href="https://kubernetes.io/docs/concepts/storage/volumes/#hostpath" rel="nofollow noreferrer">hostPath</a> is to mount a local folder from your machine which is hosting minikube inside to the nested Pod, therefore it's not necessary to mount local directory inside to minikube. Also, take a look at this <a href="https://kubernetes.io/docs/setup/minikube/#mounted-host-folders" rel="nofollow noreferrer">article</a> which explains some restrictions about host folder mounting for the particular VM driver in minikube.</p>
<p>I have create EKS cluster as specified in <a href="https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html" rel="noreferrer">https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html</a></p> <p>Added worker nodes as specified in above link Step 3: Launch and Configure Amazon EKS Worker Nodes</p> <p>In security Group also I added rule for enabling ssh to worker nodes. When I tried to login to worker node with 'ec2-user' username and with valid key SSH Login is not happening.</p> <p>Can anyone help me in debugging this issue ? </p>
<p>I found a workaround. I created an EC2 instance in the same VPC that is used by the worker nodes, and used the same security group and key pair for the newly created instance. Logging in to this new EC2 instance works like a charm (I don't know why it won't work for the worker nodes). Once logged into that instance, I could SSH to the worker nodes from there using their private IPs, which works as expected.</p> <p>Again, this is a workaround. I am not sure why I wasn't able to log in to the worker nodes directly.</p>
<p>We are deploying Cassandra docker image 3.10 via k8s as StatefullSet.</p> <p>I tried to set GC to G1GC adding <code>-XX:+UseG1GC</code> to JAVA_OPTS environment variable, but Cassandra is using the default CMS GC as set in the jvm.opts file.</p> <p>from running <code>ps aux</code> in the pod I'm getting Cassandra configuration:</p> <pre><code>USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND cassand+ 1 28.0 10.1 72547644 6248956 ? Ssl Jan28 418:43 java -Xloggc:/var/log/cassandra/gc.log -ea -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -XX:+HeapDumpOnOutOfMemoryError -Xss256k -XX:StringTableSize=1000003 -XX:+AlwaysPreTouch -XX:-UseBiasedLocking -XX:+UseTLAB -XX:+ResizeTLAB -XX:+UseNUMA -XX:+PerfDisableSharedMem -Djava.net.preferIPv4Stack=true -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=1 -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSWaitDuration=10000 -XX:+CMSParallelInitialMarkEnabled -XX:+CMSEdenChunksRecordAlways -XX:+CMSClassUnloadingEnabled -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintHeapAtGC -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -XX:+PrintPromotionFailure -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M -Xms2G -Xmx2G -Xmn1G -XX:CompileCommandFile=/etc/cassandra/hotspot_compiler -javaagent:/usr/share/cassandra/lib/jamm-0.3.0.jar -Dcassandra.jmx.remote.port=7199 -Dcom.sun.management.jmxremote.rmi.port=7199 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.password.file=/etc/cassandra/jmxremote.password -Djava.library.path=/usr/share/cassandra/lib/sigar-bin -javaagent:/usr/share/cassandra/jmx_prometheus_javaagent-0.10.jar=7070:/etc/cassandra/jmx_prometheus_cassandra.yaml </code></pre> <p>there is no <code>-XX:+UseG1GC</code> property.</p> <p>Is there a way to override the jvm.opts at runtime, so I don't have to build the image for every small change? or I must add the costume jvm.opts file to the docker image I'm building?</p>
<p>The best and cleanest option is a ConfigMap. You can create a ConfigMap for that file so that the JVM options file can be accessed and changed from outside the pod. That way you can change the configuration as many times as you want without rebuilding the image; the pod only needs to restart for the JVM to pick up the new options.</p> <p>For more details refer to: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-configmaps-from-files" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-configmaps-from-files</a> </p>
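<p>A minimal sketch of what that could look like is below. The exact file name and mount path depend on the Cassandra image (the official 3.x images read <code>/etc/cassandra/jvm.options</code>), so treat those as assumptions to verify against your image:</p> <pre><code>apiVersion: v1
kind: ConfigMap
metadata:
  name: cassandra-jvm-options
data:
  jvm.options: |
    # full JVM options file, e.g. switching the collector to G1
    -XX:+UseG1GC
    -Xms2G
    -Xmx2G
</code></pre> <p>and in the StatefulSet pod spec:</p> <pre><code>containers:
- name: cassandra
  image: cassandra:3.10
  volumeMounts:
  - name: jvm-options
    mountPath: /etc/cassandra/jvm.options   # path assumed, check your image
    subPath: jvm.options
volumes:
- name: jvm-options
  configMap:
    name: cassandra-jvm-options
</code></pre>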
<p>I've got a JupyterHub Kubernetes deployment.</p> <p>When I create and attach a persistent volume (PV) it wipes out the home directory that is part of my image. It replaces it with an empty home directory where anything is written will be persisted as expected (that is fine).</p> <p>How can I get the files from my image's home folder into the PV home folder?</p> <p>Here is an <a href="https://zero-to-jupyterhub.readthedocs.io/en/latest/user-environment.html" rel="noreferrer">example from the docs</a> that unfortunately seems to only copy from the new PV (not the image):</p> <pre><code>singleuser: lifecycleHooks: postStart: exec: command: ["cp", "-a", "src", "target"] </code></pre> <p>Here is my singleuser configuration:</p> <pre><code>singleuser: image: name: myimage tag: latest pullPolicy: Always storage: capacity: 10Gi dynamic: storageClass: standard </code></pre>
<p>The above should work fine.</p> <p>You are probably mounting the PV on your home directory that is the same home directory of the container. You can either mount the PV on a different directory and do the copy or create a new image where your data is not stored in your home directory. This is an example of how to use <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="nofollow noreferrer"><code>mountPath</code></a>:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: jypyterhuyb namespace: default spec: volumes: - name: myvol ... containers: - name: jypyter image: "jypytercontainer" volumeMounts: - name: myvol mountPath: /mnt/mypath </code></pre>
<p>Can I run a computer vision pipeline with Kubeflow? Is it a good idea, would it run efficiently?</p> <p>Let's say the steps of the pipeline would need to be image segmentation, some filtering and what not (gpu enabled opencv until now) and maybe a tensorflow serving for a CNN at the end.</p> <p>Any useful resources?</p> <p>Thanks,</p>
<p>The <a href="https://cloud.google.com/blog/products/ai-machine-learning/getting-started-kubeflow-pipelines" rel="nofollow noreferrer">kubeflow pipelines</a> would be a good fit for your specific use case. The idea is that you containerize all the individual steps that you want to have decoupled, something like: 1/ preprocessing, 2/ training, 3/ serving. Each container is designed so it can take the relevant arguments that you want to modify over time to run different versions of the pipeline. </p> <ul> <li>For the preprocessing image I would suggest starting from a GPU image with opencv installed that drops the output on Google Cloud Storage. </li> <li>For the training you could leverage the <code>google/cloud-sdk:latest</code> image that comes with the gcloud command, so you just copy over your code and run the ml engine command. </li> <li>For serving, you could use ml engine to deploy the model and thus build the image again from <code>google/cloud-sdk:latest</code>; alternatively you could use the TF serving <a href="https://hub.docker.com/r/tensorflow/serving/tags/" rel="nofollow noreferrer">images</a> that are available off the shelf, where you only need to specify the bucket where your saved model is stored and the model name (<a href="https://www.tensorflow.org/serving/docker" rel="nofollow noreferrer">see instructions</a>). </li> </ul> <p>This <a href="https://towardsdatascience.com/how-to-create-and-deploy-a-kubeflow-machine-learning-pipeline-part-1-efea7a4b650f" rel="nofollow noreferrer">blog post</a> describes how to build a similar pipeline.</p>
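<p>To give an idea of how those containers could be tied together, here is a minimal pipeline sketch using the Kubeflow Pipelines SDK. The image names, bucket paths and training arguments are placeholders for whatever your own containers expect, and the exact SDK calls should be checked against the kfp version you install:</p> <pre><code>import kfp.dsl as dsl


@dsl.pipeline(name="cv-pipeline", description="segmentation, training, serving")
def cv_pipeline(raw_data="gs://my-bucket/raw", processed_data="gs://my-bucket/processed"):
    # 1/ preprocessing: GPU/OpenCV image writing its output to GCS (placeholder image)
    preprocess = dsl.ContainerOp(
        name="preprocess",
        image="gcr.io/my-project/opencv-preprocess:latest",
        arguments=["--input", raw_data, "--output", processed_data],
    )

    # 2/ training: submit an ML Engine job from the cloud-sdk image (placeholder arguments)
    train = dsl.ContainerOp(
        name="train",
        image="google/cloud-sdk:latest",
        command=["bash", "-c"],
        arguments=[
            "gcloud ml-engine jobs submit training cv_job "
            "--region us-central1 --module-name trainer.task "
            "--package-path trainer/ --job-dir gs://my-bucket/models"
        ],
    )
    train.after(preprocess)
</code></pre> <p>The pipeline is then compiled (for example with <code>kfp.compiler.Compiler().compile(cv_pipeline, "cv-pipeline.tar.gz")</code>) and uploaded through the Pipelines UI; serving can be added as a further step in the same way.</p>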
<p>I have a private registry, that it's accessed through the https protocol. But Kubernetes + Docker, always tries to use the http protocol <a href="http://myserver.com:8080" rel="nofollow noreferrer">http://myserver.com:8080</a> instead of <a href="https://myserver.com:8080" rel="nofollow noreferrer">https://myserver.com:8080</a>.</p> <p>How to force https protocol?</p> <p>Snippet of my <code>yaml</code> file that declares a Pod:</p> <pre><code> containers: - name: apl image: myserver.com:8080/myimage </code></pre> <p>Details of my environment:</p> <ul> <li>CentOS 7.3</li> <li>Docker 18.06</li> <li>Kubernetes (Minikube) 1.13.1</li> </ul> <p>Error message in Kubernetes logs:</p> <pre><code> Normal Pulling 30s (x4 over 2m2s) kubelet, minikube pulling image "docker.mydomain.com:30500/vision-ssh" Warning Failed 30s (x4 over 2m2s) kubelet, minikube Failed to pull image "docker.mydomain.com:30500/vision-ssh": rpc error: code = Unknown desc = Error response from daemon: Get http://docker.mydomain.com:30500/v2/: net/http: HTTP/1.x transport connection broken: malformed HTTP response "\x15\x03\x01\x00\x02\x02" Warning Failed 30s (x4 over 2m2s) kubelet, minikube Error: ErrImagePull Warning Failed 19s (x6 over 2m2s) kubelet, minikube Error: ImagePullBackOff Normal BackOff 4s (x7 over 2m2s) kubelet, minikube Back-off pulling image "docker.fccma.com:30500/vision-ssh" </code></pre> <p>If I try to specify the protocol in the name of the image, it complains: </p> <pre><code>couldn't parse image reference "https://docker.mydomain.com:30500/vision-ssh": invalid reference format </code></pre> <p>Followed this <a href="https://robertbrem.github.io/Microservices_with_Kubernetes/03_Docker_registry/01_Setup_a_docker_registry/" rel="nofollow noreferrer">guide</a> in order to create the image registry. It is already secured (HTTPS protocol and protected by user/password).</p>
<p>In the <code>/etc/hosts</code> file, the server <code>docker.mydomain.com</code> is mapped to 127.0.0.1. I've read in the <a href="https://docs.docker.com/engine/reference/commandline/dockerd/#insecure-registries" rel="nofollow noreferrer">docker docs</a> that local registries are always considered insecure. If I use a name that is mapped to the external IP, then Docker tries <code>https</code>.</p>
<p>I'm trying to get the value of a node annotation with kubernetes python client.</p> <p>This code print all the annotations for nodes with etcd nodes :</p> <pre><code>#!/usr/bin/python from kubernetes import client, config def main(): config.load_kube_config("./backup_kubeconfig_prod") label_selector = 'node-role.kubernetes.io/etcd' v1 = client.CoreV1Api() print("Listing nodes with their IPs:") ret = v1.list_node(watch=False, label_selector=label_selector) for i in ret.items: print(i.metadata.annotations) if __name__ == '__main__': main() </code></pre> <p>Output example :</p> <pre><code>{'flannel.alpha.coreos.com/kube-subnet-manager': 'true', 'flannel.alpha.coreos.com/backend-type': 'vxlan', 'flannel.alpha.coreos.com/backend-data': '{"VtepMAC":"96:70:f6:ab:4f:30"}', 'rke.cattle.io/internal-ip': '1.2.3.4', 'volumes.kubernetes.io/controller-managed-attach-detach': 'true', 'flannel.alpha.coreos.com/public-ip': '1.2.3.4', 'rke.cattle.io/external-ip': '1.2.3.4', 'node.alpha.kubernetes.io/ttl': '0'} </code></pre> <p>How can I print the value of <code>flannel.alpha.coreos.com/public-ip</code> for example ?</p>
<p>Data in <code>i.metadata.annotations</code> are dictionary type. </p> <p>You can print the value of the key <code>flannel.alpha.coreos.com/public-ip</code> using:</p> <pre><code>print(i.metadata.annotations["flannel.alpha.coreos.com/public-ip"]) </code></pre>
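<p>If some nodes might not carry that annotation, <code>dict.get</code> avoids a <code>KeyError</code>:</p> <pre><code>for i in ret.items:
    # returns None instead of raising when the annotation is missing
    public_ip = i.metadata.annotations.get("flannel.alpha.coreos.com/public-ip")
    if public_ip is not None:
        print("{}: {}".format(i.metadata.name, public_ip))
</code></pre>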
<p>I ran </p> <pre><code>kubectl create clusterrolebinding dashboard-admin -n default --clusterrole=cluste-admin --serviceaccount=default:dashboard </code></pre> <p>instead of </p> <pre><code>kubectl create clusterrolebinding dashboard-admin -n default --clusterrole=cluster-admin --serviceaccount=default:dashboard </code></pre> <p>I would like to make the dashboard admin cluster-admin instead of cluste-admin</p> <p>If I run </p> <pre><code>kubectl create clusterrolebinding dashboard-admin -n default --clusterrole=cluster-admin --serviceaccount=default:dashboard </code></pre> <p>terminal outputs</p> <p>Error from server (AlreadyExists): clusterrolebinding.rbac.authorizatoin.k8s.io "dashboard-admin" already exists</p> <p>When I access the dashboard from a browser on the machine I am prompted for a token and am able to login as expected. I have numerous errors all ending in "cluste-admin" not found. I would like these to all go away</p>
<p>The only way to make that happen now is you delete the <code>clusterrolebinding</code> and recreate it using:</p> <pre><code>kubectl delete clusterrolebinding dashboard-admin kubectl create clusterrolebinding dashboard-admin -n default --clusterrole=cluster-admin --serviceaccount=default:dashboard </code></pre>
<p>I am using a deployment yaml file, e.g. nginx, for which I am using port 30080. Now I wrote another deployment yaml file, but I want to use port number 30080 for it as well.</p> <blockquote> <p>The Service "web" is invalid: spec.ports[0].nodePort: Invalid value: 30080: provided port is already allocated</p> </blockquote> <p>How can I use port number 30080 for my new deployment web.yaml file? 1) Deleted the running nginx pod. 2) Deleted the running nginx deployment.</p> <blockquote> <p>But how can I free up port number 30080?</p> </blockquote> <p>I checked the port number:</p> <blockquote> <blockquote> <blockquote> <p>sudo iptables-save | grep 30080</p> </blockquote> </blockquote> </blockquote> <pre><code>-A KUBE-EXTERNAL-SERVICES -p tcp -m comment --comment "default/nginx-service: has no endpoints" -m addrtype --dst-type LOCAL -m tcp --dport 30080 -j REJECT --reject-with icmp-port-unreachable </code></pre>
<p>I deleted the deployment and the pod, but I forgot that the service was still running. After deleting the nginx service I am able to reuse port number 30080 for the other deployment.</p> <p><a href="https://stackoverflow.com/questions/19071512/socket-error-errno-48-address-already-in-use">socket.error: [Errno 48] Address already in use</a></p> <p>This question also helped me, but it points to killing the process holding the port, and here that process is kube-proxy.</p> <blockquote> <p>sudo lsof -i:30080</p> <p>COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME kube-prox 3320 root 8u IPv6 40388 0t0 TCP *:30080 (LISTEN)</p> </blockquote> <p>kube-proxy is not something I can kill; doing that might create issues.</p> <p>Please let me know if this was the right approach or not.</p>
<p>I have an application with 5 microservices (iam, courses...). I want to know which is the best approach to migrate them to kubernetes. I was thinking to create namespaces by enviroment as google recomendes: 1. prod 2. dev 3. staging</p> <p>then I thought that may be better create namespace by environment and microservices. 1. iam-prod 2. iam-dev 3. iam-staging 1. courses-prod 2. courses-dev 3. courses-staging ... but this approach can be a little bit difficult to handle. Because I need to communicate between each other.</p> <p>Which approach do you think that is better?</p>
<p>Just like the other answer, you should create namespace isolation for <code>prod, dev and staging</code>. This will ensure a couple of nuances are taken care of...</p> <ol> <li>Ideally, your pods in either of the environments should not be talking across environments</li> <li>You can manage your network policies in a much cleaner and manageable way with this organization of k8s kinds</li> </ol>
<p>I'm working on some GCP apps which are dockerized in a Kubernetes cluster in GCP (I'm new to Docker and Kubernetes). In order to access some of the GCP services, the environment variable GOOGLE_APPLICATION_CREDENTIALS needs to point to a credentials file.<br/> <strong>Should the environment variable be set and that file included in:<br/> - each of the Docker images?<br/> - the Kubernetes cluster?</strong><br/></p> <p><strong>GCP specific stuff</strong><br/> This is the actual error: com.google.api.gax.rpc.PermissionDeniedException: io.grpc.StatusRuntimeException: PERMISSION_DENIED: Request had insufficient authentication scopes.</p> <p><strong>-Should the environment variable be set and that file included in:<br/> - each of the Compute Engine instances? - the main GCP console?</strong></p> <p>And, most importantly, HOW? :)</p>
<p>You'll need to create a service account (IAM &amp; Admin > Service Accounts), generate a key for it in JSON format and then give it the needed permissions (IAM &amp; Admin > IAM). If your containers need access to this, it's best practice to add it as a secret in kubernetes and mount it in your containers. Then set the environment variable to point to the secret which you've mounted:</p> <p>export GOOGLE_APPLICATION_CREDENTIALS="[PATH_TO_SECRET]"</p> <p>This page should get you going: <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform#step_4_import_credentials_as_a_secret" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/authenticating-to-cloud-platform#step_4_import_credentials_as_a_secret</a></p>
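<p>A sketch of that wiring (the secret name, image and paths here are examples, not required values):</p> <pre><code>kubectl create secret generic gcp-sa-key --from-file=key.json=/path/to/service-account.json
</code></pre> <p>and in the pod/deployment spec:</p> <pre><code>containers:
- name: my-app
  image: gcr.io/my-project/my-app:latest   # your image
  env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: /var/secrets/google/key.json
  volumeMounts:
  - name: gcp-sa-key
    mountPath: /var/secrets/google
    readOnly: true
volumes:
- name: gcp-sa-key
  secret:
    secretName: gcp-sa-key
</code></pre>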
<p>I’m trying to run the following example: <a href="https://kubernetes.io/docs/tutorials/stateful-application/cassandra/" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/stateful-application/cassandra/</a> When I run on minikube, it runs well. But when I run on GKE, I see an error, <code>0/3 nodes are available: 3 Insufficient cpu.</code></p> <p>Anyone can help me please?</p> <p>Where I can increase CPU? On stateful_set or on kluster config?</p> <p>I created my cluster with terraform, with the following configurations:</p> <pre><code>resource "google_container_cluster" "gcloud_cluster" { name = "gcloud-cluster-${var.workspace}" zone = "us-east1-b" initial_node_count = 3 project = "${var.project}" addons_config { network_policy_config { disabled = true } } master_auth { username = "${var.username}" password = "${var.password}" } node_config { oauth_scopes = [ "https://www.googleapis.com/auth/devstorage.read_only", "https://www.googleapis.com/auth/logging.write", "https://www.googleapis.com/auth/monitoring", "https://www.googleapis.com/auth/service.management.readonly", "https://www.googleapis.com/auth/servicecontrol", "https://www.googleapis.com/auth/trace.append", "https://www.googleapis.com/auth/compute", ] } } </code></pre> <p>Thanks</p> <p><a href="https://i.stack.imgur.com/pTgRi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pTgRi.png" alt="0/3 nodes are available: 3 Insufficient cpu."></a></p>
<p>What is happening here is that by default your cluster is being created using n1-standard-1 machines which have only 1vCPU. </p> <p>You should add to your config information about machine type you want to use i.e:</p> <pre><code>resource "google_container_cluster" "gcloud_cluster" { name = "gcloud-cluster-${var.workspace}" zone = "us-east1-b" initial_node_count = 3 project = "${var.project}" addons_config { network_policy_config { disabled = true } } master_auth { username = "${var.username}" password = "${var.password}" } node_config { machine_type = "${var.machine_type}" oauth_scopes = [ "https://www.googleapis.com/auth/devstorage.read_only", "https://www.googleapis.com/auth/logging.write", "https://www.googleapis.com/auth/monitoring", "https://www.googleapis.com/auth/service.management.readonly", "https://www.googleapis.com/auth/servicecontrol", "https://www.googleapis.com/auth/trace.append", "https://www.googleapis.com/auth/compute", ] } } </code></pre> <p>and declare it in variable.tf file using either n1-standard-2 or n1-standard-4 i.e:</p> <pre><code>variable "machine_type" { type = "string" default = "n1-standard-4" } </code></pre>
<p>I had sticky session working in my dev environment with minibike with following configurations:</p> <p>Ingress:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: gl-ingress annotations: nginx.ingress.kubernetes.io/affinity: cookie kubernetes.io/ingress.class: "gce" kubernetes.io/ingress.global-static-ip-name: "projects/oceanic-isotope-199421/global/addresses/web-static-ip" spec: backend: serviceName: gl-ui-service servicePort: 80 rules: - http: paths: - path: /api/* backend: serviceName: gl-api-service servicePort: 8080 </code></pre> <p>Service:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: gl-api-service labels: app: gl-api annotations: ingress.kubernetes.io/affinity: 'cookie' spec: type: NodePort ports: - port: 8080 protocol: TCP selector: app: gl-api </code></pre> <p>Now that I have deployed my project to GKE sticky session no longer function. I believe the reason is that the Global Load Balancer configured in GKE does not have session affinity with the NGINX Ingress controller. Anyone have any luck wiring this up? Any help would be appreciated. I wanting to establish session affinity: Client Browser > Load Balancer > Ingress > Service. The actual session lives in the pods behind the service. Its an API Gateway (built with Zuul).</p>
<p><strong>Good news!</strong> Finally they have support for this kind of tweak as a beta feature!</p> <p>Beginning with GKE version 1.11.3-gke.18, you can use an Ingress to configure these properties of a backend service:</p> <ul> <li>Timeout</li> <li>Connection draining timeout</li> <li>Session affinity</li> </ul> <p>The configuration information for a backend service is held in a custom resource named BackendConfig, which you can "attach" to a Kubernetes Service.</p> <p>Together with other sweet beta features (like CDN, Armor, etc.) you can find the how-to guides here: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/configure-backend-service" rel="noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/configure-backend-service</a></p>
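<p>For the session-affinity case specifically, the wiring looks roughly like the snippet below. The field names follow the beta docs linked above, so double-check them against your GKE version; the BackendConfig is attached to the Service through an annotation keyed by port:</p> <pre><code>apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: gl-api-backendconfig
spec:
  sessionAffinity:
    affinityType: "GENERATED_COOKIE"
    affinityCookieTtlSec: 3600
---
apiVersion: v1
kind: Service
metadata:
  name: gl-api-service
  labels:
    app: gl-api
  annotations:
    beta.cloud.google.com/backend-config: '{"ports": {"8080": "gl-api-backendconfig"}}'
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: gl-api
</code></pre>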
<p>My deployment is using a couple of volumes, all defined as <code>ReadWriteOnce</code>.</p> <p>When applying the deployment to a clean cluster, pod is created successfuly.</p> <p>However, if I update my deployment (i.e update container image), when a new pod is created for my deployment it will always fail on volume mount:</p> <pre><code>/Mugen$ kubectl get pods NAME READY STATUS RESTARTS AGE my-app-556c8d646b-4s2kg 5/5 Running 1 2d my-app-6dbbd99cc4-h442r 0/5 ContainerCreating 0 39m /Mugen$ kubectl describe pod my-app-6dbbd99cc4-h442r Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9m default-scheduler Successfully assigned my-app-6dbbd99cc4-h442r to gke-my-test-default-pool-671c9db5-k71l Warning FailedAttachVolume 9m attachdetach-controller Multi-Attach error for volume "pvc-b57e8a7f-1ca9-11e9-ae03-42010a8400a8" Volume is already used by pod(s) my-app-556c8d646b-4s2kg Normal SuccessfulMountVolume 9m kubelet, gke-my-test-default-pool-671c9db5-k71l MountVolume.SetUp succeeded for volume "default-token-ksrbf" Normal SuccessfulAttachVolume 9m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-2cc1955a-1cb2-11e9-ae03-42010a8400a8" Normal SuccessfulAttachVolume 9m attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-2c8dae3e-1cb2-11e9-ae03-42010a8400a8" Normal SuccessfulMountVolume 9m kubelet, gke-my-test-default-pool-671c9db5-k71l MountVolume.SetUp succeeded for volume "pvc-2cc1955a-1cb2-11e9-ae03-42010a8400a8" Normal SuccessfulMountVolume 9m kubelet, gke-my-test-default-pool-671c9db5-k71l MountVolume.SetUp succeeded for volume "pvc-2c8dae3e-1cb2-11e9-ae03-42010a8400a8" Warning FailedMount 52s (x4 over 7m) kubelet, gke-my-test-default-pool-671c9db5-k71l Unable to mount volumes for pod "my-app-6dbbd99cc4-h442r_default(affe75e0-1edd-11e9-bb45-42010a840094)": timeout expired waiting for volumes to attach or mount for pod "default"/"my-app-6dbbd99cc4-h442r". list of unmounted volumes=[...]. list of unattached volumes=[...] </code></pre> <p>What is the best strategy to apply changes to such a deployment then? Will there have to be some service outage in order to use the same persistence volumes? (I wouldn't want to create new volumes - the data should maintain)</p>
<p>You will need to tolerate an outage here, due to the access mode. This will delete the existing Pods (unmounting the volumes) before creating new ones.</p> <p>A Deployment strategy - <code>.spec.strategy.type</code> - of “Recreate” will help achieve this: <a href="https://github.com/ContainerSolutions/k8s-deployment-strategies/blob/master/recreate/README.md" rel="noreferrer">https://github.com/ContainerSolutions/k8s-deployment-strategies/blob/master/recreate/README.md</a></p>
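<p>In the Deployment manifest that is, roughly:</p> <pre><code>spec:
  strategy:
    type: Recreate   # old pods are deleted (releasing the RWO volumes) before new ones are created
</code></pre> <p>The trade-off is a short downtime on every rollout; avoiding that while keeping the same data would require a storage backend that supports ReadWriteMany.</p>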
<p>I tried to install cilium with coredns in kubeadm</p> <p>kube: 1.12.3 cilium: 1.3.0</p> <p>I get this error:</p> <pre><code>Readiness probe failed: KVStore: Failure Err: Not able to connect to any etcd endpoints - etcd: 0/1 connected: http://127.0.0.1:31079 - context deadline exceeded </code></pre> <p>I don't know why and if i need to install etcd on the master server.</p> <blockquote> <p>kubectl get pods -n kube-system</p> </blockquote> <pre><code>cilium-9z4zd 0/1 Running 3 10m cilium-s4x2g 0/1 Running 3 10m coredns-576cbf47c7-44hp9 1/1 Running 2 9m29s coredns-576cbf47c7-6jst5 1/1 Running 2 9m29s etcd-ops-kube-master-dev 1/1 Running 0 9m29s kube-apiserver-ops-kube-master-dev 1/1 Running 0 9m29s kube-controller-manager-ops-kube-master-dev 1/1 Running 0 9m26s kube-proxy-79649 1/1 Running 0 38m kube-proxy-b56fk 1/1 Running 0 38m kube-scheduler-ops-kube-master-dev 1/1 Running 0 9m27s </code></pre>
<p>I had a similar issue playing with Kubernetes the hard way, this was because of a wrong certificate</p> <p>I did the following:</p> <p><code>kubectl -n kube-system logs &lt;etcd&gt;</code></p> <p>and found something like: <code>embed: rejected connection from "172.17.0.3:36950" (error "remote error: tls: bad certificate", ServerName "")</code></p> <p>I got the etcd config, you should have something like </p> <pre><code>$ kubectl -n kube-system get cm cilium-config -o yaml apiVersion: v1 data: clean-cilium-bpf-state: "false" clean-cilium-state: "false" cluster-name: default ct-global-max-entries-other: "262144" ct-global-max-entries-tcp: "524288" debug: "false" disable-ipv4: "false" etcd-config: |- --- endpoints: - https://&lt;ETCD_URL&gt;:2379 # # In case you want to use TLS in etcd, uncomment the 'ca-file' line # and create a kubernetes secret by following the tutorial in # https://cilium.link/etcd-config ca-file: '/var/lib/etcd-secrets/etcd-client-ca.crt' # # In case you want client to server authentication, uncomment the following # lines and create a kubernetes secret by following the tutorial in # https://cilium.link/etcd-config key-file: '/var/lib/etcd-secrets/etcd-client.key' cert-file: '/var/lib/etcd-secrets/etcd-client.crt' legacy-host-allows-world: "false" monitor-aggregation-level: none sidecar-istio-proxy-image: cilium/istio_proxy tunnel: vxlan kind: ConfigMap </code></pre> <p>Then I compared the keys of <code>kubectl -n kube-system get secret cilium-etcd-client-tls -o yaml</code> that provides 3 base64 values.</p> <p>I can then test the keys using <code>curl https://&lt;ETCD_URL&gt;:2379/v2/keys --cacert=etcd-client-ca.crt --cert=etcd-client.crt --key=etcd-client.key</code></p> <p>You should then have something like <code>{"action":"get","node":{"dir":true}}</code></p> <p>Then, you can inspect the deployment, on my side, I have</p> <pre><code>kind: Deployment metadata: labels: io.cilium/app: operator name: cilium-operator name: cilium-operator namespace: kube-system spec: progressDeadlineSeconds: 600 replicas: 1 revisionHistoryLimit: 10 selector: matchLabels: io.cilium/app: operator name: cilium-operator strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 1 type: RollingUpdate template: metadata: creationTimestamp: null labels: io.cilium/app: operator name: cilium-operator spec: containers: - args: - --kvstore=etcd - --kvstore-opt=etcd.config=/var/lib/etcd-config/etcd.config command: - cilium-operator env: - name: POD_NAMESPACE valueFrom: fieldRef: apiVersion: v1 fieldPath: metadata.namespace - name: K8S_NODE_NAME valueFrom: fieldRef: apiVersion: v1 fieldPath: spec.nodeName - name: CILIUM_DEBUG valueFrom: configMapKeyRef: key: debug name: cilium-config optional: true - name: CILIUM_CLUSTER_NAME valueFrom: configMapKeyRef: key: cluster-name name: cilium-config optional: true - name: CILIUM_CLUSTER_ID valueFrom: configMapKeyRef: key: cluster-id name: cilium-config optional: true - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: key: AWS_ACCESS_KEY_ID name: cilium-aws optional: true - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: key: AWS_SECRET_ACCESS_KEY name: cilium-aws optional: true - name: AWS_DEFAULT_REGION valueFrom: secretKeyRef: key: AWS_DEFAULT_REGION name: cilium-aws optional: true image: docker.io/cilium/operator:latest imagePullPolicy: Always name: cilium-operator resources: {} terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /var/lib/etcd-config name: etcd-config-path readOnly: true - mountPath: 
/var/lib/etcd-secrets name: etcd-secrets readOnly: true dnsPolicy: ClusterFirst priorityClassName: system-node-critical restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: cilium-operator serviceAccountName: cilium-operator terminationGracePeriodSeconds: 30 volumes: - configMap: defaultMode: 420 items: - key: etcd-config path: etcd.config name: cilium-config name: etcd-config-path - name: etcd-secrets secret: defaultMode: 420 optional: true secretName: cilium-etcd-secrets </code></pre>
<p>Does anyone have a working example of using Snakemake with Azure Kubernetes Service (AKS)? If it is supported, which flags and setup are needed to use the Snakemake Kubernetes executor with AKS? What material there is out there is mostly on AWS with S3 buckets for storage.</p>
<p>I have never tried it, but you can basically take <a href="https://snakemake.readthedocs.io/en/stable/executable.html#executing-a-snakemake-workflow-via-kubernetes" rel="nofollow noreferrer">this</a> as a blueprint, and replace the google storage part with a storage backend that is working in Azure. As far as I know, Azure has its own storage API, but there are workarounds to expose an S3 interface (google for Azure S3). So, the strategy would be to setup an S3 API, and then use the S3 remote provider for Snakemake. In the future, Snakemake will also support Azure directly as a remote provider.</p>
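<p>As a very rough sketch of the S3 remote provider route (the credential and endpoint parameter names can differ between Snakemake versions, so treat them as assumptions to check against the remote-files documentation):</p> <pre><code># Snakefile
from snakemake.remote.S3 import RemoteProvider as S3RemoteProvider

# hypothetical S3-compatible gateway sitting in front of Azure storage
S3 = S3RemoteProvider(
    access_key_id="MYKEY",
    secret_access_key="MYSECRET",
    endpoint_url="https://my-s3-gateway.example.com",
)

rule all:
    input:
        S3.remote("my-bucket/results/summary.txt")
</code></pre> <p>The Kubernetes part then stays the same as in the linked tutorial, e.g. <code>snakemake --kubernetes --default-remote-provider S3 --default-remote-prefix my-bucket</code> with the AKS cluster selected as the current kubectl context.</p>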
<p>I have created a new GCP Kubernetes cluster. The cluster is private with NAT - not have connection to the internet. I also deploy <code>bastion</code> machine which allow my to connect into my private network (vpc) from the internet. This is the <a href="https://cloud.google.com/nat/docs/using-nat" rel="nofollow noreferrer">tutorial I based on</a>. SSH into <code>bastion</code> - working currently.</p> <p>The kubernetes master is not exposed outside. The result:</p> <pre><code>$ kubectl get pods The connection to the server 172.16.0.2 was refused - did you specify the right host or port? </code></pre> <p>So i install kubectl on <code>bastion</code> and run:</p> <pre><code>$ kubectl proxy --port 1111 Starting to serve on 127.0.0.1:3128 </code></pre> <p>now I want to connect my local <code>kubectl</code> to the remote proxy server. I installed secured tunnel to the <code>bastion</code> server and mapped the remote port into the local port. Also tried it with CURL and it's working.</p> <p>Now I looking for something like</p> <pre><code>$ kubectl --use-proxy=1111 get pods </code></pre> <p>(Make my local kubectl pass tru my remote proxy)</p> <p>How to do it?</p>
<p><code>kubectl proxy</code> acts as an apiserver, exactly like the target apiserver, but the queries through it are already authenticated. From your description, 'works with curl', it sounds like you've set it up correctly; you just need to target the client kubectl at it:</p> <pre><code>kubectl --server=http://localhost:1111 </code></pre> <p>(Where port 1111 on your local machine is where <code>kubectl proxy</code> is available; in your case through a tunnel)</p> <p>If you need exec or attach through <code>kubectl proxy</code> you'll need to run it with either <code>--disable-filter=true</code> or <code>--reject-paths='^$'</code>. Read the fine print and consequences for those options.</p> <h2>Safer way</h2> <p>All in all, this is not how I access clusters through a bastion. The problem with the above approach is that if someone gains access to the bastion they immediately have valid Kubernetes credentials (as kubectl proxy needs those to function). It is also not the safest solution if the bastion is shared between multiple operators. One of the main points of a bastion would be that it never has credentials on it. What I fancy doing is accessing the bastion from my workstation with:</p> <pre><code>ssh -D 1080 bastion </code></pre> <p>That makes ssh act as a SOCKS proxy. You need <code>GatewayPorts yes</code> in your sshd_config for this to work. Thereafter from the workstation I can use</p> <pre><code>HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl get pod </code></pre>
<p>I'm attempting to connect to my MongoDB pod, but I'm failing. Previously, I was just using an online resource to host my MongoDB. Now I want to deploy my DB with Kubernetes. However, I'm having issues connecting to my DB pod via my Flask application and cannot find any examples that are using Minikube or python.</p> <p>This is how I'm attempting to connect to my pod and populate it:</p> <pre><code>be_host = os.getenv('MONGO-DB_SERVICE_HOST', 'mongo-db') be_port = os.getenv('MONGO-DB_SERVICE_PORT', '27017') url = 'http://{}:{}/rba-db'.format(be_host, be_port) app.config['MONGO_DBNAME'] = 'pymongo_db' app.config['MONGO_URI'] = url mongo = PyMongo(app) @app.route('/populate_db') def populate_db(): patient = mongo.db.patients patient.insert({'id': 1, 'fname': 'Jill', 'lname': 'Smith', 'age': '50', 'weight': '63.3', 'conditions': ['Stage 2 Diabetes', 'Cancer', 'Aids']}) patient.insert({'id': 2, 'fname': 'John', 'lname': 'Smith', 'age': '52', 'weight': '86.2', 'conditions': ['Heart Disease', 'Cancer']}) patient.insert({'id': 3, 'fname': 'Ryan', 'lname': 'Gosling', 'age': '25', 'weight': '75', 'conditions': ['Flu']}) patient.insert({'id': 4, 'fname': 'Sean', 'lname': 'Winnot', 'age': '21', 'weight': '82', 'conditions': ['Lupis']}) return "Patients Added." </code></pre> <p>This is my deployment:</p> <pre><code>kind: Service apiVersion: v1 metadata: name: mongo-db spec: type: NodePort selector: app: mongo-db ports: - protocol: TCP nodePort: 31003 port: 27017 targetPort: 27017 --- apiVersion: apps/v1 kind: Deployment metadata: name: mongo-db labels: app: mongo-db spec: replicas: 1 selector: matchLabels: app: mongo-db template: metadata: labels: app: mongo-db spec: containers: - name: mongo-db image: mongo:latest ports: - containerPort: 27017 </code></pre> <p>I have tried:</p> <pre><code>app.config["MONGO_URI"] = "mongodb://localhost:27017/myDatabase" </code></pre> <p>as suggested, but I get the error <code>pymongo.errors.OperationFailure: Authentication failed.</code> when trying to add to my db via /populate_db</p> <p>I've also tried:</p> <pre><code>mongo = MongoClient("mongodb://mongo:27017/patients") </code></pre> <p>with the same outcome as the latter.</p> <p><strong>Edit:</strong></p> <p>There was a problem with my docker image not updating correctly</p> <p><code>mongo = MongoClient("mongodb://mongo:27017/patients")</code></p> <p>works fine.</p>
<pre class="lang-py prettyprint-override"><code>url = 'http://{}:{}/rba-db'.format(be_host, be_port) </code></pre> <p><code>http://</code> is that right?</p> <pre class="lang-py prettyprint-override"><code>app.config["MONGO_URI"] = "mongodb://localhost:27017/myDatabase" </code></pre> <p>As far as I know, <code>mongo url = "mongodb://localhost:27017/myDatabase"</code></p>
<p>I've gone over the following docomentation page: <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/deployment/</a></p> <p>The example deployment yaml is as follows:</p> <pre><code>apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx spec: replicas: 3 selector: matchLabels: app: nginx template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx:1.7.9 ports: - containerPort: 80 </code></pre> <p>We can see here three different times where the label <code>app: nginx</code> is mentioned.</p> <p>Why do we need each of them? I had a hard time understanding it from the official documentation.</p>
<p>The <strong>first label</strong> is for the Deployment itself; it labels that particular Deployment object. Let's say you want to delete that deployment, then you run the following command:</p> <pre><code>kubectl delete deployment -l app=nginx </code></pre> <p>This will delete the entire deployment.</p> <p>The <strong>second label</strong>, <code>selector: matchLabels</code>, tells the Deployment which pods it manages, matching them by label; other resources such as Services use the same mechanism. So if you want to create a service that targets all the pods carrying the label <code>app=nginx</code>, you provide the following definition:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: nginx spec: type: LoadBalancer ports: - port: 80 selector: app: nginx </code></pre> <p>The above service will use its selector to bind to the pods which have the label <code>app: nginx</code> assigned to them.</p> <p>The <strong>third label</strong> belongs to the pod template: the <code>template</code> section describes the pods that the Deployment launches. So if you have a deployment with two replicas, Kubernetes will launch 2 pods with the labels specified in <code>template: metadata: labels</code>. This is a subtle but important difference: you can have different labels on the deployment itself and on the pods generated by that deployment, but the <code>selector</code> must match the pod template labels.</p>
<p>If I expose a (single) web service (say <code>http://a.b.c.d</code> or <code>https://a.b.c.d</code>) on a (small) Kubernetes 1.13 cluster, what is the benefit of using <code>Ingress</code> over a <code>Service</code> of type <code>ClusterIP</code> with <code>externalIPs [ a.b.c.d ]</code> alone?</p> <p>The address <code>a.b.c.d</code> is routed to one of my cluster nodes. <code>Ingress</code> requires installing and maintaining an <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">ingress controller</a>, so I am wondering when this is justified.</p>
<p>I've now come across a first concrete example where I see a clear benefit from using <code>Ingress</code> over a <code>Service</code> with <code>externalIPs</code>.</p> <p>A private Docker registry inside a Kubernetes cluster normally requires TLS credentials. With the Docker image <code>registry:2</code> one would have to mount those credentials e.g. from a <code>ConfigMap</code> into the container and have certain environment variables in the container (e.g. <code>REGISTRY_HTTP_TLS_CERTIFICATE</code>) point to them.</p> <p>As long as one can tolerate insecure access to the registry inside the cluster, this becomes easier to manage with <code>Ingress</code>. Certificates can be put into a <code>Secret</code> which the <code>Ingress</code> resource can point to (<code>kubectl explain ingress.spec.tls.secretName</code>). There is no longer any need to pay detailed attention to mounts or environment variables. TLS connections will be terminated at the ingress controller.</p>
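<p>For reference, a minimal sketch of that setup; the host name, secret name and service details are placeholders:</p> <pre><code>kubectl create secret tls registry-tls --cert=tls.crt --key=tls.key
</code></pre> <pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: registry
  annotations:
    # with the nginx ingress controller, lift the body-size limit so large image layers are not rejected
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
spec:
  tls:
  - hosts:
    - registry.example.com
    secretName: registry-tls
  rules:
  - host: registry.example.com
    http:
      paths:
      - backend:
          serviceName: registry
          servicePort: 5000
</code></pre>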
<p>I am running Harbor Registry on my cluster and I have no problem pushing and pulling the images from outside of the cluster.</p> <p>Now I'd like to be able to create a pod from that registry. Something like this:</p> <pre><code>. kubectl run -i --tty --rm debug --image=harbor.harbor.svc.cluster.local/test/alpine:latest --restart=Never -- sh . </code></pre> <p>Is this possible?</p> <p><strong>Update</strong></p> <p>If I try to access the registry by its service name <code>harbor.harbor.svc.cluster.local</code> it doesn't work because the host name is not found.</p> <p>How can I reference my image? </p>
<p>As @Rajesh mentioned in a comment, you need to create a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#nodeport" rel="nofollow noreferrer">NodePort</a> type service if you are on the same subnet as your nodes. If you are using a cloud provider for your cluster, such as AWS or GKE, you can also create a <a href="https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer" rel="nofollow noreferrer">LoadBalancer</a> type service and access your registry through the external load balancer. </p>
<p>I'm new to OpenShift. I have two projects|namespaces. In each I have a rest service. What I want is service from NS1 access service from NS2 without joining projects networks. Also SDN with multi tenant plugin.</p> <p>I found <a href="https://docs.openshift.com/container-platform/3.5/dev_guide/integrating_external_services.html" rel="nofollow noreferrer">example</a> on how to add external services to cluster as native. In NS1 I created an Endpoint for external IP of Service form NS2, but when I tried to create a Service in NS1 for this Endpoint, it failed cause there was no type tag (which wasn't in example also). </p> <p>I also tried ExternalName. For externalName key my value was URL of router to service in NS2. But it doesn't work pretty well, cause it always returns me a page with Application is not available. But app\service works.</p>
<p>Services in different namespaces are not external, but local to the cluster. So you simply access the services using DNS, including the namespace of the target service:</p> <p>for example: <code>servicename.namespace.svc.cluster.local</code> or simply <code>servicename.namespace</code></p> <p>see also <a href="https://docs.openshift.com/enterprise/3.0/architecture/additional_concepts/networking.html" rel="nofollow noreferrer">https://docs.openshift.com/enterprise/3.0/architecture/additional_concepts/networking.html</a></p>
<p>I would like to deploy a sidecar container that is measuring the memory usage (and potentially also CPU usage) of the main container in the pod and then send this data to an endpoint.</p> <p>I was looking at cAdvisor, but Google Kubernetes Engine has hardcoded 10s measuring interval, and I need 1s granularity. Deploying another cAdvisor is an option, but I need those metrics only for a subset of pods, so it would be wasteful. </p> <p>Is it possible to write a sidecar container that monitors the main container metrics? If so, what tools could the sidecar use to gather the data? </p>
<p>That one second granularity will probably be the main showstopper for many monitoring tools. In theory you can script it on your own. You can use the Docker stats API and read the stats stream only for the main container. You will need to mount /var/run/docker.sock into the sidecar container. Curl example:</p> <pre><code>curl -N --unix-socket /var/run/docker.sock http:/containers/&lt;container-id&gt;/stats </code></pre> <p>Another option is to read the metrics from cgroups, but you will need more calculations in this case, and mounting the cgroup filesystem into the sidecar container will be required. See some examples of cgroup pseudo-files on <a href="https://docs.docker.com/config/containers/runmetrics/" rel="nofollow noreferrer">https://docs.docker.com/config/containers/runmetrics/</a></p>
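<p>A minimal sidecar sketch along the cgroup lines, sampling every second and POSTing the value to an endpoint. The cgroup directory and the endpoint URL are placeholders: you would mount the host's /sys/fs/cgroup into the sidecar and resolve the main container's cgroup path (paths below assume cgroup v1):</p> <pre><code>#!/bin/sh
# e.g. CGROUP_DIR=/host-cgroup/memory/kubepods/&lt;pod&gt;/&lt;container-id&gt; after mounting the host cgroupfs
CGROUP_DIR=${CGROUP_DIR:-/sys/fs/cgroup/memory}
ENDPOINT=${ENDPOINT:-http://metrics-collector.example/ingest}   # hypothetical endpoint

while true; do
  mem=$(cat "$CGROUP_DIR/memory.usage_in_bytes")
  ts=$(date +%s)
  curl -s -X POST -H 'Content-Type: application/json' \
       -d "{\"ts\": $ts, \"memory_bytes\": $mem}" "$ENDPOINT"
  sleep 1
done
</code></pre>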
<p>When I have a multi zone GKE cluster, the num-nodes run in each zone for my node pools.</p> <p>GKE uses <em>zonal</em> instance groups, one in each zone for my cluster's zones.</p> <p>It seems like this could be implemented with a <em>regional</em> instance group instead.</p> <p>It seems that GKE Node Pools and Regional instance groups are a similar age. Is the only reason node pools don't use regional instance groups simply it wasn't available as a GCE feature at the time?</p>
<p>As the other comment says, this question is not really suitable for Stack Overflow. It's an implementation detail of GKE, and not an important one to a user in practice.</p> <p>I work at Google (but I don't know the implementation details), but my guess would be that it is because GKE needs to choose which 3 zones in a region it needs to use.</p> <p>For example, if the user node pool is in the <code>-a</code>, <code>-b</code>, <code>-d</code> zones, Google (internally) also needs to create GKE master instances (not visible to users) in the same set of zones, and probably the way to coordinate this is to explicitly describe which zones to use by creating separate "zonal node pools".</p> <p>But I might be wrong. :) In the end, you should not really care how it's implemented. You should not go make edits to managed instance groups created by GKE either. Maybe some day GKE will move on to "regional instance groups", too.</p>
<p>I am looking for a way to "write stream" some .mp4 video files -- as they are being generated by some python app -- to a google cloud storage bucket. The python app is containerised and deployed in GKE and currently executes fine as a web service. But the problem is that all the video files are locally generated and stored in a path (<code>tmp/processed</code>) inside the pod. </p> <p>However, I want the video files to be written to files in a google's storage bucket named <code>my_bucket</code>. </p> <p>I have read <strong>gcsfuse</strong> guidelines (<a href="https://github.com/maciekrb/gcs-fuse-sample" rel="noreferrer">https://github.com/maciekrb/gcs-fuse-sample</a>) on how to mount a bucket in Kubernetes pods and also read about <strong>boto</strong> (<a href="https://cloud.google.com/storage/docs/boto-plugin#streaming-transfers" rel="noreferrer">https://cloud.google.com/storage/docs/boto-plugin#streaming-transfers</a>) that is used to do the stream transfers to storage buckets. </p> <p>To mount <code>my_bucket</code> in <code>tmp/processed</code>, I have added the following lines to my app's deployment file (YAML):</p> <pre><code> lifecycle: postStart: exec: command: - gcsfuse - -o - nonempty - my_bucket - tmp/processed preStop: exec: command: - fusermount - -u - tmp/processed/ securityContext: capabilities: add: - SYS_ADMIN </code></pre> <p>I haven't used boto yet and thought maybe just mounting would be enough! But, my app gives me <strong>input/output error</strong> when trying to generate the video file. </p> <p>Now my question is that do I need to use both <strong>gcsfuse</strong> and <strong>boto</strong>, or just mounting the bucket in my GKE pod is enough? And am I doing the mounting right?</p> <hr> <p><strong>UPDATE</strong>: I verified that I did the mount correctly using the following command:</p> <p><code>kubectl exec -it [POD_NAME] bash</code></p>
<p>Problem solved! I only had to mount my bucket within the pod and that was it. The mounting script (as written above in my question) was done correctly. But the problem that caused the <code>input/output error</code> was that my GKE cluster had insufficient permissions. Basically, the cluster didn't have permission to read/write to storage, and a couple of other permissions were needed by the project. So, I created a new cluster using the following command:</p>
<pre><code>gcloud container clusters create [MY_CLUSTER_NAME] \
--scopes=https://www.googleapis.com/auth/userinfo.email,cloud-platform,https://www.googleapis.com/auth/devstorage.read_write,storage-rw,trace,https://www.googleapis.com/auth/trace.append,https://www.googleapis.com/auth/servicecontrol,compute-rw,https://www.googleapis.com/auth/compute,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/taskqueue \
--num-nodes 4 --zone "us-central1-c"
</code></pre>
<p>To be able to read/write from/to a storage bucket, the cluster had to have the <code>https://www.googleapis.com/auth/devstorage.read_write</code> scope.</p>
<p>Also, there was no need to use <strong>boto</strong>; mounting through <strong>gcsfuse</strong> was enough for me to be able to write-stream video files to <code>my_bucket</code>.</p>
<p>I set <code>concurrencyPolicy</code> to <code>Allow</code>, here is my <code>cronjob.yaml</code>:</p> <pre><code>apiVersion: batch/v1beta1 kind: CronJob metadata: name: gke-cron-job spec: schedule: '*/1 * * * *' startingDeadlineSeconds: 10 concurrencyPolicy: Allow successfulJobsHistoryLimit: 3 failedJobsHistoryLimit: 1 jobTemplate: spec: template: metadata: labels: run: gke-cron-job spec: restartPolicy: Never containers: - name: gke-cron-job-solution-2 image: docker.io/novaline/gke-cron-job-solution-2:1.3 env: - name: NODE_ENV value: 'production' - name: EMAIL_TO value: '[email protected]' - name: K8S_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name ports: - containerPort: 8080 protocol: TCP </code></pre> <p>After reading docs: <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/cronjobs" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/cronjobs</a></p> <p>I still don't understand how to use <code>concurrencyPolicy</code>.</p> <p>How can I run my cron job concurrency?</p> <p>Here is the logs of cron job: </p> <pre class="lang-sh prettyprint-override"><code>☁ nodejs-gcp [master] ⚡ kubectl logs -l run=gke-cron-job &gt; [email protected] start /app &gt; node ./src/index.js config: { ENV: 'production', EMAIL_TO: '[email protected]', K8S_POD_NAME: 'gke-cron-job-1548660540-gmwvc', VERSION: '1.0.2' } [2019-01-28T07:29:10.593Z] Start daily report send email: { to: '[email protected]', text: { test: 'test data' } } &gt; [email protected] start /app &gt; node ./src/index.js config: { ENV: 'production', EMAIL_TO: '[email protected]', K8S_POD_NAME: 'gke-cron-job-1548660600-wbl5g', VERSION: '1.0.2' } [2019-01-28T07:30:11.405Z] Start daily report send email: { to: '[email protected]', text: { test: 'test data' } } &gt; [email protected] start /app &gt; node ./src/index.js config: { ENV: 'production', EMAIL_TO: '[email protected]', K8S_POD_NAME: 'gke-cron-job-1548660660-8mn4r', VERSION: '1.0.2' } [2019-01-28T07:31:11.099Z] Start daily report send email: { to: '[email protected]', text: { test: 'test data' } } </code></pre> <p>As you can see, the <strong>timestamp</strong> indicates that the cron job is not concurrency.</p>
<p>It's because you're reading the wrong documentation. CronJobs aren't a GKE-specific feature. For the full documentation on the CronJob API, refer to the Kubernetes documentation: <a href="https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#concurrency-policy" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/job/automated-tasks-with-cron-jobs/#concurrency-policy</a> (quoted below).</p>
<p>Concurrency policy decides whether a new Job can be started while the Job from the previous run is still running. If you have a CronJob that runs every 5 minutes, and sometimes the Job takes 8 minutes, then you may run into a case where multiple jobs are running at a time. This policy decides what to do in that case.</p>
<blockquote>
<h2>Concurrency Policy</h2>
<p>The .spec.concurrencyPolicy field is also optional. It specifies how to treat concurrent executions of a job that is created by this cron job. the spec may specify only one of the following concurrency policies:</p>
<ul>
<li><code>Allow</code> (default): The cron job allows concurrently running jobs</li>
<li><code>Forbid</code>: The cron job does not allow concurrent runs; if it is time for a new job run and the previous job run hasn’t finished yet, the cron job skips the new job run</li>
<li><code>Replace</code>: If it is time for a new job run and the previous job run hasn’t finished yet, the cron job replaces the currently running job run with a new job run</li>
</ul>
<p>Note that concurrency policy only applies to the jobs created by the same cron job. If there are multiple cron jobs, their respective jobs are always allowed to run concurrently.</p>
</blockquote>
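<p>In your example the concurrency never shows up simply because each run finishes within a few seconds, well before the next minute's run starts. A minimal sketch to actually observe overlapping runs (the image and sleep duration are just placeholders): a job that takes about 90 seconds on a one-minute schedule, so with <code>concurrencyPolicy: Allow</code> two jobs will be running at the same time:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: overlap-demo
spec:
  schedule: '*/1 * * * *'
  concurrencyPolicy: Allow
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: long-task
            image: busybox
            command: ["sh", "-c", "echo start $(date); sleep 90; echo done $(date)"]
</code></pre>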
<p>I am running an application with GKE. It works fine but I can not figure out how to get the external IP of the service in a machine readable format. So i am searching a gcloud or kubectl command that gives me only the external IP or a url of the format <code>http://192.168.0.2:80</code> so that I can cut out the IP.</p>
<p>You can use the <a href="https://kubernetes.io/docs/reference/kubectl/jsonpath/" rel="nofollow noreferrer">jsonpath</a> output type to get the data directly without needing the additional <code>jq</code> to process the json:</p> <pre class="lang-bash prettyprint-override"><code>kubectl get services \ --namespace ingress-nginx \ nginx-ingress-controller \ --output jsonpath='{.status.loadBalancer.ingress[0].ip}' </code></pre> <h4>NOTE</h4> <p>Be sure to replace the namespace and service name, respectively, with yours.</p>
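<p>If you then want the <code>http://IP:80</code> style URL from your question, you can build it from that output. A small sketch (the service name <code>my-service</code> and port 80 are assumptions, adjust to yours):</p>
<pre><code>IP=$(kubectl get service my-service \
  --output jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "http://${IP}:80"
</code></pre>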
<p>I am using Kubernetes v1.13.0. My master is also functioning as a worker-node, so it has workload pods running on it, apart from control plane pods.</p> <p>The kubelet logs on my master show the following lines:</p> <pre> eviction_manager.go:340] eviction manager: must evict pod(s) to reclaim ephemeral-storage eviction_manager.go:358] eviction manager: pods ranked for eviction: kube-controller-manager-vm2_kube-system(1631c2c238e0c5117acac446b26d9f8c), kube-apiserver-vm2_kube-system(ce43eba098d219e13901c4a0b829f43b), etcd-vm2_kube-system(91ab2b0ddf4484a5ac6ee9661dbd0b1c) </pre> <p>Once the kube-apiserver pod is evicted, the cluster becomes unusable.</p> <p>What can I do to fix this? Should I add more ephemeral storage? How would I go about doing that? That means adding more space to the root partition on my host?</p> <p>My understanding is that ephemeral storage consists of <code>/var/log</code> and <code>/var/lib/kubelet</code> folders, which both come under the root partition.</p> <p>A <code>df -h</code> on my host shows:</p> <pre> Filesystem Size Used Avail Use% Mounted on /dev/vda1 39G 33G 6.2G 85% / </pre> <p>So it looks like the root partition has lot of memory left, and there is no disk pressure. So what is causing this issue? Some of my worker pods must be doing something crazy with storage, but it's still 6G seems like plenty of room.</p> <p>Will adding more space to the root partition fix this issue temporarily?</p> <p><code>kubectl describe vm2</code> gives the following info:</p> <pre> Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Fri, 11 Jan 2019 21:25:43 +0000 Wed, 05 Dec 2018 19:16:41 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Fri, 11 Jan 2019 21:25:43 +0000 Fri, 11 Jan 2019 20:58:07 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Fri, 11 Jan 2019 21:25:43 +0000 Wed, 05 Dec 2018 19:16:41 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Fri, 11 Jan 2019 21:25:43 +0000 Thu, 06 Dec 2018 17:00:02 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled Capacity: cpu: 8 ephemeral-storage: 40593708Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32946816Ki pods: 110 Allocatable: cpu: 8 ephemeral-storage: 37411161231 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 32844416Ki pods: 110 </pre> <p>It seems to me that there was pressure on ephemeral-storage, and the eviction manager is trying to reclaim some storage by evicting least recently used pods. But it should not evict the control plane pods, otherwise cluster is unusable.</p> <p>Currently, the Kubelet evicts the control plane pods. Then I try to manually start the apiserver and other control plane pods by adding and removing a space in the <code>/etc/kubernetes/manifests</code> files. This does start the apiserver, but then it again gets evicted. Ideally, the Kubelet should ensure that the static pods in <code>/etc/kubernetes/manifests</code> are always on and properly managed.</p> <p>I am trying to understand what is going on here, and how to fix this issue, so that my kubernetes cluster becomes more robust, and I don't have to keep manually restarting the apiserver.</p>
<p>I had this same problem and solved it by changing the threshold for evictionHard.</p>
<p>Looking at <code>/etc/systemd/system/kubelet.service.d/10-kubeadm.conf</code> I have:</p>
<pre><code>[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
</code></pre>
<p>so I see my config file for the kubelet is <code>/var/lib/kubelet/config.yaml</code></p>
<p>Opening that, I changed the evictionHard settings to (I think they were 10 or 15% before):</p>
<pre><code>...
evictionHard:
  imagefs.available: 1%
  memory.available: 100Mi
  nodefs.available: 1%
  nodefs.inodesFree: 1%
...
</code></pre>
<hr>
<p>There is also the <code>--experimental-allocatable-ignore-eviction</code> (<a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="noreferrer">https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/</a>) setting which should completely disable eviction.</p>
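<p>For the new thresholds to take effect the kubelet has to be restarted (a quick sketch, assuming the systemd-managed kubelet from the unit file above):</p>
<pre><code># after editing /var/lib/kubelet/config.yaml
sudo systemctl daemon-reload
sudo systemctl restart kubelet
</code></pre>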
<p>I am pretty new on Kubernetes.Assume that i have 2 slave 1 master cluster and i have a nfs server.I create a pv on nfs and a pvc for a pod.What if NFS Server down? Is the pod still continue to work ? </p>
<p>The pod will fail. As you created a claim on the PV and are using it in the pod, the pod expects that claim (and the underlying NFS volume) to be available. But as the NFS server is down, the pod will not find the volume, so the pod will end up crashing.</p>
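<p>If you want to see this happening, the pod's events are the quickest place to look (a minimal sketch; replace the pod name with yours):</p>
<pre><code>kubectl describe pod &lt;pod-name&gt;   # mount/volume errors show up under Events
kubectl get events --sort-by=.metadata.creationTimestamp
</code></pre>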
<p>I am using kubernetes to deploy a rails app to google container engine.</p> <p>Follow the kubernetes secrets document: <a href="http://kubernetes.io/v1.1/docs/user-guide/secrets.html" rel="nofollow">http://kubernetes.io/v1.1/docs/user-guide/secrets.html</a></p> <p>I created a web controller file:</p> <pre><code># web-controller.yml apiVersion: v1 kind: ReplicationController metadata: labels: name: web name: web-controller spec: replicas: 2 selector: name: web template: metadata: labels: name: web spec: containers: - name: web image: gcr.io/my-project-id/myapp:v1 ports: - containerPort: 3000 name: http-server env: secret: - secretName: mysecret </code></pre> <p>And created a secret file:</p> <pre><code># secret.yml apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque data: RAILS_ENV: production </code></pre> <p>When I run:</p> <pre><code>kubectl create -f web-controller.yml </code></pre> <p>It showed:</p> <pre><code>error: could not read an encoded object from web-controller.yml: unable to load "web-controller.yml": json: cannot unmarshal object into Go value of type []v1.EnvVar error: no objects passed to create </code></pre> <p>Maybe the yaml format is wrong in the <code>web-controller.yml</code> file. Then how to write?</p>
<p>secret.yml</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  RAILS_ENV: production
</code></pre>
<p>stringData is the easy-mode version of what you're after. One thing though: you'll see the cleartext original YAML used to create the secret in the <code>last-applied-configuration</code> annotation (so with the above method you'll have a human-readable secret in your annotation, and with the below method you'll have the base64'd secret in your annotation), unless you follow up with the erase-annotation command like so:</p>
<p><strong>kubectl apply -f secret.yml <br> kubectl annotate secret mysecret kubectl.kubernetes.io/last-applied-configuration- <br></strong> (the - at the end is what says to erase it) <br> <strong>kubectl get secret mysecret -o yaml <br></strong> (to confirm)</p>
<p>Alternatively you'd do<br> <strong>Bash# echo -n production | base64 <br></strong> cHJvZHVjdGlvbg==</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
data:
  RAILS_ENV: cHJvZHVjdGlvbg==
</code></pre>
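<p>For completeness, the error from your replication controller is about the <code>env</code> block itself: <code>env</code> must be a list of EnvVar entries, so pulling the value from the secret would look roughly like this (a sketch reusing your names):</p>
<pre><code>spec:
  containers:
  - name: web
    image: gcr.io/my-project-id/myapp:v1
    ports:
    - containerPort: 3000
      name: http-server
    env:
    - name: RAILS_ENV
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: RAILS_ENV
</code></pre>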
<p>It's probably something obvious but I don't seem to find a solution for joining 2 vectors in prometheus.</p> <pre><code>sum( rabbitmq_queue_messages{queue=~".*"} ) by (queue) * on (queue) group_left max( label_replace( kube_deployment_labels{label_daemon_name!=""}, "queue", "$1", "label_daemon_queue_name", "(.*)" ) ) by (deployment, queue) </code></pre> <p>Below a picture of the output of the two separate vectors.</p> <p><a href="https://i.stack.imgur.com/lGlFH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/lGlFH.png" alt="enter image description here"></a></p>
<p>group_left expects the "many" side on the left, so you've got the factors of the <code>*</code> the wrong way around. Try it the other way (or keep the order and use group_right instead).</p>
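<p>A sketch of the swapped expression, keeping the metric and label names from your query (this is the usual info-metric join pattern, so the result is the per-queue message count labelled with the owning deployment):</p>
<pre><code>max(
  label_replace(
    kube_deployment_labels{label_daemon_name!=""},
    "queue", "$1", "label_daemon_queue_name", "(.*)"
  )
) by (deployment, queue)
* on (queue) group_left
sum(
  rabbitmq_queue_messages
) by (queue)
</code></pre>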
<p>I have three environments - QA, Staging, Production. Each one has its own credentials.properties file</p> <p>Right now I just have one secret and it's referenced and mounted in my yaml file as follows</p> <pre><code> - name: identity-service-secret-here-credentials-volume mountPath: "/root/.secrets" . . . - name: identity-service-secret-here-credentials-volume secret: secretName: identity-service-secret-here-credentials </code></pre> <p>I want it to do the equivalent of</p> <pre><code>if(env = QA) secretName = secret-qa if(env = Staging) secretName = secret-staging if(env = Prod) secretName = secret-prod </code></pre>
<p>It is bad design (also from a security perspective) to use helm control-structure directives to manage deployments across <code>dev, stage and prod</code> in one YAML file.</p> <p>It is best to manage distinct k8s objects for the deployments required in each environment.</p> <p>It may be necessary to maintain a distinct Secret in each stage of the pipeline, or to make modifications to it as it traverses the pipeline. Also take care that if you are storing the Secret as JSON or YAML in an SCM, some form of encryption to protect the sensitive information may be warranted.</p>
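<p>If you are already templating with helm, one common way to keep the manifests free of if/else while still getting a different secret per environment is to parameterise only the secret name and feed it from a per-environment values file (a sketch; the value key and file names here are made up):</p>
<pre><code># values-qa.yaml  (one such file per environment: values-staging.yaml, values-prod.yaml)
credentialsSecretName: secret-qa
</code></pre>
<p>and reference it in the chart template:</p>
<pre><code>      volumes:
      - name: identity-service-secret-here-credentials-volume
        secret:
          secretName: {{ .Values.credentialsSecretName }}
</code></pre>
<p>Each environment's pipeline then passes its own values file, e.g. <code>helm upgrade --install myapp ./chart -f values-qa.yaml</code> (chart and release names are hypothetical).</p>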
<p>I've made microservices with kubernetes.</p> <p>The each server consist of a pair of SpringServer and Database.</p> <p>Database has Public IP.</p> <p>The server works fine before move to kubernetes.</p> <p>But, now It isn't work.</p> <p>Here is my pod logs </p> <p>please help me </p> <p>I really don't know why spring cannot connect to Database.</p> <pre><code>[root@master book-service]# kubectl logs book-service-65d49674b8-z9x4g . ____ _ __ _ _ /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ \\/ ___)| |_)| | | | | || (_| | ) ) ) ) ' |____| .__|_| |_|_| |_\__, | / / / / =========|_|==============|___/=/_/_/_/ :: Spring Boot :: (v2.1.2.RELEASE) 2019-01-25 07:26:25.745 INFO 1 --- [ main] c.b.bookservice.BookServiceApplication : Starting BookServiceApplication on book-service-65d49674b8-z9x4g with PID 1 (/app.jar started by root in /) 2019-01-25 07:26:25.747 INFO 1 --- [ main] c.b.bookservice.BookServiceApplication : No active profile set, falling back to default profiles: default 2019-01-25 07:26:27.519 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Bootstrapping Spring Data repositories in DEFAULT mode. 2019-01-25 07:26:27.630 INFO 1 --- [ main] .s.d.r.c.RepositoryConfigurationDelegate : Finished Spring Data repository scanning in 97ms. Found 1 repository interfaces. 2019-01-25 07:26:28.460 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration' of type [org.springframework.transaction.annotation.ProxyTransactionManagementConfiguration$$EnhancerBySpringCGLIB$$cf55ef4d] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2019-01-25 07:26:28.512 INFO 1 --- [ main] trationDelegate$BeanPostProcessorChecker : Bean 'org.springframework.hateoas.config.HateoasConfiguration' of type [org.springframework.hateoas.config.HateoasConfiguration$$EnhancerBySpringCGLIB$$4ed63c7f] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying) 2019-01-25 07:26:29.428 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8081 (http) 2019-01-25 07:26:29.493 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat] 2019-01-25 07:26:29.494 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.14] 2019-01-25 07:26:29.510 INFO 1 --- [ main] o.a.catalina.core.AprLifecycleListener : The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path: [/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib] 2019-01-25 07:26:29.663 INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext 2019-01-25 07:26:29.663 INFO 1 --- [ main] o.s.web.context.ContextLoader : Root WebApplicationContext: initialization completed in 3766 ms 2019-01-25 07:26:30.080 INFO 1 --- [ main] com.zaxxer.hikari.HikariDataSource : HikariPool-1 - Starting... 2019-01-25 07:27:01.139 ERROR 1 --- [ main] com.zaxxer.hikari.pool.HikariPool : HikariPool-1 - Exception during pool initialization. 
java.sql.SQLNonTransientConnectionException: Could not connect to address=(host=210.108.48.235)(port=3306)(type=master) : connect timed out at org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.get(ExceptionMapper.java:234) ~[mariadb-java-client-2.3.0.jar!/:na] at org.mariadb.jdbc.internal.util.exceptions.ExceptionMapper.connException(ExceptionMapper.java:95) ~[mariadb-java-client-2.3.0.jar!/:na] at org.mariadb.jdbc.internal.protocol.AbstractConnectProtocol.connectWithoutProxy(AbstractConnectProtocol.java:1203) ~[mariadb-java-client-2.3.0.jar!/:na] at org.mariadb.jdbc.internal.util.Utils.retrieveProxy(Utils.java:560) ~[mariadb-java-client-2.3.0.jar!/:na] at org.mariadb.jdbc.MariaDbConnection.newConnection(MariaDbConnection.java:174) ~[mariadb-java-client-2.3.0.jar!/:na] at org.mariadb.jdbc.Driver.connect(Driver.java:92) ~[mariadb-java-client-2.3.0.jar!/:na] at com.zaxxer.hikari.util.DriverDataSource.getConnection(DriverDataSource.java:136) ~[HikariCP-3.2.0.jar!/:na] at com.zaxxer.hikari.pool.PoolBase.newConnection(PoolBase.java:369) ~[HikariCP-3.2.0.jar!/:na] at com.zaxxer.hikari.pool.PoolBase.newPoolEntry(PoolBase.java:198) ~[HikariCP-3.2.0.jar!/:na] at com.zaxxer.hikari.pool.HikariPool.createPoolEntry(HikariPool.java:467) [HikariCP-3.2.0.jar!/:na] at com.zaxxer.hikari.pool.HikariPool.checkFailFast(HikariPool.java:541) [HikariCP-3.2.0.jar!/:na] at com.zaxxer.hikari.pool.HikariPool.&lt;init&gt;(HikariPool.java:115) [HikariCP-3.2.0.jar!/:na] at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:112) [HikariCP-3.2.0.jar!/:na] at org.springframework.jdbc.datasource.DataSourceUtils.fetchConnection(DataSourceUtils.java:157) [spring-jdbc-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.jdbc.datasource.DataSourceUtils.doGetConnection(DataSourceUtils.java:115) [spring-jdbc-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.jdbc.datasource.DataSourceUtils.getConnection(DataSourceUtils.java:78) [spring-jdbc-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:319) [spring-jdbc-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.jdbc.support.JdbcUtils.extractDatabaseMetaData(JdbcUtils.java:356) [spring-jdbc-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.DatabaseLookup.getDatabase(DatabaseLookup.java:73) [spring-boot-autoconfigure-2.1.2.RELEASE.jar!/:2.1.2.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.JpaProperties.determineDatabase(JpaProperties.java:142) [spring-boot-autoconfigure-2.1.2.RELEASE.jar!/:2.1.2.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.JpaBaseConfiguration.jpaVendorAdapter(JpaBaseConfiguration.java:112) [spring-boot-autoconfigure-2.1.2.RELEASE.jar!/:2.1.2.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaConfiguration$$EnhancerBySpringCGLIB$$ee2cb1bc.CGLIB$jpaVendorAdapter$4(&lt;generated&gt;) [spring-boot-autoconfigure-2.1.2.RELEASE.jar!/:2.1.2.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaConfiguration$$EnhancerBySpringCGLIB$$ee2cb1bc$$FastClassBySpringCGLIB$$b3fde0d0.invoke(&lt;generated&gt;) [spring-boot-autoconfigure-2.1.2.RELEASE.jar!/:2.1.2.RELEASE] at org.springframework.cglib.proxy.MethodProxy.invokeSuper(MethodProxy.java:244) [spring-core-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.context.annotation.ConfigurationClassEnhancer$BeanMethodInterceptor.intercept(ConfigurationClassEnhancer.java:363) 
[spring-context-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.boot.autoconfigure.orm.jpa.HibernateJpaConfiguration$$EnhancerBySpringCGLIB$$ee2cb1bc.jpaVendorAdapter(&lt;generated&gt;) [spring-boot-autoconfigure-2.1.2.RELEASE.jar!/:2.1.2.RELEASE] at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_111] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_111] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_111] at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_111] at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:154) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:622) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:456) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1288) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1127) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:538) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:498) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) ~[spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:277) ~[spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1244) ~[spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1164) ~[spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:857) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:760) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:509) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] 
at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1288) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1127) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:538) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:498) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) ~[spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.config.DependencyDescriptor.resolveCandidate(DependencyDescriptor.java:277) ~[spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.doResolveDependency(DefaultListableBeanFactory.java:1244) ~[spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.DefaultListableBeanFactory.resolveDependency(DefaultListableBeanFactory.java:1164) ~[spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.resolveAutowiredArgument(ConstructorResolver.java:857) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.createArgumentArray(ConstructorResolver.java:760) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.ConstructorResolver.instantiateUsingFactoryMethod(ConstructorResolver.java:509) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.instantiateUsingFactoryMethod(AbstractAutowireCapableBeanFactory.java:1288) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBeanInstance(AbstractAutowireCapableBeanFactory.java:1127) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:538) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:498) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:320) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:222) ~[spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] 
at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:318) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199) [spring-beans-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1083) ~[spring-context-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:853) ~[spring-context-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:546) ~[spring-context-5.1.4.RELEASE.jar!/:5.1.4.RELEASE] at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:142) ~[spring-boot-2.1.2.RELEASE.jar!/:2.1.2.RELEASE] </code></pre>
<p>Are you able to ping the DB server from within the pod?<br> If you can ping it successfully from the pod, the network path is fine and Spring should be able to connect to the DB, so the remaining problem is most likely in the Spring configuration. If the ping fails, the pod simply cannot reach the database server.</p> <p>Just go into the container using the commands below,</p>
<pre><code>kubectl exec -it POD_NAME bash
ping DB_SERVER
</code></pre>
<p>I faced a similar situation when the DB machine name was given in the pod but it was expecting the full name along with the domain.</p>
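<p>Since ping only exercises ICMP, and the failure in your log is a TCP connect timeout on port 3306, a port-level check can be more telling. A minimal sketch, assuming bash and coreutils are available in the container image:</p>
<pre><code>kubectl exec -it POD_NAME -- bash
# inside the pod: test the actual MariaDB port instead of ICMP
timeout 5 bash -c 'cat &lt; /dev/null &gt; /dev/tcp/210.108.48.235/3306' \
  &amp;&amp; echo "3306 reachable" || echo "3306 unreachable"
</code></pre>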
<p>For testing purposes, I want to set up the kubernetes master to be only accessible from the local machine and not the outside. Ultimately I am going to run a proxy server docker container on the machine that is opened up to the outside. This is all inside a minikube VM.</p> <p>I figure configuring kube-proxy is the way to go. I did the following</p> <pre><code>kubeadm config view &gt; ~/cluster.yaml # edit proxy bind address vi ~/cluster.yaml kubeadm reset rm -rf /data/minikube kubeadm init --config cluster.yaml </code></pre> <p>Upon doing <code>netstat -ln | grep 8443</code> i see <code>tcp 0 0 :::8443 :::* LISTEN</code> which means it didn't take the IP.</p> <p>I have also tried <code>kubeadm init --apiserver-advertise-address 127.0.0.1</code> but that only changes the advertised address to 10.x.x.x in the <code>kubeadm config view</code>. I feel that is probably the wrong thing anyways. I don't want the API server to be inaccessible to the other docker containers that need to access it or something.</p> <p>I have also tried doing this <code>kubeadm config upload from-file --config ~/cluster.yaml</code> and then attempting to manually restart the docker running kube-proxy. Also tried to restart the machine/cluster after kubeadm config change but couldn't figure that out. When you reboot a minikube VM by hand kubeadm command disappears and not even docker is running. Various online methods of restarting things dont seem to work either (could be just doing this wrong).</p> <p>Also tried editing the kube-proxy docker's config file (bound to a local dir) but that gets overwritten when i restart the docker. I dont get it.</p> <p>There's nothing in the kubernetes dashboard that allows me to edit the config file of the kube-proxy either (since its a daemonset).</p> <p>Ultimately, I wish to use an authenticated proxy server sitting infront of the k8s master (apiserver specifically). Direct access to the k8s master from outside the VM will not work.</p> <p>Thanks</p>
<p>You could limit it via the local network configuration (firewall, routes). As far as I know, the API needs to be accessible, at least via the local network where the other nodes reside, unless you want to have a single-node "cluster".</p> <p>So, when you do not have a different network card that you could advertise or bind the address to, you need to limit access with the above-mentioned firewall or route rules.</p> <p>To your initial question topic, did you look into this issue? <a href="https://github.com/kubernetes/kubernetes/issues/39586" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/issues/39586</a></p>
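<p>A rough sketch of such a firewall rule, assuming iptables, the default kubeadm API server port 6443, and that <code>eth0</code> is the externally facing interface of the VM (adjust the interface name, and persist the rule across reboots however your distro does that). Dropping only on the external interface leaves loopback and in-cluster traffic to the apiserver untouched:</p>
<pre><code># block the API server port on the externally facing interface only
iptables -A INPUT -i eth0 -p tcp --dport 6443 -j DROP
</code></pre>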
<p>Kubernetes allows to limit pod resource usage.</p> <pre><code>requests: cpu: 100m memory: 128Mi limits: cpu: 200m # which is 20% of 1 core memory: 256Mi </code></pre> <p>Let's say my kubernetes node has 2 core. And I run this pod with limit of CPU: 200m on this node. In this case, will my pod use it's underlying node's <strong>1Core's 200m</strong> or <strong>2Core's 100m+100m</strong>?</p> <p>This calculation is needed for my <a href="http://docs.gunicorn.org/en/stable/design.html#how-many-workers" rel="noreferrer">gunicorn worker's number formula</a>, or nginx worker's number etc.. In gunicorn documentation it says </p> <blockquote> <p>Generally we recommend (2 x $num_cores) + 1 as the number of workers to start off with.</p> </blockquote> <p>So should I use 5 workers? (my node has 2 cores). Or it doesn't even matter since my pod has only allocated 200m cpu and I should consider my pod has 1 core?</p> <p><em><strong>TLDR:</strong></em> How many cores do pods use when its cpu usage is limited by kubernetes? If I run <code>top</code> inside pod, I'm seeing 2 cores available. But I'm not sure my application is using this 2 core's 10%+10% or 1core's 20%..</p>
<p>It will be limited to 200m in total, i.e. the equivalent of 20% of one core. The limit is enforced as a CPU-time quota, so the processes can still be scheduled on both cores, but their combined usage cannot exceed 200m. Also, <code>limit</code> means a pod can touch a maximum of that much CPU and no more, so pod CPU utilization will not always touch the limit.</p> <p>The total CPU capacity of a cluster is the total number of cores of all nodes present in the cluster.</p> <p>If you have a 2-node cluster where the first node has 2 cores and the second node has 1 core, the cluster's CPU capacity will be 3 cores (2 cores + 1 core). If you have a pod which requests 1.5 cores, it will not be scheduled to the second node, as that node has a capacity of only 1 core. It will instead be scheduled to the first node, since it has 2 cores.</p>
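<p>For the gunicorn worker formula, the number that matters is the limit the container actually gets, not the node's core count (<code>top</code> shows the host's cores). A small sketch to read the effective CPU allowance from inside the container, assuming cgroup v1 (cgroup v2 exposes the same information in <code>/sys/fs/cgroup/cpu.max</code> instead):</p>
<pre><code># quota / period = CPUs the container may use per scheduling period
quota=$(cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us)    # e.g. 20000 for a 200m limit
period=$(cat /sys/fs/cgroup/cpu/cpu.cfs_period_us)  # usually 100000
awk -v q="$quota" -v p="$period" 'BEGIN { printf "effective CPUs: %.2f\n", q / p }'
</code></pre>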
<p>I'm failing to connect a Kubernetes Cluster with my GitLab CE server. I'm almost a newby when it comes to GCE and k8s. When I try to create a new k8s cluster from GitLab (not to connect an existing cluster), a notification comes up that billing isn't setup correctly yet. </p> <p><code>Please enable billing for one of your projects to be able to create a Kubernetes cluster, then try again.</code></p> <hr> <p>What I did:</p> <ul> <li>I did all steps as described in the <a href="https://docs.gitlab.com/ee/user/project/clusters/" rel="nofollow noreferrer">official tutorial</a> twice.</li> <li>I verified that GitLab Omniauth is working correctly.</li> <li>I enabled amongst others the following k8s APIs: Cloud Resource Manager API, Cloud Google+ API, Compute Engine API, Kubernetes Engine API</li> <li>The above-mentioned warning provides a link to the GCE billing dashboard. There, I enabled k8s billing (or I assume I did it): <ul> <li>I linked my newly created k8s project to a billing account.</li> <li>I upgraded my GCE account according to the web console's notification to enable billing. I assume this means that billing is enabled. I tried both a billing account with free tier promotion and another one without to avoid running into free tier troubles.</li> </ul></li> <li>I waited due to <a href="https://gitlab.com/gitlab-com/support-forum/issues/3022" rel="nofollow noreferrer">this issue</a> for half a day</li> </ul> <p>Still, I'm running into the same warning. I appreciate any solutions or hints how to proceed to get the connection installed.</p> <hr> <p>In case you want to come up with the solution <code>create and connect an existing server</code> - I wouldn't mind to do so, but I also tried that. I was able to figure out the API URL of my cluster (<code>kubectl cluster-info</code>), but it wasn't published to the web, therefore not accessible by GitLab. In case you know how to fix that, please let me know.</p>
<p>After inspecting the network traffic, it seems to be related to certain APIs not being enabled on your project.</p>
<ol>
<li>Log into your Google Cloud Platform Console.</li>
<li>Make sure you have chosen your project</li>
<li>From the menu, choose API &amp; Services -> Dashboard</li>
<li>Click the ENABLE APIS AND SERVICES link at the top of the page.</li>
<li>Search for "Cloud Resource Manager", select it and Enable it.</li>
<li>Go back to the API library and search for "Cloud Billing", select it and Enable it.</li>
</ol>
<p>Doing this fixed the problem for me. If this doesn't fix your problem, then you can inspect your page and look at the network traffic in your browser. You will see the calls to the Google services and you can read the responses to get more info.</p>
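<p>If you prefer the command line, the same two APIs can be enabled with gcloud (a sketch; make sure the right project is selected first):</p>
<pre><code>gcloud config set project YOUR_PROJECT_ID
gcloud services enable cloudresourcemanager.googleapis.com cloudbilling.googleapis.com
</code></pre>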
<p>I have been trying to setup a K8s cluster on a set of Raspberry Pi's. Here is a link to my GitHub page that describes the whole set up:</p> <p><a href="https://github.com/joesan/plant-infra/blob/master/pi/README.md" rel="nofollow noreferrer">https://github.com/joesan/plant-infra/blob/master/pi/README.md</a></p> <p>I'm now stuck with the last step where I join my worker nodes with the master. I did issue the join command on the worker node, but after that I check the nodes in the master and I get to see the following:</p> <pre><code>pi@k8s-master-01:~ $ kubectl get nodes NAME STATUS ROLES AGE VERSION k8s-master-01 Ready master 56m v1.9.6 k8s-worker-01 NotReady &lt;none&gt; 26m v1.9.6 k8s-worker-02 NotReady &lt;none&gt; 6m v1.9.6 </code></pre> <p>The question is, do I need to install the container network like weave also on the worker nodes?</p> <p>Here is the log file from the worker node:</p> <pre><code>pi@k8s-worker-02:~ $ journalctl -u kubelet -- Logs begin at Thu 2016-11-03 17:16:42 UTC, end at Tue 2018-05-01 11:35:54 UTC. -- May 01 11:27:28 k8s-worker-02 systemd[1]: Started kubelet: The Kubernetes Node Agent. May 01 11:27:30 k8s-worker-02 kubelet[334]: I0501 11:27:30.995549 334 feature_gate.go:226] feature gates: &amp;{{} map[]} May 01 11:27:31 k8s-worker-02 kubelet[334]: I0501 11:27:31.005491 334 controller.go:114] kubelet config controller: starting controller May 01 11:27:31 k8s-worker-02 kubelet[334]: I0501 11:27:31.005584 334 controller.go:118] kubelet config controller: validating combination of defaults and flags May 01 11:27:31 k8s-worker-02 kubelet[334]: W0501 11:27:31.052134 334 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d May 01 11:27:31 k8s-worker-02 kubelet[334]: I0501 11:27:31.084480 334 server.go:182] Version: v1.9.6 May 01 11:27:31 k8s-worker-02 kubelet[334]: I0501 11:27:31.085670 334 feature_gate.go:226] feature gates: &amp;{{} map[]} May 01 11:27:31 k8s-worker-02 kubelet[334]: I0501 11:27:31.092807 334 plugins.go:101] No cloud provider specified. May 01 11:27:31 k8s-worker-02 kubelet[334]: I0501 11:27:31.110132 334 certificate_store.go:130] Loading cert/key pair from ("/var/lib/kubelet/pki/kubelet-client.crt", "/var/lib/ May 01 11:27:39 k8s-worker-02 kubelet[334]: E0501 11:27:39.905417 334 machine.go:194] failed to get cache information for node 0: open /sys/devices/system/cpu/cpu0/cache: no suc May 01 11:27:39 k8s-worker-02 kubelet[334]: I0501 11:27:39.911993 334 server.go:428] --cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to / May 01 11:27:39 k8s-worker-02 kubelet[334]: I0501 11:27:39.914203 334 container_manager_linux.go:242] container manager verified user specified cgroup-root exists: / May 01 11:27:39 k8s-worker-02 kubelet[334]: I0501 11:27:39.914272 334 container_manager_linux.go:247] Creating Container Manager object based on Node Config: {RuntimeCgroupsName May 01 11:27:39 k8s-worker-02 kubelet[334]: I0501 11:27:39.914895 334 container_manager_linux.go:266] Creating device plugin manager: false May 01 11:27:39 k8s-worker-02 kubelet[334]: I0501 11:27:39.919031 334 kubelet.go:291] Adding manifest path: /etc/kubernetes/manifests May 01 11:27:39 k8s-worker-02 kubelet[334]: I0501 11:27:39.919197 334 kubelet.go:316] Watching apiserver May 01 11:27:39 k8s-worker-02 kubelet[334]: E0501 11:27:39.935754 334 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https:/ May 01 11:27:39 k8s-worker-02 kubelet[334]: E0501 11:27:39.937449 334 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:480: Failed to list *v1.Node: Get https://192.16 May 01 11:27:39 k8s-worker-02 kubelet[334]: E0501 11:27:39.937492 334 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:471: Failed to list *v1.Service: Get https://192 May 01 11:27:39 k8s-worker-02 kubelet[334]: W0501 11:27:39.948764 334 kubelet_network.go:139] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back t May 01 11:27:39 k8s-worker-02 kubelet[334]: I0501 11:27:39.949871 334 kubelet.go:577] Hairpin mode set to "hairpin-veth" May 01 11:27:39 k8s-worker-02 kubelet[334]: W0501 11:27:39.951008 334 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d May 01 11:27:39 k8s-worker-02 kubelet[334]: I0501 11:27:39.952122 334 client.go:80] Connecting to docker on unix:///var/run/docker.sock May 01 11:27:39 k8s-worker-02 kubelet[334]: I0501 11:27:39.952976 334 client.go:109] Start docker client with request timeout=2m0s May 01 11:27:39 k8s-worker-02 kubelet[334]: W0501 11:27:39.959045 334 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d May 01 11:27:39 k8s-worker-02 kubelet[334]: W0501 11:27:39.971616 334 cni.go:171] Unable to update cni config: No networks found in /etc/cni/net.d May 01 11:27:39 k8s-worker-02 kubelet[334]: I0501 11:27:39.971765 334 docker_service.go:232] Docker cri networking managed by cni May 01 11:27:40 k8s-worker-02 kubelet[334]: I0501 11:27:40.002411 334 docker_service.go:237] Docker Info: &amp;{ID:25GN:65LU:UXAR:CUUY:DOQH:ST4A:IQOE:PIDR:BKYC:UVJH:LI5H:HQSG Contai May 01 11:27:40 k8s-worker-02 kubelet[334]: I0501 11:27:40.002766 334 docker_service.go:250] Setting cgroupDriver to cgroupfs May 01 11:27:40 k8s-worker-02 kubelet[334]: I0501 11:27:40.058142 334 remote_runtime.go:43] Connecting to runtime service unix:///var/run/dockershim.sock May 01 11:27:40 k8s-worker-02 kubelet[334]: I0501 11:27:40.098202 334 kuberuntime_manager.go:186] Container runtime docker initialized, version: 18.04.0-ce, apiVersion: 1.37.0 May 01 11:27:40 k8s-worker-02 kubelet[334]: I0501 11:27:40.110512 334 server.go:755] Started kubelet May 01 11:27:40 k8s-worker-02 kubelet[334]: I0501 11:27:40.112242 334 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach May 01 11:27:40 k8s-worker-02 kubelet[334]: I0501 11:27:40.114014 334 server.go:129] Starting to listen on 0.0.0.0:10250 May 01 11:27:40 k8s-worker-02 kubelet[334]: E0501 11:27:40.114962 334 kubelet.go:1281] Image garbage collection failed 
once. Stats initialization may not have completed yet: fai May 01 11:27:40 k8s-worker-02 kubelet[334]: I0501 11:27:40.133665 334 server.go:299] Adding debug handlers to kubelet server. May 01 11:27:40 k8s-worker-02 kubelet[334]: E0501 11:27:40.141790 334 event.go:209] Unable to write event: 'Post https://192.168.0.101:6443/api/v1/namespaces/default/events: dia May 01 11:27:40 k8s-worker-02 kubelet[334]: E0501 11:27:40.175654 334 container_manager_linux.go:583] [ContainerManager]: Fail to get rootfs information unable to find data for May 01 11:27:40 k8s-worker-02 kubelet[334]: I0501 11:27:40.175765 334 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer May 01 11:27:40 k8s-worker-02 kubelet[334]: I0501 11:27:40.176241 334 volume_manager.go:247] Starting Kubelet Volume Manager lines 1-41 </code></pre> <p>Any idea as to why my worker nodes show up as NotReady?</p> <p>EDIT: I traced the error with the kubectl describe nodes command:</p> <pre><code>Name: k8s-worker-02 Roles: &lt;none&gt; Labels: beta.kubernetes.io/arch=arm beta.kubernetes.io/os=linux kubernetes.io/hostname=k8s-worker-02 Annotations: node.alpha.kubernetes.io/ttl=0 volumes.kubernetes.io/controller-managed-attach-detach=true Taints: &lt;none&gt; CreationTimestamp: Tue, 01 May 2018 11:26:50 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk False Tue, 01 May 2018 11:40:17 +0000 Tue, 01 May 2018 11:26:43 +0000 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Tue, 01 May 2018 11:40:17 +0000 Tue, 01 May 2018 11:26:43 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Tue, 01 May 2018 11:40:17 +0000 Tue, 01 May 2018 11:26:43 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure Ready False Tue, 01 May 2018 11:40:17 +0000 Tue, 01 May 2018 11:26:43 +0000 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized. WARNING: CPU hardcapping unsupported </code></pre> <p>How can I solve this?</p>
<ol> <li>Go to your master node; on the master node go to /etc/cni/net.d</li> <li>In that folder you will find a CNI config file</li> <li>Copy that file and upload it to /etc/cni/net.d on your worker node (see the sketch below)</li> <li>Your worker node will be Ready in 1 to 2 minutes</li> <li>If this is not working, add a comment</li> </ol>
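<p>A sketch of step 3, assuming SSH access between the Pis and that the config lives under the default path (the actual file name depends on the network plugin, e.g. a <code>.conflist</code> file for weave):</p>
<pre><code># on the master node
scp /etc/cni/net.d/* pi@k8s-worker-02:~/

# on the worker node
sudo mkdir -p /etc/cni/net.d
sudo mv ~/*.conflist /etc/cni/net.d/   # adjust the pattern to the copied file name(s)
</code></pre>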
<p>In my deployment a pod can be in a situation where it needs to be recreated. In this case it can still process traffic but should be recreated asap.</p> <p>So I think about having a livenessProbe that reports failure if the pod needs to be restarted. The readiness probe will still report ok.</p> <p>I know that eventually kubernetes will recreate all pods and the system will be fine again. </p> <p>My question now is can this be done without outage? So lets assume all pods of a replicaset report are not alive at the same time. Will kubernetes kill them all and then replace them or will it act in a rolling update fashion where it starts a new pod, waits for it to be ready, then kills one not alive pod and continue this way until all are replaced?</p> <p>Is this the default behaviour of kubernetes? If not can it be configured to behave like this?</p>
<p>K8s will not do a rolling replacement of pods that fail their liveness probe (or fail for any other reason); the kubelet just restarts the failing container in place.</p> <p>Also, about probes: when to run the liveness probe for the first time and how frequently to run it is specified in the liveness probe itself. As you have multiple replicas of the same pod, these values will be the same for all replicas managed by a single ReplicaSet. So yes, this is the default behavior.</p> <p>But if you really want to do this without an outage, you can create two ReplicaSets that manage two different sets of the same pods but with different values for the liveness probe params below:</p>
<pre><code>initialDelaySeconds: Number of seconds after the container has started before liveness or readiness probes are initiated.

periodSeconds: How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1.

timeoutSeconds: Number of seconds after which the probe times out. Defaults to 1 second.

successThreshold: Minimum consecutive successes for the probe to be considered successful after having failed.

failureThreshold: When a Pod starts and the probe fails, Kubernetes will try failureThreshold times before giving up.
</code></pre>
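<p>For reference, these parameters sit directly on the probe in the pod template, so the two ReplicaSets would differ only in this block (a minimal sketch; the path, port and timings are placeholders):</p>
<pre><code>livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 10
  timeoutSeconds: 1
  failureThreshold: 3
</code></pre>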
<p>I have a backend using https. I want to separate load on that back-end based on URL/path.</p> <p>I decided to use ingress to do this url/path based logic in order to move traffic to different back-ends ( same back-ends , just duplicated to different NodePorts )</p> <p>my question is how I can configure the ingress to receive https requests and to forward those https requests to the https back-end?</p> <p>thanks</p> <p>edit: I added the yaml file:</p> <pre><code>spec: rules: - http: paths: - backend: serviceName: service servicePort: 9443 path: /carbon - backend: serviceName: service2 servicePort: 9443 path: /oauth </code></pre> <p>for some reason I can;t change the rule form http to https</p>
<p><strong>Attention:</strong> This answer applies to the ingress-nginx solution provided by the kubernetes organisation on github (<a href="https://github.com/kubernetes/ingress-nginx" rel="noreferrer">https://github.com/kubernetes/ingress-nginx</a>)</p>
<hr />
<p>If you want to use load balancing mechanisms in k8s you should use <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="noreferrer">services</a> instead and start multiple instances behind that service; that way k8s will do the load balancing. If you want to use different versions of your backend (e.g. prod and test), your way of separating them is fine.</p>
<p>If your service is only reachable via https you need to add the following annotation to your ingress yaml: (<a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol" rel="noreferrer">documentation</a>)</p>
<pre><code>nginx.ingress.kubernetes.io/backend-protocol: &quot;HTTPS&quot;
</code></pre>
<p>To secure the ingress itself, take a look at this: <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#tls" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#tls</a></p>
<p>But if you want the backend services to decrypt the TLS communication, use the following annotation instead: (<a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#ssl-passthrough" rel="noreferrer">documentation</a>)</p>
<pre><code>nginx.ingress.kubernetes.io/ssl-passthrough: &quot;true&quot;
</code></pre>
<p><strong>Edit:</strong></p>
<p>The Ingress YAML should look like this if you want to reach the backend via TLS:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-name
  namespace: namespace-name
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: &quot;HTTPS&quot;
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: service
          servicePort: 9443
        path: /carbon
      - backend:
          serviceName: service2
          servicePort: 9443
        path: /oauth
</code></pre>
<p>The Ingress YAML should look like this if you want to reach the backend via TLS with TLS decryption in the ingress controller:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-name
  namespace: namespace-name
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: &quot;HTTPS&quot;
spec:
  tls:
  - hosts:
    - app.myorg.com
    secretName: tls-secret
  rules:
  - http:
      paths:
      - backend:
          serviceName: service
          servicePort: 9443
        path: /carbon
      - backend:
          serviceName: service2
          servicePort: 9443
        path: /oauth
</code></pre>
<p>It's important to note that tls-secret is the name of a Secret with a valid certificate issued for the host (app.myorg.com)</p>
<hr />
<p>The Ingress YAML should look like this if you want to reach the backend via TLS with TLS decryption in the backend:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-name
  namespace: namespace-name
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: &quot;true&quot;
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: service
          servicePort: 9443
        path: /carbon
      - backend:
          serviceName: service2
          servicePort: 9443
        path: /oauth
</code></pre>
<p>I never tested the last version myself so I don't know if that actually works, but I'd strongly advise reading <a href="https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#ssl-passthrough" rel="noreferrer">this</a> passage for that variant.</p>
<p>I want to reject all docker registries except my own one. I'm looking for a some kind of policies for docker registries and their images.</p> <p>For example my registry name is <code>registry.my.com</code>. I want to make kubernetes pulling/running images only from <code>registry.my.com</code>, so:</p> <pre><code>image: prometheus:2.6.1 </code></pre> <p>or any another should be rejected, while:</p> <pre><code>image: registry.my.com/prometheus:2.6.1 </code></pre> <p>shouldn't.</p> <p>Is there a way to do that?</p>
<p><a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/" rel="noreferrer">Admission Controllers</a> is what you are looking for.</p> <p>Admission controllers intercept operations to validate what should happen before the operation is committed by the api-server.</p> <p>An example is the <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#imagepolicywebhook" rel="noreferrer">ImagePolicyWebhook</a>, an admission controller that intercept Image operations to validate if it should be allowed or rejected.</p> <p>It will make a call to an REST endpoint with a payload like:</p> <pre><code>{ "apiVersion":"imagepolicy.k8s.io/v1alpha1", "kind":"ImageReview", "spec":{ "containers":[ { "image":"myrepo/myimage:v1" }, { "image":"myrepo/myimage@sha256:beb6bd6a68f114c1dc2ea4b28db81bdf91de202a9014972bec5e4d9171d90ed" } ], "annotations":[ "mycluster.image-policy.k8s.io/ticket-1234": "break-glass" ], "namespace":"mynamespace" } } </code></pre> <p>and the API answer with <strong>Allowed</strong>:</p> <pre><code>{ "apiVersion": "imagepolicy.k8s.io/v1alpha1", "kind": "ImageReview", "status": { "allowed": true } } </code></pre> <p>or <strong>Rejected</strong>:</p> <pre><code>{ "apiVersion": "imagepolicy.k8s.io/v1alpha1", "kind": "ImageReview", "status": { "allowed": false, "reason": "image currently blacklisted" } } </code></pre> <p>The endpoint could be a Lambda function or a container running in the cluster.</p> <p>This github repo <a href="https://github.com/flavio/kube-image-bouncer" rel="noreferrer">github.com/flavio/kube-image-bouncer</a> implements a sample using <a href="https://github.com/flavio/kube-image-bouncer/blob/master/handlers/image_policy.go" rel="noreferrer"><strong>ImagePolicyWebhook</strong></a> to reject containers using the tag "Latest". </p> <p>There is also the option to use the flag <em><code>registry-whitelist</code></em> on startup to a pass a comma separated list of allowed registries, this will be used by the <a href="https://github.com/flavio/kube-image-bouncer/blob/master/handlers/validating_admission.go" rel="noreferrer"><strong>ValidatingAdmissionWebhook</strong></a> to validate if the registry is whitelisted.</p> <p>.</p> <p>The other alternative is the project <a href="https://github.com/open-policy-agent/kubernetes-policy-controller" rel="noreferrer">Open Policy Agent</a>[OPA].</p> <p>OPA is a flexible engine used to create policies based on rules to match resources and take decisions according to the result of these expressions. It is a mutating and a validating webhook that gets called for matching Kubernetes API server requests by the admission controller mentioned above. In summary, the operation would work similarly as described above, the only difference is that the rules are written as configuration instead of code. 
The same example above rewritten to use OPA would be similar to this:</p> <pre><code>package admission import data.k8s.matches deny[{ "id": "container-image-whitelist", # identifies type of violation "resource": { "kind": "pods", # identifies kind of resource "namespace": namespace, # identifies namespace of resource "name": name # identifies name of resource }, "resolution": {"message": msg}, # provides human-readable message to display }] { matches[["pods", namespace, name, matched_pod]] container = matched_pod.spec.containers[_] not re_match("^registry.acmecorp.com/.+$", container.image) # The actual validation msg := sprintf("invalid container registry image %q", [container.image]) } </code></pre> <p>The above translates to: <em>deny any pod where the container image does not match the following registry <code>registry.acmecorp.com</code></em></p>
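<p>For context, a minimal sketch of how the <code>ImagePolicyWebhook</code> plugin described above is typically wired into the API server (the paths are placeholders and the exact <code>apiVersion</code> of the admission configuration depends on your cluster version): the API server is started with <code>--admission-control-config-file</code> pointing at a file along these lines, and the referenced kubeconfig tells it how to reach the webhook endpoint.</p> <pre><code>apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: ImagePolicyWebhook
    configuration:
      imagePolicy:
        # placeholder: kubeconfig describing how to reach the webhook endpoint
        kubeConfigFile: /etc/kubernetes/image-policy/kubeconfig.yaml
        allowTTL: 50
        denyTTL: 50
        retryBackoff: 500
        # reject images when the webhook cannot be reached
        defaultAllow: false
</code></pre>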
<p>I try to deploy one docker image that I build and is not on a public or private registry. </p> <p>I use the <code>imagePullPolicy: IfNotPresent</code> for the Kubernetes deployment.</p> <p>I use kubeadm v1.12 the error:</p> <pre><code>Normal Scheduled 35s default-scheduler Successfully assigned default/test-777dd9bc96-chgc7 to ip-10-0-1-154 Normal SandboxChanged 32s kubelet, ip-10-0-1-154 Pod sandbox changed, it will be killed and re-created. Normal BackOff 30s (x3 over 31s) kubelet, ip-10-0-1-154 Back-off pulling image "test_kube" Warning Failed 30s (x3 over 31s) kubelet, ip-10-0-1-154 Error: ImagePullBackOff Normal Pulling 15s (x2 over 34s) kubelet, ip-10-0-1-154 pulling image "test" Warning Failed 13s (x2 over 33s) kubelet, ip-10-0-1-154 Failed to pull image "test": rpc error: code = Unknown desc = Error response from daemon: pull access denied for test_kube, repository does not exist or may require 'docker login' Warning Failed 13s (x2 over 33s) kubelet, ip-10-0-1-154 Error: ErrImagePull </code></pre> <p>My deployment file: </p> <pre><code>apiVersion: apps/v1beta1 kind: Deployment vmetadata: name: test-kube spec: template: metadata: labels: app: test spec: containers: - name: test image: test imagePullPolicy: IfNotPresent ports: - containerPort: 3000 env: - name: SECRET-KUBE valueFrom: secretKeyRef: name: secret-test key: username </code></pre> <blockquote> <p>docker images]</p> </blockquote> <pre><code>REPOSITORY TAG test latest test test </code></pre> <p>In the deployment file i tried with </p> <blockquote> <p>image: test and with image: test:test</p> </blockquote> <p>The same error:</p> <blockquote> <p>Error: ErrImagePull</p> </blockquote>
<p>You should have a docker private registry on the master node of the kubernetes cluster so that if the pod is deployed on a node to pull the image from there. You can find the steps to create a Kubernetes cluster with docker private registry at: <a href="http://dradoaica.blogspot.com/2019/01/kubernetes-cluster-with-docker-private.html" rel="nofollow noreferrer">Kubernetes cluster with docker private registry</a></p> <p>6. Creating a docker private registry on master node</p> <pre><code># Set basic auth. rm -f /auth/* mkdir -p /auth docker run --entrypoint htpasswd registry:2 -Bbn test test &gt; /auth/htpasswd docker rm registry -f </code></pre> <pre><code># Set certificates auth. rm -f /certs/* mkdir -p /certs openssl genrsa 1024 &gt; /certs/registrykey.pem chmod 400 /certs/registrykey.pem openssl req -new -x509 -nodes -sha1 -days 365 -key /certs/registrykey.pem -out /certs/registry.pem -subj "/C=/ST=/L=/O=/OU=/CN=registry.com" &gt; /dev/null 2&gt;&amp;1 docker run -d -e REGISTRY_HTTP_ADDR=0.0.0.0:5000 -p 5000:5000 --restart=always --name registry -v `pwd`/auth:/auth -e "REGISTRY_AUTH=htpasswd" -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd -v `pwd`/certs:/certs -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/registry.pem -e REGISTRY_HTTP_TLS_KEY=/certs/registrykey.pem registry:2 # Create secret to be used in "imagePullSecrets" section of a pod kubectl create secret docker-registry regsecret --docker-server=192.168.147.3:5000 --docker-username=test --docker-password=test --namespace=kube-system # Push image in private registry. docker tag test-image:latest 192.168.147.3:5000/test-image docker push 192.168.147.3:5000/test-image </code></pre> <p>7. YAML example for pod with image from private registry</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: test-site labels: app: web spec: containers: - name: test image: 192.168.147.3:5000/test-image:latest ports: - containerPort: 8000 imagePullPolicy: Always imagePullSecrets: - name: regsecret </code></pre>
<p>The Kubernetes Horizontal Pod Autoscaler walkthrough in <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/</a> explains that we can perform autoscaling on custom metrics. What I didn't understand is when to use the two API versions: v2beta1 and v2beta2. If anybody can explain, I would really appreciate it.</p> <p>Thanks in advance.</p>
<p>The first version, <strong>autoscaling/v2beta1</strong>, doesn't allow you to scale your pods based on custom metrics. It only allows you to scale your application based on <code>CPU</code> and <code>memory</code> utilization.</p> <p>The second version, <strong>autoscaling/v2beta2</strong>, allows users to autoscale based on custom metrics. It allows autoscaling on metrics coming from outside of Kubernetes; a new External metric source was added in this API.</p> <pre><code>metrics: - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 50 </code></pre> <p>It will identify a specific metric to autoscale on based on the metric name and a label selector. Those metrics can come from anywhere, like a Stackdriver or Prometheus monitoring application, and you can scale your application based on some query from Prometheus.</p> <p>It is generally better to use the <code>v2beta2</code> API because it can scale on CPU and memory as well as on custom metrics, while the v2beta1 API can scale only on internal metrics.</p> <p>The snippet I mentioned in the answer shows how you can specify the target CPU utilisation in the <code>v2beta2</code> API.</p>
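<p>For comparison, a hedged sketch of what a custom/external metric block looks like in the <code>v2beta2</code> API (the metric name, selector and target value below are placeholders taken from a typical queue-based example, not from your cluster):</p> <pre><code>metrics:
  - type: External
    external:
      metric:
        name: queue_messages_ready      # placeholder external metric name
        selector:
          matchLabels:
            queue: worker_tasks         # placeholder label selector
      target:
        type: AverageValue
        averageValue: "30"              # scale so each pod handles ~30 messages
</code></pre>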
<p>I have a service (/deployment/pod) running in my Minikube (installed on my Mac) that needs to call an external http service that runs directly on my Mac (i.e. outside Minikube). The domain name of that external service is defined into my Mac /etc/hosts file. Yet, my service within Minikube cannot call that external service. Any idea what I need to configure where? Many thanks. C </p>
<p>Create <code>Endpoints</code> that will forward traffic to your desired external IP address (your local machine). You can directly connect using <code>Endpoints</code>, but according to <code>Google Cloud best practice</code> (<a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-mapping-external-services" rel="nofollow noreferrer">doc</a>) the recommended way is to access it through a <code>Service</code></p> <p><a href="https://i.stack.imgur.com/gKLJd.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gKLJd.jpg" alt="enter image description here" /></a></p> <p>Create your <code>Endpoints</code></p> <pre><code>kind: Endpoints apiVersion: v1 metadata: name: local-ip subsets: - addresses: - ip: 10.240.0.4 # IP of your desired endpoint ports: - port: 27017 # Port that you want to access </code></pre> <p>Then create your <code>Service</code></p> <pre><code>kind: Service apiVersion: v1 metadata: name: local-ip spec: type: ClusterIP ports: - port: 27017 targetPort: 27017 </code></pre> <p>Now you can call the external http service using the <code>Service</code> name, in this case <code>local-ip</code>, like any other internal service of <code>minikube</code>.</p>
<p>A few days ago, I looked up why none of the pods are being scheduled to the master node, and found this question: <a href="https://stackoverflow.com/questions/43147941/allow-scheduling-of-pods-on-kubernetes-master">Allow scheduling of pods on Kubernetes master?</a></p> <p>It says that this is because the master node is tainted with the "NoSchedule" effect, and gives the command to remove that taint.</p> <p>But before I execute that command on my cluster, I want to understand why it was there in the first place.</p> <p>Is there a reason why the master node should not run pods? Are there any best practices it relates to?</p>
<p>The purpose of Kubernetes is to deploy applications easily and scale them based on demand. The pod is the basic entity which runs the application, and the number of pods can be increased or decreased based on high or low demand respectively (Horizontal Pod Autoscaler).</p> <p>These worker pods need to run on worker nodes, especially if you’re looking at a big application where your cluster might scale up to hundreds of nodes based on demand (Cluster Autoscaler). The growing number of pods can put pressure on your nodes, and once they do you can always add worker nodes to the cluster using the cluster autoscaler. Suppose you made your master schedulable: the high memory and CPU pressure would put the master at risk of crashing, and mind that you can’t autoscale the master using the autoscaler. This way you’re putting your whole cluster at risk. If you have a single master, you will not be able to schedule anything if the master crashes. If you have 3 masters and one of them crashes, then the other two masters have to take the extra load of scheduling and managing worker nodes, increasing the load on themselves and hence the risk of failure.</p> <p>Also, in the case of a larger cluster, you already need master nodes with high resources just to manage your worker nodes. You can’t put additional load on the master nodes to run the workload as well in that case. Please have a look at setting up a large cluster in Kubernetes <a href="https://kubernetes.io/docs/setup/cluster-large/" rel="noreferrer">here</a>.</p> <p>If you have a manageable workload and you know it doesn’t grow beyond a certain level, you can make the master schedulable. However, for a production cluster it is not recommended at all.</p>
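<p>If you do decide to allow scheduling on the master anyway (for example in a small test cluster), the taint from the linked question can be removed and later re-added with kubectl; the node name below is a placeholder:</p> <pre><code># remove the NoSchedule taint from a master node (allows regular pods to land there)
kubectl taint nodes &lt;master-node-name&gt; node-role.kubernetes.io/master:NoSchedule-

# add it back to stop scheduling regular pods on the master again
kubectl taint nodes &lt;master-node-name&gt; node-role.kubernetes.io/master=:NoSchedule
</code></pre>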
<p>I'm running a uwsgi+flask application; the app is running as a k8s pod.</p> <p>When I deploy a new pod (a new version), the existing pod gets SIGTERM.</p> <p>This causes the master to stop accepting new connections <strong>at the same moment</strong>, which causes issues as the LB still passes requests to the pod (for a few more seconds). </p> <p>I would like the master to wait 30 sec BEFORE it stops accepting new connections (when getting SIGTERM) but couldn't find a way. Is it possible?</p> <p>My uwsgi.ini file: [uwsgi]</p> <pre><code>;https://uwsgi-docs.readthedocs.io/en/latest/HTTP.html http = :8080 wsgi-file = main.py callable = wsgi_application processes = 2 enable-threads = true master = true reload-mercy = 30 worker-reload-mercy = 30 log-5xx = true log-4xx = true disable-logging = true stats = 127.0.0.1:1717 stats-http = true single-interpreter= true ;https://github.com/containous/traefik/issues/615 http-keepalive=true add-header = Connection: Keep-Alive </code></pre>
<p>Seems like this is not possible to achieve using uwsgi:</p> <p><a href="https://github.com/unbit/uwsgi/issues/1974" rel="noreferrer">https://github.com/unbit/uwsgi/issues/1974</a></p> <p>The solution - (as mentioned on this kubernetes issue):</p> <p><a href="https://github.com/kubernetes/contrib/issues/1140" rel="noreferrer">https://github.com/kubernetes/contrib/issues/1140</a></p> <p>Is to use the prestop hook, quite ugly but will help to achieve zero downtime:</p> <pre class="lang-yaml prettyprint-override"><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: nginx spec: template: metadata: labels: app: nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 lifecycle: preStop: exec: command: ["/bin/sleep","5"] </code></pre> <p>The template is taken from this answer: <a href="https://stackoverflow.com/a/39493421/3659858">https://stackoverflow.com/a/39493421/3659858</a></p>
<p>I made a small Kubernetes cluster setup. After a few hours I tried to delete it, but it is impossible to delete the cloud resources. </p> <p>I tried from the GCP UI: </p> <p><a href="https://i.stack.imgur.com/7hWib.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7hWib.png" alt="enter image description here"></a></p> <p>and with <code>gcloud</code>:</p> <pre><code>$ gcloud container clusters delete nat-test-cluster The following clusters will be deleted. - [nat-test-cluster] in [europe-west3-c] Do you want to continue (Y/n)? Y ERROR: (gcloud.container.clusters.delete) Some requests did not succeed: - args: [u'ResponseError: code=404, message=Not found: projects/proj/zones/europe-west3-c/clusters/nat-test-cluster.\nCould not find [nat-test-cluster] in [europe-west3-c].\nDid you mean [nat-test-cluster] in [us-central1-a]?'] exit_code: 1 message: ResponseError: code=404, message=Not found: projects/dotnet-core-cluster/zones/europe-west3-c/clusters/nat-test-cluster. Could not find [nat-test-cluster] in [europe-west3-c]. Did you mean [nat-test-cluster] in [us-central1-a]? </code></pre> <p>Those machines look like they are still running, but are inaccessible. I don't know what else to try. I contacted GCP billing support to stop the billing, but they said I don't have a technical support plan and they can't help me. So annoying that I need to pay for support for problems not in my control.</p> <p>How to delete this cluster? What to do?</p>
<p>If we look at the documentation for deleting a cluster found here:</p> <p><a href="https://cloud.google.com/sdk/gcloud/reference/container/clusters/delete" rel="noreferrer">https://cloud.google.com/sdk/gcloud/reference/container/clusters/delete</a></p> <p>We find that it has an optional zone parameter. When you create (or delete) a cluster, you MUST supply a zone. If you do NOT supply a zone, your default zone (as believed by the gcloud command) will be used. In your output, we seem to see that your gcloud command thinks its DEFAULT zone is europe-west3-c while it appears that the zone in which the cluster lives is us-central1-a. I believe the solution will be to add the <code>--zone us-central1-a</code> parameter to your gcloud command.</p>
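<p>For illustration, the delete command with the zone made explicit would look like this (assuming the cluster really does live in <code>us-central1-a</code>, as the error message suggests):</p> <pre><code># delete the cluster in the zone it was actually created in
gcloud container clusters delete nat-test-cluster --zone us-central1-a

# or change the default zone first and then delete
gcloud config set compute/zone us-central1-a
gcloud container clusters delete nat-test-cluster
</code></pre>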
<p>I'm looking to manually update, with the command <strong>kubectl autoscale</strong>, the maximum number of replicas for auto scaling. </p> <p>However, each time I run the command it creates a new hpa that fails to launch the pod, and I don't know why at all :(</p> <p>Do you have an idea how I can manually update my HPA with kubectl? </p> <p><a href="https://gist.github.com/zyriuse75/e75a75dc447eeef9e8530f974b19c28a" rel="nofollow noreferrer">https://gist.github.com/zyriuse75/e75a75dc447eeef9e8530f974b19c28a</a></p>
<p>I think you are mixing two topics here. One is manually scaling a pod (you can do it through a deployment by applying <code>kubectl scale deploy {mydeploy} --replicas={#repl}</code>). On the other hand you have the HPA (Horizontal Pod Autoscaler); in order to use an HPA you need to have some app metrics provider system configured, e.g.: </p> <ul> <li><p>metrics server <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/metrics-server" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/metrics-server</a></p></li> <li><p>heapster (deprecated) <a href="https://github.com/kubernetes-retired/heapster" rel="nofollow noreferrer">https://github.com/kubernetes-retired/heapster</a> </p></li> </ul> <p>Then you can create an HPA to handle your autoscaling; you can get more info on this link <a href="https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/</a></p> <p>Once created, you can patch your HPA or, the easiest way, delete it and create it again:</p> <pre><code>kubectl delete hpa hpa-pod -n ns-svc-cas
kubectl autoscale deployment {mydeploy} --min={#number} --max={#number} -n ns-svc-cas
</code></pre>
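<p>As an illustrative alternative (a sketch only; the deployment name, namespace and numbers are placeholders), the same HPA can be managed declaratively, which makes later changes a matter of editing one file and re-applying it:</p> <pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-pod
  namespace: ns-svc-cas
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: mydeploy                     # placeholder: the deployment to scale
  minReplicas: 1
  maxReplicas: 4
  targetCPUUtilizationPercentage: 70
</code></pre> <p>Updating <code>minReplicas</code>/<code>maxReplicas</code> in this file and running <code>kubectl apply -f hpa.yaml</code> updates the existing HPA in place instead of creating a new one.</p>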
<p>Is probe frequency customizable in liveness/readiness probe?</p> <p>Also, how many times readiness probe fails before it removes the pod from service load-balancer? Is it customizable?</p>
<p>To customize the liveness/readiness probe frequency and other parameters, we need to add a livenessProbe/readinessProbe element inside the containers element of the YAML associated with that pod. A simple example of the YAML file is given below:</p> <pre><code>apiVersion: v1 kind: Pod metadata: name: liveness-exec spec: containers: - name: liveness-ex image: ubuntu args: - /bin/sh - -c - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy;sleep 600 livenessProbe: exec: command: - cat - /tmp/healthy initialDelaySeconds: 5 periodSeconds: 5 </code></pre> <p>The initialDelaySeconds parameter ensures that the liveness probe is first checked 5 seconds after container start, and periodSeconds ensures that it is checked every 5 seconds after that. For more parameters you can go to this link: <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/</a> </p>
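<p>On the second part of the question: a readiness probe only marks the container as not ready after <code>failureThreshold</code> consecutive failures (the default is 3), at which point the pod is removed from the service endpoints, and <code>successThreshold</code> consecutive successes are needed for it to be marked ready again. A minimal sketch with those knobs spelled out (the path and port are placeholders):</p> <pre><code>readinessProbe:
  httpGet:
    path: /healthz          # placeholder: your application's health endpoint
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10         # probe every 10 seconds
  timeoutSeconds: 1
  successThreshold: 1
  failureThreshold: 3       # removed from service endpoints after 3 consecutive failures
</code></pre>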
<p>I have so far been unable to find any azure library for creating a node within an AKS cluster. I can use the azure cli, but my goal is to use python.</p> <p>I can create the resource and resource groups using the azure python SDK - resource_groups.create_or_update('azure-sample-group', resource_group_params)</p> <p>Can someone point me to the right docs or some tips? I appreciate all your help.</p>
<p>You can do that, <a href="https://learn.microsoft.com/en-us/python/api/azure-mgmt-containerservice/azure.mgmt.containerservice.v2018_03_31.operations.managedclustersoperations?view=azure-python#create-or-update-resource-group-name--resource-name--parameters--custom-headers-none--raw-false--polling-true----operation-config-" rel="nofollow noreferrer">here's the docs</a> for the method(s) you are looking for. <a href="https://github.com/Azure/azure-sdk-for-python/blob/master/azure-mgmt-containerservice/azure/mgmt/containerservice/v2018_08_01_preview/operations/managed_clusters_operations.py" rel="nofollow noreferrer">Here's the SDK code</a> for the same stuff. Model for <a href="https://learn.microsoft.com/en-us/python/api/azure-mgmt-containerservice/azure.mgmt.containerservice.v2018_03_31.models.managedcluster?view=azure-python" rel="nofollow noreferrer">Managed Clusters</a></p> <p>Example code would be something like:</p> <pre><code>from azure.mgmt.containerservice import ContainerServiceClient # needed to create client containerservice_client = ContainerServiceClient(get_credentials(), SUBSCRIPTION) # same way like you would for the resource_management_client parameters = ManagedCluster( location=location, dns_prefix=dns_prefix, kubernetes_version=kubernetes_version, tags=stags, service_principal_profile=service_principal_profile, # this needs to be a model as well agent_pool_profiles=agent_pools, # this needs to be a model as well linux_profile=linux_profile, # this needs to be a model as well enable_rbac=True ) containerservice_client.managed_clusters.create_or_update(resource_group, name, parameters) </code></pre>
<p>I am running a Docker image using Kubernetes. I would like to pass to the container the digest of the image being used. So that the code inside the container can use this for debugging/logging. The problem is that I do not seem to be able to find a way to do this without hard-coding the image digest into the pod configuration.</p> <p>Is there a way to define pod configuration way so that it dynamically passes the digest as environment variable for whichever version of Docker image it ends up using?</p>
<p>Whatever Kubernetes happens to know can be injected using the <a href="https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/" rel="nofollow noreferrer">downward API</a>. That set of data is in <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.13/#pod-v1-core" rel="nofollow noreferrer">the API reference for Pod objects</a>.</p> <p>It looks like this should work:</p> <pre><code>env: - name: DOCKER_IMAGE_ID valueFrom: fieldRef: fieldPath: status.containerStatuses[0].imageID </code></pre> <p>You may prefer to inject the <code>spec.containers[0].image</code> name, which will be easier to understand after the fact. If you're using a tool like <a href="https://helm.sh/" rel="nofollow noreferrer">Helm</a> to generate the configuration, you can also use its values system:</p> <pre><code>image: {{ .Values.image }}:{{ .Values.tag }} env: - name: DOCKER_IMAGE_TAG value: {{ .Values.tag }} </code></pre>
<p>I just created AKS and created the sample service.</p> <pre><code>kubectl get service azure-vote-front --watch NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE azure-vote-front LoadBalancer 10.0.1.71 13.71.XXX.XXX 80:31619/TCP 1h </code></pre> <p>I want to access 13.71.xxx.xxx:31619, but it just keeps waiting and never returns.</p>
<p>You just need to access the address <code>13.71.xxx.xxx</code> through the browser without the port 31619. In the <code>80:31619/TCP</code> column, 80 is the port exposed on the load balancer's external IP, while 31619 is the NodePort opened on the cluster nodes themselves, which is not reachable through the load balancer IP.</p>
<p>I have an ingress defined as:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: foo-ingress annotations: kubernetes.io/ingress.global-static-ip-name: zaz-address kubernetes.io/ingress.allow-http: "false" ingress.gcp.kubernetes.io/pre-shared-cert: foo-bar-com spec: rules: - host: foo.bar.com http: paths: - path: /zaz/* backend: serviceName: zaz-service servicePort: 8080 </code></pre> <p>Then the service <code>zap-service</code> is a <code>nodeport</code> defined as:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: zaz-service namespace: default spec: clusterIP: 10.27.255.88 externalTrafficPolicy: Cluster ports: - nodePort: 32455 port: 8080 protocol: TCP targetPort: 8080 selector: app: zap sessionAffinity: None type: NodePort </code></pre> <p>The <code>nodeport</code> is successfully selecting the two pods behind it serving my service. I can see in the <code>GKE</code> services list that the <code>nodeport</code> has an IP that looks internal.</p> <p>When I check in the same interface the <code>ingress</code>, it also looks all fine, but serving zero pods.</p> <p>When I describe the <code>ingress</code> on the other hand I can see:</p> <pre><code>Rules: Host Path Backends ---- ---- -------- foo.bar.com /zaz/* zaz-service:8080 (&lt;none&gt;) </code></pre> <p>Which looks like the <code>ingress</code> is unable to resolve the service <code>IP</code>. What am I doing wrong here? I cannot access the service through the external domain name, I am getting an error <code>404</code>.</p> <p>How can I make the ingress translate the domain name <code>zaz-service</code> into the proper <code>IP</code> so it can redirect traffic there?</p>
<p>Seems like the wildcards in the path are <a href="https://github.com/kubernetes/kubernetes/issues/41881" rel="nofollow noreferrer">not supported yet</a>. Any reason why not using just the following in your case?</p> <pre><code>spec: rules: - host: foo.bar.com http: paths: - path: /zaz backend: serviceName: zaz-service servicePort: 8080 </code></pre>
<p>I am looking for suggestions on a Java API for Kubernetes with which I can deploy a docker image on kubernetes. My end goal is to be able to deploy a docker image on kubernetes programmatically using Java. My current way to deploy the docker image is using the cmd <code>kubectl create -f xxx.yaml</code>. </p> <p>I have been googling, but I am unable to find much information on the Java API for this matter. It doesn't seem that the kubernetes client can handle this either. I appreciate all the help. </p> <p>Thanks in advance</p>
<p>One tool in this space is the fabric8 Java client. You can use it to programmatically create and apply resources - <a href="https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-examples/src/main/java/io/fabric8/kubernetes/examples/FullExample.java" rel="nofollow noreferrer">https://github.com/fabric8io/kubernetes-client/blob/master/kubernetes-examples/src/main/java/io/fabric8/kubernetes/examples/FullExample.java</a></p> <p>Or to apply yaml from a file <a href="https://stackoverflow.com/questions/53501540/kubectl-apply-f-spec-yaml-equivalent-in-fabric8-java-api">kubectl apply -f &lt;spec.yaml&gt; equivalent in fabric8 java api</a></p>
<p>As I can see the below page, I can set up two or three hosts in one Ingress. <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting" rel="noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting</a></p> <p>But how do I add a new host to existing ingress? I tried the commands like apply or patch, but it didn't work.</p> <p>Is there anyone who know this solution?</p> <pre><code>kubectl patch -f sample-ingress.yml -p ' {"metadata": {"name": "sample-ingress"}, "spec": [{ "host": "39500000.sample.com", "http": {"paths": [{ "backend": {"serviceName": "39500000", "servicePort": 8080} }] }}] }' The Ingress "sample-ingress" is invalid: spec.backend.serviceName: Required value </code></pre>
<p>I personally prefer <strong>PATCH</strong> as my approach to add a new host to an existing kubernetes ingress.</p> <p>The command is going to look like this: <code>kubectl patch ingress my-ingress --type json --patch "$(cat patch.json)"</code></p> <p>where patch.json is</p> <pre><code>[ { "op" : "add" , "path" : "/spec/rules/-" , "value" : { "host": "evil.facebook.com", "http": { "paths": [ { "backend": { "serviceName": "tracker-app", "servicePort": 80 } } ] } } } ] </code></pre> <p>A few notes:</p> <ul> <li>my-ingress is the name of the deployed ingress in the cluster</li> <li>one can just paste the json inline instead of using the "$(cat patch.json)" trick</li> </ul> <p>Also, the main thing about this solution is that it <strong>leverages kubernetes' own capabilities</strong> to merge json according to <a href="http://erosb.github.io/post/json-patch-vs-merge-patch/" rel="noreferrer">these rules</a>.</p> <p>More information on patching from an official source can be <a href="https://kubernetes.io/docs/tasks/run-application/update-api-object-kubectl-patch/" rel="noreferrer">found here</a>.</p>
<p>I am currently trying to form a high availability Kubernetes cluster with 5 worker nodes and 3 masters on on-premise servers. I learned about the implementation of a high availability cluster by checking the documentation. I also understood the implementation of an HA cluster on AWS or Azure using the Load Balancer functionality from the respective cloud provider.</p> <p>My confusion is: when I am creating the same high availability Kubernetes cluster on my on-premise servers, how can I use the Load Balancer functionality in the implementation?</p>
<p>You can use keepalived to set up the load balancer for the masters in your on-premise setup. The keepalived daemon can be used to monitor services or systems and to automatically fail over to a standby if problems occur. One master holds the virtual IP as the active server, while the other two masters stay in backup mode.</p> <p>I have written a blog on how to set up a highly available Kubernetes cluster on premise. You can find it at the link below:</p> <p><a href="https://velotio.com/blog/2018/6/15/kubernetes-high-availability-kubeadm" rel="nofollow noreferrer">https://velotio.com/blog/2018/6/15/kubernetes-high-availability-kubeadm</a></p> <p>I used keepalived to set up the load balancer on my on-premise cluster in the above blog.</p>
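<p>To give a rough idea of how this ties into kubeadm (a sketch only; the virtual IP below is a placeholder that keepalived would manage): all nodes then talk to the control plane through the virtual IP rather than through an individual master's address, for example via <code>controlPlaneEndpoint</code> in the kubeadm configuration:</p> <pre><code>apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
# placeholder: the keepalived virtual IP (VIP) shared by the three master nodes
controlPlaneEndpoint: "192.168.1.100:6443"
</code></pre>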
<p>I have the following configuration to setup the cluster using <a href="https://rancher.com/docs/rancher/v2.x/en/installation/ha/kubernetes-rke/" rel="nofollow noreferrer">Rancher (RKE)</a>.</p> <p>rancher-config.yml</p> <pre><code>nodes: - address: 192.168.88.204 internal_address: 172.16.22.12 user: dockeruser role: [controlplane,worker,etcd] - address: 192.168.88.203 internal_address: 172.16.32.37 user: dockeruser role: [controlplane,worker,etcd] - address: 192.168.88.202 internal_address: 172.16.42.73 user: dockeruser role: [controlplane,worker,etcd] services: etcd: snapshot: true creation: 6h retention: 24h </code></pre> <p>According <a href="https://rancher.com/docs/rancher/v2.x/en/installation/requirements/" rel="nofollow noreferrer">Rancher Networking</a>, I already open the following port for all nodes(192.168.88.204, 192.168.88.203, 192.168.88.202) as firewall-services.</p> <p>node-firewall.xml</p> <pre><code>&lt;?xml version="1.0" encoding="utf-8"?&gt; &lt;service&gt; &lt;port port="2376" protocol="tcp"/&gt; &lt;port port="2379" protocol="tcp"/&gt; &lt;port port="2380" protocol="tcp"/&gt; &lt;port port="8472" protocol="udp"/&gt; &lt;port port="9099" protocol="tcp"/&gt; &lt;port port="10250" protocol="tcp"/&gt; &lt;port port="443" protocol="tcp"/&gt; &lt;port port="6443" protocol="tcp"/&gt; &lt;port port="8472" protocol="udp"/&gt; &lt;port port="6443" protocol="tcp"/&gt; &lt;port port="10254" protocol="tcp"/&gt; &lt;port port="30000-32767" protocol="tcp"/&gt; &lt;/service&gt; -&gt; commmend firewall-offline-cmd --new-service-from-file=node-firewall.xml --name=node-firewall firewall-cmd --reload firewall-cmd --add-service node-firewall </code></pre> <p>My RKE is installed on 192.168.88.151. For RKE -> </p> <pre><code>rancher-firewall.xml &lt;?xml version="1.0" encoding="utf-8"?&gt; &lt;service&gt; &lt;port port="80" protocol="tcp"/&gt; &lt;port port="433" protocol="tcp"/&gt; &lt;port port="22" protocol="tcp"/&gt; &lt;port port="2376" protocol="tcp"/&gt; &lt;port port="6443" protocol="tcp"/&gt; &lt;/service&gt; firewall-offline-cmd --new-service-from-file=rancher-firewall.xml --name=rancher-firewall firewall-cmd --reload firewall-cmd --add-service rancher-firewall </code></pre> <p>So, I run the following commend to up my <code>RKE</code></p> <pre><code>rke up --config ./rancher-config.yml </code></pre> <p>log is</p> <pre><code>[root@localhost ~]# rke up --config ./rancher-config.yml INFO[0000] Building Kubernetes cluster INFO[0000] [dialer] Setup tunnel for host [192.168.88.204] INFO[0000] [dialer] Setup tunnel for host [192.168.88.203] INFO[0000] [dialer] Setup tunnel for host [192.168.88.202] INFO[0001] [network] Deploying port listener containers INFO[0001] [network] Port listener containers deployed successfully INFO[0001] [network] Running etcd &lt;-&gt; etcd port checks INFO[0001] [network] Successfully started [rke-port-checker] container on host [192.168.88.202] INFO[0001] [network] Successfully started [rke-port-checker] container on host [192.168.88.204] INFO[0001] [network] Successfully started [rke-port-checker] container on host [192.168.88.203] FATA[0016] [network] Host [192.168.88.202] is not able to connect to the following ports: [172.16.22.12:2379, 172.16.22.12:2380, 172.16.32.37:2379, 172.16.32.37:2380, 172.16.42.73:2380, 172.16.42.73:2379]. Please check network policies and firewall rules </code></pre> <p>My question is how to open the port for the <code>internal_address</code> for all nodes in <code>kubernates</code> cluster?</p>
<p>Maybe it is a lack of experience on my part; I am just sharing what I found. <code>internal_address</code> has to be the IP address of the docker gateway. To find the docker IP address for each node (192.168.88.204, 192.168.88.203, 192.168.88.202):</p> <p>Run the command <code>docker network ls</code>. You should get network information like the following.</p> <pre><code>NETWORK ID NAME DRIVER SCOPE aa13d08f2676 bridge bridge local 02eabe818790 host host local 1e5bb430d790 none null local </code></pre> <p>Then run the command <code>docker network inspect bridge</code> to get the IP address of <code>bridge</code>. You will get output similar to the following. </p> <pre><code>[ { "Name": "bridge", "Id": "aa13d08f2676e40df5a82521fccc4e402ef6b04f82bcd414cd065a1859b3799d", "Created": "2019-01-31T21:32:02.381082005-05:00", "Scope": "local", "Driver": "bridge", "EnableIPv6": false, "IPAM": { "Driver": "default", "Options": null, "Config": [ { "Subnet": "172.17.0.0/16", "Gateway": "172.17.0.1" } ] }, .... ... .. . ] </code></pre> <p>Then configure <code>rancher-config.yml</code> as below and run <code>rke up --config ./rancher-config.yml</code> again.</p> <pre><code>nodes: - address: 192.168.88.204 internal_address: 172.17.0.1 ... ... .. .. </code></pre>
<p>I have a kubernetes cluster on Amazon EKS and from time to time there appear some pods with the state <code>Unknown</code>. I read that this is because my pods have no memory limits set and after changing that no new pods with that state have appeared. But I tried to remove the existing ones using <code>kubectl delete pod &lt;pod_name&gt;</code> and it didn't work. How should I delete them?</p>
<p>You can force delete the pod like this:</p> <pre><code>kubectl delete pod &lt;pod_name&gt; --grace-period=0 --force </code></pre>
<p><strong>The Scenario:</strong> I have deployed a service using a helm chart; I can see my service, hpa, deployment, pods etc. In my hpa setting the min pod count is set to 1. I can see my Pod is running and able to handle service requests.</p> <p>After a while I executed "kubectl scale deploy --replicas=0". Once I ran the above command I could see my pod got deleted (although the hpa min pod setting was set to 1). I was expecting that after a while hpa would scale up to the min pod count, i.e. 1. However I don't see that happening; I have waited more than an hour and no new pod was created by hpa. I have also tried sending a request to my Kubernetes service, thinking that now hpa will scale up the pod since there is no pod to serve the request, however the hpa doesn't seem to do that, and I got a response that my Service is not available.</p> <p>Here is what I can see in kubectl get hpa:</p> <pre><code>NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE test Deployment/xxxx /1000% 1 4 0 1h </code></pre> <p><strong>Interestingly I found that hpa scales down quickly:</strong> when I execute "kubectl scale deploy --replicas=2" (please note that the min count in hpa is 1), I can see 2 pods get created quickly, however within 5 mins 1 pod gets removed by hpa.</p> <p>Is this the expected behavior of Kubernetes (particularly hpa)? As in, if we delete all pods by executing "kubectl scale deploy --replicas=0": a) the hpa won't block reducing the replica count below the min pod count configured (in the hpa config), b) the hpa won't scale up (based on the hpa sync cycle) to the min number of pods as configured, and essentially c) until we redeploy or execute another round of "kubectl scale deploy" to update the replica count there will be no pods for this service.</p> <p>Is this expected behavior or a (possible) bug in the Kubernetes codebase? I am using Kubernetes version 1.8.</p>
<p>That was a great observation. I was going through the documentation of HPA and came across the mathematical formula used by HPA to scale pods, and it looks like this:</p> <pre><code>TargetNumOfPods = ceil(sum(CurrentPodsCPUUtilization) / Target) </code></pre> <p>In your case, the current pod utilization is zero, as your pod count is zero, so mathematically this equation results in zero. For example, with a target of 50% and two pods at 80% utilization each, the formula gives ceil(160 / 50) = 4 target pods; with zero pods the sum is 0 and the target stays 0. That is the reason HPA does not work if the pod count is zero.</p> <p><strong>a:</strong> HPA should not block manual scaling of pods, as it gets triggered only by resource metrics (cpu, memory etc). Once you do scaling using "kubectl scale" or by any other means, HPA comes into the picture depending on the min replicas, max replicas and average utilization values.</p> <p><strong>b:</strong> HPA scales up to the min number of replicas if the current count is non-zero. I tried it and it works perfectly fine.</p> <p><strong>c:</strong> Yes, unless you bring the replica count to a non-zero value, HPA will not work, so you have to scale up to some non-zero value.</p> <p>Hope this answers your doubts about HPA.</p>
<p>I would like to provide DAGs to all Kubernetes airflow pods (web, scheduler, workers) via a persistent volume,</p> <pre><code>kubectl create -f pv-claim.yaml </code></pre> <p>pv-claim.yaml containing:</p> <pre><code>kind: PersistentVolumeClaim apiVersion: v1 metadata: name: airflow-pv-claim annotations: pv.beta.kubernetes.io/gid: "1000" pv.beta.kubernetes.io/uid: "1000" spec: storageClassName: standard accessModes: - ReadWriteMany resources: requests: storage: 1Gi </code></pre> <p>The deployment command is then:</p> <pre><code>helm install --namespace my_name --name "airflow" stable/airflow --values ~my_name/airflow/charts/airflow/values.yaml </code></pre> <p>In the chart stable/airflow, values.yaml also allows for specification of persistence:</p> <pre><code>persistence: enabled: true existingClaim: airflow-pv-claim accessMode: ReadWriteMany size: 1Gi </code></pre> <p>But if I do</p> <pre><code>kubectl exec -it airflow-worker-0 -- /bin/bash touch dags/hello.txt </code></pre> <p><strong>I get a permission denied error.</strong></p> <p>I have tried hacking the airflow chart to set up an initContainer to chown dags/:</p> <pre><code>command: ["sh", "-c", "chown -R 1000:1000 /dags"] </code></pre> <p>which is working for all but the workers (because they are created by flower?), as suggested at <a href="https://serverfault.com/a/907160/464205">https://serverfault.com/a/907160/464205</a></p> <p>I have also seen talk of fsGroup etc. - see e.g. <a href="https://stackoverflow.com/questions/50156124/kubernetes-nfs-persistent-volumes-permission-denied">Kubernetes NFS persistent volumes permission denied</a></p> <p>I am trying to avoid editing the airflow charts (which seems to require hacks to at least two deployments-*.yaml files, plus one other), but perhaps this is unavoidable.</p> <p><strong>Punchline:</strong></p> <p><strong>What is the easiest way to provision DAGs through a persistent volume to all airflow pods running on Kubernetes, with the correct permissions?</strong></p> <p>See also:</p> <p><a href="https://stackoverflow.com/questions/50767186/persistent-volume-atached-to-k8s-pod-group">Persistent volume atached to k8s pod group</a></p> <p><a href="https://stackoverflow.com/questions/50156124/kubernetes-nfs-persistent-volumes-permission-denied">Kubernetes NFS persistent volumes permission denied</a> [not clear to me how to integrate this with the airflow helm charts]</p> <p><a href="https://stackoverflow.com/questions/46974105/kubernetes-setting-custom-permissions-file-ownership-per-volume-and-not-per-p?noredirect=1&amp;lq=1">Kubernetes - setting custom permissions/file ownership per volume (and not per pod)</a> [non-detailed, non-airflow-specific]</p>
<p>It turns out you do, I think, have to edit the airflow charts, by adding the following block in <code>deployments-web.yaml</code> and <code>deployments-scheduler.yaml</code> under <code>spec.template.spec</code>:</p> <pre><code>kind: Deployment spec: template: spec: securityContext: runAsUser: 1000 runAsGroup: 1000 fsGroup: 1000 fsUser: 1000 </code></pre> <p>This allows one to get dags into airflow using e.g.</p> <pre><code>kubectl cp my_dag.py my_namespace/airflow-worker-0:/usr/local/airflow/dags/ </code></pre>
<p>I am deploying sample springboot application using fabric8 maven deploy. The build fails with SSLHandshakeException.</p> <pre><code>F8: Cannot access cluster for detecting mode: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target Failed to execute goal io.fabric8:fabric8-maven-plugin:3.1.80.redhat-000010:build (default) on project fuse-camel-sb-rest: Execution default of goal io.fabric8:fabric8-maven-plugin:3.1.80.redhat-000010:build failed: An error has occurred. sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target -&gt; [Help 1] org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal io.fabric8:fabric8-maven-plugin:3.1.80.redhat-000010:build (default) on project fuse-camel-sb-rest: Execution default of goal io.fabric8:fabric8-maven-plugin:3.1.80.redhat-000010:build failed: An error has occurred. </code></pre> <p>So, I downloaded the public certificate from the Openshift webconsole and added it to JVM using </p> <pre><code>C:\...\jdk.\bin&gt;keytool -import -alias rootcert -file C:\sample\RootCert.cer -keystore cacerts </code></pre> <p>and got message that its successfully added to the keystore and the list command shows the certificates added.</p> <pre><code> C:\...\jdk.\bin&gt;keytool -list -keystore cacerts Enter keystore password: Keystore type: JKS Keystore provider: SUN Your keystore contains 2 entries rootcert, May 18, 2018, trustedCertEntry, Certificate fingerprint (SHA1): XX:XX:XX:.......... </code></pre> <p>But the mvn:fabric8 deploy build still fails with the same exception.</p> <p>Can someone shed some light on this issue? Am I missing anything?</p>
<p>Jack's solution works fine on Windows. I exported the certificate from the web browser while I was on the OpenShift web console. Then I added the cert to cacerts in $JAVA_HOME/jre/lib/security:</p> <pre><code>keytool -import -alias my.alias -file C:\sample\RootCert.cer -keystore cacerts </code></pre>
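<p>One hedged addition: if the import still doesn't seem to take effect, it is worth checking that Maven is running on the same JDK/JRE whose <code>cacerts</code> you updated. You can also point the Maven JVM at the truststore explicitly (the paths below assume a standard JDK layout and the default truststore password <code>changeit</code>; the goal name is just an example):</p> <pre><code>rem Windows (cmd) example; adjust the path to your JDK
set MAVEN_OPTS=-Djavax.net.ssl.trustStore=%JAVA_HOME%\jre\lib\security\cacerts -Djavax.net.ssl.trustStorePassword=changeit
mvn fabric8:deploy
</code></pre>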
<p>I'm trying to deploy a laravel application in kubernetes at Google Cloud Platform.</p> <p>I followed couple of tutorials and was successful trying them locally on a docker VM.</p> <p><a href="https://learnk8s.io/blog/kubernetes-deploy-laravel-the-easy-way" rel="nofollow noreferrer">https://learnk8s.io/blog/kubernetes-deploy-laravel-the-easy-way</a></p> <p><a href="https://blog.cloud66.com/deploying-your-laravel-php-applications-with-cloud-66/" rel="nofollow noreferrer">https://blog.cloud66.com/deploying-your-laravel-php-applications-with-cloud-66/</a></p> <p>But when tried to deploy in kubernetes using an ingress to assign a domain name to the application. I keep getting the 502 bad gateway page.</p> <p>I'm using a nginx ingress controller with image k8s.gcr.io/nginx-ingress-controller:0.8.3 and my ingress is as following</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: ingress annotations: kubernetes.io/ingress.class: "nginx" spec: tls: - hosts: - domainname.com secretName: sslcertificate rules: - host: domain.com http: paths: - backend: serviceName: service servicePort: 80 path: / </code></pre> <p>this is my application service </p> <pre><code>apiVersion: v1 kind: Service metadata: name: service labels: name: demo version: v1 spec: ports: - port: 80 targetPort: 8080 protocol: TCP selector: name: demo type: NodePort </code></pre> <p>this is my ingress controller</p> <pre><code>apiVersion: v1 kind: Service metadata: name: default-http-backend labels: k8s-app: default-http-backend spec: ports: - port: 80 targetPort: 8080 protocol: TCP name: http selector: k8s-app: default-http-backend --- apiVersion: v1 kind: ReplicationController metadata: name: default-http-backend spec: replicas: 1 selector: k8s-app: default-http-backend template: metadata: labels: k8s-app: default-http-backend spec: terminationGracePeriodSeconds: 60 containers: - name: default-http-backend # Any image is permissable as long as: # 1. It serves a 404 page at / # 2. 
It serves 200 on a /healthz endpoint image: gcr.io/google_containers/defaultbackend:1.0 livenessProbe: httpGet: path: /healthz port: 8080 scheme: HTTP initialDelaySeconds: 30 timeoutSeconds: 5 ports: - containerPort: 8080 resources: limits: cpu: 10m memory: 20Mi requests: cpu: 10m memory: 20Mi --- apiVersion: v1 kind: ReplicationController metadata: name: nginx-ingress-controller labels: k8s-app: nginx-ingress-lb spec: replicas: 1 selector: k8s-app: nginx-ingress-lb template: metadata: labels: k8s-app: nginx-ingress-lb name: nginx-ingress-lb spec: terminationGracePeriodSeconds: 60 containers: - image: gcr.io/google_containers/nginx-ingress-controller:0.8.3 name: nginx-ingress-lb imagePullPolicy: Always readinessProbe: httpGet: path: /healthz port: 10254 scheme: HTTP livenessProbe: httpGet: path: /healthz port: 10254 scheme: HTTP initialDelaySeconds: 10 timeoutSeconds: 1 # use downward API env: - name: POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace ports: - containerPort: 80 hostPort: 80 - containerPort: 443 hostPort: 443 # we expose 18080 to access nginx stats in url /nginx-status # this is optional - containerPort: 18080 hostPort: 18080 args: - /nginx-ingress-controller - --default-backend-service=$(POD_NAMESPACE)/default-http-backend </code></pre> <p>and here is my laravel application deployment </p> <pre><code>apiVersion: extensions/v1beta1 kind: Deployment metadata: name: demo-rc labels: name: demo version: v1 spec: strategy: type: Recreate template: metadata: labels: name: demo version: v1 spec: containers: - image: gcr.io/projectname/laravelapp:vx name: app-pod ports: - containerPort: 8080 </code></pre> <p>I tried to add the domain entry to the hosts file but with no luck !! is there a specific configurations I have to add to the configmap.yaml file for the nginx ingress controller? </p>
<p>In short, to be able to reach your application via external domain name (singapore.smartlabplatform.com), you need to create a A DNS record for GCP L4 Load Balancer's external IP address (this is in other words EXTERNAL-IP of your default nginx-ingress-controller's Service), here seen as pending:</p> <pre><code>==&gt; v1/Service NAME TYPE CLUSTER-IP EXTERNAL-IP nginx-ingress-controller LoadBalancer 10.7.248.226 pending nginx-ingress-default-backend ClusterIP 10.7.245.75 none </code></pre> <p>how to do this? it's explained on the GKE tutorials page <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip" rel="nofollow noreferrer">here</a>. </p> <p>In the current state of your environment you can only reach your application in two ways:</p> <ol> <li><p>From outside, via Load Balancer EXTERNAL-IP:</p> <p><a href="https://i.stack.imgur.com/q8nI8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/q8nI8.png" alt="enter image description here"></a></p></li> <li><p>From inside, your Kubernetes cluster using <strong>laravel-kubernetes-demo</strong> service dns name:</p></li> </ol> <blockquote> <p>$ curl laravel-kubernetes-demo.default.svc.cluster.local </p> <pre><code>&lt;title&gt;Laravel Kubernetes Demo :: LearnK8s&lt;/title&gt; </code></pre> </blockquote> <p>If you want all that magic, like the automatic creation of DNS records, happen along with appearance of <code>host: domain.com</code> in your ingress resource spec, you should use <a href="https://github.com/kubernetes-incubator/external-dns" rel="nofollow noreferrer">external-dns</a> (makes Kubernetes resources discoverable via public DNS servers), and <a href="https://github.com/kubernetes-incubator/external-dns/blob/master/docs/tutorials/gke.md" rel="nofollow noreferrer">here</a> is the tutorial on how to set it up specifically for GKE.</p>
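<p>As a rough sketch of the DNS step only (the zone name and IP are placeholders; the exact flow is what the linked GKE tutorial walks through): once the ingress controller's Service has an EXTERNAL-IP, you create an A record for <code>singapore.smartlabplatform.com</code> pointing at it, for example with Cloud DNS:</p> <pre><code># see the external IPs currently assigned in the project
gcloud compute addresses list

# add an A record for the domain in your managed zone (placeholder zone and IP)
gcloud dns record-sets transaction start --zone=my-zone
gcloud dns record-sets transaction add --zone=my-zone \
    --name="singapore.smartlabplatform.com." --type=A --ttl=300 "203.0.113.10"
gcloud dns record-sets transaction execute --zone=my-zone
</code></pre>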
<p>I have configured a readinessProbe on my pod with a binary execution check, which connects to my running server (in the same container) and retrieves some health information (like ready for traffic).</p> <p>Configured as a readinessProbe, the binary fails to contact my server and get the required info. It connects on a TCP socket. But it works correctly when I configured it as a livenessProbe.</p> <p>Configuration. To make it work, I only changed the type from readinessProbe to livenessProbe.</p> <pre><code>"readinessProbe": { "exec": { "command": [ "/opt/bin/ready_probe", "--check_ready_traffic", "--service=myServer-service" ] }, "initialDelaySeconds": 60, "timeoutSeconds": 5 }, </code></pre> <p>The service is for the server, to register it's host and port. This is OK.</p> <p>Version used: kubernetes v1.1.0-origin-1107-g4c8e6f4</p> <p>Thank you.</p>
<p><em>From the information provided, I can't determine conclusively whether the probe passes or fails in your case. Kubernetes can be kind of opaque if you don't know what to monitor, so it's easy to imagine someone misinterpreting the results of your experiment.</em></p> <p>There is no difference in the execution of the two types of probes -- only the consequences differ:</p> <ul> <li>liveness failure: reboots the container, <a href="https://stackoverflow.com/a/36846812">eventually</a></li> <li>readiness failure: disables communication</li> </ul> <p>Depending on your container, a liveness failure might be relatively harmless -- you might not even notice it.</p> <p>However, when you use a readiness probe, communication with your container will be disabled until <em>after</em> the probe passes. This means that the simple act of enabling a readiness proble with <code>initialDelaySeconds: 60</code> will prevent a service from connecting with your pod for the first minute -- regardless of the state of the associated container. This delay could have cascading consequences if dependent pods/services aren't configured to handle it.</p> <p>For a liveness probe, it is <a href="https://cloud.google.com/blog/products/gcp/kubernetes-best-practices-setting-up-health-checks-with-readiness-and-liveness-probes" rel="nofollow noreferrer">"very important"</a> to configure <code>initialDelaySeconds</code> (as was done in the question). For a readiness probe, this may not be so important -- and you might prefer it to be zero (<a href="https://stackoverflow.com/q/48572691">the default</a>) in order to allow for the possibility of a faster startup.</p> <hr> <p>Here's the code: <a href="https://github.com/kubernetes/kubernetes/tree/master/pkg/kubelet/prober" rel="nofollow noreferrer">https://github.com/kubernetes/kubernetes/tree/master/pkg/kubelet/prober</a></p>
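<p>To make the contrast concrete, here is a minimal sketch reusing the probe command from the question (the delays and thresholds are illustrative): the two probes run the same check, but a liveness failure eventually restarts the container, while a readiness failure only takes the pod out of the service endpoints; keeping <code>initialDelaySeconds</code> at 0 on the readiness probe avoids the artificial one-minute window discussed above.</p> <pre><code>"readinessProbe": {
  "exec": {
    "command": ["/opt/bin/ready_probe", "--check_ready_traffic", "--service=myServer-service"]
  },
  "initialDelaySeconds": 0,
  "periodSeconds": 5,
  "failureThreshold": 3
},
"livenessProbe": {
  "exec": {
    "command": ["/opt/bin/ready_probe", "--check_ready_traffic", "--service=myServer-service"]
  },
  "initialDelaySeconds": 60,
  "periodSeconds": 10,
  "timeoutSeconds": 5
}
</code></pre>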
<p>I would like to create several ConfigMaps with one helm template. Therefore I have created a folder for the configs/values and one config file per ConfigMap. I have read the helm template guide and found nothing helpful for my problem. Maybe I misunderstood the possibilities of helm. </p> <p>Afterwards there is a possibility to create one configmap from several files:</p> <pre><code>apiVersion: v1 kind: ConfigMap metadata: name: {{ .Release.Name }}-configmap data: {{- $files := .Files }} {{- range tuple "file1.yaml" "file2.yaml" }} {{ $files.Get . }} {{- end }} </code></pre> <p>Any recommendations would be helpful, Thanks,</p> <p>Best wishes</p>
<p>Thank you for the response. I have something different in mind; my new code makes it a little bit clearer. </p> <pre><code>{{ range $k, $v := .Values.configs }} apiVersion: v1 kind: ConfigMap metadata: name: configmap namespace: {{ $.Values.namespace }} labels: app: "{{base $v}}" data: key: {{$k}} value: {{$v}} {{ $.Files.Get $v }} {{ end }} </code></pre> <p>I have a loop over the ConfigMaps. My values.yaml looks like</p> <pre><code>configs name: configs/file1 name: configs/file2 </code></pre> <p>The values are in a separate folder configs, one file per configmap. </p> <p>The current problem is that the result is one ConfigMap with the values of file2. I would expect two ConfigMaps. What is wrong here in my template? </p> <p>Thank you very much.</p>
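<p>Two things worth checking in a loop like this (hedged, since the full chart and values file aren't shown): rendering several Kubernetes objects from one template generally requires a <code>---</code> document separator inside the range, and each ConfigMap needs a unique <code>metadata.name</code>, otherwise later iterations overwrite the earlier ones. A minimal variant of the loop above illustrating both points:</p> <pre><code>{{- range $k, $v := .Values.configs }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  # use the file's base name to make each ConfigMap name unique
  name: configmap-{{ base $v }}
  namespace: {{ $.Values.namespace }}
data:
  key: {{ $k }}
  value: {{ $v }}
  content: |-
{{ $.Files.Get $v | indent 4 }}
{{- end }}
</code></pre>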
<p>First for my question, I need to talk a bit about my enviroments:</p> <ol> <li><p>Google Basic Setup: <em>1x</em> f1-micro instance with 3 nodes</p></li> <li><p>Kubernetes Setup: nginx-ingress-controller, cert-manager, <em>1</em>-backend service with deployment, <em>1</em>-frontend service with deployment.</p></li> <li><p>Mongo Atlas Setup: <em>3</em>-replicaSet</p></li> </ol> <p>Setup should not be a prolbem, but It might give some scenario feelings.</p> <p>OK, Let comes to the issue, my Nodejs backend use the following url to connect to <strong>MonglAtlas database</strong>:</p> <pre><code>MONGODB_URI=mongodb+srv://username:[email protected]/test?retryWrites=true </code></pre> <p>IP Whitelist is my static public IP that use nginx-ingress to route. Let me define <code>my.domain</code> to my frontend webpage, and <code>my.domain/api/</code> to backend api. </p> <p>Everything is fine when IP Whitelist is <strong>ALLOW ACCESS FROM ANYWHERE</strong>, and backend could connect to MongoAtlas DB for no doubt.</p> <p>But when I delete that option, and add the IP that matched with <code>my.domain</code> (double check, I ping <code>my.domain</code> is absolutely same IP), and then backend could not find the database with following error:</p> <pre><code>MongoNetworkError: connection 4 to closed https.... </code></pre> <p>If there is something missing infos, please let me know. Any advice is appreciated!</p> <p>Another suspected is that I got <em>1</em> static IP and <em>3</em> ephemeral IP in VPC network. I guess It means 3 node with loadbalancer IP. If the backend use ephemeral IP to connect to MongoAtlas backend, I must check the pod that in which nodes and make that node static, but this make no sense for Kubernetes. I hope there is another solution :(</p>
<p>The solution I used is <a href="https://cloud.google.com/solutions/using-a-nat-gateway-with-kubernetes-engine" rel="nofollow noreferrer">NAT</a>. The concept is to set up <em>one</em> Google Compute Engine instance as a NAT gateway and map all the egress traffic to a static IP. Most importantly, none of the steps need manual configuration beyond what the documentation describes; just follow it and everything should work as expected.</p> <p>If there is a <strong>STATIC_ADDRESS QUOTA</strong> problem, you can change your ZONE and REGION to any area where you still have quota. In my case, us-central for the NAT and us-west for the original service. </p>
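<p>For orientation only (a sketch, not necessarily the exact commands from the linked solution; instance, network and tag names are placeholders): the core of this pattern is a static external IP on the NAT instance, which is what you whitelist in MongoDB Atlas, plus a route that sends outbound traffic from tagged nodes through that instance:</p> <pre><code># reserve a static external IP for the NAT gateway instance
gcloud compute addresses create nat-ip --region us-central1

# route egress traffic from nodes tagged "no-ip" through the NAT gateway instance
gcloud compute routes create no-ip-internet-route \
    --network default \
    --destination-range 0.0.0.0/0 \
    --next-hop-instance nat-gateway \
    --next-hop-instance-zone us-central1-a \
    --tags no-ip \
    --priority 800
</code></pre>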
<p>I have multiple kvm nodes from different network. All these nodes has two iface <code>eth0: 10.0.2.15/24</code>, <code>eth1: 10.201.(14|12|11).0/24</code> and few manual routes between dc.</p> <pre><code>root@k8s-hv09:~# ip r default via 10.0.2.2 dev eth0 proto dhcp src 10.0.2.15 metric 100 10.0.2.0/24 dev eth0 proto kernel scope link src 10.0.2.15 10.0.2.2 dev eth0 proto dhcp scope link src 10.0.2.15 metric 100 10.201.12.0/24 dev eth1 proto kernel scope link src 10.201.12.179 10.201.14.0/24 via 10.201.12.2 dev eth1 proto static 10.201.11.0/24 via 10.201.12.2 dev eth1 proto static </code></pre> <p>Description for all nodes</p> <pre><code>Ubuntu 16.04/18.04 Kubernetes 1.13.2 Kubernetes-cni 0.6.0 docker-ce 18.06.1 </code></pre> <p>Master node(k8s-hv06)</p> <pre><code>apiVersion: kubeadm.k8s.io/v1beta1 certificatesDir: /etc/kubernetes/pki clusterName: kubernetes controlPlaneEndpoint: 10.201.14.176:6443 controllerManager: {} dns: type: CoreDNS etcd: external: caFile: "" certFile: "" endpoints: - http://10.201.14.176:2379 - http://10.201.12.180:2379 - http://10.201.11.171:2379 keyFile: "" imageRepository: k8s.gcr.io kind: ClusterConfiguration kubernetesVersion: v1.13.2 networking: dnsDomain: cluster.local podSubnet: 10.244.0.0/16 serviceSubnet: 10.96.0.0/12 scheduler: {} </code></pre> <p>Flannel v0.10.0 was used with rbac and additional arg --iface=eth1. One or more master nodes working fine.</p> <pre><code>root@k8s-hv06:~# kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-86c58d9df4-b4tf9 1/1 Running 2 23h kube-system coredns-86c58d9df4-h6nq8 1/1 Running 2 23h kube-system kube-apiserver-k8s-hv06 1/1 Running 3 23h kube-system kube-controller-manager-k8s-hv06 1/1 Running 5 23h kube-system kube-flannel-ds-amd64-rsmhj 1/1 Running 0 21h kube-system kube-proxy-s5n8l 1/1 Running 3 23h kube-system kube-scheduler-k8s-hv06 1/1 Running 4 23h </code></pre> <p>But I cant add any worker node to cluster. 
For example, I have clear installation Ubuntu 18.04 with docker-ce, kubeadm, kubelet</p> <pre><code>root@k8s-hv09:~# dpkg -l | grep -E 'kube|docker' | awk '{print $1,$2,$3}' hi docker-ce 18.06.1~ce~3-0~ubuntu hi kubeadm 1.13.2-00 hi kubectl 1.13.2-00 hi kubelet 1.13.2-00 ii kubernetes-cni 0.6.0-00 </code></pre> <p>and I'm trying to add worker node(k8s-hv09) to cluster</p> <pre><code>root@k8s-hv06:~# kubectl get nodes NAME STATUS ROLES AGE VERSION k8s-hv06 Ready master 23h v1.13.2 k8s-hv09 Ready &lt;none&gt; 31s v1.13.2 root@k8s-hv06:~# kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-86c58d9df4-b4tf9 1/1 Running 2 23h kube-system coredns-86c58d9df4-h6nq8 1/1 Running 2 23h kube-system kube-apiserver-k8s-hv06 1/1 Running 3 23h kube-system kube-controller-manager-k8s-hv06 1/1 Running 5 23h kube-system kube-flannel-ds-amd64-cqw5p 0/1 CrashLoopBackOff 3 113s kube-system kube-flannel-ds-amd64-rsmhj 1/1 Running 0 22h kube-system kube-proxy-hbnpq 1/1 Running 0 113s kube-system kube-proxy-s5n8l 1/1 Running 3 23h kube-system kube-scheduler-k8s-hv06 1/1 Running 4 23h </code></pre> <p><code>cni0</code> and <code>flannel.1</code> didn't create and connection to master node can't be established.</p> <pre><code>root@k8s-hv09:~# ip a | grep -E '(flannel|cni|cbr|eth|docker)' 2: eth0: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether e2:fa:99:0d:3b:05 brd ff:ff:ff:ff:ff:ff inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0 3: eth1: &lt;BROADCAST,MULTICAST,UP,LOWER_UP&gt; mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether c6:da:44:d9:2e:15 brd ff:ff:ff:ff:ff:ff inet 10.201.12.179/24 brd 10.201.12.255 scope global eth1 4: docker0: &lt;NO-CARRIER,BROADCAST,MULTICAST,UP&gt; mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:30:71:67:92 brd ff:ff:ff:ff:ff:ff inet 172.172.172.2/24 brd 172.172.172.255 scope global docker0 </code></pre> <pre><code>root@k8s-hv06:~# kubectl logs kube-flannel-ds-amd64-cqw5p -n kube-system -c kube-flannel I0129 13:02:09.244309 1 main.go:488] Using interface with name eth1 and address 10.201.12.179 I0129 13:02:09.244498 1 main.go:505] Defaulting external address to interface address (10.201.12.179) E0129 13:02:09.246907 1 main.go:232] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-amd64-cqw5p': Get https://10.96.0.1:443/api/v1/namespaces/kube-system/pods/kube-flannel-ds-amd64-cqw5p: dial tcp 10.96.0.1:443: getsockopt: connection refused </code></pre> <pre><code>root@k8s-hv09:~# docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 64a9b21607cb quay.io/coreos/flannel "cp -f /etc/kube-fla…" 23 minutes ago Exited (0) 23 minutes ago k8s_install-cni_kube-flannel-ds-amd64-4k2dt_kube-system_b8f510e3-23c7-11e9-85a5-1a05eef25a13_0 2e0145137449 f0fad859c909 "/opt/bin/flanneld -…" About a minute ago Exited (1) About a minute ago k8s_kube-flannel_kube-flannel-ds-amd64-4k2dt_kube-system_b8f510e3-23c7-11e9-85a5-1a05eef25a13_9 90271ee02f68 k8s.gcr.io/kube-proxy "/usr/local/bin/kube…" 23 minutes ago Up 23 minutes k8s_kube-proxy_kube-proxy-6zgjq_kube-system_b8f50ef6-23c7-11e9-85a5-1a05eef25a13_0 b6345e9d8087 k8s.gcr.io/pause:3.1 "/pause" 23 minutes ago Up 23 minutes k8s_POD_kube-proxy-6zgjq_kube-system_b8f50ef6-23c7-11e9-85a5-1a05eef25a13_0 dca408f8a807 k8s.gcr.io/pause:3.1 "/pause" 23 minutes ago Up 23 minutes k8s_POD_kube-flannel-ds-amd64-4k2dt_kube-system_b8f510e3-23c7-11e9-85a5-1a05eef25a13_0 
</code></pre> <p>I see the command <code>/opt/bin/flanneld --iface=eth1 --ip-masq --kube-subnet-mgr</code> running on the worker node, but it terminates after the k8s_install-cni_kube-flannel-ds-amd64 container stops. The file <code>/etc/cni/net.d/10-flannel.conflist</code> and the directory <code>/opt/cni/bin</code> are present.</p> <p>I don't understand the reason. If I add a new master node to the cluster, it works fine.</p> <pre><code>root@k8s-hv06:~# kubectl get nodes NAME STATUS ROLES AGE VERSION k8s-hv01 Ready master 17s v1.13.2 k8s-hv06 Ready master 22m v1.13.2 k8s-hv09 Ready &lt;none&gt; 6m22s v1.13.2 root@k8s-hv06:~# kubectl get pods --all-namespaces NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-86c58d9df4-b8th2 1/1 Running 0 23m kube-system coredns-86c58d9df4-hmm8q 1/1 Running 0 23m kube-system kube-apiserver-k8s-hv01 1/1 Running 0 2m16s kube-system kube-apiserver-k8s-hv06 1/1 Running 0 23m kube-system kube-controller-manager-k8s-hv01 1/1 Running 0 2m16s kube-system kube-controller-manager-k8s-hv06 1/1 Running 0 23m kube-system kube-flannel-ds-amd64-92kmc 0/1 CrashLoopBackOff 6 8m20s kube-system kube-flannel-ds-amd64-krdgt 1/1 Running 0 2m16s kube-system kube-flannel-ds-amd64-lpgkt 1/1 Running 0 10m kube-system kube-proxy-7ck7f 1/1 Running 0 23m kube-system kube-proxy-nbkvg 1/1 Running 0 8m20s kube-system kube-proxy-nvbcw 1/1 Running 0 2m16s kube-system kube-scheduler-k8s-hv01 1/1 Running 0 2m16s kube-system kube-scheduler-k8s-hv06 1/1 Running 0 23m </code></pre> <p>But not a worker node.</p> <p>Update:</p> <p>I don't have an issue connecting to the API server. My issue is the two interfaces (<code>cni</code>, <code>flannel</code>); without them there is no sync between the master and worker nodes. OK, let's take one additional node and add it to the cluster. If I use <code>kubeadm init</code> with my config file, everything works fine and the flannel plugin's interfaces are present. Now let's run <code>kubeadm reset</code>** and <code>kubeadm join</code> this node to the same cluster: the network interfaces are absent. But why? In both cases the node gets its network configuration from the master API in the same way. If I had found any errors or warnings I would have a clue.</p> <pre><code>** kubectl delete node &lt;node name&gt;(on master) kubeadm reset &amp;&amp; docker system prune -a &amp;&amp; reboot </code></pre>
<p>Fixed. The API server was bound to eth0 instead of eth1. This is my mistake; I'm very embarrassed.</p> <p>An additional master node works fine because it health-checks its own apiserver interface, but this doesn't work for a worker node.</p> <p>/Close</p>
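<p>For completeness, a minimal sketch of pinning the advertise address in the kubeadm <code>InitConfiguration</code> (the addresses are taken from the question above; treat the rest as an assumption rather than the exact config used here):</p> <pre><code># Sketch only: pin the API server to the eth1 address so it does not
# default to the first interface (eth0 / 10.0.2.15).
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.201.14.176   # eth1-side address, not the eth0 NAT address
  bindPort: 6443
</code></pre>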
<p>I'm creating a pod through the <code>KubernetesPodOperator</code> in Airflow. This pod should mount a Google Cloud Storage to <code>/mnt/bucket</code> using <code>gcsfuse</code>. For this, the pod is required to be started with the <code>securityContext</code> parameter, such that it can become 'privileged'.</p> <p>It is <a href="https://stackoverflow.com/questions/52742455/airflow-kubernetespodoperator-pass-securitycontext-parameter">currently not possible</a> to pass the securityContext parameter through Airflow. Is there another way to work around this? Perhaps by setting a 'default' securityContext before the pods are even started? I've looked at creating a <code>PodSecurityPolicy</code>, but haven't managed to figure out a way.</p>
<p>A mutating admission controller would allow you to do that: <a href="https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#mutatingadmissionwebhook</a></p> <p>The ibm-cloud team has a post about it, but I've never tried writing one: <a href="https://medium.com/ibm-cloud/diving-into-kubernetes-mutatingadmissionwebhook-6ef3c5695f74" rel="nofollow noreferrer">https://medium.com/ibm-cloud/diving-into-kubernetes-mutatingadmissionwebhook-6ef3c5695f74</a> and the folks at GiantSwarm have an end-to-end example using their Grumpy admission controller: <a href="https://docs.giantswarm.io/guides/creating-your-own-admission-controller/" rel="nofollow noreferrer">https://docs.giantswarm.io/guides/creating-your-own-admission-controller/</a></p> <p>I would use the labels, or annotations, or maybe even the image, to identify Pods launched by Airflow, and then mutate only them to set the <code>securityContext:</code> on the Pods to be the way you want.</p>
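<p>For a rough idea of the registration side, a hedged sketch follows (the webhook Service name, namespace and label are assumptions for the example; the actual patching logic lives in the webhook server you write, which would return a JSON patch adding the <code>securityContext</code>):</p> <pre><code># Sketch only: register a mutating webhook that sees Pod CREATE requests in a
# labelled namespace and lets your webhook server patch in securityContext.privileged.
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: airflow-privileged-mutator
webhooks:
  - name: airflow-privileged-mutator.example.com
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    namespaceSelector:
      matchLabels:
        airflow-mutation: enabled        # assumption: label the Airflow namespace like this
    clientConfig:
      service:
        name: pod-mutator                # assumption: your webhook Service
        namespace: kube-system
        path: /mutate
      caBundle: &lt;base64-encoded CA bundle&gt;   # the CA that signed the webhook's serving cert
    failurePolicy: Ignore                # don't block Pod creation if the webhook is down
</code></pre>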
<p>I've previously used both types, and I've also read through the docs at:</p> <p><a href="https://kubernetes.io/docs/concepts/storage/persistent-volumes/" rel="noreferrer">https://kubernetes.io/docs/concepts/storage/persistent-volumes/</a> <a href="https://kubernetes.io/docs/concepts/storage/volumes/" rel="noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/</a></p> <p>However, it's still not clear what the difference is: both seem to support the same storage types, and the only thing that comes to mind is that there seems to be a 'provisioning' aspect to persistent volumes.</p> <p><strong>What is the practical difference? Are there advantages / disadvantages between the two - or for what use case would one be better suited than the other?</strong></p> <p><strong>Is it perhaps just 'syntactic sugar'?</strong></p> <p>For example, NFS could be mounted as a volume or as a persistent volume. Both require an NFS server, and both will have their data 'persisted' between mounts. What difference would there be in this situation?</p>
<p>Volume decouples the storage from the Container. Its lifecycle is coupled to a pod. It enables safe container restarts and sharing data between containers in a pod.</p> <p>Persistent Volume decouples the storage from the Pod. Its lifecycle is independent. It enables safe pod restarts and sharing data between pods.</p>
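<p>To make that concrete with the NFS case from the question, a minimal sketch (the server address and sizes are placeholders): the first Pod embeds the NFS details directly as a volume, while the PV/PVC variant moves those details out of the Pod spec so the Pod only asks for "some persistent storage".</p> <pre><code># Plain volume: the NFS details live inside the Pod spec itself.
apiVersion: v1
kind: Pod
metadata:
  name: with-inline-volume
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      nfs:
        server: nfs.example.com    # placeholder
        path: /exports/data
---
# PersistentVolume + claim: the same NFS share, defined once at cluster level.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes: ["ReadWriteMany"]
  storageClassName: ""             # keep empty so the claim binds to this PV, not a dynamic provisioner
  nfs:
    server: nfs.example.com
    path: /exports/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim
spec:
  accessModes: ["ReadWriteMany"]
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi
---
# The Pod now only references the claim; it knows nothing about NFS.
apiVersion: v1
kind: Pod
metadata:
  name: with-persistent-volume
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: nfs-claim
</code></pre>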
<p>I have set up <code>kubernetes</code> on <code>ubuntu 16.04</code>. I am using kube version <code>1.13.1</code> and weave for networking. I initialized the cluster using:</p> <pre><code>sudo kubeadm init --token-ttl=0 --apiserver-advertise-address=192.168.88.142 </code></pre> <p>and weave:</p> <pre><code>kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')" </code></pre> <p>All the pods seem to be running fine, but <code>coredns</code> always remains in <code>CrashLoopBackOff</code> status. I have read through most of the solutions available for this. </p> <pre><code>NAME READY STATUS RESTARTS AGE coredns-86c58d9df4-h5plc 0/1 CrashLoopBackOff 7 18m coredns-86c58d9df4-l77rw 0/1 CrashLoopBackOff 7 18m etcd-tx-g1-209 1/1 Running 0 17m kube-apiserver-tx-g1-209 1/1 Running 0 17m kube-controller-manager-tx-g1-209 1/1 Running 0 17m kube-proxy-2jdpp 1/1 Running 0 18m kube-scheduler-tx-g1-209 1/1 Running 0 17m weave-net-npgnc 2/2 Running 0 13m </code></pre> <p>I initially started by editing the coredns config and deleting the loop. That resolved the issue, but later I realized that I wasn't able to ping <code>www.google.com</code> from within the container, although I was able to ping google.com's IP address. Thus deleting the loop is not a proper solution.</p> <p>Next I looked at <code>/etc/resolv.conf</code> and found the following contents:</p> <pre><code># Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8) # DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN nameserver 127.0.1.1 search APSDC.local </code></pre> <p>Here is the <a href="https://github.com/coredns/coredns/tree/master/plugin/loop#troubleshooting-loops-in-kubernetes-clusters" rel="nofollow noreferrer">workaround</a> provided in the CoreDNS documentation, which says that a local address like 127.0.0.1 should be avoided. I am not able to understand this, as this file is automatically generated. How can I make changes to the file so that coredns works fine? Below are the logs of coredns:</p> <pre><code>$ kubectl logs coredns-86c58d9df4-h5plc -n kube-system .:53 2019-01-31T17:26:43.665Z [INFO] CoreDNS-1.2.6 2019-01-31T17:26:43.666Z [INFO] linux/amd64, go1.11.2, 756749c CoreDNS-1.2.6 linux/amd64, go1.11.2, 756749c [INFO] plugin/reload: Running configuration MD5 = f65c4821c8a9b7b5eb30fa4fbc167769 [FATAL] plugin/loop: Forwarding loop detected in "." zone. Exiting. See https://coredns.io/plugins/loop#troubleshooting. Probe query: "HINFO 1423429973721138313.4523734933111484351.". </code></pre> <p>Can anyone please point me in the right direction to resolve this issue? Please help. Thanks.</p>
<p>I have resolved this issue. In my case, <code>/etc/resolv.conf</code> had the following contents:</p> <pre><code>nameserver 127.0.1.1 </code></pre> <p>I first used the command below to get the correct DNS IP, as the device was in the client's network.</p> <pre><code>nmcli device show &lt;interfacename&gt; | grep IP4.DNS </code></pre> <p>After this I updated the file <code>/etc/resolvconf/resolv.conf.d/head</code> with the following contents:</p> <pre><code>nameserver 192.168.66.21 </code></pre> <p>and then ran the command below to regenerate resolv.conf:</p> <pre><code>sudo resolvconf -u </code></pre> <p>After this, <code>/etc/resolv.conf</code> contained:</p> <pre><code>nameserver 192.168.66.21 nameserver 127.0.1.1 </code></pre> <p>I then deleted the <code>coredns</code> pods and everything worked fine. Thanks.</p>
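<p>On nodes that use systemd-resolved (Ubuntu 18.04 and later by default), another commonly used variant of this fix is to point the kubelet at the real upstream resolver file instead of the local stub, so Pods never see a loopback nameserver. A sketch, assuming a kubeadm-provisioned node (only the relevant field is shown):</p> <pre><code># In /var/lib/kubelet/config.yaml (kubeadm's default kubelet config path), set or add:
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
resolvConf: /run/systemd/resolve/resolv.conf   # the non-stub resolv.conf written by systemd-resolved
</code></pre> <p>followed by restarting the kubelet and deleting the <code>coredns</code> pods again.</p>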
<p>I'm creating a pod through the <code>KubernetesPodOperator</code> in Airflow. This pod should mount a Google Cloud Storage to <code>/mnt/bucket</code> using <code>gcsfuse</code>. For this, the pod is required to be started with the <code>securityContext</code> parameter, such that it can become 'privileged'.</p> <p>It is <a href="https://stackoverflow.com/questions/52742455/airflow-kubernetespodoperator-pass-securitycontext-parameter">currently not possible</a> to pass the securityContext parameter through Airflow. Is there another way to work around this? Perhaps by setting a 'default' securityContext before the pods are even started? I've looked at creating a <code>PodSecurityPolicy</code>, but haven't managed to figure out a way.</p>
<p>Separate from the Mutating Admission Controller, it's also possible to deploy a DaemonSet into your cluster that mounts <code>/mnt/bucket</code> onto the host filesystem, and then the Airflow pods would use <code>{"name": "bucket", "hostPath": {"path": "/mnt/bucket"}}</code> as their <code>volume:</code>, which -- assuming it works -- would be a <strong>boatload</strong> less typing, and also not run the very grave risk of a Mutating Admission Controller borking your cluster and causing Pods to be mysteriously mutated.</p>
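<p>A rough sketch of what that DaemonSet could look like (the image, bucket name and GCP credential handling are assumptions; you'd need an image with <code>gcsfuse</code> installed, and the privileged plus bidirectional mount propagation settings are what make the FUSE mount visible on the host):</p> <pre><code># Sketch: one privileged gcsfuse pod per node keeps the bucket mounted at
# /mnt/bucket on the host; Airflow pods then mount it via hostPath.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: gcsfuse-mounter
spec:
  selector:
    matchLabels:
      app: gcsfuse-mounter
  template:
    metadata:
      labels:
        app: gcsfuse-mounter
    spec:
      containers:
        - name: gcsfuse
          image: gcr.io/my-project/gcsfuse:latest          # assumption: your own image with gcsfuse
          command: ["gcsfuse", "--foreground", "my-bucket", "/mnt/bucket"]   # assumption: bucket name
          securityContext:
            privileged: true                               # needed for /dev/fuse and Bidirectional propagation
          volumeMounts:
            - name: bucket
              mountPath: /mnt/bucket
              mountPropagation: Bidirectional              # so the FUSE mount shows up on the host path
      volumes:
        - name: bucket
          hostPath:
            path: /mnt/bucket
            type: DirectoryOrCreate
</code></pre>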
<p>I have an ingress defined as:</p> <pre><code>apiVersion: extensions/v1beta1 kind: Ingress metadata: name: foo-ingress annotations: kubernetes.io/ingress.global-static-ip-name: zaz-address kubernetes.io/ingress.allow-http: "false" ingress.gcp.kubernetes.io/pre-shared-cert: foo-bar-com spec: rules: - host: foo.bar.com http: paths: - path: /zaz/* backend: serviceName: zaz-service servicePort: 8080 </code></pre> <p>Then the service <code>zaz-service</code> is a <code>NodePort</code> defined as:</p> <pre><code>apiVersion: v1 kind: Service metadata: name: zaz-service namespace: default spec: clusterIP: 10.27.255.88 externalTrafficPolicy: Cluster ports: - nodePort: 32455 port: 8080 protocol: TCP targetPort: 8080 selector: app: zap sessionAffinity: None type: NodePort </code></pre> <p>The <code>NodePort</code> service is successfully selecting the two pods behind it that serve my application. I can see in the <code>GKE</code> services list that the <code>NodePort</code> service has an IP that looks internal.</p> <p>When I check the <code>ingress</code> in the same interface, it also looks fine, but it is serving zero pods.</p> <p>When I describe the <code>ingress</code>, on the other hand, I can see:</p> <pre><code>Rules: Host Path Backends ---- ---- -------- foo.bar.com /zaz/* zaz-service:8080 (&lt;none&gt;) </code></pre> <p>It looks like the <code>ingress</code> is unable to resolve the service <code>IP</code>. What am I doing wrong here? I cannot access the service through the external domain name; I am getting a <code>404</code> error.</p>
<p>My mistake was, as expected, not reading the documentation thoroughly.</p> <p>The port stated in the <code>Ingress</code> path is not a "forwarding" mechanism but a "filtering" one. In my head it made sense that it would be redirecting <code>http(s)</code> traffic to port <code>8080</code>, which is the one the <code>Service</code> behind it was listening on, and the <code>Pod</code> behind the service too.</p> <p>In reality, it would not route traffic that was not on port <code>8080</code> to the service. To make it cleaner, I changed the port in the <code>Ingress</code> from <code>8080</code> to <code>80</code>, and the front-facing port in the <code>Service</code> from <code>8080</code> to <code>80</code> too.</p> <p>Now all requests coming from the internet can reach the server successfully.</p>
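<p>For reference, the relevant pieces after that change look roughly like this (trimmed to the ports; everything else from the question stays as it was):</p> <pre><code># Service: expose port 80 to the Ingress, still targeting 8080 in the pods.
apiVersion: v1
kind: Service
metadata:
  name: zaz-service
spec:
  type: NodePort
  selector:
    app: zap
  ports:
    - port: 80          # what the Ingress backend refers to
      targetPort: 8080  # what the pods actually listen on
      protocol: TCP
---
# Ingress backend now points at the Service's port 80.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foo-ingress
spec:
  rules:
    - host: foo.bar.com
      http:
        paths:
          - path: /zaz/*
            backend:
              serviceName: zaz-service
              servicePort: 80
</code></pre>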
<p>I have a frontend application that works perfectly fine when I have just one instance of the application running in a kubernetes cluster. But when I scale up the deployment to have 3 replicas, it shows a blank page on the first load, and then after a refresh it loads the page. As soon as I scale down the app to 1, it starts loading fine again. Here is what the console prints in the browser.</p> <blockquote> <p>hub.xxxxx.me/:1 Refused to execute script from '<a href="https://hub.xxxxxx.me/static/js/main.5a4e61df.js" rel="nofollow noreferrer">https://hub.xxxxxx.me/static/js/main.5a4e61df.js</a>' because its MIME type ('text/html') is not executable, and strict MIME type checking is enabled.</p> </blockquote> <p><a href="https://i.stack.imgur.com/GYDaY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GYDaY.png" alt="enter image description here"></a> Adding the screenshot as well. Any ideas what might be the cause? I know it is an infrastructure issue since it happens only when I scale the application.</p> <p>One thing I noticed is that 2 pods have a different js file than the other pod.</p> <blockquote> <p>2 pods have this file - build/static/js/main.b6aff941.js</p> <p>The other pod has this file - build/static/js/main.5a4e61df.js</p> </blockquote> <p>I think the mismatch is causing the problem. Any idea how to fix this mismatch so that the pods always have the same build?</p>
<blockquote> <p>I think the mismatch is causing the problem. Any Idea how to fix this mismatch issue so that the pods always have the same build?</p> </blockquote> <p>Yes, this is actually pretty common in a build where those resources change like that. You actually won't want to use the traditional rolling-update mechanism, because your deployment is closer to a blue-green one: only one "family" of Pods should be in service at a time, else the <strong>html</strong> from Pod 1 is served but the subsequent request for the <strong>javascript</strong> from Pod 2 is 404</p> <p>There is also the pretty grave risk of a browser having a cached copy of the HTML, but kubernetes can't -- by itself -- help you with that.</p> <p>One pretty reasonable solution is to scale the Deployment to one replica, do the image patch, wait for the a-ok, then scale them back up, so there is only one source of truth for the application running in the cluster at a time. A rollback would look very similar: scale 1, rollback the deployment, scale up</p> <p>An alternative mechanism would be to use label patching, to atomically switch the <code>Service</code> (and presumably thus the <code>Ingress</code>) over to the new Pods all at once, but that would require having multiple copies of the application in the cluster at the same time, which for a front-end app is likely more trouble than it's worth.</p>
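<p>As a rough sketch of that scale-down / patch / scale-up flow (the deployment and image names are stand-ins for whatever your frontend is actually called):</p> <pre><code># Collapse to a single replica so only one asset bundle is being served,
# roll the image, then scale back out once the new version checks out.
kubectl scale deployment frontend --replicas=1
kubectl set image deployment/frontend frontend=registry.example.com/frontend:v2
kubectl rollout status deployment/frontend
kubectl scale deployment frontend --replicas=3

# A rollback looks the same, just with `rollout undo` in the middle:
kubectl scale deployment frontend --replicas=1
kubectl rollout undo deployment/frontend
kubectl scale deployment frontend --replicas=3
</code></pre>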