<p><strong>Use case</strong></p>
<p>"I want to deploy elasticsearch + kibana, by using elastic official helm charts. I want to create a helm chart called data-viz with these charts as dependency."</p>
<p>Running <code>helm install data-viz --set cluster=toto</code> must create an elasticsearch cluster "toto", and a kibana configured with <code>elasticsearchHosts</code> = toto.</p>
<p><strong>Problem</strong></p>
<p>I see here <a href="https://github.com/helm/helm/blob/master/docs/chart_template_guide/subcharts_and_globals.md" rel="noreferrer">https://github.com/helm/helm/blob/master/docs/chart_template_guide/subcharts_and_globals.md</a> that it's possible to configure a sub-chart from the main chart's <code>values.yaml</code>. But I would like to templatize the sub-chart's <code>values.yaml</code> from the main chart's <code>values.yaml</code>: is that possible?</p>
<p>I was thinking of something as simple as:</p>
<pre><code>.
├── Chart.yaml
├── charts
│   ├── elasticsearch
│   │   ├── Chart.yaml
│   │   └── templates
│   │       └── values.yaml
│   ├── elasticsearch-7.4.0.tgz
│   └── kibana-7.4.0.tgz
├── requirements.lock
├── requirements.yaml
└── values.yaml
</code></pre>
<p><strong>Hack solution</strong></p>
<p>A small Python script that creates the <code>values.yaml</code> file from <code>sub-chart/values-template.yaml</code> + data.</p>
| <p><strong>TLDR:</strong> Not possible as of now. There were several attempts to add such functionality, but they failed because their authors were unwilling to go through the additional bureaucracy required by the Helm project members.</p>
<p>The <a href="https://github.com/helm/helm/pull/6876" rel="nofollow noreferrer">PR 6876</a> "feat(helm): Adding values templates in order to customize values with go-template, for the chart and its dependencies" could be of interest:</p>
<blockquote>
<p>There have been many requests for a way to use templates for chart values.yaml (#2492, <a href="https://github.com/helm/helm/issues/2133#issuecomment-683222039" rel="nofollow noreferrer">#2133</a>, ...).<br />
The main reason being that it's currently impossible to derive the values of the subcharts from templates.</p>
<p>However, having templates in <code>values.yaml</code> make them unparsable and create a "chicken or the egg" problem when rendering them.</p>
<p>This merge request creates an intuitive solution without such a problem: an optional <code>values/</code> directory whose templates are rendered using <code>values.yaml</code>, and then merged with it.<br />
Those <code>values/</code> templates are only required for specific cases, work the same way as <code>templates/</code> templates, and <code>values.yaml</code> will remain the main parsable source of values.</p>
<p>The rendering order has also been changed to allow those new values to enable or disable dependencies, and to avoid rendering values templates of disabled dependencies.</p>
<h2>New possibilities:</h2>
<p>Can now customize dependencies values, which was previously totally impossible</p>
</blockquote>
<h3>values/subchart.yaml</h3>
<pre><code>subchart:
  fullnameOverride: subchart-{{ .Release.Name }}
  debug: {{ default "false" .Values.debug }}
</code></pre>
<blockquote>
<p>Can derive dependencies conditions from values</p>
</blockquote>
<h3>Chart.yaml</h3>
<pre class="lang-yaml prettyprint-override"><code>dependencies:
  - name: subchart
    condition: subchart.enabled
</code></pre>
<h3>values/subchart.yaml</h3>
<pre class="lang-yaml prettyprint-override"><code>subchart:
  {{- if eq .Values.environment "production" -}}
  enabled: true
  {{- else -}}
  enabled: false
  {{- end -}}
</code></pre>
<hr />
<p>Similarly, <a href="https://github.com/helm/helm/pull/8580" rel="nofollow noreferrer">PR 8580</a> "Added support to pass values to sub-charts via <code>map.yaml</code>" is of interest:</p>
<blockquote>
<p>This PR allows developers to declare a <code>map.yaml</code> file on their chart, which can be used to map values in <code>values.yaml</code> to derived values that are used in templating, including sub-charts (see <a href="https://github.com/helm/helm/issues/8576" rel="nofollow noreferrer">#8576</a> for the long explanation).</p>
<p>This allows developers to write e.g.</p>
</blockquote>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
description: Chart with map and subcharts
name: chart-with-map
version: 0.0.1
dependencies:
  - name: backend
    version: 0.0.1
  - name: frontend
    version: 0.0.1
</code></pre>
<h3>values.yaml</h3>
<pre class="lang-yaml prettyprint-override"><code>domain: example.com
</code></pre>
<h3>map.yaml</h3>
<pre class="lang-yaml prettyprint-override"><code>backend:
  uri: {{ printf "https://api.%s" .Values.domain }}
frontend:
  uri: {{ printf "https://app.%s" .Values.domain }}
  other_uri: {{ printf "https://blabla.%s" .Values.domain }}
</code></pre>
<blockquote>
<p>thereby not having to expose <code>backend: uri</code>, <code>frontend: uri</code> in <code>values.yaml</code> (for the subchart), nor ask chart users to pass the same value in multiple keys for consistency (or use global names that lead to naming collisions).</p>
<p>I.e. it allows subcharts to be populated with derived (or mapped) values without exposing these values in <code>values.yaml</code> (the public interface of the chart).</p>
</blockquote>
<hr />
<p>This is being implemented/evaluated in:</p>
<ul>
<li><a href="https://github.com/helm/helm/pull/8677" rel="nofollow noreferrer">PR 8677</a>: "fix: limit the usage of chartutil.Values to avoid conversion bugs"</li>
<li><a href="https://github.com/helm/helm/pull/8679" rel="nofollow noreferrer">PR 8679</a>: "fix: do not merge and import values from disabled dependencies"</li>
<li><a href="https://github.com/helm/helm/pull/8690" rel="nofollow noreferrer">PR 8690</a>: "feat: add values templates to customize values with go-template"</li>
</ul>
<p>But that <a href="https://github.com/helm/helm/pull/6876#issuecomment-687233515" rel="nofollow noreferrer">will likely require</a> an <a href="https://github.com/helm/community/blob/master/hips/hip-0001.md" rel="nofollow noreferrer">HIP (Helm Improvement Proposal)</a>.</p>
<hr />
<p>Update Feb. 2021: <a href="https://github.com/helm/helm/pull/6876" rel="nofollow noreferrer">PR 6876</a> confirms in <a href="https://github.com/helm/helm/pull/6876#issuecomment-707002193" rel="nofollow noreferrer">this comment</a> a formal proposal is needed.</p>
|
<p>I would like to set up the Auto DevOps functionality of GitLab CI/CD, and for that I am trying to set up an existing Kubernetes cluster as my environment.</p>
<p>However, GitLab requires the Kubernetes master API URL, which it uses to access the Kubernetes API. Kubernetes
exposes several APIs; we want the "base" URL that is common to all of them,
e.g., <a href="https://kubernetes.example.com" rel="nofollow noreferrer">https://kubernetes.example.com</a> rather than <a href="https://kubernetes.example.com/api/v1" rel="nofollow noreferrer">https://kubernetes.example.com/api/v1</a>.
We get the API URL by running this command:</p>
<p><code>kubectl cluster-info | grep 'Kubernetes master' | awk '/http/ {print $NF}'</code></p>
<p>which returns an https:// URL.</p>
<p>In my case, I have a private IP, which is <a href="https://172.10.1.x" rel="nofollow noreferrer">https://172.10.1.x</a>.</p>
<p>There isn't any documentation to help set up GitLab CI for a private GKE cluster.</p>
<p>How can I set up GitLab to access my Kubernetes master with the help of a running VM instance or a pod's service IP? If there are any solutions/workarounds to achieve this, please help.</p>
<p><a href="https://gitlab.com/help/user/project/clusters/add_remove_clusters.md#add-existing-cluster" rel="nofollow noreferrer">Add Existing GKE cluster as Environment</a></p>
| <p>There is now (Sept. 2020) an alternative, although it was initially not free (GitLab.com Premium/Ultimate only). It became free in part with 14.5 (Nov. 2021), then fully with 15.3 (Aug. 2022).</p>
<p>See <a href="https://about.gitlab.com/releases/2020/09/22/gitlab-13-4-released/#introducing-the-gitlab-kubernetes-agent" rel="nofollow noreferrer">GitLab 13.4</a></p>
<blockquote>
<h2>Introducing the GitLab Kubernetes Agent</h2>
<p>GitLab’s Kubernetes integration has long enabled deployment to Kubernetes clusters without manual setup. Many users have enjoyed the ease-of-use, while others have run into some challenges.</p>
<p>The current integration requires your cluster to be open to the Internet for GitLab to access it. For many organizations, this isn’t possible, because they must lock down their cluster access for security, compliance, or regulatory purposes. To work around these restrictions, users needed to create custom tooling on top of GitLab, or they couldn’t use the feature.</p>
<p>Today, we’re announcing the GitLab Kubernetes Agent: a new way to deploy to Kubernetes clusters. The Agent runs inside of your cluster, so you don’t need to open it to the internet. The Agent orchestrates deployments by pulling new changes from GitLab, rather than GitLab pushing updates to the cluster. No matter what method of GitOps you use, GitLab has you covered.</p>
<p>Note this is the first release of the Agent. Currently, the GitLab Kubernetes Agent has a configuration-driven setup, and enables deployment management by code. Some existing Kubernetes integration features, such as Deploy Boards and GitLab Managed Apps, are not yet supported. <a href="https://about.gitlab.com/direction/configure/kubernetes_management/" rel="nofollow noreferrer">Our vision</a> is to eventually implement these capabilities, and provide new security- and compliance-focused integrations with the Agent.</p>
<p><a href="https://about.gitlab.com/images/13_4/gitops-header.png" rel="nofollow noreferrer">https://about.gitlab.com/images/13_4/gitops-header.png</a> -- Introducing the GitLab Kubernetes Agent</p>
<p>See <a href="https://docs.gitlab.com/ee/user/clusters/agent" rel="nofollow noreferrer">Documentation</a> and <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/223061" rel="nofollow noreferrer">Issue</a>.</p>
</blockquote>
<hr />
<p>See also <a href="https://about.gitlab.com/releases/2020/10/22/gitlab-13-5-released/#install-the-gitlab-kubernetes-agent-with-omnibus-gitlab" rel="nofollow noreferrer">GitLab 13.5</a> (October 2020)</p>
<blockquote>
<h2>Install the GitLab Kubernetes Agent with Omnibus GitLab</h2>
<p>Last month we introduced the <a href="/releases/2020/09/22/gitlab-13-4-released/#introducing-the-gitlab-kubernetes-agent">GitLab Kubernetes Agent</a> for self-managed GitLab instances installed with Helm.</p>
<p>This release adds support for the <a href="https://about.gitlab.com/install/" rel="nofollow noreferrer">official Linux package</a>.</p>
<p>In this new Kubernetes integration, the Agent orchestrates deployments by pulling new changes from GitLab, rather than GitLab pushing updates to your cluster.</p>
<p>You can learn more about <a href="https://docs.gitlab.com/ee/user/clusters/agent/" rel="nofollow noreferrer">how the Kubernetes Agent works now</a> and <a href="/direction/configure/kubernetes_management/">check out our vision</a> to see what’s in store.</p>
<p>See <a href="https://docs.gitlab.com/ee/user/clusters/agent/" rel="nofollow noreferrer">Documentation</a> and <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/223060" rel="nofollow noreferrer">Issue</a>.</p>
</blockquote>
<hr />
<p>This is confirmed with <a href="https://about.gitlab.com/releases/2021/04/22/gitlab-13-11-released/#gitlab-kubernetes-agent-available-on-gitlabcom" rel="nofollow noreferrer">GitLab 13.11</a> (April 2021):</p>
<blockquote>
<h2>GitLab Kubernetes Agent available on GitLab.com</h2>
<p>The GitLab Kubernetes Agent is finally available on GitLab.com. By using the Agent, you can benefit from fast, pull-based deployments to your cluster, while GitLab.com manages the necessary server-side components of the Agent.</p>
<p>The GitLab Kubernetes Agent is the core building block of GitLab’s Kubernetes integrations.<br />
The Agent-based integration today supports pull-based deployments and Network Security policy integration and alerts, and will soon receive support for push-based deployments too.</p>
<p>Unlike the legacy, certificate-based Kubernetes integration, the GitLab Kubernetes Agent does not require opening up your cluster towards GitLab and allows fine-tuned RBAC controls around GitLab’s capabilities within your clusters.</p>
</blockquote>
<p>See <a href="https://docs.gitlab.com/ee/user/clusters/agent/" rel="nofollow noreferrer">Documentation</a> and <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/249596" rel="nofollow noreferrer">issue</a>.</p>
<hr />
<p>See <a href="https://about.gitlab.com/releases/2021/11/22/gitlab-14-5-released/#gitlab-kubernetes-agent-available-in-gitlab-free" rel="nofollow noreferrer">GitLab 14.5</a> (November 2021)</p>
<blockquote>
<h2>GitLab Kubernetes Agent available in GitLab Free</h2>
<p>Connecting a Kubernetes cluster with the GitLab Kubernetes Agent simplifies the setup for cluster applications and enables secure GitOps deployments to the cluster.</p>
<p>Initially, the GitLab Kubernetes Agent was available only for Premium users.</p>
<p>In our commitment to the open source ethos, we moved the core features of the GitLab Kubernetes Agent and the CI/CD Tunnel to GitLab Free.<br />
We expect that the open-sourced features are compelling to many users without dedicated infrastructure teams and strong requirements around cluster management.<br />
Advanced features remain available as part of the GitLab Premium offering.</p>
<p>See <a href="https://docs.gitlab.com/ee/user/clusters/agent/" rel="nofollow noreferrer">Documentation</a> and <a href="https://gitlab.com/groups/gitlab-org/-/epics/6290" rel="nofollow noreferrer">Epic</a>.</p>
</blockquote>
<hr />
<p>See <a href="https://about.gitlab.com/releases/2022/02/22/gitlab-14-8-released/#the-agent-server-for-kubernetes-is-enabled-by-default" rel="nofollow noreferrer">GitLab 14.8</a> (February 2022)</p>
<blockquote>
<h2>The agent server for Kubernetes is enabled by default</h2>
<p>The first step for using the agent for Kubernetes in self-managed instances is to enable the agent server, a backend service for the agent for Kubernetes. In GitLab 14.7 and earlier, we required a GitLab administrator to enable the agent server manually. As the feature matured in the past months, we are making the agent server enabled by default to simplify setup for GitLab administrators. Besides being enabled by default, the agent server accepts various configuration options to customize it according to your needs.</p>
<p>See <a href="https://docs.gitlab.com/ee/administration/clusters/kas.html" rel="nofollow noreferrer">Documentation</a> and <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/273372" rel="nofollow noreferrer">Issue</a>.</p>
</blockquote>
<hr />
<p>With <a href="https://about.gitlab.com/releases/2022/08/22/gitlab-15-3-released/#gitops-features-are-now-free" rel="nofollow noreferrer">GitLab 15.3</a> (August 2022):</p>
<blockquote>
<h2>GitOps features are now free</h2>
<p>When you use GitOps to update a Kubernetes cluster, also called a pull-based deployment, you get an improved security model, better scalability and stability.</p>
<p>The GitLab agent for Kubernetes has supported <a href="https://docs.gitlab.com/ee/user/clusters/agent/gitops.html" rel="nofollow noreferrer">GitOps workflows</a> from its initial release, but until now, the functionality was available only if you had a GitLab Premium or Ultimate subscription. Now if you have a Free subscription, you also get pull-based deployment support. The features available in GitLab Free should serve small, high-trust teams or be suitable to test the agent before upgrading to a higher tier.</p>
<p>In the future, we plan to add <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/337904" rel="nofollow noreferrer">built-in multi-tenant support</a> for Premium subscriptions. This feature would be similar to the impersonation feature already available for the <a href="https://docs.gitlab.com/ee/user/clusters/agent/ci_cd_workflow.html#restrict-project-and-group-access-by-using-impersonation" rel="nofollow noreferrer">CI/CD workflow</a>.</p>
<p>See <a href="https://docs.gitlab.com/ee/user/clusters/agent/gitops.html" rel="nofollow noreferrer">Documentation</a> and <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/346567" rel="nofollow noreferrer">Issue</a>.</p>
</blockquote>
<hr />
<p>See <a href="https://about.gitlab.com/releases/2022/09/22/gitlab-15-4-released/#deploy-helm-charts-with-the-agent-for-kubernetes" rel="nofollow noreferrer">GitLab 15.4</a> (September 2022)</p>
<blockquote>
<h2>Deploy Helm charts with the agent for Kubernetes</h2>
<p>You can now <a href="https://docs.gitlab.com/ee/user/clusters/agent/gitops/helm.html" rel="nofollow noreferrer">use the agent for Kubernetes to deploy
Helm charts to your Kubernetes cluster</a>.</p>
<p>Until now, the agent for Kubernetes only supported vanilla Kubernetes manifest files in its GitOps workflow.<br />
To benefit from the GitOps workflow, Helm users had to use a CI/CD job to render and commit resources.</p>
<p>The current release ships with Alpha support for Helm.<br />
Because Helm is a mature product, we consider the solution performant. However, <a href="https://docs.gitlab.com/ee/user/clusters/agent/gitops/helm.html#known-issues" rel="nofollow noreferrer">known issues</a> exist and the API might change without prior notice. We welcome your feedback in <a href="https://gitlab.com/groups/gitlab-org/-/epics/7938" rel="nofollow noreferrer">the
related epic</a>, where we discuss future improvements and next steps.</p>
<p>See <a href="https://docs.gitlab.com/ee/user/clusters/agent/gitops/helm.html" rel="nofollow noreferrer">Documentation</a> and <a href="https://gitlab.com/gitlab-org/gitlab/-/issues/371019" rel="nofollow noreferrer">Issue</a>.</p>
</blockquote>
|
<p>I am setting up Github Actions for a project repository. </p>
<p>The workflow consists of the following steps:</p>
<ul>
<li>Building a docker image </li>
<li>Pushing the image to a container registry</li>
<li>Rollout a Kubernetes deployment. </li>
</ul>
<hr>
<p>However, I have two different Kubernetes deployments: one for development, and one for production. Hence, I also have two GitHub Actions workflows.</p>
<p>The GitHub Actions workflow for development is triggered every time a commit is pushed:</p>
<pre><code>on:
  push:
    branches:
      - master
</code></pre>
<p>But I don't want that for my production workflow. I would need a manual trigger, like a <em>Send to production</em> button. I didn't see anything close to that in the docs.</p>
<hr>
<p><strong><em>Is there a way to trigger a workflow manually in Github Actions?</em></strong></p>
<p><strong><em>How can I split my development and my production workflows to achieve what I want, either on Github Actions, Docker or Kubernetes?</em></strong></p>
| <blockquote>
<p>Is there a way to trigger a workflow manually in Github Actions?</p>
</blockquote>
<p>You might consider, <a href="https://github.blog/changelog/2020-07-06-github-actions-manual-triggers-with-workflow_dispatch/" rel="noreferrer">from July 2020</a>:</p>
<blockquote>
<h2>GitHub Actions: Manual triggers with workflow_dispatch</h2>
</blockquote>
<p>(Note: or multiple workflows, through the <a href="https://stackoverflow.com/a/63317065/6309">new Composite Run Steps</a>, August 2020)</p>
<blockquote>
<p>You can now create workflows that are manually triggered with the new <code>workflow_dispatch</code> event.<br />
You will then see a '<code>Run workflow</code>' button on the Actions tab, enabling you to easily trigger a run.</p>
<p><a href="https://i.stack.imgur.com/reBTv.png" rel="noreferrer"><img src="https://i.stack.imgur.com/reBTv.png" alt="https://i2.wp.com/user-images.githubusercontent.com/1865328/86147571-2de93700-babf-11ea-8a08-e4beffd3abe9.png?ssl=1" /></a></p>
<p>You can choose which branch the workflow is run on.</p>
</blockquote>
<p><a href="https://stackoverflow.com/users/1631219/philippe">philippe</a> adds <a href="https://stackoverflow.com/questions/58933155/manual-workflow-triggers-in-github-actions/62768214#comment111440150_62768214">in the comments</a>:</p>
<blockquote>
<p>One thing that's not mentioned in the documentation: the workflow must exist on the default branch for the "<code>Run workflow</code>" button to appear.<br />
Once you add it there, you can continue developing the action on its own branch and the changes will take effect when run using the button</p>
</blockquote>
<p>The documentation goes on:</p>
<blockquote>
<p>In addition, you can optionally specify inputs, which GitHub will present as form elements in the UI. Workflow dispatch inputs are specified with the same format as action inputs.</p>
<p>For example:</p>
</blockquote>
<pre><code>on:
  workflow_dispatch:
    inputs:
      logLevel:
        description: 'Log level'
        required: true
        default: 'warning'
      tags:
        description: 'Test scenario tags'
</code></pre>
<blockquote>
<p>The triggered workflow receives the inputs in the <code>github.event</code> context.</p>
<p>For example:</p>
</blockquote>
<pre><code>jobs:
  printInputs:
    runs-on: ubuntu-latest
    steps:
      - run: |
          echo "Log level: ${{ github.event.inputs.logLevel }}"
          echo "Tags: ${{ github.event.inputs.tags }}"
</code></pre>
<hr />
<p><a href="https://stackoverflow.com/users/1032372/shim">shim</a> adds in <a href="https://stackoverflow.com/questions/58933155/manual-workflow-triggers-in-github-actions/62768214?noredirect=1#comment121548076_62768214">the comments</a>:</p>
<blockquote>
<p>You can add <code>workflow_dispatch</code> to a workflow that also has other triggers (like <code>on push</code> and / or <code>schedule</code>)</p>
<p><a href="https://gist.github.com/simonbromberg/d9dfbbd3fcf481dd59dc522f0d12cdf6" rel="noreferrer">For instance</a>:</p>
<pre class="lang-yaml prettyprint-override"><code>on:
  workflow_dispatch:
  push:
    branches:
      - master
  pull_request:
    types: [opened, synchronize, reopened]
</code></pre>
</blockquote>
|
<p>So basically I moved my website to a new Kubernetes cluster. Now it is necessary to enable RBAC. The pipeline runs without any errors, but unfortunately the cert-manager SSL doesn't work anymore. I installed cert-manager with GitLab, so now I'm wondering if this could have anything to do with the change to RBAC. Unfortunately, I'm new to RBAC, so I don't really understand whether it could be related or not. Can anybody help?</p>
<p>Here is a picture of the error when I run whynopadlock:</p>
<p><a href="https://i.stack.imgur.com/cFX9F.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cFX9F.png" alt="whynopadlock 1"></a></p>
<p><a href="https://i.stack.imgur.com/MRYYP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MRYYP.png" alt="whynopadlock 2"></a></p>
<p>Edit: This is the output when I run:</p>
<pre><code>kubectl auth can-i --list --as=system:serviceaccount:gitlab-managed-apps:default
</code></pre>
<p><a href="https://i.stack.imgur.com/4xCb6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4xCb6.png" alt="enter image description here"></a></p>
| <p>Deploying cert-manager to a managed cluster for SSL certificates has been an open request for the past two years (<a href="https://gitlab.com/gitlab-org/gitlab-foss/-/issues/40635" rel="nofollow noreferrer">issue 40635</a>).<br>
<a href="https://gitlab.com/gitlab-org/gitlab/-/issues/29650" rel="nofollow noreferrer">Issue 29650</a> shows the whole process is not yet well documented.</p>
<p>But you can still have an <a href="https://gitlab.com/gitlab-org/gitlab-foss/-/issues/40635#note_130063501" rel="nofollow noreferrer">ingress working with cert-manager</a>, meaning one that carries the annotations expected by cert-manager (a sketch follows the list below):</p>
<ul>
<li><code>kubernetes.io/ingress.class: nginx</code></li>
<li><code>kubernetes.io/tls-acme: "true"</code> </li>
</ul>
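<p>As a rough illustration (not from the original answer), a minimal Ingress sketch carrying those annotations; the host, secret, and service names are hypothetical:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    kubernetes.io/ingress.class: nginx
    kubernetes.io/tls-acme: "true"
spec:
  tls:
    - hosts:
        - example.com
      secretName: example-com-tls   # cert-manager stores the issued certificate here
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80
</code></pre>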
<p>That could help with <a href="https://docs.gitlab.com/charts/installation/rbac.html" rel="nofollow noreferrer">RBAC</a>, even though <a href="https://gitlab.com/gitlab-org/gitlab/issues/33186" rel="nofollow noreferrer">issue 33186</a> shows this setup as not fully working.</p>
|
<p>I want to create a deployment with GitHub URL using<br>
<code>kubectl create -f github.com/deployment.yaml</code><br>
That <code>deployment.yaml</code> file is located in my private GitHub repository.<br>
How can I authenticate my kubectl to use my GitHub private repo and create that deployment?</p>
| <p>You could simply:</p>
<ul>
<li><a href="https://gist.github.com/Integralist/9482061" rel="nofollow noreferrer">curl the file from the private repo</a></li>
<li><a href="https://unix.stackexchange.com/a/505831/7490">pass its content to kubectl create -f</a></li>
</ul>
<p>That is, from <a href="https://gist.github.com/madrobby/9476733#gistcomment-2817366" rel="nofollow noreferrer">this example</a>:</p>
<pre><code># hypothetical credentials and repository coordinates
USER="me"
PASSWD="mypasswd"
OWNER="mycompany"
REPOSITORY="boo"
RESOURCE_PATH="project-x/a/b/c.py"
TAG="my_unit_test"

# fetch the raw file from the private repo and feed it to kubectl via stdin
curl \
  -u "$USER:$PASSWD" \
  -H 'Accept: application/vnd.github.v4.raw' \
  -L "https://api.github.com/repos/$OWNER/$REPOSITORY/contents/$RESOURCE_PATH?ref=$TAG" \
  | kubectl create -f /dev/stdin
</code></pre>
|
<p>In the following scenario:</p>
<ol>
<li>Pod X has a toleration for a taint</li>
<li>However, node A with such a taint does not exist</li>
<li>Pod X gets scheduled on a different node B in the meantime</li>
<li>Node A with the proper taint becomes Ready</li>
</ol>
<p>Here, Kubernetes does not trigger an automatic rescheduling of the pod X on node A as it is properly running on node B. Is there a way to enable that automatic rescheduling to node A?</p>
| <p>Natively, probably not, unless you:</p>
<ul>
<li>change the taint of <code>nodeB</code> to <code>NoExecute</code> (it probably already was set):</li>
</ul>
<blockquote>
<p>NoExecute - the pod will be evicted from the node (if it is already running on the node), and will not be scheduled onto the node (if it is not yet running on the node).</p>
</blockquote>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/#concepts" rel="nofollow noreferrer">update the toleration of the pod</a></li>
</ul>
<p>That is:</p>
<blockquote>
<p>You can put multiple taints on the same node and <strong>multiple tolerations on the same pod</strong>.</p>
<p>The way Kubernetes processes multiple taints and tolerations is like a filter: start with all of a node’s taints, then ignore the ones for which the pod has a matching toleration; the remaining un-ignored taints have the indicated effects on the pod. In particular,</p>
<p>if there is at least one un-ignored taint with effect NoSchedule then Kubernetes will not schedule the pod onto that node</p>
</blockquote>
<p>If that is not possible, then using <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#node-affinity-beta-feature" rel="nofollow noreferrer">Node Affinity</a> could help (but that differs from taints)</p>
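<p>For completeness, a minimal sketch (not from the original answer) of what such a node-affinity constraint could look like; the label key and values are hypothetical, and note that <code>requiredDuringSchedulingIgnoredDuringExecution</code> only influences scheduling, it will not move a pod that is already running:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: pod-x
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: dedicated            # hypothetical label present only on node A
                operator: In
                values:
                  - special-workload
  tolerations:
    - key: dedicated                      # the taint pod X already tolerates
      operator: Equal
      value: special-workload
      effect: NoSchedule
  containers:
    - name: app
      image: nginx
</code></pre>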
|
<p>I am using Cloud Build and a GKE k8s cluster, and I have set up CI/CD from GitHub to Cloud Build.</p>
<p>I want to know: is it better to add the CI build file and Dockerfile to the repository, or to manage the config files separately in another repository?</p>
<p>Is it good to keep the CI & k8s config files in the same repository as the business logic?</p>
<p>What is the best way to implement CI/CD from Cloud Build to GKE while managing the CI/k8s YAML files?</p>
<p>Yes, you can add deployment directives, typically in a dedicated folder of your project, which can in turn use a CI/CD repository.</p>
<p>See "<a href="https://github.com/kelseyhightower/pipeline-application" rel="nofollow noreferrer"><code>kelseyhightower/pipeline-application</code></a>" as an example, where:</p>
<blockquote>
<p>Changes pushed to any branch except master should trigger the following actions:</p>
<ul>
<li>build a container image tagged with the build ID suitable for deploying to a staging cluster</li>
<li>clone the <a href="https://github.com/kelseyhightower/pipeline-infrastructure-staging" rel="nofollow noreferrer">pipeline-infrastructure-staging repo</a></li>
<li>patch the pipeline deployment configuration file with the staging container image and commit the changes to the pipeline-infrastructure-staging repo</li>
</ul>
<p>The pipeline-infrastructure-staging repo will deploy any updates committed to the master branch.</p>
</blockquote>
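<p>As an illustration of keeping the CI configuration next to the application code, a minimal <code>cloudbuild.yaml</code> sketch (the image name, manifest folder, zone, and cluster name are all hypothetical):</p>
<pre class="lang-yaml prettyprint-override"><code>steps:
  # Build and push the application image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA']
  # Apply the Kubernetes manifests stored alongside the code (e.g. in k8s/)
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['apply', '-f', 'k8s/']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=us-central1-a'
      - 'CLOUDSDK_CONTAINER_CLUSTER=my-cluster'
</code></pre>
<p>Keeping this file in the application repository means a change to the app and a change to its deployment are reviewed together; a separate config repository makes sense mostly when another team owns the deployment.</p>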
|
<p>I have an app running with Docker, and the .git directory is ignored to reduce the project's size.</p>
<p>The problem is that every time an artisan command is run, this message is displayed and stored in the Kubernetes logs. Moreover, in some cases it is the reason why some Kubernetes tasks cannot reach the HEALTHY status.</p>
<p>I have a CronJob in Kubernetes which reaches just 2/3, and the only message the logs display is this one.</p>
| <p><a href="https://github.com/monicahq/monica/pull/950" rel="noreferrer">monicahq/monica PR 950</a> is an example of a workaround where the Sentry configuration is modified to test for the presence of the Git repository, ensuring <code>php artisan config:cache</code> is run only once.</p>
<pre><code>// capture release as git sha
'release' => is_dir(__DIR__.'/../.git') ? trim(exec('git log --pretty="%h" -n1 HEAD')) : null,
</code></pre>
|
<p>I'm using Docker to create a container application and then deploy it to <a href="https://cloud.google.com/kubernetes-engine/" rel="noreferrer">kubernetes engine</a> but when the application is been initialized I get this error: </p>
<pre><code>err: open C:\Go/lib/time/zoneinfo.zip: no such file or directory
</code></pre>
<p><a href="https://i.stack.imgur.com/UAjLT.png" rel="noreferrer"><img src="https://i.stack.imgur.com/UAjLT.png" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/OCnpZ.png" rel="noreferrer"><img src="https://i.stack.imgur.com/OCnpZ.png" alt="enter image description here"></a></p>
| <p>You might consider building your Go application <a href="https://golang.org/doc/go1.15#time/tzdata" rel="nofollow noreferrer">with Go 1.15</a> (August 2020, two years later)</p>
<blockquote>
<h2>New embedded tzdata package</h2>
</blockquote>
<blockquote>
<p>Go 1.15 includes a new package, <a href="https://golang.org/pkg/time/tzdata/" rel="nofollow noreferrer"><code>time/tzdata</code></a>, that permits embedding the timezone database into a program.</p>
<ul>
<li>Importing this package (as <code>import _ "time/tzdata"</code>) permits the program to find timezone information even if the timezone database is not available on the local system.</li>
<li>You can also embed the timezone database by building with <code>-tags timetzdata</code>.</li>
</ul>
<p>Either approach increases the size of the program by about 800 KB.</p>
</blockquote>
<p>That way, once deployed to a Kubernetes engine as a Docker image, it won't have to try and load any timezone information, since said information is embedded in its program.</p>
<p><strong>Caveat</strong>: as <a href="https://stackoverflow.com/questions/52489347/how-to-create-a-binary-that-contains-zoneinfo-zip/69312649#comment122507561_63371576">commented</a> by <a href="https://stackoverflow.com/users/328115/dolmen">dolmen</a>:</p>
<blockquote>
<p>This solution has the <strong>drawback that the timezone information version is linked to the version of Go you use for building</strong>.</p>
<p>If you keep using a fixed version of Go for your project, you will keep timezones from that version.</p>
<p>Being able to update the timezone information independently from the Go version (update every time you rebuild the Docker image for deploy) might be a better choice.</p>
</blockquote>
<p>As an illustration, see dolmen's <a href="https://stackoverflow.com/a/69312649/6309">answer</a>.</p>
|
<p>I am tasked with building a New Relic chart to show the GitLab runner job count over time.
I am trying to determine what type of object a GitLab runner job is. Is it a deployment, a pod, or a statefulset?</p>
<p>Any assistance with being able to visualize GitLab runner pods in New Relic would be appreciated.</p>
| <p>As mentioned in "<a href="https://docs.gitlab.com/runner/executors/kubernetes.html" rel="nofollow noreferrer">Kubernetes executor for GitLab Runner</a>":</p>
<blockquote>
<p>The Kubernetes executor, when used with GitLab CI, connects to the Kubernetes API in the cluster creating <strong>a Pod for each GitLab CI Job</strong>.</p>
<p>This Pod is made up of, at the very least:</p>
<ul>
<li>a build container,</li>
<li>a helper container, and</li>
<li>an additional container for each service defined in the <code>.gitlab-ci.yml</code> or <code>config.toml</code> files.</li>
</ul>
</blockquote>
<p>Since those are pods, you should see them in a <a href="https://docs.newrelic.com/docs/kubernetes-pixie/kubernetes-integration/understand-use-data/kubernetes-cluster-explorer/" rel="nofollow noreferrer">NewRelic Kubernetes cluster explorer</a>.</p>
|
<p>I use the command <code>docker container update --cpus 1 target_container</code> to update the setting. But in most cases it does not work and responds "you must provide one or more flags when using this command"; sometimes it does work.</p>
<p>Anyone know the reason?</p>
<p>The log looks like this:</p>
<pre><code>$ docker container update --cpus 0.5 target_container
you must provide one or more flags when using this command
$ docker container update --cpus 0.5 target_container
you must provide one or more flags when using this command
$ docker container update --cpus 0.5 target_container
target_container
</code></pre>
<p>the docker version is </p>
<pre><code>Client:
 Version:           18.09.7
 API version:       1.39
 Go version:        go1.10.4
 Git commit:        2d0083d
 Built:             Fri Aug 16 14:19:38 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.09.7
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.4
  Git commit:       2d0083d
  Built:            Thu Aug 15 15:12:41 2019
  OS/Arch:          linux/amd64
  Experimental:     false
</code></pre>
| <p>The <a href="https://docs.docker.com/engine/reference/commandline/update/" rel="nofollow noreferrer">docker update</a> man page includes:</p>
<pre><code>--cpus API 1.29+
</code></pre>
<p>A <code>docker version</code> can show you if your API is equal to or above <a href="https://docs.docker.com/engine/api/v1.29/" rel="nofollow noreferrer">API 1.29</a>, which came with Docker 17.05.</p>
<p>For testing, try <code>--cpus=1</code> or <code>--cpus=0.5</code>, considering the argument is supposed to be the "number of CPUs".</p>
<p>As usual with commands including a hyphen: <a href="https://stackoverflow.com/a/170148/6309">don't copy-paste it</a>; type it manually.</p>
|
<p>We have a Keycloak 15.1.1 deployment on Kubernetes with multiple replicas, backed by an AWS RDS Postgres 13 database. I did not find any upgrade guide or other people's experience with this setup, or even with Kubernetes with Postgres using a PVC, for upgrading Keycloak across multiple major versions.</p>
<p>Does anyone have any experience with the Keycloak upgrade on Kubernetes?</p>
<p>I went through the changelog and was able to run Keycloak locally using docker-compose, only in HTTP mode, as we terminate SSL at the reverse proxy.</p>
<p>Based on the upgrade instructions in the Keycloak <a href="https://www.keycloak.org/docs/latest/upgrading/index.html" rel="nofollow noreferrer">documentation</a>, is the following strategy the right one to avoid losing any data?</p>
<p>Update the Docker image with a new image running only in HTTP mode in our Helm charts.</p>
<p>Initially start the new deployment with only a single replica so that the database schema changes are applied:</p>
<pre><code>kc.sh start --spi-connections-jpa-default-migration-strategy=update
</code></pre>
<p>When I tried to upgrade my local deployment with the above command, Keycloak was not accessible until the next restart.</p>
<p>Restart the deployment with more replicas with the command:</p>
<pre><code>kc.sh start --optimized
</code></pre>
| <p>I got the answer from the Keycloak GitHub support forums <a href="https://github.com/keycloak/keycloak/discussions/14682" rel="nofollow noreferrer">https://github.com/keycloak/keycloak/discussions/14682</a>.</p>
<p>Running <code>kc.sh start</code> automatically upgrades the DB on first install, and the first container running this command automatically locks the DB until the migration is complete. So it's not required to change my Helm chart.</p>
|
<p>I'm running a Ruby app on Kubernetes with Minikube. </p>
<p>However, whenever I look at the logs I don't see the output I would have seen in my terminal when running the app locally.</p>
<p>I presume it's because it only shows stderr?</p>
<p>What can I do to see all types of console logs (e.g. from <code>puts</code> or <code>raise</code>)?</p>
<p>From looking around, is this something to do with it being in detached mode? See the related Python issue: <a href="https://stackoverflow.com/questions/43969743/logs-in-kubernetes-pod-not-showing-up">Logs in Kubernetes Pod not showing up</a></p>
<p>Thanks.</p>
<p>=</p>
<p>As requested - here is the deployment.yaml</p>
<pre><code>apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: sample
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sample
    spec:
      containers:
        - name: sample
          image: someregistry
          imagePullPolicy: Always
          command: ["/bin/sh","-c"]
          args: ["bundle exec rake sample:default --trace"]
          envFrom:
            - configMapRef:
                name: sample
            - secretRef:
                name: sample
          ports:
            - containerPort: 3000
      imagePullSecrets:
        - name: regsecret
</code></pre>
<p>As shown in this article, <code>kubectl logs apod</code> (for a pod named <code>apod</code>) should show you stdout and stderr for a pod deployed in a minikube.</p>
<blockquote>
<p>By default in Kubernetes, Docker is configured to write a container's stdout and stderr to a file under /var/log/containers on the host system</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/#system-component-logs" rel="noreferrer">Kubernetes adds</a>:</p>
<blockquote>
<p>There are two types of system components: those that run in a container and those that do not run in a container.<br>
For example:</p>
<ul>
<li>The Kubernetes scheduler and kube-proxy run in a container.</li>
<li>The kubelet and container runtime, for example Docker, do not run in containers.</li>
</ul>
</blockquote>
<p>And:</p>
<blockquote>
<ul>
<li>On machines with systemd, the kubelet and container runtime write to journald. </li>
<li>If systemd is not present, they write to <code>.log</code> files in the <code>/var/log</code> directory.</li>
</ul>
<p>Similarly to the container logs, system component logs in the /var/log directory should be rotated.<br>
In Kubernetes clusters brought up by the kube-up.sh script, those logs are configured to be rotated by the logrotate tool daily or once the size exceeds 100MB.</p>
</blockquote>
<p><a href="https://i.stack.imgur.com/JzhOm.png" rel="noreferrer"><img src="https://i.stack.imgur.com/JzhOm.png" alt="https://d33wubrfki0l68.cloudfront.net/59b1aae2adcfe4f06270b99a2789012ed64bec1f/4d0ad/images/docs/user-guide/logging/logging-node-level.png"></a></p>
|
<p>Are the resources released once the kube job is finished?</p>
<p>I mean the associated pod resources. Let's say a job runs to completion and the associated pod is in a completed state, and it was allocated 0.5 CPU: is the 0.5 CPU released after the job is finished?</p>
<p>Or is it released only after deleting the job?</p>
| <p>A <strong><a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase" rel="nofollow noreferrer">pod phase</a></strong> does not include "completed": you only have "Succeeded":</p>
<blockquote>
<p>All Containers in the Pod have terminated in success, and will not be restarted.</p>
</blockquote>
<p>Since this is a terminal state, its reserved resources won't be taken into consideration by the scheduler.</p>
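<p>As a small illustration (names are hypothetical), a Job whose pod requests 0.5 CPU; that request is counted against the node only while the pod is Pending or Running, not once it reaches the Succeeded phase:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: Job
metadata:
  name: sample-job
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: busybox
          command: ["sh", "-c", "echo done"]
          resources:
            requests:
              cpu: "500m"   # reserved only while the pod is Pending/Running
</code></pre>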
|
<p>I have a Kafka cluster that is running on K8s. I am using the Confluent Kafka image, and I have an EXTERNAL listener that is working.
How can I add SSL encryption? Should I use an ingress? Where can I find good documentation?
Thank you.</p>
| <p>You have a <a href="https://gist.github.com/aramalipoor/f62355cd440584986bfd91bd69e52c49#file-0-readme-md" rel="noreferrer">manual way in this gist</a>, which does not use the confluent image.</p>
<p>But for <a href="https://github.com/confluentinc/cp-helm-charts" rel="noreferrer">Confluent and its Helm chart</a> (see "<a href="https://www.confluent.io/blog/getting-started-apache-kafka-kubernetes/" rel="noreferrer">Confluent Operator: Getting Started with Apache Kafka and Kubernetes</a>" from <a href="https://twitter.com/rohit2b" rel="noreferrer">Rohit Bakhshi</a>), you can follow:</p>
<p>"<a href="https://medium.com/weareservian/encryption-authentication-and-external-access-for-confluent-kafka-on-kubernetes-69c723a612fc" rel="noreferrer">Encryption, authentication and external access for Confluent Kafka on Kubernetes</a>" from <a href="https://github.com/ryan-c-m" rel="noreferrer">Ryan Morris</a></p>
<blockquote>
<p>Out of the box, the helm chart doesn’t support SSL configurations for encryption and authentication, or exposing the platform for access from outside the Kubernetes cluster. </p>
<p>To implement these requirements, there are a few modifications to the installation needed.<br>
In summary, they are:</p>
<ul>
<li>Generate some private keys/certificates for brokers and clients</li>
<li>Create Kubernetes Secrets to provide them within your cluster</li>
<li>Update the broker StatefulSet with your Secrets and SSL configuration</li>
<li>Expose each broker pod via an external service</li>
</ul>
</blockquote>
|
<p>I am creating a k8s cluster on DigitalOcean, but every time I get the same warning after I create the cluster and open it in the Lens IDE.</p>
<p>Here is the screenshot of warning:</p>
<p><a href="https://i.stack.imgur.com/6lZvD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6lZvD.png" alt="screenshot" /></a></p>
<p>I tried every solution I found, but still can't remove the error.</p>
| <p>Check first if <a href="https://github.com/k3s-io/k3s/issues/1857#issuecomment-950154218" rel="noreferrer"><code>k3s-io/k3s</code> issue 1857</a> could help:</p>
<blockquote>
<p>I was getting the same error when I installed kubernetes cluster via <code>kubeadm</code>.</p>
<p>After reading all the comments on the subject, I thought that the problem might be caused by <code>containerd</code> and the following two commands solved my problem, maybe it can help:</p>
<pre class="lang-bash prettyprint-override"><code>systemctl restart containerd
systemctl restart kubelet
</code></pre>
</blockquote>
<p>And:</p>
<blockquote>
<p>This will need to be fixed upstream. I suspect it will be fixed when we upgrade to containerd v1.6 with the cri-api v1 changes</p>
</blockquote>
<p>So checking the <code>containerd</code> version can be a clue.</p>
|
<p>I want to access my Kubernetes cluster API in Go to run <code>kubectl</code> commands to get the available namespaces in my k8s cluster, which is running on Google Cloud.</p>
<p>My sole purpose is to get the namespaces available in my cluster by running a <code>kubectl</code> command; kindly let me know if there is any alternative.</p>
| <p>You can start with <a href="https://github.com/kubernetes/client-go" rel="nofollow noreferrer"><code>kubernetes/client-go</code></a>, the Go client for Kubernetes, made for talking to a kubernetes cluster. (not through kubectl though: directly through the Kubernetes API)</p>
<p>It includes a <a href="https://github.com/kubernetes/client-go/blob/03bfb9bdcfe5482795b999f39ca3ed9ad42ce5bb/listers/core/v1/namespace.go#L28-L35" rel="nofollow noreferrer"><code>NamespaceLister</code>, which helps list Namespaces</a>.</p>
<p>See "<a href="https://medium.com/programming-kubernetes/building-stuff-with-the-kubernetes-api-part-4-using-go-b1d0e3c1c899" rel="nofollow noreferrer">Building stuff with the Kubernetes API — Using Go</a>" from <strong><a href="https://twitter.com/VladimirVivien" rel="nofollow noreferrer">Vladimir Vivien</a></strong></p>
<p><a href="https://i.stack.imgur.com/xVxMG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xVxMG.png" alt="https://cdn-images-1.medium.com/max/1000/1*4noxYkVSvMPmlbt1kSOIHw.png"></a></p>
<p><a href="https://stackoverflow.com/users/396567/michael-hausenblas">Michael Hausenblas</a> (Developer Advocate at Red Hat) proposes <a href="https://stackoverflow.com/questions/52325091/how-to-access-the-kubernetes-api-in-go-and-run-kubectl-commands#comment91597517_52325186">in the comments</a> documentations with <a href="http://using-client-go.cloudnative.sh/" rel="nofollow noreferrer"><code>using-client-go.cloudnative.sh</code></a></p>
<blockquote>
<p>A versioned collection of snippets showing how to use <code>client-go</code>. </p>
</blockquote>
|
<p>I'm trying to start minikube on Ubuntu 18.04 inside the nginx proxy manager Docker network, in order to set up some Kubernetes services and manage the domain names and proxy hosts in the nginx proxy manager platform.</p>
<p>So I have the <code>nginxproxymanager_default</code> Docker network, and when I run <code>minikube start --network=nginxproxymanager_default</code> I get</p>
<blockquote>
<p>Exiting due to GUEST_PROVISION: Failed to start host: can't create with that IP, address already in use</p>
</blockquote>
<p>What might I be doing wrong?</p>
| <p>A similar error was reported with <a href="https://github.com/kubernetes/minikube/issues/12894" rel="nofollow noreferrer">kubernetes/minikube issue 12894</a></p>
<blockquote>
<p>please check whether there are other services using that IP address, and try starting minikube again.</p>
</blockquote>
<p>Considering <a href="https://minikube.sigs.k8s.io/docs/commands/start/" rel="nofollow noreferrer"><code>minikube start</code> man page</a></p>
<blockquote>
<h2><code>--network string</code></h2>
<p>network to run minikube with.<br />
Now it is used by docker/podman and KVM drivers.</p>
<p>If left empty, minikube will create a new network.</p>
</blockquote>
<p>Using an existing NGiNX network (as opposed to docker/podman) might not be supported.</p>
<p>I have seen <a href="https://hackernoon.com/setting-up-nginx-ingress-on-kubernetes-2b733d8d2f45" rel="nofollow noreferrer">NGiNX set up as ingress</a>, not directly as "network".</p>
|
<p>When I try to run a container in the cluster, I get the message "<code>deployment test created</code>", but when I look at the dashboard I can see that it is in an error state (<code>Failed to pull image...</code>): it was not able to pull the image from the local minikube Docker env due to authorization issues.</p>
<p>My steps were:</p>
<ol>
<li>Start minikube using hyperv and setting the <code>--insecure-registry</code> switch to 10.0.0.0/8, also tried 0.0.0.0/0 - Kubernetes version 1.9.0 also tried 1.8.0</li>
<li>Set the <code>docker env</code> to the minikube docker via <code>minikube docker-env | Invoke-Expression</code></li>
<li>build docker image - image builds and exists in minikube local docker</li>
<li><code>kubectl run test --image test-service --port 1101</code></li>
</ol>
<p>This is the result:</p>
<p><a href="https://i.stack.imgur.com/IsEXO.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IsEXO.jpg" alt="Failed Deployment"></a></p>
<p>What am I missing?</p>
<p>As discussed in the comments, <code>openfaas/faas-netes</code> issue 135 illustrates a similar issue and mentions as a possible solution:</p>
<blockquote>
<p><code>imagePullPolicy</code> if not mentioned should have defaulted to <code>Never</code> instead of <code>Always</code>.</p>
</blockquote>
<p>The <a href="https://stackoverflow.com/users/10387/tim-jarvis">OP Tim Jarvis</a> realized then:</p>
<blockquote>
<p>I realized it was not an auth issue, but that it was always wanting to pull from an external repo.<br>
The fix for me was to use the <code>imagePullPolicy</code> of <code>IfNotPresent</code>.</p>
</blockquote>
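<p>For illustration, a minimal Deployment sketch (not from the original discussion) that pins <code>imagePullPolicy: IfNotPresent</code> so the image already built in minikube's Docker daemon is used instead of being pulled from an external registry; the name and port are taken from the question:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: test-service
          imagePullPolicy: IfNotPresent   # only pull if the image is not present locally
          ports:
            - containerPort: 1101
</code></pre>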
|
<p>I have installed a K8S cluster on my laptop using Kubeadm and VirtualBox. It seems a bit odd that the cluster has to be up and running to see the documentation, as shown below.</p>
<pre><code>praveensripati@praveen-ubuntu:~$ kubectl explain pods
Unable to connect to the server: dial tcp 192.168.0.31:6443: connect: no route to host
</code></pre>
<p>Any workaround for this?</p>
| <p>See "<a href="https://blog.heptio.com/kubectl-explain-heptioprotip-ee883992a243" rel="nofollow noreferrer">kubectl explain — #HeptioProTip</a>"</p>
<blockquote>
<p>Behind the scenes, <strong><code>kubectl</code> just made an API request to my Kubernetes cluster</strong>, grabbed the current Swagger documentation of the API version running in the cluster, and output the documentation and object types.</p>
</blockquote>
<p>Try <code>kubectl help</code> as an offline alternative, but that won't be as complete (limited to kubectl itself).</p>
|
<p>I am using <code>launcher.gcr.io/google/jenkins2</code> to run jenkins in gcp kubernetes engine.</p>
<p>Everything seems OK, except that I get a <code>Could not initialize class org.jfree.chart.JFreeChart</code> error for every chart that Jenkins attempts to draw. I googled the error, and almost everyone solves it by adding <code>-Djava.awt.headless=true</code>. As you can guess, I already tried that and it does not work.</p>
<p>Ideas?</p>
| <p>One other possible solution/workaround is seen in <a href="https://issues.jenkins-ci.org/browse/JENKINS-39636" rel="nofollow noreferrer">JENKINS issue 39636</a>:</p>
<blockquote>
<p>I installed <code>libxext6 libxrender1 fontconfig libfontconfig</code> but it didn't help. After that I also installed <code>libjfreechart-java</code> but I still have the same problem.</p>
<p>Commenting out the <code>assistive_technologies</code> line in <code>/etc/java-8-openjdk/accessibility.properties</code> solved it.</p>
</blockquote>
<p>You can see that recommendation in <a href="https://stackoverflow.com/users/433558/tianon">tianon</a>'s <a href="https://stackoverflow.com/questions/21841269/performance-graphs-on-jenkins-causing-could-not-initialize-class-org-jfree-char#comment76900079_41428450">comment of that answer</a>:</p>
<blockquote>
<p>In my case it ended up being <a href="https://bugs.debian.org/798794" rel="nofollow noreferrer">bugs.debian.org/798794</a> (from "<a href="https://askubuntu.com/a/723503/5470">Assistive technology not found error while building <code>aprof-plot</code></a>").<br />
Adding "<code>RUN sed -i 's/^assistive_technologies=/#&/' /etc/java-8-openjdk/accessibility.properties</code>" to my <code>Dockerfile</code> fixed it. :)</p>
</blockquote>
|
<p>With k3d, I am receiving a DNS error when the pod tries to access a URL over the internet.</p>
<pre><code>ERROR:
getaddrinfo EAI_AGAIN DNS could not be resolved
</code></pre>
<p>How can I get past this error?</p>
| <p>It depends on your context, OS, version.</p>
<p>For instance, you will see various proxy issue in <a href="https://github.com/k3d-io/k3d/issues/209" rel="nofollow noreferrer"><code>k3d-io/k3d</code> issue 209</a></p>
<p>this could be related to the way k3d creates the docker network.</p>
<blockquote>
<p>Indeed, k3d creates a custom docker network for each cluster and when this happens resolving is done through the docker daemon.<br />
The requests are actually forwarded to the DNS servers configured in your host's <code>resolv.conf</code>. But through a single DNS server (the embedded one of docker).</p>
<p>This means that if your daemon.json is, like mine, not configured to provide extra DNS servers it defaults to 8.8.8.8 which does not resolve any company address for example.</p>
<p>It would be useful to have a custom options to provide to k3d when it starts the cluster and specify the DNS servers there</p>
</blockquote>
<p>Which is why there is "<a href="https://github.com/k3d-io/k3d/issues/220" rel="nofollow noreferrer">v3/networking: <code>--network</code> flag to attach to existing networks</a>", referring to <a href="https://k3d.io/v5.3.0/design/networking/" rel="nofollow noreferrer">Networking</a>.</p>
<p>Before that new flag:</p>
<blockquote>
<p>For those who have the problem, a simple fix is to mount your <code>/etc/resolv.conf</code> onto the cluster:</p>
<pre><code>k3d cluster create --volume /etc/resolv.conf:/etc/resolv.conf
</code></pre>
</blockquote>
|
<p>I am going to start with an example. Say I have an AKS cluster with three nodes. Each of these nodes runs a set of pods, let's say 5 pods. That's 15 pods running on my cluster in total, 5 pods per node, 3 nodes.</p>
<p>Now let's say that my nodes are not fully utilized at all and I decide to scale down to 2 nodes instead of 3.</p>
<p>When I choose to do this within Azure and change my node count from 3 to 2, Azure will close down the 3rd node. However, it will also delete all pods that were running on the 3rd node. How do I make my cluster reschedule the pods from the 3rd node to the 1st or 2nd node so that I don't lose them and their contents?</p>
<p>The only way I feel safe to scale down on nodes right now is to do the rescheduling manually.</p>
<p>Assuming you are using Kubernetes Deployments (or ReplicaSets), then it should do this for you. Your deployment is configured with a set number of replicas to create for each pod; when you remove a node, the scheduler will see that the current active number is less than the desired number and create new ones.</p>
<p>If you are just deploying pods without a deployment, then this won't happen, and the only solution is manually redeploying, which is why you want to use a deployment.</p>
<p>Bear in mind, though, that what gets created are new pods; you are not moving the previously running pods. Any state you had on the previous pods that is not persisted will be gone. This is how it is intended to work.</p>
|
<p>I am trying to find a DB (object storage) for my application. The application is really a wrapper over the ISTIO network routing API, basically simplifying the ISTIO configuration for my network. A Kubernetes (k8s) Custom Resource Definition (CRD) seems to fit my requirements. I also love the watch and REST API capability provided by CRDs.</p>
<p>DB requirement</p>
<ul>
<li>Few 100 MBs data - worst case</li>
<li>Ability to watch objects</li>
<li>REST API support for object</li>
<li>Persistence</li>
<li>Around 2k writes/sec and similar/more reads/sec. Although I do have my application acting as a proxy for CRD where things can be cached.</li>
</ul>
<p>Why would using CRDs be a good or a bad idea? Are there any performance implications of using CRDs? <a href="https://stackoverflow.com/questions/41063238/using-etcd-as-primary-store-database">This 2016 Stack Overflow answer</a> suggests that etcd data is not in RAM, whereas the etcd link suggests that etcd can do <a href="https://github.com/etcd-io/etcd" rel="nofollow noreferrer">10k writes/sec</a> (so even if things are not in RAM and purely on disk, who cares).</p>
<p>I am seeing multiple application using k8s CRDs.</p>
<ul>
<li>Helm uses CRDs to store releases</li>
<li>Istio uses CRDs to store their networking routing API objects</li>
</ul>
| <p>Considering that (<a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/" rel="nofollow noreferrer">CRD page</a>)</p>
<ul>
<li><p>A resource is an endpoint in the <strong>Kubernetes API</strong> that stores a collection of API objects of a certain kind. For example, the built-in pods resource contains a collection of Pod objects.</p></li>
<li><p>A custom resource is an extension of the <strong>Kubernetes API</strong> that is not necessarily available on every Kubernetes cluster. In other words, it represents a customization of a particular Kubernetes installation.</p></li>
</ul>
<p><a href="https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions" rel="nofollow noreferrer">A <strong>CRD</strong></a> is for extending Kubernetes itself, it is not for application data.</p>
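<p>To make the distinction concrete, here is the canonical <code>CronTab</code> example from the Kubernetes documentation (using the current apiextensions.k8s.io/v1 API): a CRD declares a new API type with a schema that the API server will serve, which is quite different from using etcd as a general-purpose datastore:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # the name must match the plural and group below
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
                replicas:
                  type: integer
</code></pre>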
<p>The <a href="https://github.com/bitnami-labs/helm-crd/blob/7dea271427769f46cd710d5d9a57ae6af811ae23/examples/mariadb.yaml" rel="nofollow noreferrer">helm-crd/examples/mariadb.yaml</a> is about lightweight metadata which will enable Kubernetes to download the right release and through Helm install it.</p>
<p>It is not to store data for a random application which could exist <em>without</em> Kubernetes (as opposed to Helm releases, which make sense only in a Kubernetes deployment scenario)</p>
<p>Similarly, <a href="https://istio.io/docs/concepts/security/" rel="nofollow noreferrer">Istio CRD</a> makes sense only in a Kubernetes context:</p>
<blockquote>
<p>Kubernetes currently implements the Istio configuration on Custom Resource Definitions (CRDs). These CRDs correspond to namespace-scope and cluster-scope CRDs and automatically inherit access protection via the Kubernetes RBAC. </p>
</blockquote>
<p>That approach (using etcd to store any application data) would not scale.</p>
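<p>To make the distinction concrete, here is a minimal sketch of what a CRD actually declares: a new API type, not a general-purpose datastore (the classic <code>CronTab</code> example, assuming a recent cluster with <code>apiextensions.k8s.io/v1</code>):</p>
<pre><code>apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com
spec:
  group: stable.example.com
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                cronSpec:
                  type: string
                replicas:
                  type: integer
</code></pre>
<p>Every object of that kind still ends up in the cluster's etcd, which is sized and tuned for configuration-scale data, not for thousands of application writes per second.</p>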
|
<p>I have Kibana running inside a Docker container in k8s, and I put an nginx as a sidecar to forward all traffic to Kibana based on the path.</p>
<p>Below is my nginx configuration. It works fine for Elasticsearch. But when I open <code>/kibana/</code> in the browser, it redirects to <code>/spaces/enter</code> and shows <code>404 Not Found</code>.</p>
<p>In the Kibana container log I can see a 302 redirect:</p>
<p><code>{"type":"response","@timestamp":"2021-02-24T05:50:32Z","tags":[],"pid":7,"method":"get","statusCode":302,"req":{"url":"/","method":"get","headers":{"connection":"Keep-Alive","proxy-connection":"Keep-Alive","host":"kibana-entrypoint:5601","x-forwarded-for":"49.255.115.150","x-forwarded-proto":"http","x-forwarded-port":"80","x-amzn-trace-id":"Root=1-6035e928-603d67da7eff4225005fdbfc","upgrade-insecure-requests":"1","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 11_1_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.182 Safari/537.36","accept":"text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9","accept-encoding":"gzip, deflate","accept-language":"en-GB,en-US;q=0.9,en;q=0.8"},"remoteAddress":"192.168.1.41","userAgent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 11_1_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.182 Safari/537.36"},"res":{"statusCode":302,"responseTime":7,"contentLength":9},"message":"GET / 302 7ms - 9.0B"}</code></p>
<pre><code>user nginx;
worker_processes 1;
events {
worker_connections 10240;
}
http {
server {
listen 8080;
server_name localhost;
location /es/ {
proxy_pass http://sample-es-entrypoint:9200/;
}
location /health {
proxy_pass http://sample-es-entrypoint:9200/_cluster/health;
}
location /kibana/ {
proxy_pass http://kibana-entrypoint:5601/;
proxy_redirect off;
proxy_buffering off;
proxy_http_version 1.1;
proxy_set_header Connection "Keep-Alive";
proxy_set_header Proxy-Connection "Keep-Alive";
}
}
}
</code></pre>
<p>The question is: why does Kibana forward the request?</p>
| <p><a href="https://github.com/elastic/kibana/pull/66098" rel="nofollow noreferrer"><code>elastic/kibana</code> PR 66098</a> mentions that the <code>/spaces/enter</code> view is used by the space selector UIs to send users to the appropriate default route.</p>
<p>And <a href="https://github.com/elastic/kibana/pull/44678" rel="nofollow noreferrer">PR 44678</a> explains the origin of the notion of space:</p>
<blockquote>
<p>This deprecates the <code>server.defaultRoute</code> from <code>kibana.yml</code> setting (#46787) in favor of an <strong>Advanced Setting controllable through the UI</strong>.<br />
By making this an advanced setting, it inherently becomes space-aware, so users can specify a custom default route per space.</p>
<h3>Transition</h3>
<p>If <code>server.defaultRoute</code> is specified in <code>kibana.yml</code>, it will be mapped to the <code>uiSettings.overrides.defaultRoute</code> setting.<br />
This setting tells the UI Settings Service that the defaultRoute setting is locked, and cannot be edited via the UI.<br />
From a migration perspective, this is functionally equivalent for users upgrading in a minor release: the setting is only controllable via the <code>yml</code> file.</p>
<h3>Functionality</h3>
<p>Users who wish to take advantage of the space-aware routes simply need to remove the <code>server.defaultRoute</code> setting from their <code>yml</code> file, if set.<br />
If unset, the advanced setting defaults to <code>/app/kibana</code>, which was the previous default value for <code>server.defaultRoute</code>.</p>
</blockquote>
<p>So in your case, check the <code>kibana.yml</code> used by your Kibana Docker image: try and remove <code>server.defaultRoute</code> if set.</p>
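<p>A hedged way to check that from the running pod (the deployment name and namespace are placeholders; the config path is the usual default of the official Kibana image, so adjust to your setup):</p>
<pre><code>kubectl exec deploy/<kibana-deployment> -n <namespace> -- \
  grep -i defaultRoute /usr/share/kibana/config/kibana.yml
</code></pre>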
|
<p>I have a k8s controller which needs to install some resources and update the status and conditions accordingly.</p>
<p>The flow in the reconcile is like the following:</p>
<ol>
<li>Install the resource and don’t wait</li>
<li>Call the function <code>checkAvailability</code> and update the status accordingly: ready / pending install / error</li>
</ol>
<p>I’ve two main questions:</p>
<ol>
<li>This is the first time that I use status and conditions; is this the right way, or am I missing something?</li>
<li>Sometimes when I do the update <code>r.Status().Update</code> I get the error <code>Operation cannot be fulfilled on eds.core.vtw.bmw.com “resouce01”: the object has been modified; please apply your changes to the latest version and try again</code>, so I added the check <code>conditionChanged</code>, which solves the problem. But I am not sure it is correct: I update the status once and, if it has not changed, I do not touch it, so the user can see a Ready status from a while ago, and the reconcile does not update the date and time of the Ready condition since it skips it when it is already “ready”.</li>
</ol>
<p>I use the following</p>
<pre><code>func (r *ebdReconciler) checkHealth(ctx context.Context, req ctrl.Request, ebd ebdmanv1alpha1.ebd) (bool, error) {
vfmReady, err := r.mbr.IsReady(ctx, req.Name, req.Namespace)
condition := metav1.Condition{
Type: ebdmanv1alpha1.KubernetesvfmHealthy,
ObservedGeneration: ebd.Generation,
LastTransitionTime: metav1.Now(),
}
if err != nil {
// There was an error checking readiness - Set status to false
condition.Status = metav1.ConditionFalse
condition.Reason = ebdmanv1alpha1.ReasonError
condition.Message = fmt.Sprintf("Failed to check vfm readiness: %v", err)
} else if vfmReady {
// The vfm is ready - Set status to true
condition.Status = metav1.ConditionTrue
condition.Reason = ebdmanv1alpha1.ReasonReady
condition.Message = "vfm custom resource is ready"
} else {
// The vfm is not ready - Set status to false
condition.Status = metav1.ConditionFalse
condition.Reason = ebdmanv1alpha1.ResourceProgressing
condition.Message = "vfm custom resource is not ready"
}
// Get the latest version of the ebd
latestebd := ebdmanv1alpha1.ebd{}
if err := r.Get(ctx, req.NamespacedName, &latestebd); err != nil {
return vfmReady, err
}
oldConditions := latestebd.Status.Conditions
meta.SetStatusCondition(&latestebd.Status.Conditions, condition)
if !conditionChanged(&oldConditions, &latestebd.Status.Conditions, ebdmanv1alpha1.KubernetesvfmHealthy) {
return vfmReady, nil
}
if err := r.Status().Update(ctx, &latestebd); err != nil {
r.Log.Error(err, "failed to update vfm status")
return vfmReady, err
}
return vfmReady, nil
}
func conditionChanged(oldConditions, newConditions *[]metav1.Condition, conditionType string) bool {
newCondition := meta.FindStatusCondition(*newConditions, conditionType)
oldCondition := meta.FindStatusCondition(*oldConditions, conditionType)
if oldCondition == nil && newCondition == nil {
return false
}
if oldCondition == nil || newCondition == nil {
return true
}
return oldCondition.Status != newCondition.Status || oldCondition.Reason != newCondition.Reason || oldCondition.Message != newCondition.Message
}
</code></pre>
| <p>To your questions:</p>
<ol>
<li><p>This is the first time that I use status and conditions; is this the right way, or am I missing something?</p>
<p>Your approach to managing the status and conditions of Kubernetes resources is generally fine. The <a href="https://book-v1.book.kubebuilder.io/basics/status_subresource.html" rel="nofollow noreferrer"><code>status</code> subresource</a> in a Kubernetes API object is typically used to represent the current state of the system, and it can include conditions.</p>
<p>A condition is a collection of fields that describe the state of an object in a more detailed way than just <code>true</code> or <code>false</code>. Each condition typically has a <code>type</code>, <code>status</code>, <code>reason</code>, <code>message</code>, and <code>lastTransitionTime</code>. Your code correctly sets these fields based on whether the <code>vfm</code> custom resource is ready or not.</p>
<p>It is good to note that conditions should be leveled - meaning they should be set to their current observed value regardless of their previous value. They should also be set (either <code>true</code>, <code>false</code>, or <code>unknown</code>) for all the significant or user-meaningful aspects of the component's current state. This makes conditions a good mechanism to indicate "transient states" like <code>Progressing</code> or <code>Degraded</code> that might be expected to change over time or based on external state.</p>
</li>
<li><p>Sometimes when I do the update <code>r.Status().Update</code> I get the error: <code>Operation cannot be fulfilled on eds.core.vtw.bmw.com “resource01”: the object has been modified; please apply your changes to the latest version and try again</code>.</p>
<p>This error occurs because another client updated the same object while your controller was processing it. This could be another controller or even another instance of the same controller (if you run more than one).</p>
<p>One possible way to handle this is to use a retry mechanism that re-attempts the status update when this error occurs. In your case, you have implemented a <code>conditionChanged</code> check to only attempt the status update if the condition has changed. This is a good approach to avoid unnecessary updates, but it does not completely prevent the error, because another client could still update the object between your <code>Get</code> call and <code>Status().Update</code> call.</p>
<p>You could also consider using <code>Patch</code> instead of <code>Update</code> to modify the status, which reduces the risk of conflicting with other updates. Patching allows for partial updates to an object, so you are less likely to encounter conflicts (a short sketch follows this list).</p>
<p>Regarding the timing issue, you could consider updating the <code>LastTransitionTime</code> only when the status actually changes, instead of every time the health check is done. This would mean the <code>LastTransitionTime</code> reflects when the status last changed, rather than the last time the check was performed.</p>
<p>One thing to keep in mind is that frequent updates to the status subresource, even if the status does not change, can cause unnecessary API server load. You should strive to update the status only when it changes.</p>
</li>
</ol>
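<p>A minimal sketch of that <code>Patch</code> approach, assuming the same types as in the question and that <code>sigs.k8s.io/controller-runtime/pkg/client</code> is imported; it computes a merge patch against a deep copy instead of calling <code>Status().Update</code>:</p>
<pre class="lang-golang prettyprint-override"><code>// take a snapshot of the object before modifying it
base := latestebd.DeepCopy()
// apply the new condition on the live object
meta.SetStatusCondition(&latestebd.Status.Conditions, condition)
// send only the difference between base and latestebd to the API server
if err := r.Status().Patch(ctx, &latestebd, client.MergeFrom(base)); err != nil {
    r.Log.Error(err, "failed to patch vfm status")
    return vfmReady, err
}
</code></pre>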
<p>A possible updated version of your <code>checkHealth</code> function considering those points could be:</p>
<pre class="lang-golang prettyprint-override"><code>func (r *ebdReconciler) checkHealth(ctx context.Context, req ctrl.Request, ebd ebdmanv1alpha1.ebd) (bool, error) {
vfmReady, err := r.mbr.IsReady(ctx, req.Name, req.Namespace)
condition := metav1.Condition{
Type: ebdmanv1alpha1.KubernetesvfmHealthy,
Status: metav1.ConditionUnknown, // start with unknown status
}
latestebd := ebdmanv1alpha1.ebd{}
if err := r.Get(ctx, req.NamespacedName, &latestebd); err != nil {
return vfmReady, err
}
oldCondition := meta.FindStatusCondition(latestebd.Status.Conditions, ebdmanv1alpha1.KubernetesvfmHealthy)
if err != nil {
// There was an error checking readiness - Set status to false
condition.Status = metav1.ConditionFalse
condition.Reason = ebdmanv1alpha1.ReasonError
condition.Message = fmt.Sprintf("Failed to check vfm readiness: %v", err)
} else if vfmReady {
// The vfm is ready - Set status to true
condition.Status = metav1.ConditionTrue
condition.Reason = ebdmanv1alpha1.ReasonReady
condition.Message = "vfm custom resource is ready"
} else {
// The vfm is not ready - Set status to false
condition.Status = metav1.ConditionFalse
condition.Reason = ebdmanv1alpha1.ResourceProgressing
condition.Message = "vfm custom resource is not ready"
}
// Only update the LastTransitionTime if the status has changed
if oldCondition == nil || oldCondition.Status != condition.Status {
condition.LastTransitionTime = metav1.Now()
} else {
condition.LastTransitionTime = oldCondition.LastTransitionTime
}
meta.SetStatusCondition(&latestebd.Status.Conditions, condition)
if oldCondition != nil && condition.Status == oldCondition.Status && condition.Reason == oldCondition.Reason && condition.Message == oldCondition.Message {
return vfmReady, nil
}
// Retry on conflict
retryErr := retry.RetryOnConflict(retry.DefaultRetry, func() error {
// Retrieve the latest version of ebd before attempting update
// RetryOnConflict uses exponential backoff to avoid exhausting the apiserver
if getErr := r.Get(ctx, req.NamespacedName, &latestebd); getErr != nil {
return getErr
}
// Re-apply the condition to the freshly fetched object; otherwise the Get above
// would overwrite the condition we are trying to persist
meta.SetStatusCondition(&latestebd.Status.Conditions, condition)
if updateErr := r.Status().Update(ctx, &latestebd); updateErr != nil {
return updateErr
}
return nil
})
if retryErr != nil {
r.Log.Error(retryErr, "Failed to update vfm status after retries")
return vfmReady, retryErr
}
return vfmReady, nil
}
</code></pre>
<p>In this updated version:</p>
<ul>
<li><p>The <code>LastTransitionTime</code> field is updated only when the condition's status changes. This will ensure that the <code>LastTransitionTime</code> accurately reflects when the status was last changed rather than when the <code>checkHealth</code> function was last run. This should provide a more accurate timeline of when the resource's status actually changed, rather than when the reconciliation loop was run.</p>
</li>
<li><p>A retry mechanism is added using <code>retry.RetryOnConflict</code> to re-attempt the status update when a conflict error occurs. Note that you'll need to import the <a href="https://pkg.go.dev/k8s.io/client-go/util/retry" rel="nofollow noreferrer">"<code>k8s.io/client-go/util/retry</code>" package</a> for this.<br />
This is a common pattern for dealing with the <code>Operation cannot be fulfilled...</code> error.</p>
</li>
</ul>
<p>These changes should help to address the issues you were facing with updating the status and conditions of your Kubernetes resources.<br />
Remember that you may still occasionally get conflict errors, especially if there are other clients updating the same object. In these cases, the <code>RetryOnConflict</code> function will retry the update with the latest version of the object.</p>
|
<p>I can create a ConfigMap from a property file and use the same config files inside the Pod. However, I don't want to use a ConfigMap created in the past and supplied with the Helm chart. Rather, in the Helm chart's values.yaml I want to provide a file name from which the ConfigMap will be created dynamically.</p>
<p>Any suggestions/examples are welcome .</p>
<p>Thanks in advance -
Tutai </p>
| <p>See if the approach described in <a href="https://github.com/kubernetes/charts/issues/1310" rel="nofollow noreferrer">kubernetes/charts issue 1310</a> works for you.</p>
<blockquote>
<p>I suggest that we allow for overriding the name of the <code>ConfigMap</code> that gets mounted to the persistent volume.<br>
That way, the parent chart could create, and even make templates for, these <code>ConfigMaps</code>.</p>
<p>For example <code>values.yaml</code> could have the following fields added:</p>
</blockquote>
<pre><code>## alertmanager ConfigMap entries
##
alertmanagerFiles:
# ConfigMap override where full-name is {{.Release.Name}}-{{.Values.alertmanagerFiles.configMapOverrideName}}
configMapOverrideName: ""
...
## Prometheus server ConfigMap entries
##
serverFiles:
# ConfigMap override where full-name is {{.Release.Name}}-{{.Values.serverFiles.configMapOverrideName}}
configMapOverrideName: ""
...
</code></pre>
<p>You can see the implementation of that issue in <a href="https://github.com/kubernetes/charts/commit/2ea776441e305cab858913e2c2806f5122df06dd" rel="nofollow noreferrer">commit 2ea7764</a>, as an example of override.</p>
<hr>
<p>This differs from a file approach, where you create a new config map and replace the old one:</p>
<pre><code>kubectl create configmap asetting --from-file=afile \
-o yaml --dry-run | kubectl replace -f -
</code></pre>
<p>See "<a href="https://medium.com/google-cloud/kubernetes-configmaps-and-secrets-68d061f7ab5b#33e4" rel="nofollow noreferrer">Updating Secrets and ConfigMaps</a>" as an example.</p>
|
<p>My problem is that I need to increase the NATS max_payload in a production environment, but it uses Kubernetes and I have no idea how to do that. I tried to use a ConfigMap but had no success.</p>
<p>In the local/dev environment it uses a NATS config file with Docker, so it works fine.</p>
<p>Way to make it works local: <a href="https://stackoverflow.com/questions/65638631/nats-with-moleculer-how-can-i-change-nats-max-payload-value">NATS with moleculer. How can I change NATS max_payload value?</a></p>
<p>Code inside k8s/deployment.develop.yaml</p>
<pre><code>(...)
apiVersion: v1
kind: ServiceAccount
metadata:
name: nats
namespace: develop
labels:
account: nats
---
apiVersion: v1
kind: Service
metadata:
name: nats
namespace: develop
labels:
app: nats
service: nats
spec:
ports:
- port: 4222
targetPort: 4222
selector:
app: nats
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nats-v1
namespace: develop
labels:
app: nats
version: v1
spec:
replicas: 1
strategy: {}
selector:
matchLabels:
app: nats
version: v1
template:
metadata:
labels:
app: nats
version: v1
spec:
serviceAccountName: nats
containers:
- image: 'nats:alpine'
name: nats
ports:
- containerPort: 4222
restartPolicy: Always
tolerations: []
affinity: {}
</code></pre>
<p>Thanks!</p>
| <p>I would recommend using these <a href="https://github.com/nats-io/k8s/tree/master/helm/charts/nats#limits" rel="nofollow noreferrer">NATS Helm charts</a> and then use something like the following:</p>
<pre class="lang-yaml prettyprint-override"><code>nats:
limits:
maxPayload: "5mb"
</code></pre>
<p>You can find a full example here: <a href="https://github.com/nats-io/k8s/tree/master/helm/charts/nats#limits" rel="nofollow noreferrer">https://github.com/nats-io/k8s/tree/master/helm/charts/nats#limits</a></p>
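<p>A hedged example of wiring that in (repo URL and chart name as documented in the linked repository; the release name is a placeholder):</p>
<pre class="lang-bash prettyprint-override"><code>helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm install my-nats nats/nats -f values.yaml   # values.yaml contains the nats.limits.maxPayload block above
</code></pre>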
|
<p>How can I issue a <code>kubectl run</code> that pulls an environment var from a k8s secret configmap?</p>
<p>Currently I have:</p>
<p><code>kubectl run oneoff -i --rm NAME --image=IMAGE --env SECRET=foo
</code></p>
| <p>Look into the <code>overrides</code> flag of the <code>run</code> command... it reads as:</p>
<blockquote>
<p>An inline JSON override for the generated object. If this is non-empty, it is used to override the generated object. Requires that the object supply a valid apiVersion field.</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run" rel="noreferrer">https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#run</a></p>
<p>So in your case I guess it would be something like:</p>
<pre><code>kubectl run oneoff -i --rm --overrides='
{
"spec": {
"containers": [
{
"name": "oneoff",
"image": "IMAGE",
"env": [
{
"name": "ENV_NAME"
"valueFrom": {
"secretKeyRef": {
"name": "SECRET_NAME",
"key": "SECRET_KEY"
}
}
}
]
}
]
}
}
' --image=IMAGE
</code></pre>
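<p>For completeness, the secret referenced in the override has to exist beforehand; a hedged example with placeholder names:</p>
<pre><code>kubectl create secret generic SECRET_NAME --from-literal=SECRET_KEY=foo
</code></pre>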
|
<p>I have a single Kubernetes service called MyServices which holds four deployments. Each deployment is running as a single pod and each pod has its own port number.</p>
<p><a href="https://i.stack.imgur.com/zSOKs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zSOKs.png" alt="enter image description here"></a></p>
<p>As mentioned all the pods are running inside one kubernetes service. </p>
<p>I am able to call the services through the external IP Address of that kubernetes service and port number. </p>
<p><strong>Example : 92.18.1.1:3011/MicroserviceA</strong> Or <strong>92.18.1.1:3012/MicroserviceB</strong></p>
<p>I am now trying to develop an orchestration layer that calls these services and gets a response from them. However, I am trying to figure out a way in which I do NOT need to specify every micro-service port number; instead I can call them through their endpoint/ServiceName. Example: <strong>192.168.1.1/MicroserviceA</strong></p>
<p>How can I achieve above statement?</p>
<p>From an architecture perspective, is it a good idea to deploy all microservices behind a single Kubernetes service (like my current approach), or does each micro-service need its own service?</p>
<p>Below is the kubernetes deployment file ( I removed the script for micro-service C and D since they are identical to A and B):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myservice
spec:
selector:
app: microservice
ports:
- name: microserviceA
protocol: TCP
port: 3011
targetPort: 3011
- name: microserviceB
protocol: TCP
port: 3012
targetPort: 3012
- name: microserviceC
protocol: TCP
port: 3013
targetPort: 3013
- name: microserviceD
protocol: TCP
port: 3014
targetPort: 3014
type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: microserviceAdeployment
spec:
replicas: 1
template:
metadata:
labels:
app: microservice
spec:
containers:
- image: dockerhub.com/myimage:v1
name: microservice
ports:
- containerPort: 3011
imagePullSecrets:
- name: regcred
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: microserviceBdeployment
spec:
replicas: 1
template:
metadata:
labels:
app: microservice
spec:
containers:
- image: dockerhub.com/myimage:v1
name: microservice
ports:
- containerPort: 3012
</code></pre>
| <p>There is a way to <a href="https://stackoverflow.com/a/39260160/6309">discover all the port of Kubernetes services</a>.</p>
<p>So you could consider using <code>kubectl get svc</code>, as seen in "<a href="https://kubernetes.io/docs/tutorials/services/source-ip/#source-ip-for-services-with-type-nodeport" rel="nofollow noreferrer">Source IP for Services with Type=NodePort</a>"</p>
<pre><code>NODEPORT=$(kubectl get -o jsonpath="{.spec.ports[0].nodePort}" services <yourService>)
</code></pre>
<hr>
<blockquote>
<p>, I am trying to figure out a way in which I do NOT need to specify every micro-service port number, instead I can call them through their endpoint/ServiceName</p>
</blockquote>
<p>Then you need to expose those services through one entry point, typically a reverse-proxy like NGiNX.<br>
The idea is to expose said services using the default ports (80 or 443), and reverse-proxy them to the actual URL and port number.</p>
<p>Check "<a href="https://www.nginx.com/blog/service-discovery-in-a-microservices-architecture/" rel="nofollow noreferrer">Service Discovery in a Microservices Architecture</a>" for the general idea.</p>
<p><a href="https://i.stack.imgur.com/QoMZg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QoMZg.png" alt="https://cdn-1.wp.nginx.com/wp-content/uploads/2016/04/Richardson-microservices-part4-1_difficult-service-discovery.png"></a></p>
<p>And "<a href="https://www.nginx.com/blog/service-discovery-nginx-plus-etcd/" rel="nofollow noreferrer">Service Discovery for NGINX Plus with etcd</a>" for an implementation (using NGiNX plus, so could be non-free).<br>
Or "<a href="https://hackernoon.com/setting-up-nginx-ingress-on-kubernetes-2b733d8d2f45" rel="nofollow noreferrer">Setting up Nginx Ingress on Kubernetes</a>" for a more manual approach.</p>
|
<p>I'm trying to access a .NET Web API which I dockerized and deployed in a Kubernetes cluster on Microsoft Azure.</p>
<p>The application works fine on the local Docker machine.
The cluster is running, my deployment was correct and the pods were created. Everything I check is fine, but I cannot access my application through the external cluster IP (Load Balancer). This is my YAML deployment file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: ohmioapi-deployment
spec:
selector:
matchLabels:
app: ohmioapi
replicas: 1
template:
metadata:
labels:
app: ohmioapi
spec:
containers:
- name: ohmioapi
image: ohmiocontainers.azurecr.io/ohmioapi:latest
imagePullPolicy: Always
ports:
- containerPort: 15200
imagePullSecrets:
- name: acr-auth
---
apiVersion: v1
kind: Service
metadata:
name: ohmioapi
labels:
app: ohmioapi
spec:
selector:
app: ohmioapi
ports:
- port: 15200
nodePort: 30200
protocol: TCP
type: LoadBalancer
</code></pre>
<p>Can anyone give a hint on where to start looking?
Thanks!</p>
| <p>I would give the deployment/pods port a name (e.g. <code>http</code>) and then make the service serve off port 80 but target the pod port by name... that way you don't have to worry about port numbers when connecting to a service.</p>
<p>Also, you shouldn't need or want to use <code>nodePort</code> if you are using type of <code>LoadBalancer</code>.</p>
<p>E.g.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: ohmioapi-deployment
spec:
selector:
matchLabels:
app: ohmioapi
replicas: 1
template:
metadata:
labels:
app: ohmioapi
spec:
containers:
- name: ohmioapi
image: ohmiocontainers.azurecr.io/ohmioapi:latest
imagePullPolicy: Always
ports:
- name: http
containerPort: 15200
imagePullSecrets:
- name: acr-auth
---
apiVersion: v1
kind: Service
metadata:
name: ohmioapi
labels:
app: ohmioapi
spec:
selector:
app: ohmioapi
ports:
- name: http
port: 80
targetPort: http
protocol: TCP
type: LoadBalancer
</code></pre>
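<p>A few hedged checks once this is applied (names as in the manifest above):</p>
<pre><code>kubectl get svc ohmioapi -w               # wait for EXTERNAL-IP to change from <pending> to a real address
kubectl get endpoints ohmioapi            # must list the pod IP and port; an empty list means the selector does not match
kubectl logs deploy/ohmioapi-deployment   # confirm the API is actually listening on 15200 inside the container
</code></pre>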
|
<p>I just started working with ArgoCD and I have an issue I can't find the answer for.</p>
<p>I have a file called <code>clusters.yaml</code> in my Git repo:</p>
<pre><code>clusters:
- name: cluster1-eu-k8s-002
url: https://cluster1.hcp.northeurope.azmk8s.io:443
values:
nameOverride: ReplaceWithThis
</code></pre>
<p>And I am using the following ApplicationSet in order to deploy Opentelemetry-collector on a bunch of clusters grouped under the label <code>group:dev</code>.</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
name: opentelemetry-applicationset
namespace: argocd
spec:
generators:
- git:
repoURL: [email protected]:removed/cloud.git
revision: HEAD
files:
- path: GitOps/argocd-apps/clusters.yaml
- clusters:
selector:
matchLabels:
argocd.argoproj.io/secret-type: cluster
group: dev
template:
metadata:
name: 'opentelemetry-{{name}}'
spec:
project: default
sources:
- repoURL: https://open-telemetry.github.io/opentelemetry-helm-charts
chart: opentelemetry-collector
targetRevision: 0.51.3
helm:
valueFiles:
- $values/GitOps/argocd-apps/opentelemetry-collector/values/values-dev.yaml
parameters:
- name: nameOverride
value: '{{ index .Clusters.values "nameOverride" }}'
- repoURL: [email protected]:removed/cloud.git
ref: values
destination:
server: '{{ server }}'
namespace: opentelemetry
</code></pre>
<p>I am trying to replace a parameter called <code>nameOverride</code> with my value <code>ReplaceWithThis</code> from <code>clusters.yaml</code>.</p>
<p>ArgoCD is not deploying my app because of this line: <code>value: '{{ index .Clusters.values "nameOverride" }}'</code></p>
<p>The ArgoCD ApplicationSet controller logs throw some nonsense errors. I am sure I identified the problem correctly, because it works as expected if I just hardcode the string.</p>
<p>What exactly is the issue with the way I am trying to pull that value?</p>
| <p>In the <a href="https://argo-cd.readthedocs.io/en/stable/user-guide/application-set/" rel="nofollow noreferrer">Argo CD <code>ApplicationSet</code> controller</a>, you are using <code>{{ index .Clusters.values "nameOverride" }}</code> to access the <code>nameOverride</code> value. However, <code>Clusters</code> is an array in your <code>clusters.yaml</code> file, not a dictionary. So, you should not be trying to directly index it as if it is a dictionary. (In YAML, an array (or list) is denoted by items beginning with a dash (<code>-</code>).)</p>
<p>The <code>.Clusters</code> field will contain an <em>array</em> of clusters from your <code>clusters.yaml</code> file, and you want to access the <code>values.nameOverride</code> field of each cluster. However, your current syntax is treating <code>Clusters</code> as if it were a dictionary that can be indexed directly with <code>.values</code>.</p>
<p>You should instead iterate over the <code>Clusters</code> array to access each <code>values</code> dictionary individually. You may need to use a loop structure to do this, or modify your configuration so that <code>values</code> is not nested within an array.</p>
<p>You can also use a different structure for your <code>clusters.yaml</code> file.<br />
If you only have one cluster, you could structure your <code>clusters.yaml</code> file like this:</p>
<pre class="lang-yaml prettyprint-override"><code>clusters:
name: cluster1-eu-k8s-002
url: https://cluster1.hcp.northeurope.azmk8s.io:443
values:
nameOverride: ReplaceWithThis
</code></pre>
<p><em>Then</em>, in this case, you can directly access <code>nameOverride</code> with <code>{{ index .Clusters.values "nameOverride" }}</code>.</p>
<p>If you have multiple clusters and need a unique <code>nameOverride</code> for each, you could create a separate file for each cluster in your repository and adjust the <code>files</code> field in your <code>ApplicationSet</code> to match the new file structure.</p>
<p>That would be how a <a href="https://argocd-applicationset.readthedocs.io/en/stable/Generators-Git/#git-generator-files" rel="nofollow noreferrer">Git file generator</a> would be able to read each of those files, and access the <code>values.nameOverride</code> field of each cluster in their respective file.</p>
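<p>A hedged sketch of that layout (paths and keys are illustrative, and the default, non-Go-template ApplicationSet syntax is assumed; per the Git file generator docs, the keys of each file are exposed as flattened parameters, so <code>values.nameOverride</code> becomes directly addressable):</p>
<pre class="lang-yaml prettyprint-override"><code># GitOps/argocd-apps/clusters/cluster1.yaml  (one file per cluster)
name: cluster1-eu-k8s-002
url: https://cluster1.hcp.northeurope.azmk8s.io:443
values:
  nameOverride: ReplaceWithThis

# in the ApplicationSet generator
generators:
  - git:
      repoURL: [email protected]:removed/cloud.git
      revision: HEAD
      files:
        - path: "GitOps/argocd-apps/clusters/*.yaml"

# in the template
parameters:
  - name: nameOverride
    value: '{{ values.nameOverride }}'
</code></pre>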
|
<p>I am experiencing an issue using the helm_release resource in Terraform.</p>
<p>I basically deployed a <code>kube-prometheus-stack</code> that includes many k8s resources and that works smoothly.</p>
<p>The problem arose when I tried to destroy (or remove) this part, since Helm does not delete all resources (it probably has to do with some garbage-collection rule that keeps them up and running after delete) when it uninstalls the chart. That means I end up with:</p>
<ul>
<li>chart uninstalled</li>
<li>resources still up and running</li>
<li>having to manually go there and remove everything, otherwise if I re-create the thing I get plenty of duplicates</li>
</ul>
<p>I previously asked a question (that I am now closing) related to understanding whether it was a problem with Helm (it is not: by design it deletes everything it can; I am not sure if something can be done in the chart, but anyway I am assuming it won't be done now and quickly). Now I would like to ask if somebody has an idea on how I can manage this directly from Terraform.</p>
<p>Is there something I can use to, for instance, run a <code>kubectl delete</code> command on the labelled resources (or maybe the whole namespace) when the <code>helm_release</code> resource gets destroyed?</p>
<p>Note: I am not adding any code, since this has nothing to do with the code, but more about finding some hook or hack to run a cleanup only after destroy.</p>
<p>p.s.: I also tried to exploit Terraform Cloud post-apply hooks, but I would prefer to solve this without depending on Terraform Cloud and, anyway, I didn't manage to create a dependency on whether the <code>helm_release</code> had been destroyed.</p>
| <p>If you need to solve this problem directly from Terraform, you could consider using a <a href="https://developer.hashicorp.com/terraform/language/resources/provisioners/null_resource" rel="nofollow noreferrer"><code>null_resource</code></a> with a <a href="https://developer.hashicorp.com/terraform/language/resources/provisioners/local-exec" rel="nofollow noreferrer"><code>local-exec</code> provisioner</a> that is triggered when the <a href="https://registry.terraform.io/providers/hashicorp/helm/latest/docs/resources/release" rel="nofollow noreferrer"><code>helm_release</code> resource</a> gets destroyed.<br />
The <code>local-exec</code> provisioner invokes a local executable, in this case <code>kubectl</code>, after a resource is destroyed.</p>
<p>Here is a brief example of how this could work:</p>
<pre><code>resource "helm_release" "example" {
name = "example"
namespace = "default"
chart = "stable/kube-prometheus-stack"
# add your values here...
}
resource "null_resource" "cleanup" {
triggers = {
helm_release_id = helm_release.example.id
}
provisioner "local-exec" {
when = destroy
command = "kubectl delete namespace ${helm_release.example.namespace}"
}
}
</code></pre>
<p>The above configuration runs the <code>kubectl delete namespace</code> command after the <code>helm_release</code> resource is destroyed; the namespace is read from <code>self.triggers</code>, since destroy-time provisioners can only reference attributes of their own resource.</p>
<p><strong>Do test that carefully</strong>: deleting the <em>entire</em> namespace, not just the resources created by the Helm chart, is not a casual operation!<br />
If there are other resources in the namespace that you do not want to delete, you will need to modify the <code>kubectl</code> command to delete only the resources you want.</p>
<p>And note that you would need to have <code>kubectl</code> configured on the machine running Terraform and it needs to have appropriate permissions to delete resources in your Kubernetes cluster.</p>
<p>Also, this <code>null_resource</code> will not get created until after the <code>helm_release</code> is created, due to the dependency in the <code>triggers</code> block. So, if the <code>helm_release</code> creation fails for some reason, the <code>null_resource</code> and its provisioners will not be triggered.</p>
<hr />
<blockquote>
<p>Unfortunately, I am using Terraform Cloud in a CI/CD pipe, therefore I won't be able to exploit the local-exec. But the answer is close to what I was looking for and since I didn't specify about Terraform Cloud is actually right.<br />
Do you have any other idea?</p>
</blockquote>
<p>The <code>local-exec</code> provisioner indeed cannot be used in the Terraform Cloud as it does not support running arbitrary commands on the host running Terraform.</p>
<h2>Kubernetes Provider lifecycle management</h2>
<p>An alternative solution in this context would be to use <a href="https://registry.terraform.io/providers/hashicorp/kubernetes/latest/docs" rel="nofollow noreferrer">Kubernetes providers in Terraform</a> to manage lifecycle of the resources that are left behind.</p>
<p>For example, let's say your Helm chart leaves behind a <code>PersistentVolumeClaim</code> resource. You could manage this using the Kubernetes provider in Terraform:</p>
<pre><code>provider "kubernetes" {
# configuration for your Kubernetes cluster
}
resource "helm_release" "example" {
name = "example"
namespace = "default"
chart = "stable/kube-prometheus-stack"
# add your values
}
data "kubernetes_persistent_volume_claim" "pvc" {
metadata {
name = "my-pvc"
namespace = helm_release.example.namespace
}
}
resource "kubernetes_persistent_volume_claim" "pvc" {
depends_on = [helm_release.example]
metadata {
name = data.kubernetes_persistent_volume_claim.pvc.metadata.0.name
namespace = data.kubernetes_persistent_volume_claim.pvc.metadata.0.namespace
}
spec {
access_modes = data.kubernetes_persistent_volume_claim.pvc.spec.0.access_modes
resources {
requests = {
storage = data.kubernetes_persistent_volume_claim.pvc.spec.0.resources.0.requests["storage"]
}
}
volume_name = data.kubernetes_persistent_volume_claim.pvc.spec.0.volume_name
}
}
</code></pre>
<p>In this example, the <code>kubernetes_persistent_volume_claim</code> resource will delete the PVC when the Terraform stack is destroyed.</p>
<p>You would have to do this for every type of resource that is left behind, so it can be a bit tedious, but it is an option.</p>
<h2>Kubernetes Provider for Job or a script</h2>
<p>Another approach would be using the Kubernetes provider to call a Kubernetes Job or a script that cleans up the resources left behind:</p>
<pre><code>provider "kubernetes" {
# configuration for your Kubernetes cluster goes here
}
resource "helm_release" "example" {
name = "example"
namespace = "default"
chart = "stable/kube-prometheus-stack"
# add your values here...
}
resource "kubernetes_job" "cleanup" {
metadata {
name = "cleanup-job"
namespace = helm_release.example.namespace
}
spec {
template {
metadata {}
spec {
container {
name = "cleanup"
image = "appropriate/curl" # or any image that has kubectl or equivalent tool
command = ["sh", "-c", "kubectl delete ..."] # replace ... with the actual cleanup commands
}
restart_policy = "Never"
}
}
backoff_limit = 4
}
depends_on = [helm_release.example]
}
</code></pre>
<p>In this second example, the <code>kubernetes_job</code> resource is triggered when the <code>helm_release</code> resource is created, running a cleanup script. The cleanup script could delete any resources that are left behind by the Helm chart.</p>
<p>Remember that in both cases, the Kubernetes provider needs to be properly configured and that the Kubernetes cluster permissions must allow the actions you are trying to perform.</p>
<hr />
<p>Regarding the second example, the OP asks if it is possible for the <code>kubernetes_job</code> to be triggered automatically when the <code>helm_release</code> resource gets destroyed.</p>
<p>Unfortunately, Terraform's built-in resources and providers do not provide a direct way to execute something only upon the destruction of another resource. The <code>provisioner</code> block is a way to do this, but as we discussed, it is not suitable for Terraform Cloud and cannot be used with the Kubernetes provider directly.</p>
<p>As an indirect solution, you can create a Kubernetes job that is configured to delete the resources as soon as it is launched, and then use a <a href="https://developer.hashicorp.com/terraform/language/meta-arguments/depends_on" rel="nofollow noreferrer"><code>depends_on</code> reference</a> to the <code>helm_release</code> in the job's configuration. That way, whenever the Helm release is created, the job will be launched as well. When you run <code>terraform destroy</code>, the Helm release will be destroyed and the job will be launched once more, thereby cleaning up the resources.</p>
<p>However, this approach is not perfect because it will also run the job when the resources are first created, not only when they are destroyed.</p>
<p>To address that, you could write your cleanup script such that it is idempotent and will not fail or cause any negative side effects if it is run when it is not necessary (i.e., upon creation of the Helm release).<br />
For example, your script could first check if the resources it is supposed to clean up actually exist before attempting to delete them:</p>
<pre><code>provider "kubernetes" {
# configuration for your Kubernetes cluster goes here
}
resource "helm_release" "example" {
name = "example"
namespace = "default"
chart = "stable/kube-prometheus-stack"
# add your values here...
}
resource "kubernetes_job" "cleanup" {
depends_on = [helm_release.example]
metadata {
name = "cleanup-job"
namespace = helm_release.example.namespace
}
spec {
template {
metadata {}
spec {
container {
name = "cleanup"
image = "appropriate/curl" # or any image that has kubectl or equivalent tool
command = ["sh", "-c",
"if kubectl get <resource> <name>; then kubectl delete <resource> <name>; fi"]
# replace <resource> and <name> with the actual resource and name
}
restart_policy = "Never"
}
}
backoff_limit = 4
}
}
</code></pre>
<p>In this example, the command checks if a specific Kubernetes resource exists before attempting to delete it. That way, the job can be safely run whether the Helm release is being created or destroyed, and the cleanup will only occur if the resource exists.</p>
<p>Do replace <code><resource></code> and <code><name></code> with the actual resource and name of the resource you wish to check and potentially delete.</p>
|
<p>When setting up a new Kubernetes endpoint and clicking "Verify Connection", the error message
"The Kubconfig does not contain user field. Please check the kubeconfig." is always displayed. </p>
<p>I have tried multiple ways of outputting the config file, to no avail. I've also copied and pasted many sample config files from the web, and all end up with the same issue. Has anyone been successful in creating a new endpoint? </p>
| <p>This is followed by <a href="https://github.com/TsuyoshiUshio/KubernetesTask/issues/35" rel="nofollow noreferrer">TsuyoshiUshio/KubernetesTask issue 35</a></p>
<blockquote>
<p>I try to reproduce, however, I can't do it.<br>
I'm not sure, however, I can guess it might the mismatch of the version of the cluster/kubectl which you download by the download <code>task/kubeconfig</code>.<br>
Workaround might be like this:</p>
<ul>
<li>kubectl version in your local machine and check the current server/client version</li>
<li>specify the same version as the server on the download task. (by default it is 1.5.2)</li>
<li>See the log of your release pipeline which is fail, you can see which <code>kubectl</code> command has been executed, do the same thing on your local machine with fitting your local pc's environment.</li>
</ul>
<p><strong>The point is, before go to the VSTS, download the kubectl by yourself</strong>.<br>
Then, put the kubeconfg on the default folder like <code>~/.kube/config</code> or set environment variables <code>KUBECONFIG</code> to the binary.<br>
Then execute kubectl get nodes and make sure if it works. </p>
<p>My kubeconfig is different format with yours. If you use AKS, <code>az aks install-cli</code> command and <code>az aks get-credentials</code> command.<br>
Please refer <a href="https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough" rel="nofollow noreferrer">https://learn.microsoft.com/en-us/azure/aks/kubernetes-walkthrough</a> .</p>
<p><strong>If it works locally, the config file must work on the VSTS task environment</strong>. (or this task or VSTS has a bug)</p>
</blockquote>
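<p>A hedged pair of local sanity checks before pasting the kubeconfig into the endpoint dialog (the file path is a placeholder):</p>
<pre><code>kubectl config view --minify             # the output must contain a users: entry with a token or client certificate
kubectl --kubeconfig=./config get nodes  # the exact file you paste into the endpoint must work locally first
</code></pre>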
|
<p>Is it possible to get a list of pods that are running on the master from kubectl?</p>
<p>I have tried this:</p>
<pre><code>kubectl get pods -o wide --sort-by="{.spec.nodeName}"
</code></pre>
<p>but this doesn't say whether the node is a master or a worker.</p>
| <p>As mentioned in <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/explore/explore-intro/" rel="nofollow noreferrer">the overview</a>:</p>
<blockquote>
<p>A Pod always runs on a <strong><a href="https://kubernetes.io/docs/concepts/architecture/nodes/" rel="nofollow noreferrer">Node</a></strong>.<br>
<strong>A Node is a worker machine in Kubernetes</strong> and may be either a virtual or a physical machine, depending on the cluster. Each Node is managed by the Master</p>
</blockquote>
<p>So by definition (even if it runs on the same physical machine as the master), any node is a "worker machine".</p>
<p><a href="https://i.stack.imgur.com/190yK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/190yK.png" alt="https://d33wubrfki0l68.cloudfront.net/5cb72d407cbe2755e581b6de757e0d81760d5b86/a9df9/docs/tutorials/kubernetes-basics/public/images/module_03_nodes.svg"></a></p>
<p>Only kubectl get node does display a ROLE:</p>
<pre><code>vonc@voncvb:~/.kube$ kubectl get node -o wide
NAME STATUS ROLES AGE VERSION EXTERNAL-IP OS-IMAGE KERNEL-VERSION
serv0.server Ready <none> 18d v1.9.7 <none> SUSE CaaS Platform 3.0 docker://x.y.z.z
serv1.server Ready <none> 18d v1.9.7 <none> SUSE CaaS Platform 3.0 docker://x.y.z.z
serv2.server Ready <none> 18d v1.9.7 <none> SUSE CaaS Platform 3.0 docker://x.y.z.z
servm.server Ready master 18d v1.9.7 <none> SUSE CaaS Platform 3.0 docker://x.y.z.z
^^^^^^^
</code></pre>
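<p>If you still want a single command that lists only the pods scheduled on the master, a hedged one-liner (the role label is <code>node-role.kubernetes.io/master</code> on older clusters and <code>node-role.kubernetes.io/control-plane</code> on newer ones):</p>
<pre><code>MASTER=$(kubectl get nodes -l node-role.kubernetes.io/master -o jsonpath='{.items[0].metadata.name}')
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=$MASTER
</code></pre>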
|
<p>As per k8s docs:</p>
<p><a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy</a></p>
<blockquote>
<p>Note: If your job has restartPolicy = "OnFailure", keep in mind that your Pod running the Job will be terminated once the job backoff limit has been reached. This can make debugging the Job's executable more difficult. We suggest setting restartPolicy = "Never" when debugging the Job or using a logging system to ensure output from failed Jobs is not lost inadvertently.</p>
</blockquote>
<p>I am very confused with this note:</p>
<ol>
<li><p>If I understand correctly, the backoff count increments only with a failed Pod, not a failed container. So with <code>restartPolicy = "OnFailure"</code> the same container in the Pod will restart in an infinite loop, never increasing the backoff count unless the node fails?</p>
</li>
<li><p>If #1 is correct, then this footnote makes no sense, as there is no difference between <code>restartPolicy = "OnFailure"</code> and <code>restartPolicy = "Never"</code>: the last Pod will be lost with the node failure anyway.</p>
</li>
<li><p>The footnote could make some sense in the case of parallel execution. E.g., if there are 2 Pods with <code>restartPolicy = "OnFailure"</code> and the backoff limit set to 2: if the first pod is in an error loop, and the second pod failed 2 times due to node issues, the first pod is terminated by the Job controller to exit the error loop. Is that what this footnote is about? Seems like an extremely long shot to me.</p>
</li>
</ol>
<p>I feel like there is some easier logical reason in the Pod lifecycle that leads to this behaviour, but I still can't put my finger on it.</p>
| <blockquote>
<ol>
<li>If I understand correctly backoff count increments with failed Pod only, not failed container.</li>
</ol>
</blockquote>
<p>Yes, the backoff count increments only with a failed Pod, not with a failed container. If the container fails but the Pod is healthy (e.g., node issues), then with <code>restartPolicy = "OnFailure"</code>, the container within the Pod will be restarted indefinitely without increasing the backoff count. The Job's backoff limit will not be reached as long as the Pod itself does not fail.</p>
<blockquote>
<ol start="2">
<li>If #1 is correct - then this footnote makes no sense as there is no difference either with <code>restartPolicy = "OnFailure"</code> or <code>restartPolicy = "Never"</code></li>
</ol>
</blockquote>
<p>The primary difference is that with <code>restartPolicy = "OnFailure"</code>, the container will keep restarting within the same Pod if it fails, whereas with <code>restartPolicy = "Never"</code>, it will not restart. If a Pod fails, both policies will result in the Job controller tracking the failure towards the backoff limit.</p>
<p>You are right that node failure will cause the last Pod to be lost in both cases.</p>
<blockquote>
<ol start="3">
<li>Footnote could make some sense in case of parallel execution</li>
</ol>
</blockquote>
<p>The footnote seems more focused on the difficulty of debugging when using <code>restartPolicy = "OnFailure"</code>, as the continual restarting of the container could lead to loss of logs or make it more challenging to examine the state of a failed container.</p>
<p>This is more about best practices for debugging rather than describing a specific edge case in Pod lifecycle. For debugging purposes, using <code>restartPolicy = "Never"</code> would allow easier inspection of failed containers without them being restarted, which could be helpful for troubleshooting.</p>
<hr />
<p>If all this was not confusing enough, you also have <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#backoff-limit-per-index" rel="nofollow noreferrer">Backoff limit per index</a> (alpha, K8s 1.28+), which specifically affects Indexed Jobs, where each <code>Pod</code> corresponds to an index, and allows a more fine-grained control over the failure handling for each index.</p>
<hr />
<blockquote>
<p>I feel like there is some easier logical reason in the Pod lifecycle that leads to this behaviour, but I still can't put my finger on it.</p>
</blockquote>
<p>The confusion might stem from the apparent contradiction that setting <code>restartPolicy = "OnFailure"</code> could cause a Job to terminate when the backoff limit is reached, even though this policy restarts containers indefinitely if the Pod itself does not fail.</p>
<p>So, the "easier logical reason" in Pod lifecycle leading to this behavior is how the <code>restartPolicy</code> interacts with the container and Pod states. It is about the scope and level at which the restarts are managed (container vs. Pod), and how these settings affect the ability to debug and control <code>Job</code> execution.</p>
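<p>A minimal sketch of the debugging-friendly setup the docs recommend (the image and command are placeholders that always fail, just to show the behaviour):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1
kind: Job
metadata:
  name: debug-job
spec:
  backoffLimit: 2          # counts failed Pods
  template:
    spec:
      restartPolicy: Never # each failure produces a new Pod that is kept around
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "exit 1"]
</code></pre>
<p>With <code>restartPolicy: Never</code>, each failed attempt leaves a Pod behind, so <code>kubectl get pods -l job-name=debug-job</code> and <code>kubectl logs <pod></code> still work after the Job gives up.</p>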
|
<p>I want to configure a Jenkins server to execute commands against Kubernetes. I created a token using:</p>
<pre><code>kubectl create sa cicd
kubectl get sa,secret
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: cicd
spec:
serviceAccount: cicd
containers:
- image: nginx
name: cicd
EOF
kubectl exec cicd -- cat /run/secrets/kubernetes.io/serviceaccount/token && echo
kubectl create token cicd
kubectl create token cicd --duration=999999h
kubectl create clusterrole cicd --verb=get,list --resource=namespaces
kubectl create clusterrolebinding cicd --clusterrole=cicd --serviceaccount=default:cicd
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
name: cicd
annotations:
kubernetes.io/service-account.name: "cicd"
EOF
kubectl get sa,secret
kubectl describe secret cicd
kubectl describe sa cicd
kubectl get sa cicd -oyaml
kubectl get sa,secret
</code></pre>
<p>Test:</p>
<pre><code>curl -k https://10.0.0.x:6443/api/v1/namespaces -H "Authorization: Bearer <.......>"
</code></pre>
<p>I copied these credentials into <code>~/.kube/config</code>:</p>
<pre><code>apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
certificate-authority-data: <.....>
server: https://10.0.0.x:6443
name: cluster.local
contexts:
- context:
cluster: cluster.local
user: grafana
name: grafana
current-context: grafana
users:
- name: grafana
user:
token: <.....>
</code></pre>
<p>Jenkins configuration:</p>
<pre><code>pipeline {
agent any
stages {
.......
stage('helm deploy') {
steps {
script {
withKubeConfig([credentialsId: 'config_de']) {
..........
}
}
}
}
}
}
</code></pre>
<p>But I get error:</p>
<pre><code>Error: Kubernetes cluster unreachable: Get "https://x.x.x.x:6443/version": tls: failed to verify certificate: x509: certificate is valid for 10.x.x.x, 10.x.x.x, 127.0.0.1, not x.x.x.x
</code></pre>
<p>Do you know how I have to configure the IP properly?</p>
| <p>I would first check the Subject Alternative Name (SAN) details of a certificate using OpenSSL:</p>
<pre class="lang-bash prettyprint-override"><code>openssl s_client -connect 10.0.0.x:6443 -showcerts </dev/null 2>/dev/null | \
openssl x509 -text -noout | \
awk '/X509v3 Subject Alternative Name:/{flag=1; next} /X509v3/{flag=0} flag'
</code></pre>
<p>After you have identified the SAN details, choose one of the IP addresses or DNS names listed in the SAN to be used in Jenkins. Make sure to update the <code>kubeconfig</code> file or any other Kubernetes configuration in Jenkins to use this address. Specifically, the <code>server</code> URL in the <code>kubeconfig</code> file under the <code>clusters</code> section should match one of the addresses or DNS names in the SAN.</p>
<p>For example, if your SAN shows <code>DNS:kubernetes, DNS:kubernetes.default, IP Address:10.x.x.x, IP Address:127.0.0.1</code>, then your <code>kubeconfig</code> might look like this:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
certificate-authority-data: <.....>
server: https://10.x.x.x:6443 # That IP should match one from the SAN
name: cluster.local
</code></pre>
<p>Update this file and make sure Jenkins uses this updated <code>kubeconfig</code> in its pipeline configuration.</p>
<hr />
<pre class="lang-bash prettyprint-override"><code>rm /etc/kubernetes/pki/apiserver.* -f
kubeadm init phase certs apiserver --apiserver-cert-extra-sans 10.0.0.x \
--apiserver-cert-extra-sans 10.233.0.x --apiserver-cert-extra-sans localhost
</code></pre>
<p>The solution of deleting the old certificate and creating a new one with <code>kubeadm</code> essentially replaces the Kubernetes API server's certificate with a new one that contains the additional Subject Alternative Names (SANs).</p>
<p>The command specifies these extra SANs with the <code>--apiserver-cert-extra-sans</code> flags. The new certificate will therefore be valid for connections originating from those IP addresses, in addition to any originally configured.</p>
<p>Once the new certificate is in place, update the <code>kubeconfig</code> file or any other Kubernetes configuration in Jenkins to match one of the addresses or DNS names in the new SAN list.</p>
<p>For example, your <code>kubeconfig</code> may look like:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
certificate-authority-data: <.....>
server: https://10.0.0.x:6443 # That IP should match one from the new SAN
name: cluster.local
</code></pre>
<p>Update this file and ensure that Jenkins uses this updated <code>kubeconfig</code> in its pipeline configuration.</p>
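<p>As a hedged follow-up (kubeadm default paths assumed): the kube-apiserver static pod has to be restarted to serve the regenerated certificate, after which you can re-check the SAN list from the address Jenkins uses:</p>
<pre class="lang-bash prettyprint-override"><code># moving the static pod manifest out and back makes kubelet recreate the kube-apiserver pod
mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/ && sleep 20 && \
  mv /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/

# then verify the SAN list now contains the IP Jenkins connects to
openssl s_client -connect 10.0.0.x:6443 -showcerts </dev/null 2>/dev/null | \
  openssl x509 -text -noout | grep -A1 "Subject Alternative Name"
</code></pre>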
|
<p>I am using Kubernetes and Google Cloud to host my web application,
and I was looking for a way to increase the time spent waiting for the server response before returning the error
<code>Internal server error Sorry, there seems to be an error. Please try again soon.</code>
I tried to search through the Kubernetes objects but found no result.
Thank you.</p>
| <p>If I understand correctly, you have a Django application hosted on a Kubernetes cluster, which is managed on Google Cloud.</p>
<p>There are still missing pieces, though:</p>
<ul>
<li>the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress-controllers/" rel="nofollow noreferrer">Kubernetes Ingress Controller</a> you are using, I will assume <a href="https://github.com/kubernetes/ingress-nginx" rel="nofollow noreferrer">NGINX</a></li>
<li>the <a href="https://wsgi.readthedocs.io/en/latest/what.html" rel="nofollow noreferrer">Web Server Gateway Interface (WSGI)</a> used by your <a href="https://docs.djangoproject.com/en/4.2/howto/deployment/wsgi/" rel="nofollow noreferrer">Django application</a>. I will assume <a href="https://gunicorn.org/" rel="nofollow noreferrer">Gunicorn</a>.</li>
</ul>
<p>That would be:</p>
<pre><code>+----------------------+
| Google Cloud |
| Load Balancer |
+----------------------+
|
v
+-----------------------+
| Kubernetes Cluster |
| NGINX Ingress |
| Controller |
+-----------------------+
|
v
+----------------------------+
| Django Application |
| (served via Gunicorn) |
+----------------------------+
</code></pre>
<ul>
<li><p><strong>Google Cloud Load Balancer</strong>: responsible for distributing incoming network traffic across several servers to ensure no single server becomes overwhelmed with too much traffic. It is the entry point for HTTP/S requests.</p>
</li>
<li><p><strong>Kubernetes NGINX Ingress Controller</strong>: manages external access to the services in the cluster, typically HTTP. It can provide load balancing, SSL termination, and name-based virtual hosting.</p>
</li>
<li><p><strong>Django Application (served via Gunicorn)</strong>: That is where the web application lives. Gunicorn acts as a WSGI HTTP Server for Python web applications, serving the Django application to handle incoming HTTP requests.</p>
</li>
</ul>
<p>Then you can identify the settings impacting timeout, at each level.</p>
<ul>
<li><p><strong>Django Settings</strong>: There is no setting directly affecting the waiting time for a response from the server before returning an error. You might want to look into custom middleware that can handle timeouts in a more graceful manner.</p>
<p>Note that an "<a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/500" rel="nofollow noreferrer">500 Internal server error</a>" can originate from the Django application itself (e.g., unhandled exceptions, misconfigurations, bugs in the code, etc.) or from other elements in the infrastructure, such as the database, cache, or other backend services that the Django application depends on.<br />
Make sure <a href="https://docs.djangoproject.com/en/4.2/topics/logging/" rel="nofollow noreferrer">Django's logging</a> is properly configured to capture errors and exceptions (see <a href="https://docs.djangoproject.com/en/4.2/topics/logging/#examples" rel="nofollow noreferrer"><code>settings.py</code></a>).<br />
Check the logs with <a href="https://cloud.google.com/logging" rel="nofollow noreferrer">Cloud Logging</a> or with a <a href="https://cloud.google.com/code/docs/shell/view-logs" rel="nofollow noreferrer"><code>kubectl logs <pod-name></code> from the Cloud shell</a>.</p>
</li>
<li><p><strong>Gunicorn Settings</strong>: you can increase the timeout setting by modifying the <a href="https://docs.gunicorn.org/en/stable/settings.html#timeout" rel="nofollow noreferrer"><code>--timeout</code> flag</a> when starting Gunicorn (<code>gunicorn --timeout 120 myproject.wsgi:application</code>); a short Deployment sketch follows the settings list below.</p>
</li>
<li><p><strong>Kubernetes Ingress Settings</strong>: if you are using the NGINX Ingress controller, you can set the <a href="https://github.com/kubernetes/ingress-nginx/blob/main/docs/user-guide/nginx-configuration/annotations.md#custom-timeouts" rel="nofollow noreferrer"><code>nginx.ingress.kubernetes.io/proxy-read-timeout</code> and <code>nginx.ingress.kubernetes.io/proxy-send-timeout</code> annotations</a> on your Ingress object.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
nginx.ingress.kubernetes.io/proxy-send-timeout: "120"
...
</code></pre>
</li>
</ul>
<ol start="4">
<li><strong>Google Cloud Load Balancer Settings</strong>:
<ul>
<li>On Google Cloud, if you are using a load balancer in front of your Kubernetes cluster, you might also need to configure the timeout settings on the load balancer itself, as sketched after this list.</li>
</ul>
</li>
</ol>
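<p>As a hedged illustration (the names are made up, and this only applies when the load balancer in front is an HTTP(S) load balancer created through GKE Ingress; a Service of type <code>LoadBalancer</code> creates a passthrough TCP load balancer that has no such proxy timeout), the backend response timeout can be raised with a <code>BackendConfig</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  timeoutSec: 120          # how long the load balancer waits for a backend response
---
apiVersion: v1
kind: Service
metadata:
  name: my-django-service
  annotations:
    cloud.google.com/backend-config: '{"default": "my-backendconfig"}'
spec:
  selector:
    app: my-django-app
  ports:
  - port: 80
    targetPort: 8000
</code></pre>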
<p>But again, a "500 Internal Server Error" might not be "resolved" by adding more time: logs are crucial to understand what is going on.</p>
|
<p>I am trying to set up conditions in the bash script that will make sure if the file exists skip the entire code and go to the next part</p>
<pre><code> echo "k8s: starting the init script"
if [ ! -e /etc/sawtooth/keys/validator.priv ]; then
echo $pbft0priv > /etc/sawtooth/keys/validator.priv
echo $pbft0pub > /etc/sawtooth/keys/validator.pub
fi &&
echo "k8s: checking for keys"
if [ ! -e /root/.sawtooth/keys/my_key.priv ]; then
sawtooth keygen my_key
fi &&
echo "k8s: checking for config-genesis.batch"
if [ ! -e /var/lib/sawtooth/genesis.batch.file ]; then
if [ ! -e config-genesis.batch ]; then
sawset genesis -k /root/.sawtooth/keys/my_key.priv -o config-genesis.batch
fi &&
echo "k8s: sleeping for 30 sec"
sleep 30 &&
echo sawtooth.consensus.pbft.members=["\"$pbft0pub\",\"$pbft1pub\",\"$pbft2pub\",\"$pbft3pub\",\"$pbft4pub\""] &&
if [ ! -e config.batch ]; then
sawset proposal create \
-k /root/.sawtooth/keys/my_key.priv \
sawtooth.consensus.algorithm.name=pbft \
sawtooth.consensus.algorithm.version=1.0\
sawtooth.consensus.pbft.members=["\"$pbft0pub\",\"$pbft1pub\",\"$pbft2pub\",\"$pbft3pub\",\"$pbft4pub\""] \
sawtooth.publisher.max_batches_per_block=1200 \
-o config.batch
fi && \
fi &&
if [ ! -e /var/lib/sawtooth/genesis.batch ]; then
sawadm genesis config-genesis.batch config.batch
fi &&
sawtooth-validator -vv \
--endpoint tcp://$SAWTOOTH_0_SERVICE_HOST:8800 \
--bind component:tcp://eth0:4004 \
--bind consensus:tcp://eth0:5050 \
--bind network:tcp://eth0:8800 \
--scheduler parallel \
--peering static \
--maximum-peer-connectivity 10000
</code></pre>
<p>But I get an error:<br>
<code>line 28: syntax error near unexpected token `fi'</code><br>
<code>line 28: ` fi &&'</code></p>
<p>This is line no. 28:</p>
<pre><code> 26: -o config.batch
27: fi && \
28: fi &&
</code></pre>
| <p>Your problem is in line 27:</p>
<pre><code>fi && \
</code></pre>
<p>That <code>&& \</code> doesn't belong there. It's the last <code>if</code> in the block, so you shouldn't put <code>&&</code> there since there are no commands afterwards.</p>
<pre class="lang-bash prettyprint-override"><code>if true; then
if true; then
echo one
fi &&
if true; then
echo two
fi
fi &&
echo three
</code></pre>
|
<p>How do pods get unique IP addresses even if they reside on the same worker node?</p>
<p>Also, a pod is not a device, so what is the logic behind assigning it an IP address?<br>
Is the IP address assigned to a pod a virtual IP? </p>
| <p>A <a href="https://kubernetes.io/docs/concepts/workloads/pods/pod/" rel="noreferrer">pod</a> is part of a cluster (group of nodes), and <a href="https://kubernetes.io/docs/concepts/cluster-administration/networking/" rel="noreferrer">cluster networking</a> tells you that:</p>
<blockquote>
<p>In reality, Kubernetes applies IP addresses at the Pod scope - containers within a Pod share their network namespaces - including their IP address. </p>
<p>This means that containers within a Pod can all reach each other’s ports on localhost.<br>
This does imply that containers within a Pod must coordinate port usage, but this is no different than processes in a VM.<br>
This is called the <strong>“IP-per-pod” model</strong>.</p>
</blockquote>
<p>The constraints are:</p>
<blockquote>
<ul>
<li>all containers can communicate with all other containers without NAT</li>
<li>all nodes can communicate with all containers (and vice-versa) without NAT</li>
<li>the IP that a container sees itself as is the same IP that others see it as</li>
</ul>
</blockquote>
<p>See more with "<a href="https://medium.com/practo-engineering/networking-with-kubernetes-1-3db116ad3c98" rel="noreferrer"><strong>Networking with Kubernetes</strong></a>" from <a href="https://twitter.com/alsingh87" rel="noreferrer"><strong>Alok Kumar Singh</strong></a>:</p>
<p><a href="https://i.stack.imgur.com/O62bV.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/O62bV.gif" alt="https://cdn-images-1.medium.com/max/1000/1*lAfpMbHRf266utcd4xmLjQ.gif"></a></p>
<p>Here:</p>
<blockquote>
<p>We have a machine, it is called a <strong>node</strong> in kubernetes.<br>
It has an IP 172.31.102.105 belonging to a subnet having CIDR 172.31.102.0/24.</p>
</blockquote>
<p>(<a href="https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing" rel="noreferrer">CIDR: Classless Inter-Domain Routing</a>, a method for allocating IP addresses and IP routing)</p>
<blockquote>
<p>The node has an network interface <code>eth0</code> attached. It belongs to root network namespace of the node.<br>
For pods to be isolated, they were created in their own network namespaces — these are pod1 n/w ns and pod2 n/w ns.<br>
The pods are assigned IP addresses 100.96.243.7 and 100.96.243.8 from the CIDR range 100.96.0.0/11.</p>
</blockquote>
<p>For how those pod IPs are actually implemented, see "<a href="https://cloudnativelabs.github.io/post/2017-04-18-kubernetes-networking/" rel="noreferrer"><strong>Kubernetes Networking</strong></a>" from <a href="https://twitter.com/cloudnativelabs" rel="noreferrer"><strong>CloudNativelabs</strong></a>:</p>
<blockquote>
<p>Kubernetes does not orchestrate setting up the network and offloads the job to the <strong><a href="https://github.com/containernetworking/cni" rel="noreferrer">CNI (Container Network Interface)</a></strong> plug-ins. Please refer to the <strong><a href="https://github.com/containernetworking/cni/blob/master/SPEC.md" rel="noreferrer">CNI spec</a></strong> for further details on CNI specification. </p>
<p>Below are possible network implementation options through CNI plugins which permits pod-to-pod communication honoring the Kubernetes requirements:</p>
<ul>
<li>layer 2 (switching) solution</li>
<li>layer 3 (routing) solution</li>
<li>overlay solutions</li>
</ul>
</blockquote>
<h2>layer 2 (switching)</h2>
<p><a href="https://i.stack.imgur.com/VYSiH.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/VYSiH.jpg" alt="https://cloudnativelabs.github.io/img/l2-network.jpg"></a></p>
<p>You can see their IPs assigned as part of a container subnet address range.</p>
<h2>layer 3 (routing)</h2>
<p><a href="https://i.stack.imgur.com/Vkt6G.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/Vkt6G.jpg" alt="https://cloudnativelabs.github.io/img/l3-gateway-routing.jpg"></a></p>
<p>This is about populating the default gateway router with routes for the subnet as shown in the diagram.<br>
Routes to 10.1.1.0/24 and 10.1.2.0/24 are configured to be through node1 and node2 respectively. </p>
<h2>overlay solutions</h2>
<p>Generally not used.</p>
<p>Note: See also (Oct. 2018): "<a href="https://cloud.google.com/kubernetes-engine/docs/concepts/network-overview" rel="noreferrer">Google Kubernetes Engine networking</a>".</p>
|
<p>The Istio ingress gateway exposes the following ports by default:</p>
<pre><code>80:31380/TCP,443:31390/TCP,31400:31400/TCP
</code></pre>
<p>Why does it expose 31400 and map it to 31400? I can't find an explanation of this in the docs or elsewhere.</p>
<p><strong>Background:</strong> I'm following the <a href="https://istio.io/docs/setup/kubernetes/helm-install/#option-1-install-with-helm-via-helm-template" rel="noreferrer">Install with Helm via <code>helm template</code></a> guide using Istio 0.8.0. The deployment manifest is built from <a href="https://github.com/istio/istio/tree/0.8.0/install/kubernetes/helm/istio" rel="noreferrer">https://github.com/istio/istio/tree/0.8.0/install/kubernetes/helm/istio</a>, giving the following ingress gateway service definition:</p>
<pre><code># Source: istio/charts/ingressgateway/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
name: istio-ingressgateway
namespace: istio-system
labels:
chart: ingressgateway-0.8.0
release: istio
heritage: Tiller
istio: ingressgateway
spec:
type: NodePort
selector:
istio: ingressgateway
ports:
-
name: http
nodePort: 31380
port: 80
-
name: https
nodePort: 31390
port: 443
-
name: tcp
nodePort: 31400
port: 31400
</code></pre>
| <p><a href="https://github.com/istio/istio/commit/a4b6cc55dd2066f6c2bbe8fdf6d39657f44f444e" rel="noreferrer">Commit a4b6cc5</a> mentions:</p>
<blockquote>
<p>Adding the 31400 port back because of testdata dependency</p>
</blockquote>
<p>This is part of <a href="https://github.com/istio/istio/pull/6350" rel="noreferrer"><code>istio/istio</code> PR 6350</a></p>
<blockquote>
<p>These changes add support for multiple ingress/egress gateway configuration in the Helm charts.<br>
The new gateways field is an array that by default has one configuration (as it was before) but allows users to add more configurations to have multiple ingress/egress gateways deployed when installing the charts.</p>
</blockquote>
<p>See <a href="https://github.com/istio/istio/pull/6350/commits/05cba4e6570c1350a6b532355b7b4cc9c857c8e7" rel="noreferrer">commit 05cba4e</a>.</p>
|
<h1>Question</h1>
<p>Given this single-line string:</p>
<pre><code>PG_USER=postgres PG_PORT=1234 PG_PASS=icontain=and*symbols
</code></pre>
<p>What would be the right way to assign each value to its designated variable so that I can use it afterward?</p>
<hr />
<h1>Context</h1>
<p>I'm parsing the context of a k8s secret within a <code>CronJob</code> so that I can periodically call a Stored Procedure in our Postgres database.</p>
<p>To do so, I plan on using:</p>
<pre class="lang-sh prettyprint-override"><code>PG_OUTPUT_VALUE=$(PGPASSWORD=$PG_PASSWD psql -qtAX -h $PG_HOST -p $PG_PORT -U $PG_USER -d $PG_DATABASE -c $PG_TR_CLEANUP_QUERY)
echo $PG_OUTPUT_VALUE
</code></pre>
<p>The actual entire helm chart I'm currently trying to fix looks like this:</p>
<pre><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: {{ template "fullname" $ }}-tr-cleanup-cronjob
spec:
concurrencyPolicy: Forbid
schedule: "* * * * *"
jobTemplate:
spec:
template:
spec:
restartPolicy: OnFailure
volumes:
- name: postgres
secret:
secretName: {{ template "fullname" $ }}-postgres
containers:
- name: {{ template "fullname" $ }}-tr-cleanup-pod
image: postgres:12-alpine
imagePullPolicy: Always
env:
- name: PG_PROPS
valueFrom:
secretKeyRef:
name: {{ template "fullname" $ }}-postgres
key: postgres.properties
command:
- /bin/sh
- -c
- echo "props:" && echo $PG_PROPS && PG_USER=$(grep "^PG_USER=" | cut -d"=" -f2-) && echo $PG_USER && PG_TR_CLEANUP_QUERY="SELECT something FROM public.somewhere;" && echo $PG_TR_CLEANUP_QUERY && PG_OUTPUT_VALUE=$(PGPASSWORD=$PG_PASSWD psql -qtAX -h $PG_HOST -p $PG_PORT -U $PG_USER -d $PG_DATABASE -c $PG_TR_CLEANUP_QUERY) && echo PG_OUTPUT_VALUE
volumeMounts:
- name: postgres
mountPath: /etc/secrets/postgres
</code></pre>
<h3>Current approach</h3>
<p>As you can see, I'm currently using:</p>
<pre class="lang-sh prettyprint-override"><code>PG_USER=$(grep "^PG_USER=" | cut -d"=" -f2-)
</code></pre>
<p>That is because I initially thought the secret would be output on multiple lines, but it turns out that I was wrong. The <code>echo $PG_USER</code> displays an empty string.</p>
| <p>The bash <code>declare</code> command is appropriate here, and is safer than <code>eval</code>.</p>
<p>Suppose the input contains something potentially malicious</p>
<pre class="lang-sh prettyprint-override"><code>line='PG_USER=postgres PG_PORT=1234 PG_PASS=icontain=and*symbols`ls`'
</code></pre>
<p>I'm assuming none of the values contain whitespace. Let's split that string</p>
<pre class="lang-sh prettyprint-override"><code>read -ra assignments <<< "$line"
</code></pre>
<p>Now, <code>declare</code> each one</p>
<pre class="lang-sh prettyprint-override"><code>for assignment in "${assignments[@]}"; do declare "$assignment"; done
</code></pre>
<p>Everywhere we examine the input, we maintain double quotes.</p>
<p>Let's see what we ended up with:</p>
<pre class="lang-sh prettyprint-override"><code>$ declare -p PG_USER PG_PORT PG_PASS
declare -- PG_USER="postgres"
declare -- PG_PORT="1234"
declare -- PG_PASS="icontain=and*symbols\`ls\`"
</code></pre>
|
<p>Assuming I have a Kubernetes Deployment object with the <code>Recreate</code> strategy and I update the Deployment with a new container image version. Kubernetes will:</p>
<ol>
<li>scale down/kill the existing Pods of the Deployment,</li>
<li>create the new Pods,</li>
<li>which will pull the new container images</li>
<li>so the new containers can finally run.</li>
</ol>
<p>Of course, the <code>Recreate</code> strategy is expected to cause downtime between steps 1 and 4, where no Pod is actually running. However, step 3 can take a lot of time if the container images in question are large or the container registry connection is slow, or both. In a test setup (Azure Kubernetes Services pulling a Windows container image from Docker Hub), I see it taking 5 minutes and more, which makes for a really long downtime.</p>
<p>So, what is a good option to reduce that downtime? Can I somehow get Kubernetes to pull the new images before killing the Pods in step 1 above? (Note that the solution should work with Windows containers, which are notoriously large, in case that is relevant.)</p>
<p>On the Internet, I have found <a href="https://codefresh.io/kubernetes-tutorial/single-use-daemonset-pattern-pre-pulling-images-kubernetes/" rel="nofollow noreferrer">this Codefresh article using a DaemonSet and Docker in Docker</a>, but I guess <a href="https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/#so-why-the-confusion-and-what-is-everyone-freaking-out-about" rel="nofollow noreferrer">Docker in Docker is no longer compatible with containerd</a>.</p>
<p>I've also found <a href="https://stackoverflow.com/a/59588935/62838">this StackOverflow answer</a> that suggests using an Azure Container Registry with Project Teleport, but that is in private preview and doesn't support Windows containers yet. Also, it's specific to Azure Kubernetes Services, and I'm looking for a more general solution.</p>
<p>Surely, this is a common problem that has a "standard" answer?</p>
<p><strong>Update 2021-12-21:</strong> Because I've got a corresponding answer, I'll clarify that I cannot easily change the deployment strategy. The application in question does not support running Pods of different versions at the same time because it uses a database that needs to be migrated to the corresponding application version, without forwards or backwards compatibility.</p>
| <p>Implement a "blue-green" deployment strategy. For instance, the service might be running and active in the "blue" state. A new deployment is created with a new container image, which deploys the "green" pods with the new container image. When all of the "green" pods are ready, the "switch live" step is run, which switches the active color. Very little downtime.</p>
<p>Obviously, this has tradeoffs. Your cluster will need more memory to run the additional transitional pods. The deployment process will be more complex.</p>
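<p>A minimal sketch of the mechanics (all names and labels here are made up): both Deployments label their Pods with a <code>version</code>, and the Service selector decides which color is live.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
    version: blue        # flip to "green" once the green Pods are Ready
  ports:
  - port: 80
    targetPort: 8080
</code></pre>
<p>The "switch live" step is then just a selector change, for example <code>kubectl patch service my-app -p '{"spec":{"selector":{"app":"my-app","version":"green"}}}'</code>, after which the old color can be scaled down.</p>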
|
<p>My pod can't be created because of the following problem:</p>
<pre><code>Failed to pull image "europe-west3-docker.pkg.dev/<PROJECT_ID>/<REPO_NAME>/my-app:1.0.0": rpc error: code = Unknown desc = Error response from daemon: Get https://europe-west3-docker.pkg.dev/v2/<PROJECT_ID>/<REPO_NAME>/my-app/manifests/1.0.0: denied: Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/<PROJECT_ID>/locations/europe-west3/repositories/<REPO_NAME>" (or it may not exist)
</code></pre>
<p>I've never experienced anything like it. Maybe someone can help me out.</p>
<p>Here is what I did:</p>
<ol>
<li>I set up a standrd Kubernetes cluster on Google Cloud in the Zone <code>europe-west-3-a</code></li>
<li>I started to follow the steps described here <a href="https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app</a></li>
<li>I built the docker imager and pushed it to the Artifcats repository</li>
<li>I can confirm the repo and the image are present, both in the Google Console as well as pulling the image with docker</li>
<li>Now I want to deploy my app, here is the deployment file:</li>
</ol>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: my-app
labels:
app: my-app
spec:
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: my-app
image: europe-west3-docker.pkg.dev/<PROJECT_ID>/<REPO_NAME>/my-app:1.0.0
imagePullPolicy: Always
ports:
- containerPort: 8080
</code></pre>
<ol start="6">
<li>The pod fails to create due to the error mentioned above.</li>
</ol>
<p>What am I missing?</p>
| <p>I encountered the same problem, and was able to get it working by executing:</p>
<pre class="lang-sh prettyprint-override"><code>gcloud projects add-iam-policy-binding ${PROJECT} \
--member=serviceAccount:${EMAIL} \
--role=roles/artifactregistry.reader
</code></pre>
<p>with <code>${PROJECT}</code> = the project name and <code>${EMAIL}</code> = the default service account, typically the Compute Engine default <code>PROJECT_NUMBER-compute@developer.gserviceaccount.com</code>.</p>
<p>I suspect I may have removed some "excess permissions" too eagerly in the past.</p>
|
<p>Most of my images that I deploy into Kubernetes has the common base (From) image. So, I have multiple applications deployed into multiple pods. How does the Docker layer cache work across multiple pods, as the From image is same for all the pods?</p>
| <p>See "<a href="https://kubernetes.io/docs/concepts/workloads/pods/pod-overview/#how-pods-manage-multiple-containers" rel="noreferrer">How Pods manage multiple Containers</a>"</p>
<blockquote>
<p>Pods are designed to support multiple cooperating processes (as containers) that form a cohesive unit of service.<br>
The containers in a Pod are automatically co-located and co-scheduled on the same physical or virtual machine in the cluster.<br>
The containers can share resources and dependencies, communicate with one another, and coordinate when and how they are terminated.</p>
</blockquote>
<p>So, <strong>on the same node</strong>, your pods share the same container runtime and its local image cache (for instance docker, but others exist).<br>
This means the layers of your common base image are pulled once per node and then reused by the various containers running in those pods.<br>
Then each container will write to its special <a href="https://en.wikipedia.org/wiki/UnionFS" rel="noreferrer">UnionFS</a> (which can use one of a few different storage backends, such as aufs or btrfs, there are several choices), using <a href="https://en.wikipedia.org/wiki/Copy-on-write" rel="noreferrer">copy-on-write</a>.</p>
|
<p>I have a Postgres pod which has a mounted volume:</p>
<pre><code>kind: PersistentVolume
apiVersion: v1
metadata:
name: postgres-pv
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 100M
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
labels:
app: postgres
name: psql-claim
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 100M
</code></pre>
<p>The yaml contains the following:</p>
<pre><code>volumeMounts:
- mountPath: /docker-entrypoint-initdb.d/
name: psql-config-map-volume
- mountPath: /var/lib/postgresql/data
name: psql-claim
subPath: postgres
volumes:
- name: psql-config-map-volume
configMap:
name: psql-config-map // contains an init.sql
- name: psql-claim
persistentVolumeClaim:
claimName: psql-claim
</code></pre>
<p>It works well, data retain after deployment/pod deletion and redeploy.</p>
<p>The problem appears when I modify the init.sql.
It didn't come into effect and I got this message at psql pod startup:</p>
<blockquote>
<p>PostgreSQL Database directory appears to contain a database; Skipping initialization</p>
</blockquote>
<p>The pod itself starts without any error (just with the old init.sql data)</p>
<p>What I have tried: </p>
<ol>
<li>Deleting the deployment,the pvc and pv. Then redeploy everything. Same result.<br></li>
<li>I searched the hostpath data but /mnt/data is empty.<br></li>
</ol>
<p>What else should I try? How can I force the init.sql to run? Where is the actual old data stored, if not in the hostpath?</p>
<p><strong><em>edit: I have search for pg files and found this:</em></strong><br>
/var/lib/docker/overlay2/6ae2../merged/mnt/data/postgres/pg_ident.conf<br>
/var/lib/docker/overlay2/6ae2../diff/mnt/data/postgres/pg_ident.conf<br>
And it still exists after pvc and pv deletion. How can I gracefully reset its data?</p>
| <p>I just had a similar situation. It turned out that I had a typo in my <code>initdb.sh</code> script. This results in PG starting, failing to apply the script and crashing. This causes the container to restart, and the next time, PG skips the script.</p>
<p>I finally figured it out because the pod was showing <code>restartCount: 1</code>. Running <code>kubectl logs postgres-0 --previous</code> gave me the logs from <em>before</em> the crash, which pointed to a typo in my script (<code>LC_TYPE</code> instead of <code>LC_CTYPE</code>, in my case).</p>
|
<p>For example, a defer function is implemented within a webhook's logic (the webhook is written in Golang). The webhook times out after taking too long to complete. In this case, will the defer function still get executed?</p>
<p>The request timeout is set via <a href="https://pkg.go.dev/context#WithTimeout" rel="nofollow noreferrer">context.WithTimeout</a> for the webhook.</p>
| <p>It's usually a good idea to clearly mention what programming language you're talking about, although I can figure it out.</p>
<p>In Go, the deferred function will be executed when the surrounding function returns, no matter how it completes, unless you manage to kill the executing process in a drastic manner, and perhaps even then. A return triggered by the context timeout is still an ordinary return, so the deferred call runs.</p>
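<p>A minimal sketch (the handler and durations are made up, not taken from the webhook in question) showing that a deferred call still runs when the work is abandoned because the request context timed out:</p>
<pre class="lang-golang prettyprint-override"><code>package main

import (
	"context"
	"fmt"
	"time"
)

func handle(ctx context.Context) error {
	defer fmt.Println("deferred cleanup still runs") // executed on every return path

	select {
	case <-time.After(5 * time.Second): // simulated slow work
		return nil
	case <-ctx.Done(): // the 1s timeout fires first
		return ctx.Err()
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	fmt.Println(handle(ctx)) // prints the deferred line, then "context deadline exceeded"
}
</code></pre>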
|
<p>I understand that blue-green deployment means having two versions of production code deployed. It allows us to test in production.</p>
<p>Assume that we have an e-commerce application, and both deployments are sharing a database.
Then someone testing the green deployment makes a purchase. This purchase will be recorded in the production database even though it was just a test.</p>
<p>How do you deal with this scenario? How do blue-green deployments handle database interactions in general?</p>
| <p>I'm not sure what resources you're looking at, but at least in our organization, using Kubernetes, blue-green deployments are not used for "canary testing", which is what you're referring to. Blue-green deployments are used to facilitate smooth deployment switches. During the process of deployment, the existing pods in the "current color" are still running while the pods in the "target color" are starting up and reaching readiness state. The service doesn't switch over to the target color until the new pods in the target color are all ready.</p>
<p>Regarding how you would do that sort of thing with "canary testing", I'd say that is completely application-specific.</p>
|
<p>I am deploying a Spring boot application to Kubernetes. My Docker file is as follows.</p>
<pre><code>FROM alpine-jdk1.8:latest
RUN mkdir -p /ext/app
COPY target/app-service.war /ext/app
ENV JAVA_OPTS="" \
APPLICATION_ARGS=""
CMD java ${JAVA_OPTS} -jar /ext/app/app-service.war ${APPLICATION_ARGS}
</code></pre>
<p>I have many config files under conf directory, but there are secrets also.</p>
<p>So, moved few of them to secrets and few to configMaps in Kubernetes. But, created more than 1 configmaps and secrets to groups configs and secrets.</p>
<p>Since, there are many configMaps and secrets, I had to create many volume mounts and volumes and used the Spring config location to add all these volumes to the classpath as a comma separated values.</p>
<pre><code>- name: APPLICATION_ARGS
value: --spring.config.location=file:/conf,.....
</code></pre>
<p>Is there any other better approach?</p>
| <p>That is a good approach for secrets, but less so <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/" rel="nofollow noreferrer">for configMaps</a>.</p>
<p>If your war application can rely on environment variable, a possible approach is to convert that configMap into an rc file (file with properties) which can then be read once by the application and used</p>
<p>You can see an example of such an approach in "<a href="http://blog.knuthaugen.no/2016/06/kubernetes-wars-day-3/" rel="nofollow noreferrer"><strong>The Kubernetes Wars</strong></a>" from <a href="https://twitter.com/knuthaug" rel="nofollow noreferrer"><strong>knu:t hæugen</strong></a>:</p>
<blockquote>
<p>How to deal with configuration?<br>
Kubernetes likes app config in environment variables, not config files.<br>
This is easy in our node apps using <a href="https://github.com/mozilla/node-convict" rel="nofollow noreferrer">convict</a>, pretty easy in our ruby apps and ranging from relatively easy to bloody hard in our java apps. </p>
<p>But how to get config into the replication controllers? We opted for using configmaps (a kubernetes object) to store the config, reference the variables from the rc files and maintain it in git controlled files.<br>
So when we want to change to app config, update the config files and run a script which updates the configmap and reloads all the pods for the app</p>
</blockquote>
|
<p>Does anyone have experience debugging .NET 6 F# code running in a service-less deployment/pod inside a Kubernetes cluster in AKS with Visual Studio (ideally 2022)?</p>
<p>Bridge to Kubernetes is not available for VS 2022, and the VS2019 (and VS Code) version seems to require a service and HTTP access. In my case, I have microservices that only use the internal cluster networking between them with a messaging engine, without associated services.</p>
<p>Logs are helpful, but being able to debug would be great.</p>
<p>Ideas?</p>
<p>P.S.</p>
<p><a href="https://dev.to/stevesims2/step-through-debugging-of-code-running-in-kubernetes-using-vs2019-ssh-attach-46p9" rel="nofollow noreferrer">This</a> is a way, but it looks way too invasive</p>
| <p>My experience with this sort of thing is with Java applications, but I assume it would be similar for this platform. This would typically be done with a "port-forward", described on this page: <a href="https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/</a> .</p>
<p>Basically, this will provision a local port on your desktop, such that if you connect to that port on your desktop, it will actually be connecting to a mapped port on the pod in the cluster.</p>
<p>Once the port-forward is working, you can connect to that port from vscode.</p>
<p>In Java, this port would be the "debugger port", sometimes referred to as the JPDA port.</p>
|
<p>I have a microservice written using <code>Java EE + Openliberty</code>. I have deployed it into <code>kubernetes</code> in my <code>Docker-Desktop</code>. I want to know: is there any way to debug this deployed service? I am using <code>Eclipse</code>. Please let me know how I can debug the deployed service. Thank you!</p>
| <p>I heartily endorse looking at telepresence. I don't use it for real yet, but I've verified it will be very useful in our enterprise once we get some pieces in place.</p>
<p>However, I don't believe you need that complication for your simpler situation. Telepresence will be very nice for injecting NEW code into a cluster without actually deploying it. In your case, you just want to debug the existing code. This allows for a simpler mechanism.</p>
<p>To remotely debug in Eclipse, you need to connect to a debugger port. To do this with code running in a k8s cluster, you have to expose the port to connect to, and make it available on your desktop.</p>
<p>In your image, you have to ensure that your Java command line has the "-Xdebug" parameters, which specifies the debugger port (5005 by convention). You also have to ensure that your Dockerfile specifies that port to be exposed. When that is running in your cluster, you have to use "kubectl port-forward" on your desktop (with appropriate parameters). This will proxy the debugger port in the cluster to a corresponding port on your desktop. You then connect to that port from Eclipse.</p>
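<p>As a hedged sketch (the deployment name and port are assumptions; <code>-Xdebug</code> is the legacy form, and current JVMs usually use the JDWP agent flag shown here):</p>
<pre class="lang-sh prettyprint-override"><code># in the container's Java command line, open the conventional debugger port 5005
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005 -jar app.jar

# on your desktop, proxy that port out of the cluster
kubectl port-forward deployment/my-app 5005:5005

# then attach Eclipse's "Remote Java Application" debug configuration to localhost:5005
</code></pre>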
|
<p>How do I get the current namespace of a deployment/service using the kubernetes client-go API? It doesn't seem to be in the client object or in the config.</p>
| <p>Using</p>
<pre><code>ioutil.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/namespace")
</code></pre>
<p>works but is ugly, when the desired implementation is present in the <code>Namespace()</code> method of <code>inClusterClientConfig</code>. But how would you get that object starting from <code>rest.InClusterConfig()</code>? It is only instantiable from outside the package via <code>NewNonInteractiveDeferredLoadingClientConfig</code>.</p>
<p>I see <a href="https://github.com/kubernetes/kubernetes/pull/63707" rel="noreferrer">kubernetes #63707</a> which looks related but was abandoned.</p>
|
<p>I have an AWS <strong>LoadBalancer</strong> which was created using <code>Kube</code>, <code>Kops</code> and <code>AWS</code>.
The protocol type for the <strong>ELB</strong> is <strong>tcp</strong>. This works fine for <code>http</code> requests, meaning I can access my site with <strong><a href="http://testing.example.com" rel="nofollow noreferrer">http://testing.example.com</a></strong>. Now I tried to add <strong>SSL</strong> to this <strong>ELB</strong> using <strong>ACM</strong> <code>(Certificate Manager)</code>. I added my domain details <code>example.com</code> and <code>*.example.com</code> by requesting a <strong>public certificate</strong>. It was created successfully and domain validation also succeeded.</p>
<blockquote>
<p>Then I tried to add this ssl to my ELB like below.</p>
</blockquote>
<ul>
<li>went to my ELB and selected the ELB.</li>
<li>Then went to Listeners tab and Added SSL to it like below.</li>
</ul>
<p><a href="https://i.stack.imgur.com/Za3k7.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Za3k7.png" alt="enter image description here"></a></p>
<p>and <strong>ELB</strong> description is like below.</p>
<p><a href="https://i.stack.imgur.com/XUzVT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XUzVT.png" alt="enter image description here"></a></p>
<p>I cannot access <a href="https://testing.example.com" rel="nofollow noreferrer">https://testing.example.com</a>; it hangs for a few minutes and nothing happens. What is going on here? I hope you can help with this.</p>
| <p>In the Listener configuration, you are forwarding the default HTTP port <code>80</code> to port <code>30987</code> on the back-end server. So this tells me that the back-end server is listening for HTTP requests on port <code>30987</code>.</p>
<p>You then added an SSL listener on the default port <code>443</code> but you are forwarding that to port <code>443</code> on the back-end server. Do you have something on your back-end listening on port <code>443</code> in addition to <code>30987</code>?</p>
<p>The most likely fix for this is to change the SSL listener on the load balancer to forward to port <code>30987</code> on the back-end by setting that as the "Instance Port" setting.</p>
|
<p>I need to move my filebeat to another namespace, but I must keep the registry. I mean this part:</p>
<pre><code> # data folder stores a registry of read status for all files, so we don't send everything again on a Filebeat pod restart
- name: data
hostPath:
path: /var/lib/filebeat-data
type: DirectoryOrCreate
</code></pre>
<p>Can you tell me how I can copy that in Kubernetes?</p>
| <p>Just to check my assumptions:</p>
<ul>
<li>filebeat is a a DaemonSet</li>
<li>When you start it up in the new Namespace, you want to keep the registry</li>
<li>You're happy to keep the on-disk path the same</li>
</ul>
<p>Because the <code>data</code> folder is mounted from the host directly - if you apply the same DaemonSet in a new Namespace, it will mount the same location into the container. So there's no need to copy any files around.</p>
|
<p>I wrote a script that checks the kubernetes pods and, in case one is hanging, writes the pod's logs and deletes it. The kube master is on another server and I connect to it by ssh. It works fine when I start the script locally on the kube master server, but when I pass the same command as an ssh argument, the pod gets deleted while its logs aren't written, and there are no errors, so I don't understand what I'm doing wrong. I use Python 3.6 and access permissions 777 for the scripts and the directories in which they are located.</p>
<p>I know about possibility of doing this using k8s api but I don't know how to do it now. I am still learning and will try to realize it a bit later but the system should work now.</p>
<p>PS. The reason that I do this is that I want to add 'check_radio' to cron and forget about hanging pods...
PPS. I am not master-pro at python, just a regular sysadmin that wants to become a devops, so if you have some ideas how to optimize my work you're welcome.</p>
<p>Here is the script that is on kube master (kube_reload.py):</p>
<pre><code>#!/bin/python3
import sys
import os
import re
import subprocess as sp
from contextlib import redirect_stdout as r_stdout
marker = False
pod_nums = ''
pod_nums_int = []
for_reload = []
get_pods = sp.check_output(['kubectl', 'get', 'pods'])
get_pods = str(get_pods)
get_pods = get_pods.split()
pods = []
'''write garbage pod names in array'''
for word in get_pods:
if 'radio-hls' in word:
pods.append(word)
'''make usual pod names from garbage names'''
for num in range(len(pods)):
while marker == False:
if pods[num][:5] != 'radio':
pods[num] = pods[num][1:]
if pods[num][:5] == 'radio':
break
'''Function that lists all pods'''
def pod_list():
sp.run('clear')
print('PODS LIST:\n')
for num in range(len(pods)):
print(num+1, '.\t', pods[num])
print(len(pods)+1, '.\t Reload all')
'''Function for recursion in try-except statement in 'input_nums()' function'''
def return_for_except():
pod_list()
print('Error input. Try again\n\n')
input_nums()
'''Function that asks which pods you want to reload'''
def input_nums():
pod_nums = str(input('Select which pods you want to reload (type all numbers with a space):\nExample: 1 2 3 10\n\n'))
pod_nums = pod_nums.split()
try:
global pod_nums_int
pod_nums_int = [eval(i) for i in pod_nums]
except:
return return_for_except()
'''Function to write pod logs to a file'''
def write_logs():
global for_reload
if len(pods)+1 in pod_nums_int:
all_reload = input('You selected "Reload all". To continue, press Enter\n')
if all_reload == '':
for i in range(len(pods)):
with open (f'{pods[i-1][10:-17]}.log', 'w') as pod_log:
sp.run(['kubectl', 'logs', f'{pods[i-1]}'], stdout=pod_log)
print(f'{pods[i-1]} logged successfully')
for_reload.append(i)
else:
print('Something went wrong')
quit()
else:
for i in pod_nums_int:
with open (f'{pods[i-1][10:-17]}.log', 'w') as pod_log:
sp.run(['kubectl', 'logs', f'{pods[i-1]}'], stdout=pod_log)
print(f'{pods[i-1]} logged successfully')
for_reload.append(i)
'''Function that reloads streams'''
def reload_pods(for_reload, pods):
for i in for_reload:
sp.run(['kubectl', 'delete', 'pod', f'{pods[i-1]}'])
print(f'{pods[i-1]}', 'deleted')
'''Start'''
'''manual (with no arguments)'''
if len(sys.argv) == 1:
pod_list()
input_nums()
write_logs()
reload_pods(for_reload, pods)
sp.run(['kubectl', 'get', 'pods'])
print()
sp.run(['ls', '-lah'])
'''auto (from nginx srver)'''
if len(sys.argv) > 1:
for arg in sys.argv:
if arg == './kube_reload.py':
continue
else:
for pod in pods:
if arg in pod:
#write logs
with open (f'{arg}.log', 'w') as log:
sp.run(['kubectl', 'logs', f'{pod}'], stdout=log)
#reload pods
sp.run(['kubectl', 'delete', 'pod', f'{pod}'])
else:
continue
</code></pre>
<p>Here is the script from another server (check_radio):</p>
<pre><code>#!/bin/python3
import requests as r
import subprocess as sp
import sys
'''IN CASE OF ADDING ADDITIONAL STREAM ADD IT TO "streams" '''
streams = [
'gold',
'tophits',
'worldchart',
'ukraine',
'rock',
'chill',
'rap',
'retromusic',
'elektro',
'sport',
'radionv',
'edyninov',
'hromadske'
]
'''IF IF THERE IS NO NEED TO CHECK SOME STREAMS YOU CAN ADD IT TO "streams_not_to_check" '''
streams_not_to_check = [
'radionv',
'edyninov',
'hromadske'
]
streams_to_delete = []
#CLUSTER_API = 'https://host_ip:6443'
#auth_header = 'Authorization: '
for stream in streams:
check_stream = r.get(f'https://host.host/stream/{stream}/status/health').json()
if check_stream['metadata'] == 'UNHEALTHY':
streams_to_delete.append(stream)
for stream in streams_not_to_check:
if stream in streams_to_delete:
streams_to_delete.remove(stream)
print(streams_to_delete)
if len(streams_to_delete) >= 1:
for stream in streams_to_delete:
sp.Popen(f'ssh developer@radio1 python3 ~/deekly/kube_reload.py {stream}', shell=True).communicate()
</code></pre>
<p>I try this from nginx server</p>
<pre><code>./check_radio
</code></pre>
<p>and get this</p>
<pre><code>[developer@radio-lb1 deekly]$ ./check_radio
['rap', 'rock']
pod "radio-hls-rap-f4b86bd77-jpmr4" deleted
pod "radio-hls-rock-57fc8fcd64-m54k5" deleted
[developer@radio-lb1 deekly]$
</code></pre>
<p>and this on kube server</p>
<pre><code>[developer@radio1 deekly]$ ls -la
total 8
drwxrwxrwx 2 developer developer 28 Aug 4 20:46 .
drwx------ 14 developer developer 4096 Aug 4 20:26 ..
-rwxrwxrwx 1 developer developer 2850 Aug 4 19:51 kube_reload.py
[developer@radio1 deekly]$
</code></pre>
<p>but when I try this from kube master</p>
<pre><code>./kube_reload.py rap rock
</code></pre>
<p>or this (doesn't matter)</p>
<pre><code>python3 kube_reload.py rap rock
</code></pre>
<p>I get this</p>
<pre><code>[developer@radio1 deekly]$ ./kube_reload.py rap rock
pod "radio-hls-rap-f4b86bd77-4bvcs" deleted
pod "radio-hls-rock-57fc8fcd64-g5j96" deleted
[developer@radio1 deekly]$ ls -la
total 32
drwxrwxrwx 2 developer developer 59 Aug 5 16:38 .
drwx------ 14 developer developer 4096 Aug 4 20:26 ..
-rwxrwxrwx 1 developer developer 2850 Aug 4 19:51 kube_reload.py
-rw-rw-r-- 1 developer developer 8303 Aug 5 16:38 rap.log
-rw-rw-r-- 1 developer developer 8345 Aug 5 16:38 rock.log
</code></pre>
| <p>Your <code>kube_reload.py</code> script writes its logs to its current working directory:</p>
<pre><code>with open (f'{arg}.log', 'w') as log:
...
</code></pre>
<p>When you're running kube_reload.py interactively, you're running it from this "deekly" directory, so "deekly" is the process's working directory and the logs are written there:</p>
<pre><code>[developer@radio1 deekly]$ ./kube_reload.py rap rock
...
[developer@radio1 deekly]$ ls -la
...
-rw-rw-r-- 1 developer developer 8303 Aug 5 16:38 rap.log
-rw-rw-r-- 1 developer developer 8345 Aug 5 16:38 rock.log
</code></pre>
<p>When this <code>check_radio</code> invokes <code>kube_reload.py</code> through <code>ssh</code>, the <code>kube_reload.py</code> process has this "developer" user's home directory as its working directory, not the "deekly" directory:</p>
<pre><code>sp.Popen(f'ssh developer@radio1 python3 ~/deekly/kube_reload.py {stream}', shell=True)...
</code></pre>
<p><code>kube_reload.py</code> will presumably write its log files to this "developer" user's home directory, not the "deekly" subdirectory.</p>
<p>If you want the logs to be written to this "deekly" directory, you should do one of three things:</p>
<ol>
<li><p>Modify <code>kube_reload.py</code> to put the logs where you want them, instead of its current working directory.</p>
</li>
<li><p>Modify <code>kube_reload.py</code> to change its working directory the desired directory before opening the logs.</p>
</li>
<li><p>Modify <code>check_radio</code> to invoke <code>kube_reload.py</code> on the remote host with the correct working directory. I'm not a python programmer so I can't give you the exact python syntax. But a command such as the following should do it:</p>
<p><code>ssh developer@radio1 'cd deekly && python3 ./kube_reload.py args...'</code></p>
</li>
</ol>
<p>You will have to do whatever is necessary in python to escape those single quotes, so that they're present in the command actually being executed.</p>
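<p>For illustration, a sketch of how <code>check_radio</code> could build that command (host and paths are taken from the question; delimiting the f-string with double quotes keeps the inner single quotes intact):</p>
<pre><code>import subprocess as sp

for stream in streams_to_delete:
    # cd into the script's directory on the remote host before running it
    cmd = f"ssh developer@radio1 'cd ~/deekly && python3 ./kube_reload.py {stream}'"
    sp.Popen(cmd, shell=True).communicate()
</code></pre>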
|
<p><strong>Scenario:</strong></p>
<p>I need to build a web-app, from which I can run/sop/delete/etc. containers in a cluster. So I installed <code>Kubernetes</code> and tested the API from the console. Everything seems working and looks fine. </p>
<p>Following the Docs, they write about Docker, but do I need it necessarily? </p>
<p>I mean I had to disable <code>Hyper-V</code> to make <code>Minikube</code> work, and after a reboot, Docker (which usually starts at startup) says that "something went wrong.. bla bla" .. but I can create deployments and <code>proxys</code> on <code>Minikube</code>.<br>
This is somehow confusing. </p>
<p>Can someone explain this please for dummies?</p>
| <p>Technically, you need a <em>container runtime</em> which respects <a href="https://kubernetes.io/blog/2016/12/container-runtime-interface-cri-in-kubernetes/" rel="noreferrer">CRI (Container Runtime Interface)</a>.</p>
<p><a href="https://i.stack.imgur.com/Z7aPG.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Z7aPG.png" alt="https://d3vv6lp55qjaqc.cloudfront.net/items/0I3X2U0S0W3r1D1z2O0Q/Image%202016-12-19%20at%2017.13.16.png" /></a></p>
<p>That is why you have <a href="https://github.com/kubernetes-incubator/cri-o" rel="noreferrer">CRI-O</a>, which provides an integration path between OCI conformant runtimes and the kubelet.<br />
See "<a href="https://thenewstack.io/cri-o-project-run-containers-without-docker-reaches-1-0/" rel="noreferrer">CRI-O, the Project to Run Containers without Docker, Reaches 1.0</a>" by <strong><a href="https://twitter.com/HallSd" rel="noreferrer">Susan Hall</a></strong>.</p>
<blockquote>
<p>The project “opens the door for plugging <a href="http://programmableinfrastructure.com/components/container-runtime/" rel="noreferrer">alternative container runtimes</a> in the <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/" rel="noreferrer">kubelet</a> more easily, instead of relying on the default docker runtime.</p>
<p>Those new runtimes may include virtual machines-based ones, such as <strong>runv</strong> and <strong>Clear Containers</strong>, or standard Linux containers runtimes like <code>rkt</code>,” <a href="http://red.ht/2uJGuQo" rel="noreferrer">Red Hat</a> senior engineer <a href="https://twitter.com/runc0m" rel="noreferrer">Antonio Murdaca</a> wrote on the <a href="https://www.projectatomic.io/blog/2017/02/crio-runtimes/" rel="noreferrer">Project Atomic blog</a>.</p>
</blockquote>
<hr />
<p>But in your case, your issue is to make Minikube work with HyperV: see "<a href="https://medium.com/@JockDaRock/minikube-on-windows-10-with-hyper-v-6ef0f4dc158c" rel="noreferrer">Minikube on Windows 10 with Hyper-V</a>" from <strong><a href="https://twitter.com/JockDaRock" rel="noreferrer">Jock Reed</a></strong>.<br />
The trick is to create a new (External) Virtual network switch, named "Primary Virtual Switch", and to start Minikube with:</p>
<pre><code>minikube start --vm-driver hyperv --hyperv-virtual-switch "Primary Virtual Switch"
</code></pre>
|
<p>I have a microservice deployed in a Tomcat container/pod. There are four different files generated in the container - access.log, tomcat.log, catalina.out and application.log (log4j output). What is the best approach to send these logs to Elasticsearch (or similar platform). </p>
<p>I read through the information on this <a href="https://kubernetes.io/docs/concepts/cluster-administration/logging/" rel="nofollow noreferrer">page</a> Logging Architecture - Kubernetes 5. Is “Sidecar container with a logging agent” the best option for my use case?</p>
<p>Is it possible to fetch pod labels (e.g.: version) and add it to each line? If it is doable, use a logging agent like fluentd? (I just want to know the direction I should take).</p>
| <p>Yes, the best option for your use case is to have one <code>tail -f</code> sidecar per log file and then install either a <code>fluentd</code> or a <code>fluent-bit</code> daemonset that will handle shipping and enriching the log events, as sketched below.</p>
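<p>For illustration, a minimal sketch of one such sidecar (the image, paths and volume name are assumptions; you would add one sidecar per log file and share the Tomcat log directory through a volume):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: tomcat-app
spec:
  containers:
  - name: tomcat
    image: my-tomcat-image:latest        # your application image
    volumeMounts:
    - name: logs
      mountPath: /usr/local/tomcat/logs
  - name: access-log-tail                # streams one log file to stdout
    image: busybox
    args: [/bin/sh, -c, 'touch /usr/local/tomcat/logs/access.log && tail -n+1 -f /usr/local/tomcat/logs/access.log']
    volumeMounts:
    - name: logs
      mountPath: /usr/local/tomcat/logs
  volumes:
  - name: logs
    emptyDir: {}
</code></pre>
<p>The daemonset then picks those lines up from the container stdout logs on each node, and its kubernetes metadata filter can attach the pod labels (such as a version label) to every event.</p>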
<p>The <a href="https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch" rel="nofollow noreferrer">fluentd elasticsearch</a> cluster addon is available at that link. It will install a fluentd daemonset and a minimal ES cluster. The ES cluster is not production ready so please see the README for details on what must be changed.</p>
|
<p>I'd like to confirm information of the authenticated user and assigned role and assigned cluster role. How can I do it?</p>
| <blockquote>
<p>information of the authenticated user</p>
</blockquote>
<p>When you see <a href="https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#define-clusters-users-and-contexts" rel="nofollow noreferrer">Define clusters, users, and contexts</a>, you realize you need to get the information associated with a defined <em>context</em>.</p>
<pre><code>kubectl config --kubeconfig=config-demo use-context dev-frontend
kubectl config --kubeconfig=config-demo view --minify
</code></pre>
<blockquote>
<p>The output shows configuration information associated with the dev-frontend context:</p>
</blockquote>
<pre><code>apiVersion: v1
clusters:
- cluster:
certificate-authority: fake-ca-file
server: https://1.2.3.4
name: development
contexts:
- context:
cluster: development
namespace: frontend
user: developer
name: dev-frontend
current-context: dev-frontend
kind: Config
preferences: {}
users:
- name: developer
user:
client-certificate: fake-cert-file
client-key: fake-key-file
</code></pre>
<blockquote>
<p>assigned role and assigned cluster role. </p>
</blockquote>
<p>You can list roles for all users or clusters, but you still need to parse the result to get the one for <em>a</em> user or <em>a</em> cluster.<br>
Example: "<a href="https://stackoverflow.com/q/43186611/6309">kubectl and seeing (cluster)roles assigned to subjects</a>".</p>
|
<p>I'm running Jenkins on an EKS cluster with the <a href="https://plugins.jenkins.io/kubernetes/" rel="nofollow noreferrer">k8s plugin</a> and I'd like to write a <strong>declarative</strong> pipeline in which I specify the pod template in each stage. A basic example would be the following, in which a file is created in the first stage and printed in the second:</p>
<pre><code>pipeline{
agent none
stages {
stage('First sample') {
agent {
kubernetes {
label 'mvn-pod'
yaml """
spec:
containers:
- name: maven
image: maven:3.3.9-jdk-8-alpine
"""
}
}
steps {
container('maven'){
sh "echo 'hello' > test.txt"
}
}
}
stage('Second sample') {
agent {
kubernetes {
label 'bysbox-pod'
yaml """
spec:
containers:
- name: busybox
image: busybox
"""
}
}
steps {
container('busybox'){
sh "cat test.txt"
}
}
}
}
}
</code></pre>
<p>This clearly doesn't work since the two pods don't have any kind of shared memory. Reading <a href="https://www.jenkins.io/doc/pipeline/steps/kubernetes/" rel="nofollow noreferrer">this doc</a> I realized I can use <code>workspaceVolume dynamicPVC ()</code> in the yaml declaration of the pod so that the plugin creates and manages a <code>persistentVolumeClaim</code> in which hopefully i can write the data I need to share between stages.</p>
<p>Now, with <code>workspaceVolume dynamicPVC (...)</code> both <code>pv</code> and <code>pvc</code> are successfully created but the pod goes on error and terminates. In particular, the pods provisioned is the following :</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
annotations:
kubernetes.io/psp: eks.privileged
runUrl: job/test-libraries/job/sample-k8s/12/
creationTimestamp: "2020-08-07T08:57:09Z"
deletionGracePeriodSeconds: 30
deletionTimestamp: "2020-08-07T08:58:09Z"
labels:
jenkins: slave
jenkins/label: bibibu
name: bibibu-ggb5h-bg68p
namespace: jenkins-slaves
resourceVersion: "29184450"
selfLink: /api/v1/namespaces/jenkins-slaves/pods/bibibu-ggb5h-bg68p
uid: 1c1e78a5-fcc7-4c86-84b1-8dee43cf3f98
spec:
containers:
- image: maven:3.3.9-jdk-8-alpine
imagePullPolicy: IfNotPresent
name: maven
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
tty: true
volumeMounts:
- mountPath: /home/jenkins/agent
name: workspace-volume
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-5bt8c
readOnly: true
- env:
- name: JENKINS_SECRET
value: ...
- name: JENKINS_AGENT_NAME
value: bibibu-ggb5h-bg68p
- name: JENKINS_NAME
value: bibibu-ggb5h-bg68p
- name: JENKINS_AGENT_WORKDIR
value: /home/jenkins/agent
- name: JENKINS_URL
value: ...
image: jenkins/inbound-agent:4.3-4
imagePullPolicy: IfNotPresent
name: jnlp
resources:
requests:
cpu: 100m
memory: 256Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /home/jenkins/agent
name: workspace-volume
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: default-token-5bt8c
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: ...
nodeSelector:
kubernetes.io/os: linux
priority: 0
restartPolicy: Never
schedulerName: default-scheduler
securityContext: {}
serviceAccount: default
serviceAccountName: default
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: workspace-volume
persistentVolumeClaim:
claimName: pvc-bibibu-ggb5h-bg68p
- name: default-token-5bt8c
secret:
defaultMode: 420
secretName: default-token-5bt8c
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2020-08-07T08:57:16Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2020-08-07T08:57:16Z"
message: 'containers with unready status: [jnlp]'
reason: ContainersNotReady
status: "False"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2020-08-07T08:57:16Z"
message: 'containers with unready status: [jnlp]'
reason: ContainersNotReady
status: "False"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2020-08-07T08:57:16Z"
status: "True"
type: PodScheduled
containerStatuses:
- containerID: docker://9ed5052e9755ee4f974704fa4b74f2d89702283a4437e60a9945cf4ec7d6da68
image: jenkins/inbound-agent:4.3-4
imageID: docker-pullable://jenkins/inbound-agent@sha256:62f48a12d41e02e557ee9f7e4ffa82c77925b817ec791c8da5f431213abc2828
lastState: {}
name: jnlp
ready: false
restartCount: 0
state:
terminated:
containerID: docker://9ed5052e9755ee4f974704fa4b74f2d89702283a4437e60a9945cf4ec7d6da68
exitCode: 1
finishedAt: "2020-08-07T08:57:35Z"
reason: Error
startedAt: "2020-08-07T08:57:35Z"
- containerID: docker://96f747a132ee98f7bf2488bd3cde247380aea5dd6f84bdcd7e6551dbf7c08943
image: maven:3.3.9-jdk-8-alpine
imageID: docker-pullable://maven@sha256:3ab854089af4b40cf3f1a12c96a6c84afe07063677073451c2190cdcec30391b
lastState: {}
name: maven
ready: true
restartCount: 0
state:
running:
startedAt: "2020-08-07T08:57:35Z"
hostIP: 10.108.171.224
phase: Running
podIP: 10.108.171.158
qosClass: Burstable
startTime: "2020-08-07T08:57:16Z"
</code></pre>
<p>Retrieving logs from jnlp container on the pod with <code>kubectl logs name-of-the-pod -c jnlp -n jenkins-slaves</code> led me towards this error :</p>
<pre><code>Exception in thread "main" java.io.IOException: The specified working directory should be fully accessible to the remoting executable (RWX): /home/jenkins/agent
at org.jenkinsci.remoting.engine.WorkDirManager.verifyDirectory(WorkDirManager.java:249)
at org.jenkinsci.remoting.engine.WorkDirManager.initializeWorkDir(WorkDirManager.java:201)
at hudson.remoting.Engine.startEngine(Engine.java:288)
at hudson.remoting.Engine.startEngine(Engine.java:264)
at hudson.remoting.jnlp.Main.main(Main.java:284)
at hudson.remoting.jnlp.Main._main(Main.java:279)
at hudson.remoting.jnlp.Main.main(Main.java:231)
</code></pre>
<p>I also tried to specify the <code>accessModes</code> as parameter of <code>dynamicPVC</code>, but the error is the same. <br>
What am I doing wrong?</p>
<p>Thanks</p>
| <p>The docker image being used is configured to run as a non-root user <code>jenkins</code>. By default PVCs will be created only allowing root-user access.</p>
<p>This can be configured using the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods" rel="nofollow noreferrer">security context</a>, e.g.</p>
<pre><code>securityContext:
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
</code></pre>
<p>(The jenkins user in that image is ID 1000)</p>
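<p>In the declarative pipeline from the question, that security context goes at the pod <code>spec</code> level of the inline agent yaml, roughly like this (a sketch based on the first stage):</p>
<pre><code>agent {
  kubernetes {
    label 'mvn-pod'
    yaml """
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 1000
  containers:
  - name: maven
    image: maven:3.3.9-jdk-8-alpine
"""
  }
}
</code></pre>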
|
<p>I'm learning Kubernetes and trying to follow the following tutorial to create a Mysql database:</p>
<p><a href="https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/#:%7E:text=Deploy%20MySQL,MySQL%20and%20references%20the%20PersistentVolumeClaim" rel="noreferrer">https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/#:~:text=Deploy%20MySQL,MySQL%20and%20references%20the%20PersistentVolumeClaim</a>.</p>
<p>Until now, I've been using Docker and Docker-compose to do all of my development and production work.</p>
<p>Using Docker Desktop's (On Mac) Kubernetes single master node cluster, I was able to get the example working and interact with Mysql creating a table and filling some data. After deleting the resources and retrying, the data persisted and I was able to access the data immediately upon reinstalling the Deployment, service, PV and PVC. A successful pre-requisite for connecting an application.</p>
<p>However I cannot for the life of me find out WHERE the data is actually being stored on my laptop's local file system . There are only 2 file paths listed in these examples. One in the deployment (which I assume to be the container's internal filesystem , Similar to Docker's container internal path):</p>
<pre><code>volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
</code></pre>
<p>And the persistent volume path (Which I assume to be docker's Host system equivalent file path):</p>
<pre><code>hostPath:
path: "/mnt/data"
</code></pre>
<p>My problem is : Nowhere on my laptop can I find either of these paths. /mnt/data does not exist from my laptop's root directly (where I would have expected it to be) , despite the PV still being active, and I see nowhere where the kubernetes cluster, node, or pod is mounted on my file system. WHERE on my laptop is this data being stored?? This isn't necessarily a practically useful thing to know, but it's critical to know in order to have a good grasp of understanding kubernetes and how it works, and I'm unwilling and unable to continue learning kubernetes until I can figure out where physically the data is stored on the host filesystem.</p>
<p>I've seen another question saying to "SSH into the Node that's running the PV Resource" , But I feel like that's referencing a distributed cloud system where the node exists on a separate server, and as stated I cannot find the mount path to my one master node running locally. Everything else seems to point to the not-useful official docs on PV and PVCs.</p>
<p>Can anyone explain to me where this mysql data should be?</p>
| <p>Docker Desktop on a Mac runs <a href="https://docs.docker.com/docker-for-mac/docker-toolbox/#the-docker-desktop-on-mac-environment" rel="noreferrer">a virtual machine to run docker inside</a>. You're correct when you look at the <code>hostPath.path</code> - that's where it's stored. But because docker/Kubernetes is inside the VM, that's where the path is relative to.</p>
<p>To see this yourself is a little tricky, Docker Desktop doesn't provide a guide on getting inside the VM. The community has <a href="https://gist.github.com/BretFisher/5e1a0c7bcca4c735e716abf62afad389" rel="noreferrer">some options though</a>. An easy way would be <code>docker run -it --rm --privileged --pid=host justincormack/nsenter1</code>. That command runs a container inside docker that gets you a shell on the VM containing the docker daemon.</p>
<p>As to where this is on your laptop, you can follow the <a href="https://docs.docker.com/docker-for-mac/space/" rel="noreferrer">docs about the disk image for the VM</a>. <strong>Preferences > Resources > Advanced</strong> will show you where the disk image file is physically located on your laptop.</p>
|
<p>I am working with minikube, currently creating ConfigMaps and attaching them as volumes to the pod.
This is my test-config.yaml:</p>
<pre><code>apiVersion: v1
kind: ConfigMap
metadata:
creationTimestamp: 2016-02-18T18:52:05Z
name: bb-configmap
namespace: default
resourceVersion: "516"
uid: b4952dc3-d670-11e5-8cd0-68f728db1985
data:
game.properties: |
enemies=aliens
enemies.cheat=true
lives=3
enemies.cheat.level=noGoodRotten
secret.code.passphrase=UUDDLRLRBABAS
secret.code.allowed=true
secret.code.lives=30
ui.properties: |
color.good=purple
color.bad=yellow
allow.textmode=true
how.nice.to.look=fairlyNice
</code></pre>
<p>my pod.yaml</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: mypod
spec:
containers:
- name: busybox-container
image: busybox
volumeMounts:
- name: config-dir
mountPath: /etc/config
volumes:
- name: config-dir
configMap:
name: bb-configmap
</code></pre>
<p>When I start my pod it keeps restarting with CrashLoopBackOff.
As per my understanding it should be in the Completed state, but it is in CrashLoopBackOff.
Please find the describe pod details below:</p>
<pre><code>Containers:
busybox-container:
Container ID: docker://bb650f7fe715855adb8ca8ab3be04e62924bcda2abfccff78e5e30cf20e2dc02
Image: busybox
Image ID: docker-pullable://busybox@sha256:4f47c01fa91355af2865ac10fef5bf6ec9c7f42ad2321377c21e844427972977
Port: <none>
Host Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 21 Aug 2020 17:23:20 +0530
Finished: Fri, 21 Aug 2020 17:23:20 +0530
Ready: False
Restart Count: 11
Environment: <none>
Mounts:
/etc/config from config-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zkt9s (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
config-dir:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: bb-configmap
Optional: false
default-token-zkt9s:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zkt9s
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned default/mypod to minikube
Normal Pulling 35m (x4 over 36m) kubelet, minikube Pulling image "busybox"
Normal Pulled 34m (x4 over 36m) kubelet, minikube Successfully pulled image "busybox"
Normal Created 34m (x4 over 36m) kubelet, minikube Created container busybox-container
Normal Started 34m (x4 over 36m) kubelet, minikube Started container busybox-container
Warning BackOff 74s (x157 over 35m) kubelet, minikube Back-off restarting failed container
</code></pre>
| <p>In the PodSpec, <code>restartPolicy</code> <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.18/#podspec-v1-core" rel="nofollow noreferrer">defaults to <code>Always</code></a>. For the Pod to go to <code>Completed</code> instead of <code>CrashLoopBackOff</code>, this field should be set to <code>OnFailure</code>.</p>
<p>The default behaviour is that Kubernetes expects a Pod to always be running, unless otherwise specified.</p>
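<p>As a minimal sketch, the Pod from the question only needs the extra <code>restartPolicy</code> field (everything else is unchanged):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  restartPolicy: OnFailure   # let the Pod finish as Completed when the container exits with code 0
  containers:
    - name: busybox-container
      image: busybox
      volumeMounts:
        - name: config-dir
          mountPath: /etc/config
  volumes:
    - name: config-dir
      configMap:
        name: bb-configmap
</code></pre>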
|
<p>I am trying to deploy a configmap onto a cluster </p>
<pre><code>- name: Make/Update all configmaps on the cluster
kubernetes:
api_endpoint: blah
url_username: blah
url_password: blah
inline_data:
apiVersion: v1
kind: ConfigMap
metadata:
name: blah
namespace: blah
data: my-data.txt: "{{ data }}"
state: present
data: |
some = foo
foo = some
(using spinnaker to attach it to pods)
</code></pre>
<p>When I go into the pod and open my-data.txt it displays:</p>
<pre><code>some = foo\n foo = some\n
</code></pre>
<p>I want it to look exactly like the text and print newline rather than \n</p>
<p>The weird thing is, if I put single quotes ' ' somewhere in the text, it prints the text as-is but with the single quotes, so:</p>
<pre><code>data: |
some = foo
foo = some
' '
</code></pre>
<p>prints exactly the same.</p>
<p>I have tried to research but I couldn't find anything and I have been stuck on this for a while now.</p>
| <p>This seems to be similar to <a href="https://github.com/kubernetes/kubernetes/issues/36222" rel="noreferrer">kubernetes/kubernetes issue 36222</a> when creating configMap from files.</p>
<p>In your case, that happens when created from a <code>data</code> block.</p>
<p>The recent <a href="https://github.com/kubernetes/kubernetes/issues/63503" rel="noreferrer">kubernetes/kubernetes issue 63503</a> references all printed issues.</p>
<p>A <a href="https://github.com/kubernetes/kubernetes/issues/36222#issuecomment-497132788" rel="noreferrer">comment mentions</a>:</p>
<blockquote>
<p>I added a new line in a configMap using Tab for identation. After changing to Spaces instead of Tab, I was able to see the configmap as expected...</p>
</blockquote>
<p>August 2020: The <a href="https://github.com/kubernetes/kubernetes/issues/36222#issuecomment-666168348" rel="noreferrer">issue 36222</a> now includes:</p>
<blockquote>
<p>If you just want the raw output as it was read in when created <code>--from-file</code>, you can use <code>jq</code> to get the raw string (without escaped newlines etc)</p>
<p>If you created a configmap from a file like this:</p>
<pre><code>kubectl create configmap myconfigmap --from-file mydata.txt
</code></pre>
<p>Get the data:</p>
<pre><code>kubectl get cm myconfigmap -o json | jq '.data."mydata.txt"' -r
</code></pre>
</blockquote>
<p>Also:</p>
<blockquote>
<p>If the formatting of the cm goes weird, a simple hack to get it back to normal is:</p>
<p><code>kubectl get cm configmap_name -o yaml > cm.yaml</code></p>
<p>Now copy the contents of the <code>cm.yaml</code> file and paste it on <a href="http://www.yamllint.com/" rel="noreferrer"><code>yamllint.com</code></a>. Yamllint.com is a powerful tool to check the linting of yaml files.<br />
This will provide you with the configmap as expected with correct formatting.</p>
<p>Paste the output in another yaml file (for e.g - cm_ready.yaml)</p>
<pre><code> kubectl apply -f cm_ready.yaml
</code></pre>
</blockquote>
<hr />
<p>Update Nov. 2020, the <a href="https://github.com/kubernetes/kubernetes/issues/36222#issuecomment-729237587" rel="noreferrer">same issue</a> includes:</p>
<blockquote>
<p>I was able to fix this behavior by:</p>
<ul>
<li><p>Don't use tabs, convert to spaces</p>
</li>
<li><p>To remove spaces before a newline character, use this:</p>
<pre><code> sed -i -E 's/[[:space:]]+$//g' File.ext
</code></pre>
</li>
</ul>
<p>It seems also will convert CRLF to LF only.</p>
</blockquote>
|
<p>In my Kubernetes <code>Service</code>, running on OpenShift, I have an annotation like this:</p>
<pre><code> annotations:
service.beta.openshift.io/serving-cert-secret-name: "..."
</code></pre>
<p>which works fine on OpenShift 4.x.</p>
<p>However I also want to support OpenShift 3.11, which requires the similar annotation (note <em>alpha</em>):</p>
<pre><code>service.alpha.openshift.io/serving-cert-secret-name: "..."
</code></pre>
<p>Can I just include <strong>both</strong> annotations in my yaml file in order to support both versions? In other words will OpenShift 4.x ignore the <code>alpha</code> annotation; and will OpenShift 3.11 ignore the <code>beta</code> annotation?</p>
| <p>Yes</p>
<p>This is a common pattern for alpha/beta annotation migrations in the Kubernetes ecosystem: each controller only looks for its specific annotation, and any annotation a controller doesn't recognise will be ignored.</p>
<p>If a controller is written to be backwards-compatible, they will normally look for the new beta annotation, and only if not finding it respect the alpha one.</p>
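<p>As a sketch, a Service carrying both annotations could look like this (the service name, secret name, selector and port below are placeholders):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # recognised by OpenShift 3.11
    service.alpha.openshift.io/serving-cert-secret-name: "my-cert-secret"
    # recognised by OpenShift 4.x
    service.beta.openshift.io/serving-cert-secret-name: "my-cert-secret"
spec:
  selector:
    app: my-app
  ports:
    - port: 8443
      targetPort: 8443
</code></pre>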
|
<p>I have observed that <code>kubectl</code> inserts an additional <code>\</code> to linefeed characters when using the <code>--from-literal</code> option. It works as expected when loading "the same" content from a file. Clearly, there must be a difference because the stdout looks different but I fail to see why.</p>
<pre><code>echo "api_key" >> .tmp
echo "api_value" >> .tmp
cat -e .tmp
kubectl delete secrets api-env
kubectl create secret generic api-env --from-file=.api=.tmp
rm .tmp
kubectl get secret api-env -o json | jq '.data | map_values(@base64d)'
#prints:
#api_key$
#api_value$
#secret "api-env" deleted
#secret/api-env created
#{
# ".api": "api_key\napi_value\n"
#}
</code></pre>
<p>The commands above create a single linefeed character on each line. As demonstrated by <code>cat -e</code>, there are two linefeed characters in the file, one at the end of each line.</p>
<p>Doing the same using a string results in the <code>\n</code> to be escaped.</p>
<pre><code>api="api_key\napi_value\n"
echo $api
kubectl delete secrets api-env
kubectl create secret generic api-env --from-literal=.api=$api
kubectl get secret api-env -o json | jq '.data | map_values(@base64d)'
#prints:
#api_key\napi_value\n
#secret "api-env" deleted
#secret/api-env created
#{
# ".api": "api_key\\napi_value\\n"
#}
</code></pre>
<p>The <code>echo</code> command shows the string as it was supplied to the variable, however after loading that to kubernetes the <code>\n</code> are escaped and the content is considered to be a single line.</p>
<p>This is important because in several instances where I am working with <code>kubectl</code> I am not allowed to write to the local file system.</p>
<p>What is happening here and how to stop kubernetes from escaping the <code>\n</code> character?</p>
<p>Environment:</p>
<ul>
<li>zsh 5.8 (x86_64-apple-darwin21.0)</li>
<li>Darwin Kernel Version 21.4.0: root:xnu-8020.101.4~15/RELEASE_X86_64 x86_64</li>
<li>kubectl Client:"v1.20.10"</li>
<li>kubectl Server: "v1.23.3"</li>
<li>minikube version: v1.25.2</li>
</ul>
| <p>When you use <code>echo $api</code>, <code>echo</code> itself changes the contents: On POSIX-compliant shells with XSI extensions enabled (and while zsh isn't POSIX-compliant in general, it <em>does</em> implement this aspect), the <code>\n</code>s are replaced with literal newlines.</p>
<p>That <em>doesn't</em> happen with <code>--from-literal=.api=$api</code>; there, your <code>\n</code>s are still two-character sequences, first a backslash, then a <code>n</code>.</p>
<p>Given that you're on a shell that supports using <code>$'\n'</code> as a way to represent a newline literal directly, consider <code>--from-literal=.api="api_key"$'\n'"api_value"</code></p>
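<p>A minimal sketch of that, reusing the secret name and key from the question:</p>
<pre><code>kubectl delete secret api-env
kubectl create secret generic api-env \
  --from-literal=.api="api_key"$'\n'"api_value"$'\n'

# The decoded value now contains real newlines
kubectl get secret api-env -o json | jq '.data | map_values(@base64d)'
</code></pre>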
|
<p>I am trying to create a statefulset that runs zookeeper, but I want it to run as non-root (i.e. under the zookeeper user).<br />
This is the image used for this:</p>
<p><a href="https://github.com/kubernetes-retired/contrib/blob/master/statefulsets/zookeeper/Dockerfile" rel="noreferrer">https://github.com/kubernetes-retired/contrib/blob/master/statefulsets/zookeeper/Dockerfile</a></p>
<p>This is how I am trying to mount the volumes (apparently I need an init container according to <a href="https://serverfault.com/questions/906083/how-to-mount-volume-with-specific-uid-in-kubernetes-pod">this</a>):</p>
<pre><code> initContainers:
# Changes username for volumes
- name: changes-username
image: busybox
command:
- /bin/sh
- -c
- |
chown -R 1000:1000 /var/lib/zookeeper /etc/zookeeper-conf # <<<<--- this returns
# cannot change ownership, permission denied
# read-only filesystem.
containers:
- name: zookeeper
imagePullPolicy: IfNotPresent
image: my-repo/k8szk:3.4.14
command:
- sh
- -c
- |
zkGenConfig.sh # The script right here requires /var/lib/zookeper to be owned by zookeper user.
# Initially zookeeper user does own it as per the dockerfile above,
# but when mounting the volume, the directory becomes owned by root
# hence the script fails.
volumeMounts:
- name: zk-data
mountPath: /var/lib/zookeeper
- name: zk-log4j-config
mountPath: /etc/zookeeper-conf
</code></pre>
<p>I also tried to add the securityContext: <code>fsGroup: 1000</code> with no change.</p>
| <p>This can be configured using the <a href="https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#configure-volume-permission-and-ownership-change-policy-for-pods" rel="noreferrer">security context</a>, e.g.</p>
<pre><code>securityContext:
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
</code></pre>
<p>Then you don't need the initContainer at all - Kubernetes will handle the recursive chown as part of making the volume ready.</p>
<p>One issue here is that the Dockerfile linked doesn't contain a <code>USER</code> statement, so Kubernetes doesn't know to start the pod as the correct user - the <code>runAsUser</code> will fix that.</p>
<p>The reason the <code>initContainer</code> hack you're trying isn't working is because you're also trying to change the ownership of the read-only config directory. ConfigMaps are mounted read-only, you can't chown them. (This used to be different, but was changed for security reasons)</p>
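<p>A sketch of the relevant part of the pod template (image, script and mount paths taken from the question; no init container needed):</p>
<pre><code>spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
  containers:
    - name: zookeeper
      image: my-repo/k8szk:3.4.14
      command:
        - sh
        - -c
        - zkGenConfig.sh
      volumeMounts:
        - name: zk-data
          mountPath: /var/lib/zookeeper
        - name: zk-log4j-config
          mountPath: /etc/zookeeper-conf
</code></pre>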
|
<p>Basically, I need clarification on whether this is the right way to do it: I am able to run a sed command inside a container in a k8s pod. Now I want to loop the same sed 10 times, but I am not sure whether this is working, though I get no error from the Kubernetes pods or logs. Please confirm if my looping is good.</p>
<pre><code>'sed -i "s/\(training:\).*/\1 12/" ghav/default_sql.spec.txt &&
lant estimate -e dlav/lat/experiment_specs/default_sql.spec.txt -r /out'
</code></pre>
<p>I want to run this working command 10 times inside the same container. Is the below right?</p>
<pre><code>'for run in $(seq 1 10); do sed -i "s/\(training:\).*/\1 12/" ghav/default_sql.spec.txt &&
lant estimate -e dlav/lat/experiment_specs/default_sql.spec.txt -r /out; done'
</code></pre>
<p>The pod gets created and is running fine, but I am not sure how to confirm that my loop is good and that it runs 10 times...</p>
<p>inside pod describe I see below</p>
<pre><code>Args:
sh
-c
'for run in $(seq 1 10); do sed -i "s/\(training:\).*/\1 12/" ghav/default_sql.spec.txt &&
lant estimate -e dlav/lat/experiment_specs/default_sql.spec.txt -r /out; done'
</code></pre>
| <p>The "<a href="https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#run-a-command-in-a-shell" rel="nofollow noreferrer">Define a Command and Arguments for a Container</a>" does mention:</p>
<blockquote>
<p>To see the output of the command that ran in the container, view the logs from the Pod:</p>
<pre><code>kubectl logs command-demo
</code></pre>
</blockquote>
<p>So make sure that your command, for testing, does echo something, and check the pod logs.</p>
<pre><code>sh -c 'for run in $(seq 1 10); do echo "$run"; done'
</code></pre>
<p>As in:</p>
<pre><code>command: ["/bin/sh"]
args: ["-c", "for run in $(seq 1 10); do echo \"$run\"; done"]
</code></pre>
<p>(using <code>seq</code> here, as mentioned in <a href="https://github.com/kubernetes/kubernetes/issues/56631#issuecomment-348421974" rel="nofollow noreferrer">kubernetes issue 56631</a>)</p>
<p>For any complex sequence of commands mixing quotes, it is best to wrap that sequence in a script <em>file</em>, and call that executable file 10 times.
The logs will confirm that the loop is executed 10 times.</p>
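<p>As a sketch, such a wrapper script (a hypothetical <code>run-estimates.sh</code> reusing the commands from the question, baked into the image or mounted from a ConfigMap) could look like:</p>
<pre><code>#!/bin/sh
for run in $(seq 1 10); do
  echo "starting run $run"
  sed -i "s/\(training:\).*/\1 12/" ghav/default_sql.spec.txt
  lant estimate -e dlav/lat/experiment_specs/default_sql.spec.txt -r /out
  echo "finished run $run"
done
</code></pre>
<p>The container then only has to run that one file, and the echoed run numbers in the pod logs confirm each iteration.</p>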
|
<p><em>As far as I'm concerned, this is more of a development question than a server question, but it lies very much on the boundary of the two, so feel free to migrate to serverfault.com if that's the consensus.</em></p>
<p>I have a service, let's call it <code>web</code>, and it is declared in a <code>docker-compose.yml</code> file as follows:</p>
<pre><code> web:
image: webimage
command: run start
build:
context: ./web
dockerfile: Dockerfile
</code></pre>
<p>In front of this, I have a reverse-proxy server running Apache Traffic Server. There is a simple mapping rule in the <a href="https://docs.trafficserver.apache.org/en/latest/admin-guide/files/remap.config.en.html" rel="nofollow noreferrer">url remapping config file</a></p>
<pre><code>map / http://web/
</code></pre>
<p>So all incoming requests are mapped onto the <code>web</code> service described above. This works just peachily in <code>docker-compose</code>, however when I move the service to kubernetes with the following service description:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
labels:
io.kompose.service: web
name: web
spec:
clusterIP: None
ports:
- name: headless
port: 55555
targetPort: 0
selector:
io.kompose.service: web
status:
loadBalancer: {}
</code></pre>
<p>...traffic server complains because it cannot resolve the DNS name <code>web</code>.</p>
<p>I can resolve this by slightly changing the DNS behaviour of traffic server with the following config change:</p>
<pre><code>CONFIG proxy.config.dns.search_default_domains INT 1
</code></pre>
<p>(see <a href="https://docs.trafficserver.apache.org/en/7.1.x/admin-guide/files/records.config.en.html#dns" rel="nofollow noreferrer">https://docs.trafficserver.apache.org/en/7.1.x/admin-guide/files/records.config.en.html#dns</a>)</p>
<p>This config change is described as follows:</p>
<blockquote>
<p>Traffic Server can attempt to resolve unqualified hostnames by expanding to the local domain. For example if a client makes a request to an unqualified host (e.g. host_x) and the Traffic Server local domain is y.com, then Traffic Server will expand the hostname to host_x.y.com.</p>
</blockquote>
<p>Now everything works just great in kubernetes.</p>
<p>However, when running in <code>docker-compose</code>, traffic-server complains about not being able to resolve <code>web</code>.</p>
<p>So, I can get things working on both platforms, but this requires config changes to do so. I could fire a start-up script for traffic-server to establish if we're running in kube or docker and write the config line above depending on where we are running, but ideally, I'd like the DNS to be consistent across platforms. My understanding of DNS (and in particular, DNS default domains/ local domains) is patchy.</p>
<p>Any pointers? Ideally, a local domain for docker-compose seems like the way to go here.</p>
| <p>The default Kubernetes local domain (for services in the <code>default</code> namespace) is</p>
<pre><code>default.svc.cluster.local
</code></pre>
<p>which means that the fully qualified name of the <code>web</code> service under kubernetes is <code>web.default.svc.cluster.local</code></p>
<p>So, in the <code>docker-compose</code> file, under the trafficserver config section, I can create an alias for <code>web</code> as <code>web.default.svc.cluster.local</code> with the following <code>docker-compose.yml</code> syntax:</p>
<pre><code>version: "3"
services:
web:
# ...
trafficserver:
# ...
links:
- "web:web.default.svc.cluster.local"
</code></pre>
<p>and update the mapping config in trafficserver to:</p>
<pre><code>map / http://web.default.svc.cluster.local/
</code></pre>
<p>and now the <code>web</code> service is reachable using the same domain name across <code>docker-compose</code> and <code>kubernetes</code>.</p>
|
<p>I am using helm 3 to install nexus in kubernetes v1.18:</p>
<pre><code>helm install stable/sonatype-nexus --name=nexus
</code></pre>
<p>and then expose nexus to the outside by using traefik 2.x with the domain <code>nexus.dolphin.com</code>. But when I use the domain to access the nexus service it gives me this message:</p>
<pre><code>Invalid host. To browse Nexus, click here/. To use the Docker registry, point your client
</code></pre>
<p>and I have read <a href="https://stackoverflow.com/questions/52274286/run-nexus-in-kubernetes-cluster-using-helm">this question</a>, but it does not seem to suit my situation. This is my nexus yaml config now:</p>
<pre><code>kind: Deployment
apiVersion: apps/v1
metadata:
name: nexus-sonatype-nexus
namespace: infrastructure
selfLink: /apis/apps/v1/namespaces/infrastructure/deployments/nexus-sonatype-nexus
uid: 023de15b-19eb-442d-8375-11532825919d
resourceVersion: '1710210'
generation: 3
creationTimestamp: '2020-08-16T07:17:07Z'
labels:
app: sonatype-nexus
app.kubernetes.io/managed-by: Helm
chart: sonatype-nexus-1.23.1
fullname: nexus-sonatype-nexus
heritage: Helm
release: nexus
annotations:
deployment.kubernetes.io/revision: '1'
meta.helm.sh/release-name: nexus
meta.helm.sh/release-namespace: infrastructure
managedFields:
- manager: Go-http-client
operation: Update
apiVersion: apps/v1
time: '2020-08-16T07:17:07Z'
fieldsType: FieldsV1
- manager: kube-controller-manager
operation: Update
apiVersion: apps/v1
time: '2020-08-18T16:26:34Z'
fieldsType: FieldsV1
spec:
replicas: 1
selector:
matchLabels:
app: sonatype-nexus
release: nexus
template:
metadata:
creationTimestamp: null
labels:
app: sonatype-nexus
release: nexus
spec:
volumes:
- name: nexus-sonatype-nexus-data
persistentVolumeClaim:
claimName: nexus-sonatype-nexus-data
- name: nexus-sonatype-nexus-backup
emptyDir: {}
containers:
- name: nexus
image: 'sonatype/nexus3:3.20.1'
ports:
- name: nexus-docker-g
containerPort: 5003
protocol: TCP
- name: nexus-http
containerPort: 8081
protocol: TCP
env:
- name: install4jAddVmParams
value: >-
-Xms1200M -Xmx1200M -XX:MaxDirectMemorySize=2G
-XX:+UnlockExperimentalVMOptions
-XX:+UseCGroupMemoryLimitForHeap
- name: NEXUS_SECURITY_RANDOMPASSWORD
value: 'false'
resources: {}
volumeMounts:
- name: nexus-sonatype-nexus-data
mountPath: /nexus-data
- name: nexus-sonatype-nexus-backup
mountPath: /nexus-data/backup
livenessProbe:
httpGet:
path: /
port: 8081
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 1
periodSeconds: 30
successThreshold: 1
failureThreshold: 6
readinessProbe:
httpGet:
path: /
port: 8081
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 1
periodSeconds: 30
successThreshold: 1
failureThreshold: 6
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
- name: nexus-proxy
image: 'quay.io/travelaudience/docker-nexus-proxy:2.5.0'
ports:
- name: nexus-proxy
containerPort: 8080
protocol: TCP
env:
- name: ALLOWED_USER_AGENTS_ON_ROOT_REGEX
value: GoogleHC
- name: CLOUD_IAM_AUTH_ENABLED
value: 'false'
- name: BIND_PORT
value: '8080'
- name: ENFORCE_HTTPS
value: 'false'
- name: NEXUS_DOCKER_HOST
- name: NEXUS_HTTP_HOST
- name: UPSTREAM_DOCKER_PORT
value: '5003'
- name: UPSTREAM_HTTP_PORT
value: '8081'
- name: UPSTREAM_HOST
value: localhost
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
imagePullPolicy: IfNotPresent
restartPolicy: Always
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirst
serviceAccountName: nexus-sonatype-nexus
serviceAccount: nexus-sonatype-nexus
securityContext:
fsGroup: 2000
schedulerName: default-scheduler
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
revisionHistoryLimit: 10
progressDeadlineSeconds: 600
status:
observedGeneration: 3
replicas: 1
updatedReplicas: 1
readyReplicas: 1
availableReplicas: 1
conditions:
- type: Progressing
status: 'True'
lastUpdateTime: '2020-08-18T16:23:54Z'
lastTransitionTime: '2020-08-18T16:23:54Z'
reason: NewReplicaSetAvailable
message: >-
ReplicaSet "nexus-sonatype-nexus-79fd4488d5" has successfully
progressed.
- type: Available
status: 'True'
lastUpdateTime: '2020-08-18T16:26:34Z'
lastTransitionTime: '2020-08-18T16:26:34Z'
reason: MinimumReplicasAvailable
message: Deployment has minimum availability.
</code></pre>
<p>Why can nexus not be accessed via the domain by default, and what should I do to access nexus by domain?</p>
| <p>From <a href="https://github.com/helm/charts/tree/master/stable/sonatype-nexus" rel="nofollow noreferrer">the documentation</a> you should set a property of the helm chart: <code>nexusProxy.env.nexusHttpHost</code> to <code>nexus.dolphin.com</code></p>
<p>The docker image used here has a proxy that allows you to access the Nexus HTTP and Nexus Docker services by different domains, if you don't specify either then you get the behaviour you're seeing.</p>
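<p>With Helm 3 that could be a sketch like the following (chart, release and domain names taken from the question; the value name comes from the chart documentation linked above):</p>
<pre><code>helm upgrade --install nexus stable/sonatype-nexus \
  --set nexusProxy.env.nexusHttpHost=nexus.dolphin.com
</code></pre>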
|
<p>I have a running Gitlab CI pipeline in my Kubernetes cluster.</p>
<p>When the tests are failing, I need to grab the app screenshots and logs from the pod where it ran so that they are available where the Gitlab Runner is expecting them.</p>
<p>I tried the <code>kubectl cp <namespace>/<podname>:/in-pod-path /local/path</code> to copy the files from a stopped pod (having the <code>tar</code> command installed in my Docker image), but <a href="https://github.com/kubernetes/kubectl/issues/454" rel="nofollow noreferrer">it isn't yet supported</a>.</p>
<p>Until this is available, I guess I need a volume mounted in the pod at the path where my artefacts are saved, so that I can grab them from this volume after the test execution is finished.</p>
<p>I'm wondering <strong>what kind of volume should I use</strong>, knowing that I have 3 kube workers; I don't need that volume to be persistent over time, just to be shared across the nodes.</p>
<p>I'm expecting to deploy this volume before deploying the pod running my tests, mounting this volume. When a test failure is detected, I would extract the artefacts to the right place and delete the pod and the volume.</p>
| <p>You could try and define a PVC with access mode <code>ReadWriteMany</code>, in order to get a volume shared between multiple pods.<br>
See "<a href="https://stackoverflow.com/a/52564314/6309">How to share storage between Kubernetes pods?</a>"</p>
<p>It would still be a persistent volume (to support that), with all the pods scheduled to the node with that volume.</p>
<blockquote>
<p>There are several volume types that are suitable for that and not tied to any cloud provider:</p>
<ul>
<li>NFS</li>
<li>RBD (Ceph Block Device)</li>
<li>CephFS</li>
<li>Glusterfs</li>
<li>Portworx Volumes</li>
</ul>
</blockquote>
<p>But:</p>
<blockquote>
<p>I don't really need to share the volume between many pods, I'm fine to create a volume per pods.<br>
I'd like to avoid installing/configuring a node shared volume service from the list you gave.<br>
I'm looking for an <strong>ephemeral volume</strong> if that is possible? </p>
</blockquote>
<p>Then an <strong><a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#local-ephemeral-storage" rel="nofollow noreferrer">ephemeral <em>storage</em></a></strong> is possible:</p>
<blockquote>
<p>Kubernetes version 1.8 introduces a new resource, ephemeral-storage for managing local ephemeral storage. In each Kubernetes node, kubelet’s root directory (<code>/var/lib/kubelet</code> by default) and log directory (<code>/var/log</code>) are stored on the root partition of the node.<br>
This partition is also shared and consumed by Pods via <code>emptyDir</code> volumes, container logs, image layers and container writable layers.</p>
</blockquote>
<p>In your case, you need a <a href="https://docs.openshift.com/container-platform/3.10/architecture/additional_concepts/ephemeral-storage.html#section-type-runtime" rel="nofollow noreferrer">runtime ephemeral storage</a>.</p>
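<p>A minimal sketch of that approach, using an <code>emptyDir</code> volume (the pod name, image and mount path below are placeholders):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
  name: test-runner
spec:
  containers:
    - name: tests
      image: my-test-image          # placeholder for the image running the Gitlab tests
      volumeMounts:
        - name: artefacts
          mountPath: /artefacts     # the tests write screenshots/logs here
  volumes:
    - name: artefacts
      emptyDir: {}                  # ephemeral: lives only as long as the Pod does
</code></pre>
<p>Note that an <code>emptyDir</code> is deleted together with the Pod, so the artefacts have to be copied out (for example with <code>kubectl cp</code> while the Pod is still running, or by a sidecar) before the Pod is removed.</p>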
|
<p>I have a script in a pod called <code>script01</code> and it can take parameters.
I run the script through the following:</p>
<pre><code>POD=<pod name>
runScript="kubectl -n nmspc exec $POD -- script01"
$runScript --command "do stuff"
</code></pre>
<p>The reason I run it this way is that I don't have access to create a script on the local machine but I do have access to the script on the pod. </p>
<p>The issue is I want to pass the IP of the host machine to the pod and wanted to do it using an environment variable. I've tried using bash -c to pass the parameters but when calling the script through the variable, it doesn't append the parameters.</p>
<pre><code>runScript="kubectl -n nmspc exec $POD -- bash -c \"export curIP=123 && script01\""
</code></pre>
<p>but it does work if I run it with
<code>$runScript --command "do stuff"</code></p>
<p>How can I pass an environment variable to the pod but still be able to call the script through the variable?</p>
| <p><code>/usr/bin/env</code> exports values passed in <code>key=value</code> pairs into the environment of any program it's used to invoke.</p>
<pre><code>kubectl -n nmspc exec "$POD" -- env curIP=123 script01
</code></pre>
<p>Note that you should <strong>never</strong> use <code>$runScript</code> or any other unquoted expansion to invoke a shell command. See <a href="http://mywiki.wooledge.org/BashFAQ/050" rel="noreferrer">BashFAQ #50</a> -- <em>I'm trying to put a command in a variable, but the complex cases always fail!</em></p>
<hr>
<p>As an example of how you <em>could</em> keep <code>bash -c</code> in place but have your command work, consider:</p>
<pre><code>runScript() {
kubectl -n nmspc exec "$POD" -- bash -c 'export curIP=123 && script01 "$@"' _ "$@"
}
runScript --command "do stuff"
</code></pre>
<p>Here, <code>runScript</code> is a <em>function</em>, not a string variable, and it explicitly passes its entire argument list through to <code>kubectl</code>. Similarly, the copy of <code>bash</code> started by <code>kubectl</code> explicitly passes <em>its</em> argument list (after the <code>$0</code> placeholder <code>_</code>) through to <code>script01</code>, so the end result is your arguments making it through to your final program.</p>
|
<p>I'm able to successfully run a .NET 5 Console Application with a <code>BackgroundService</code> in an Azure Kubernetes cluster on Ubuntu 18.04. In fact, the <code>BackgroundService</code> is all that really runs: just grabs messages from a queue, executes some actions, then terminates when Kubernetes tells it to stop, or the occasional exception.</p>
<p>It's this last scenario which is giving me problems. When the <code>BackgroundService</code> hits an unrecoverable exception, I'd like the container to stop (complete, or whatever state will cause Kubernetes to either restart or destroy/recreate the container).</p>
<p>Unfortunately, any time an exception is encountered, the <code>BackgroundService</code> <em>appears</em> to hit the <code>StopAsync()</code> function (from what I can see in the logs and console output), but the container stays in a running state and never restarts. My Main() is as appears below:</p>
<pre><code> public static async Task Main(string[] args)
{
// Build service host and execute.
var host = CreateHostBuilder(args)
.UseConsoleLifetime()
.Build();
// Attach application event handlers.
AppDomain.CurrentDomain.ProcessExit += OnProcessExit;
AppDomain.CurrentDomain.UnhandledException += new UnhandledExceptionEventHandler(OnUnhandledException);
try
{
Console.WriteLine("Beginning WebSec.Scanner.");
await host.StartAsync();
await host.WaitForShutdownAsync();
Console.WriteLine("WebSec.Scanner has completed.");
}
finally
{
Console.WriteLine("Cleaning up...");
// Ensure host is properly disposed.
if (host is IAsyncDisposable ad)
{
await ad.DisposeAsync();
}
else if (host is IDisposable d)
{
d.Dispose();
}
}
}
</code></pre>
<p>If relevant, those event handlers for <code>ProcessExit</code> and <code>UnhandledException</code> exist to flush the AppInsights telemetry channel (maybe that's blocking it?):</p>
<pre><code> private static void OnProcessExit(object sender, EventArgs e)
{
// Ensure AppInsights logs are submitted upstream.
Console.WriteLine("Flushing logs to AppInsights");
TelemetryChannel.Flush();
}
private static void OnUnhandledException(object sender, UnhandledExceptionEventArgs e)
{
var thrownException = (Exception)e.ExceptionObject;
Console.WriteLine("Unhandled exception thrown: {0}", thrownException.Message);
// Ensure AppInsights logs are submitted upstream.
Console.WriteLine("Flushing logs to AppInsights");
TelemetryChannel.Flush();
}
</code></pre>
<p>I am only overriding <code>ExecuteAsync()</code> in the <code>BackgroundService</code>:</p>
<pre><code> protected async override Task ExecuteAsync(CancellationToken stoppingToken)
{
this.logger.LogInformation(
"Service started.");
try
{
// Loop until the service is terminated.
while (!stoppingToken.IsCancellationRequested)
{
// Do some work...
}
}
catch (Exception ex)
{
this.logger.LogWarning(
ex,
"Terminating due to exception.");
}
this.logger.LogInformation(
"Service ending.",
}
</code></pre>
<p>My Dockerfile is simple and has this line to run the service:</p>
<pre><code>ENTRYPOINT ["dotnet", "MyService.dll"]
</code></pre>
<p>Am I missing something obvious? I feel like there's something about running this as a Linux container that I'm forgetting in order to make this run properly.</p>
<p>Thank you!</p>
| <p>Here is a full example of how to use <code>IHostApplicationLifetime.StopApplication()</code>.</p>
<pre><code>void Main()
{
var host = Host.CreateDefaultBuilder()
.ConfigureServices((context, services) =>
{
services.AddHostedService<MyService>();
})
.Build();
Console.WriteLine("Starting service");
host.Run();
Console.WriteLine("Ended service");
}
// You can define other methods, fields, classes and namespaces here
public class MyService : BackgroundService
{
private readonly IHostApplicationLifetime _lifetime;
private readonly Random _rnd = new Random();
public MyService(IHostApplicationLifetime lifetime)
{
_lifetime = lifetime;
}
protected override async Task ExecuteAsync(CancellationToken stoppingToken)
{
try
{
while (true)
{
stoppingToken.ThrowIfCancellationRequested();
var nextNumber = _rnd.Next(10);
if (nextNumber < 8)
{
Console.WriteLine($"We have number {nextNumber}");
}
else
{
throw new Exception("Number too high");
}
await Task.Delay(1000);
}
}
// If the application is shutting down, ignore it
catch (OperationCanceledException e) when (e.CancellationToken == stoppingToken)
{
Console.WriteLine("Application is shutting itself down");
}
// Otherwise, we have a real exception, so must ask the application
// to shut itself down.
catch (Exception e)
{
Console.WriteLine("Oh dear. We have an exception. Let's end the process.");
// Signal to the OS that this was an error condition by
// setting the exit code.
Environment.ExitCode = 1;
_lifetime.StopApplication();
}
}
}
</code></pre>
<p>Typical output from this program will look like:</p>
<pre><code>Starting service
We have number 0
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
Content root path: C:\Users\rowla\AppData\Local\Temp\LINQPad6\_spgznchd\shadow-1
We have number 2
Oh dear. We have an exception. Let's end the process.
info: Microsoft.Hosting.Lifetime[0]
Application is shutting down...
Ended service
</code></pre>
|
<p>With a Kubernetes cluster in place, what would be the alternative way to send configurations/passwords into containers? I know about the secrets way but what I'm looking for is a centralised environment that has the password encrypted, not base64 encoded.</p>
| <p>You could also consider <strong>Kamus</strong> (and <strong><a href="https://kamus.soluto.io/docs/user/crd/" rel="nofollow noreferrer">KamusSecret</a></strong>, see at the end):</p>
<blockquote>
<p>An open source, GitOps, zero-trust secrets encryption and decryption solution for Kubernetes applications.</p>
<p>Kamus enables users to easily encrypt secrets that can be decrypted only by the application running on Kubernetes.<br />
The encryption is done using strong encryption providers (currently supported: Azure KeyVault, Google Cloud KMS and AES).<br />
To learn more about Kamus, check out the <a href="https://blog.solutotlv.com/can-kubernetes-keep-a-secret?utm_source=github" rel="nofollow noreferrer">blog post</a> and <a href="https://www.slideshare.net/SolutoTLV/can-kubernetes-keep-a-secret" rel="nofollow noreferrer">slides</a>.</p>
<pre><code>helm repo add soluto https://charts.soluto.io
helm upgrade --install kamus soluto/kamus
</code></pre>
<p>Architecture: Kamus has 3 components:</p>
<ul>
<li>Encrypt API</li>
<li>Decrypt API</li>
<li>Key Management System (KMS)</li>
</ul>
<p>The encrypt and decrypt APIs handle encryption and decryption requests. The KMS is a wrapper for various cryptographic solutions. Currently supported:</p>
<ul>
<li>AES - uses one key for all secrets</li>
<li>Azure KeyVault - creates one key per service account.</li>
<li>Google Cloud KMS - creates one key per service account.</li>
</ul>
</blockquote>
<hr />
<p>As noted by <a href="https://stackoverflow.com/users/4792970/omer-levi-hevroni">Omer Levi Hevroni</a> in <a href="https://stackoverflow.com/questions/54542638/kubernetes-with-secrets-alternative/54832340#comment102010506_54832340">the comments</a>:</p>
<blockquote>
<p>We are not planning to support env vars directly, as there are some security issues with using them.<br />
As an alternative, you can use <strong><a href="https://kamus.soluto.io/docs/user/crd/" rel="nofollow noreferrer">KamusSecret</a></strong> to create a regular secret and mount it</p>
</blockquote>
<blockquote>
<p>KamusSecret works very similary to regular secret encryption flow with Kamus.<br />
The encrypted data is represented in a format that is identical to regular Kubernetes Secrets.<br />
Kamus will create an identical secret with the decrypted content.</p>
</blockquote>
|
<p>I ran a bash script from Java code to sort a file of tens of millions of strings of <25 characters. The Java application is running in a Kubernetes pod. The script failed and the error message is:</p>
<pre><code>bash: line 1: 425 Killed
sort /tmp/keys > /tmp/keys_sorted
</code></pre>
<p>What does 425 actually mean here? How can I fix it?</p>
| <p>This means that <code>sort</code> (the 425 in the message is its process ID) was sent a SIGKILL signal.</p>
<p>On modern systems without something weird going on, this <em>almost</em> always means you're out of RAM.</p>
<p>Linux supports something called "memory overcommit". Due to language runtimes (Java among them!) habitually allocating far more virtual memory than they'll ever need matching physical memory for, the kernel allows allocation requests to succeed <em>even if it doesn't have enough memory to back them</em>. When the application actually tries to <em>access</em> those virtual memory pages, the kernel's fault handler gets invoked to try to find physical memory to map to them.</p>
<p>Sometimes it can free up space by deleting unused pages -- discarding block cache contents, or memory pages that are mapped to file contents and thus can be reloaded from that file later. Sometimes, however, there's more memory outstanding than can be free'd, and the kernel invokes the "OOM killer" -- where OOM stands for "out of memory" -- to kill some processes and <em>make</em> more free RAM.</p>
<hr />
<p>In the case of <code>sort</code>ing a large file specifically:</p>
<ul>
<li>Make sure you're using GNU sort, not a clone like busybox sort.
This is because GNU sort, but not all its smaller clones, supports breaking a large stream into pieces, writing those pieces to disk, and then doing a merge sort to reassemble them later; so it can sort files larger than available RAM.</li>
<li>Make sure that you have temporary space that is <em>actually</em> disk.
If GNU sort tries to conserve RAM by shuffling contents off to disk <em>that is actually RAM itself</em>, that's obviously not going to go well.</li>
<li>Use the GNU <code>sort</code> argument <code>-S</code> to limit the amount of memory GNU sort will allocate before shunting data to temporary files on disk. (For example, one can use <code>sort -S 32M</code> to allow 32MB of RAM to be allocated for working space).</li>
</ul>
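<p>Putting those suggestions together, a sketch (assuming <code>/var/tmp</code> is backed by real disk in your image) could be:</p>
<pre><code># Cap sort's in-memory buffer and point its temporary files at real disk
sort -S 32M -T /var/tmp /tmp/keys > /tmp/keys_sorted
</code></pre>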
|
<p>I have created a workload on Rancher. This workload was created from an image hosted in a GitLab CI project registry.</p>
<p>I want to force rancher to download a new version of this image and upgrade workload.</p>
<p>I want to do this from a .gitlab-ci.yml script. How to do this with Rancher <strong>version 2</strong>? With Rancher 1.6 I used this script:</p>
<pre><code>deploy:
stage: deploy
image: cdrx/rancher-gitlab-deploy
script:
- upgrade --stack mystack --service myservice --no-start-before-stopping
</code></pre>
| <p>In rancher 2, much of the management of workloads is delegated to Kubernetes via its api or CLI (kubectl).</p>
<p>You could patch the deployment to specify a new image/version, but if you are using a tag like <code>:latest</code> which moves, you will need to force Kubernetes to redeploy the pods by changing something about the deployment spec. </p>
<p>One common way to do this is to change/add an environment variable, which forces a redeploy. </p>
<p>In Gitlab, set two variables in your gitlab project or group to pass authentication information into the build.</p>
<p>The <code>kubectl patch</code> will update or add an environment variable called <code>FORCE_RESTART_AT</code> on your deployment's container that will force a redeploy each time it is set because Gitlab's pipeline ID changes. </p>
<p>You will need to specify the namespace, the name of your deployment, the name of the container and the image. If the image tag is changing, there is no need to supply the environment variable. If you are using <code>:latest</code>, be sure that your container's <code>imagePullPolicy: Always</code> is set, which is the default if Kubernetes detects an image using <code>:latest</code>.</p>
<p>The image <code>diemscott/rancher-cli-k8s</code> is a simple image derived from <code>rancher/cli</code> that also includes <code>kubectl</code>.</p>
<pre><code>RANCHER_SERVER_URL=https://rancher.example.com
RANCHER_API_TOKEN="token-sd5kk:d27nrsstx6z5blxgkmspqv94tzkptnrpj7rkcrt7vtxt28tvw4djxp"
deploy:
stage: deploy
image: diemscott/rancher-cli-k8s:v2.0.2
script:
- rancher login "$RANCHER_SERVER_URL" -t "$RANCHER_API_TOKEN"
- rancher kubectl --namespace=default patch deployment nginx --type=strategic -p '{"spec":{"template":{"spec":{"containers":[{"name":"nginx","image": "nginx","env":[{"name":"FORCE_RESTART_AT","value":"$CI_PIPELINE_ID"}]}]}}}}'
</code></pre>
|
<p><code>kubectl proxy</code> and <code>kubectl port-forwarding</code> look similar and confusing to me, what are their main differences and use cases?</p>
| <p>As mentioned in "<a href="https://stackoverflow.com/a/51469150/6309">How <code>kubectl port-forward</code> works?</a>"</p>
<blockquote>
<p><code>kubectl port-forward</code> forwards connections to a local port to a port on a pod. </p>
<p>Compared to <code>kubectl proxy</code>, <code>kubectl port-forward</code> is more generic as it can forward TCP traffic while <code>kubectl proxy</code> can only forward HTTP traffic.</p>
</blockquote>
<p>As an example, see "<a href="https://medium.com/pixelpoint/kubernetes-port-forwarding-simple-like-never-before-20a8ab16370f" rel="noreferrer">Kubernetes port forwarding simple like never before</a>" from <a href="https://twitter.com/alex_barashkov" rel="noreferrer">Alex Barashkov</a>:</p>
<blockquote>
<p><strong>Port forwarding is mostly used for the purpose of getting access to internal cluster resources and debugging</strong>.</p>
<p>How does it work?</p>
<p>Generally speaking, using port forwarding you could get on your ‘localhost’ any services launched in your cluster.<br>
For example, if you have Redis installed in the cluster on 6379, by using a command like this:</p>
<pre><code>kubectl port-forward redis-master-765d459796-258hz 7000:6379
</code></pre>
<p>you could forward Redis from the cluster to localhost:7000, access it locally and do whatever you want to do with it.</p>
</blockquote>
<p>For a limited HTTP access, see kubectl proxy, and, as an example, "<a href="https://blog.heptio.com/on-securing-the-kubernetes-dashboard-16b09b1b7aca" rel="noreferrer">On Securing the Kubernetes Dashboard</a>" from <a href="https://twitter.com/jbeda" rel="noreferrer">Joe Beda</a>:</p>
<blockquote>
<p>The easiest and most common way to access the cluster is through kubectl proxy. This creates a local web server that securely proxies data to the dashboard through the Kubernetes API server.</p>
</blockquote>
<p>As shown in "<a href="https://medium.com/edureka/kubernetes-dashboard-d909b8b6579c" rel="noreferrer">A Step-By-Step Guide To Install & Use Kubernetes Dashboard</a>" from <a href="https://twitter.com/edurekaIN" rel="noreferrer">Awanish</a>:</p>
<blockquote>
<pre><code>kubectl create -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
</code></pre>
<p>Accessing Dashboard using the kubectl</p>
<pre><code>kubectl proxy
</code></pre>
<p>It will proxy server between your machine and Kubernetes API server.</p>
<p>Now, to view the dashboard in the browser, navigate to the following address in the browser of your Master VM:</p>
<pre><code>http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
</code></pre>
</blockquote>
|
<p>I am trying to workaround an issue with a third party tool. That tool needs to be able to ensure that the namespace I tell it to work in exists. To do that, it runs:</p>
<pre><code>kubectl get namespace my-namespace-name-here
</code></pre>
<p>The user that I let the third party tool run as has <code>edit</code> permissions in the <code>my-namespace-name-here</code> namespace. (Via a <code>rolebinding</code> to the namespace using the <code>clusterrole</code> called <code>edit</code>.)</p>
<p>But the edit permission is not enough to allow it to check (using that command) whether the namespace exists.</p>
<p>Ideally, I would like a way to grant the user permissions to just get the one namespace above. But I would be satisfied if I could grant permissions to just list namespaces and nothing else new at the cluster level.</p>
<p><strong>How can I just add permissions to list namespaces?</strong></p>
| <p>I figured it out!</p>
<p>I needed to make a <code>Role</code> scoped to <code>my-namespace-name-here</code> that grants the ability to get namespaces. Then make a <code>rolebinding</code> to grant that permission to my user. Running a <code>kubectl apply -f ./my-yaml-file-below.yaml</code> did it.</p>
<p>Here is the yaml</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: namespace-reader
namespace: my-namespace-name-here
rules:
- apiGroups: [""]
resources: ["namespaces"]
verbs: ["get"]
---
apiVersion: "rbac.authorization.k8s.io/v1"
kind: RoleBinding
metadata:
name: my-username-here-namespace-reader
namespace: my-namespace-name-here
roleRef:
apiGroup: "rbac.authorization.k8s.io"
kind: Role
name: namespace-reader
subjects:
- apiGroup: "rbac.authorization.k8s.io"
kind: User
name: "[email protected]"
</code></pre>
<p>This allows the user to do a <code>kubectl get namespace</code> only for the namespace that this is granted on.</p>
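<p>You can verify the result with <code>kubectl auth can-i</code> (assuming your own credentials are allowed to impersonate that user):</p>
<pre><code>kubectl auth can-i get namespaces \
  --namespace my-namespace-name-here \
  [email protected]
</code></pre>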
|
<p>I am trying to setup my Helm chart to be able to deploy a <code>VirtualService</code>. My deploy user has the <code>Edit</code> ClusterRole bound to it. But I realized that because Istio is not part of the core Kubernetes distro, the <code>Edit</code> ClusterRole does not have permissions to add a <code>VirtualService</code> (or even look at them).</p>
<p>I can, of course, make my own Roles and ClusterRoles if needed. But I figured I would see if Istio has a recommended Role or ClusterRole for that.</p>
<p>But all the docs that I can find for Istio Roles and ClusterRoles are for old versions of Istio.</p>
<p><strong>Does Istio not recommend using Roles and ClusterRoles anymore? If not, what do they recommend? If they do, where are the docs for it?</strong></p>
| <p>I ended up using these ClusterRoles. They merge with the standard Kubernetes roles of admin, edit and view. (My edit role only allows access to the VirtualService because that fit my situation.)</p>
<pre><code>apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: istio-admin
labels:
rbac.authorization.k8s.io/aggregate-to-admin: "true"
rules:
- apiGroups: ["config.istio.io", "networking.istio.io", "rbac.istio.io", "authentication.istio.io", "security.istio.io"]
resources: ["*"]
verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: istio-edit
labels:
rbac.authorization.k8s.io/aggregate-to-edit: "true"
rules:
- apiGroups: ["config.istio.io", "networking.istio.io", "rbac.istio.io", "authentication.istio.io", "security.istio.io"]
resources: ["virtualservices"]
verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: istio-view
labels:
rbac.authorization.k8s.io/aggregate-to-view: "true"
rules:
- apiGroups: ["config.istio.io", "networking.istio.io", "rbac.istio.io", "authentication.istio.io", "security.istio.io"]
resources: ["*"]
verbs: ["get", "list", "watch"]
</code></pre>
|
<p>I am trying to scrape metrics for spark driver and executor using javaagent with below options. I have Prometheus in kubernetes cluster and I am running this spark application outside the kubernetes cluster.</p>
<pre><code>spark.executor.extraJavaOptions=-javaagent:/opt/clkd/prometheus/jmx_prometheus_javaagent-0.3.1.jar=53700:executor_pattern.yaml
</code></pre>
<p>but I got the below exception, since both executors are running on the same machine:</p>
<pre><code>Caused by: java.net.BindException: Address already in use ....
</code></pre>
<p>I see many have posted the same question but I couldn't find the answer. Please let me know how can I resolve this issue.</p>
| <p>I think that you need to switch from pull-based monitoring to push-based monitoring. For things such as Spark jobs it makes more sense, as they aren't running all the time. For that you have some alternatives:</p>
<ul>
<li>Spark Prometheus Sink from Banzai Cloud as outlined in their <a href="https://banzaicloud.com/blog/spark-prometheus-sink/" rel="nofollow noreferrer">blog post</a></li>
<li>Setup GraphiteSink as described in the <a href="https://spark.apache.org/docs/2.4.6/monitoring.html" rel="nofollow noreferrer">Spark documentation</a>, and point it to the <a href="https://github.com/prometheus/graphite_exporter" rel="nofollow noreferrer">https://github.com/prometheus/graphite_exporter</a>, and then <strong>scrape</strong> metrics from that exporter</li>
</ul>
<hr />
<p>Initial answer:</p>
<p>You can't have 2 processes listening on the same port, so just bind Prometheus from different jobs onto different ports. The port is the number after <code>jmx_prometheus_javaagent-0.3.1.jar=</code> and before the <code>:</code> character - in your case it's <code>53700</code>. So you can use one port for one task, and another port (maybe <code>53701</code>) for the 2nd task...</p>
|
<p>Here is my command line:</p>
<pre><code>kubectl apply -f postgres.secret.yaml \
-f postgres.configmap.yaml \
-f postgres.volume.yaml \
-f postgres.deployment.yaml \
-f postgres.service.yaml
</code></pre>
<p>and I got an error as shown in this picture:
<a href="https://i.stack.imgur.com/8CpqL.png" rel="noreferrer"><img src="https://i.stack.imgur.com/8CpqL.png" alt="enter image description here" /></a></p>
<p>Here is my deployment yaml file:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
spec:
replicas: 0
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
restartPolicy: Always
containers:
- name: postgres
image: postgres:12
ports:
- containerPort: 5432
envFrom:
- secretRef:
name: postgres-secret
- configMapRef:
name: postgres-configmap
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgres-pv
volumes:
- name: postgredb
persistentVolumeClaim:
claimName: postgres-pv-claim
</code></pre>
<p>And I got an error: Unknown field "name" in io.k8s.api.core.v1.EnvFromSource.
I have checked this error; everybody says that it comes from the spacing under envFrom, however it is indented just like the solutions they propose.</p>
| <p>The indentation is wrong.</p>
<p>It should be:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres
spec:
replicas: 0
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
restartPolicy: Always
containers:
- name: postgres
image: postgres:12
ports:
- containerPort: 5432
envFrom:
- secretRef:
name: postgres-secret
- configMapRef:
name: postgres-configmap
volumeMounts:
- mountPath: /var/lib/postgresql/data
name: postgres-pv
volumes:
- name: postgredb
persistentVolumeClaim:
claimName: postgres-pv-claim
</code></pre>
<p>i.e. <code>name</code> should be indented under the <code>secretRef</code> or <code>configMapRef</code> fields</p>
|
<p>I'm setting up a Kubeflow cluster on AWS EKS. Is there a native way in Kubeflow that allows us to automatically schedule jobs, i.e. run the workflow every X hours, get data every X hours, etc.?</p>
<p>I have tried looking at other things like Airflow, but I'm not really sure if it would integrate well with the Kubeflow environment.</p>
| <p>That should be what a <a href="https://www.kubeflow.org/docs/pipelines/overview/concepts/run/" rel="nofollow noreferrer">recurring run</a> is for.</p>
<p>That would be using a <a href="https://www.kubeflow.org/docs/pipelines/overview/concepts/run-trigger/" rel="nofollow noreferrer">run trigger</a>, which does have a cron field, for specifying cron semantics for scheduling runs.</p>
|
<p>I currently own only one computer, and I won't have another.</p>
<ol>
<li><p>I run <em>Spark</em> on its CPU cores: <code>master=local[5]</code>, using it directly: I set <code>spark-core</code> and <code>spark-sql</code> as dependencies, do almost no other configuration, and my programs start immediately. It's comfortable, of course.</p>
</li>
<li><p>But should I attempt to create an architecture with a master and some workers by means of <em>Docker</em> containers or <em>minikube</em> (<em>Kubernetes</em>) on my computer?</p>
</li>
</ol>
<p>Will solution <strong>#2</strong> - with all the settings it requires - reward me with better performance, because <em>Spark</em> is truly designed to work that way, even on a single computer,</p>
<p>or will I lose some time, because the mode I'm currently running in, without network usage and without need of data locality, will always give me better performance, and solution <strong>#1</strong> will always be the best on a single computer?</p>
<p>My hypothesis is that <strong>#1</strong> is fine. But I have no true measurement for that, and no source of comparison. Who has experienced the two ways of doing things on a single computer?</p>
| <p>It really depends on your goals - if you will always run your Spark code on a single node with the local master, then just use it. But if you intend to run your resulting code in distributed mode on multiple machines, then emulating a cluster with Docker could be useful, as you'll get your code running in a truly distributed manner, and you'll be able to find problems that are not always found when you run your code with the local master.</p>
<p>Instead of direct Docker usage (which could be tricky to set up, although it's still possible), maybe you can consider using Spark on Kubernetes, for example via minikube - there are plenty of articles on this topic.</p>
|
<p>I have a working Kubernetes cluster that I want to monitor with Grafana.</p>
<p>I have been trying out many dashboards from <a href="https://grafana.com/dashboards" rel="nofollow noreferrer">https://grafana.com/dashboards</a> but they all seem to have some problems: it looks like there's a mismatch between the Prometheus metric names and what the dashboard expects.</p>
<p>Eg if I look at this recently released, quite popular dashboard: <a href="https://grafana.com/dashboards/5309/revisions" rel="nofollow noreferrer">https://grafana.com/dashboards/5309/revisions</a></p>
<p>I end up with many "holes" when running it: </p>
<p><a href="https://i.stack.imgur.com/TJ2ls.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TJ2ls.png" alt="grafana dashboard with missing values"></a></p>
<p>Looking into the panel configuration, I see that the issues come from small key changes, eg <code>node_memory_Buffers</code> instead of <code>node_memory_Buffers_bytes</code>.</p>
<p>Similarly the dashboard expects <code>node_disk_bytes_written</code> when Prometheus provides <code>node_disk_written_bytes_total</code>.</p>
<p>I have tried out a <em>lot</em> of Kubernetes-specific dashboards and I have the same problem with almost all of them.</p>
<p>Am I doing something wrong?</p>
| <p>The Prometheus node exporter changed a lot of the metric names in the 0.16.0 version to conform to new naming conventions.</p>
<p>From <a href="https://github.com/prometheus/node_exporter/releases/tag/v0.16.0" rel="nofollow noreferrer">https://github.com/prometheus/node_exporter/releases/tag/v0.16.0</a>:</p>
<blockquote>
<p><strong>Breaking changes</strong></p>
<p>This release contains major breaking changes to metric names. Many
metrics have new names, labels, and label values in order to conform
to current naming conventions.</p>
<ul>
<li>Linux node_cpu metrics now break out <code>guest</code> values into separate
metrics. </li>
<li>Many counter metrics have been renamed to <code>include _total</code>. </li>
<li>Many metrics have been renamed/modified to include
base units, for example <code>node_cpu</code> is now <code>node_cpu_seconds_total</code>.</li>
</ul>
</blockquote>
<p>See also the <a href="https://github.com/prometheus/node_exporter/blob/v0.16.0/docs/V0_16_UPGRADE_GUIDE.md" rel="nofollow noreferrer">upgrade guide</a>. One of its suggestion is to use <a href="https://github.com/prometheus/node_exporter/blob/v0.16.0/docs/example-16-compatibility-rules.yml" rel="nofollow noreferrer">compatibility rules</a> that will create duplicate metrics with the old names.</p>
<p>Otherwise use version 0.15.x until the dashboards are updated, or fix them!</p>
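<p>As a rough sketch of what those compatibility rules look like (the linked file covers the full set), they are ordinary Prometheus recording rules that re-publish the new metrics under the old names:</p>
<pre class="lang-yaml prettyprint-override"><code>groups:
  - name: node_exporter-16-compat
    rules:
      - record: node_memory_Buffers
        expr: node_memory_Buffers_bytes
      - record: node_disk_bytes_written
        expr: node_disk_written_bytes_total
</code></pre>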
|
<p>I've created a private repo on Docker Hub and am trying to pull that image into my Kubernetes cluster. I can see the documentation suggests doing this:</p>
<p><code>
kubectl create secret generic regcred \
--from-file=.dockerconfigjson=<path/to/.docker/config.json> \
--type=kubernetes.io/dockerconfigjson
</code></p>
<p>I am already logged in, and I changed the path to ~/.docker/config.json, but it keeps giving me </p>
<p><code>error: error reading ~./docker/config.json: no such file or directory</code> </p>
<p>despite the fact if i type <code>cat ~/.docker/config.json</code> it displays the content, meaning there is a file.</p>
<p>So, in other words, how do I properly authenticate so that the cluster can pull my private images?</p>
| <blockquote>
<pre><code>error: error reading ~./docker/config.json: no such file or directory
^^^^ ?
</code></pre>
</blockquote>
<p><code>~./docker/config.json</code> does not seem valid:<br>
<code>~/.docker/config.json</code> would be.</p>
<p>To remove any doubt, try the full path instead of <code>~</code>:</p>
<pre><code>kubectl create secret generic regcred \
--from-file=.dockerconfigjson=/home/auser/.docker/config.json \
--type=kubernetes.io/dockerconfigjson
</code></pre>
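<p>Once the secret exists, reference it from the pod (or the pod template of a Deployment) via <code>imagePullSecrets</code> - the image name below is just a placeholder:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: private-app
spec:
  containers:
    - name: app
      image: yourdockerid/private-image:latest
  imagePullSecrets:
    - name: regcred
</code></pre>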
|
<p>I am running Cassandra as a Kubernetes pod. Each pod has one Cassandra container. We are running Cassandra version 3.11.4 with auto_bootstrap set to true. I have 5 nodes in production and the cluster holds 20GB of data.</p>
<p>If I restart any Cassandra pod because of some maintenance activity, it takes 30 minutes to bootstrap before it comes back to the UP and Normal state. In production, 30 minutes is a huge amount of time.</p>
<p>How can I reduce the boot-up time for the Cassandra pod?</p>
<p>Thank you !!</p>
| <p>If you're restarting the existing node, and the data is still there, then it's not a bootstrap of the node - it's just a restart.</p>
<p>One of the potential problems is that you're not draining the node before the restart, so all commit logs need to be replayed on start, and this can take a lot of time if you have a lot of data in the commit log (you can check <code>system.log</code> to see what Cassandra is doing at that time). So the solution could be to execute <code>nodetool drain</code> before stopping the node. </p>
<p>If the node is restarted after a crash or something similar (so you couldn't drain first), you can think in the direction of regularly flushing the data from memtables, for example via <code>nodetool flush</code>, or by configuring tables with a periodic flush via the <code>memtable_flush_period_in_ms</code> option on the busiest tables. But be careful with that approach, as it may create a lot of small SSTables, and this will add more load on the compaction process.</p>
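<p>A minimal sketch of how to automate the <code>nodetool drain</code> suggestion above inside the pod spec (container name, image tag and grace period are illustrative) is a <code>preStop</code> hook, so the drain runs before the container is stopped:</p>
<pre class="lang-yaml prettyprint-override"><code>containers:
  - name: cassandra
    image: cassandra:3.11.4
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "nodetool drain"]
# give the drain enough time to finish before the pod is killed
terminationGracePeriodSeconds: 300
</code></pre>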
|
<p>I would like to install Istio into my Kubernetes cluster. The <a href="https://istio.io/latest/docs/setup/getting-started/" rel="nofollow noreferrer">installation page</a> says to:</p>
<blockquote>
<p>download the installation file for your OS</p>
</blockquote>
<p>My developer machine is a Windows 10 machine. The nodes in my Kubernetes cluster run CentOS.</p>
<p><strong>When it says "Your OS" does it mean my machine that I will download it to and run it from, or does it mean the OS that my cluster runs on?</strong> (or do I need to run it from a node in the cluster?)</p>
| <p>The download basically has <code>istioctl</code> and some samples in it.</p>
<p>So you want to download for the OS that you are running the command from (in my case Windows 10).</p>
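<p>As a sketch (assuming a recent Istio release, 1.6 or later), on the Windows 10 machine you would unpack the archive, put <code>istioctl.exe</code> on your <code>PATH</code>, make sure your kubeconfig points at the CentOS cluster, and run something like:</p>
<pre class="lang-sh prettyprint-override"><code># verify connectivity to the cluster and the client/control-plane versions
istioctl version

# install Istio into the cluster using the demo profile
istioctl install --set profile=demo -y
</code></pre>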
|
<p>I have a new application running in a container in OpenShift. Currently it can't connect to the database because I'm waiting for the SPN and database permissions to be set up, but the strange thing is that when the application tries the container itself crashes and the pod is restarted.</p>
<p>My code is properly catching exceptions, but it seems as though the pod is restarted immediately when the exception is generated. It works correctly, catching the exception and returning an error message, when run locally.</p>
<p>In OpenShift the last line I see in the logs is:</p>
<pre><code>Opening connection to database 'MyDB' on server 'MyServer.domain.com'.
</code></pre>
<p>OC describe pod shows this:</p>
<pre><code>Last State: Terminated
Reason: Error
Exit Code: 139
</code></pre>
<p>I see that exit code 139 may mean a SIGSEV memory access issue, but I'm not doing anything special with memory. It's just a normal EF Core database context...</p>
<p>My Context declaration is:</p>
<pre class="lang-cs prettyprint-override"><code>var OptionsBuilder = new DbContextOptionsBuilder<MyContext>()
.UseSqlServer("Data Source = MyServer.domain.com; Initial Catalog = MyDB; Integrated Security = True; TrustServerCertificate=True;MultipleActiveResultSets=True")
.EnableSensitiveDataLogging()
.LogTo(Console.Out.WriteLine);
var newContext = new MyContext(OptionsBuilder.Options);
//This line (or any line that causes a database connection) causes the error
newContext.Database.CanConnect();
</code></pre>
<p>What else should I look at?</p>
| <p>It turns out this is due to a bug in Microsoft's Microsoft.Data.SqlClient library: <a href="https://github.com/dotnet/SqlClient/issues/1390" rel="nofollow noreferrer">https://github.com/dotnet/SqlClient/issues/1390</a></p>
<p>It was patched in version Microsoft.Data.SqlClient 4.1.0 but EF Core 6 is using a much, much earlier version.</p>
<p>The solution for me was to separately pull in the latest version of Microsoft.Data.SqlClient in the client app.</p>
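<p>For example (the project file name is a placeholder, and the version shown is just the first patched release mentioned above - a later one may be available), adding the package reference directly to the client project overrides the transitive version pulled in by EF Core:</p>
<pre class="lang-sh prettyprint-override"><code>dotnet add YourApp.csproj package Microsoft.Data.SqlClient --version 4.1.0
</code></pre>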
|
<p>I would like to programmatically create GKE clusters (and resize them etc.). To do so, I could use the gcloud commands, but I found this Java library that seems to imply that one can create/resize/delete clusters all from within Java:
<a href="https://developers.google.com/api-client-library/java/apis/container/v1" rel="nofollow noreferrer">https://developers.google.com/api-client-library/java/apis/container/v1</a> library
(Note: This is a DIFFERENT library from the Java libraries for Kubernetes, which is well documented. The above link is for creating the INITIAL cluster, not starting up / shutting down pods etc.)</p>
<p>However, I couldn't find any examples/sample code on how to do some basic commands, eg</p>
<p>a) get a list of clusters and see if a cluster of a particular name is running
b) start up cluster of a particular name in a certain region with a certain number of nodes of a certain instance type
c) wait until the cluster has fully started up from (b)
d) etc.</p>
<p>Any one have any examples of using the java library to accomplish this?</p>
<p>Also, is there a "generic" java library for any Kubernetes cluster managerment (not just the Google GKE one? I couldn't find any. Again, there are libraries for pod management, but I couldn't find any for generic Kubernetes <em>cluster</em> management (ie create cluster etc.))</p>
| <p>You could consider using the <a href="https://www.terraform.io/docs/providers/google/r/container_cluster.html" rel="nofollow noreferrer">Terraform GKE provider</a> to programmatically create and manage GKE clusters.
It is idempotent and tracks state. I'd consider it to be more stable than any standalone library implementation. Besides, this is a typical use case for Terraform.</p>
|
<p>I was playing with k8s deployment - rolling update and it works really well.
I am curious to know how to do a deployment when we have a service dependency! Not sure if I am explaining my question correctly. It is just a very high-level scenario! </p>
<p>Lets consider this example. I have deployed 2 apps with 10 replicas each, exposed as services. </p>
<pre><code>Service-A
Deployment-A
Pod-A - v1 - (10)
Service-B
Deployment-B
Pod-B - v1 - (10)
</code></pre>
<p>Service A depends on B. Now, as part of the v2 release, both apps need to move to v2. Service B's API expects a few additional parameters / is slightly changed. When we upgrade both apps to the newer version v2, if Service-B becomes up and running before Service-A, some of the requests would fail, as Service-A is still on v1 (while the upgrade is in progress). How can we do the deployment without any failures here? If you are using k8s already, what is the best practice you would follow?</p>
| <p>As shown in "<a href="https://medium.com/platformer-blog/enable-rolling-updates-in-kubernetes-with-zero-downtime-31d7ec388c81" rel="nofollow noreferrer">Enable Rolling updates in Kubernetes with Zero downtime</a>" from <a href="https://twitter.com/Nilesh_93" rel="nofollow noreferrer">Nilesh Jayanandana</a>, you could check if implementing a <a href="https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/" rel="nofollow noreferrer">readiness probe</a> would help service B to wait for service A being in V2.</p>
<p>Another approach would be through an Helm package, as in "<a href="https://docs.bitnami.com/kubernetes/how-to/deploy-application-kubernetes-helm/" rel="nofollow noreferrer">Deploy, Scale and Upgrade an Application on Kubernetes with Helm</a>", which can modelize the dependency, and then, through <code>helm update</code>, perform the rolling upgrade.</p>
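<p>For the readiness probe approach, a rough sketch (the path and port are assumptions - use whatever health endpoint your app exposes) looks like this; a pod stays out of the Service endpoints until its check passes:</p>
<pre class="lang-yaml prettyprint-override"><code>readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
  failureThreshold: 3
</code></pre>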
|
<p>I am trying to connect to cqlsh from a remote machine (via a kubectl command) when encryption is enabled, but I am unable to connect. Does anyone have a better way to connect?</p>
<pre><code>$ kubectl run -i --tty --restart=Never --rm --image cassandra cqlsh -- cqlsh cassandra-0.cassandra.default.svc.cluster.local -u cassandra -p cassandra --ssl
If you don't see a command prompt, try pressing enter.
Validation is enabled; SSL transport factory requires a valid certfile to be specified. Please provide path to the certfile in [ssl] section as 'certfile' option in /root/.cassandra/cqlshrc (or use [certfiles] section) or set SSL_CERTFILE environment variable.
pod "cqlsh" deleted
pod default/cqlsh terminated (Error)
</code></pre>
<p>Since I am connecting from remote, I cannot set the cqlshrc file.</p>
| <p>You can specify the location of the certfile and the validation option via the environment variables <code>SSL_CERTFILE</code> and <code>SSL_VALIDATE</code> correspondingly, but you'll need to mount the certificate files anyway, so you could also just mount a corresponding <code>cqlshrc</code>...</p>
<p>See <a href="https://docs.datastax.com/en/dse/5.1/dse-admin/datastax_enterprise/security/usingCqlshSslAndKerberos.html" rel="nofollow noreferrer">documentation</a> for more details.</p>
<p>P.S. Also, if client validation is enabled, you'll need to provide client's key/certificate as well (options <code>userkey</code>, and <code>usercert</code> in the <code>cqlshrc</code>).</p>
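<p>A minimal sketch of a client pod that mounts the certificates from a secret and sets those environment variables (the secret name and file names are assumptions - adjust them to however you store your certificates):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Pod
metadata:
  name: cqlsh-client
spec:
  restartPolicy: Never
  containers:
    - name: cqlsh
      image: cassandra
      command: ["sleep", "3600"]   # then `kubectl exec` into it and run cqlsh ... --ssl
      env:
        - name: SSL_CERTFILE
          value: /certs/ca.crt
        - name: SSL_VALIDATE
          value: "true"
      volumeMounts:
        - name: cassandra-certs
          mountPath: /certs
          readOnly: true
  volumes:
    - name: cassandra-certs
      secret:
        secretName: cassandra-client-certs
</code></pre>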
|
<p>I have an existing POD containing a DB. I have a script containing executable queries in that container. I need to schedule the execution of the script. How do I go about doing this?</p>
| <p>OpenShift has a "cronjob" resource type which can schedule a job to run at specific intervals. You can read more about it <a href="https://docs.okd.io/3.11/dev_guide/cron_jobs.html" rel="nofollow noreferrer">here</a>.</p>
<p>You can create a custom image which contains the client to connect to your DB and supply it with the credentials mapped as secrets. This can run your executable queries at the interval you've set for the job.</p>
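<p>A rough sketch of such a cron job (the image, command and secret name are placeholders; on OpenShift 3.11 the resource lives under <code>batch/v1beta1</code>):</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-db-queries
spec:
  schedule: "0 2 * * *"          # every night at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: query-runner
              image: registry.example.com/db-client:latest   # hypothetical client image
              command: ["/bin/sh", "-c", "run-queries.sh"]   # hypothetical wrapper script
              envFrom:
                - secretRef:
                    name: db-credentials                     # credentials mapped as a secret
</code></pre>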
|
<p>I have a k8s cluster with three workers. When I explicitly create a pod, the corresponding docker image gets downloaded to the worker. When I explicitly delete the pod, the corresponding docker image is still present on the worker node. Is this the expected case?</p>
| <blockquote>
<p>Is this the expected case?</p>
</blockquote>
<p>Possibly, considering the <a href="https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/" rel="noreferrer">Kubernetes Garbage collection policy for images</a>:</p>
<blockquote>
<p>Kubernetes manages lifecycle of all images through <code>imageManager</code>, with the cooperation of <code>cadvisor</code>.</p>
<p>The policy for garbage collecting images takes two factors into consideration: <code>HighThresholdPercent</code> and <code>LowThresholdPercent</code>.<br />
Disk usage above the high threshold will trigger garbage collection.<br />
The garbage collection will delete least recently used images until the low threshold has been met.</p>
</blockquote>
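<p>The thresholds are kubelet settings, so a sketch of tuning them via a <code>KubeletConfiguration</code> (the values shown are just the common defaults) looks like this - images are garbage collected once disk usage goes above 85%, down to 80%:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
</code></pre>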
|
<p>Datastax Spark Cassandra Connector takes "spark.cassandra.connection.host" for connecting to cassandra cluster.</p>
<ol>
<li><p>Can we provide the headless service of the C* cluster in a K8S environment as the host for this parameter ("spark.cassandra.connection.host")?</p>
</li>
<li><p>Will it resolve the contact points?</p>
</li>
<li><p>What is the preferred way of connecting to a C* cluster in a K8s environment with the Spark Cassandra Connector?</p>
</li>
</ol>
| <p>By default, SCC resolves all provided contact points into IP addresses on the first connect, and then only uses these IP addresses for reconnection. And after the initial connection has happened, it discovers the rest of the cluster. Usually this is not a problem, as SCC should receive notifications about nodes going up & down and track the nodes' IP addresses. But in practice it can happen that nodes are restarted too fast and notifications are not received, so Spark jobs that use SCC can get stuck trying to connect to IP addresses that aren't valid anymore - I hit this multiple times on DC/OS.</p>
<p>This problem is solved with the <a href="https://www.datastax.com/blog/2020/05/advanced-apache-cassandra-analytics-now-open-all" rel="nofollow noreferrer">release of SCC 2.5.0</a> that includes a fix for <a href="https://datastax-oss.atlassian.net/browse/SPARKC-571" rel="nofollow noreferrer">SPARKC-571</a>. It introduced a new configuration parameter - <code>spark.cassandra.connection.resolveContactPoints</code> that when it's set to <code>false</code> (<code>true</code> by default) will always use hostnames of the contact points for both initial connection & reconnection, avoiding the problems with changed IP addresses.</p>
<p>So on K8S I would try to use this configuration parameter with just a normal Cassandra deployment.</p>
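<p>A sketch of passing that to a Spark job (the headless service name and the job file are placeholders - use whatever DNS name your Cassandra service exposes):</p>
<pre class="lang-sh prettyprint-override"><code>spark-submit \
  --packages com.datastax.spark:spark-cassandra-connector_2.12:2.5.0 \
  --conf spark.cassandra.connection.host=cassandra.default.svc.cluster.local \
  --conf spark.cassandra.connection.resolveContactPoints=false \
  my-job.py   # placeholder for your application
</code></pre>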
|
<p>I have an existing system that uses a relational DBMS. I am unable to use a NoSQL database for various internal reasons.</p>
<p>The system is to get some microservices that will be deployed using Kubernetes and Docker with the intention to do rolling upgrades to reduce downtime. The back end data layer will use the existing relational DBMS. The micro services will follow good practice and "own" their data store on the DBMS. The one big issue with this seems to be how to deal with managing the structure of the database across this. I have done my research:</p>
<ul>
<li><a href="https://blog.philipphauer.de/databases-challenge-continuous-delivery/" rel="nofollow noreferrer">https://blog.philipphauer.de/databases-challenge-continuous-delivery/</a></li>
<li><a href="http://www.grahambrooks.com/continuous%20delivery/continuous%20deployment/zero%20down-time/2013/08/29/zero-down-time-relational-databases.html" rel="nofollow noreferrer">http://www.grahambrooks.com/continuous%20delivery/continuous%20deployment/zero%20down-time/2013/08/29/zero-down-time-relational-databases.html</a></li>
<li><a href="http://blog.dixo.net/2015/02/blue-turquoise-green-deployment/" rel="nofollow noreferrer">http://blog.dixo.net/2015/02/blue-turquoise-green-deployment/</a></li>
<li><a href="https://spring.io/blog/2016/05/31/zero-downtime-deployment-with-a-database" rel="nofollow noreferrer">https://spring.io/blog/2016/05/31/zero-downtime-deployment-with-a-database</a></li>
<li><a href="https://www.rainforestqa.com/blog/2014-06-27-zero-downtime-database-migrations/" rel="nofollow noreferrer">https://www.rainforestqa.com/blog/2014-06-27-zero-downtime-database-migrations/</a></li>
</ul>
<p>All of the discussions seem to stop around the point of adding/removing columns and data migration. There is no discussion of how to manage stored procedures, views, triggers etc.</p>
<p>The application is written in .NET Full and .NET Core with Entity Framework as the ORM.</p>
<p>Has anyone got any insights on how to do continuous delivery using a relational DBMS where it is a full production system? Is it back to the drawing board here? Or is using a relational DBMS simply "too hard" for rolling updates?</p>
<p>PS. Even though this is a continuous delivery problem, I have also tagged it with Kubernetes and Docker, as that will be the underlying tech in use for the orchestration/container side of things.</p>
| <p>I work in an environment that achieves continuous delivery. We use MySQL.</p>
<p>We apply schema changes with minimal interruption by using <a href="https://www.percona.com/doc/percona-toolkit/LATEST/pt-online-schema-change.html" rel="nofollow noreferrer">pt-online-schema-change</a>. One could also use <a href="https://github.com/github/gh-ost" rel="nofollow noreferrer">gh-ost</a>.</p>
<p>Adding a column can be done at any time if the application code can work with the extra column in place. For example, it's a good rule to avoid implicit columns like <code>SELECT *</code> or <code>INSERT</code> with no columns-list clause. Dropping a column can be done after the app code no longer references that column. Renaming a column is trickier to do without coordinating an app release, and in this case you may have to do two schema changes, one to add the new column and a later one to drop the old column after the app is known not to reference the old column.</p>
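<p>As an illustration of the pt-online-schema-change workflow mentioned above (the database, table and column names are made up), an online column addition looks like this:</p>
<pre class="lang-sh prettyprint-override"><code># dry run first, then replace --dry-run with --execute
pt-online-schema-change \
  --alter "ADD COLUMN flags INT NULL" \
  D=mydb,t=mytable \
  --dry-run
</code></pre>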
<p>We do upgrades and maintenance on database servers by using redundancy. Every database master has a replica, and the two instances are configured in master-master (circular) replication. So one is active and the other is passive. Applications are allowed to connect only to the active instance. The passive instance can be restarted, upgraded, etc. </p>
<p>We can switch the active instance in under 1 second by changing an internal CNAME, and updating the <code>read_only</code> option in each MySQL instance.</p>
<p>Database connections are terminated during this switch. Apps are required to detect a dropped connection and reconnect to the CNAME. This way the app is always connected to the active MySQL instance, freeing the passive instance for maintenance.</p>
<p>MySQL replication is asynchronous, so an instance can be brought down and back up, and it can resume replicating changes and generally catches up quickly. As long as its master keeps the binary logs needed. If the replica is down for longer than the binary log expiration, then it loses its place and must be reinitialized from a backup of the active instance.</p>
<hr>
<p>Re comments:</p>
<blockquote>
<p>how is the data access code versioned? ie v1 of app talking to v2 of DB? </p>
</blockquote>
<p>That's up to each app developer team. I believe most are doing continual releases, not versions.</p>
<blockquote>
<p>How are SP's, UDF's, Triggers etc dealt with?</p>
</blockquote>
<p>No app is using any of those.</p>
<p>Stored routines in MySQL are really more of a liability than a feature. No support for packages or libraries of routines, no compiler, no debugger, bad scalability, and the SP language is unfamiliar and poorly documented. I don't recommend using stored routines in MySQL, even though it's common in Oracle/Microsoft database development practices.</p>
<p>Triggers are not allowed in our environment, because pt-online-schema-change needs to create its own triggers.</p>
<p><a href="https://dev.mysql.com/doc/refman/8.0/en/adding-udf.html" rel="nofollow noreferrer">MySQL UDFs</a> are compiled C/C++ code that has to be installed on the database server as a shared library. I have never heard of any company who used UDFs in production with MySQL. There is too a high risk that a bug in your C code could crash the whole MySQL server process. In our environment, app developers are not allowed access to the database servers for SOX compliance reasons, so they wouldn't be able to install UDFs anyway. </p>
|
<p>I am unable to run any <code>kubectl</code> commands and I believe it is a result of an expired apiserver-etcd-client certificate.</p>
<pre><code>$ openssl x509 -in /etc/kubernetes/pki/apiserver-etcd-client.crt -noout -text |grep ' Not '
Not Before: Jun 25 17:28:17 2018 GMT
Not After : Jun 25 17:28:18 2019 GMT
</code></pre>
<p>The log from the failed apiserver container shows:</p>
<pre><code>Unable to create storage backend: config (&{ /registry [https://127.0.0.1:2379] /etc/kubernetes/pki/apiserver-etcd-client.key /etc/kubernetes/pki/apiserver-etcd-client.crt /etc/kubernetes/pki/etcd/ca.crt true false 1000 0xc420363900 <nil> 5m0s 1m0s}), err (dial tcp 127.0.0.1:2379: getsockopt: connection refused)
</code></pre>
<p>I am using kubeadm 1.10, and would like to upgrade to 1.14. I was able to renew several expired certificates described by <a href="https://github.com/kubernetes/kubeadm/issues/581" rel="nofollow noreferrer">issue 581</a> on GitHub. Following the instructions updated the following keys & certs in <code>/etc/kubernetes/pki</code>:</p>
<pre><code>apiserver
apiserver-kubelet-client
front-proxy-client
</code></pre>
<p>Next, I tried:</p>
<pre><code>kubeadm --config kubeadm.yaml alpha phase certs apiserver-etcd-client
</code></pre>
<p>Where the <code>kubeadm.yaml</code> file is:</p>
<pre><code>apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
advertiseAddress: 172.XX.XX.XXX
kubernetesVersion: v1.10.5
</code></pre>
<p>But it returns:</p>
<pre><code>failure loading apiserver-etcd-client certificate: the certificate has expired
</code></pre>
<p>Further, in the directory <code>/etc/kubernetes/pki/etcd</code> with the exception of the <code>ca</code> cert and key, all of the remaining certificates and keys are expired.</p>
<p>Is there a way to renew the expired certs without resorting to rebuilding the cluster?</p>
<p>Logs from the etcd container:</p>
<pre><code>$ sudo docker logs e4da061fc18f
2019-07-02 20:46:45.705743 I | etcdmain: etcd Version: 3.1.12
2019-07-02 20:46:45.705798 I | etcdmain: Git SHA: 918698add
2019-07-02 20:46:45.705803 I | etcdmain: Go Version: go1.8.7
2019-07-02 20:46:45.705809 I | etcdmain: Go OS/Arch: linux/amd64
2019-07-02 20:46:45.705816 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2019-07-02 20:46:45.705848 N | etcdmain: the server is already initialized as member before, starting as etcd member...
2019-07-02 20:46:45.705871 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, ca = , trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true
2019-07-02 20:46:45.705878 W | embed: The scheme of peer url http://localhost:2380 is HTTP while peer key/cert files are presented. Ignored peer key/cert files.
2019-07-02 20:46:45.705882 W | embed: The scheme of peer url http://localhost:2380 is HTTP while client cert auth (--peer-client-cert-auth) is enabled. Ignored client cert auth for this url.
2019-07-02 20:46:45.712218 I | embed: listening for peers on http://localhost:2380
2019-07-02 20:46:45.712267 I | embed: listening for client requests on 127.0.0.1:2379
2019-07-02 20:46:45.716737 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2019-07-02 20:46:45.718103 I | etcdserver: recovered store from snapshot at index 13621371
2019-07-02 20:46:45.718116 I | etcdserver: name = default
2019-07-02 20:46:45.718121 I | etcdserver: data dir = /var/lib/etcd
2019-07-02 20:46:45.718126 I | etcdserver: member dir = /var/lib/etcd/member
2019-07-02 20:46:45.718130 I | etcdserver: heartbeat = 100ms
2019-07-02 20:46:45.718133 I | etcdserver: election = 1000ms
2019-07-02 20:46:45.718136 I | etcdserver: snapshot count = 10000
2019-07-02 20:46:45.718144 I | etcdserver: advertise client URLs = https://127.0.0.1:2379
2019-07-02 20:46:45.842281 I | etcdserver: restarting member 8e9e05c52164694d in cluster cdf818194e3a8c32 at commit index 13629377
2019-07-02 20:46:45.842917 I | raft: 8e9e05c52164694d became follower at term 1601
2019-07-02 20:46:45.842940 I | raft: newRaft 8e9e05c52164694d [peers: [8e9e05c52164694d], term: 1601, commit: 13629377, applied: 13621371, lastindex: 13629377, lastterm: 1601]
2019-07-02 20:46:45.843071 I | etcdserver/api: enabled capabilities for version 3.1
2019-07-02 20:46:45.843086 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32 from store
2019-07-02 20:46:45.843093 I | etcdserver/membership: set the cluster version to 3.1 from store
2019-07-02 20:46:45.846312 I | mvcc: restore compact to 13274147
2019-07-02 20:46:45.854822 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2019-07-02 20:46:45.855232 I | warning: ignoring ServerName for user-provided CA for backwards compatibility is deprecated
2019-07-02 20:46:45.855267 I | etcdserver: starting server... [version: 3.1.12, cluster version: 3.1]
2019-07-02 20:46:45.855293 I | embed: ClientTLS: cert = /etc/kubernetes/pki/etcd/server.crt, key = /etc/kubernetes/pki/etcd/server.key, ca = , trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true
2019-07-02 20:46:46.443331 I | raft: 8e9e05c52164694d is starting a new election at term 1601
2019-07-02 20:46:46.443388 I | raft: 8e9e05c52164694d became candidate at term 1602
2019-07-02 20:46:46.443405 I | raft: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 1602
2019-07-02 20:46:46.443419 I | raft: 8e9e05c52164694d became leader at term 1602
2019-07-02 20:46:46.443428 I | raft: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 1602
2019-07-02 20:46:46.443699 I | etcdserver: published {Name:default ClientURLs:[https://127.0.0.1:2379]} to cluster cdf818194e3a8c32
2019-07-02 20:46:46.443768 I | embed: ready to serve client requests
2019-07-02 20:46:46.444012 I | embed: serving client requests on 127.0.0.1:2379
2019-07-02 20:48:05.528061 N | pkg/osutil: received terminated signal, shutting down...
2019-07-02 20:48:05.528103 I | etcdserver: skipped leadership transfer for single member cluster
</code></pre>
<p>systemd start-up script:</p>
<pre><code>sudo systemctl status -l kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/etc/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Mon 2019-07-01 14:54:24 UTC; 1 day 23h ago
Docs: http://kubernetes.io/docs/
Main PID: 9422 (kubelet)
Tasks: 13
Memory: 47.0M
CGroup: /system.slice/kubelet.service
└─9422 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --authentication-token-webhook=true --authorization-mode=Webhook --client-ca-file=/etc/kubernetes/pki/ca.crt --cgroup-driver=cgroupfs --rotate-certificates=true --cert-dir=/var/lib/kubelet/pki
Jul 03 14:10:49 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:49.871276 9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.31.22.241:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dahub-k8s-m1.aws-intanalytic.com&limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:49 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:49.872444 9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://172.31.22.241:6443/api/v1/nodes?fieldSelector=metadata.name%3Dahub-k8s-m1.aws-intanalytic.com&limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:49 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:49.880422 9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://172.31.22.241:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:50.871913 9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.31.22.241:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dahub-k8s-m1.aws-intanalytic.com&limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:50.872948 9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:461: Failed to list *v1.Node: Get https://172.31.22.241:6443/api/v1/nodes?fieldSelector=metadata.name%3Dahub-k8s-m1.aws-intanalytic.com&limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:50.880792 9422 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:452: Failed to list *v1.Service: Get https://172.31.22.241:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.31.22.241:6443: getsockopt: connection refused
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: I0703 14:10:50.964989 9422 kubelet_node_status.go:271] Setting node annotation to enable volume controller attach/detach
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: I0703 14:10:50.966644 9422 kubelet_node_status.go:82] Attempting to register node ahub-k8s-m1.aws-intanalytic.com
Jul 03 14:10:50 ahub-k8s-m1.aws-intanalytic.com kubelet[9422]: E0703 14:10:50.967012 9422 kubelet_node_status.go:106] Unable to register node "ahub-k8s-m1.aws-intanalytic.com" with API server: Post https://172.31.22.241:6443/api/v1/nodes: dial tcp 172.31.22.241:6443: getsockopt: connection refused
</code></pre>
| <p>In Kubernetes 1.14 and above, you can just run <code>sudo kubeadm alpha certs renew all</code> and reboot the master. For older versions the manual steps are:</p>
<pre class="lang-sh prettyprint-override"><code>sudo -sE #switch to root
# Check certs on master to see expiration dates
echo -n /etc/kubernetes/pki/{apiserver,apiserver-kubelet-client,apiserver-etcd-client,front-proxy-client,etcd/healthcheck-client,etcd/peer,etcd/server}.crt | xargs -d ' ' -I {} bash -c "ls -hal {} && openssl x509 -in {} -noout -enddate"
# Move existing keys/config files so they can be recreated
mv /etc/kubernetes/pki/apiserver.key{,.old}
mv /etc/kubernetes/pki/apiserver.crt{,.old}
mv /etc/kubernetes/pki/apiserver-kubelet-client.crt{,.old}
mv /etc/kubernetes/pki/apiserver-kubelet-client.key{,.old}
mv /etc/kubernetes/pki/apiserver-etcd-client.crt{,.old}
mv /etc/kubernetes/pki/apiserver-etcd-client.key{,.old}
mv /etc/kubernetes/pki/front-proxy-client.crt{,.old}
mv /etc/kubernetes/pki/front-proxy-client.key{,.old}
mv /etc/kubernetes/pki/etcd/healthcheck-client.crt{,.old}
mv /etc/kubernetes/pki/etcd/healthcheck-client.key{,.old}
mv /etc/kubernetes/pki/etcd/peer.key{,.old}
mv /etc/kubernetes/pki/etcd/peer.crt{,.old}
mv /etc/kubernetes/pki/etcd/server.crt{,.old}
mv /etc/kubernetes/pki/etcd/server.key{,.old}
mv /etc/kubernetes/kubelet.conf{,.old}
mv /etc/kubernetes/admin.conf{,.old}
mv /etc/kubernetes/controller-manager.conf{,.old}
mv /etc/kubernetes/scheduler.conf{,.old}
# Regenerate keys and config files
kubeadm alpha phase certs apiserver --config /etc/kubernetes/kubeadm.yaml
kubeadm alpha phase certs apiserver-etcd-client --config /etc/kubernetes/kubeadm.yaml
kubeadm alpha phase certs apiserver-kubelet-client
kubeadm alpha phase certs front-proxy-client
kubeadm alpha phase certs etcd-healthcheck-client
kubeadm alpha phase certs etcd-peer
kubeadm alpha phase certs etcd-server
kubeadm alpha phase kubeconfig all --config /etc/kubernetes/kubeadm.yaml
# then need to restart the kubelet and services, but for the master probably best to just reboot
</code></pre>
|
<p>I just wanted to know the impact of storing Apache Cassandra's data on another distributed file system.</p>
<p>For example- let's say i am having Hadoop cluster of 5 node and replication factor of 3. </p>
<p>Similarly for cassandra i am having 5 node of cluster with replication factor of 3 for all keyspaces. all data will be stored at hdfs location with same Mount path. </p>
<p>For example- node-0 Cassandra data directory -"/data/user/cassandra-0/"</p>
<p>And Cassandra logs directory -
"/data/user/cassandra-0/logs/</p>
<p>With such kind of Architecture i need comments on following points-</p>
<ol>
<li><p>As suggested in the DataStax documentation, the Cassandra data and commitlog directories should be different, which is not possible in this case. With the default configuration the Cassandra commitlog size is 8192MB. So, as per my understanding, if I have a disk of 1TB and the disk gets full or hits any disk-level error, will that stop the entire Cassandra cluster?</p></li>
<li><p>The second question is related to the underlying storage mechanism. Going with two levels of data distribution by specifying a replication factor of 3 for HDFS and 3 for Cassandra, will the same data (SSTables) be stored in 9 locations? That is a significant storage loss - please advise on this.</p></li>
</ol>
| <p>Cassandra doesn't support out-of-the-box storage of data on non-local file systems like HDFS. You could theoretically hack the source code to support this, but it makes no sense - Cassandra handles replication itself and doesn't need an additional file system layer.</p>
|
<p>When reading blog posts about WAFs and Kubernetes, it seems 90+ % of the posts are written by WAF-providers, while the remaining posts seem to be sceptical. So I would like to hear what your experiences are with WAFs, do they make sense, and if so can you recommend any good open-source WAFs? We are currently not allowed to used American cloud providers, as we work with "person data", and the Schrems II judgement has indicated that unencrypted "person data" is not allowed on their platforms (even if on EU servers).</p>
<h1>To my understanding WAF help with the following:</h1>
<ol>
<li>IP-whitelists/blacklists</li>
<li>Rate Limits</li>
<li>Scanning of HTTPS requests for SQLi and XSS</li>
<li>Cookie Poisoning and session-jacking</li>
<li>DDOS (requires a huge WAF cluster)</li>
</ol>
<h1>But I would also think that these problems can be handled elsewhere:</h1>
<ol>
<li>IP-whitelists/blacklists can be handled by the Loadbalancer or NetworkPolicies</li>
<li>Rate Limits can be configured in the Ingress</li>
<li>Handling of SQLi and XSS is done by input sanitization in the application</li>
<li>Server-side sessions bound to IPs can prevent poisoning and jacking</li>
<li>DDOS are hard to absorb, so I have no native solution here (but they are low risk?)</li>
</ol>
<p>Sure, I can see the advantage in centralizing security at the access gate to the network, but from what I have read WAFs are hard to maintain, they have tons of false positives, and most companies mainly use them to be compliant with ISO standards, and mainly in "monitoring mode". Shouldn't it be secure enough to use SecurityPolicies, NetworkPolicies, Ingress Rules and Loadbalancer Rules rather than a WAF?</p>
| <p>A WAF is not strictly <em>necessary</em> on Kubernetes — or on any other deployment platform. Honestly, even after consulting for dozens of companies, I've seldom encountered any site that used a WAF at all.</p>
<p>You're right that you could duplicate the functions of a WAF using other technology. But you're basically reinventing the wheel by doing so, and the programmers you assign to do it are not as expert in those security tasks as the developers of the WAF are. At the very least, they are probably doing it as one of many other tasks they are working on, so they can't devote themselves full-time to the implementation and testing of the WAF.</p>
<p>There is also a valid argument that <a href="https://en.wikipedia.org/wiki/Defense_in_depth_(computing)" rel="nofollow noreferrer">defense in depth</a> in computing is a good thing. Even if you have other security measures in place, they might fail. It's worth creating redundant layers of security defense, to account for that possibility.</p>
<p>There's a tradeoff between implementing security (or any other feature) yourself versus paying someone else for their expert work. This is true for many areas of software development, not only a WAF.</p>
<p>For example, it has become popular to use a web application framework. Is it possible to develop your own framework? Of course it is, and sometimes it's necessary if you want the code to have very specific behavior. But most of the time you can use some third-party framework off the shelf. It saves you a lot of time, and you get the instant benefit from years of development and testing done by someone else.</p>
|
<p>I'm trying to setup auto deploy with Kubernetes on GitLab. I've successfully enabled Kubernetes integration in my project settings. </p>
<p>Well, the integration icon is green and when I click "Test Settings" I see "We sent a request to the provided URL":</p>
<p><a href="https://i.stack.imgur.com/5jjm1.png" rel="noreferrer"><img src="https://i.stack.imgur.com/5jjm1.png" alt="Kubernetes Integration"></a></p>
<p>My deployment environment is the Google Container Engine.</p>
<p>Here's the auto deploy section in my <code>gitlab-ci.yml</code> config:</p>
<pre><code>deploy:
image: registry.gitlab.com/gitlab-examples/kubernetes-deploy
stage: deploy
script:
- export
- echo CI_PROJECT_ID=$CI_PROJECT_ID
- echo KUBE_URL=$KUBE_URL
- echo KUBE_CA_PEM_FILE=$KUBE_CA_PEM_FILE
- echo KUBE_TOKEN=$KUBE_TOKEN
- echo KUBE_NAMESPACE=$KUBE_NAMESPACE
- kubectl config set-cluster "$CI_PROJECT_ID" --server="$KUBE_URL" --certificate-authority="$KUBE_CA_PEM_FILE"
- kubectl config set-credentials "$CI_PROJECT_ID" --token="$KUBE_TOKEN"
- kubectl config set-context "$CI_PROJECT_ID" --cluster="$CI_PROJECT_ID" --user="$CI_PROJECT_ID" --namespace="$KUBE_NAMESPACE"
- kubectl config use-context "$CI_PROJECT_ID"
</code></pre>
<p>When I look at the results, the deploy phase fails. This is because all the <code>KUBE</code> variables are empty. </p>
<p>I'm not having much luck with the Kubernetes services beyond this point. Am I missing something?</p>
| <p>As it turns out, the Deployment Variables will not materialise unless you have configured and referenced an Environment.</p>
<p>Here's what the <code>.gitlab-ci.yaml</code> file looks like with the <code>environment</code> keyword:</p>
<pre><code>deploy:
image: registry.gitlab.com/gitlab-examples/kubernetes-deploy
stage: deploy
environment: production
script:
- export
- echo CI_PROJECT_ID=$CI_PROJECT_ID
- echo KUBE_URL=$KUBE_URL
- echo KUBE_CA_PEM_FILE=$KUBE_CA_PEM_FILE
- echo KUBE_TOKEN=$KUBE_TOKEN
- echo KUBE_NAMESPACE=$KUBE_NAMESPACE
- kubectl config set-cluster "$CI_PROJECT_ID" --server="$KUBE_URL" --certificate-authority="$KUBE_CA_PEM_FILE"
- kubectl config set-credentials "$CI_PROJECT_ID" --token="$KUBE_TOKEN"
- kubectl config set-context "$CI_PROJECT_ID" --cluster="$CI_PROJECT_ID" --user="$CI_PROJECT_ID" --namespace="$KUBE_NAMESPACE"
- kubectl config use-context "$CI_PROJECT_ID"
</code></pre>
|
<p>While the <a href="https://github.com/kubernetes/client-go/blob/master/examples/out-of-cluster-client-configuration/main.go" rel="nofollow noreferrer">kubernetes golang api example for out-of-cluster authentication works fine</a>, and <a href="https://gist.github.com/innovia/fbba8259042f71db98ea8d4ad19bd708" rel="nofollow noreferrer">creating a service account and exporting the bearer token works great</a>, it feels silly to write the pieces to a temporary file only to tell the API to read it. Is there an API way to pass these pieces as an object rather than write to a file?</p>
<pre><code> clusterData := map[string]string{
"BEARER_TOKEN": bearerToken,
"CA_DATA": clusterCA,
"ENDPOINT": clusterUrl,
}
const kubeConfigTmpl = `
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: {{.CA_DATA}}
    server: {{.ENDPOINT}}
name: kubernetes
contexts:
- context:
cluster: kubernetes
namespace: default
user: lamdba-serviceaccount-default-kubernetes
name: lamdba-serviceaccount-default-kubernetes
current-context: lamdba-serviceaccount-default-kubernetes
kind: Config
preferences: {}
users:
- name: lamdba-serviceaccount-default-kubernetes
user:
token: {{.BEARER_TOKEN}}
`
t := template.Must(template.New("registration").Parse(kubeConfigTmpl))
buf := &bytes.Buffer{}
if err := t.Execute(buf, clusterData); err != nil {
panic(err)
}
registrationPayload := buf.String()
d1 := []byte(registrationPayload)
err := ioutil.WriteFile("/tmp/config", d1, 0644)
</code></pre>
| <p>Looking at the source code, this should work:</p>
<pre><code>import (
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)
// error handling omitted for brevity
cc, _ := clientcmd.NewClientConfigFromBytes([]byte(d1))
config, _ := cc.ClientConfig()
clientset, _ := kubernetes.NewForConfig(config)
</code></pre>
|
<p>I am using Kubernetes to deploy all my microservices provided by Azure Kubernetes Services.</p>
<p>Whenever I release an update of my microservice which is getting frequently from last one month, it pulls the new image from the Azure Container Registry.</p>
<p>I was trying to figure out where these images reside in the cluster.</p>
<p>Docker stores pulled images in /var/lib/docker, and since Kubernetes uses Docker under the hood, maybe it stores the images somewhere similar too.</p>
<p>But if this is the case, how can I delete the old images from the cluster that are not in use anymore?</p>
| <p>Clusters with Linux node pools created on Kubernetes v1.19 or greater default to containerd for their container runtime (<a href="https://learn.microsoft.com/en-us/azure/aks/cluster-configuration#container-runtime-configuration" rel="nofollow noreferrer">Container runtime configuration</a>).</p>
<p>To manually remove unused images on a node running containerd:</p>
<p>Identify node names:</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get nodes
</code></pre>
<p>Start an interactive debugging container on a node (<a href="https://learn.microsoft.com/en-us/azure/aks/ssh" rel="nofollow noreferrer">Connect with SSH to Azure Kubernetes Service</a>):</p>
<pre class="lang-sh prettyprint-override"><code>kubectl debug node/aks-agentpool-11045208-vmss000003 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
</code></pre>
<p>Setup <code>crictl</code> on the debugging container (<a href="https://github.com/kubernetes-sigs/cri-tools/releases" rel="nofollow noreferrer">check for newer releases of crictl</a>):</p>
<blockquote>
<p>The host node's filesystem is available at <code>/host</code>, so configure <code>crictl</code> to use the host node's <code>containerd.sock</code>.</p>
</blockquote>
<pre class="lang-sh prettyprint-override"><code>curl -sL https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz | tar xzf - -C /usr/local/bin \
&& export CONTAINER_RUNTIME_ENDPOINT=unix:///host/run/containerd/containerd.sock IMAGE_SERVICE_ENDPOINT=unix:///host/run/containerd/containerd.sock
</code></pre>
<p>Remove unused images on the node:</p>
<pre class="lang-sh prettyprint-override"><code>crictl rmi --prune
</code></pre>
|
<p>I'm building a few Spring Boot microservices that are getting deployed in a Kubernetes (AKS specifically) cluster. I was planning on setting the probePaths for the <strong><em>liveness & readiness</em></strong> check to both point at the actuator health endpoint, but was wondering if that may not be the best option. My original thinking was that checking the path would be useful (at least for readiness) so that traffic wouldn't be sent to it until Spring has started up and is capable of handling requests. Since these services use a database connection, and the actuator health indicator will report status as down if it can't make a connection, will that not be such a good idea?</p>
<p>With liveness, I'm thinking it might start recycling the pods/containers over and over even though (in the case the DB is down) it might not fix anything.</p>
<p>With readiness, I'm thinking it might cause the pool of available apps to be 0 if the DB is down. The app itself will most likely not be very useful if the DB is down, but parts may still work potentially I suppose.</p>
<p>Is there a recommended best practice for this type of thing?</p>
| <p>As of Spring Boot 2.3, <a href="https://docs.spring.io/spring-boot/docs/2.3.0.RELEASE/reference/html/spring-boot-features.html#boot-features-application-availability" rel="noreferrer">the Availability state of the application</a> (including Liveness and Readiness) is supported in the core and <a href="https://docs.spring.io/spring-boot/docs/2.3.0.RELEASE/reference/html/production-ready-features.html#production-ready-kubernetes-probes" rel="noreferrer">can be exposed as Kubernetes Probes with Actuator</a>.</p>
<p>Your question is spot on and this was discussed at length in <a href="https://github.com/spring-projects/spring-boot/issues/19593#issuecomment-572975767" rel="noreferrer">the Spring Boot issue for the Liveness/Readiness feature</a>.</p>
<p>The <code>/health</code> endpoint was never really designed to expose the application state and drive how the cloud platform treats the app instance it and routes traffic to it. It's been used that way quite a lot since Spring Boot didn't have better to offer here.</p>
<p>The <code>Liveness</code> should only fail when the internal state of the application is broken and we cannot recover from it. As you've underlined in your question, failing here as soon as an external system is unavailable can be dangerous: the platform might recycle all application instances depending on that external system (maybe all of them?) and cause cascading failures, since other systems might be depending on that application as well.</p>
<p>By default, the liveness probe will reply with "Success" unless the application itself changed that internal state.</p>
<p>The <code>Readiness</code> probe is really about the ability for the application to serve traffic. As you've mentioned, some health checks might show the state of essential parts of the application, some others not. Spring Boot will synchronize the Readiness state with the lifecycle of the application (the web app has started, the graceful shutdown has been requested and we shouldn't route traffic anymore, etc). There is a way to configure a "readiness" health group to contain a custom set of health checks for your particular use case.</p>
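<p>As a sketch of wiring this up on the Kubernetes side (the port is an assumption - use whatever port your app serves on), Spring Boot 2.3 exposes the two probe groups as <code>/actuator/health/liveness</code> and <code>/actuator/health/readiness</code>, which the pod spec can point at directly:</p>
<pre class="lang-yaml prettyprint-override"><code>livenessProbe:
  httpGet:
    path: /actuator/health/liveness
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /actuator/health/readiness
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10
</code></pre>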
<p>I disagree with a few statements in the answer that received the bounty, especially because a lot has changed in Spring Boot since then:</p>
<ol>
<li>You should not use <code>/actuator/health</code> for Liveness or Readiness probes as of Spring Boot 2.3.0.</li>
<li>With the new Spring Boot lifecycle, you should move all the long-running startup tasks as <code>ApplicationRunner</code> beans - they will be executed after Liveness is Success, but before Readiness is Success. If the application startup is still too slow for the configured probes, you should then use the StartupProbe with a longer timeout and point it to the Liveness endpoint.</li>
<li>Using the management port can be dangerous, since it's using a separate web infrastructure. For example, the probes exposed on the management port might be OK but the main connector (serving the actual traffic to clients) might be overwhelmed and cannot serve more traffic. Reusing the same server and web infrastructure for the probes can be safer in some case. </li>
</ol>
<p>For more information about this new feature, you can read the dedicated <a href="https://spring.io/blog/2020/03/25/liveness-and-readiness-probes-with-spring-boot" rel="noreferrer">Kubernetes Liveness and Readiness Probes with Spring Boot</a> blog post.</p>
|