Question | QuestionAuthor | Answer | AnswerAuthor
---|---|---|---|
<p>I used this line to get the failed pods: <code>workflow.failures</code>. I would like to get the same info about the nodes that have finished successfully. Is there a command to get the information about the ones that ended correctly? I'm using Argo 3.</p>
| javier_orta | <p>There is no <code>workflow.nodes</code> <a href="https://github.com/argoproj/argo-workflows/blob/master/docs/variables.md#exit-handler" rel="nofollow noreferrer">global variable</a>. But if you have kubectl access to get the JSON representation of the workflow, you can get information about executed nodes.</p>
<pre class="lang-sh prettyprint-override"><code>kubectl get wf my-workflow -ojson | jq '.status.nodes'
</code></pre>
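<p>For example, to keep only the nodes that finished successfully (a sketch; the workflow name and the exact <code>jq</code> filter are assumptions to adapt):</p>
<pre class="lang-sh prettyprint-override"><code># .status.nodes is a map keyed by node ID; map/select operates on its values
kubectl get wf my-workflow -ojson | jq '.status.nodes | map(select(.phase == "Succeeded"))'
</code></pre>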
| crenshaw-dev |
<p>I created a <code>WorkflowTemplate</code> in which I want to pass the result of a script template as an input parameter to another task.</p>
<p>Here is my <code>WorkflowTemplate</code></p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
name: dag-wft
spec:
entrypoint: whalesay
templates:
- name: whalesay
inputs:
parameters:
- name: message
default: '["tests/hello", "templates/hello", "manifests/hello"]'
dag:
tasks:
- name: prepare-lst
template: prepare-list-script
arguments:
parameters:
- name: message
value: "{{inputs.parameters.message}}"
- name: templates
depends: prepare-lst
templateRef:
name: final-dag-wft
template: whalesay-final
arguments:
parameters:
- name: fnl_message
value: "{{item}}"
withParam: "{{tasks.prepare-lst.outputs.parameters.templates_lst}}"
- name: manifests
depends: prepare-lst && (templates.Succeeded || templates.Skipped)
templateRef:
name: final-dag-wft
template: whalesay-final
arguments:
parameters:
- name: fnl_message
value: "{{item}}"
withParam: "{{tasks.prepare-lst.outputs.parameters.manifests_lst}}"
- name: prepare-list-script
inputs:
parameters:
- name: message
script:
image: python
command: [python]
source: |
manifests_lst = []
# templates list preparation
templates_lst = ['templates/' for template in "{{inputs.parameters.message}}" if 'templates/' in template]
print(templates_lst)
# manifests list preparation
for i in "{{inputs.parameters.message}}":
if 'templates/' not in i:
manifests_lst.append(i)
print(manifests_lst)
outputs:
parameters:
- name: templates_lst
- name: manifests_lst
</code></pre>
<p>In the above script template I've added print statements for two variables, <code>templates_lst</code> and <code>manifests_lst</code>. I want to pass the results of these two variables as inputs to two other tasks in the DAG, <code>templates</code> and <code>manifests</code>.</p>
<p>The way I am accessing the output values is <code>"{{tasks.prepare-lst.outputs.parameters.templates_lst}}"</code> and <code>"{{tasks.prepare-lst.outputs.parameters.manifests_lst}}"</code>, but it is not working.</p>
<p>How can I do this?</p>
| Biru | <p><strong>1. Fully define your output parameters</strong></p>
<p>Your output parameter spec is incomplete. You need to specify <em>where</em> the output parameter comes from.</p>
<p>Since you have multiple output parameters, you can't just use standard out (<code>{{tasks.prepare-lst.outputs.result}}</code>). You have to write two files and derive an output parameter from each.</p>
<p><strong>2. Load the JSON array so it's iterable</strong></p>
<p>If you iterate over the string representation of the array, you'll just get one character at a time.</p>
<p><strong>3. Use an environment variable to pass input to Python</strong></p>
<p>Although it's not strictly necessary, I consider it best practice. If a malicious actor had the ability to set the <code>message</code> parameter, they could inject Python into your workflow. Pass the parameter as an environment variable so the string remains a string.</p>
<p><strong>Changes:</strong></p>
<pre><code> - name: prepare-list-script
inputs:
parameters:
- name: message
script:
image: python
command: [python]
+ env:
+ - name: MESSAGE
+ value: "{{inputs.parameters.message}}"
source: |
+ import json
+ import os
+ message = json.loads(os.environ["MESSAGE"])
manifests_lst = []
# templates list preparation
- templates_lst = ['templates/' for template in "{{inputs.parameters.message}}" if 'templates/' in template]
+ templates_lst = [template for template in message if 'templates/' in template]
- print(templates_lst)
+ with open('/mnt/out/templates_lst.txt', 'w') as outfile:
+ outfile.write(str(json.dumps(templates_lst)))
# manifests list preparation
for i in "{{inputs.parameters.message}}":
if 'templates/' not in i:
manifests_lst.append(i)
- print(manifests_lst)
+ with open('/mnt/out/manifests_lst.txt', 'w') as outfile:
+ outfile.write(str(json.dumps(manifests_lst)))
+ volumeMounts:
+ - name: out
+ mountPath: /mnt/out
+ volumes:
+ - name: out
+ emptyDir: { }
outputs:
parameters:
- name: templates_lst
+ valueFrom:
+ path: /mnt/out/templates_lst.txt
- name: manifests_lst
+ valueFrom:
+ path: /mnt/out/manifests_lst.txt
</code></pre>
| crenshaw-dev |
<p>I'm setting up a kubernetes cluster with many different components for our application stack and I'm trying to balance storage requirements while minimizing the number of components.</p>
<p>We have a web <strong>scraper</strong> that downloads tens of thousands of HTML files (and maybe PDFs) every day and I want to store these somewhere (along with some JSON metadata). I want the files stored in a redundant scalable way but having millions of small files seems like a bad fit with e.g. GlusterFS.</p>
<p>At the same time we have some very large binary files used by our system (several gigabytes) and also probably many smaller binary files (tens of MBs). These do not seem like a good fit for any distributed NoSQL DB like MongoDB.</p>
<p>So I'm considering using MongoDB + GlusterFS to separately address these two needs but I would rather reduce the number of moving pieces and just use one system. I have also read various warnings about using GlusterFS without e.g. Redhat support (which we definitely will not have).</p>
<p>Can anyone recommend an alternative? I am looking for something that is a distributed binary object store which is easy to setup/maintain and supports both small and large files. One advantage of our setup is that files will rarely ever be updated or deleted (just written and then read) and we don't even need indexing (that will be handled separately by elasticsearch) or high speed access for reads.</p>
| Prefer Anon | <p>Are you in a cloud? If you're in AWS, S3 would be a good fit; object storage sounds like what you want, though I'm not sure of all your requirements. </p>
<p>If not in a cloud, you could run MinIO (<a href="https://www.minio.io/" rel="nofollow noreferrer">https://www.minio.io/</a>), which would give you the same type of object storage that S3 would give you. </p>
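<p>For example, a single-node MinIO can be stood up with Docker; a minimal sketch (the host data path is an assumption):</p>
<pre class="lang-sh prettyprint-override"><code># Serve the contents of /mnt/data over an S3-compatible API on port 9000
docker run -p 9000:9000 -v /mnt/data:/data minio/minio server /data
</code></pre>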
<p>I do something similar now where I store binary documents in MongoDB and we back the nodes with EBS volumes. </p>
| Steve Sloka |
<p>I am trying to install Argo CLI by following this (<a href="https://github.com/argoproj/argo-workflows/releases" rel="nofollow noreferrer">https://github.com/argoproj/argo-workflows/releases</a>) documentation.</p>
<pre><code># Download the binary
curl -sLO https://github.com/argoproj/argo/releases/download/v3.1.3/argo-linux-amd64.gz
# Unzip
gunzip argo-linux-amd64.gz
# Make binary executable
chmod +x argo-linux-amd64
# Move binary to path
mv ./argo-linux-amd64 /usr/local/bin/argo
# Test installation
argo version
</code></pre>
<p>The above instructions are not working. So, I followed the answer to this (<a href="https://stackoverflow.com/questions/64916480/how-to-update-argo-cli">How to update Argo CLI?</a>) question.</p>
<pre><code>curl -sLO https://github.com/argoproj/argo/releases/download/v2.12.0-rc2/argo-linux-amd64
chmod +x argo-linux-amd64
./argo-linux-amd64
</code></pre>
<p>But I am getting the following error:</p>
<pre><code>./argo-linux-amd64: line 1: Not: command not found
</code></pre>
<p>I also tried moving the <code>argo-linux-amd64</code> binary to <code>/usr/local/bin/argo</code> but still getting the same error (as expected).</p>
<p>Is there any solution to this?</p>
<p>Thank you.</p>
| Pratik Patil | <p>The download links on the Releases page are incorrect. Try this one:</p>
<pre><code>curl -sLO https://github.com/argoproj/argo-workflows/releases/download/v3.1.3/argo-linux-amd64.gz
</code></pre>
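<p>The remaining steps from the docs should then work as-is:</p>
<pre class="lang-sh prettyprint-override"><code>gunzip argo-linux-amd64.gz
chmod +x argo-linux-amd64
mv ./argo-linux-amd64 /usr/local/bin/argo
argo version
</code></pre>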
<p>I've submitted an <a href="https://github.com/argoproj/argo-workflows/issues/6440" rel="nofollow noreferrer">issue</a> to get the links fixed.</p>
| crenshaw-dev |
<p>I'm trying to make argocd cli output yaml/json to prep it for script ingestion.</p>
<p>According to this PR: <a href="https://github.com/argoproj/argo-cd/pull/2551" rel="nofollow noreferrer">https://github.com/argoproj/argo-cd/pull/2551</a>
It should be available, but I can't find the option in the CLI help or the documentation.</p>
<pre><code>#argocd version:
argocd: v2.1.2+7af9dfb
...
argocd-server: v2.0.3+8d2b13d
</code></pre>
| Vano | <p>Some commands accept the <code>-o json</code> flag to request JSON output.</p>
<p>Look in the <a href="https://github.com/argoproj/argo-cd/tree/master/docs/user-guide/commands" rel="nofollow noreferrer">commands documentation</a> to find commands which support that flag.</p>
<p><code>argocd cluster list -o json</code>, for example, will return a JSON list of configured clusters. <a href="https://github.com/argoproj/argo-cd/blob/caa246a38d56639f76ba0efb712967a191fddf44/docs/user-guide/commands/argocd_cluster_get.md#options" rel="nofollow noreferrer">The documentation</a> looks like this:</p>
<blockquote>
<h2>Options</h2>
<pre><code> -h, --help help for get
-o, --output string
Output format. One of: json|yaml|wide|server (default "yaml")
</code></pre>
</blockquote>
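<p>From there it's straightforward to feed the output into scripts; a sketch (the exact <code>jq</code> filter is an assumption about the output shape):</p>
<pre class="lang-sh prettyprint-override"><code># Print just the names of all Argo CD applications
argocd app list -o json | jq -r '.[].metadata.name'
</code></pre>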
| crenshaw-dev |
<p>I'm working on deploying the Thanos monitoring system and one of its components, the metric compactor, warns that there should <em>never</em> be more than one compactor running at the same time. If this constraint is violated it will likely lead to corruption of metric data.</p>
<p>Is there any way to codify "Exactly One" pod via Deployment/StatefulSet/etc, aside from "just set <code>replicas: 1</code> and never scale"? We're using Rancher as an orchestration layer and it's <em>real</em> easy to hit that <code>+</code> button without thinking about it.</p>
| Sammitch | <p>Be careful with Deployments, because they can be configured with two update strategies:</p>
<ul>
<li><em>RollingUpdate</em>: new pods are added while old pods are terminated. This means that, depending on the <code>maxSurge</code> option, even if you set your replicas to <code>1</code>, you may still have <em>up to 2 pods at once</em>.</li>
<li><em>Recreate</em>: all the previous pods are terminated before any new pods are created (see the <code>kubectl patch</code> sketch after this list).</li>
</ul>
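<p>If you'd rather keep a Deployment, you can switch it to the <em>Recreate</em> strategy; a sketch with <code>kubectl</code> (the deployment name is an assumption):</p>
<pre class="lang-sh prettyprint-override"><code># Recreate terminates old pods before creating new ones: brief downtime, but never two at once
kubectl patch deployment thanos-compactor -p '{"spec":{"strategy":{"type":"Recreate"}}}'
</code></pre>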
<p>Alternatively, StatefulSets guarantee that there will never be more than one instance of a given pod at any time. </p>
<pre><code>apiVersion: apps/v1beta1
kind: StatefulSet
spec:
replicas: 1
</code></pre>
<p>Unlike Deployments, pods are not replaced until the previous has been terminated.</p>
| Federkun |
<p>Is there a way in Argo CD to suspend sync globally? For example, my company occasionally does deployment freezes in which no one is allowed to ship to production. Is there a way to turn off sync globally in Argo CD to prevent new versions of apps from deploying?</p>
| David Ham | <p>As LostJon commented, you can scale the application controller down. But then you won't see the current state of resources in the UI - only the state as of the moment the controller was shut down.</p>
<p>Instead of scaling down the controller, you can set a <a href="https://argo-cd.readthedocs.io/en/latest/user-guide/sync_windows/" rel="nofollow noreferrer">sync window</a> on a <a href="https://argo-cd.readthedocs.io/en/latest/user-guide/projects/#configuring-global-projects-v18" rel="nofollow noreferrer">global project</a>.</p>
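<p>For example, a blanket deny window can be added from the CLI; a sketch (the project name, schedule, and duration are assumptions):</p>
<pre class="lang-sh prettyprint-override"><code># Deny syncs for all apps in the project, starting daily at midnight, for 24h
argocd proj windows add default --kind deny --schedule "0 0 * * *" --duration 24h --applications "*"
</code></pre>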
| crenshaw-dev |
<p>I have an Argo workflow with dynamic fan-out tasks that do some map operation (in a Map-Reduce meaning context). I want to create a reducer that aggregates their results. It's possible to do that when the outputs of each mapper are small and can be put as an output parameter. See <a href="https://stackoverflow.com/questions/60569353/dynamic-fan-in-in-argo-workflows">this SO question-answer</a> for the description of how to do it.</p>
<p>But how to aggregate output <strong>artifacts</strong> with Argo without writing custom logic of writing them to some storage in each mapper and read from it in reducer?</p>
| Alexander Reshytko | <p>Artifacts are more difficult to aggregate than parameters.</p>
<p>Parameters are always text and are generally small. This makes it easy for Argo Workflows to aggregate them into a single JSON object which can then be consumed by a "reduce" step.</p>
<p>Artifacts, on the other hand, may be any type or size. So Argo Workflows is limited in how much it can help with aggregation.</p>
<p>The main relevant feature it provides is declarative repository write/read operations. You can specify, for example, an S3 prefix to write each mapper's output artifact to. Then, in the reduce step, you can load everything from that prefix and perform your aggregation logic.</p>
<p>Argo Workflows provides a <a href="https://github.com/argoproj/argo-workflows/blob/master/examples/map-reduce.yaml" rel="nofollow noreferrer">generic map/reduce example</a>. But besides artifact writing/reading, you pretty much have to do the aggregation logic yourself.</p>
| crenshaw-dev |
<p>I'm developing a Blazor WebAssembly app with PWA enabled, and with files <code>appsettings.json</code>, <code>appsettings.Development.json</code> and <code>appsettings.Production.json</code>. The last one is empty because it would contain secrets to replace when production environment is deployed to a kubernetes cluster.</p>
<p>I'm using k8s to deploy, and a <code>Secret</code> resource to replace the empty <code>appsettings.Production.json</code> file by an encrypted file, into a nginx based container with the published blazor app inside.</p>
<p>Now I'm getting this issue in the browser:
<a href="https://i.stack.imgur.com/Cxpqn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Cxpqn.png" alt="Failed integrity validation on resource" /></a></p>
<p>When the application was built using docker build in a CI pipeline, the file was an empty JSON file, so the SHA computed for the replaced file does not match the one computed by the build process.</p>
<p><strong>My question is</strong>: How can I replace the <code>appsettings.Production.json</code> during deployment, much later than the build process, and don't have the integrity test failed over that file?</p>
<p>The file <code>blazor.boot.json</code> does not contain any SHA for the <code>appsetting.Production.json</code> file:</p>
<pre class="lang-json prettyprint-override"><code>{
"cacheBootResources": true,
"config": [
"appsettings.Development.json",
"appsettings.json",
"appsettings.Production.json"
],
"debugBuild": false,
"entryAssembly": "IrisTenantWeb",
"icuDataMode": 0,
"linkerEnabled": true,
"resources": {
"assembly": {
"Azure.Core.dll": "sha256-rzNx\/GlDpiutVRPzugT82owXvTopmiixMar68xLA6L8=",
// Bunch of .dlls,
"System.Private.CoreLib.dll": "sha256-S7l+o9J9ivjCunMa+Ms\/JO\/kVaXLW8KTAjq1eRjY4EA="
},
"lazyAssembly": null,
"pdb": null,
"runtime": {
"dotnet.timezones.blat": "sha256-SQvzbzBfueaAxSKIKE1khBH02NH2MJJaWDBav\/S5MSs=",
"dotnet.wasm": "sha256-YXYNlLeMqRPFVpY2KSDhleLkNk35d9KvzzwwKAoiftc=",
"icudt.dat": "sha256-m7NyeXyxM+CL04jr9ui1Z6pVfMWwhHusuz5qNZWpAwA=",
"icudt_CJK.dat": "sha256-91bygK5voY9lG5wxP0\/uj7uH5xljF9u7iWnSldT1Z\/g=",
"icudt_EFIGS.dat": "sha256-DPfeOLph83b2rdx40cKxIBcfVZ8abTWAFq+RBQMxGw0=",
"icudt_no_CJK.dat": "sha256-oM7Z6aN9jHmCYqDMCBwFgFAYAGgsH1jLC\/Z6DYeVmmk=",
"dotnet.5.0.5.js": "sha256-Dvb7uXD3+JPPqlsw2duS+FFNQDkFaxhIbSQWSnhODkM="
},
"satelliteResources": null
}
}
</code></pre>
<p>But the <code>service-worker-assets.js</code> file DOES contains a SHA computed for it:</p>
<pre class="lang-js prettyprint-override"><code>self.assetsManifest = {
"assets": [
{
"hash": "sha256-EaNzjsIaBdpWGRyu2Elt6mv3X+48iD9gGaSN8xAm3ao=",
"url": "appsettings.Development.json"
},
{
"hash": "sha256-RIn54+RUdIs1IeshTgpWlNViz\/PZ\/1EctFaVPI9TTAA=",
"url": "appsettings.json"
},
{
"hash": "sha256-RIn54+RUdIs1IeshTgpWlNViz\/PZ\/1EctFaVPI9TTAA=",
"url": "appsettings.Production.json"
},
{
"hash": "sha256-OV+CP+ILUqNY7e7\/MGw1L5+Gi7EKCXEYNJVyBjbn44M=",
"url": "css\/app.css"
},
// ...
],
"version": "j39cUu6V"
};
</code></pre>
<blockquote>
<p>NOTE: You can see that both <code>appsettings.json</code> and <code>appsettings.Production.json</code> have the same hash because they are both the empty JSON file <code>{}</code>. But in production the second one has a computed hash of <code>YM2gjmV5...</code>, which triggers the error.</p>
</blockquote>
<p>I can't have different build processes for different environments, because that would not ensure using the same build for staging and production. I need to use the same docker image but replace the file at deployment time.</p>
| isierra | <p>I edited the <code>wwwroot/service-worker.published.js</code> file, whose first lines are as follows:</p>
<pre class="lang-js prettyprint-override"><code>// Caution! Be sure you understand the caveats before publishing an application with
// offline support. See https://aka.ms/blazor-offline-considerations
self.importScripts('./service-worker-assets.js');
self.addEventListener('install', event => event.waitUntil(onInstall(event)));
self.addEventListener('activate', event => event.waitUntil(onActivate(event)));
self.addEventListener('fetch', event => event.respondWith(onFetch(event)));
const cacheNamePrefix = 'offline-cache-';
const cacheName = `${cacheNamePrefix}${self.assetsManifest.version}`;
const offlineAssetsInclude = [ /\.dll$/, /\.pdb$/, /\.wasm/, /\.html/, /\.js$/, /\.json$/, /\.css$/, /\.woff$/, /\.png$/, /\.jpe?g$/, /\.gif$/, /\.ico$/, /\.blat$/, /\.dat$/ ];
const offlineAssetsExclude = [ /^service-worker\.js$/ ];
async function onInstall(event) {
console.info('Service worker: Install');
// Fetch and cache all matching items from the assets manifest
const assetsRequests = self.assetsManifest.assets
.filter(asset => offlineAssetsInclude.some(pattern => pattern.test(asset.url)))
.filter(asset => !offlineAssetsExclude.some(pattern => pattern.test(asset.url)))
.map(asset => new Request(asset.url, { integrity: asset.hash }));
await caches.open(cacheName).then(cache => cache.addAll(assetsRequests));
}
...
</code></pre>
<p>I added an array of patterns, similar to <code>offlineAssetsInclude</code> and <code>offlineAssetsExclude</code>, to indicate which files to skip integrity checks for.</p>
<pre class="lang-js prettyprint-override"><code>...
const offlineAssetsInclude = [ /\.dll$/, /\.pdb$/, /\.wasm/, /\.html/, /\.js$/, /\.json$/, /\.css$/, /\.woff$/, /\.png$/, /\.jpe?g$/, /\.gif$/, /\.ico$/, /\.blat$/, /\.dat$/ ];
const offlineAssetsExclude = [ /^service-worker\.js$/ ];
const integrityExclude = [ /^appsettings\.Production\.json$/ ]; // <-- new variable
</code></pre>
<p>Then at <code>onInstall</code>, instead of always returning a <code>Request</code> with <code>integrity</code> set, I skipped it for excluded patterns:</p>
<pre class="lang-js prettyprint-override"><code>...
async function onInstall(event) {
console.info('Service worker: Install');
// Fetch and cache all matching items from the assets manifest
const assetsRequests = self.assetsManifest.assets
.filter(asset => offlineAssetsInclude.some(pattern => pattern.test(asset.url)))
.filter(asset => !offlineAssetsExclude.some(pattern => pattern.test(asset.url)))
.map(asset => {
// Start of new code
const integrity =
integrityExclude.some(pattern => pattern.test(asset.url))
? null
: asset.hash;
return !!integrity
? new Request(asset.url, { integrity })
: new Request(asset.url);
// End of new code
});
await caches.open(cacheName).then(cache => cache.addAll(assetsRequests));
}
...
</code></pre>
<p>I'll wait for others to comment and propose other solutions, because the ideal solution would set the correct SHA hash for the file instead of ignoring it.</p>
| isierra |
<p>I have this argo application :</p>
<pre><code>project: myproject
destination:
server: 'https://kubernetes.default.svc'
namespace: myns
syncPolicy:
automated:
prune: true
syncOptions:
- CreateNamespace=true
sources:
- repoURL: >-
https://gitlab.com/myrepos/helm-repo.git
path: helm
targetRevision: HEAD
helm:
valueFiles:
- $values/mocks/values.yaml
- repoURL: >-
https://gitlab.com/myrepos/values-repo.git
targetRevision: HEAD
ref: values
</code></pre>
<p>This is my values file:</p>
<pre><code>mocks:
projectName: "soapui-project"
</code></pre>
<p>This works fine, and I am able to set the values from a remote repo, as long as my XML file is in the same repo as my Helm chart.</p>
<p>This is a template from my chart, it's a kubernetes configmap:</p>
<pre><code>{{ $projectFile := printf "%s.xml" .Values.mocks.projectName }}
{{ $projectName := .Values.mocks.projectName }}
apiVersion: v1
kind: ConfigMap
metadata:
name: "{{ $projectName | lower }}"
data:
{{ ($.Files.Glob $projectFile).AsConfig | indent 1 }}
</code></pre>
<p>My problem/question:</p>
<p>Can I read the file contents if the file is not in the same repo as the Helm chart? Let's say my XML file is in my values repo: can I pass its content to the data of the ConfigMap?</p>
<p>I am expecting to be able to put my SoapUI XML file in a remote repo and pass its content to my Argo app's Helm chart, the same way I pass values from another repo to my Argo app.</p>
| titter | <p>No, you cannot. The <code>$values</code> feature is very specific to sharing the values file, not arbitrary config files.</p>
<p>This PR intends to implement the more general feature you need: <a href="https://github.com/argoproj/argo-cd/pull/12508" rel="nofollow noreferrer">https://github.com/argoproj/argo-cd/pull/12508</a></p>
| crenshaw-dev |
<p>I am trying to access the content(json data) of a file which is passed as input artifacts to a script template. It is failing with the following error <code>NameError: name 'inputs' is not defined. Did you mean: 'input'?</code></p>
<p>My artifacts are stored in an AWS S3 bucket. I've also tried using environment variables instead of referring to the artifacts directly in the script template, but that is not working either.</p>
<p>Here is my workflow</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: output-artifact-s3-
spec:
entrypoint: main
templates:
- name: main
dag:
tasks:
- name: whalesay-script-template
template: whalesay
- name: retrieve-output-template
dependencies: [whalesay-script-template]
arguments:
artifacts:
- name: result
from: "{{tasks.whalesay-script-template.outputs.artifacts.message}}"
template: retrieve-output
- name: whalesay
script:
image: python
command: [python]
env:
- name: OUTDATA
value: |
{
"lb_url" : "<>.us-east-1.elb.amazonaws.com",
"vpc_id" : "<vpc-id",
"web_server_count" : "4"
}
source: |
import json
import os
OUTDATA = json.loads(os.environ["OUTDATA"])
with open('/tmp/templates_lst.txt', 'w') as outfile:
outfile.write(str(json.dumps(OUTDATA)))
volumeMounts:
- name: out
mountPath: /tmp
volumes:
- name: out
emptyDir: { }
outputs:
artifacts:
- name: message
path: /tmp
- name: retrieve-output
inputs:
artifacts:
- name: result
path: /tmp
script:
image: python
command: [python]
source: |
import json
result = {{inputs.artifacts.result}}
with open(result, 'r') as outfile:
lines = outfile.read()
print(lines)
print('Execution completed')
</code></pre>
<p>What's wrong with this workflow?</p>
| Biru | <p>In the last template, replace <code>{{inputs.artifacts.result}}</code> with <code>"/tmp/templates_lst.txt"</code>.</p>
<p><code>inputs.artifacts.NAME</code> has no meaning in the <code>source</code> field, so Argo leaves it as-is. Python tries to interpret it as code, which is why you get an exception.</p>
<p>The proper way to communicate an input artifact to Python in Argo is to specify the artifact destination (which you've done) in the template's input definition. Then, in Python, read files from that path the same way you would in any Python app.</p>
| crenshaw-dev |
<p>I am trying to perform a <a href="https://kubernetes.io/docs/tutorials/kubernetes-basics/update/update-intro/" rel="nofollow noreferrer" title="Kubernetes Performing a Rolling Update">Kubernetes Rolling Update</a> using <a href="https://v2.helm.sh/docs/" rel="nofollow noreferrer" title="Helm v2 Docs">Helm v2</a>; however, I'm unable to.</p>
<p>When I perform a <code>helm upgrade</code> on a <a href="https://hub.docker.com/r/hqasem/slow-tomcat" rel="nofollow noreferrer" title="Slow Tomcat Docker Image">slow Tomcat image</a>, the original pod is destroyed.</p>
<p>I would like to figure out how to achieve zero downtime by incrementally updating Pods instances with new ones, and draining old ones.</p>
<p>To demonstrate, I created a sample <a href="https://hub.docker.com/r/hqasem/slow-tomcat" rel="nofollow noreferrer" title="Slow Tomcat Docker Image">slow Tomcat Docker image</a>, and a <a href="https://github.com/h-q/slowtom/" rel="nofollow noreferrer" title="Slow Tomcat Helm v2 deployment chart">Helm chart</a>.</p>
<h2>To install:</h2>
<pre><code>helm install https://github.com/h-q/slowtom/raw/master/docs/slowtom.tgz --name slowtom \
-f https://github.com/h-q/slowtom/raw/master/docs/slowtom/environments/initial.yaml
</code></pre>
<p>You can follow the logs by running <code>kubectl logs -f slowtom-sf-0</code>, and once ready you can access the application on <code>http://localhost:30901</code></p>
<h2>To upgrade:</h2>
<h3>(and that's where I need help)</h3>
<pre><code>helm upgrade slowtom https://github.com/h-q/slowtom/raw/master/docs/slowtom.tgz \
-f https://github.com/h-q/slowtom/raw/master/docs/slowtom/environments/upgrade.yaml
</code></pre>
<p>The <a href="https://github.com/h-q/slowtom/blob/master/docs/slowtom/environments/upgrade.yaml" rel="nofollow noreferrer" title="Upgrade Helm Deployment Yaml file"><code>upgrade.yaml</code></a> is identical to the <a href="https://github.com/h-q/slowtom/blob/master/docs/slowtom/environments/initial.yaml" rel="nofollow noreferrer" title="Initial Helm Deployment Yaml file"><code>initial.yaml</code></a> deployment file with the exception of the tag version number.</p>
<p>Here the original pod is destroyed, and the new one starts. Meanwhile, users are unable to access the application on <code>http://localhost:30901</code></p>
<h3>To Delete:</h3>
<pre><code>helm del slowtom --purge
</code></pre>
<h1>Reference</h1>
<h2>Local Helm Chart</h2>
<h3>Download helm chart:</h3>
<pre><code>curl -LO https://github.com/h-q/slowtom/raw/master/docs/slowtom.tgz
tar vxfz ./slowtom.tgz
</code></pre>
<h3>Install from local helm-chart:</h3>
<pre><code>helm install --debug ./slowtom --name slowtom -f ./slowtom/environments/initial.yaml
</code></pre>
<h3>Upgrade from local helm-chart:</h3>
<pre><code>helm upgrade --debug slowtom ./slowtom -f ./slowtom/environments/upgrade.yaml
</code></pre>
<h2>Docker Image</h2>
<h3><code>Dockerfile</code></h3>
<pre><code>FROM tomcat:8.5-jdk8-corretto
RUN mkdir /usr/local/tomcat/webapps/ROOT && \
echo '<html><head><title>Slow Tomcat</title></head><body><h1>Slow Tomcat Now Ready</h1></body></html>' >> /usr/local/tomcat/webapps/ROOT/index.html
RUN echo '#!/usr/bin/env bash' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo 'x=2' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo 'secs=$(($x * 60))' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo 'while [ $secs -gt 0 ]; do' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo ' >&2 echo -e "Blast off in $secs\033[0K\r"' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo ' sleep 1' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo ' : $((secs--))' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo 'done' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo '>&2 echo "slow cataline done. will now start real catalina"' >> /usr/local/tomcat/bin/slowcatalina.sh && \
echo 'exec catalina.sh run' >> /usr/local/tomcat/bin/slowcatalina.sh && \
chmod +x /usr/local/tomcat/bin/slowcatalina.sh
ENTRYPOINT ["/usr/local/tomcat/bin/slowcatalina.sh"]
</code></pre>
<h2>Helm Chart Content</h2>
<h3><code>slowtom/Chart.yaml</code></h3>
<pre><code>apiVersion: v1
description: slow-tomcat Helm chart for Kubernetes
name: slowtom
version: 1.1.2 # whatever
</code></pre>
<h3><code>slowtom/values.yaml</code></h3>
<pre><code># Do not use this file, but ones from environmments folder
</code></pre>
<h3><code>slowtom/environments/initial.yaml</code></h3>
<pre><code># Storefront
slowtom_sf:
name: "slowtom-sf"
hasHealthcheck: "true"
isResilient: "false"
replicaCount: 2
aspect_values:
- name: y_aspect
value: "storefront"
image:
repository: hqasem/slow-tomcat
pullPolicy: IfNotPresent
tag: 1
env:
- name: y_env
value: whatever
</code></pre>
<h3><code>slowtom/environments/upgrade.yaml</code></h3>
<pre><code># Storefront
slowtom_sf:
name: "slowtom-sf"
hasHealthcheck: "true"
isResilient: "false"
replicaCount: 2
aspect_values:
- name: y_aspect
value: "storefront"
image:
repository: hqasem/slow-tomcat
pullPolicy: IfNotPresent
tag: 2
env:
- name: y_env
value: whatever
</code></pre>
<h3><code>slowtom/templates/deployment.yaml</code></h3>
<pre><code>---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ .Values.slowtom_sf.name }}
labels:
chart: "{{ .Chart.Name | trunc 63 }}"
chartVersion: "{{ .Chart.Version | trunc 63 }}"
visualize: "true"
app: {{ .Values.slowtom_sf.name }}
spec:
replicas: {{ .Values.slowtom_sf.replicaCount }}
selector:
matchLabels:
app: {{ .Values.slowtom_sf.name }}
template:
metadata:
labels:
app: {{ .Values.slowtom_sf.name }}
visualize: "true"
spec:
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: {{ .Values.slowtom_sf.name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
command: ["/usr/local/tomcat/bin/slowcatalina.sh"]
args: ["whatever"]
env:
{{ toYaml .Values.env | indent 12 }}
{{ toYaml .Values.slowtom_sf.aspect_values | indent 12 }}
resources:
{{ toYaml .Values.resources | indent 12 }}
---
</code></pre>
<h3><code>slowtom/templates/service.yaml</code></h3>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: {{.Values.slowtom_sf.name}}
labels:
chart: "{{ .Chart.Name | trunc 63 }}"
chartVersion: "{{ .Chart.Version | trunc 63 }}"
app: {{.Values.slowtom_sf.name}}
visualize: "true"
hasHealthcheck: "{{ .Values.slowtom_sf.hasHealthcheck }}"
isResilient: "{{ .Values.slowtom_sf.isResilient }}"
spec:
type: NodePort
selector:
app: {{.Values.slowtom_sf.name}}
sessionAffinity: ClientIP
ports:
- protocol: TCP
port: 8080
targetPort: 8080
name: http
nodePort: 30901
---
</code></pre>
| h q | <p>Unlike <code>Deployment</code>, <code>StatefulSet</code> does not start a new pod before destroying the old one during a rolling update. Instead, the expectation is that you have multiple pods, and they will be replaced one-by-one. Since you only have 1 replica configured, it must destroy it first. Either increase your replica count to 2 or more, or switch to a <code>Deployment</code> template.</p>
<p><a href="https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#rolling-update" rel="nofollow noreferrer">https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#rolling-update</a></p>
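<p>For a quick test outside of Helm, you can bump the replica count directly; a sketch (the StatefulSet name is taken from the logs command above):</p>
<pre class="lang-sh prettyprint-override"><code>kubectl scale statefulset slowtom-sf --replicas=2
</code></pre>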
| superstator |
<p>I have a working kubernetes cluster where ingress and letsencrypt is working just fine when I use helm charts. I have a deployment not included in a chart that I want to expose using ingress with TLS. How can I do this with kubectl commands?</p>
<p>EDIT: I can manually create an ingress but I don't have a secret so HTTPS won't work. So my question is probably "How to create a secret with letsencrypt to use on a new ingress for an existing deployment"</p>
| JSantos | <p>Google provides a way to do this for their own managed certificates. The documentation for it is at <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/managed-certs</a>.</p>
| Andy Shinn |
<p>I'm trying to find a solution for the problem that seems like something very common.</p>
<ol>
<li>I have a k8s cluster ip service which exposes two ports: 8088 and 60004</li>
<li>I would like to expose these same ports on ALB and not use path based routing</li>
</ol>
<p>This works for exposing one service on 8088 port:</p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: myingress
namespace: myns
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/healthcheck-path: /ping
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/tags: Environment=dev,Team=test
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 8088}]'
spec:
rules:
- host: myhost
http:
paths:
- path: /*
backend:
serviceName: firstservice
servicePort: 8088
</code></pre>
<p>How can the same thing be achieved for both services using ONE ingress?</p>
<p>Thanks in advance.</p>
| Bakir Jusufbegovic | <p>Eventually, to solve this problem, I've used ALB ingress controller group feature, which is currently in alpha state: <a href="https://github.com/kubernetes-sigs/aws-alb-ingress-controller/issues/914" rel="noreferrer">https://github.com/kubernetes-sigs/aws-alb-ingress-controller/issues/914</a></p>
<p>This is how my ingress resource looks now:</p>
<pre><code>---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: myingress_1
namespace: myns
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/tags: Environment=dev,Team=test
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/group.name: mygroup
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 8088}]'
spec:
rules:
- host: <HOST>
http:
paths:
- path: /*
backend:
serviceName: myservice
servicePort: 8088
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: myingress_2
namespace: myns
annotations:
kubernetes.io/ingress.class: alb
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/tags: Environment=dev,Team=test
alb.ingress.kubernetes.io/target-type: ip
alb.ingress.kubernetes.io/group.name: mygroup
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 60004}]'
spec:
rules:
- host: <HOST>
http:
paths:
- path: /*
backend:
serviceName: myservice
servicePort: 60004
</code></pre>
<p>where the key thing is </p>
<pre><code>alb.ingress.kubernetes.io/group.name: mygroup
</code></pre>
<p>which connects these two ingress resources.</p>
<p>Therefore, I end up with the following:</p>
<ul>
<li>A service with multiple (two) ports in k8s, exposed with two separate ingress resources that both point to the same AWS ALB (because of the same group name)</li>
<li>On the AWS ALB side, I get one ALB with two ports exposed, 8088 and 60004, each of which points to the same k8s service but a different port on the same pod (this could easily be two different k8s services if that were needed)</li>
</ul>
| Bakir Jusufbegovic |
<p>I am new to the argo universe and was trying to set up Argo Workflows <a href="https://github.com/argoproj/argo-workflows/blob/master/docs/quick-start.md#install-argo-workflows" rel="nofollow noreferrer">https://github.com/argoproj/argo-workflows/blob/master/docs/quick-start.md#install-argo-workflows</a> .</p>
<p>I have installed the <code>argo</code> CLI from the page : <a href="https://github.com/argoproj/argo-workflows/releases/latest" rel="nofollow noreferrer">https://github.com/argoproj/argo-workflows/releases/latest</a> . I was trying it in my minikube setup and I have my kubectl already configured to the minikube cluster. I am able to hit argo commands without any issues after putting it in my local bin folder.</p>
<p>How does it work? Where does the argo CLI connect to operate?</p>
| Kishor Unnikrishnan | <p>The <code>argo</code> CLI <a href="https://github.com/argoproj/argo-workflows/blob/877d6569754be94f032e1c48d1f7226a83adfbec/cmd/argo/commands/get.go#L73-L74" rel="noreferrer">manages two API clients</a>. The first client connects to the <a href="https://argoproj.github.io/argo-workflows/rest-api/" rel="noreferrer">Argo Workflows API</a> server. The second connects to the Kubernetes API. Depending on what you're doing, the CLI might connect just to one API or the other.</p>
<p>To connect to the Kubernetes API, the CLI just uses your kube config.</p>
<p>To connect to the Argo server, the CLI first checks for an <code>ARGO_TOKEN</code> environment variable. If it's not available, the CLI <a href="https://github.com/argoproj/argo-workflows/blob/877d6569754be94f032e1c48d1f7226a83adfbec/cmd/argo/commands/client/conn.go#L91" rel="noreferrer">falls back to using the kube config</a>.</p>
<p><code>ARGO_TOKEN</code> is <a href="https://argoproj.github.io/argo-workflows/rest-api/" rel="noreferrer">only necessary when the Argo Server is configured to require client auth</a> and then only if you're doing things which require access to the Argo API instead of just the Kubernetes API.</p>
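<p>So, to point the CLI at an Argo server explicitly, something like this should work (a sketch; the server address and token are placeholders):</p>
<pre class="lang-sh prettyprint-override"><code>export ARGO_SERVER=argo-server.argo.svc:2746
export ARGO_TOKEN="Bearer <token>"  # only needed if the server requires client auth
argo list
</code></pre>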
| crenshaw-dev |
<p>How do I need to configure my ingress so that my Angular 7 app works?</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: myingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
rules:
- http:
paths:
- path:
backend:
serviceName: angular-service
servicePort: 80
- path: /test
backend:
serviceName: angular-service
servicePort: 80
</code></pre>
<p>Angular is hosted by nginx image:</p>
<pre><code>FROM nginx:alpine
COPY . /usr/share/nginx/html
</code></pre>
<p>And on Kubernetes:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
  name: test-service
spec:
  selector:
    app: test
  ports:
  - port: 80
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: test
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: test
        image: xxx.azurecr.io/test:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 80
</code></pre>
<p>On path / everything works. But on /test it doesn't work.
Output in console:</p>
<blockquote>
<p>Uncaught SyntaxError: Unexpected token < runtime.js:1 </p>
<p>Uncaught SyntaxError: Unexpected token < polyfills.js:1</p>
<p>Uncaught SyntaxError: Unexpected token < main.js:1</p>
</blockquote>
<p>That's why I changed this in angular.json:</p>
<pre><code>"baseHref" : "/test",
</code></pre>
<p>But now I get the same error on both locations. What am I doing wrong?</p>
<h2>edit details</h2>
<h3>ingress-controller (Version 0.25.0):</h3>
<p><a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml</a></p>
<h3>ingress-service (for azure):</h3>
<p><a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml" rel="nofollow noreferrer">https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/cloud-generic.yaml</a></p>
<h3>Test Procedure:</h3>
<p>The Application is built with </p>
<pre><code>"baseHref" : "",
</code></pre>
<p>When I run the application on my server everything works (by that baseHref is verified, too).</p>
<pre><code>$ sudo docker run --name test -p 80:80 test:v1
</code></pre>
<p>On Kubernetes the application works on location / (only if I use annotation nginx.ingress.kubernetes.io/rewrite-target: /).
If I try to enter /test I get an empty page.</p>
<h3>Logs:</h3>
<p>$ sudo kubectl logs app-589ff89cfb-9plfs</p>
<pre><code>10.xxx.x.xx - - [11/Jul/2019:18:16:13 +0000] "GET / HTTP/1.1" 200 574 "https://52.xxx.xx.xx/test" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" "93.xxx.xxx.xxx"
</code></pre>
<p>$ sudo kubectl logs -n ingress-nginx nginx-ingress-controller-6df4d8b446-6rq65</p>
<p>Location /</p>
<pre><code>93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:55:31 +0000] "GET / HTTP/2.0" 200 279 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 261 0.002 [default-app2-service-80] [] 10.xxx.x.48:80 574 0.004 200 95abb8e14b1dd95976cd44f23a2d829a
93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:55:31 +0000] "GET /runtime.js HTTP/2.0" 200 2565 "https://52.xxx.xx.xx/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 44 0.001 [default-app2-service-80] [] 10.xxx.x.49:80 9088 0.004 200 d0c2396f6955e82824b1dec60d43b4ef
93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:55:31 +0000] "GET /polyfills.js HTTP/2.0" 200 49116 "https://52.xxx.xx.xx/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 25 0.000 [default-app2-service-80] [] 10.xxx.x.48:80 242129 0.000 200 96b5d57f9baf00932818f850abdfecca
93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:55:31 +0000] "GET /styles.js HTTP/2.0" 200 5464 "https://52.xxx.xx.xx/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 23 0.009 [default-app2-service-80] [] 10.xxx.x.49:80 16955 0.008 200 c3a7f1f937227a04c9eec9e1eab107b3
93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:55:31 +0000] "GET /main.js HTTP/2.0" 200 3193 "https://52.xxx.xx.xx/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 21 0.019 [default-app2-service-80] [] 10.xxx.x.49:80 12440 0.016 200 c0e12c3eaec99212444cf916c7d6b27b
93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:55:31 +0000] "GET /runtime.js.map HTTP/2.0" 200 9220 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 24 0.004 [default-app2-service-80] [] 10.xxx.x.48:80 9220 0.008 200 f1a820a384ee9e7a61c74ebb8f3cbf68
93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:55:31 +0000] "GET /vendor.js HTTP/2.0" 200 643193 "https://52.xxx.xx.xx/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 23 0.130 [default-app2-service-80] [] 10.xxx.x.48:80 3391734 0.132 200 1cf47ed0d8a0e470a131dddb22e8fc48
93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:55:31 +0000] "GET /polyfills.js.map HTTP/2.0" 200 241031 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 26 0.001 [default-app2-service-80] [] 10.xxx.x.49:80 241031 0.000 200 75413e809cd9739dc0b9b300826dd107
93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:55:31 +0000] "GET /styles.js.map HTTP/2.0" 200 19626 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 23 0.000 [default-app2-service-80] [] 10.xxx.x.48:80 19626 0.004 200 1aa0865cbf07ffb1753d0a3eb630b4d7
93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:55:32 +0000] "GET /favicon.ico HTTP/2.0" 200 1471 "https://52.xxx.xx.xx/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 55 0.048 [default-app2-service-80] [] 10.xxx.x.49:80 5430 0.000 200 2c015813c697c61f3cc6f67bb3bf7f75
93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:55:32 +0000] "GET /vendor.js.map HTTP/2.0" 200 3493937 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 24 0.377 [default-app2-service-80] [] 10.xxx.x.49:80 3493937 0.380 200 7b41bbbecafc2fb037c934b5509de245
93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:55:32 +0000] "GET /main.js.map HTTP/2.0" 200 6410 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 22 0.169 [default-app2-service-80] [] 10.xxx.x.48:80 6410 0.104 200 cc23aa543c19ddc0f55b4a922cc05d04
</code></pre>
<p>Location /test</p>
<pre><code>93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:43:08 +0000] "GET /test/ HTTP/2.0" 200 274 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 294 0.000 [default-test-service-80] [] 10.xxx.x.xx:80 574 0.000 200 2b560857eba8dd1242e359d5eea0a84b
93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:43:08 +0000] "GET /test/runtime.js HTTP/2.0" 200 274 "https://52.xxx.xx.xx/test/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 53 0.000 [default-test-service-80] [] 10.xxx.x.49:80 574 0.000 200 2695a85077c64d40a5806fb53e2977e5
93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:43:08 +0000] "GET /test/polyfills.js HTTP/2.0" 200 274 "https://52.xxx.xx.xx/test/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 30 0.003 [default-test-service-80] [] 10.xxx.x.48:80 574 0.000 200 8bd4e421ee8f7f9db6b002ac40bf2025
93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:43:08 +0000] "GET /test/styles.js HTTP/2.0" 200 274 "https://52.xxx.xx.xx/test/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 28 0.001 [default-test-service-80] [] 10.xxx.x.49:80 574 0.000 200 db7f0cb93b90a41d623552694e5e74b6
93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:43:08 +0000] "GET /test/vendor.js HTTP/2.0" 200 274 "https://52.xxx.xx.xx/test/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 28 0.001 [default-test-service-80] [] 10.xxx.x.48:80 574 0.000 200 0e5eb8fc77a6fb94b87e64384ac083e0
93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:43:08 +0000] "GET /test/main.js HTTP/2.0" 200 274 "https://52.xxx.xx.xx/test/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 27 0.001 [default-test-service-80] [] 10.xxx.x.49:80 574 0.000 200 408aa3cbfda25f65cb607e1b1ce47566
93.xxx.xxx.xxx - [93.xxx.xxx.xxx] - - [12/Jul/2019:06:43:08 +0000] "GET /test/favicon.ico HTTP/2.0" 200 274 "https://52.xxx.xx.xx/test/" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36" 58 0.001 [default-test-service-80] [] 10.xxx.x.48:80 574 0.000 200 d491507ae073de55a480909b4fab0484
</code></pre>
| Nico Schuck | <p>Without knowing the version of nginx-ingress this is just a guess.</p>
<p>Per the documentation at <a href="https://kubernetes.github.io/ingress-nginx/examples/rewrite/#rewrite-target" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/examples/rewrite/#rewrite-target</a> is says:</p>
<blockquote>
<p>Starting in Version 0.22.0, ingress definitions using the annotation nginx.ingress.kubernetes.io/rewrite-target are not backwards compatible with previous versions. In Version 0.22.0 and beyond, any substrings within the request URI that need to be passed to the rewritten path must explicitly be defined in a capture group.</p>
</blockquote>
<p>This means that you need to explicitly pass the paths like:</p>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: myingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
nginx.ingress.kubernetes.io/use-regex: "true"
spec:
rules:
- http:
paths:
- path: /
backend:
serviceName: angular-service
servicePort: 80
- path: /test(/|$)(.*)
backend:
serviceName: angular-service
servicePort: 80
</code></pre>
| Andy Shinn |
<p>I have created an Autopilot cluster on GKE</p>
<p>I want to connect and manage it with <a href="https://github.com/kubernetes-client/python" rel="nofollow noreferrer">Python Kubernetes Client</a></p>
<p>I am able to get the kubeconfig of cluster</p>
<p>I am able to access the cluster using kubectl on my local system using the command</p>
<blockquote>
<p>gcloud container clusters get-credentials</p>
</blockquote>
<p>When I try to connect with the Kubernetes Python client library, I get the following error:</p>
<pre><code> File "lib/python3.7/site-packages/urllib3/util/retry.py", line 399, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='xxx.xx.xxx.xxx', port=443): Max
retries exceeded with url: /apis/extensions/v1beta1/namespaces/default/ingresses (Caused by
SSLError(SSLError(136, '[X509] no certificate or crl found (_ssl.c:4140)')))
</code></pre>
<p>Here is the code I am using:</p>
<pre><code>os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "863924b908c7.json"
credentials, project = google.auth.default(
scopes=['https://www.googleapis.com/auth/cloud-platform', ])
credentials.refresh(google.auth.transport.requests.Request())
cluster_manager = ClusterManagerClient(credentials=credentials)
# cluster = cluster_manager.get_cluster(project)
config.load_kube_config('config.yaml')
</code></pre>
| HERAwais | <p>Here's what I figured out. I think it's a good solution because it prevents man in the middle attacks (uses SSL) unlike other python snippets in the wild.</p>
<pre><code>from google.cloud.container_v1 import ClusterManagerClient
from kubernetes import client
from tempfile import NamedTemporaryFile
import base64
import google.auth
credentials, project = google.auth.default(scopes=['https://www.googleapis.com/auth/cloud-platform',])
credentials.refresh(google.auth.transport.requests.Request())
cluster_manager = ClusterManagerClient(credentials=credentials)
cluster = cluster_manager.get_cluster(name=f"projects/{gcp_project_id}/locations/{cluster_zone_or_region}/clusters/{cluster_id}")
with NamedTemporaryFile(delete=False) as ca_cert:
ca_cert.write(base64.b64decode(cluster.master_auth.cluster_ca_certificate))
config = client.Configuration()
config.host = f'https://{cluster.endpoint}:443'
config.verify_ssl = True
config.api_key = {"authorization": "Bearer " + credentials.token}
config.username = credentials._service_account_email
config.ssl_ca_cert = ca_cert.name
client.Configuration.set_default(config)
# make calls with client
</code></pre>
<blockquote>
<p>On GKE, SSL Validation works on the IP automatically. If you are in an environment where it doesn't work for some reason, you can bind the IP to a hostname list this:</p>
<pre><code>from python_hosts.hosts import (Hosts, HostsEntry)
hosts = Hosts()
hosts.add([HostsEntry(entry_type='ipv4', address=cluster.endpoint, names=['kubernetes'])])
hosts.write()
config.host = "https://kubernetes"
</code></pre>
</blockquote>
| E Brake |
<p>Since Kubernetes does not implement a dependency between Containers, I was wondering whether there is an elegant way of checking whether another Container in the same Pod is ready.</p>
<p>I would assume the Downward API is necessary.
Maybe it could be done by embedding <code>kubectl</code> inside the container, but is there an easier way?</p>
| abergmeier | <p>For now I ended up using a simple file existence check:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
spec:
template:
spec:
containers:
...
- name: former
readinessProbe:
exec:
command:
- /bin/sh
- "-c"
- /bin/sh /check_readiness.sh && touch /foo/ready
volumeMounts:
- name: shared-data
mountPath: /foo
...
- name: latter
command:
- /bin/sh
- "-c"
- while [ ! -f /foo/ready ]; do sleep 1; done; /bar.sh
volumeMounts:
- name: shared-data
mountPath: /foo
readOnly: true
...
volumes:
- name: shared-data
emptyDir: {}
</code></pre>
| abergmeier |
<p>I am running a small 3 node test kubernetes cluster (using kubeadm) running on Ubuntu Server 22.04, with Flannel as the network fabric. I also have a separate gitlab private server, with container registry set up and working.</p>
<p>The problem I am running into is I have a simple test deployment, and when I apply the deployment yaml, it fails to pull the image from the gitlab private server.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: platform-deployment
spec:
replicas: 1
selector:
matchLabels:
app: platform-service
template:
metadata:
labels:
app: platform-service
spec:
containers:
- name: platform-service
image: registry.examle.com/demo/platform-service:latest
</code></pre>
<h4>Ubuntu Server: /etc/hosts (the relevant line)</h4>
<pre class="lang-bash prettyprint-override"><code>192.168.1.30 registry.example.com
</code></pre>
<h4>The Error</h4>
<pre class="lang-bash prettyprint-override"><code>Failed to pull image "registry.example.com/demo/platform-service:latest":
rpc error: code = Unknown desc = failed to pull and unpack image
"registry.example.com/deni/platform-service:latest": failed to resolve reference
"registry.example.com/demo/platform-service:latest": failed to do request: Head
"https://registry.example.com/v2/demo/platform-service/manifests/latest": dial tcp
xxx.xxx.xxx.xxx:443: i/o timeout
</code></pre>
<p>The 'xxx.xxx.xxx.xxx' corresponds to my external network, for which a domain name exists in DNS; however, all of my internal networks are set up to resolve to the internal representation, and 'registry.example.com' is a stand-in for my own domain.</p>
<p>Also to note:</p>
<pre class="lang-bash prettyprint-override"><code>docker pull registry.example.com/demo/platform-service:latest
</code></pre>
<p>From the command line of the server, this works perfectly fine; it is just not working from the Kubernetes deploy YAML.</p>
<h4>The problem</h4>
<p>While the network and the hosts file on the server are configured correctly, the image pull fails because, when I apply the deployment, it does not use the correct IP (the one configured in hosts) but rather a public IP belonging to a different server. The reason for the timeout is that the public-facing server is not set up the same way.</p>
<p>When I run <code>kubectl apply -f platform-service.yaml</code>, why does it not respect the hosts file of the server, and is there a way to configure hosts inside Kubernetes?</p>
<p>(If this problem is not clear, I apologize; I am quite new and still learning terminology, which may be why Google is not helping me with this problem.)</p>
<p>The closest S/O I could find is:</p>
<p><a href="https://stackoverflow.com/questions/62940403/kubernetes-not-able-pull-image-from-private-registry-having-private-domain-point">Kubernetes not able pull image from private registry having private domain pointed via /etc/hosts</a></p>
<p>(SO answer #1): <code>hostAliases</code> (this is for the pod itself, not for pulling the image); also, mine is installed through apt/package manager rather than snap. The rest of the answer suggests changing the distribution, and I would rather keep my current setup than change it.</p>
<h4>Update</h4>
<p>Adding hosts to CoreDNS is not working either
(<a href="https://stackoverflow.com/questions/65283827/how-to-change-host-name-resolve-like-host-file-in-coredns">How to change host name resolve like host file in coredns</a>):</p>
<pre><code>kubectl -n kube-system edit configmap/coredns
</code></pre>
<pre><code>...
.:53 {
errors
health {
lameduck 5s
}
ready
hosts custom.hosts registry.example.com {
192.168.1.30 registry.example.com
fallthrough
}
kubernetes cluster.local in-addr.arpa ip6.arpa {
pods insecure
fallthrough in-addr.arpa ip6.arpa
ttl 30
}
prometheus :9153
forward . /etc/resolv.conf {
max_concurrent 1000
}
cache 30
loop
reload
loadbalance
}
...
</code></pre>
<p>I deleted the CoreDNS pods (so they are recreated),</p>
<p>and still the docker pull on the deployment fails with the external IP address instead of the internal address.</p>
| Aaron Murray | <p>After going through many different solutions and lots of research and testing, the answer turned out to be very simple.</p>
<h3>Solution in my case</h3>
<p>The /etc/hosts file <em>MUST</em> contain the host for the registry (and possibly the entry for the gitlab instance as well) on <em>EVERY</em> node of the cluster including the master node.</p>
<pre><code>192.168.1.30 registry.example.com
192.168.1.30 gitlab.example.com # necessary in my case, not sure if required
</code></pre>
<p>Once I included that on each of the 2 slaves, it attempted to pull the image, and failed with credential issues (which I was expecting to see once the hosts issue was resolved). From there I was able to add the credentials and now the image pulls fine from the private registry rather than the public facing registry.</p>
<h4>Bonus: Fix for credentials error connecting to private registry (not part of the original question, but part of the setup process for connecting)</h4>
<p>After fixing the /etc/hosts issue, you will probably need to set up 'regcred' credentials to access the private registry; the Kubernetes documentation provides the steps for that part:</p>
<p><a href="https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/</a></p>
| Aaron Murray |
<p>I have broken down my workflow scenario into 2 separate <code>WorkflowTemplates</code>. <code>outer-template</code> would just define the steps, and <code>inner-template</code> would hold the <code>job</code> definition that spins up the desired container, with all the other fancy stuff. Now when I submit a request (<code>request.yaml</code>), it is supposed to pass the parameter <code>message</code> down to the outer and inner templates, but it fails with this error:</p>
<pre><code> hello-59jg8-394098346:
Boundary ID: hello-59jg8-1953291600
Children:
hello-59jg8-534805352
Display Name: [0]
Finished At: 2021-06-15T00:41:45Z
Id: hello-59jg8-394098346
Message: child 'hello-59jg8[0].init-step[0].step-1' errored
Name: hello-59jg8[0].init-step[0]
Phase: Error
Started At: 2021-06-15T00:41:45Z
Template Name: HelloWorld
Template Scope: namespaced/outer-template
Type: StepGroup
hello-59jg8-534805352:
Boundary ID: hello-59jg8-1953291600
Display Name: step-1
Finished At: 2021-06-15T00:41:45Z
Id: hello-59jg8-534805352
Message: inputs.parameters.message was not supplied
Name: hello-59jg8[0].init-step[0].step-1
Phase: Error
Started At: 2021-06-15T00:41:45Z
Template Ref:
Name: inner-template
Template: InnerJob
Template Scope: namespaced/outer-template
Type: Skipped
Phase: Failed
Started At: 2021-06-15T00:41:45Z
Stored Templates:
</code></pre>
<p>Below 2 are <code>WorkflowTemplate</code>s and third one is the request.</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
name: inner-template
namespace: cali
labels:
workflows.argoproj.io/controller-instanceid: cali
spec:
templates:
- name: InnerJob
metadata:
annotations:
sidecar.istio.io/inject: "false"
inputs:
parameters:
- name: message
- name: stepName
value: ""
resource:
action: create
successCondition: status.succeeded > 0
failureCondition: status.failed > 0
manifest: |
apiVersion: batch/v1
kind: Job
metadata:
generateName: hello-pod-
annotations:
sidecar.istio.io/inject: "false"
spec:
template:
metadata:
annotations:
sidecar.istio.io/inject: "false"
spec:
containers:
- name: hellopods
image: centos:7
command: [sh, -c]
args: ["echo ${message}; sleep 5; echo done; exit 0"]
env:
- name: message
value: "{{inputs.parameters.message}}"
- name: stepName
value: "{{inputs.parameters.stepName}}"
restartPolicy: Never
outputs:
parameters:
- name: job-name
valueFrom:
jsonPath: '{.metadata.name}'
- name: job-obj
valueFrom:
jqFilter: '.'
</code></pre>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
name: outer-template
namespace: cali
labels:
workflows.argoproj.io/controller-instanceid: cali
spec:
entrypoint: HelloWorld
templates:
- name: HelloWorld
inputs:
parameters:
- name: message
steps:
- - name: step-1
templateRef:
name: inner-template
template: InnerJob
arguments:
parameters:
- name: message
- name: stepName
value: "this is step 1"
- - name: step-2
templateRef:
name: inner-template
template: InnerJob
arguments:
parameters:
- name: message
- name: stepName
value: "this is step 2"
</code></pre>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: hello-
namespace: cali
labels:
workflows.argoproj.io/controller-instanceid: cali
spec:
entrypoint: HelloWorld
serviceAccountName: argo
templates:
- name: HelloWorld
steps:
- - arguments:
parameters:
- name: message
value: "Hello World....."
name: init-step
templateRef:
name: outer-template
template: HelloWorld
</code></pre>
| colossal | <p>When passing an argument to a template in a step, you have to explicitly set the argument value.</p>
<p>In the <code>outer-template</code> WorkflowTemplate, you invoke <code>inner-template</code> twice. In each case you have half-specified the <code>message</code> argument. You have to also set the <code>value</code> for each parameter.</p>
<p>You should set <code>value: "{{inputs.parameters.message}}"</code> in <code>step-1</code> and <code>step-2</code>. That will pull the <code>message</code> input parameter from <code>outer-template.HelloWorld</code>.</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
name: outer-template
namespace: cali
labels:
workflows.argoproj.io/controller-instanceid: cali
spec:
entrypoint: HelloWorld
templates:
- name: HelloWorld
inputs:
parameters:
- name: message
steps:
- - name: step-1
templateRef:
name: inner-template
template: InnerJob
arguments:
parameters:
- name: message
value: "{{inputs.parameters.message}}"
- name: stepName
value: "this is step 1"
- - name: step-2
templateRef:
name: inner-template
template: InnerJob
arguments:
parameters:
- name: message
value: "{{inputs.parameters.message}}"
- name: stepName
value: "this is step 2"
</code></pre>
| crenshaw-dev |
<p>I have restarted the rancher host a few times while configuring rancher.</p>
<p>Nothing was lost, even though containers had been started and stopped several times during these reboots.</p>
<p>I had to stop and run the container again to set a specific IP for the UI, so I could use the other IP addresses available in the host as HostPorts for containers.</p>
<p>This is the command I had to execute again:</p>
<pre><code>docker run -d --restart=unless-stopped -p 1.2.3.4:80:80 -p 1.2.3.4:443:443 rancher/rancher
</code></pre>
<p>After running this, rancher started up as a clean installation, asking me for password, to setup a cluster, and do everything from scratch, even though I see a lot of containers running.</p>
<p>I tried rerunning the command that rancher showed on the first installation (including the old token and ca-checksum). Still nothing.</p>
<p>Why is this happening? Is there a way to restore the data, or should I do the configuration and container creation again?</p>
<p>What is the proper way of cleaning up, if I need to start from scratch? docker rm all containers and do the setup again?</p>
<p><strong>UPDATE</strong></p>
<p>I just found some information from another member in a related question, because this problem happened following a suggestion from another user.</p>
<p>Apparently there is an upgrade process that needs to be followed, but I am missing what needs to be done exactly. I can see my old, stopped container here: <a href="https://snag.gy/h2sSpH.jpg" rel="nofollow noreferrer">https://snag.gy/h2sSpH.jpg</a></p>
<p>I believe I need to do something with that container so the new rancher container becomes online with the previous data.</p>
<p>Should I be running this?</p>
<p><code>docker run -d --volumes-from stoic_newton --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher:latest</code></p>
| Miguel Mesquita Alfaiate | <p>Ok, I can confirm that this process works.</p>
<p>I have followed the guide here: <a href="https://rancher.com/docs/rancher/v2.x/en/upgrades/upgrades/single-node-upgrade/#completing-the-upgrade" rel="nofollow noreferrer">https://rancher.com/docs/rancher/v2.x/en/upgrades/upgrades/single-node-upgrade/#completing-the-upgrade</a></p>
<p>I just had to stop the new rancher container (the one lacking the data), copy the data from the original docker container to create a backup, and then restart the new container with the volumes from the data container which was created in the process.</p>
<p>I could probably have launched the new rancher container with the volumes from the old rancher container, but I preferred playing it safe and following every step of the guide, and as a plus I ended up with a backup :)</p>
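<p>For anyone following along, a sketch of the commands from that guide (the new container name and version tags are illustrative; <code>stoic_newton</code> was my old container):</p>
<pre class="lang-bash prettyprint-override"><code># stop the new, empty rancher container
docker stop <new_rancher_container>

# create a data container from the old rancher container (this doubles as the backup)
docker create --volumes-from stoic_newton --name rancher-data rancher/rancher:<old_version>

# start a fresh container reusing that data
docker run -d --volumes-from rancher-data --restart=unless-stopped \
  -p 1.2.3.4:80:80 -p 1.2.3.4:443:443 rancher/rancher:latest
</code></pre>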
| Miguel Mesquita Alfaiate |
<p>How can I reserve resources for a namespace?</p>
<p>Example: I want to guarantee that one namespace cannot allocate all the resources on the cluster.</p>
<p>Example: Having namespaces A and B and a cluster that can hold 100 pods, how can I make sure that namespace A can schedule at least 10 pods, even if we have 200 users trying to create pods in namespace B? (Typical resource segregation.)</p>
<p>I would expect something like in Yarn where I can say leave 10% of the cluster resources to queue A. </p>
| Jorge Machado | <p>Namespace resource quota won't do this for you. Try to explore: <a href="http://yunikorn.apache.org/" rel="nofollow noreferrer">http://yunikorn.apache.org/</a>. More specifically the min/max capacity model: <a href="http://yunikorn.apache.org/docs/next/get_started/core_features#hierarchy-resource-queues" rel="nofollow noreferrer">http://yunikorn.apache.org/docs/next/get_started/core_features#hierarchy-resource-queues</a>.</p>
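<p>To give an idea, a minimal sketch of YuniKorn's hierarchical queue configuration with guaranteed and max resources per queue (illustrative only; the exact syntax may vary between versions):</p>
<pre class="lang-yaml prettyprint-override"><code>partitions:
  - name: default
    queues:
      - name: root
        queues:
          - name: queue-a
            resources:
              guaranteed:
                memory: 10G
                vcore: 10
              max:
                memory: 50G
                vcore: 50
</code></pre>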
| Weiwei Yang |
<p>I have an <code>outer-template</code> which calls <code>inner-template</code> twice, since there are 2 steps. The inner template is a simple container which writes some text to <code>/command_output/result.txt</code>. The workflow output attempts to read it through:</p>
<pre><code>- name: previous_step_output
valueFrom:
path: /command_output/result.txt
</code></pre>
<p>This does not appear to be working for some reason. Based on the documentation I also created <code>volumes</code> and <code>volumeMounts</code>.
The error is:</p>
<pre><code> Service Account Name: argo
Templates:
Arguments:
Inputs:
Metadata:
Name: HelloWorld
Outputs:
Steps:
[map[arguments:map[parameters:[map[name:message value:Hello World.....]]] name:init-step templateRef:map[name:outer-template template:HelloWorld]]]
Status:
Conditions:
Status: True
Type: Completed
Finished At: 2021-06-17T23:50:37Z
Message: runtime error: invalid memory address or nil pointer dereference
Nodes:
hello-css4z:
</code></pre>
<p>Need some advice on what is missing. Attaching inner-template, outer-template and the request.yaml.</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
name: inner-template
namespace: cali
labels:
workflows.argoproj.io/controller-instanceid: cali
spec:
templates:
- name: InnerJob
metadata:
annotations:
sidecar.istio.io/inject: "false"
inputs:
parameters:
- name: message
- name: previous_step_output
value: ""
resource:
action: create
successCondition: status.succeeded > 0
failureCondition: status.failed > 0
manifest: |
apiVersion: batch/v1
kind: Job
metadata:
namespace: default
generateName: hellojob-
annotations:
sidecar.istio.io/inject: "false"
spec:
template:
metadata:
annotations:
sidecar.istio.io/inject: "false"
spec:
volumes:
- emptyDir: {}
name: cali-mount
containers:
- name: export-tenant
image: centos:7
command: [sh, -c]
args: ["echo 'some result' > /command_output/result.txt; cat /command_output/result.txt; sleep 5; echo done; exit 0"]
env:
- name: message
value: "{{inputs.parameters.message}}"
- name: previous_step_output
value: "{{inputs.parameters.previous_step_output}}"
volumeMounts:
- mountPath: /command_output
name: cali-mount
restartPolicy: Never
outputs:
parameters:
- name: job-name
valueFrom:
jsonPath: '{.metadata.name}'
- name: job-obj
valueFrom:
jqFilter: '.'
- name: previous_step_output
valueFrom:
path: /command_output/result.txt
</code></pre>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
name: outer-template
namespace: cali
labels:
workflows.argoproj.io/controller-instanceid: cali
spec:
entrypoint: HelloWorld
templates:
- name: HelloWorld
inputs:
parameters:
- name: message
steps:
- - name: step-1
templateRef:
name: inner-template
template: InnerJob
arguments:
parameters:
- name: message
value: "{{inputs.parameters.message}}"
- - name: step-2
templateRef:
name: inner-template
template: InnerJob
arguments:
parameters:
- name: message
value: "{{inputs.parameters.message}}"
- name: previous_step_output
value: "{{steps.step-1.outputs.parameters.previous_step_output}}"
</code></pre>
<p>request payload:</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
generateName: hello-
namespace: cali
labels:
workflows.argoproj.io/controller-instanceid: cali
spec:
entrypoint: HelloWorld
serviceAccountName: argo
templates:
- name: HelloWorld
steps:
- - name: init-step
arguments:
parameters:
- name: message
value: "Hello World....."
templateRef:
name: outer-template
template: HelloWorld
</code></pre>
| colossal | <p>As far as I can tell, <a href="https://github.com/argoproj/argo-workflows/tree/master/examples#kubernetes-resources" rel="nofollow noreferrer">Argo Workflows resource templates</a> do not support reading files as output parameters.</p>
<p>It looks like the only built-in method of communicating from a <code>job</code> resource to the instantiating workflow is via the JSON representation of the <code>job</code> resource itself.</p>
<p>I would recommend converting the Job to a normal container template in the workflow. Then you could use all the typical communication methods (reading directly from stdout, reading a file into an output param, reading an output artifact, etc.).</p>
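<p>For example, a sketch of what the converted template could look like (the file path is illustrative):</p>
<pre class="lang-yaml prettyprint-override"><code>- name: InnerJob
  inputs:
    parameters:
      - name: message
  container:
    image: centos:7
    command: [sh, -c]
    args: ["echo '{{inputs.parameters.message}}' > /tmp/result.txt; cat /tmp/result.txt"]
  outputs:
    parameters:
      - name: previous_step_output
        valueFrom:
          path: /tmp/result.txt
</code></pre>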
| crenshaw-dev |
<p>I'm trying Kubernetes in a Azure environment (AKS).</p>
<p>I have an nginx ingress deployed and exposed to internet through a public ip and an azure load balancer. It is used to expose public/front services.</p>
<p>My issue is I would like to deploy 'back' services, not exposed to internet. My first guess would be to deploy a second ingress and expose it on the internal load balancer, am I right ?</p>
<p>But what if my front services need to consume the back services? Can I consume them over the second ingress (to use the nginx configuration, ssl offload, etc.) without doing a round trip to the internal load balancer? What would the DNS configuration be in that case?</p>
| luke77 | <p>Ingress controllers are made for external traffic. For in-cluster communication it is best to use <a href="https://kubernetes.io/docs/concepts/services-networking/service/" rel="nofollow noreferrer">Kubernetes Services</a>, which will configure the DNS inside the cluster. With a Service you'll be able to call your backend service without doing a roundtrip to an external resource; the load balancing will be done natively inside the k8s cluster. Nothing prevents you from deploying an nginx pod or injecting it as a sidecar in your backend service pod and using it as a reverse proxy, but do you really need the nginx configuration and mutual TLS for in-cluster communication? If you really need mutual TLS, you'd better look at something like <a href="https://istio.io/" rel="nofollow noreferrer">Istio</a>, but it is probably overkill for your use case.</p>
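<p>For example, a plain ClusterIP Service in front of the back pods is enough (a sketch; names and ports are illustrative). The front services can then reach it via cluster DNS, e.g. <code>back-service.my-namespace.svc.cluster.local</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
  name: back-service
  namespace: my-namespace
spec:
  selector:
    app: back
  ports:
    - port: 443
      targetPort: 8443
</code></pre>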
| Jean-Philippe Bond |
<p>I have been trying to watch some resources in my K8s cluster and, after reading some blogs about watch vs informers, I've decided to go with informers.</p>
<p>I came across this example of how to use one: <a href="https://github.com/Netflix-Skunkworks/kubernetes-client-java/blob/master/examples/src/main/java/io/kubernetes/client/examples/InformerExample.java" rel="nofollow noreferrer">https://github.com/Netflix-Skunkworks/kubernetes-client-java/blob/master/examples/src/main/java/io/kubernetes/client/examples/InformerExample.java</a></p>
<p>In the example, I see that the SharedIndexInformer is defined as such:</p>
<pre><code> factory.sharedIndexInformerFor(
(CallGeneratorParams params) -> {
return coreV1Api.listNodeCall(
null,
null,
null,
null,
null,
params.resourceVersion,
params.timeoutSeconds,
params.watch,
null,
null);
},
V1Node.class,
V1NodeList.class);
</code></pre>
<p>Based on my understanding of how lambdas are written, this basically says that we're creating a <code>sharedIndexInformer</code> from the factory by passing it a param Call (returned by coreV1Api.listNodeCall).</p>
<p>The Call object is created by this dynamic method which takes in a <code>CallGeneratorParams</code> argument.</p>
<p>I do not seem to understand how and where this argument is passed in, in the case of a SharedInformerFactory. It's very evident that some fields within the <code>params</code> variable are being used in building the <code>listNodeCall</code>, but where and how is this object constructed?</p>
| kambamsu | <p>Well it's a ride down a rabbit hole.</p>
<blockquote>
<p>I suggest to keep the diagrams <a href="https://github.com/huweihuang/kubernetes-notes/blob/master/code-analysis/kube-controller-manager/sharedIndexInformer.md" rel="nofollow noreferrer">from the official docs</a> open in separate tab/window in order to appreciate the whole picture better.</p>
</blockquote>
<p>In order to understand this, you would have to look at the implementation of the <code>SharedInformerFactory</code>, especially the <a href="https://github.com/kubernetes-client/java/blob/332c3a43bbd96b6814947f1f8543453b824ca253/util/src/main/java/io/kubernetes/client/informer/SharedInformerFactory.java#L115" rel="nofollow noreferrer">sharedIndexInformerFor</a> call.</p>
<p>Notice how the lambda is just passed further down to construct a new <a href="https://github.com/kubernetes-client/java/blob/332c3a43bbd96b6814947f1f8543453b824ca253/util/src/main/java/io/kubernetes/client/informer/ListerWatcher.java" rel="nofollow noreferrer">ListWatcher</a> instance <a href="https://github.com/kubernetes-client/java/blob/332c3a43bbd96b6814947f1f8543453b824ca253/util/src/main/java/io/kubernetes/client/informer/SharedInformerFactory.java#L194" rel="nofollow noreferrer">(method at line 194)</a>, which is then passed into a new <a href="https://github.com/kubernetes-client/java/blob/332c3a43bbd96b6814947f1f8543453b824ca253/util/src/main/java/io/kubernetes/client/informer/impl/DefaultSharedIndexInformer.java" rel="nofollow noreferrer">DefaultSharedIndexInformer</a> instance <a href="https://github.com/kubernetes-client/java/blob/332c3a43bbd96b6814947f1f8543453b824ca253/util/src/main/java/io/kubernetes/client/informer/SharedInformerFactory.java#L144" rel="nofollow noreferrer">(statement at line 144)</a>.</p>
<p>So now we have an instance of a <code>SharedIndexInformer</code> that passes the <code>ListerWatcher</code> yet further down to its <a href="https://github.com/kubernetes-client/java/blob/332c3a43bbd96b6814947f1f8543453b824ca253/util/src/main/java/io/kubernetes/client/informer/cache/Controller.java" rel="nofollow noreferrer">Controller</a> <a href="https://github.com/kubernetes-client/java/blob/332c3a43bbd96b6814947f1f8543453b824ca253/util/src/main/java/io/kubernetes/client/informer/impl/DefaultSharedIndexInformer.java#L99" rel="nofollow noreferrer">(constructor line 99)</a>. Now the <code>Controller</code> is started when the <code>Informer</code> itself runs (see the <code>run()</code> method).</p>
<p>To make it even more complex, the <code>Controller</code> uses a <code>Reflector</code> to do the actual list-and-watch work. A <code>Reflector</code>, according to <a href="https://github.com/kubernetes/kubernetes/blob/353f0a5eabe4bd8d31bb67275ee4beeb4655be3f/staging/src/k8s.io/client-go/tools/cache/reflector.go" rel="nofollow noreferrer">reflector.go</a></p>
<blockquote>
<p>Reflector watches a specified resource and causes all changes to be reflected in the given store.</p>
</blockquote>
<p>So its job is to call <code>list</code> and <code>watch</code> until it is told to stop. So when the <code>Controller</code> starts, it also schedules its <code>Reflector</code> to <a href="https://github.com/kubernetes-client/java/blob/332c3a43bbd96b6814947f1f8543453b824ca253/util/src/main/java/io/kubernetes/client/informer/cache/Controller.java#L116" rel="nofollow noreferrer">run periodically</a></p>
<p>At last. When the <code>Reflector</code> runs, it calls the <code>list</code> method, which .. <em>drum roll</em> .. <a href="https://github.com/kubernetes-client/java/blob/332c3a43bbd96b6814947f1f8543453b824ca253/util/src/main/java/io/kubernetes/client/informer/SharedInformerFactory.java#L205" rel="nofollow noreferrer">executes the lambda</a> you were asking about. And the <code>param</code> variable in the lambda is .. <em>another drum roll</em> .. <a href="https://github.com/kubernetes-client/java/blob/332c3a43bbd96b6814947f1f8543453b824ca253/util/src/main/java/io/kubernetes/client/informer/cache/ReflectorRunnable.java#L89" rel="nofollow noreferrer">created in the Reflector here</a></p>
<p>Pretty neat, wouldn't you say?</p>
<p>Let me know, if you need further help/clarification.</p>
<p>Cheers</p>
| iska |
<p>I am working with an Argo workflow.</p>
<p>There is a DAG step in my <code>entrypoint</code> which follows several normal steps. One of these steps does a <code>sys.stdout</code>. Once inside of the DAG step, I want some of the tasks to reference the results from the <code>sys.stdout</code>.</p>
<p>I know if we wanted to reference the <code>sys.stdout</code> when the workflow just goes from one step to the next (without the DAG), we can do <code>{{steps.step-name.outputs.result}}</code>. The same does not work inside of a DAG task though.</p>
<p>How can I reference the sys.stdout inside of a DAG task so I can use it with <code>withParam</code>?</p>
<p><strong>Edit:</strong></p>
<p>The workflow looks like the following:</p>
<pre><code> templates:
- name: the-entrypoint
steps:
- - name: step01
template: first-step
- - name: step02
template: second-step
- - name: step03
template: third-step
- - name: step04-the-dag-step
template: fourth-step
</code></pre>
<p>In general, if <code>third-step</code> does a <code>sys.stdout</code>, we can reference it by <code>{{steps.step03.outputs.result}}</code> in <code>fourth-step</code>. However, in this case <code>fourth-step</code> is a DAG, and if one of the DAG tasks wants to use the <code>sys.stdout</code>, calling <code>{{steps.step03.outputs.result}}</code> as an argument/parameter inside of DAG tasks throws up an error.</p>
<p>The question is then how can one correctly reference the <code>sys.stdout</code> generated by <code>third-step</code> inside <code>fourth-step</code> DAG tasks?</p>
| TCR | <h1>A bit of background about template outputs</h1>
<p>Argo Workflows supports a number of different <a href="https://github.com/argoproj/argo-workflows/blob/master/docs/workflow-concepts.md#template-types" rel="noreferrer"><em>types of templates</em></a>.</p>
<p>Each type of template supports different types of reference <em>within the template</em>.</p>
<p><strong>Within a <code>steps</code> template</strong>, you may access the output parameters of other steps with <code>steps.step-name.outputs.parameters.param-name</code> (for named parameters) or <code>steps.step-name.outputs.result</code> (for the stdout of a <code>script</code> or <code>container</code> template).</p>
<p>Example (<a href="https://github.com/argoproj/argo-workflows/blob/master/examples/output-parameter.yaml" rel="noreferrer">see full Workflow</a>):</p>
<pre class="lang-yaml prettyprint-override"><code> - name: output-parameter
steps:
- - name: generate-parameter
template: whalesay
- - name: consume-parameter
template: print-message
arguments:
parameters:
- name: message
value: "{{steps.generate-parameter.outputs.parameters.hello-param}}"
</code></pre>
<p><strong>Within a <code>dag</code> template</strong>, you may access the output of various tasks using a similar syntax, just using <code>tasks.</code> instead of <code>steps.</code>.</p>
<p>Example (<a href="https://github.com/argoproj/argo-workflows/blob/master/examples/dag-conditional-parameters.yaml" rel="noreferrer">see full Workflow</a>):</p>
<pre class="lang-yaml prettyprint-override"><code> - name: main
dag:
tasks:
- name: flip-coin
template: flip-coin
- name: heads
depends: flip-coin
template: heads
when: "{{tasks.flip-coin.outputs.result}} == heads"
- name: tails
depends: flip-coin
template: tails
when: "{{tasks.flip-coin.outputs.result}} == tails"
</code></pre>
<p><strong>Within a <code>container</code> or <code>script</code> template</strong>, you may access <em>only the inputs of that template</em>*. You may not directly access the outputs of steps or tasks from steps or tasks templates from a container or script template.</p>
<h1>Referencing a step output from a DAG</h1>
<p>As mentioned above, a DAG template cannot directly reference step outputs from a <code>steps</code> template. But a step within a <code>steps</code> template can <em>pass a step output to a DAG template</em>.</p>
<p>In your example, it would look something like this:</p>
<pre class="lang-yaml prettyprint-override"><code> templates:
- name: the-entrypoint
steps:
- - name: step01
template: first-step
- - name: step02
template: second-step
- - name: step03
template: third-step
- - name: step04-the-dag-step
template: fourth-step
arguments:
parameters:
- name: some-param
value: "{{steps.step03.outputs.result}}"
- name: fourth-step
inputs:
parameters:
- name: some-param
dag:
tasks:
# use the input parameter in the fourth-step template with "{{inputs.parameters.some-param}}"
</code></pre>
<h1>tl;dr</h1>
<p><code>steps.</code> and <code>tasks.</code> variables are meant to be <em>referenced within a single steps- or tasks-template</em>, but they can be explicitly <em>passed between templates</em>. If you need to use the output of a step in a DAG, directly pass that output as an argument where the DAG is invoked.</p>
<p>In your case, the DAG template is invoked as the last of four steps, so that is where you will pass the argument.</p>
<p><sub>* Okay, you also have access to <a href="https://github.com/argoproj/argo-workflows/blob/master/docs/variables.md" rel="noreferrer">various other variables</a> from within a <code>script</code> or <code>container</code> template, but you don't have access to variables that are scoped as internal variables within another template.</sub></p>
| crenshaw-dev |
<p>I am trying to configure a python flask application running on port 5000 in kubernetes. I have created the deployment, service and ingress. It is not working via the domain name which is added to the hosts file, but the python application works when I try it with port forwarding.</p>
<p>I have tried changing the configurations a lot, but nothing worked.</p>
<p>Please let me know your suggestions.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: web-app
namespace: production
labels:
app: web-app
platform: python
spec:
replicas:
selector:
matchLabels:
app: web-app
template:
metadata:
labels:
app: web-app
spec:
containers:
- name: web-app
image: XXXXXX/XXXXXX:XXXXXX
imagePullPolicy: Always
ports:
- containerPort: 5000
</code></pre>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: web-app
namespace: production
spec:
selector:
app: web-app
ports:
- protocol: TCP
port: 5000
targetPort: 5000
selector:
run: web-app
</code></pre>
<pre><code>apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: name-virtual-host-ingress
namespace: production
spec:
rules:
- host: first.bar.com
http:
paths:
- backend:
serviceName: web-app
servicePort: 5000
</code></pre>
<p>kubectl get all -n production</p>
<pre><code>NAME READY STATUS RESTARTS AGE
pod/web-app-559df5fc4-67nbn 1/1 Running 0 24m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/web-app ClusterIP 10.100.122.15 <none> 5000/TCP 24m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/web-app 1 1 1 1 24m
NAME DESIRED CURRENT READY AGE
replicaset.apps/web-app-559df5fc4 1 1 1 24m
</code></pre>
<p>kubectl get ing -n production</p>
<pre><code>NAME HOSTS ADDRESS PORTS AGE
name-virtual-host-ingress first.bar.com 80 32s
</code></pre>
<p>kubectl get ep web-app -n production</p>
<pre><code>NAME ENDPOINTS AGE
web-app <none> 23m
</code></pre>
| Sreejith | <p>You need to run a Ingress Controller. The Prerequisites part of <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#prerequisites" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/services-networking/ingress/#prerequisites</a> says:</p>
<blockquote>
<p>You must have an ingress controller to satisfy an Ingress. Only creating an Ingress resource has no effect.</p>
</blockquote>
<p>One example would be <a href="https://kubernetes.github.io/ingress-nginx/deploy/" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/deploy/</a>. Be sure to run the <strong>Mandatory Command</strong> and the one that pertains to your provider. You can then get the service to see the assigned IP:</p>
<pre><code>kubectl get -n ingress-nginx svc/ingress-nginx
</code></pre>
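<p>Once a controller is installed, a quick sanity check before retesting (a sketch; the namespace depends on the install method):</p>
<pre><code># the controller pods must be Running for Ingress resources to take effect
kubectl get pods -n ingress-nginx

# the ingress should now also show an address
kubectl get ing -n production
</code></pre>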
| Andy Shinn |
<p>I have a module calling another module. I pass a kubernetes provider from main to the 1st module, which then passes it to the other module. The provider passed to the 1st module works OK, but the provider passed down from the 1st module to the other doesn't work.</p>
<p>main.tf</p>
<pre><code>data "google_container_cluster" "gke" {
depends_on = [module.gke]
name = var.gke_cluster_name
project = var.project_id
location = var.gke_zone
}
provider "kubernetes" {
alias = "my-kuber"
host = "https://${data.google_container_cluster.gke.endpoint}"
token = data.google_client_config.provider.access_token
cluster_ca_certificate = base64decode(data.google_container_cluster.gke.master_auth[0].cluster_ca_certificate)
load_config_file = false
}
module "first-module" {
source = "./modules/first-module"
  providers = {
    kubernetes.my-kuber = kubernetes.my-kuber
  }
.
.
.
}
</code></pre>
<p>first-module.tf</p>
<pre><code>provider "kubernetes" {
alias = "my-kuber"
}
module "sub-module" {
source = "./modules/second-module"
  providers = {
    kubernetes.my-kuber = kubernetes.my-kuber
  }
.
.
.
}
</code></pre>
<p>second-module.tf</p>
<pre><code>provider "kubernetes" {
alias = "my-kuber"
}
resource "kubernetes_namespace" "ns" {
provider = kubernetes.my-kuber
metadata {
name = var.namespace
}
}
</code></pre>
<p>Here the kubernetes.my-kuber passed down to <code>second_module.tf</code> doesn't have the right cluster credentials, and it fails.</p>
<p>Am I missing something? Is passing providers down to sub-modules supported?</p>
<p>Thanks in advance</p>
| RandomQuests | <p>You don't need to "pass" your provider to your module. The <code>providers</code> attribute in your module is only needed if you have multiple kubernetes providers, which does not seem to be your case. Only define the provider in the root module on which you are executing <code>terraform plan</code>; you don't need the provider block in your sub-modules. Terraform is able to determine which provider to use based on the resource type: <code>kubernetes_namespace</code> means that the provider is kubernetes.</p>
<p>Something like this should work fine :</p>
<p><strong>main.tf</strong></p>
<pre><code>data "google_container_cluster" "gke" {
depends_on = [module.gke]
name = var.gke_cluster_name
project = var.project_id
location = var.gke_zone
}
provider "kubernetes" {
host = "https://${data.google_container_cluster.gke.endpoint}"
token = data.google_client_config.provider.access_token
cluster_ca_certificate = base64decode(data.google_container_cluster.gke.master_auth[0].cluster_ca_certificate)
load_config_file = false
}
module "first-module" {
source = "./modules/first-module"
.
.
.
}
</code></pre>
<p><strong>first-module.tf</strong></p>
<pre><code>module "sub-module" {
source = "./modules/second-module"
.
.
.
}
</code></pre>
<p><strong>second-module.tf</strong></p>
<pre><code>resource "kubernetes_namespace" "ns" {
metadata {
name = var.namespace
}
}
</code></pre>
| Jean-Philippe Bond |
<p>I recently started studying kubernetes and ansible.</p>
<p>I have the following kubernetes command in order to do rollback</p>
<blockquote>
<p>kubectl patch deployment -n my-namespace mydeployment --type='json' -p='[
{"op": "replace", "path": "/spec/template/spec/containers/0/image",
"value":"127.0.0.1:5050/mydeployment:image_version"} ]</p>
</blockquote>
<p>Is there any way to introduce a json array in a kubernetes ansible command and patch my deployment?</p>
<p>What I tried is the following in my playbook:</p>
<pre><code>- name: Demo
k8s:
api_version: apps/v1
kind: Deployment
state: present
namespace: '{{ meta.namespace }}'
name: my-operator
definition: |
spec:
template:
spec:
containers:
my-operator:
image: {{ installed_software_image }}
register: output
</code></pre>
<p>Because <code>containers</code> is an array, the patch command fails.
I get the following error:</p>
<blockquote>
<p>NewReplicaSetAvailable\\\\\\",\\\\\\"message\\\\\\":\\\\\\"ReplicaSet
\\\\\\\\\\\\\\"my-operator-66ff64c9f4\\\\\\\\\\\\\\"
has successfully progressed.\\\\\\"}]}}\\":
v1.Deployment.Spec: v1.DeploymentSpec.Template:
v1.PodTemplateSpec.Spec: v1.PodSpec.Containers: []v1.Container: decode
slice: expect [ or n, but found {, error found in #10 byte of
...|tainers\\":{\\"new-opera|..., bigger context
...|t\\":\\"2021-03-24T22:26:02Z\\"}},\\"spec\\":{\\"containers\\":{\\"my-operator\\":\\"image:\\\\\\"27.0.0.1:5050/my-ope|...","field":"patch"}]},"code":422}\n'", "reason": "Unprocessable Entity", "status": 422}</p>
</blockquote>
<p>Is there any way to debug or print the command that is actually sent to the kubernetes server?</p>
| getsoubl | <p>The error indicates that “containers:” must be an array: the decoder expected <code>[</code> but found <code>{</code>.</p>
<p>Try adding “- ” in front of “my-operator:” to indicate that it's the first item in the array.</p>
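<p>For reference, a corrected sketch of the task (a sketch only: besides the list dash, each container entry uses explicit <code>name</code> and <code>image</code> keys, and the definition is passed as structured YAML rather than a string block):</p>
<pre class="lang-yaml prettyprint-override"><code>- name: Demo
  k8s:
    api_version: apps/v1
    kind: Deployment
    state: present
    namespace: '{{ meta.namespace }}'
    name: my-operator
    definition:
      spec:
        template:
          spec:
            containers:
              - name: my-operator
                image: '{{ installed_software_image }}'
  register: output
</code></pre>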
| csantanapr |
<p>When setting the following annotations:</p>
<pre><code>nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "ALPHA"
nginx.ingress.kubernetes.io/session-cookie-path: /
</code></pre>
<p>Where do they end up in nginx.conf?</p>
<p>I'm comparing nginx.conf before and after by using a difftool but the config is identical.</p>
<p>If I e.g. add a:</p>
<pre><code>nginx.ingress.kubernetes.io/rewrite-target: /$1
</code></pre>
<p>nginx.conf gets updated.</p>
<p>Results in:</p>
<pre><code>rewrite "(?i)/myapp(/|$)(.*)" /$2 break;
</code></pre>
| PussInBoots | <p>The short answer is that these settings exist in memory of the <a href="https://github.com/openresty/lua-nginx-module" rel="nofollow noreferrer">lua nginx module</a> used by nginx-ingress.</p>
<p>The longer answer and explanation of how this works is in the documentation at <a href="https://kubernetes.github.io/ingress-nginx/how-it-works" rel="nofollow noreferrer">https://kubernetes.github.io/ingress-nginx/how-it-works</a>. Particularly:</p>
<blockquote>
<p>Though it is important to note that we don't reload Nginx on changes that impact only an upstream configuration (i.e Endpoints change when you deploy your app). We use <a href="https://github.com/openresty/lua-nginx-module" rel="nofollow noreferrer">https://github.com/openresty/lua-nginx-module</a> to achieve this. Check below to learn more about how it's done.</p>
</blockquote>
<p>The referenced below section then mentions:</p>
<blockquote>
<p>On every endpoint change the controller fetches endpoints from all the services it sees and generates corresponding Backend objects. It then sends these objects to a Lua handler running inside Nginx. The Lua code in turn stores those backends in a shared memory zone. Then for every request Lua code running in balancer_by_lua context detects what endpoints it should choose upstream peer from and applies the configured load balancing algorithm to choose the peer.</p>
</blockquote>
<p>The backend object in question has the session and cookie information. The code for receiving this is at <a href="https://github.com/kubernetes/ingress-nginx/blob/57a0542fa356c49a6afb762cddf0c7dbf0b156dd/rootfs/etc/nginx/lua/balancer/sticky.lua#L151-L166" rel="nofollow noreferrer">https://github.com/kubernetes/ingress-nginx/blob/57a0542fa356c49a6afb762cddf0c7dbf0b156dd/rootfs/etc/nginx/lua/balancer/sticky.lua#L151-L166</a>. In particular, there is this line in the sync function:</p>
<pre><code>ngx_log(INFO, string_format("[%s] nodes have changed for backend %s", self.name, backend.name))
</code></pre>
<p>Which indicates you should see a log entry for the change in the nginx log when making a change like this to the backends.</p>
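<p>You can watch for that entry while changing a backend to confirm the in-memory update (a sketch; the exact label selector depends on how the controller was installed):</p>
<pre class="lang-bash prettyprint-override"><code># follow the controller logs and filter for the Lua sync message
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx -f | grep "nodes have changed"
</code></pre>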
| Andy Shinn |
<p>I'm running a kubernetes cluster and one microservice is constantly crashing with exitCode 134. I already changed the resource memory limit to 6Gi</p>
<pre><code>resources: {
limits: {
memory: "6Gi"
}
}
</code></pre>
<p>but the pod never goes above 1.6/1.7Gi. </p>
<p>What may be missing?</p>
| Miguel Morujão | <p>It's not about the Kubernetes memory limit. The default JavaScript heap limit is about 1.76GB when running in node (V8 engine).</p>
<p>The container command in the Deployment/Pod should be changed to something like <code>node --max-old-space-size=6144 index.js</code> so the heap can actually grow into the container limit.</p>
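<p>For illustration, a sketch of the relevant container spec (the image name and entrypoint are assumptions):</p>
<pre class="lang-yaml prettyprint-override"><code>containers:
  - name: my-service
    image: my-node-app:latest   # hypothetical image
    command: ["node", "--max-old-space-size=6144", "index.js"]
    resources:
      limits:
        memory: "6Gi"
</code></pre>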
| silverfox |
<p>I have <strong>2</strong> Deployments are as follow:</p>
<ol>
<li><strong>Orient DB Deployment.</strong></li>
<li><strong>Web Service Deployment.</strong></li>
</ol>
<p>Initially, To access Orient DB, Web service fetch the Orient DB username and password which are stored in <strong>Azure Key Vault</strong>.</p>
<p>To provide extra security, I created a network policy which only allows pods matching <strong>namespaceSelector "application: production"</strong> and <strong>podSelector "application: production"</strong>.
The network policy applied is as follows:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: nmsp
namespace: production
spec:
podSelector:
matchLabels:
application: production
ingress:
- from:
- namespaceSelector:
matchLabels:
application: production
podSelector:
matchLabels:
application: production
egress:
- to:
- namespaceSelector:
matchLabels:
application: production
podSelector:
matchLabels:
application: production
</code></pre>
<p>But after applying the network policy, the web service is unable to connect with Orient DB because it fails to get the username and password from <strong>Azure Key Vault</strong>.
It gives this error:</p>
<pre><code>Unhandled Rejection at: FetchError: request to https://in-keyvault-kv.vault.azure.net/secrets?api-version=7.1 failed, reason: getaddrinfo EAI_AGAIN in-aks-keyvault-kv.vault.azure.net
at ClientRequest. (/usr/src/app/node_modules/node-fetch/lib/index.js:1461:11)
at ClientRequest.emit (events.js:314:20)
at TLSSocket.socketErrorListener (_http_client.js:428:9)
at TLSSocket.emit (events.js:314:20)
at emitErrorNT (internal/streams/destroy.js:92:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
at processTicksAndRejections (internal/process/task_queues.js:84:21) {
type: 'system',
errno: 'EAI_AGAIN',
code: 'EAI_AGAIN'
}
</code></pre>
<p><strong>So how can I access this key vault for the username and password with the network policy enabled, and connect with the Orient DB service?</strong></p>
<p>If anyone knows, please help me with this.
Thank you.</p>
| Kaivalya Dambalkar | <p>You can either add an egress rule that enables port 443 (and the IP range of the Key Vault service, if you want to restrict the traffic), or use something like the <a href="https://github.com/Azure/secrets-store-csi-driver-provider-azure" rel="nofollow noreferrer">Azure Key Vault provider for Secret Store CSI driver</a> to get secret contents stored in an Azure Key Vault instance and use the Secret Store CSI driver interface to mount them into Kubernetes pods.</p>
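<p>For illustration, a sketch of what an additional egress policy could look like (names are illustrative; a DNS rule is included because the <code>EAI_AGAIN</code> error suggests name resolution is also being blocked):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-keyvault-egress
  namespace: production
spec:
  podSelector:
    matchLabels:
      application: production
  policyTypes:
    - Egress
  egress:
    # allow DNS lookups (kube-dns / CoreDNS)
    - ports:
        - protocol: UDP
          port: 53
    # allow HTTPS out, e.g. to *.vault.azure.net
    - ports:
        - protocol: TCP
          port: 443
</code></pre>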
| Jean-Philippe Bond |
<p>I am trying to route outbound traffic from an application in my GKE cluster through a static IP, as the destination server requires whitelisting IP for access. I have been able to do this using the terraformed nat gateway, but this impacts all traffic from the cluster.</p>
<p>Following the istio guide on the site, I've been able to route traffic through an egressgateway pod (I can see it in the gateway logs), but I need the gateway to have a static ip, and there is no override in the helm values for egressgateway static ip.</p>
<p>How can I assign a static ip to the egressgateway without having to patch anything or hack it after installing istio?</p>
| Blender Fox | <p>I think of your problem as having three steps. First, to fix the outgoing traffic to a particular pod. The istio egress gateway does this for you. Second and third, to fix the pod to a particular IP address.</p>
<p>If you use GCP's version of floating IP addresses, then you can assign a known IP to one of the hosts in your cluster. Then, use node affinity on the egress-gateway to schedule it to the particular host, <a href="https://kubernetes.io/docs/concepts/configuration/assign-pod-node/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/configuration/assign-pod-node/</a></p>
<p>I've edited the egress deployment in one of my test clusters, adding a node affinity rule like the following:</p>
<pre><code>requiredDuringSchedulingIgnoredDuringExecution:
  nodeSelectorTerms:
  - matchExpressions:
    - key: beta.kubernetes.io/arch
      operator: In
      values:
      - amd64
      - ppc64le
      - s390x
    - key: kubernetes.io/hostname
      operator: In
      values:
      - worker-2720002
</code></pre>
<p>to pin it by the hostname label, but you'll probably want to choose and apply a new label to the node when you assign it a floating IP. In my test, the pod is moved to the specified node, and my outgoing egress traffic goes with it.</p>
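<p>If you go the label route, it's something like this (the label key is just an example), then matching on that key in the affinity block instead of the hostname:</p>
<pre><code># label the node that holds the floating IP
kubectl label node worker-2720002 egress-gateway=true
</code></pre>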
| fraznen |
<p>I'm currently trying to wrap my head around learning Go, some details of the kubernetes API I haven't used before and the kubernetes api framework for Go at the same time, and would appreciate your help in understanding the grammar of that framework and why people use it anyways.</p>
<p>Honestly I'm not sure why to use a framework in the first place if it contains the same information as the REST endpoint. Wouldn't it make more sense to just call the API directly via an <code>http</code> library?</p>
<p>And here's one example (taken from <a href="https://github.com/coreos/etcd-operator/blob/master/cmd/operator/main.go#L171" rel="noreferrer">some real code</a>):</p>
<pre><code>pod, err := kubecli.CoreV1().Pods(namespace).Get(name, metav1.GetOptions{})
</code></pre>
<p>What I find bothersome is that I have to <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/" rel="noreferrer">look up everything in the API docs</a> and then I additionally need to figure out that <code>/v1/</code> translates to <code>CoreV1()</code>. And I'm not even sure where I could look that up. Also, the whole block <code>metav1.GetOptions{}</code> seems completely unnecessary; which part of an HTTP request does it represent?</p>
<p>I hope I could make clear what the confusion is and hope for your help in clearing it up.</p>
<h2>Edit:</h2>
<p>Here's also an example, generated from the new operator-framework which sadly doesn't make it much better:</p>
<pre><code> return &v1.Pod{
TypeMeta: metav1.TypeMeta{
Kind: "Pod",
APIVersion: "v1",
},
ObjectMeta: metav1.ObjectMeta{
Name: "busy-box",
Namespace: cr.Namespace,
OwnerReferences: []metav1.OwnerReference{
*metav1.NewControllerRef(cr, schema.GroupVersionKind{
Group: v1alpha1.SchemeGroupVersion.Group,
Version: v1alpha1.SchemeGroupVersion.Version,
Kind: "Memcached",
}),
},
Labels: labels,
},
Spec: v1.PodSpec{
Containers: []v1.Container{
{
Name: "busybox",
Image: "busybox",
Command: []string{"sleep", "3600"},
},
},
},
}
</code></pre>
<p>The <a href="https://v1-9.docs.kubernetes.io/docs/reference/generated/kubernetes-api/v1.9/#pod-v1-core" rel="noreferrer">API docs</a> don't know anything about this <code>TypeMeta</code> object. And the second element is called <code>ObjectMeta:</code> instead of <code>metadata</code>. I mean, I'm not a magician. How should I know this.</p>
| erikbstack | <p>I'm a bit late, but here is my 2 cents.</p>
<h1>Why to use <code>client-go</code> instead of <code>http</code> library</h1>
<p>There are serval pros with <code>client-go</code>.</p>
<ol>
<li><p>Kubernetes resource is defined as <strong>strongly-typed class</strong>, means less misspelled debugging and easy to refactor.</p></li>
<li><p>When we manipulate some resources, It <strong>authenticates with cluster automatically</strong> (<a href="https://github.com/kubernetes/client-go/tree/master/examples" rel="nofollow noreferrer" title="docs">doc</a>), what it only needs is a valid config. And we need not to know how exactly the authentication is done.</p></li>
<li><p>It has multiple versions <strong>compatible</strong> with different Kubernetes version. It make our code align with specify kubernetes version much easier, without knowing every detail of API changes.</p></li>
</ol>
<h1>How do I know which class and method should be called</h1>
<p>In <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/" rel="nofollow noreferrer">API Reference</a>, each resource has the latest Group and Version tag.
For example, <a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#pod-v1-core" rel="nofollow noreferrer">Pod</a> is group <code>core</code>, version <code>v1</code>, kind <code>Pod</code> in v1.10. </p>
<p><a href="https://godoc.org/k8s.io/client-go/kubernetes" rel="nofollow noreferrer">GoDoc</a> listed all properties and links to detail explanation for every class like <a href="https://godoc.org/k8s.io/api/core/v1#Pod" rel="nofollow noreferrer">Pod</a>.</p>
<p>So the pod list can be found by calling <a href="https://godoc.org/k8s.io/client-go/kubernetes#Clientset.CoreV1" rel="nofollow noreferrer"><code>CoreV1()</code></a>, then <a href="https://godoc.org/k8s.io/client-go/kubernetes/typed/core/v1#PodsGetter" rel="nofollow noreferrer"><code>Pods(namespace string)</code></a>, then <a href="https://godoc.org/k8s.io/client-go/kubernetes/typed/core/v1#PodInterface" rel="nofollow noreferrer"><code>List(opts meta_v1.ListOptions)</code></a>.</p>
| silverfox |
<p>I want to migrate Mule applications deployed on Mule standalone (on-Premise) to Anypoint Runtime Fabric (RTF) Self managed Kubernetes on AWS, but I could not find any document on this.</p>
<p>Any ideas or any document available on this please share it.</p>
<p>Thanks in advance</p>
| anonymous | <p>Mule applications run exactly the same on-prem, on <a href="https://docs.mulesoft.com/runtime-manager/cloudhub" rel="nofollow noreferrer">CloudHub</a> or in <a href="https://docs.mulesoft.com/runtime-fabric/1.8/" rel="nofollow noreferrer">Anypoint Runtime Fabric</a>. It is only if your applications make assumptions about their environment that you are going to need to make adjustments. For example any access to the filesystem (reading a file from some directory) or some network access that is not replicated to the Kubernetes cluster. A common mistake is when developers use Windows as the development environment and are not aware that the execution in a container environment will be different. You may not be aware of those assumptions. Just test the application and see if there are any issues. It is possible it will run fine.</p>
<p>The one exception is if the applications share configurations and/or libraries through domains. Since applications in Runtime Fabric are self isolated, domains are not supported. You need to include the configurations into each separate applications. For example you can not have an HTTP Listener config where several applications share the same TCP Port to listen to incoming requests. That should be replaced by using <a href="https://docs.mulesoft.com/runtime-fabric/1.8/enable-inbound-traffic-self" rel="nofollow noreferrer">Runtime Fabric inbound configurations</a>.</p>
<p>About the deployment, when you deploy to a new deployment model, it is considered a completely new application, with no relationship to the previous one. There is no "migration" of deployments. You can deploy using Runtime Manager or Maven. See the <a href="https://docs.mulesoft.com/runtime-fabric/1.8/deploy-index" rel="nofollow noreferrer">documentation</a>. Note that the documentation states that to deploy with Maven you <a href="https://docs.mulesoft.com/runtime-fabric/1.8/deploy-maven-4.x#prerequisites" rel="nofollow noreferrer">first must publish</a> the application to Exchange.</p>
| aled |
<p>I know that with Azure AKS, the master components are fully managed by the service. But I'm a little confused when it comes to picking the node pools. I understand that there are two kinds of pools, system and user, where the user node pools host my application pods. I read in the official documentation that <strong>System node pools serve the primary purpose of hosting critical system pods such as CoreDNS and tunnelfront.</strong> And I'm aware that we can rely on system nodes alone to create and run our kubernetes cluster.</p>
<p>My question here: do they mean by <strong>system node</strong> the <strong>MASTER node</strong>? If so, why then do we have the option not to create user nodes (worker nodes, by analogy)? Because, as we know, in an on-prem kubernetes solution we cannot create a kubernetes cluster with master nodes only.</p>
<p><a href="https://i.stack.imgur.com/IHNyo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IHNyo.png" alt="enter image description here" /></a></p>
<p>I'll appreciate any help</p>
| am fs | <p>System node pools in AKS do not contain master nodes. Master nodes in AKS are 100% managed by Azure and are outside your VNet. A system node pool contains worker nodes to which AKS automatically assigns the label <code>kubernetes.azure.com/mode: system</code>; that's about it. AKS then uses that label to deploy critical pods like <code>tunnelfront</code>, which is used to create a secure communication channel from your nodes to the control plane. You need at least 1 system node pool per cluster, and they have the <a href="https://learn.microsoft.com/en-us/azure/aks/use-system-pools#system-and-user-node-pools" rel="nofollow noreferrer">following restrictions</a>:</p>
<ul>
<li>System pools osType must be Linux.</li>
<li>System pools must contain at least one node, and user node pools may contain zero or more nodes.</li>
<li>System node pools require a VM SKU of at least 2 vCPUs and 4GB memory. But burstable-VM(B series) is not recommended.</li>
<li>System node pools must support at least 30 pods as described by the minimum and maximum value formula for pods.</li>
</ul>
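<p>For example (a sketch; the cluster and resource group names are placeholders), a dedicated system node pool can be added with the Azure CLI:</p>
<pre><code>az aks nodepool add \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name systempool \
  --node-count 3 \
  --mode System
</code></pre>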
| Jean-Philippe Bond |
<p>Can anyone share a yaml file for creating a kafka cluster with two kafka brokers and a zookeeper cluster with 3 servers? I'm new to kubernetes.</p>
| Radha | <p>Take a look at <a href="https://github.com/Yolean/kubernetes-kafka" rel="nofollow noreferrer">https://github.com/Yolean/kubernetes-kafka</a>. Make sure the broker memory limit is 2 GB or above.</p>
<p>Maintaining a reliable kafka cluster in kubernetes is still a challenge, good luck.</p>
| silverfox |
<p>I would like to know if there is an optimal approach for setting memory limits for Kubernetes containers, especially for applications running java.</p>
<p>For Java applications, we have to set a heap value in conjunction with Kubernetes resources. It's like we're forced to guess at a memory limit for the Kubernetes microservice.</p>
<p>To be more clear,</p>
<ul>
<li>in java the heap memory can be limited to the memory limits defined in the container, but how do we arrive at a specific limit value?</li>
<li>If we don't set up limits for the container, then the java heap is sized from the underlying node's memory rather than the container limits, so it can grow up to the node's max memory, which can stress the other pods running on that node.</li>
<li>If we don't set up a large enough memory limit on the container, then we can see containers getting killed with OOM errors.</li>
</ul>
<p>The possible solutions, I can think of is</p>
<ol>
<li>Monitoring the microservice for some period of time and based on the utilization, choosing the limits</li>
<li>Implementing some load testing mechanism and based on the observation setting the limits</li>
</ol>
<p>Other than the above, I would like to hear if there is any other approach anyone follows for setting memory limits for Kubernetes containers.
Has anyone encountered this before?</p>
| Bala krishna | <p>Yes, I have encountered the issue multiple times. You definitely want to keep the memory limit on the Kubernetes container to avoid noisy-neighbour problems. The possible solutions you have mentioned are right. Monitoring and load testing are a must to arrive at the number.</p>
<p>Along with these, I used profiling of the Java processes to see how GC is triggered and whether the memory usage stays flat or grows as load increases. Profiling is a very powerful tool that can also provide insights into suboptimal usage of data structures.</p>
<p><strong>What to profile</strong></p>
<p>While doing the Java profiling, you need to check</p>
<ul>
<li>What's the Eden and old-gen usage</li>
<li>How often full GC runs; memory utilisation will rise and then drop after a full GC. See the <a href="https://dzone.com/articles/interesting-garbage-collection-patterns" rel="nofollow noreferrer">GC pattern</a></li>
<li>How many objects are getting created</li>
<li>CPU usage (it will increase during a full GC)</li>
</ul>
<p><strong>How to profile Java application</strong></p>
<p>Here are a few good resources</p>
<ul>
<li><a href="https://www.baeldung.com/java-profilers#:%7E:text=A%20Java%20Profiler%20is%20a,thread%20executions%2C%20and%20garbage%20collections" rel="nofollow noreferrer">https://www.baeldung.com/java-profilers#:~:text=A%20Java%20Profiler%20is%20a,thread%20executions%2C%20and%20garbage%20collections</a>.</li>
<li><a href="https://medium.com/platform-engineer/guide-to-java-profilers-e344ce0339e0" rel="nofollow noreferrer">https://medium.com/platform-engineer/guide-to-java-profilers-e344ce0339e0</a></li>
</ul>
<p><strong>How to Profile Kubernetes Application with Java</strong></p>
<ul>
<li><a href="https://medium.com/swlh/introducing-kubectl-flame-effortless-profiling-on-kubernetes-4b80fc181852" rel="nofollow noreferrer">https://medium.com/swlh/introducing-kubectl-flame-effortless-profiling-on-kubernetes-4b80fc181852</a></li>
<li><a href="https://www.youtube.com/watch?v=vHTWdkCUAoI" rel="nofollow noreferrer">https://www.youtube.com/watch?v=vHTWdkCUAoI</a></li>
</ul>
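<p>On the original question of tying the heap to the container limit: rather than hard-coding <code>-Xmx</code>, one practical pattern (a sketch; the flag assumes a container-aware JVM, i.e. 8u191+ or 10+) is to size the heap as a percentage of the container limit, and then tune the limit itself from the monitoring and load-test numbers:</p>
<pre class="lang-yaml prettyprint-override"><code>containers:
  - name: java-app
    image: my-java-app:latest   # hypothetical image
    env:
      # the JVM reads JAVA_TOOL_OPTIONS automatically and sizes the heap
      # relative to the container's cgroup memory limit
      - name: JAVA_TOOL_OPTIONS
        value: "-XX:MaxRAMPercentage=75.0"
    resources:
      requests:
        memory: "2Gi"
      limits:
        memory: "2Gi"
</code></pre>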
| Avishek Bhattacharya |
<p>I would like to access multiple remote registries to pull images.
In the k8s <a href="https://kubernetes.io/docs/concepts/containers/images/" rel="nofollow noreferrer">documentation</a> they say:</p>
<blockquote>
<p>(If you need access to multiple registries, you can create one secret
for each registry. Kubelet will merge any imagePullSecrets into a
single virtual .docker/config.json)</p>
</blockquote>
<p>and so the POD definition should be something like this:</p>
<pre><code>apiVersion: v1
kind: Pod
spec:
containers:
- name: ...
imagePullSecrets:
- name: secret1
- name: secret2
- ....
- name: secretN
</code></pre>
<p>Now I am not sure how K8S will pick the right secret for each image. Will all secrets be tried one by one each time? How will K8S handle the failed attempts, and could a specific number of unauthorized retries lead to some lock state in k8s or the docker registries?</p>
<p>Thanks</p>
| Geis | <p>You can use the following script to add two registry authentications in one secret:</p>
<pre><code>#!/bin/bash

# credentials for the first registry
u1="user_1_here"
p1="password_1_here"
auth1=$(echo -n "$u1:$p1" | base64 -w0)

# credentials for the second registry
u2="user_2_here"
p2="password_2_here"
auth2=$(echo -n "$u2:$p2" | base64 -w0)

# build one docker config holding both auth entries; the keys under
# "auths" must be the registry hostnames, and kubelet uses the entry
# whose hostname matches the image being pulled
cat <<EOF > docker_config.json
{
    "auths": {
        "repo1_name_here": {
            "auth": "$auth1"
        },
        "repo2_name_here": {
            "auth": "$auth2"
        }
    }
}
EOF

# base64-encode the whole config and store it in a single
# kubernetes.io/dockerconfigjson secret
base64 -w0 docker_config.json > docker_config_b64.json
cat <<EOF | kubectl apply -f -
apiVersion: v1
type: kubernetes.io/dockerconfigjson
kind: Secret
data:
  .dockerconfigjson: $(cat docker_config_b64.json)
metadata:
  name: specify_secret_name_here
  namespace: specify_namespace_here
EOF
</code></pre>
| Sameer Naik |
<p>I am confused about an elementary network concept in k8s; can someone kindly explain this to me please? Thank you!</p>
<p>as described <a href="https://github.com/bmuschko/ckad-crash-course/blob/master/exercises/31-networkpolicy/instructions.md" rel="nofollow noreferrer">here</a>:</p>
<blockquote>
<p>All ingress Pod-to-Pod communication has been denied across all namespaces.
You want to allow the Pod busybox in namespace k1 to communicate with Pod nginx in namespace k2.
You'll create a network policy to achieve that.</p>
</blockquote>
<p>I created two pods in k1 and k2 separately in a KIND cluster, and I didn't create any network policy. Per the exercise, pods in k1 should not be allowed to talk to pods in k2, so why am I seeing the wget succeed between the two pods here?</p>
<pre><code>$k get ns k1 k2
NAME STATUS AGE
k1 Active 10m
k2 Active 10m
$k get pod -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
k1 busybox 1/1 Running 0 11m 10.244.0.5 t1-control-plane <none> <none>
k2 nginx 1/1 Running 0 11m 10.244.0.6 t1-control-plane <none> <none>
$k get NetworkPolicy -A
No resources found
$k exec -it busybox -n k1 -- wget --timeout=5 10.244.0.6:80
Connecting to 10.244.0.6:80 (10.244.0.6:80)
saving to 'index.html'
index.html 100% |********************************| 615 0:00:00 ETA
'index.html' saved
</code></pre>
| sqr | <p>The setup.yaml for that exercise should create a NetworkPolicy;
you also need to install Cilium to make that setup effective before applying the solution.</p>
<blockquote>
<p>NOTE: Without a network policy controller, network policies won't have
any effect. You need to configure a network overlay solution that
provides this controller. You'll have to go through some extra steps
to install and enable the network provider Cilium. Without adhering to
the proper prerequisites, network policies won't have any effect. You
can find installation guidance in the file cilium-setup.md. If you do
not already have a cluster, you can create one by using minikube or
you can use the O'Reilly interactive lab "Creating a Network Policy".</p>
</blockquote>
<p><a href="https://github.com/bmuschko/ckad-crash-course/blob/master/exercises/31-networkpolicy/cilium-setup.md" rel="nofollow noreferrer">https://github.com/bmuschko/ckad-crash-course/blob/master/exercises/31-networkpolicy/cilium-setup.md</a></p>
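<p>For reference, the kind of deny-all ingress policy the exercise's setup assumes looks like this (a minimal sketch, applied per namespace):</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: k2
spec:
  podSelector: {}
  policyTypes:
  - Ingress
</code></pre>
<p>Without a controller such as Cilium enforcing it, this object exists but has no effect, which is why the wget in the question still succeeds.</p>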
| emaniacs |
<p>I am struggling to get my nginx ingress (on AWS EKS) working with path rules and TLS.</p>
<p>The ingress is from
<a href="https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/aws/deploy.yaml" rel="nofollow noreferrer">here</a></p>
<p>A snippet from the Ingress looks like:</p>
<pre><code>spec:
tls:
- hosts:
- example.com
secretName: ingress-tls
rules:
- host: example.com
- http:
paths:
- path: /api
pathType: Prefix
backend:
service:
name: api-service
port:
number: 443
</code></pre>
<p>This ingress creates the AWS network load balancer, with a URL like
<code>https://xyz.elb.us-west-1.amazonaws.com/</code></p>
<p>I am updating the
<code>ingress-tls</code>
secret with a certificate using
<code>cert-manager</code>.</p>
<p>When I access the ingress using the NLB URL
<code>https://xyz.elb.us-west-1.amazonaws.com/api</code>, I get</p>
<ol>
<li>GOOD: Correct routing based on the path rules from the ingress definition (i.e. it goes to my
<code>api-service</code> as expected)</li>
<li>BAD: Certificate errors since I'm not accessing the ingress with the domain that the certificate is for.</li>
</ol>
<p>When I access the ingress using the correct domain e.g.
<code>https://example.com/api</code> which is what I want to do, I get:</p>
<ol>
<li>BAD:
<code>404</code>, it doesn't respect my path rules, and goes to
<code>upstream-default-backend</code> instead.</li>
<li>GOOD: certificate all good, it’s the one for
<code>example.com</code> that
<code>cert-manager</code> configured.</li>
</ol>
<p>I tried removing the
<code>host: example.com</code> from the
<code>rules:</code>, which gives me:</p>
<ol>
<li>GOOD: Correct routing based on the path rules from the ingress definition</li>
<li>BAD: Certificate errors, it serves up the default ingress “Fake” certificate instead of the one for
<code>example.com</code>, I guess since the
<code>host</code> is missing from the rules, though not sure of the exact reason.</li>
</ol>
<p>Can someone please help me get</p>
<ol>
<li>GOOD</li>
<li>GOOD</li>
</ol>
<p>I’m at a loss here.</p>
| e.dan | <p>After staring at this for several more hours, and digging through the nasty chunk of lua that is the
<code>nginx.conf</code> for this, I found it! Maybe someday someone will have this problem, and might find this useful.</p>
<p>The problem was:</p>
<pre><code> rules:
- host: example.com
- http:
</code></pre>
<p>This is defining (I think) a
<code>host</code> with no forwarding rules, and then some
<code>http</code> forwarding rules without a host. What I had intended, obviously, was that the forwarding rules would be for the host.</p>
<p>And that would be:</p>
<pre><code> rules:
- host: example.com
http:
</code></pre>
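<p>For completeness, the full corrected block, as a sketch based on the spec in the question:</p>
<pre><code> rules:
 - host: example.com
   http:
     paths:
     - path: /api
       pathType: Prefix
       backend:
         service:
           name: api-service
           port:
             number: 443
</code></pre>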
<p>I have to say that I'm now even less of a fan of YAML than I was previously, if that's even possible.</p>
| e.dan |
<p>I am facing performance issues with Hazelcast configured as client-server.
I have one K8S cluster consisting of 5 nodes and 1 master node. Each node has 64 GB of RAM and 16 cores (Hazelcast version 3.12.4).
The Hazelcast server is deployed on K8S with one POD on one of the nodes available in the cluster.
My client is deployed on K8S and connects to the above Hazelcast as a smart client (Hazelcast discovery enabled for K8S). There are a total of 10 PODs of my application, with each node hosting 2 PODs.</p>
<p>I am running different APIs and performing load testing of my application (approx. 110 threads at a time shared across all 10 PODs).</p>
<p>I am having the following piece of code in my application to get cache.</p>
<pre><code>public Map<Object, Object> get(String cacheId, Long lTenantId) {
String strMethodName="get";
long t1 = System.currentTimeMillis();
Map<Object,Object> cacheDataMap=hazelcastInstance.getMap(cacheId);
long totalTimeTaken = (System.currentTimeMillis()-t1);
if(totalTimeTaken > 10){
logger.warnLog(CLASSNAME, strMethodName,"Total time taken by "+cacheId+" identifier for get operation is : "+totalTimeTaken+" ms");
}
return cacheDataMap;
}
</code></pre>
<p>The way my application uses this map varies like</p>
<p>1) </p>
<pre><code>map.get(key);
</code></pre>
<p>2) </p>
<pre><code>Set keys = map.keySet();
Iterator iterator = keys.iterator(); //I changed to keyset iterator because entryset was causing lot of performance issues
while (iterator.hasNext()) {
// doing stuff
}
</code></pre>
<p>When all my APIs are started for load, I am getting these logs printed in the application ("Total time taken by...."), where each cache access time is > 10 milliseconds. This is causing performance issues, and hence I am not able to achieve my desired TPS for all APIs.</p>
<p>There are approximately 300 maps stored in the cache, and the total size of the cache is 4.22 MB.</p>
<p>I am using near cache configuration and also on the management center it is showing effectiveness as 100%. (This was taken when hazelcast.client.statistics.enabled was enabled).</p>
<p>I have also tried with 8 PODs deployed on 4 Nodes and 1 dedicated node for Hazelcast server but the issue remains the same. There are no issues observed when I am connecting Hazelcast as embedded and I am able to achieve my desired TPS for all APIs.</p>
<p>Am I missing any configuration, or is there anything else that could be causing this problem?</p>
<p>Here is my hazelcast-client.xml</p>
<pre><code><hazelcast-client
xmlns="http://www.hazelcast.com/schema/client-config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.hazelcast.com/schema/client-config
http://hazelcast.com/schema/client-config/hazelcast-client-config-3.11.xsd">
<group>
<name>dev</name>
</group>
<instance-name>hazelcast</instance-name>
<properties>
<property name="hazelcast.client.shuffle.member.list">true</property>
<property name="hazelcast.client.heartbeat.timeout">600000</property>
<property name="hazelcast.client.heartbeat.interval">180000</property>
<property name="hazelcast.client.event.queue.capacity">1000000</property>
<property name="hazelcast.client.invocation.timeout.seconds">120</property>
<property name="hazelcast.client.statistics.enabled">false</property>
<property name="hazelcast.discovery.enabled">true</property>
<property name="hazelcast.map.invalidation.batch.enabled">false</property>
</properties>
<network>
<discovery-strategies>
<discovery-strategy enabled="true"
class="com.hazelcast.kubernetes.HazelcastKubernetesDiscoveryStrategy">
<properties>
<property name="service-name"><service-name></property>
<property name="namespace"><namespace></property>
</properties>
</discovery-strategy>
</discovery-strategies>
<smart-routing>true</smart-routing>
<redo-operation>true</redo-operation>
<connection-timeout>90000</connection-timeout>
<connection-attempt-period>100</connection-attempt-period>
<connection-attempt-limit>0</connection-attempt-limit>
</network>
<near-cache name="default">
<in-memory-format>OBJECT</in-memory-format>
<serialize-keys>true</serialize-keys>
<invalidate-on-change>true</invalidate-on-change>
<eviction eviction-policy="NONE" max-size-policy="ENTRY_COUNT"/>
</near-cache>
</hazelcast-client>
</code></pre>
<p>Here is my hazelcast.xml</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-3.11.xsd"
xmlns="http://www.hazelcast.com/schema/config"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<management-center enabled="${hazelcast.mancenter.enabled}">${hazelcast.mancenter.url}</management-center>
</hazelcast>
</code></pre>
| Pavan Mulani | <p>The goal of a cache is to get the value from a key as fast as possible. In general, you already have the key, and request the value. That means you send a request to any node, which looks up in the partition table which partition the key belongs to, and forwards the query to the relevant node.</p>
<p>In your second use-case, you try to get all keys from all nodes:</p>
<pre><code>Set keys = map.keySet();
Iterator iterator = keys.iterator();
while (iterator.hasNext()) {
// doing stuff
}
</code></pre>
<p>To return as fast as possible, Hazelcast will return a lazy implementation of the <code>Iterator</code>. For each call to <code>next()</code>, it will first need to retrieve the key following the above process. Plus, I assume the <code>// doing stuff</code> code actually loads the value from the key.</p>
<p>In conclusion, please avoid at all costs iterating over <code>map.keySet()</code> like this. Without knowing more about your context and your use-case, I unfortunately cannot provide a relevant alternative.</p>
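<p>That said, when the goal is simply to read many entries, a common pattern (an illustrative sketch, not necessarily the right fit for your use case) is to fetch the values in bulk instead of paying one network round trip per key:</p>
<pre><code>// Bulk-fetch values in one batched call; IMap.getAll() is part of
// Hazelcast's IMap API. The keySet() call still fetches all keys,
// but the values then arrive in a single batch.
IMap<Object, Object> map = hazelcastInstance.getMap(cacheId);
Map<Object, Object> entries = map.getAll(map.keySet());
for (Map.Entry<Object, Object> entry : entries.entrySet()) {
    // doing stuff with entry.getKey() / entry.getValue()
}
</code></pre>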
| Nicolas |
<p>How can we auto-update (<em>delete, create, change</em>) entries in <code>/etc/hosts</code> file of running Pod without actually entering the pod?</p>
<p>We working on containerisation of <em>SAP</em> application server and so far succeeded in achieving this using <em>Kubernetes</em>.</p>
<pre><code>apiVersion: v1
kind: Pod
spec:
hostNetwork: true
</code></pre>
<p>Since we are using host network approach, all entries of our VMs <code>/etc/hosts</code> file are getting copied whenever a new pod is created.</p>
<p>However, once pod has been created and in running state, any changes to VMs <code>/etc/hosts</code> file are not getting transferred to already running pod.</p>
<p>We would like to achieve this for our project requirement.</p>
| Jayesh | <p>Kubernetes does have several different ways of affecting name resolution, your request is most similar to <a href="https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/" rel="nofollow noreferrer">here</a> and related pages.</p>
<p>Here is an extract, emphasis mine.</p>
<blockquote>
<p>Adding entries to a Pod’s /etc/hosts file provides Pod-level override of hostname resolution when DNS and other options are not applicable. In 1.7, users can add these custom entries with the HostAliases field in PodSpec.</p>
<p><strong>Modification not using HostAliases is not suggested because the file is managed by Kubelet and can be overwritten during Pod creation/restart.</strong></p>
</blockquote>
<p>An example Pod specification using <code>HostAliases</code> is as follows:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: hostaliases-pod
spec:
restartPolicy: Never
hostAliases:
- ip: "127.0.0.1"
hostnames:
- "foo.local"
- "bar.local"
- ip: "10.1.2.3"
hostnames:
- "foo.remote"
- "bar.remote"
containers:
- name: cat-hosts
image: busybox
command:
- cat
args:
- "/etc/hosts"
</code></pre>
<p>One issue here is that you will need to update and restart the Pods with a new set of <code>HostAliases</code> if your network IPs change. That might cause downtime in your system.</p>
<p>Are you sure you need this mechanism and not <a href="https://kubernetes.io/docs/concepts/services-networking/service/#services-without-selectors" rel="nofollow noreferrer">a service that points to an external endpoint</a>?</p>
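<p>For reference, that alternative looks something like this (a minimal sketch; the name and IP are illustrative):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  ports:
    - port: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db  # must match the Service name
subsets:
  - addresses:
      - ip: 192.0.2.10  # the external endpoint
    ports:
      - port: 3306
</code></pre>
<p>Updating the Endpoints object then redirects traffic without restarting any Pods.</p>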
| Paul Annetts |
<p>So when I run helm init, sometimes it works and a tiller pod gets created and sometimes the connection times out. I reasoned to extend the tiller-connection-time out with:</p>
<p><code>helm init --tiller-connection-timeout 500 --service-account tiller --tiller-image my-image --tiller-namespace my-namespace</code></p>
<p>..but I got this error:
<strong>Error: unknown flag: --tiller-connection-timeout</strong></p>
<p>However, the docs list this as a valid flag, copy-pasted from the docs:
<a href="https://docs.helm.sh/helm/" rel="nofollow noreferrer">https://docs.helm.sh/helm/</a></p>
<p>Anybody else have issues with <code>helm init</code> in Kubernetes? How to get a consistent tiller pod created? I'm happy to provide more info, if that helps</p>
| Aliisa Roe | <p><code>--tiller-connection-timeout</code> is available from Helm 2.9.0; take a look at your Helm version, as it may need an upgrade.</p>
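<p>With Helm 2 you can check both the client and the Tiller (server) versions, and upgrade Tiller after upgrading the client:</p>
<pre><code>helm version        # prints client and server (Tiller) versions
helm init --upgrade # upgrades the Tiller deployment in the cluster
</code></pre>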
| silverfox |
<p>I have an Azure Kubernetes Service cluster, running version <code>1.15.7</code>. This cluster recently replaced an older cluster version (<code>1.12.something</code>). In the past, once the various service pods were up and running, we would create a public IP resource in Azure portal and assign it a name, then create a <code>Service</code> resource like this:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myservice-frontend
labels:
app: myservice
spec:
ports:
- port: 80
name: myservice-frontend
targetPort: 80
- port: 443
name: myservice-frontend-ssl
targetPort: 443
selector:
app: myservice-frontend
type: LoadBalancer
loadBalancerIP: 1.2.3.4
</code></pre>
<p>Finally, we'd add the public IP to a Traffic Manager instance. </p>
<p>Since upgrading to 1.15, this doesn't seem to work anymore. We can go through all the above steps, but as soon as the Service/Load Balancer is created, the public IP loses its DNS name, which causes it to be evicted from Traffic Manager. We can reset the name, but within 36-48 hours it gets lost again. My suspicion is that AKS is trying to apply a name to the associated IP address, but since I haven't defined one above, it just sets it to null. </p>
<p>How can I tell AKS what name to assign to a public IP? Better yet, can I skip the static public IP and let AKS provision a dynamic address and simply add the DNS name to Traffic Manager?</p>
| superstator | <p>This is indeed a bug in AKS <code>1.15.7</code></p>
<p><a href="https://github.com/kubernetes/kubernetes/issues/87127" rel="nofollow noreferrer">Azure - PIP dns label will be default deleted</a></p>
<p>The upshot is, this is part of a new feature in 1.15 that allows the DNS label for a LoadBalancer IP to be set in the Service configuration. So, the definition above can become:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: myservice-frontend
labels:
app: myservice
annotations:
service.beta.kubernetes.io/azure-dns-label-name: myservice-frontend
spec:
ports:
- port: 80
name: myservice-frontend
targetPort: 80
- port: 443
name: myservice-frontend-ssl
targetPort: 443
selector:
app: myservice-frontend
type: LoadBalancer
</code></pre>
<p>And the service will be automatically assigned a new static IP with the annotated DNS name.</p>
| superstator |
<p>I'm getting <code>Unable to connect to the server: dial tcp <IP> i/o timeout</code> when trying to run <code>kubectl get pods</code> when connected to my cluster in google shell. This started out of the blue without me doing any changes to my cluster setup. </p>
<pre><code>gcloud beta container clusters create tia-test-cluster \
--create-subnetwork name=my-cluster\
--enable-ip-alias \
--enable-private-nodes \
--master-ipv4-cidr <IP> \
--enable-master-authorized-networks \
--master-authorized-networks <IP> \
--no-enable-basic-auth \
--no-issue-client-certificate \
--cluster-version=1.11.2-gke.18 \
--region=europe-north1 \
--metadata disable-legacy-endpoints=true \
--enable-stackdriver-kubernetes \
--enable-autoupgrade
</code></pre>
<p>This is the current cluster-config.
I've run <code>gcloud container clusters get-credentials my-cluster --zone europe-north1-a --project <my project></code> before doing this aswell.</p>
<p>I also noticed that my compute instances have lost their external IPs. In our staging environment, everything works as it should based on the same config.</p>
<p>Any pointers would be greatly appreciated.</p>
| Coss | <p>From what I can see of what you've posted you've turned on master authorized networks for the network <code><IP></code>.</p>
<p>If the IP address of the Google Cloud Shell ever changes that is the exact error that you would expect.</p>
<p>As per <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#cloud_shell" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#cloud_shell</a>: you need to update the allowed IP address.</p>
<pre><code>gcloud container clusters update tia-test-cluster \
--region europe-north1 \
--enable-master-authorized-networks \
--master-authorized-networks [EXISTING_AUTH_NETS],[SHELL_IP]/32
</code></pre>
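<p>One way to find the Cloud Shell's current public IP (the <code>[SHELL_IP]</code> value above) is, for example:</p>
<pre><code># run from inside Cloud Shell
curl -s ifconfig.me
</code></pre>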
| Paul Annetts |
<p>I am struggling to understand the Spark documentation in order to set up the local-dir correctly.</p>
<h2>Setup:</h2>
<p>I am running Spark 3.1.2 on Kubernetes via the Sparkoperator approach. The number of executor Pods varies with job size and the resources available on the cluster. A typical case is that I start the job with 20 requested executors, but 3 Pods remain in a pending state and Spark completes the job with 17 executors.</p>
<h2>Base Problem:</h2>
<p>I am running into the error "The node was low on resource: ephemeral-storage." due to heavy spilling of data into the default local-dir created via <code>emptyDir</code> on the Kubernetes nodes.</p>
<p>This is a known issue, and it should be solved by pointing the <code>local-dir</code> to a mounted persistent volume.</p>
<p>I tried two approaches but neither is working:</p>
<h2>Approach 1:</h2>
<p>Following the documentation <a href="https://spark.apache.org/docs/latest/running-on-kubernetes.html#local-storage" rel="nofollow noreferrer">https://spark.apache.org/docs/latest/running-on-kubernetes.html#local-storage</a> I added the following options into the spark-config</p>
<pre><code>"spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-1.options.claimName": "tmp-spark-spill"
"spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-1.options.storageClass": "csi-rbd-sc"
"spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-1.options.sizeLimit": "3000Gi"
"spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-1.mount.path": ="/spill-data"
"spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-1.mount.readOnly": "false"
</code></pre>
<p>the full yaml looks like</p>
<pre><code>apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
name: job1
namespace: spark
spec:
serviceAccount: spark
type: Python
pythonVersion: "3"
mode: cluster
image: "xxx/spark-py:app-3.1.2"
imagePullPolicy: Always
mainApplicationFile: local:///opt/spark/work-dir/nfs/06_dwh_core/jobs/job1/main.py
sparkVersion: "3.0.0"
restartPolicy:
type: OnFailure
onFailureRetries: 0
onFailureRetryInterval: 10
onSubmissionFailureRetries: 0
onSubmissionFailureRetryInterval: 20
sparkConf:
"spark.default.parallelism": "400"
"spark.sql.shuffle.partitions": "400"
"spark.serializer": "org.apache.spark.serializer.KryoSerializer"
"spark.sql.debug.maxToStringFields": "1000"
"spark.ui.port": "4045"
"spark.driver.maxResultSize": "0"
"spark.kryoserializer.buffer.max": "512"
"spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-1.options.claimName": "tmp-spark-spill"
"spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-1.options.storageClass": "csi-rbd-sc"
"spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-1.options.sizeLimit": "3000Gi"
"spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-1.mount.path": ="/spill-data"
"spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-1.mount.readOnly": "false"
driver:
cores: 1
memory: "20G"
labels:
version: 3.1.2
serviceAccount: spark
volumeMounts:
- name: nfs
mountPath: /opt/spark/work-dir/nfs
executor:
cores: 20
instances: 20
memory: "150G"
labels:
version: 3.0.0
volumeMounts:
- name: nfs
mountPath: /opt/spark/work-dir/nfs
volumes:
- name: nfs
nfs:
server: xxx
path: /xxx
readOnly: false
</code></pre>
<h2>Issue 1:</h2>
<p>This results in an error saying that the PVC already exists, and effectively only one executor is created.</p>
<pre><code>io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: POST at: https://kubernetes.default.svc/api/v1/namespaces/spark-poc/persistentvolumeclaims. Message: persistentvolumeclaims "tmp-spark-spill" already exists. Received status: Status(apiVersion=v1, code=409, details=StatusDetails(causes=[], group=null, kind=persistentvolumeclaims, name=tmp-spark-spill, retryAfterSeconds=null, uid=null, additionalProperties={}), kind=Status, message=persistentvolumeclaims "tmp-spark-spill" already exists, metadata=ListMeta(_continue=null, remainingItemCount=null, resourceVersion=null, selfLink=null, additionalProperties={}), reason=AlreadyExists, status=Failure, additionalProperties={}).
</code></pre>
<p>Do I have to define these local-dir claims for every executor? Something like:</p>
<pre><code> "spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-1.options.claimName": "tmp-spark-spill"
.
.
.
"spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-2.options.claimName": "tmp-spark-spill"
.
.
.
"spark.kubernetes.executor.volumes.persistentVolumeClaim.spark-local-dir-3.options.claimName": "tmp-spark-spill"
.
.
.
</code></pre>
<p>But how can I make it dynamic if I have a changing number of executors? Is it not automatically picked up from the executor config?</p>
<h2>Approach 2:</h2>
<p>I created a PVC myself, mounted it as a volume, and set the local-dir as a Spark config parameter.</p>
<pre><code>apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-spark-spill
namespace: spark-poc
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 3000Gi
storageClassName: csi-rbd-sc
volumeMode: Filesystem
</code></pre>
<p>It is mounted into the executors like:</p>
<pre><code>apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
name: job1
namespace: spark
spec:
serviceAccount: spark
type: Python
pythonVersion: "3"
mode: cluster
image: "xxx/spark-py:app-3.1.2"
imagePullPolicy: Always
mainApplicationFile: local:///opt/spark/work-dir/nfs/06_dwh_core/jobs/job1/main.py
sparkVersion: "3.0.0"
restartPolicy:
type: OnFailure
onFailureRetries: 0
onFailureRetryInterval: 10
onSubmissionFailureRetries: 0
onSubmissionFailureRetryInterval: 20
sparkConf:
"spark.default.parallelism": "400"
"spark.sql.shuffle.partitions": "400"
"spark.serializer": "org.apache.spark.serializer.KryoSerializer"
"spark.sql.debug.maxToStringFields": "1000"
"spark.ui.port": "4045"
"spark.driver.maxResultSize": "0"
"spark.kryoserializer.buffer.max": "512"
"spark.local.dir": "/spill"
driver:
cores: 1
memory: "20G"
labels:
version: 3.1.2
serviceAccount: spark
volumeMounts:
- name: nfs
mountPath: /opt/spark/work-dir/nfs
executor:
cores: 20
instances: 20
memory: "150G"
labels:
version: 3.0.0
volumeMounts:
- name: nfs
mountPath: /opt/spark/work-dir/nfs
- name: pvc-spark-spill
mountPath: /spill
volumes:
- name: nfs
nfs:
server: xxx
path: /xxx
readOnly: false
- name: pvc-spark-spill
persistentVolumeClaim:
claimName: pvc-spark-spill
</code></pre>
<h2>Issue 2</h2>
<p>This approach fails with the message that the <code>/spill</code> mount path must be unique.</p>
<pre><code> Message: Pod "job1-driver" is invalid: spec.containers[0].volumeMounts[7].mountPath: Invalid value: "/spill": must be unique.
</code></pre>
<h2>Summary and Questions</h2>
<p>It seems that every executor needs its own PVC, or at least its own folder on the PVC, to spill its data. But how do I configure this correctly? I can't work it out from the documentation.</p>
<p>Thanks for your help
Alex</p>
| Alex Ortner | <p>Spark should be able to create PVCs dynamically by setting <code>claimName=OnDemand</code>.
Attaching multiple pods to the same PVC is what causes the issue on the Kubernetes end.</p>
<p>Attaching a screenshot of the relevant documentation:
<a href="https://i.stack.imgur.com/jmIPT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jmIPT.png" alt="config screenshot" /></a></p>
<p>You can also look into an NFS share, which works outside Kubernetes-managed volumes.
Example:
<a href="https://www.datamechanics.co/blog-post/apache-spark-3-1-release-spark-on-kubernetes-is-now-ga" rel="nofollow noreferrer">https://www.datamechanics.co/blog-post/apache-spark-3-1-release-spark-on-kubernetes-is-now-ga</a></p>
| Vish |
<p>I have an <em>alpine</em> Docker image running in Kubernetes, in which I try to push to Git using a Deploy Key (with passphrase).</p>
<p>Now my command looks like:</p>
<pre class="lang-docker prettyprint-override"><code>CMD ["/bin/sh", "-c", "GIT_SSH_COMMAND=\"sshpass -p mygreatpassphrase ssh -vvv\" git -C /workspace push --mirror [email protected]:foo/bar.git"]
</code></pre>
<p>The result then is:</p>
<pre class="lang-shell prettyprint-override"><code><snip>
debug3: send packet: type 21
debug2: set_newkeys: mode 1
debug1: rekey after 134217728 blocks
debug1: SSH2_MSG_NEWKEYS sent
debug1: expecting SSH2_MSG_NEWKEYS
debug3: receive packet: type 21
debug1: SSH2_MSG_NEWKEYS received
debug2: set_newkeys: mode 0
debug1: rekey after 134217728 blocks
debug1: Will attempt key: /.ssh/id_rsa
debug1: Will attempt key: /.ssh/id_dsa
debug1: Will attempt key: /.ssh/id_ecdsa
debug1: Will attempt key: /.ssh/id_ed25519
debug1: Will attempt key: /.ssh/id_xmss
debug2: pubkey_prepare: done
debug3: send packet: type 5
debug3: receive packet: type 7
debug1: SSH2_MSG_EXT_INFO received
debug1: kex_input_ext_info: server-sig-algs=<ssh-ed25519,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521,ssh-rsa,rsa-sha2-512,rsa-sha2-256,ssh-dss>
debug3: receive packet: type 6
debug2: service_accept: ssh-userauth
debug1: SSH2_MSG_SERVICE_ACCEPT received
debug3: send packet: type 50
debug3: receive packet: type 51
debug1: Authentications that can continue: publickey
debug3: start over, passed a different list publickey
debug3: preferred publickey,keyboard-interactive,password
debug3: authmethod_lookup publickey
debug3: remaining preferred: keyboard-interactive,password
debug3: authmethod_is_enabled publickey
debug1: Next authentication method: publickey
debug1: Trying private key: /.ssh/id_rsa
</code></pre>
<p>It hangs on this line indefinitely. Sadly it seems there is no more verbose output for <code>ssh</code>. I am not sure whether the problem is with <code>sshpass</code> inside the container or where the actual problem arises.</p>
<p>Building in Docker hangs, too:</p>
<pre class="lang-docker prettyprint-override"><code>FROM alpine/git
RUN apk add --no-cache sshpass
RUN sshpass -p foo /bin/sh -c 'read X < /dev/tty'
</code></pre>
<p><code>sshpass</code> has the following files open:</p>
<pre class="lang-shell prettyprint-override"><code>lr-x------ 1 tempuser root 64 Jul 31 10:43 0 -> pipe:[380942247]
l-wx------ 1 tempuser root 64 Jul 31 10:43 1 -> pipe:[380942248]
l-wx------ 1 tempuser root 64 Jul 31 10:43 2 -> pipe:[380944011]
lrwx------ 1 tempuser root 64 Jul 31 10:43 3 -> /dev/pts/ptmx
lrwx------ 1 tempuser root 64 Jul 31 10:43 4 -> /dev/pts/0
</code></pre>
<p><code>ssh</code> in contrast:</p>
<pre class="lang-shell prettyprint-override"><code>lr-x------ 1 tempuser root 64 Jul 31 09:23 0 -> pipe:[380942247]
l-wx------ 1 tempuser root 64 Jul 31 09:23 1 -> pipe:[380942248]
l-wx------ 1 tempuser root 64 Jul 31 09:23 2 -> pipe:[380944011]
lrwx------ 1 tempuser root 64 Jul 31 09:23 3 -> socket:[380944638]
lrwx------ 1 tempuser root 64 Jul 31 10:43 4 -> /dev/tty
</code></pre>
| abergmeier | <p>For keys with a passphrase, the SSH prompt is different: it asks for a passphrase rather than a password, and <code>sshpass</code> matches the prompt substring "assword" by default.
So I had to change the expected prompt substring using <code>-P assphrase</code>:</p>
<pre class="lang-docker prettyprint-override"><code>CMD ["/bin/sh", "-c", "GIT_SSH_COMMAND=\"sshpass -p mygreatpassphrase -P assphrase ssh -vvv\" git -C /workspace push --mirror [email protected]:foo/bar.git"]
</code></pre>
| abergmeier |
<p>I was following <a href="https://cloud.google.com/python/django/kubernetes-engine" rel="nofollow noreferrer">this</a> tutorial for deploying Django App to Kubernetes Cluster. I've created cloudsql credentials and exported them as in the tutorial</p>
<pre><code>export DATABASE_USER=<your-database-user>
export DATABASE_PASSWORD=<your-database-password>
</code></pre>
<p>However, my password was generated by LastPass and contains special characters, which are stripped out in the Kubernetes Pod, making the password incorrect.</p>
<p>This is my password (altered, just showing the special chars)
<code>5bb4&sL!EB%e</code></p>
<p>So I've tried various ways of exporting this string; echoing it out always shows the correct password, but in the Kubernetes Dashboard the password is always incorrect (also altered in DevTools, but some chars are just stripped out).</p>
<p><a href="https://i.stack.imgur.com/yR5o9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yR5o9.png" alt="enter image description here"></a></p>
<p>Things I've tried</p>
<pre><code>export DATABASE_PASSWORD=$'5bb4&sL\!EB\%e'
export DATABASE_PASSWORD='5bb4&sL!EB%e'
</code></pre>
<p>Echoing always looks correct, but Kubernetes keeps stripping it.</p>
<p>Deploying with <code>skaffold deploy</code></p>
<p>EDIT:</p>
<p>After the hint, I tried storing the password in base64-encoded form; however, I suspect it only applies to the local scope, as the password in the Kubernetes Dashboard is still the same. Do I need to regenerate the certificate to make this work remotely on the GKE cluster?</p>
<p><a href="https://i.stack.imgur.com/VGjsJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VGjsJ.jpg" alt="enter image description here"></a></p>
<p>So the env variables are for local and credentials in cloud sql proxy are the ones that are being used and misinterpreted? Where are those files by the way?</p>
<p>EDIT2:</p>
<p>I've just found out that the GKE cluster is indeed using the credentials JSON rather than the exported variables. The configuration JSON already contains the password in base64-encoded form; HOWEVER, it is the base64 encoding of a string that is still missing the special characters. It looks like the only way out is to generate new credentials without special characters; that looks like a bug, doesn't it?</p>
| Josef Korbel | <p>You should <code>base64</code> encode your password before passing it into the pod so that special characters are encoded in a way that they can be preserved.</p>
<p>In bash you can do this with: </p>
<pre><code>export DATABASE_PASSWORD=`echo -n [ACTUAL_PASSWORD_HERE] | base64`  # -n avoids encoding a trailing newline
</code></pre>
<p>You'll then need to ensure that the Django app <code>settings.py</code> uses a base64 decode before applying the password to its internal variable.</p>
<p>So in the tutorial you linked to, the line</p>
<p><code>'PASSWORD': os.getenv('DATABASE_PASSWORD'),</code></p>
<p>would need to change to:</p>
<p><code>'PASSWORD': base64.b64decode(os.getenv('DATABASE_PASSWORD')).decode('utf-8'),</code></p>
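<p>(Note that <code>base64.b64decode</code> returns bytes, hence the <code>.decode('utf-8')</code>; you'll also need <code>import base64</code> at the top of <code>settings.py</code>.)</p>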
| Paul Annetts |
<p>I am writing this question to share the solution we found in our company.
We migrated Solr from a Docker-only solution to a Kubernetes solution.</p>
<p>On Kubernetes the environment ended up being slow.
At least for me, the solution was atypical.</p>
<p><strong>Environment:</strong></p>
<ul>
<li>solr(8.2.0) with just one node</li>
<li>solr database with 250GB on disk</li>
<li>kubernetes over Rancher</li>
<li>Node with 24vcpus and 32GB of Ram</li>
<li>Node hosts Solr and nginx ingress</li>
<li>Reserved 30GB for the Solr pod in kubernetes</li>
<li>Reserved 25GB for the Solr</li>
</ul>
<p><strong>Expected Load:</strong></p>
<ul>
<li>350 updates/min (pdf documents and html documents)</li>
<li>50 selects/min</li>
</ul>
<p>The result was Solr degrading over time, with high load on the host. The culprit was heavy disk access.</p>
| Tarmac | <p>After one week of frustrating adjustments, this is the simple solution we found:</p>
<p>The Solr JVM had 25 GB. We decreased the value to 10 GB.</p>
<p>This is the command to start solr with the new values:</p>
<pre><code>/opt/solr/bin/solr start -f -force -a '-Xms10g -Xmx10g' -p 8983
</code></pre>
<p>If someone can explain what happened, that would be great.
My guess is that Solr was trying to build its cache and Kubernetes was reaping that cache, so Solr ended up continuously reading from disk trying to rebuild it.</p>
| Tarmac |
<p>I know that you can assign multiple roles to one service account when you want your service account to access multiple namespaces, but what I wonder is how it will behave when you assign it more than one ClusterRole, which is cluster-scoped. My guess is that it will choose one of them, but I'm not sure.</p>
| touati ahmed | <blockquote>
<p>Permissions are purely additive (there are no "deny" rules).</p>
</blockquote>
<p><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/" rel="noreferrer">reference</a></p>
<p>This is the golden 🥇 rule here that we must memorize for kubernetes RBAC roles.</p>
<p>"purely additive" means always <strong>ALLOW</strong> no revoke.</p>
<p>Hence, "purely additive" means there are neither <strong>conflicts</strong> nor <strong>order of precedence</strong>.</p>
<ul>
<li>It's not like <strong>AWS IAM policies</strong>, where we have DENY and ALLOW; in that case, we have to know which one has the highest order of precedence.</li>
<li>It's also not like <strong>subnet ACLs</strong>, where we have DENY and ALLOW; in that case, we assign a number to each rule, and this number decides the order of precedence.</li>
</ul>
<p>Example:</p>
<pre class="lang-yaml prettyprint-override"><code>---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
# "namespace" omitted since ClusterRoles are not namespaced
name: pod-reader
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
# "namespace" omitted since ClusterRoles are not namespaced
name: node-reader
rules:
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "watch", "list"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: pod-reader
subjects:
- kind: User
name: abdennour
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: pod-reader
apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: node-reader
subjects:
- kind: User
name: abdennour
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: node-reader
apiGroup: rbac.authorization.k8s.io
</code></pre>
<p>As you can see in this example, the user abdennour ends up with read access to both nodes and pods.</p>
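<p>You can verify the combined permissions with <code>kubectl auth can-i</code>:</p>
<pre><code>kubectl auth can-i list pods --as abdennour    # yes
kubectl auth can-i list nodes --as abdennour   # yes
kubectl auth can-i delete pods --as abdennour  # no, since nothing grants it
</code></pre>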
| Abdennour TOUMI |
<p>I have installed Ingress and linked my service to it (usign metallb).</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: test-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- http:
paths:
- path: /api/tasks/*
# pathType: Exact
backend:
serviceName: tasks-service
servicePort: 5004
</code></pre>
<p>The thing is this, I set up the default prefix of the paths in the deployment to be</p>
<blockquote>
<p>/api/tasks/</p>
</blockquote>
<p>where <code>/api/tasks/tasks</code> shows the service is up while <code>/api/tasks/tasks_count</code> gives the total number. However in my k8s cluster, I cannot redirect to the different paths within the service. What could be the problem?</p>
| Denn | <p>Since this is a result in Google for wildcards and prefixes, I'll answer this old question.</p>
<p>The functionality you're looking for comes from <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types" rel="nofollow noreferrer">specifying the path type</a> as <code>pathType: Prefix</code></p>
<pre><code> paths:
- path: /api/tasks
pathType: Prefix
backend:
serviceName: tasks-service
servicePort: 5004
</code></pre>
<p>Importantly, the path doesn't contain a wildcard character. In fact, cloud providers like AWS will throw errors if you're using their <a href="https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.3/" rel="nofollow noreferrer">custom load balancer provisioners</a> for Ingress resources:</p>
<pre><code>prefix path shouldn't contain wildcards
</code></pre>
| Indigenuity |
<p>I run a tiny (read: single-node) Docker Swarm cluster that I want to migrate to Kubernetes, and I can't figure out how to expose a Service on a specific port so that I can access it from the outside world.</p>
<p>In Docker Swarm, I could expose e.g. a MySQL server by specifying</p>
<pre><code>ports:
- '3306:3306'
</code></pre>
<p>as part of the service block in my stack configuration file, which would let me access it on <code>127.0.0.1:3306</code>.</p>
<p>To replicate this in Kubernetes, my first instinct was to use the <code>NodePort</code> service type and specifying</p>
<pre><code>ports:
- port: 3306
targetPort: 3306
nodePort: 3306
</code></pre>
<p>in the service spec. But this is not allowed: Kubernetes tells me <code>provided port is not in the valid range. The range of valid ports is 30000-32767</code>.</p>
<p>Then there is <code>Ingress</code>, which seems closely aligned with what I want to do, but <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#prerequisites" rel="nofollow noreferrer">it's in beta</a> and is apparently geared towards HTTP services (which does not cover all my use cases). There is also the <code>LoadBalancer</code> type, but I'm not using a cloud provider with support for it and so that isn't an option for me.</p>
<p>This has left me a bit confused. If I want to expose a service in my Kubernetes cluster so that I can access it from the outside (e.g. from the internet at large on <code>some-public-ip:3306</code>), what is a recommended (or alternatively, beginner-friendly) way to set it up? What am I missing?</p>
| James | <p><code>NodePort</code> is probably the simplest approach, but you will need to pick a port in the range 30000 - 32767. That way you'd access say <code>some-public-ip:30306</code> which would map to your service's port 3306 internally.</p>
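<p>A minimal sketch of the corresponding Service (names are illustrative):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  type: NodePort
  selector:
    app: mysql
  ports:
    - port: 3306
      targetPort: 3306
      nodePort: 30306  # must be within 30000-32767
</code></pre>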
| Paul Annetts |
<p>In the context of Azure Kubernetes Service (AKS), I would like to deploy some pods to a region not currently supported by Azure (in my case, Mexico). Is it possible to provision a non-Azure VM here in Mexico and attach it as a worker node to my AKS cluster?</p>
<p>Just to be clear, I want Azure to host the Kubernetes control plane. I want to spin out some Azure VMs within various supported regions. Then configure a non-Azure VM hosted in Mexico as a Kubernetes Node and attach it to the cluster.</p>
<p>(Soon there will be a Microsoft Azure Datacenter in Mexico and this problem will be moot. In the mean time, was hoping to monkey wrench it.)</p>
| brando | <p>You can't have a node pool with VMs that are not managed by Azure with AKS. You'll need to run your own k8s cluster if you want to do something like this. The closest you can get to something managed in Azure like AKS is to build your own <a href="https://learn.microsoft.com/en-us/azure/azure-arc/kubernetes/overview" rel="nofollow noreferrer">Azure Arc enabled</a> Kubernetes Cluster, but you'll need some skills with tools like <code>Rancher</code>, <code>Kubespray</code>, <code>Kubeadm</code> or something else.</p>
| Jean-Philippe Bond |
<p>Setup is Kubernetes v1.13 & Istio 1.0.5</p>
<p>I'm running into an issue where the Istio service discovery is creating Envoy configurations that match TCP listeners instead of HTTP listeners. </p>
<p>The communication is working in the service mesh, but I need Envoy to serve as a Layer 7 proxy and not a Layer 4 pass through. I'm not getting the logs I need for the HTTP requests coming through Envoy. </p>
<p>Here is what I see in the sidecar istio-proxy log: </p>
<p>[2019-02-05T15:40:59.403Z] - 5739 7911 149929 "127.0.0.1:80" inbound|80||api-endpoint.default.svc.cluster.local 127.0.0.1:44560 10.244.3.100:80 10.244.3.105:35204</p>
<p>Which when I inspect the Envoy config in the sidecar - this is the corresponding config for that log message.</p>
<pre><code> "name": "envoy.tcp_proxy",
"config": {
"cluster": "inbound|80||api-endpoint.default.svc.cluster.local",
"access_log": [
{
"name": "envoy.file_access_log",
"config": {
"path": "/dev/stdout",
"format": "[%START_TIME%] %RESPONSE_FLAGS% %BYTES_RECEIVED% %BYTES_SENT% %DURATION% \"%UPSTREAM_HOST%\" %UPSTREAM_CLUSTER% %UPSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_LOCAL_ADDRESS% %DOWNSTREAM_REMOTE_ADDRESS%\n"
}
}
],
"stat_prefix": "inbound|80||api-endpoint.default.svc.cluster.local"
}
</code></pre>
<p>So my question is: <strong>Why is Pilot providing Envoy with a TCP config for an HTTP service?</strong></p>
| Jonathan H | <p>I've come across this, in my case the port name for my service was not in the form <code>http-xyz</code>.</p>
<p>Istio/Envoy assumes that traffic is TCP, unless it gets a hint from the port name that it is some other protocol.</p>
<p>As per <a href="https://istio.io/help/faq/traffic-management/#naming-port-convention" rel="nofollow noreferrer">https://istio.io/help/faq/traffic-management/#naming-port-convention</a></p>
<blockquote>
<p>Named ports: Service ports must be named.</p>
<p>The port names must be of the form protocol-suffix with http, http2, grpc, mongo, or redis as the protocol in order to take advantage of Istio’s routing features.</p>
<p>For example, name: http2-foo or name: http are valid port names, but name: http2foo is not. If the port name does not begin with a recognized prefix or if the port is unnamed, traffic on the port will be treated as plain TCP traffic (unless the port explicitly uses Protocol: UDP to signify a UDP port).</p>
</blockquote>
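<p>For example, naming the port with an <code>http-</code> prefix is enough for Pilot to generate an HTTP listener instead of a TCP one (a minimal sketch based on the service in the question):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: api-endpoint
spec:
  selector:
    app: api-endpoint
  ports:
    - name: http-api  # the "http-" prefix marks this port as HTTP traffic
      port: 80
      targetPort: 80
</code></pre>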
| Paul Annetts |
<p>I have a 3-node k8s cluster in an on-premise setup in my company, which runs a TCP listener exposed on port <code>58047</code>.</p>
<p>We have a network load balancer which can round-robin across these nodes.</p>
<p>I can expose the port on the host of each node so the NLB takes care of it, or I can create a service which exposes a single external IP to be specified in the NLB.</p>
<p>Which is the best approach?</p>
<pre><code>NAME READY STATUS RESTARTS AGE
pod/iothubdeployment-57bbb5f4d6-m62df 1/1 Running 1 50m
pod/iothubdeployment-57bbb5f4d6-r9mzr 1/1 Running 1 50m
pod/iothubdeployment-57bbb5f4d6-w5dq4 1/1 Running 0 50m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d18h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/iothubdeployment 3/3 3 3 56m
NAME DESIRED CURRENT READY AGE
replicaset.apps/iothubdeployment-57bbb5f4d6 3 3 3 50m
replicaset.apps/iothubdeployment-6b78b96dc5 0 0 0 56m
</code></pre>
<p>My deployment-definition</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: iothubdeployment
spec:
replicas: 3
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1
maxUnavailable: 67%
minReadySeconds: 5
selector:
matchLabels:
app: iothub
template:
metadata:
labels:
app: iothub
version: 1.1.0
spec:
containers:
- name: iothubpod
image: gitlab.dt.local:5555/cere-brum/iot_gateway/iot_hub/iot_hub:latest
imagePullPolicy: Always
ports:
- containerPort: 58047
hostPort: 58000
protocol: TCP
imagePullSecrets:
- name: regcred
</code></pre>
| itsmewajid | <p>Looks like you’re directly trying to expose a Deployment via a host port. That is not recommended: you should create a Service that instructs Kubernetes how to expose your Deployment to other workloads in the cluster and outside.</p>
<p>A NodePort service would allow you to properly expose your Deployment on each Node: your load balancer can then be configured to connect to that port on any of your <em>node</em> IPs.</p>
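<p>A minimal sketch, reusing the labels from the Deployment above (the node port value is illustrative and must fall in the default 30000-32767 range):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: iothub
spec:
  type: NodePort
  selector:
    app: iothub
  ports:
    - port: 58047
      targetPort: 58047
      nodePort: 30047
      protocol: TCP
</code></pre>
<p>Your network load balancer can then round-robin to <code><node-ip>:30047</code> on every node.</p>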
| Paul Annetts |
<p>How can I execute the "helm install" command and re-install resources that I have defined in "templates"? I have some custom resources that already exist, so I want to re-install them. Is it possible to do that through a parameter in the helm command?</p>
| Riccardo Califano | <p>I think your main question is:</p>
<blockquote>
<p>I have some custom resources that already exist so I want to re-install them.</p>
</blockquote>
<p>Which means <strong>DELETE</strong> then <strong>CREATE</strong> again.</p>
<h2>Short answer</h2>
<p>No.. but it can be done thru workaround</p>
<h2>Detailed answer</h2>
<p>Helm manages the RELEASE of the Kubernetes manifests by either:</p>
<ul>
<li>creating <code>helm install</code></li>
<li>updating <code>helm upgrade</code></li>
<li>deleting <code>helm delete</code></li>
</ul>
<p>However, you can recreate resources following one of these approaches :</p>
<p><strong>1. Twice Consecutive Upgrade</strong></p>
<p>If your chart is designed to enable/disable installation of resources with <strong>Values</strong> ( .e.g: <code>.Values.customResources.enabled</code>) you can do the following:</p>
<pre class="lang-sh prettyprint-override"><code>helm -n namespace upgrade <helm-release> <chart> --set customResources.enabled=false
# Then another Run
helm -n namespace upgrade <helm-release> <chart> --set customResources.enabled=true
</code></pre>
<p>So, if you are the builder of the chart, your task is to make the design functional.</p>
<p><strong>2. Using helmfile hooks</strong></p>
<p><a href="https://github.com/roboll/helmfile" rel="noreferrer">Helmfile</a> is Helm of Helm.</p>
<p>It manage your helm releases within a single file called <code>helmfile.yaml</code>.</p>
<p>Not only that, but it also can call some <strong>LOCAL commands</strong> before/or/after installing/or/upgrading a Helm release.
This call which happen before or after, is named <strong>hook</strong>.</p>
<p>For your case, you will need <strong>presync</strong> hook.</p>
<p>If we organize your helm release as a Helmfile definition , it should be :</p>
<pre class="lang-yaml prettyprint-override"><code>releases:
- name: <helm-release>
chart: <chart>
namespace: <namespace>
hooks:
- events: ["presync"]
showlogs: true
command: kubectl
args: [ "-n", "{{`{{ .Release.Namespace }}`}}", "delete", "crd", "my-custom-resources" ]
</code></pre>
<p>Now you just need to run <code>helmfile apply</code></p>
<p>I know that CRDs are not namespaced, but I put the namespace in the hook just to demonstrate that Helmfile can give you the release's namespace as a variable, so there's no need to repeat yourself.</p>
| Abdennour TOUMI |
<p>I have this BUILD file:</p>
<pre><code>package(default_visibility = ["//visibility:public"])
load("@npm_bazel_typescript//:index.bzl", "ts_library")
ts_library(
name = "lib",
srcs = glob(
include = ["**/*.ts"],
exclude = ["**/*.spec.ts"]
),
deps = [
"//packages/enums/src:lib",
"//packages/hello/src:lib",
"@npm//faker",
"@npm//@types/faker",
"@npm//express",
"@npm//@types/express",
],
)
load("@io_bazel_rules_docker//nodejs:image.bzl", "nodejs_image")
nodejs_image(
name = "server",
data = [":lib"],
entry_point = ":index.ts",
)
load("@io_bazel_rules_docker//container:container.bzl", "container_push")
container_push(
name = "push_server",
image = ":server",
format = "Docker",
registry = "gcr.io",
repository = "learning-bazel-monorepo/server",
tag = "dev",
)
load("@io_bazel_rules_k8s//k8s:object.bzl", "k8s_object")
k8s_object(
name = "k8s_deploy",
kind = "deployment",
namespace = "default",
template = ":server.yaml",
images = {
"deploy_server:do_not_delete": ":server"
},
)
</code></pre>
<p>But when running the <code>k8s_deploy</code> rule I get this error:</p>
<pre><code>INFO: Analyzed target //services/server/src:k8s_deploy (1 packages loaded, 7 targets configured).
INFO: Found 1 target...
Target //services/server/src:k8s_deploy up-to-date:
bazel-bin/services/server/src/k8s_deploy.substituted.yaml
bazel-bin/services/server/src/k8s_deploy
INFO: Elapsed time: 0.276s, Critical Path: 0.01s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
INFO: Build completed successfully, 1 total action
2019/12/22 07:45:14 Unable to publish images: unable to publish image deploy_server:do_not_delete
</code></pre>
<p>The <code>lib</code>, <code>server</code> and <code>push_server</code> rules work fine. So I don't know what's the issue as there is no specific error message.</p>
<p>A snippet out of my <code>server.yaml</code> file:</p>
<pre><code>spec:
containers:
- name: server
image: deploy_server:do_not_delete
</code></pre>
<p>You can try it yourself by running <code>bazel run //services/server/src:k8s_deploy</code> on this repo: <a href="https://github.com/flolude/minimal-bazel-monorepo/tree/de898eb1bb4edf0e0b1b99c290ff7ab57db81988" rel="noreferrer">https://github.com/flolude/minimal-bazel-monorepo/tree/de898eb1bb4edf0e0b1b99c290ff7ab57db81988</a></p>
| Florian Ludewig | <p>Have you pushed images using this syntax before? </p>
<p>I'm used to using the full repository tag for both the server.yaml and the k8s_object images. </p>
<p>So, instead of just "<code>deploy_server:do_not_delete</code>", try "<code>gcr.io/learning-bazel-monorepo/deploy_server:do_not_delete</code>".</p>
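<p>Concretely, that would mean changing both references (a sketch based on the files above):</p>
<pre><code># server.yaml
image: gcr.io/learning-bazel-monorepo/deploy_server:do_not_delete

# BUILD (k8s_object)
images = {
    "gcr.io/learning-bazel-monorepo/deploy_server:do_not_delete": ":server"
},
</code></pre>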
| Paul |
<p>We have a GKE ingress that is using the below frontend-service. The ingress terminates TLS as well. We want permanent HTTP-to-HTTPS redirects for any traffic that comes in on HTTP.</p>
<p>With the below configuration we have all working, and serving traffic on both http and https (without redirect).</p>
<p>The container used for the Deployment can be configured to rewrite HTTP to HTTPS with the --https-redirect flag. It also respects and trusts the <strong>X-Forwarded-Proto</strong> header, and will consider the request secure if the header value is set to <strong>https</strong>.</p>
<p>So the most reasonable configuration I can see for the readinessProbe would be the configuration below, but with the commented lines uncommented. However, as soon as we apply this version we never get into a healthy state, and instead the terminated domain configured with the Ingress returns with 502 and never recovers.</p>
<p>So what is wrong with the below approach?
I have tested using port-forwarding both the pod and the service, and they both return 301 if I do not provide the X-Forwarded-Proto header, and return 200 if I provide the X-Forwarded-Proto header with https value.</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: frontend-service
spec:
type: NodePort
ports:
- port: 8080
selector:
app: frontend
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: frontend
spec:
selector:
matchLabels:
app: frontend
template:
metadata:
labels:
app: frontend
spec:
containers:
- image: eu.gcr.io/someproject/frontend:master
imagePullPolicy: Always
# args:
# - '--https-redirect'
name: frontend
resources:
limits:
memory: 1Gi
cpu: '0.5'
ports:
- containerPort: 8080
name: frontend
readinessProbe:
httpGet:
path: /_readinessProbe
port: 8080
# httpHeaders:
# - name: X-Forwarded-Proto
# value: https
</code></pre>
| pjotr_dolphin | <p>The GCP Health Check is very picky about the HTTP response codes it gets back. It must be a 200, and not a redirect. In the configuration you have posted, the NLB gets a 301/302 response from your server, so it will mark your backend as unhealthy, as this is not a 200 response. This is likely if the health check is sending HTTP without the X-Forwarded-Proto header.</p>
<p>You can check this by inspecting the kubectl logs of your deployment's pods.</p>
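<p>For example:</p>
<pre><code>kubectl logs deployment/frontend --tail=50
</code></pre>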
<p>My previous answer may be useful if you move to an HTTPS health check, in an attempt to remedy this.</p>
<hr>
<p>From <a href="https://cloud.google.com/kubernetes-engine/docs/concepts/ingress" rel="nofollow noreferrer">GKE documentation</a>:
You will need to put an annotation on your Service definition that tells GKE to use HTTPS for the health check. Otherwise it will try sending HTTP and get confused.</p>
<pre><code>kind: Service
metadata:
name: my-service-3
annotations:
cloud.google.com/app-protocols: '{"my-https-port":"HTTPS","my-http-port":"HTTP"}'
spec:
type: NodePort
selector:
app: metrics
department: sales
ports:
- name: my-https-port
port: 443
targetPort: 8443
- name: my-http-port
port: 80
targetPort: 50001
</code></pre>
<p>I haven't used the latest syntax, but this used to work for me.</p>
<p>However this was so clunky to use I ended up going over to Istio and getting that to do all the HTTPS termination. That's no small undertaking however, but it you're thinking of using cert-manager/Let's Encrypt it might be worth exploring.</p>
| Paul Annetts |
<p>Consider this current namespace config in JSON format:</p>
<pre><code>$ kubectl get configmap config -n metallb-system -o json
{
"apiVersion": "v1",
"data": {
"config": "address-pools:\n- name: default\n protocol: layer2\n addresses:\n - 192.168.0.105-192.168.0.105\n - 192.168.0.110-192.168.0.111\n"
},
"kind": "ConfigMap",
"metadata": {
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"data\":{\"config\":\"address-pools:\\n- name: default\\n protocol: layer2\\n addresses:\\n - 192.168.0.105-192.168.0.105\\n - 192.168.0.110-192.168.0.111\\n\"},\"kind\":\"ConfigMap\",\"metadata\":{\"annotations\":{},\"name\":\"config\",\"namespace\":\"metallb-system\"}}\n"
},
"creationTimestamp": "2020-07-10T08:26:21Z",
"managedFields": [
{
"apiVersion": "v1",
"fieldsType": "FieldsV1",
"fieldsV1": {
"f:data": {
".": {},
"f:config": {}
},
"f:metadata": {
"f:annotations": {
".": {},
"f:kubectl.kubernetes.io/last-applied-configuration": {}
}
}
},
"manager": "kubectl",
"operation": "Update",
"time": "2020-07-10T08:26:21Z"
}
],
"name": "config",
"namespace": "metallb-system",
"resourceVersion": "2086",
"selfLink": "/api/v1/namespaces/metallb-system/configmaps/config",
"uid": "c2cfd2d2-866c-466e-aa2a-f3f7ef4837ed"
}
}
</code></pre>
<p>I am interested only in the address pools that are configured. As per the <a href="https://kubernetes.io/docs/reference/kubectl/cheatsheet/#formatting-output" rel="nofollow noreferrer">kubectl cheat sheet</a>, I can do something like this to fetch the required address range:</p>
<pre><code>$ kubectl get configmap config -n metallb-system -o jsonpath='{.data.config}'
address-pools:
- name: default
protocol: layer2
addresses:
- 192.168.0.105-192.168.0.105
- 192.168.0.110-192.168.0.111
</code></pre>
<p>However, my requirement is to use only a JSON parser throughout, and I cannot parse the above output since it is in YAML instead.</p>
<p>Since I'm not willing to accomodate the above yaml output for direct use ( or via format conversion operation ), is there any suitable way I can obtain the address range from the <code>kubectl</code> interface in a JSON format instead?</p>
| Siddharth Srinivasan | <p>You need <a href="https://github.com/mikefarah/yq" rel="nofollow noreferrer"><code>yq</code></a> alongside <code>kubectl</code> ...</p>
<p>After inspecting your configmap, I understood the structure when I converted it to YAML:</p>
<pre><code>---
apiVersion: v1
data:
config: |
address-pools:
- name: default
protocol: layer2
addresses:
- 192.168.0.105-192.168.0.105
- 192.168.0.110-192.168.0.111
kind: ConfigMap
metadata:
name: config
</code></pre>
<p>And you can see clearly that <code>.data.config</code> is a multi-line string, but it can also be parsed as YAML.</p>
<ul>
<li>parse the multi-line string with kubectl</li>
<li>treat this string as yaml using yq.</li>
</ul>
<p>So this is what you are looking for:</p>
<pre><code># all addresses
kubectl -n metallb-system get cm config -o 'go-template={{index .data "config" }}' | \
yq -r '.["address-pools"][0].addresses'
# first address only
kubectl -n metallb-system get cm config -o 'go-template={{index .data "config" }}' | \
yq -r '.["address-pools"][0].addresses[0]'
# second address only
kubectl -n metallb-system get cm config -o 'go-template={{index .data "config" }}' | \
yq -r '.["address-pools"][0].addresses[1]'
# so on ...
</code></pre>
| Abdennour TOUMI |
<p>I am trying to insert a multiline JSON string into a Helm template for the base64 encoding required for a Kubernetes secret.</p>
<p>Goals:</p>
<ul>
<li>helm value is injected into json string</li>
<li>multi-line json string must be base64 encoded using <code>b64enc</code></li>
</ul>
<p><code>myfile1.json</code> does not work but <code>myfile2.json</code> works.
I prefer not to put the entire JSON file in <code>values.yaml</code>.</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
name: {{ template "mychart.fullname" . }}
labels:
app: {{ template "mychart.name" . }}
chart: {{ template "mychart.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
type: Opaque
data:
myfile.json: {{ |-
{
"item1": {
"name": "{{ .Values.item1.name }}"
},
"item2": {
}
} | b64enc }}
myfile2.json: {{ .Values.myfile2 | b64enc }}
</code></pre>
| Steve | <p>You actually don't need to base64-encode the secret in the helm chart. If you use the <code>stringData</code> field instead of <code>data</code> field, Kubernetes knows that it needs to base64 encode the data upon the secret's deployment.</p>
<p>From the docs (<a href="https://kubernetes.io/docs/concepts/configuration/secret/#creating-a-secret-manually" rel="noreferrer">Source</a>):</p>
<blockquote>
<p>The Secret contains two maps: <code>data</code> and <code>stringData</code>. The <code>data</code> field is used to store arbitrary data, encoded using base64. The <code>stringData</code> field is provided for convenience, and allows you to provide secret data as unencoded strings.</p>
</blockquote>
<p>So we can rewrite your secret using <code>stringData</code> instead of <code>data</code> and keep multiline json strings in templates like so:</p>
<pre><code>apiVersion: "v1"
kind: "Secret"
metadata:
name: {{ template "mychart.fullname" . }}
labels:
app: {{ template "mychart.name" . }}
chart: {{ template "mychart.chart" . }}
release: {{ .Release.Name }}
heritage: {{ .Release.Service }}
type: "Opaque"
stringData:
myfile.json: |-
{
"item1": {
"name": "{{ .Values.item1.name }}"
},
"item2": {
}
}
myfile2.json: {{ .Values.myfile2 }}
</code></pre>
<p>Note that this does not mean you suddenly need to worry about having unencoded secrets. <code>stringData</code> will ultimately be base64-encoded and converted to <code>data</code> when it is installed, so it will behave exactly the same once it's loaded into Kubernetes.</p>
<p>Again, from the docs <strong>(emphasis mine)</strong> (<a href="https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/#secret-v1-core" rel="noreferrer">Source</a>):</p>
<blockquote>
<p><code>stringData</code> allows specifying non-binary secret data in string form. <strong>It is provided as a write-only convenience method.</strong> All keys and values are merged into the <code>data</code> field on write, overwriting any existing values. <strong>It is never output when reading from the API.</strong></p>
</blockquote>
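<p>If you want to double-check the round trip after installing the chart, you can read the stored value back; a quick sketch, with the secret name assumed to be what the fullname template renders:</p>
<pre><code># the secret name here is an assumption; substitute your rendered fullname
kubectl get secret my-release-mychart -o jsonpath='{.data.myfile\.json}' | base64 -d
</code></pre>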
| Technetium |
<p>Gist: I am struggling to get a pod to connect to a service outside the cluster.
Basically the pod manages to resolve the ClusterIP of the selectorless service, but traffic does not go through. Traffic does go through if I hit the ClusterIP of the selectorless service from the cluster host.</p>
<p>I'm fairly new to microk8s and k8s in general. I hope I am making some sense though...</p>
<p>Background:</p>
<p>I am attempting to move parts of my infrastructure from a docker-compose setup on one virtual machine, to a microk8s cluster (with 2 nodes).</p>
<p>In the docker compose, i have a Grafana Container, connecting to an InfluxDb container.</p>
<p>kubectl version:</p>
<pre><code>Client Version: version.Info{Major:"1", Minor:"22+", GitVersion:"v1.22.2-3+9ad9ee77396805", GitCommit:"9ad9ee77396805781cd0ae076d638b9da93477fd", GitTreeState:"clean", BuildDate:"2021-09-30T09:52:57Z", GoVersion:"go1.16.8", Compiler:"gc", Platform:"linux/amd64"}
</code></pre>
<p>I now want to set up a Grafana container on the microk8s cluster and have it connect to the InfluxDB that is still running on the docker-compose VM.</p>
<p>All of these VM's are running on an ESXi host.</p>
<ul>
<li>InfluxDb is exposed at 10.1.2.220:8086</li>
<li>microk8s-master has ip 10.1.2.50</li>
<li>microk8s-slave-1 has ip 10.1.2.51</li>
</ul>
<p>I have enabled ingress and dns. I have also enabled metallb, though I don't intend to use it here.</p>
<p>I have configured a selectorless service, a remote endpoint and an egress Network Policy (currently allowing all).</p>
<p>From microk8s-master and slave-1, I can</p>
<ul>
<li>telnet directly to 10.1.2.220:8086 successfully</li>
<li><strong>telnet to the ClusterIP(10.152.183.26):8086 of the service, successfully reaching influxdb</strong></li>
<li>wget ClusterIp:8086</li>
</ul>
<p>Inside the pod, if I do a wget to influxdb-service:8086, it resolves to the ClusterIP, but after that it times out.
I can, however, reach (via wget) services pointing to other pods in the same namespace.</p>
<p><strong>Update:</strong></p>
<p>I have been able to get it to work through a workaround, but I don't think this is the correct way.</p>
<p>My temporary solution is to expose the selectorless service via MetalLB, then use that exposed IP inside the pod.</p>
<p>Service and Endpoints for InfluxDb</p>
<pre><code>---
apiVersion: v1
kind: Service
metadata:
name: influxdb-service
labels:
app: grafana
spec:
ports:
- protocol: TCP
port: 8086
targetPort: 8086
---
apiVersion: v1
kind: Endpoints
metadata:
name: influxdb-service
subsets:
- addresses:
- ip: 10.1.2.220
ports:
- port: 8086
</code></pre>
<p>The service and endpoint shows up fine</p>
<pre><code>eso@microk8s-master:~/k8s-grafana$ microk8s.kubectl get endpoints
NAME ENDPOINTS AGE
neo4j-service-lb 10.1.166.176:7687,10.1.166.176:7474 25h
influxdb-service 10.1.2.220:8086 127m
questrest-service 10.1.166.178:80 5d
kubernetes 10.1.2.50:16443,10.1.2.51:16443 26d
grafana-service 10.1.237.120:3000 3h11m
eso@microk8s-master:~/k8s-grafana$ microk8s.kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 26d
questrest-service ClusterIP 10.152.183.56 <none> 80/TCP 5d
neo4j-service-lb LoadBalancer 10.152.183.166 10.1.2.60 7474:31974/TCP,7687:32688/TCP 25h
grafana-service ClusterIP 10.152.183.75 <none> 3000/TCP 3h13m
influxdb-service ClusterIP 10.152.183.26 <none> 8086/TCP 129m
eso@microk8s-master:~/k8s-grafana$ microk8s.kubectl get networkpolicy
NAME POD-SELECTOR AGE
grafana-allow-egress-influxdb app=grafana 129m
test-egress-influxdb app=questrest 128m
</code></pre>
<p>Describe:</p>
<pre><code>eso@microk8s-master:~/k8s-grafana$ microk8s.kubectl describe svc influxdb-service
Name: influxdb-service
Namespace: default
Labels: app=grafana
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.152.183.26
IPs: 10.152.183.26
Port: <unset> 8086/TCP
TargetPort: 8086/TCP
Endpoints: 10.1.2.220:8086
Session Affinity: None
Events: <none>
eso@microk8s-master:~/k8s-grafana$ microk8s.kubectl describe endpoints influxdb-service
Name: influxdb-service
Namespace: default
Labels: <none>
Annotations: <none>
Subsets:
Addresses: 10.1.2.220
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
<unset> 8086 TCP
Events: <none>
eso@microk8s-master:~/k8s-grafana$ microk8s.kubectl describe networkpolicy grafana-allow-egress-influxdb
Name: grafana-allow-egress-influxdb
Namespace: default
Created on: 2021-11-03 20:53:00 +0000 UTC
Labels: <none>
Annotations: <none>
Spec:
PodSelector: app=grafana
Not affecting ingress traffic
Allowing egress traffic:
To Port: <any> (traffic allowed to all ports)
To: <any> (traffic not restricted by destination)
Policy Types: Egress
</code></pre>
<p>Grafana.yml:</p>
<pre><code>eso@microk8s-master:~/k8s-grafana$ cat grafana.yml
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: grafana-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteMany
storageClassName: ""
claimRef:
name: grafana-pvc
namespace: default
persistentVolumeReclaimPolicy: Retain
nfs:
path: /mnt/MainVol/grafana
server: 10.2.0.1
readOnly: false
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: grafana-pvc
spec:
accessModes:
- ReadWriteMany
storageClassName: ""
volumeName: grafana-pv
resources:
requests:
storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: grafana
name: grafana
spec:
selector:
matchLabels:
app: grafana
template:
metadata:
labels:
app: grafana
spec:
securityContext:
fsGroup: 472
supplementalGroups:
- 0
containers:
- name: grafana
image: grafana/grafana:7.5.2
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3000
name: http-grafana
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /robots.txt
port: 3000
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 30
successThreshold: 1
timeoutSeconds: 2
livenessProbe:
failureThreshold: 3
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
tcpSocket:
port: 3000
timeoutSeconds: 1
resources:
requests:
cpu: 250m
memory: 750Mi
volumeMounts:
- mountPath: /var/lib/grafana
name: grafana-pv
volumes:
- name: grafana-pv
persistentVolumeClaim:
claimName: grafana-pvc
---
apiVersion: v1
kind: Service
metadata:
name: grafana-service
spec:
ports:
- port: 3000
protocol: TCP
targetPort: http-grafana
selector:
app: grafana
#sessionAffinity: None
#type: LoadBalancer
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: grafana-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: "g2.some.domain.com"
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: grafana-service
port:
number: 3000
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: grafana-allow-egress-influxdb
namespace: default
spec:
podSelector:
matchLabels:
app: grafana
ingress:
- {}
egress:
- {}
policyTypes:
- Egress
</code></pre>
| takilara | <p>As I haven't gotten much response, I'll answer the question with my "workaround". I am still not sure this is the best way to do it though.</p>
<p>I got it to work by exposing the selectorless service via MetalLB, then using that exposed IP inside Grafana:</p>
<pre><code>kind: Service
apiVersion: v1
metadata:
name: influxdb-service-lb
#namespace: ingress
spec:
type: LoadBalancer
loadBalancerIP: 10.1.2.61
# selector:
# app: grafana
ports:
- name: http
protocol: TCP
port: 8086
targetPort: 8086
---
apiVersion: v1
kind: Endpoints
metadata:
name: influxdb-service-lb
subsets:
- addresses:
- ip: 10.1.2.220
ports:
- name: influx
protocol: TCP
port: 8086
</code></pre>
<p>I then use the load balancer IP (10.1.2.61) in Grafana.</p>
<hr />
<p>Update October 2022:
in response to a comment above, I have added a diagram of how I believe this works.</p>
<p><a href="https://i.stack.imgur.com/3icoz.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3icoz.png" alt="enter image description here" /></a></p>
| takilara |
<p>Background:
Our project needs to store balance info in a KV store. The KV store needs strong data consistency across DCs (different Kubernetes clusters), because we aim to deploy the KV store across 3 DCs (different Kubernetes clusters) for high availability.</p>
<p>Our design:
Deploy 3 Hazelcast nodes to 3 different Kubernetes clusters/containers in 3 different DCs.
That is, each DC has 1 Kubernetes cluster, and the Hazelcast node is deployed to a container in that cluster. So 3 Hazelcast nodes in total form one cluster.</p>
<p>Our questions:</p>
<ol>
<li>Are those 3 Hazelcast nodes able to form a cluster? As we know, they communicate through TCP only, but communication across different Kubernetes clusters/namespaces needs to go through an Ingress, which is HTTP.</li>
<li>Is HazelCast able to persist data to the Kubernetes PVC(PersistentVolumeClaim)?</li>
<li>Is the data strongly consistent among the 3 Hazelcast nodes? Is it base on Raft?</li>
</ol>
| Fathand Qi | <ol>
<li>You probably don't want to form a Hazelcast cluster across different Kubernetes cluster, as network-wise, it wouldn't be very performant. You need to check <a href="https://docs.hazelcast.com/imdg/4.2/wan/wan.html" rel="nofollow noreferrer">WAN Replication</a></li>
<li>Hazelcast stores data in-memory and not on disk.</li>
<li><a href="https://docs.hazelcast.com/imdg/4.2/cp-subsystem/cp-subsystem.html" rel="nofollow noreferrer">Hazelcast CP subsystem</a> implements Raft. Note that only a handful of concurrency primitives are backed by the CP subsystem (e.g. <code>IAtomicReference</code>, <code>FencedLock</code>); the <code>IMap</code> KV store itself is AP rather than CP.</li>
</ol>
| Nicolas |
<p>I am using busybox to diagnose a network problem in my Kubernetes v1.18 pods. I created the busybox pod like this:</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: busybox
namespace: default
spec:
containers:
- name: busybox
image: busybox:1.28
command:
- sleep
- "3600"
imagePullPolicy: IfNotPresent
restartPolicy: Always
</code></pre>
<p>and log in to examine the cluster's network situation:</p>
<pre><code> kubectl exec -it busybox /bin/bash
</code></pre>
<p>What surprises me is that busybox does not contain curl. Why does the busybox package not include the curl command? I searched the internet and found that the docs do not talk about how to add curl to busybox. I tried to install curl, but found no way to do this. Is there any way to add the curl package to busybox?</p>
| Dolphin | <p>The short answer is: you cannot.</p>
<h3>Why?</h3>
<p>Because busybox does not have a package manager like yum, apk, or apt-get.</p>
<h3>Actually you have two solutions:</h3>
<p><strong>1. Either use a modified busybox</strong></p>
<p>You can use other busybox images like <a href="https://github.com/progrium/busybox" rel="noreferrer">progrium/busybox</a> which provides <strong>opkg-install</strong> as a package manager.</p>
<pre><code>image: progrium/busybox
</code></pre>
<p>Then:</p>
<pre><code>kubectl exec -it busybox -- opkg-install curl
</code></pre>
<p><strong>2. Or if your concern to use a minimal image, you can use alpine</strong></p>
<pre><code>image: alpine:3.12
</code></pre>
<p>then:</p>
<pre><code>kubectl exec -it alpine -- apk --update add curl
</code></pre>
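<p>Also worth knowing: stock busybox already ships a minimal <code>wget</code> applet, which is often enough for quick HTTP checks without installing anything (the URL below is just a placeholder):</p>
<pre><code>kubectl exec -it busybox -- wget -qO- http://my-service:8080/
</code></pre>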
| Abdennour TOUMI |
<p>I have a question about authentication and security in Hazelcast Client. Which methods can I use to authenticate Hazelcast Clients? I'm using an open-source 5.0 version on k8s.</p>
<p>Thanks
Dũng</p>
| DungLe | <p><a href="https://docs.hazelcast.com/hazelcast/5.0/security/default-authentication" rel="nofollow noreferrer">Authentication</a> is part of the Enterprise package. With that, you'll have dedicated support.</p>
| Nicolas |
<p>I am trying to deploy a simple zookeeper ensemble following the tutorial from the <a href="https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/" rel="nofollow noreferrer">official Kubernetes website</a>. The tutorial states that I need</p>
<blockquote>
<p>a cluster with at least four nodes and each node requires at least 2
CPUs and 4 GiB of memory.</p>
</blockquote>
<p>I ignored this fact and created a cluster with 3 nodes of n1-standard-1 (1 vCPU, 3.73 GB memory).
When I tried to apply the .yaml file:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: zk-hs
labels:
app: zk
spec:
ports:
- port: 2888
name: server
- port: 3888
name: leader-election
clusterIP: None
selector:
app: zk
---
apiVersion: v1
kind: Service
metadata:
name: zk-cs
labels:
app: zk
spec:
ports:
- port: 2181
name: client
selector:
app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
name: zk-pdb
spec:
selector:
matchLabels:
app: zk
maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: zk
spec:
selector:
matchLabels:
app: zk
serviceName: zk-hs
replicas: 3
updateStrategy:
type: RollingUpdate
podManagementPolicy: OrderedReady
template:
metadata:
labels:
app: zk
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: "app"
operator: In
values:
- zk
topologyKey: "kubernetes.io/hostname"
containers:
- name: kubernetes-zookeeper
imagePullPolicy: Always
image: "k8s.gcr.io/kubernetes-zookeeper:1.0-3.4.10"
resources:
requests:
memory: "1Gi"
cpu: "0.5"
ports:
- containerPort: 2181
name: client
- containerPort: 2888
name: server
- containerPort: 3888
name: leader-election
command:
- sh
- -c
- "start-zookeeper \
--servers=3 \
--data_dir=/var/lib/zookeeper/data \
--data_log_dir=/var/lib/zookeeper/data/log \
--conf_dir=/opt/zookeeper/conf \
--client_port=2181 \
--election_port=3888 \
--server_port=2888 \
--tick_time=2000 \
--init_limit=10 \
--sync_limit=5 \
--heap=512M \
--max_client_cnxns=60 \
--snap_retain_count=3 \
--purge_interval=12 \
--max_session_timeout=40000 \
--min_session_timeout=4000 \
--log_level=INFO"
readinessProbe:
exec:
command:
- sh
- -c
- "zookeeper-ready 2181"
initialDelaySeconds: 10
timeoutSeconds: 5
livenessProbe:
exec:
command:
- sh
- -c
- "zookeeper-ready 2181"
initialDelaySeconds: 10
timeoutSeconds: 5
volumeMounts:
- name: datadir
mountPath: /var/lib/zookeeper
securityContext:
runAsUser: 1000
fsGroup: 1000
volumeClaimTemplates:
- metadata:
name: datadir
spec:
accessModes: [ "ReadWriteOnce" ]
resources:
requests:
storage: 10Gi
</code></pre>
<p>And of course, I got the error <strong>PodUnschedulable</strong></p>
<p>In this file, I could not find anything that says I need a cluster of 4 nodes with 2 CPUs and 4GB RAM each.
So:</p>
<ul>
<li>What defines how many resources my deployment needs?</li>
<li>How to understand/calculate the required resources of applications and their respective deployments beforehand?</li>
<li>Zookeeper runs on 2GB RAM according to requirements but this is only a recommended configuration.</li>
</ul>
| Adelin | <p>By default, a Kubernetes node does <strong>not come empty.</strong> Instead, it has processes running before your apps' workload even starts:</p>
<ul>
<li>kubelet is running (on each node)</li>
<li>kube-proxy is running as a daemonset (on each node)</li>
<li>the container runtime (Docker) is running on each node</li>
<li>other daemonsets can be running (like the aws-node DS in the case of EKS).</li>
</ul>
<blockquote>
<p>We are discussing worker nodes here, not masters.</p>
</blockquote>
<p>So, taking all of that into account, you will end up needing to choose adequate resources for each node.</p>
<p>Not all nodes must be the same size. However, you decide which size you need according to the type of your apps:</p>
<ul>
<li><p>If your apps eat more memory than CPU (like Java apps), a node of <strong>[2 CPUs, 8GB]</strong> is a better choice than <strong>[4 CPUs, 8GB]</strong>.</p>
</li>
<li><p>If your apps eat more CPU than memory (like ML workloads), choose the opposite: compute-optimized instances.</p>
</li>
<li><p>The golden rule 🏆 is to reason about the <strong>whole cluster capacity</strong> rather than the individual capacity of each node.</p>
</li>
</ul>
<p>This means <strong>3 large</strong> nodes might be better than <strong>4 medium</strong> nodes, in terms of both cost and the best usage of capacity.</p>
<p><a href="https://i.stack.imgur.com/le3cH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/le3cH.png" alt="enter image description here" /></a></p>
<p>In conclusion, node resources must be:</p>
<ul>
<li>no less than 2 CPUs</li>
<li>no less than 4GB memory</li>
</ul>
<p>Otherwise, you should expect capacity issues.</p>
<hr />
<p>Now, that is the first half of the answer: identify the capacity of the cluster.</p>
<p>The second half is about answering <strong>how to assign resources to each app (pod)</strong>.</p>
<p>This falls into another question: how much does your app consume?</p>
<p>To answer this question, you need to monitor your app with APM tools like Prometheus + Grafana.</p>
<p>Once you get insight into the average consumption, it's time to set <strong>resource limits</strong> for your app (its pods).</p>
<p><strong>Limits</strong> might throttle the app; that's why, alongside them, you need to set up horizontal auto-scaling:</p>
<ul>
<li>Run Pods inside a Deployment to manage replicas and rollouts.</li>
<li>HPA, or Horizontal Pod Autoscaler: it monitors the pods of the deployment, then scales out/in according to thresholds (CPU, memory); see the sketch after this list.</li>
</ul>
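<p>A minimal HPA sketch to make that last point concrete; the target name and thresholds here are assumptions, not values from your setup:</p>
<pre><code>apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 5
  targetCPUUtilizationPercentage: 70
</code></pre>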
<p>In conclusion for this part, we can say:</p>
<p><strong>- Measure:</strong> start measuring to identify <code>resources.limits</code> and <code>resources.requests</code>.</p>
<p><strong>- Measure:</strong> measure again after running the app to re-identify the needed resources.</p>
<p><strong>- Measure:</strong> keep measuring.</p>
| Abdennour TOUMI |
<p>I am using Argo Events sensors to create a Kubernetes Job. The sensor gets triggered correctly, but it gives me the error "the server could not find the requested resource".</p>
<p>Here is my sensor.yaml</p>
<pre><code>apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
name: exp-webhook
spec:
template:
serviceAccountName: crypto-job-sa
dependencies:
- name: payload
eventSourceName: webhook
eventName: devops-toolkit
triggers:
- template:
name: sample-job
k8s:
group: batch
version: v1
resource: Job
operation: create
source:
resource:
apiVersion: batch/v1
kind: Job
metadata:
name: exp-job-crypto
# annotations:
# argocd.argoproj.io/hook: PreSync
# argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
ttlSecondsAfterFinished: 100
template:
spec:
restartPolicy: OnFailure
containers:
- name: crypto-job
image: docker.artifactory.xyz.com/xyz/pqr:master-b1b347a
</code></pre>
<p>And here are the error details:</p>
<pre><code> {"level":"error","ts":1624900390.6760905,"logger":"argo-events.sensor","caller":"sensors/listener.go:271","msg":"failed to execute a trigger","sensorName":"exp-webhook","error":"failed to execute trigger: timed out waiting for the condition: the server could not find the requested resource",
"errorVerbose":"timed out waiting for the condition: the server could not find the requested resource\nfailed to execute trigger\ngithub.com/argoproj/argo-events/sensors.
(*SensorContext).triggerOne\n\t/home/jenkins/agent/workspace/argo-events_master/sensors/listener.go:328\ngithub.com/argoproj/argo-events/sensors.(*SensorContext).triggerActions\n\t/home/jenkins/agent/workspace/argo-events_master/sensors/listener.go:269\ngithub.com/argoproj/argo-events/sensors.(*SensorContext).listenEvents.func1.3\n\t/home/jenkins/agent/workspace/argo-events_master/sensors/listener.go:181\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1357","triggerName":"sample-job","triggeredBy":["payload"],"triggeredByEvents":["32396264373063382d306336312d343039322d616536652d623965383531346666373234"],"stacktrace":"github.com/argoproj/argo-events/sensors.
(*SensorContext).triggerActions\n\t/home/jenkins/agent/workspace/argo-events_master/sensors/listener.go:271\ngithub.com/argoproj/argo-events/sensors.(*SensorContext).listenEvents.func1.3\n\t/home/jenkins/agent/workspace/argo-events_master/sensors/listener.go:181"}
</code></pre>
<p>But it does not say which resource was not found. Can someone please help? I don't know what the mistake is here.</p>
| TruckDriver | <p>So the error was: instead of</p>
<pre><code>resource: Job
</code></pre>
<p>it should be</p>
<pre><code>resource: jobs
</code></pre>
<p>That fixed this issue.</p>
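<p>The reason is that the trigger's <code>resource</code> field expects the lowercase plural resource name used in Kubernetes API paths, not the kind. If in doubt, you can look the plural name up with a plain kubectl query:</p>
<pre><code>kubectl api-resources | grep -i job
</code></pre>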
| TruckDriver |
<p>Deployment.yaml</p>
<pre><code>...
env: {{ .Values.env}}
...
</code></pre>
<p>Values.yaml:</p>
<pre><code>env:
- name: "DELFI_DB_USER"
value: "yyy"
- name: "DELFI_DB_PASSWORD"
value: "xxx"
- name: "DELFI_DB_CLASS"
value: "com.mysql.jdbc.Driver"
- name: "DELFI_DB_URL"
value: "jdbc:sqlserver://dockersqlserver:1433;databaseName=ddbeta;sendStringParametersAsUnicode=false"
</code></pre>
<p>feels like I'm missing something obvious.<br>
linter says: ok<br>
template says:</p>
<blockquote>
<p>env: [map[name:DELFI_DB_USER value:yyy] map[name:DELFI_DB_PASSWORD
value:xxx] map[name:DELFI_DB_CLASS value:com.mysql.jdbc.Driver]
map[value:jdbc:mysql://dockersqlserver.{{ .Release.Namespace
}}.svc.cluster.local:3306/ddbeta\?\&amp\;useSSL=true\&amp\;requireSSL=false
name:DELFI_DB_URL]]</p>
</blockquote>
<p>upgrade says:</p>
<blockquote>
<p>Error: UPGRADE FAILED: YAML parse error on
xxx/templates/deployment.yaml: error converting YAML to JSON: yaml:
line 35: found unexpected ':'</p>
</blockquote>
<p>solution:</p>
<pre><code>env:
{{- range .Values.env }}
- name: {{ .name | quote }}
value: {{ .value | quote }}
{{- end }}
</code></pre>
| 4c74356b41 | <p>The current Go template expansion will give output which is not YAML: </p>
<pre><code>env: {{ .Values.env}}
</code></pre>
<p>becomes:</p>
<pre><code>env: [map[name:DELFI_DB_USER value:yyy] map[name:DELFI_DB_PASSWORD value:xxx] ...]
</code></pre>
<p>i.e. Go's string rendering of the list, which is not valid YAML.</p>
<p>The Helm Go template needs to loop over the items of the source YAML list.
This is described in the <a href="https://docs.helm.sh/chart_template_guide/#looping-with-the-range-action" rel="nofollow noreferrer">Helm docs</a>.</p>
<p>The correct Deployment.yaml is:</p>
<pre><code>...
env:
{{- range .Values.env }}
- name: {{ .name | quote }}
value: {{ .value | quote }}
{{- end }}
...
</code></pre>
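<p>As an alternative to the explicit loop, Helm's <code>toYaml</code> function can dump the whole list in one expression; a sketch, where you would adjust the indent count to match wherever <code>env:</code> sits in your Deployment:</p>
<pre><code>env:
{{ toYaml .Values.env | indent 2 }}
</code></pre>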
| Paul Annetts |
<p>I would like to run a one-off container from the command line in my Kubernetes cluster. The equivalent of:</p>
<pre><code>docker run --rm -it centos /bin/bash
</code></pre>
<p>Is there a <code>kubectl</code> equivalent?</p>
| Dmitry Minkovsky | <p>The <code>kubectl</code> equivalent of</p>
<pre><code>docker run --rm -it centos /bin/bash
</code></pre>
<p>is</p>
<pre><code>kubectl run tmp-shell --restart=Never --rm -i --tty --image centos -- /bin/bash
</code></pre>
<p>Notes:</p>
<ul>
<li><p>This will create a Pod named <code>tmp-shell</code>. If you don't specify <code>--restart=Never</code>, a Deploment will be created instead (credit: Urosh T's answer).</p>
</li>
<li><p><code>--rm</code> ensures the Pod is deleted when the shell exits.</p>
</li>
<li><p>If you want to detach from the shell and leave it running with the ability to re-attach, omit the <code>--rm</code>. You will then be able to reattach with: <code>kubectl attach $pod-name -c $pod-container -i -t</code> after you exit the shell.</p>
</li>
<li><p>If your shell does not start, check whether your cluster is out of resources (<code>kubectl describe nodes</code>). You can specify resource requests with <code>--requests</code>:</p>
<pre><code>--requests='': The resource requirement requests for this container. For example, 'cpu=100m,memory=256Mi'. Note that server side components may assign requests depending on the server configuration, such as limit ranges.
</code></pre>
</li>
</ul>
<p>(Credit: <a href="https://gc-taylor.com/blog/2016/10/31/fire-up-an-interactive-bash-pod-within-a-kubernetes-cluster" rel="noreferrer">https://gc-taylor.com/blog/2016/10/31/fire-up-an-interactive-bash-pod-within-a-kubernetes-cluster</a>)</p>
| Dmitry Minkovsky |
<p>I have access to a cluster with a lot of nodes. I am running my Nextflow workflow using this command:</p>
<pre><code>./nextflow kuberun user/repo -c nextflow.config -profile kubernetes -v my_pvc:/mounted_path -with-report _report.html -with-trace _trace
</code></pre>
<p>I would like to run my nextflow workflow on a specific set of nodes. I have already labeled my nodes of interest:</p>
<pre><code>kubectl label nodes node1 disktype=my_experiment
kubectl label nodes node2 disktype=my_experiment
kubectl label nodes node3 disktype=my_experiment
</code></pre>
<p>I do not understand from the Nextflow and Kubernetes documentation how to schedule my workflow with the processes split between my nodes of interest.</p>
<p>I understand how to do this with Kubernetes alone: <a href="https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/</a></p>
<p>But not how to do it through the <code>nextflow kuberun</code> command.</p>
<p>Any help is very appreciated, thank you!</p>
| Nine | <p>Use one or more <a href="https://www.nextflow.io/docs/latest/config.html#process-selectors" rel="nofollow noreferrer">process selectors</a> and the <a href="https://www.nextflow.io/docs/latest/process.html#pod" rel="nofollow noreferrer">pod directive</a> to select the nodes using a pod label. For example, the following could be added to your 'kubernetes' profile:</p>
<pre><code>process {
withName: my_process {
pod {
nodeSelector = 'disktype=my_experiment'
}
}
...
}
</code></pre>
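<p>And if every process in the workflow should land on the labelled nodes, the same directive can be set once for all processes instead of per-process; a sketch based on the pod directive documented above:</p>
<pre><code>process {
    pod = [nodeSelector: 'disktype=my_experiment']
}
</code></pre>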
| Steve |
<p>We have a Kubernetes deployment consisting of a nodejs front end and an nginx backend. We're finding that the two deployments work fine in Kubernetes individually, but when they are both deployed requests to the front end return a 404 almost exactly 50% of the time.</p>
<p>It's natural to assume there is an issue with our virtual service, but this seems to not be the case, based on the fact that the deployment of the vs/gateway is not sufficient to cause the issue. It also seems that if we deploy a different, unrelated image in the backend, the front-end continues to work without 404 errors.</p>
<p>The app was originally generated via JHipster, and we manually separated the front-end and backend components. The front-end is nodejs, the backend is Java/nginx. The app works locally, but fails in a k8s deployment.</p>
<p>Also, our Kubernetes deployment is in Rancher.</p>
<p>Experiments seem to indicate it is related to something in our back-end deployment, so I'm including our backend deployement.yaml below:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: ourapp-be-custom-mount
spec:
revisionHistoryLimit: 3
replicas: 1
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 25%
maxSurge: 25%
template:
spec:
containers:
- name: ourapp-be-custom-mount
image: "IMAGE_SET_BY_OVERLAYS_KUSTOMIZATION"
envFrom:
- configMapRef:
name: ourapp-be-config
ports:
- name: http
containerPort: 8080
resources:
limits:
cpu: "0.5"
memory: "2048Mi"
requests:
cpu: "0.1"
memory: "64Mi"
imagePullPolicy: IfNotPresent
volumeMounts:
- mountPath: /usr/share/h2/data
name: ourapp-db-vol01-custom-mount
securityContext:
runAsNonRoot: true
runAsUser: 1000
imagePullSecrets:
- name: regcred-nexus
volumes:
- name: ourapp-db-vol01-custom-mount
persistentVolumeClaim:
claimName: ourapp-db-pvc-volume01-custom-mount
terminationGracePeriodSeconds: 30
</code></pre>
| Adam Wise | <p>Each Service needs to point to a different app. You can verify in Rancher that each Service points to a different app. Check your YAML: if you are using Kustomize, <code>commonLabels: app</code> can trip you up by stamping the same <code>app</code> label onto both workloads, so one Service's selector matches pods from both, which would also explain the almost exactly 50% failure rate (traffic round-robins between the two apps). Make sure the selectors point to different apps for frontend and backend.</p>
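<p>A quick way to confirm this (the service names below are assumptions) is to compare the selectors and the pods each Service actually resolves to:</p>
<pre><code>kubectl get svc frontend backend -o wide   # compare the SELECTOR column
kubectl get endpoints frontend backend     # each should list a different set of pod IPs
</code></pre>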
| Adam Wise |
<p>We have some weird memory leaking issues with our containers where the longer they live, the more resources they take. We do not have the resources at the moment to look into these issues (as they don't become problems for over a month) but would like to avoid manual work to "clean up" the bloated containers.</p>
<p>What I'd like to do is configure our deployments in such a way that "time alive" is a parameter for the state of a pod, and if it exceeds a value (say a couple of days) the pod is killed off and a new one is created. I'd prefer to do this entirely within Kubernetes: while we will eventually be adding a "health check" endpoint to our services, that will not be possible for a while. </p>
<p>What is the best way to implement this sort of a "max age" parameter on the healthiness of a pod? Alternatively, I guess we could trigger based off of resource usage, but it's not an issue if the use is temporary, only if the resources aren't released after a short while.</p>
| Marshall Tigerus | <p>The easiest way is to put a hard resource limit on memory that is above what you would see in a temporary spike: at a level that you'd expect to see over say a couple of weeks.</p>
<p>It's probably a good idea to do this anyhow, as k8s will schedule workloads based on requested resources, not their limit, so you could end up with memory pressure in a node as the memory usage increases.</p>
<p>One problem is that if you have significant memory spikes, the restart where k8s kills your pod would probably happen in the middle of some workload, so you'd need to be able to absorb that effect.</p>
<p>So, from <a href="https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#resource-requests-and-limits-of-pod-and-container" rel="nofollow noreferrer">the documentation</a> it would look something like this (and clearly <code>Deployment</code> would be preferable to a raw <code>Pod</code> as shown below, this example can be carried over into a <code>PodTemplateSpec</code>):</p>
<pre><code>apiVersion: v1
kind: Pod
metadata:
name: frontend
spec:
containers:
- name: ccccc
image: theimage
resources:
requests:
memory: "64Mi"
limits:
memory: "128Mi"
</code></pre>
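<p>With that limit in place, the kubelet will OOM-kill the container once it crosses 128Mi, and the pod's restart policy brings it back: effectively the periodic clean-up you were after, triggered by the leak's growth rather than by wall-clock age.</p>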
| Paul Annetts |
<p>I have set up Jenkins to run on a GKE Kubernetes cluster. I wrote the Jenkinsfile to define a CI/CD pipeline. But when it reaches the docker build command, it gives me the following error:
"Segmentation fault (core dumped)
Build step 'Execute shell' marked build as failure
Finished: FAILURE"</p>
<p>After that I set up a new test job: a random command executes successfully, but when I run docker version I get the same error. The error comes whenever I run docker commands. I have restarted my Jenkins pod and freshly set up a new Jenkins instance on the cluster, but the error was still there. I need help!! Any feedback is much appreciated.</p>
<p>Regards,</p>
| devops_enthusiast | <p>The reason you are having an issue is because you are trying to run docker inside a container. The Jenkins pod(s) are themselves running in a container (docker or otherwise) inside the kubernetes cluster. It can be very tricky to run docker inside a container. There is a lot of help out there on how to do it - search for "docker in docker" or "dind", but there are a lot of reasons you do not want to do that. Security being a big issue here.</p>
<p>Instead, you may consider some other way to build your container images, without using a docker command. Search for "building containers without docker" or something similar. My favourite is to use <a href="https://github.com/GoogleContainerTools/kaniko" rel="nofollow noreferrer">kaniko</a>. Kaniko avoids the issues of running docker inside a container, and is compatible with the same Dockerfile you already use.</p>
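<p>For a flavour of what that looks like, here is a minimal kaniko container sketch; the registry, image name and git context are all placeholders:</p>
<pre><code>containers:
- name: kaniko
  image: gcr.io/kaniko-project/executor:latest
  args:
  - --dockerfile=Dockerfile
  - --context=git://github.com/my-org/my-repo.git
  - --destination=registry.example.com/my-app:latest
</code></pre>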
<p>There are other ways to do this as well. Searching will give some good results.</p>
| dlaidlaw |
<p>I'm running a Kubernetes cluster on <code>AWS</code> using <code>Kops</code> for the first time, and I need some help exposing the services to the public with an AWS-managed domain name and an SSL certificate.</p>
<p>The cluster is running in a private VPC and I can access it through a bastion instance.</p>
<p>Right now I'm exposing the services to the public using LoadBalancer service type as follow:</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
name: my-gateway-service
namespace: {{ .Values.nameSpace }}
labels:
app: gateway
tier: backend
annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: 'http'
service.beta.kubernetes.io/aws-load-balancer-ssl-port: '{{ .Values.services.sslPort }}'
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: '{{ .Values.services.sslCert }}'
spec:
type: LoadBalancer
selector:
app: gateway
tier: backend
ports:
- name: http
port: 80
targetPort: {{ .Values.applications.nodeAppPort }}
- name: https
port: 443
targetPort: {{ .Values.applications.nodeAppPort }}
</code></pre>
<p>As you can see, I'm passing the SSL certificate using annotations; then I just point the domain name at the LoadBalancer's public ingress and I'm done.</p>
<p><strong>The Problem:</strong>
This is a micro-services project and it requires a lot of services to be exposed to the public in different environments, which means a lot of <code>AWS LoadBalancers</code> and a lot of money $$$$.</p>
<p>I've tried NodePort and ExternalName services but none of them worked because of the private VPC.</p>
<p>Any suggestions to overcome this problem?</p>
| Adel Bachene | <p>To solve this, you can point your <code>LoadBalancer</code> to a "reverse-proxy" service such as an NGINX instance or Istio's Gateway (<a href="https://istio.io/docs/reference/config/istio.networking.v1alpha3/#Gateway" rel="nofollow noreferrer">https://istio.io/docs/reference/config/istio.networking.v1alpha3/#Gateway</a>), the Ingress controller and other options.</p>
<p>That way when you hit <code>https://[your_service_url]/[path]</code> you can build rules which route to the correct internal service in Kubernetes based on the actual values of <code>your_service_url</code> or <code>path</code>.</p>
<p>That way you only pay for 1 Load Balancer, but can host many services in the cluster.</p>
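<p>For example, a single Ingress (one load balancer) can fan traffic out by host; the hostnames and service names below are placeholders, written against the current <code>networking.k8s.io/v1</code> schema:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: public-gateway
spec:
  rules:
  - host: gateway.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-gateway-service
            port:
              number: 80
</code></pre>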
| Paul Annetts |
<p>To my understanding <a href="https://kubernetes.io" rel="nofollow noreferrer">Kubernetes</a> is a container orchestration service comparable to <a href="https://aws.amazon.com/ecs/" rel="nofollow noreferrer">AWS ECS</a> or <a href="https://docs.docker.com/engine/swarm/" rel="nofollow noreferrer">Docker Swarm</a>. Yet there are several <a href="https://stackoverflow.com/questions/32047563/kubernetes-vs-cloudfoundry/32238148">highly rated questions</a> on Stack Overflow that compare it to <a href="https://www.cloudfoundry.org" rel="nofollow noreferrer">CloudFoundry</a>, which is a platform orchestration service.</p>
<p>This means that CloudFoundry can take care of the VM layer, updating and provisioning VMs while moving containers to avoid downtime. Therefore the comparison to Kubernetes makes limited sense to my understanding.</p>
<p>Am I misunderstanding something, does Kubernetes support provisioning and managing the VM layer too?</p>
| B M | <p>As for <strong>VMs</strong>, my answer is <strong>YES</strong>; you can run VMs as workloads in a k8s cluster.</p>
<p>Indeed, the Red Hat team figured out how to run VMs in a Kubernetes cluster via the <a href="https://kubevirt.io/docs/workloads/controllers/virtualmachine.html" rel="nofollow noreferrer">KubeVirt</a> add-on.</p>
<p>example from the link above.</p>
<pre><code>apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachine
metadata:
creationTimestamp: null
labels:
kubevirt.io/vm: vm-cirros
name: vm-cirros
spec:
running: false
template:
metadata:
creationTimestamp: null
labels:
kubevirt.io/vm: vm-cirros
spec:
domain:
devices:
disks:
- disk:
bus: virtio
name: registrydisk
volumeName: registryvolume
- disk:
bus: virtio
name: cloudinitdisk
volumeName: cloudinitvolume
machine:
type: ""
resources:
requests:
memory: 64M
terminationGracePeriodSeconds: 0
volumes:
- name: registryvolume
registryDisk:
image: kubevirt/cirros-registry-disk-demo:latest
- cloudInitNoCloud:
userDataBase64: IyEvYmluL3NoCgplY2hvICdwcmludGVkIGZyb20gY2xvdWQtaW5pdCB1c2VyZGF0YScK
name: cloudinitvolume
</code></pre>
<p>Then: </p>
<pre><code>kubectl create -f vm.yaml
virtualmachine "vm-cirros" created
</code></pre>
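<p>Note that with <code>running: false</code> the VirtualMachine object is created stopped; you would then start it with KubeVirt's CLI (<code>virtctl start vm-cirros</code>) or by flipping <code>spec.running</code> to <code>true</code>.</p>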
| Abdennour TOUMI |
<p>I am working on an <a href="https://github.com/lucastheisen/dev-bootstrap" rel="nofollow noreferrer">ansible-based dev-bootstrap project</a>. I'd like to be able to <a href="https://docs.docker.com/docker-for-windows/#kubernetes" rel="nofollow noreferrer">enable Kubernetes</a> from the <a href="https://github.com/lucastheisen/dev-bootstrap/blob/master/roles/docker/tasks/windows.yml" rel="nofollow noreferrer">docker role</a>, but I can't seem to find a way to do so. I searched the registry for <code>docker</code> and <code>kubernetes</code>; nothing jumped out. I also checked for a <a href="https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-docker/configure-docker-daemon#configure-docker-with-configuration-file" rel="nofollow noreferrer">daemon.json</a>, but none is present even though I have Kubernetes enabled (manually). Does anyone know if there is a way to make this happen?</p>
| Lucas | <p>Not sure if this is all you need, but when toggling the enable-Kubernetes option Docker writes to settings.json. There is also a Kubernetes initial-install step, so this might not be enough, but I would try it to see if it picks the change up, needs a restart, or doesn't work at all...</p>
<pre><code>function Enable-DockerKubernetes {
[CmdletBinding()]
param ()
try {
$settings = "$env:AppData\Docker\settings.json"
$dockerSettings = ConvertFrom-Json ( Get-Content $settings -Raw -ErrorAction Stop)
if (!$dockerSettings.KubernetesEnabled) {
Write-Verbose ("Enabling Kubernetes in {0}." -f $settings)
$dockerSettings.KubernetesEnabled = $true
$dockerSettings | ConvertTo-Json | Set-Content $settings -ErrorAction Stop
}
else {
Write-Verbose "Already enabled!"
}
}
catch {
Write-Error $_
}
}
</code></pre>
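<p>Usage would then be something like the line below; note that Docker for Windows most likely needs a restart afterwards to pick the setting up:</p>
<pre><code>Enable-DockerKubernetes -Verbose
</code></pre>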
<p>The installer for Docker CE doesn't seem to support passing in installer options according to this issue which just went stale and got closed.</p>
<p>You might want to open a new issue specifically about this use case.</p>
<p><a href="https://github.com/docker/for-win/issues/1322" rel="nofollow noreferrer">https://github.com/docker/for-win/issues/1322</a></p>
| CJ Harmath |
<p>When I use helm, It creates a <code>.helm</code> folder in my home directory. Is it important? Should it be committed to source control? Poking around I only see cache related information, that makes me think the whole folder can be deleted. And helm will re-create it if needed. Am I wrong in this thinking? Or is there something important in those folders that is worth putting into source control?</p>
| 7wp | <p>In simple terms, no.</p>
<p>The <code>.helm</code> directory contains user specific data that will depend on the version of helm, the OS being used, the layout of the user’s system.</p>
<p>However, the main reason to not add it is that it also can contain TLS secrets which would then be disclosed to other users. Worse, if you use Git, these secrets would remain in the history and would be hard to remove, even if you deleted the <code>.helm</code> directory at a later date.</p>
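<p>If the directory does end up inside a tree that is under version control, the safe move is to ignore it explicitly, e.g. with a one-line <code>.gitignore</code> entry:</p>
<pre><code>.helm/
</code></pre>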
| Paul Annetts |
<p>I am learning Kubernetes and planning to do continuous deployment of my apps with Kubernetes manifests. </p>
<p>I'd like to have my app defined as a <code>Deployment</code> and a <code>Service</code> in a manifest, and have my CD system run <code>kubectl apply -f</code> on the manifest file. </p>
<p>However, our current setup is to tag our Docker images with the SHA of the git commit for that version of the app. </p>
<p>Is there a Kubernetes-native way to express the image tag as a variable, and have the CD system set that variable to the correct git SHA?</p>
| David Ham | <p>You should consider <a href="/questions/tagged/helm" class="post-tag" title="show questions tagged 'helm'" rel="tag">helm</a> charts in this case, where you separate between the skeleton of templates (or what you called maniest) and its values which are changed from release to another.</p>
<p>In <strong>templates/deployment.yaml</strong> :</p>
<pre><code>spec:
containers:
- name: {{ template "nginx.name" . }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
</code></pre>
<p>And in <strong>values.yaml</strong> : </p>
<pre><code>image:
repository: nginx
tag: 1.11.0
</code></pre>
<p>See the full example <a href="https://github.com/helm/helm/blob/master/docs/examples/nginx" rel="nofollow noreferrer">here</a></p>
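<p>Your CD system can then inject the git SHA at deploy time with <code>--set</code>; the release and chart names here are placeholders:</p>
<pre><code>helm upgrade --install my-release ./mychart \
  --set image.tag="$(git rev-parse --short HEAD)"
</code></pre>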
| Abdennour TOUMI |
<p><code>gcloud container clusters create --cluster-version 1.10 --zone us-east1-d ...</code> returns with the error message <code>ERROR: (gcloud.container.clusters.create) ResponseError: code=400, message=No valid versions with the prefix "1.10" found.</code>.</p>
<p>The GKE release notes <a href="https://cloud.google.com/kubernetes-engine/release-notes#february-11-2019" rel="nofollow noreferrer">https://cloud.google.com/kubernetes-engine/release-notes#february-11-2019</a>, indicates the specific kubernetes version is still supported.</p>
<p>Does anyone know what's going on?</p>
| Sameer Naik | <p>The syntax you are using looks correct, but support for k8s 1.10 is being phased out on GKE, as per the GKE release notes entry of February 11, 2019:</p>
<blockquote>
<h2>Coming soon</h2>
<p>We expect the following changes in the coming weeks. This information is not a guarantee, but is provided to help you plan for upcoming changes.</p>
<p>25% of the upgrades from 1.10 to 1.11.6-gke.2 will be complete.<br>
Version 1.11.6-gke.8 will be made available.<br>
<strong>Version 1.10 will be made unavailable.</strong></p>
</blockquote>
<p>Have you tried with the full version, say <code>1.10.12-gke.7</code>?</p>
<p><code>gcloud container clusters create --cluster-version 1.10.12-gke.7 --zone us-east1-d ...</code></p>
<p>Alternatively, use 1.11, because it looks like GKE is moving that way anyhow.</p>
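<p>You can also list exactly which versions a zone currently offers before creating the cluster:</p>
<pre><code>gcloud container get-server-config --zone us-east1-d
</code></pre>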
| Paul Annetts |
<p>So Jenkins is installed inside the cluster with this <a href="https://github.com/helm/charts/tree/master/stable/jenkins" rel="nofollow noreferrer">official helm chart</a>. And these are my installed plugins as per the helm release values:</p>
<pre><code> installPlugins:
- kubernetes:1.18.1
- workflow-job:2.33
- workflow-aggregator:2.6
- credentials-binding:1.19
- git:3.11.0
- blueocean:1.19.0
</code></pre>
<p>my Jenkinsfile relies on the following pod template to spin up slaves:</p>
<pre><code>kind: Pod
spec:
# dnsConfig:
# options:
# - name: ndots
# value: "1"
containers:
- name: dind
image: docker:19-dind
command:
- cat
tty: true
volumeMounts:
- name: dockersock
readOnly: true
mountPath: /var/run/docker.sock
resources:
limits:
cpu: 500m
memory: 512Mi
volumes:
- name: dockersock
hostPath:
path: /var/run/docker.sock
</code></pre>
<p>Slaves (pod/dind container) start nicely as expected whenever there is a new build.</p>
<p>However, it breaks at the "docker build" step of the Jenkinsfile pipeline
(<code>docker build -t ...</code>), with the following output:</p>
<pre><code>Step 16/24 : RUN ../gradlew clean bootJar
---> Running in f14b6418b3dd
Downloading https://services.gradle.org/distributions/gradle-5.5-all.zip
Exception in thread "main" java.net.UnknownHostException: services.gradle.org
at java.base/java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:220)
at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:403)
at java.base/java.net.Socket.connect(Socket.java:591)
at java.base/sun.security.ssl.SSLSocketImpl.connect(SSLSocketImpl.java:285)
at java.base/sun.security.ssl.BaseSSLSocketImpl.connect(BaseSSLSocketImpl.java:173)
at java.base/sun.net.NetworkClient.doConnect(NetworkClient.java:182)
at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:474)
at java.base/sun.net.www.http.HttpClient.openServer(HttpClient.java:569)
at java.base/sun.net.www.protocol.https.HttpsClient.<init>(HttpsClient.java:265)
at java.base/sun.net.www.protocol.https.HttpsClient.New(HttpsClient.java:372)
at java.base/sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.getNewHttpClient(AbstractDelegateHttpsURLConnection.java:191)
at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect0(HttpURLConnection.java:1187)
at java.base/sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:1081)
at java.base/sun.net.www.protocol.https.AbstractDelegateHttpsURLConnection.connect(AbstractDelegateHttpsURLConnection.java:177)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1587)
at java.base/sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1515)
at java.base/sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(HttpsURLConnectionImpl.java:250)
at org.gradle.wrapper.Download.downloadInternal(Download.java:67)
at org.gradle.wrapper.Download.download(Download.java:52)
at org.gradle.wrapper.Install$1.call(Install.java:62)
at org.gradle.wrapper.Install$1.call(Install.java:48)
at org.gradle.wrapper.ExclusiveFileAccessManager.access(ExclusiveFileAccessManager.java:69)
at org.gradle.wrapper.Install.createDist(Install.java:48)
at org.gradle.wrapper.WrapperExecutor.execute(WrapperExecutor.java:107)
at org.gradle.wrapper.GradleWrapperMain.main(GradleWrapperMain.java:63)
The command '/bin/sh -c ../gradlew clean bootJar' returned a non-zero code:
</code></pre>
<p>At first glance, I thought it was a DNS resolution issue with the slave container (<code>docker:19-dind</code>) since it is Alpine-based.
That's why I debugged its <code>/etc/resolv.conf</code> by adding <code>sh "cat /etc/resolv.conf"</code> to the Jenkinsfile.</p>
<p>I got : </p>
<pre><code>nameserver 172.20.0.10
search cicd.svc.cluster.local svc.cluster.local cluster.local ap-southeast-1.compute.internal
options ndots:5
</code></pre>
<p>I removed the last line <code>options ndots:5</code> as per the recommendation of many threads on the internet.</p>
<p>But it did not fix the issue. 😔</p>
<p>I thought about it again and again, and I realized that the container responsible for this error is not the slave (docker:19-dind); instead, it is the intermediate containers that are spun up to satisfy <code>docker build</code>.</p>
<p>As a consequence, I added <code>RUN cat /etc/resolv.conf</code> as another layer in the Dockerfile (which starts with <code>FROM gradle:5.5-jdk11</code>).</p>
<p>Now, the <code>resolv.conf</code> is different : </p>
<pre><code>Step 15/24 : RUN cat /etc/resolv.conf
---> Running in 91377c9dd519
; generated by /usr/sbin/dhclient-script
search ap-southeast-1.compute.internal
options timeout:2 attempts:5
nameserver 10.0.0.2
Removing intermediate container 91377c9dd519
---> abf33839df9a
Step 16/24 : RUN ../gradlew clean bootJar
---> Running in f14b6418b3dd
Downloading https://services.gradle.org/distributions/gradle-5.5-all.zip
Exception in thread "main" java.net.UnknownHostException: services.gradle.org
</code></pre>
<p>Basically, it is a different nameserver (<code>10.0.0.2</code>) than the nameserver of the slave container (<code>172.20.0.10</code>). There is NO <code>ndots:5</code> in the resolv.conf of this intermediate container.</p>
<p>I was still confused after all these debugging steps and a lot of attempts.</p>
<h2>Architecture</h2>
<pre><code>Jenkins Server (Container )
||
(spin up slaves)
||__ SlaveA (Container, image: docker:19-dind)
||
( run "docker build" )
||
||_ intermediate (container, image: gradle:5.5-jdk11 )
</code></pre>
| Abdennour TOUMI | <p>Just add <code>--network=host</code> to <code>docker build</code> or <code>docker run</code>.</p>
<pre><code> docker build --network=host foo/bar:latest .
</code></pre>
<p>Found the answer <a href="https://github.com/awslabs/amazon-eks-ami/issues/183#issuecomment-463687956" rel="nofollow noreferrer">here</a></p>
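<p>Most likely this works because with <code>--network=host</code> the intermediate build containers share the Docker host's network namespace instead of the default bridge, so their DNS lookups take the same path that already works from the node itself.</p>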
| Abdennour TOUMI |
<p>I don't mean being able to route to a specific port, I mean to actually change the port the ingress listens on.</p>
<p>Is this possible? How? Where is this documented?</p>
| Chris Stryczynski | <p>No. From the <a href="https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress" rel="noreferrer">kubernetes documentation</a>:</p>
<blockquote>
<p>An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type Service.Type=NodePort or Service.Type=LoadBalancer.</p>
</blockquote>
<p>It may be possible to customize a LoadBalancer on a cloud provider like AWS to listen on other ports.</p>
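<p>For example, a <code>NodePort</code> Service exposing an arbitrary TCP port might look like this (all names and port values are placeholders):</p>
<pre><code>apiVersion: v1
kind: Service
metadata:
  name: my-tcp-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 9000
    targetPort: 9000
    nodePort: 30900
</code></pre>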
| dlaidlaw |
<p>I have a mysql container I'm deploying through k8s, in which I am mounting a directory that contains a script; once the pod is up and running, the plan is to execute that script.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
spec:
replicas: 1
template:
spec:
volumes:
- name: mysql-stuff
hostPath:
path: /home/myapp/scripts
type: Directory
containers:
- name: mysql-db
image: mysql:latest
        volumeMounts:
- name: mysql-stuff
mountPath: /scripts/
</code></pre>
<p>Once it is up and running, I run <code>kubectl exec -it mysql-db -- bin/sh</code> and then <code>ls scripts</code>; it returns nothing, the script that should be inside is not there, and I can't work out why. For the sake of getting this working I have added no security context and am running the container as root. Any help would be greatly appreciated.</p>
| SkinnyBetas | <p>You are running your pod in a minikube cluster, and minikube itself is running in a VM, so the host path here refers to a path inside the minikube VM, not on your actual host.</p>
<p>However, you can map your actual host path into the minikube VM, and then it will become accessible:</p>
<pre><code>minikube mount /home/myapp/scripts:/home/myapp/scripts
</code></pre>
<p>See more here
<a href="https://minikube.sigs.k8s.io/docs/handbook/mount/" rel="nofollow noreferrer">https://minikube.sigs.k8s.io/docs/handbook/mount/</a></p>
| TruckDriver |
<p>I'm running a K8S job, with the following flags:</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: my-EP
spec:
template:
metadata:
labels:
app: EP
spec:
restartPolicy: "Never"
containers:
- name: EP
image: myImage
</code></pre>
<p>The Job starts, runs my script (which runs an application that sends me an email), and then terminates. The application returns its exit code to the bash script.
When I run the command <code>kubectl get pods</code>, I get the following:</p>
<pre><code>NAME READY STATUS RESTARTS AGE
my-EP-94rh8 0/1 Completed 0 2m2s
</code></pre>
<p>Sometimes there are issues: the network is not connected, or no license is available.
I would like that to be visible to the pod user.
My question is: can I propagate the script's exit code so it is seen when I run the above get pods command?
I.e., instead of the "Completed" status, I would like to see my application's exit code: 0, 1, 2, 3...
Or maybe there is a way to see it in the Pod Statuses, in the describe command?
Currently I see:</p>
<pre><code>Pods Statuses: 0 Running / 1 Succeeded / 0 Failed
</code></pre>
<p>Is this possible?</p>
| Mary1 | <p>A non-zero exit code on <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/" rel="nofollow noreferrer">k8s jobs</a> will put the pod into the <code>Failed</code> status. There really isn't a way to have the exit code shown with <code>kubectl get pods</code>, but you can output the pod status with <code>-ojson</code> and then pipe it into <code>jq</code>, looking for the exit code. Something like the following from <a href="https://stackoverflow.com/a/61341173/703161">this post</a> might work (note that <code>kubectl get pod</code> takes no <code>-c</code> flag; the container is selected inside the jq filter):</p>
<pre><code>kubectl get pod pod_name -n namespace -ojson | jq '.status.containerStatuses[].state.terminated.exitCode'
</code></pre>
<p>or this, with the <code>items[]</code> in the json</p>
<pre><code>kubectl get pods -ojson | jq '.items[].status.containerStatuses[].state.terminated.exitCode'
</code></pre>
<p>Alternatively, as <code>u/blaimi</code> mentioned, you can do it without <code>jq</code>, like this:</p>
<pre><code>kubectl get pod pod_name -o jsonpath --template='{.status.containerStatuses[*].state.terminated.exitCode}'
</code></pre>
| DogEatDog |
<p>I have a kustomize layout something like this:</p>
<pre><code>├──release
│ ├──VariantA
│ │ └──kustomization.yaml
│ │ cluster_a.yaml
| └──VariantB
│ └──kustomization.yaml
│ cluster_b.yaml
└──test
├──TestVariantA
│ └──kustomization.yaml; resources=[VariantA]
│ common_cluster_patch.yaml
└──TestVariantB
└──kustomization.yaml; resources=[VariantB]
common_cluster_patch.yaml
</code></pre>
<p>My issue is the duplication of <code>common_cluster_patch.yaml</code>. It is a common patch which I need to apply to the different base cluster objects. I would prefer not to have to maintain identical copies of it for each test variant.</p>
<p>The 2 unsuccessful solutions I tried are:</p>
<p><strong>A common patch resource</strong></p>
<pre><code>├──release
│ ├──VariantA
│ │ └──kustomization.yaml
│ │ cluster_a.yaml
| └──VariantB
│ └──kustomization.yaml
│ cluster_b.yaml
└──test
├──TestVariantA
│ └──kustomization.yaml; resources=[VariantA, TestPatch]
├──TestVariantB
│ └──kustomization.yaml; resources=[VariantB, TestPatch]
└──TestPatch
└──kustomization.yaml
common_cluster_patch.yaml
</code></pre>
<p>This fails with <code>no matches for Id Cluster...</code>, presumably because TestPatch is trying to patch an object it doesn't contain.</p>
<p><strong>A common patch directory</strong></p>
<pre><code>├──release
│ ├──VariantA
│ │ └──kustomization.yaml
│ │ cluster_a.yaml
| └──VariantB
│ └──kustomization.yaml
│ cluster_b.yaml
└──test
├──TestVariantA
│ └──kustomization.yaml; resources=[VariantA]; patches=[../TestPatch/common_cluster_patch.yaml]
├──TestVariantB
│ └──kustomization.yaml; resources=[VariantB]; patches=[../TestPatch/common_cluster_patch.yaml]
└──TestPatch
└──common_cluster_patch.yaml
</code></pre>
<p>This fails with: <code>'/path/to/test/TestPatch/common_cluster_patch.yaml' is not in or below '/path/to/test/TestVariantA'</code>.</p>
<p>I can work around this and successfully generate my templates with <code>kustomize build --load-restrictor LoadRestrictionsNone</code>, but this comes with dire warnings and portents. I am hoping that there is some better way of organising my resources which doesn't require either workarounds or duplication.</p>
| Matthew Booth | <p>Thanks to criztovyl for this answer! The solution is <a href="https://kubectl.docs.kubernetes.io/guides/config_management/components/" rel="nofollow noreferrer">kustomize components</a>. Components are currently only defined in <code>kustomize.config.k8s.io/v1alpha1</code> and the <a href="https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/components/" rel="nofollow noreferrer">reference documentation</a> is a stub, but they are included in current release versions of kustomize.</p>
<p>My solution now looks like:</p>
<pre><code>├──release
│ ├──VariantA
│ │ └──kustomization.yaml
│ │ cluster_a.yaml
| └──VariantB
│ └──kustomization.yaml
│ cluster_b.yaml
└──test
├──TestVariantA
│ └──kustomization.yaml; resources=[VariantA]; components=[../TestCommon]
├──TestVariantB
│ └──kustomization.yaml; resources=[VariantB]; components=[../TestCommon]
└──TestCommon
└──kustomization.yaml; patches=[common_cluster_patch.yaml]
common_cluster_patch.yaml
</code></pre>
<p>where <code>test/TestCommon/kustomization.yaml</code> has the header:</p>
<pre><code>apiVersion: kustomize.config.k8s.io/v1alpha1
kind: Component
</code></pre>
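<p>For illustration, the kustomization of a test variant referencing the component could then look like this (paths assumed from the layout above):</p>
<pre><code># test/TestVariantA/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../release/VariantA
components:
  - ../TestCommon
</code></pre>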
<p>The crucial difference between a component and a resource is that a component is applied after other processing. This means it can patch an object in the resource which included it.</p>
| Matthew Booth |
<p>My pods/containers run on a docker image which is about 4GiB in size. Pulling the image from the container registry takes about 2 mins whenever a new VM node is spun up when resources are insufficient. </p>
<p>That is to say, whenever a new request comes in and the Kubernetes cluster autoscaler spins up a new node, it takes <strong>2+ mins</strong>. The user has to wait 2 minutes to put through a request. <em>Not ideal</em>. I am currently using <strong>Azure AKS</strong> to deploy my application, with its cluster autoscaler feature enabled.</p>
<p>I am using a typical deployment set up with 1 fix master pod, and 3 fix worker pods. These 3 worker pods correspond to 3 different types of requests. Each time a request comes in, the worker pod will generate a K8 Job to process the request.</p>
<p>The BIG question is: how can I pre-pull the images so that when a new node is spun up in the Kubernetes cluster, users don't have to wait so long for the new Job to be ready? </p>
| Seng Wee | <p>If you are using Azure Container Registry (ACR) for storing and pulling your images, you can enable Teleport, which will significantly reduce your image pull time. Refer to this <a href="https://stevelasker.blog/2019/10/29/azure-container-registry-teleportation/" rel="nofollow noreferrer">link</a> for more information.</p>
| vivekd |
<p>When I issue a command</p>
<pre><code>kubectl delete namespace <mynamespace>
</code></pre>
<p>What is the sequence followed by kubernetes to clean up the resources inside a namespace? Does it start with services followed by containers? Is it possible to control the order?</p>
<p>Stack:
I am using <code>HELM</code> to define kubernetes resources.</p>
| Vishrant | <p>No, it is not possible to control the order; the deletions are started in parallel.</p>
<p><code>kube-controller-manager</code> has a few flags to control the speed/concurrency of syncing different resources.</p>
<p>You can check the <code>--concurrent-*</code> flags of the controller manager here: <a href="https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/" rel="nofollow noreferrer">https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/</a></p>
| Akash Sharma |
<p><strong>Use case</strong></p>
<p>Let's say I've got 3 different node pools: <code>default</code>, <code>kafka</code> and <code>team-a</code>. I want to make sure that only kafka relevant deployments and stuff like daemonsets or kubernetes system services run on this node pool. I do so by simply adding a node selector to my kafka deployments, so that it can only be scheduled on the kafka nodepool:</p>
<pre><code>nodeSelector:
cloud.google.com/gke-nodepool: kafka
</code></pre>
<p><strong>The problem</strong></p>
<p>When I have further deployments or statefulsets which <strong>do not</strong> have any node selector specified they might get scheduled on that kafka nodepool. Instead I want all other deployments without nodeselector to be scheduled inside of my default nodepool.</p>
<p><strong>Worded as generic question</strong></p>
<p>How can I make sure that all deployments & statefulsets without a node selector will be scheduled inside of a specific nodepool?</p>
| kentor | <p>Use taints and tolerations: taint the <code>kafka</code> node pool and add a matching toleration only to the kafka workloads. Pods without the toleration (i.e., everything else) then cannot be scheduled onto the tainted nodes, as sketched below. Follow: <a href="https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/" rel="noreferrer">https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/</a></p>
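<p>A minimal sketch, assuming the GKE node-pool label from the question (the taint key/value <code>dedicated=kafka</code> is an arbitrary choice):</p>
<pre><code># Taint all nodes in the kafka pool:
kubectl taint nodes -l cloud.google.com/gke-nodepool=kafka dedicated=kafka:NoSchedule
</code></pre>
<p>Then add a matching toleration to the kafka pod spec, next to the existing <code>nodeSelector</code>:</p>
<pre><code>tolerations:
- key: "dedicated"
  operator: "Equal"
  value: "kafka"
  effect: "NoSchedule"
</code></pre>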
| Akash Sharma |
<p>I've been studying "Kubernetes Up and Running" by Hightower et al (first edition) Chapter 13 where they discussed creating a Reliable MySQL Singleton (Since I just discovered that there is a second edition, I guess I'll be buying it soon).</p>
<p>Using their MySQL reliable singleton example as a model, I've been looking for some sample YAML files to make a similar deployment with Microsoft SQL Server (Express) on Docker Desktop for Kubernetes.</p>
<p>Apparently I need YAML files to deploy</p>
<ol>
<li>Persistent Volume</li>
<li>Volume claim (should this be NFS?)</li>
<li>SQL Server (Express edition) replica set (in spite of the fact that this is just a singleton).</li>
</ol>
<p><a href="https://social.msdn.microsoft.com/Forums/vstudio/en-US/5d942d3f-8ecc-4aee-9bb2-cafc2a2be023/wanted-tutorial-for-running-aspnet-with-database-in-kubernetes-on-docker-desktop?forum=visualstudiogeneral" rel="nofollow noreferrer">I've tried this example</a> but I'm confused because it does not contain a persistent volume & claim and it does not work. I get the error</p>
<blockquote>
<p>Error: unable to recognize "sqlserver.yml": no matches for kind "Deployment" in version "apps/v1beta1"</p>
</blockquote>
<p>Can someone please point me to some sample YAML files that are not Azure specific that will work on Docker Desktop Kubernetes for Windows 10? After debugging my application, I'll want to deploy this to Azure (AKS).</p>
<p><strong>Wed Jul 15 2020 Update</strong></p>
<p>I left out the "-n namespace" for the helm install command (possibly because I'm using Helm 3 and you are using Helm v2?).</p>
<p>That install command still did not work. Then I did a</p>
<pre><code>helm repo add stable https://kubernetes-charts.storage.googleapis.com/
</code></pre>
<p>Now this command works:</p>
<pre><code>helm install todo-app-database stable/mssql-linux
</code></pre>
<p>Progress!</p>
<p>When I do a "k get pods" I see that my todo-app-mssql-linux database is in the pending state. So I did a</p>
<pre><code>kubectl get events
</code></pre>
<p>and I see</p>
<pre><code>Warning FailedScheduling pod/todo-app-database-mssql-linux-8668d9b88c-lsh5l 0/1 nodes are available: 1 Insufficient memory.
</code></pre>
<p>I've been google searching for "Kubernetes insufficient memory" and can find no match.</p>
<p>I suspect this is a problem specific to "Docker Desktop Kubernetes".</p>
<p>When I look at the output for</p>
<pre><code>helm -n ns-todolistdemo template todo-app-database stable/mssql-linux
</code></pre>
<p>I see the deployment is asking for 2Gi. (Interesting: when I use the template command, the "-n ns-todolistdemo" does not cause an error like it does with the install command).</p>
<p>So I do</p>
<pre><code>kubectl describe deployment todo-app-database-mssql-linux >todo-app-database-mssql-linux.yaml
</code></pre>
<p>I edit the yaml file to change 2Gi to 1Gi.</p>
<pre><code>kubectl apply -f todo-app-database-mssql-linux.yaml
</code></pre>
<p>I get this error:</p>
<pre><code>error: error parsing todo-app-database-mssql-linux.yaml: error converting YAML to JSON: yaml: line 9: mapping values are not allowed in this context
</code></pre>
<p>Hmm... that did not work. I try delete:</p>
<pre><code>kubectl delete deployment todo-app-database-mssql-linux
kubectl create -f todo-app-database-mssql-linux.yaml
</code></pre>
<p>I get this error:</p>
<pre><code>error: error validating "todo-app-database-mssql-linux.yaml": error validating data: invalid object to validate; if you choose to ignore these errors, turn validation off with --validate=false
</code></pre>
<p>So I try apply:</p>
<pre><code>kubectl apply -f todo-app-database-mssql-linux.yaml
</code></pre>
<p>Same error!</p>
<p>Shucks.... Is there a way to adjust the memory allocation for Docker Desktop?</p>
<p>Thank you</p>
<p>Siegfried</p>
| Siegfried | <h1>Short answer</h1>
<p><a href="https://github.com/helm/charts/blob/master/stable/mssql-linux/templates/pvc-master.yaml" rel="nofollow noreferrer">https://github.com/helm/charts/blob/master/stable/mssql-linux/templates/pvc-master.yaml</a></p>
<h1>Detailed Answer</h1>
<p>Docker Desktop already comes with a default StorageClass:
<a href="https://i.stack.imgur.com/9mO55.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9mO55.png" alt="#" /></a></p>
<p>This storage class is responsible for auto-provisioning of PV whenever you create a PVC.</p>
<p>If you have a YAML definition of a PVC (persistent volume claim), just leave <code>storageClassName</code> empty, so it will use the default.</p>
<pre><code>k get storageclass
NAME PROVISIONER AGE
hostpath (default) docker.io/hostpath 11d
</code></pre>
<p>This is fair enough, as the Docker Desktop cluster is a one-node cluster. So if your DB crashes and the cluster brings it up again, it will not move to another node, simply because you have a single node :)</p>
<p>Now, should you write the PVC YAML from scratch?</p>
<p>No, you don't need to, because Helm should be your best friend.</p>
<p>(I explain below why you should use Helm; it doesn't even have a steep learning curve.)</p>
<p>Fortunately, the community provides a chart called <a href="https://github.com/helm/charts/tree/master/stable/mssql-linux" rel="nofollow noreferrer">stable/mssql-linux</a>.
Let's run it together:</p>
<pre class="lang-sh prettyprint-override"><code>helm -n <your-namespace> install todo-app-database stable/mssql-linux
# helm -n <namespace> install <release-name> <chart-name-from-community>
</code></pre>
<p>If you want to check the YAML (namely PVC) that Helm computed, you can run <code>template</code> instead of <code>install</code></p>
<pre class="lang-sh prettyprint-override"><code>helm -n <your-namespace> template todo-app-database stable/mssql-linux
</code></pre>
<h1>Why did I answer with Helm?</h1>
<p>Writing YAML from scratch means reinventing a wheel that others have already built.</p>
<p>The most efficient way is to reuse what the community has prepared for you.</p>
<p>However, you may ask: how can I reuse what others have done?</p>
<p>That's where <strong>Helm</strong> comes in.</p>
<p>Helm is your installer for any application on top of Kubernetes, <strong>regardless of how much YAML your app requires.</strong></p>
<p>Install it now and hit the ground running: <code>choco install kubernetes-helm</code></p>
| Abdennour TOUMI |
<p>Have a kubernetes cluster with an nginx ingress to a service which I am trying to set up with https access using <code>cert-manager</code> and ACME <code>ClusterIssuer</code>.</p>
<p>I am reasonably happy with the steps I have followed from the cert-manager docs, but I am currently at the stage where a challenge is made to the http solver, which cert-manager has configured in the cluster as part of the challenge process. When I describe the service's generated challenge, I see that its state is pending with:</p>
<pre><code>Reason: Waiting for http-01 challenge propagation: failed to perform self check GET request 'http://www.example.com/.well-known/acme-challenge/nDWOHEMXgy70_wxi53ijEKjUHFlzg_UJJS-sv_ahGzg': Get "http://www.example.com/.well-known/acme-challenge/nDWOHEMXgy70_wxi53ijEKjUHFlzg_UJJS-sv_ahGzg": dial tcp xx.xx.xx.xxx:80: connect: connection timed out
</code></pre>
<p>When I call the solver's url from my k8s host server:</p>
<pre><code>curl -H "Host: www.example.com" http://192.168.1.11:31344/.well-known/acme-challenge/nDWOHEMXgy70_wxi53ijEKjUHFlzg_UJJS-sv_ahGzg
</code></pre>
<p>I get a 200 ok back.</p>
<p>NOTE: The address 192.168.1.11 is the IP of the k8s node on which the http solver pod is running, and port 31344 is the node port of the NodePort service for the http solver pod.</p>
<p>I am trying to figure out why the challenge itself times out and not get a 200 back.</p>
<p>I have tested the http solver's url from my mobile phone over 4g (instead of wifi) and this way I get 200 OK, so this tells me that the http solver is reachable from the outside through the firewall and via nginx into the service and pod, right? And so, if this is the case, then what other reason(s) could there be for Let's Encrypt not being able to retrieve the token from the same URL?</p>
<p>--- CURRENT CONFIGS ---</p>
<p>Cluster Issuer:</p>
<pre><code>apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
name: letsencrypt-staging
namespace: cert-manager
spec:
acme:
# The ACME server URL
server: https://acme-staging-v02.api.letsencrypt.org/directory
# Email address used for ACME registration
email: [email protected]
# Name of a secret used to store the ACME account private key
privateKeySecretRef:
name: letsencrypt-staging
# Enable the HTTP-01 challenge provider
solvers:
- selector: {}
http01:
ingress:
class: nginx
</code></pre>
<p>Ingress:</p>
<pre><code>apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
name: ing-myservice-web
namespace: myservice
annotations:
kubernetes.io/ingress.class: "nginx"
cert-manager.io/cluster-issuer: "letsencrypt-staging"
spec:
tls:
- hosts:
- www.example.com
secretName: secret-myservice-web-tls
rules:
- host: www.example.com
http:
paths:
- backend:
serviceName: svc-myservice-web
servicePort: 8080
path: /
- host: www.example.co.uk
http:
paths:
- backend:
serviceName: svc-myservice-web
servicePort: 8080
path: /
</code></pre>
| Going Bananas | <p>After reading up on various aspects of how <code>cert-manager</code> works, reading about other people's similar issues in other posts, and getting a better understanding of how my network is set up and seen from the outside, I present below what I've learnt about my setup, and thereafter what I did in order to get <code>cert-manager</code> working for my domain services within the k8s cluster.</p>
<p><strong>Setup:</strong></p>
<ul>
<li>kubernetes cluster with backend services fronted by <code>nginx</code> ingress controller with a <code>NodePort</code> service exposing ports 25080 and 25443 for http and https respectively.</li>
<li>kubernetes cluster in private network behind ISP's public IP.</li>
</ul>
<p><strong>Solution:</strong></p>
<ul>
<li><p>Configured a local <code>http proxy</code> running on port 80 outside the k8s cluster which forwards requests to the <code>nginx controller</code>'s <code>NodePort</code> IP and port 25080.</p>
</li>
<li><p>Configured <code>bind9</code> on my network to point www to host where local <code>http proxy</code> is running.</p>
</li>
<li><p>Configured the k8s cluster's <code>CoreDNS</code> to point to the <code>bind9</code> host instead of 8.8.4.4, etc. (a Corefile sketch follows this list)</p>
</li>
<li><p>Configured my private network's entry point router to send any address port 80 to <code>nginx controller</code>'s <code>NodePort</code> IP and port 25080.</p>
</li>
<li><p>Configured my private network's entry point router to send any address port 443 to <code>nginx controller</code>'s <code>NodePort</code> IP and port 25443.</p>
</li>
</ul>
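<p>For the CoreDNS step above, the change is essentially a <code>forward</code> entry in the Corefile (held in the <code>coredns</code> ConfigMap in <code>kube-system</code>); a sketch, with <code>192.168.1.5</code> as a placeholder for the <code>bind9</code> host:</p>
<pre><code>.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . 192.168.1.5   # bind9 host instead of the default upstream resolvers
    cache 30
    loop
    reload
}
</code></pre>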
<p>The main reason for this solution is that my ISP does not allow hosts within my private network to call out and back into the network via the network's public IP address. (I believe this is quite common for ISPs; it's called hairpinning or NAT loopback, and some routers have the functionality to turn it on.)</p>
<p>So, in order for the <code>cert-manager</code> <code>http solver</code> pod (running within the k8s cluster) to be able to complete the challenge, it was necessary for it to be able to reach the <code>nginx controller</code> by forcing the network routing for www via the locally hosted <code>http proxy</code>, instead of going out to the world wide web and back in again (which my ISP does not allow).</p>
<p>With this solution in place the <code>http solver</code> pod was able to complete the challenge and thereafter <code>cert-manager</code> was able to issue certificates successfully.</p>
<p>I am sure (and I hope) there are better and cleaner solutions to solve this sort of scenario out there but I have not come across any myself yet so this is the solution I currently have in place.</p>
| Going Bananas |
<p>I'm running a git-sync container in Kubernetes, trying to sync a repository from GitHub. I have already created the secret using known_hosts and ssh. However, I face the following error.</p>
<blockquote>
<p>"msg"="failed to sync repo, aborting" "error"="error running command: exit status 128: "Cloning into '/tmp/git'...\nfatal: Could not read from remote repository.\n\nPlease make sure you have the correct access rights\nand the repository exists.\n""</p>
</blockquote>
<p>Here is my deployment file.</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: gitsync-deployment
labels:
app: gitsync
spec:
replicas: 1
selector:
matchLabels:
app: gitsync
template:
metadata:
labels:
app: gitsync
spec:
containers:
- name: git-sync
image: k8s.gcr.io/git-sync:v3.1.5
# command: ["cat"]
# args: ["/etc/git-secret/ssh"]
imagePullPolicy: Always
volumeMounts:
- name: git-secret
mountPath: /etc/git-secret
env:
- name: GIT_SYNC_REPO
value: "[email protected]:username/test.git"
- name: GIT_SYNC_SSH
value: "true"
- name: GIT_SYNC_BRANCH
value: master
- name: GIT_SYNC_DEST
value: git
- name: GIT_SYNC_DEPTH
value: "1"
volumes:
- name: html
emptyDir: {}
- name: git-secret
secret:
secretName: git-creds
defaultMode: 256
</code></pre>
| Sabir Piludiya | <p>It seems that you followed the <a href="https://github.com/kubernetes/git-sync/blob/master/docs/ssh.md" rel="nofollow noreferrer">official documentation</a>.</p>
<p>But it turns out that this documentation does not mention at all where to put the <strong>public key</strong>.</p>
<p>Actually, git authentication through SSH requires the following steps:</p>
<p><strong>1. Generate SSH key-pair :</strong></p>
<pre><code>ssh-keygen -t rsa -N "" -f mykey
</code></pre>
<p>This cmd generates 2 files:</p>
<ul>
<li>private key : <code>./mykey</code></li>
<li>public key : <code>./mykey.pub</code></li>
</ul>
<p><strong>2. Put the public key in your Github Account under Settings > SSH Keys</strong></p>
<p>Copy the content of <code>./mykey.pub</code> and add it in your github account.</p>
<p><strong>3. Put the Private Key in the k8s secret</strong></p>
<p>The official documentation starts from here, and it considers <code>$HOME/.ssh/id_rsa</code> to be the private key.</p>
<pre><code>kubectl create secret generic git-creds \
--from-file=ssh=./mykey \
....
</code></pre>
<p>The rest should be the same as what the official documentation explains.</p>
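<p>For completeness, a full version of that command might look like the following, assuming you also capture GitHub's host key with <code>ssh-keyscan</code> (git-sync expects the secret keys to be named <code>ssh</code> and <code>known_hosts</code>):</p>
<pre><code>ssh-keyscan github.com > /tmp/known_hosts
kubectl create secret generic git-creds \
  --from-file=ssh=./mykey \
  --from-file=known_hosts=/tmp/known_hosts
</code></pre>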
| Abdennour TOUMI |
<p>I'm trying to create a pod using my local Docker image as follows.</p>
<p>1.First I run this command in terminal </p>
<pre><code>eval $(minikube docker-env)
</code></pre>
<p>2.I created a docker image as follows</p>
<pre><code>sudo docker image build -t my-first-image:3.0.0 .
</code></pre>
<p>3.I created the pod.yml as shown below and I run this command</p>
<pre><code>kubectl create -f pod.yml
</code></pre>
<p>4.Then I tried to run this command</p>
<pre><code>kubectl get pods
</code></pre>
<p>but it shows the following error </p>
<pre><code>
NAME READY STATUS RESTARTS AGE
multiplication-6b6d99554-d62kk 0/1 CrashLoopBackOff 9 22m
multiplication2019-5b4555bcf4-nsgkm 0/1 CrashLoopBackOff 8 17m
my-first-pod 0/1 CrashLoopBackOff 4 2m51
</code></pre>
<p>5.I describe the pod</p>
<pre><code>kubectl describe pod my-first-pod
</code></pre>
<pre><code>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 6m22s default-scheduler Successfully assigned default/my-first-pod to minikube
Normal Pulled 5m20s (x4 over 6m17s) kubelet, minikube Successfully pulled image "docker77nira/myfirstimage:latest"
Normal Created 5m20s (x4 over 6m17s) kubelet, minikube Created container
Normal Started 5m20s (x4 over 6m17s) kubelet, minikube Started container
Normal Pulling 4m39s (x5 over 6m21s) kubelet, minikube pulling image "docker77nira/myfirstimage:latest"
Warning BackOff 71s (x26 over 6m12s) kubelet, minikube Back-off restarting failed container
</code></pre>
<pre><code>Dockerfile
FROM node:carbon
WORKDIR /app
COPY . .
CMD [ "node", "index.js" ]
</code></pre>
<pre><code>pods.yml
kind: Pod
apiVersion: v1
metadata:
name: my-first-pod
spec:
containers:
- name: my-first-container
image: my-first-image:3.0.0
</code></pre>
<pre><code>index.js
var http = require('http');
var server = http.createServer(function(request, response) {
  response.statusCode = 200;
  response.setHeader('Content-Type', 'text/plain');
  response.end('Welcome to the Golden Guide to Kubernetes Application Development!');
});
server.listen(3000, function() {
  console.log('Server running on port 3000');
});
</code></pre>
| Niranga Sandaruwan | <p>Try checking the logs with the command <code>kubectl logs -f my-first-pod</code></p>
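<p>Since the container is crash-looping, the current container may already be gone; in that case, the <code>--previous</code> flag shows the logs of the last terminated instance:</p>
<pre><code>kubectl logs my-first-pod --previous
</code></pre>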
| Akash Sharma |
<p>Kubernetes sends a SIGTERM signal to containers in a pod before terminating the pod</p>
<p>Does it send a similar signal when it restarts a pod?</p>
| Abdulrahman Bres | <p>Depends on what you mean here by pod restart. If a pod stops running because an underlying node is lost and then a higher level controller restarts it, then you may/may not see any signal being delivered because it is unexpected termination.</p>
<p>On the other hand, if you're talking about planned termination, where a controller kills/evicts a pod and starts a new pod of the same kind on a (potentially different) node, you will see the same set of events (<code>SIGTERM -> termination grace period -> SIGKILL</code>) occur as in the case of a pod being killed.</p>
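<p>If you need to act on that signal, you can tune the grace period and run a hook before SIGTERM is sent; a minimal sketch (the <code>sleep</code> is only an illustrative placeholder):</p>
<pre><code>spec:
  terminationGracePeriodSeconds: 60   # time between SIGTERM and SIGKILL (default 30)
  containers:
  - name: app
    image: my-app:1.0
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 5"]   # runs before SIGTERM is delivered
</code></pre>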
| Anirudh Ramanathan |
<p>I have been asked to create a system which has different functionalities. Assume service 1, service 2 and service 3. I need to run these services every hour to do something.
To build the system around those services I need: a database, a web interface for seeing the result of the process, caching, etc.
This is what I have thought about so far:</p>
<ul>
<li><p>I need Kubernetes to orchestrate my services, which are packaged as Docker containers. I will deploy MySQL to store my data and I can use Redis for caching.</p></li>
<li><p>My service are written by python scripts and Java and need to interact with each other through APIs.</p></li>
<li><p>I think I can use AWS EKS for my kubernetes cluster </p></li>
</ul>
<hr>
<p>this is what I need to know: </p>
<ul>
<li>how to deploy python or Java applications and connect them to each other and also connect them to a database service</li>
<li>I also need to know how to schedule the application to run per hour so I can see the results in the web interface.</li>
</ul>
<p><strong>Please shoot any ideas or questions you have</strong>. </p>
<p>Any help would be appreciated.</p>
| Milix | <p>For the Python/Java applications, create Docker images for both. If these applications run forever to serve traffic, deploy them as <code>Deployments</code>. If you only need cron-like functionality (such as the hourly runs you describe), deploy them as <code>CronJobs</code> in Kubernetes; see the sketch below.</p>
<p>To make the applications accessible, create <code>Services</code> whose selectors match the application Pods, so these Services can route traffic to the specific applications.</p>
<p>The database and cache should likewise be exposed as <code>Service</code> endpoints, so your applications remain environment-independent.</p>
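<p>For the hourly runs, a minimal <code>CronJob</code> sketch (image name and schedule are placeholders):</p>
<pre><code>apiVersion: batch/v1beta1   # batch/v1 on Kubernetes 1.21+
kind: CronJob
metadata:
  name: service1-hourly
spec:
  schedule: "0 * * * *"     # at minute 0 of every hour
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: service1
            image: myrepo/service1:latest
          restartPolicy: OnFailure
</code></pre>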
| Akash Sharma |
<p>When it comes to running Express (NodeJS) in something like Kubernetes, would it be more cost effective to run with more cores and less nodes? Or more nodes with less cores each? (Assuming the cost of cpus/node is linear ex: 1 node with 4 cores = 2 nodes 2cores)</p>
<p>In terms of redundancy, more nodes seems the obvious answer.</p>
<p>However, in terms of cost effectiveness, less nodes seems better because with more nodes, you are paying more for overhead and less for running your app. Here is an example:</p>
<p>1 node with 4 cores costs $40/month, it is running:</p>
<ul>
<li>10% Kubernetes overhead on one core</li>
<li>90% your app on one core and near 100% on others
Therefore you are paying $40 for 90% + 3x100% = 390% your app</li>
</ul>
<p>2 nodes with 2 cores each cost a total of $40/month running:</p>
<ul>
<li>10% Kubernetes overhead on one core (PER NODE)</li>
<li>90% you app on one core and near 100% on other (PER NODE)
Now you are paying $40 for 2 x (90% + 100%) = 2 x 190% = 380% your app</li>
</ul>
<p>I am assuming balancing the 2 around like 4-8 cores is ideal so you aren't paying so much for each node, scaling nodes less often, and getting hight percentage of compute running your app per node. Is my logic right?</p>
<p>Edit: Math typo</p>
| danthegoodman | <p>Because the node does not come empty; it has to run some core components like:</p>
<ul>
<li>kubelet</li>
<li>kube-proxy</li>
<li>container-runtime (docker, gVisor, or other)</li>
<li>other DaemonSets.</li>
</ul>
<p>Sometimes, 3 <strong>large</strong> VMs are better than 4 <strong>medium</strong> VMs in terms of usable capacity.</p>
<p><a href="https://i.stack.imgur.com/le3cH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/le3cH.png" alt="enter image description here" /></a></p>
<p>However, the main decider is the type of your workload (your apps):</p>
<ul>
<li><p>If your apps consume more memory than CPU (like Java apps), choose memory-heavy nodes: a <strong>[2 CPUs, 8GB]</strong> node is better than <strong>[4 CPUs, 8GB]</strong>.</p>
</li>
<li><p>If your apps consume more CPU than memory (like ML workloads), choose the opposite: compute-optimized instances.</p>
</li>
<li><p>The golden rule 🏆 is to look at the <strong>whole cluster capacity</strong> rather than the individual capacity of each node.</p>
</li>
</ul>
<p>At the end, you need to consider not only cost effectiveness but also :</p>
<ul>
<li>Resilience</li>
<li>HA</li>
<li>Redundancy</li>
</ul>
| Abdennour TOUMI |
<p>I understand the difference between ReplicaSet and ReplicationController, of former being Set based and the latter Equality based. What I want to know is why was a newer implementation (Read ReplicaSet) introduced when the older ReplicationController achieves the same functionality.</p>
| Vinodh Nagarajaiah | <p>A <code>ReplicaSet</code> is usually not standalone; it is owned by a <code>Deployment</code>. A single <code>Deployment</code> can own many <code>ReplicaSet</code>s over its life cycle, since each new version rollout adds another <code>ReplicaSet</code>. A <code>ReplicaSet</code> also supports set-based label selectors (sketched below), which a <code>ReplicationController</code> does not.</p>
<p>A <code>Deployment</code> allows us to roll back to previous stable releases if required.</p>
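<p>For example, a set-based selector that a <code>ReplicationController</code> cannot express (labels are illustrative):</p>
<pre><code>selector:
  matchExpressions:
  - key: tier
    operator: In
    values: [frontend, cache]
  - key: environment
    operator: NotIn
    values: [dev]
</code></pre>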
| Akash Sharma |
<p>I use <a href="https://k3s.io/" rel="nofollow noreferrer">K3S</a> for my Kubernetes cluster. It's really fast and efficient. By default K3S use <a href="https://traefik.io/" rel="nofollow noreferrer">Traefik</a> for ingress controller which also work well til now.</p>
<p>The only issue I have is, I want to have HTTP2 server push. The service I have is behind the ingress, generates <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Link" rel="nofollow noreferrer">Link header</a> which in the case of <a href="https://www.nginx.com/" rel="nofollow noreferrer">NGINX</a> I can simply turn it into the HTTP2 server push (explained <a href="https://www.nginx.com/blog/nginx-1-13-9-http2-server-push/" rel="nofollow noreferrer">here</a>). Is there any same solution for Traefik? Or is it possible to switch to NGINX in K3S?</p>
| user1079877 | <p>You probably do not want HTTP/2 Server Push given it's <a href="https://brianli.com/2020/12/chrome-to-drop-support-for-http2-server-push/" rel="nofollow noreferrer">being removed from Chromium</a>. If you would like to switch ingress controllers you can choose another by:</p>
<ul>
<li>Starting K3s with the <code>--disable traefik</code> option (see the example after this list).</li>
<li>Adding another controller such as NGINX or Ambassador</li>
</ul>
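<p>For example, with the standard install script, disabling Traefik at install time looks roughly like this:</p>
<pre><code>curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik" sh -
# or, when starting the server manually:
k3s server --disable traefik
</code></pre>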
<p>For detailed instructions on adding Ambassador to K3s see the following link: <a href="https://rancher.com/blog/2020/deploy-an-ingress-controllers" rel="nofollow noreferrer">https://rancher.com/blog/2020/deploy-an-ingress-controllers</a></p>
| vhs |
<p>I have a deployment that runs two containers. One of the containers attempts to build (during deployment) a javascript bundle that the other container, nginx, tries to serve.</p>
<p>I want to use a shared volume to place the javascript bundle after it's built.</p>
<p>So far, I have the following deployment file (with irrelevant pieces removed):</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
...
template:
...
spec:
hostNetwork: true
containers:
- name: personal-site
image: wheresmycookie/personal-site:3.1
volumeMounts:
- name: build-volume
mountPath: /var/app/dist
- name: nginx-server
image: nginx:1.19.0
volumeMounts:
- name: build-volume
mountPath: /var/app/dist
volumes:
- name: build-volume
emptyDir: {}
</code></pre>
<p>To the best of my ability, I have followed these guides:</p>
<ul>
<li><a href="https://kubernetes.io/docs/concepts/storage/volumes/#emptydir" rel="nofollow noreferrer">https://kubernetes.io/docs/concepts/storage/volumes/#emptydir</a></li>
<li><a href="https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/" rel="nofollow noreferrer">https://kubernetes.io/docs/tasks/access-application-cluster/communicate-containers-same-pod-shared-volume/</a></li>
</ul>
<p>One other things to point out is that I'm trying to run this locally atm using <code>minikube</code>.</p>
<p>EDIT: The Dockerfile I used to build this image is:</p>
<pre><code>FROM node:alpine
WORKDIR /var/app
COPY . .
RUN npm install
RUN npm install -g @vue/cli@latest
CMD ["npm", "run", "build"]
</code></pre>
<p>I realize that I do not need to build this when I actually run the image, but my next goal is to insert pod instance information as environment variables, so with javascript unfortunately I can only build once that information is available to me.</p>
<h2>Problem</h2>
<p>The logs from the <code>personal-site</code> container reveal:</p>
<pre><code>- Building for production...
ERROR Error: EBUSY: resource busy or locked, rmdir '/var/app/dist'
Error: EBUSY: resource busy or locked, rmdir '/var/app/dist'
</code></pre>
<p>I'm not sure why the build is trying to remove <code>/dist</code>, but I also have a feeling that this is irrelevant. I could be wrong?</p>
<p>I thought that maybe this could be related to the lifecycle of containers/volumes, but the docs suggest that "An emptyDir volume is first created when a Pod is assigned to a Node, and exists as long as that Pod is running on that node".</p>
<h2>Question</h2>
<p>What are some reasons that a volume might not be available to me after the containers are already running? Given that you probably have much more experience than I do with Kubernetes, what would you look into next?</p>
| wheresmycookie | <p>The best way is to customize your image's entrypoint as follows:</p>
<ul>
<li><p>Once you finish building the <code>/var/app/dist</code> folder, copy (or move) this folder to another, empty path (e.g. <code>/opt/dist</code>):</p>
<pre><code>cp -r /var/app/dist/* /opt/dist
</code></pre>
</li>
</ul>
<p>PAY ATTENTION: this Step must be done in the script of ENTRYPOINT not in the RUN layer.</p>
<ul>
<li><p>Now use <code>/opt/dist</code> instead:</p>
<pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
...
spec:
...
template:
...
spec:
hostNetwork: true
containers:
- name: personal-site
image: wheresmycookie/personal-site:3.1
volumeMounts:
- name: build-volume
mountPath: /opt/dist # <--- make it consistent with image's entrypoint algorithm
- name: nginx-server
image: nginx:1.19.0
volumeMounts:
- name: build-volume
mountPath: /var/app/dist
volumes:
- name: build-volume
emptyDir: {}
</code></pre>
</li>
</ul>
<p>Good luck!</p>
<p>If it's still not clear how to customize the entrypoint, share your image's entrypoint with us and we will implement it.</p>
| Abdennour TOUMI |
<p>We use Kubernetes cronjobs on GKE (version 1.9) for running several periodic tasks. From the pods, we need to make several calls to external API outside our network. Often (but not all the time), these calls fail because of DNS resolution timeouts.</p>
<p>The current hypothesis I have is that the upstream DNS server for the service we are trying to contact is rate limiting the requests where we make lots of repeated DNS requests because the TTL for those records was either too low or just because we dropped those entries from dnsmasq cache due to low cache size.</p>
<p>I tried editing the kube-dns deployment to change the cache size and ttl arguments passed to dnsmasq container, but the changes get reverted because it's a managed deployment by GKE.
Is there a way to persist these changes so that GKE does not overwrite them? Any other ideas to deal with dns issues on GKE or Kubernetes engine in general?</p>
| Ashu Pachauri | <p>Not sure if all knobs are covered, but if you update the ConfigMap used by the deployment, you should be able to reconfigure kube-dns on GKE. It will use the ConfigMap when deploying new instances. Then delete the existing pods to redeploy them with the new config, as sketched below.</p>
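<p>A sketch of that workflow (the label matches the standard kube-dns deployment):</p>
<pre><code>kubectl -n kube-system edit configmap kube-dns
# then restart the pods so they pick up the new config:
kubectl -n kube-system delete pods -l k8s-app=kube-dns
</code></pre>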
| KarlKFI |
<p>I have a job definition based on example from kubernetes website.</p>
<pre><code>apiVersion: batch/v1
kind: Job
metadata:
name: pi-with-timeout-6
spec:
activeDeadlineSeconds: 30
completions: 1
parallelism: 1
template:
metadata:
name: pi
spec:
containers:
- name: pi
image: perl
command: ["exit", "1"]
restartPolicy: Never
</code></pre>
<p>I would like to run this job once and not restart it if it fails. With the command <code>exit 1</code>, Kubernetes keeps trying to run a new pod to get exit code 0 until the <code>activeDeadlineSeconds</code> timeout is reached. How can I avoid that? I would like to run build commands in Kubernetes to check compilation, and if the compilation fails I'll get an exit code different from 0. I don't want to run the compilation again.</p>
<p>Is it possible? How?</p>
| esio | <p>This is now possible by setting <a href="https://kubernetes.io/docs/concepts/workloads/controllers/job/#pod-backoff-failure-policy" rel="noreferrer"><code>backoffLimit: 0</code></a>, which tells the controller to perform 0 retries. The default is 6. Applied to the question's manifest:</p>
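<pre><code>apiVersion: batch/v1
kind: Job
metadata:
  name: pi-with-timeout-6
spec:
  backoffLimit: 0          # fail the Job after the first failed pod, no retries
  activeDeadlineSeconds: 30
  completions: 1
  parallelism: 1
  template:
    metadata:
      name: pi
    spec:
      containers:
      - name: pi
        image: perl
        command: ["sh", "-c", "exit 1"]
      restartPolicy: Never
</code></pre>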
| pHiL |
<p>I tried to find useful information when should i use <code>--record</code>. I created 3 commands:</p>
<ul>
<li><code>k set image deployment web1 nginx=lfccncf/nginx:latest --record</code></li>
<li><code>k rollout undo deployment/web1 --record</code></li>
<li><code>k -n kdpd00202 edit deployment web1 --record</code></li>
</ul>
<p>Could anyone tell me if I need to use <code>--record</code> in each of these 3 commands?</p>
<p>When is it necessary to use <code>--record</code> and when is it useless?</p>
| O.Man | <p>Kubernetes desired state can be updated/mutated through two paradigms:</p>
<ol>
<li>Either <strong>imperatively</strong>, using kubectl ad-hoc commands (<code>k set</code>, <code>k create</code>, <code>k run</code>, <code>k rollout</code>, ...)</li>
<li>Or <strong>declaratively</strong> using YAML manifests with a single <code>k apply</code></li>
</ol>
<p>The declarative way is ideal for treating your k8s manifests as Code, then you can share this Code with the team, version it thru Git for example, and keep tracking its history leveraging GitOps practices ( branching models, Code Review, CI/CD ).</p>
<p>However, the imperative way cannot be reviewed by the team, as these ad-hoc commands are run by an individual, and no one else can easily find out the <strong>cause of the change</strong> after it has been made.</p>
<p>To overcome the absence of an audit trail with imperative commands, the <code>--record</code> option is there to bind the root cause of the change as an annotation called <code>kubernetes.io/change-cause</code>, whose value is the imperative command itself.</p>
<p>(note below is from the <a href="https://kubernetes.io/docs/concepts/workloads/controllers/deployment/" rel="noreferrer">official doc</a>)</p>
<blockquote>
<p>Note: You can specify the --record flag to write the command executed in the resource annotation kubernetes.io/change-cause. The recorded change is useful for future introspection. For example, to see the commands executed in each Deployment revision.</p>
</blockquote>
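<p>For instance, after an imperative change recorded with <code>--record</code>, the revision history will look roughly like this:</p>
<pre><code>$ kubectl set image deployment web1 nginx=lfccncf/nginx:latest --record
$ kubectl rollout history deployment web1
REVISION  CHANGE-CAUSE
1         <none>
2         kubectl set image deployment web1 nginx=lfccncf/nginx:latest --record=true
</code></pre>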
<p>In conclusion:</p>
<ul>
<li>Theoretically, <code>--record</code> is not mandatory.</li>
<li>Practically, it's mandatory to ensure that changes leave a rudimentary audit trail behind and comply with SRE processes and DevOps culture.</li>
</ul>
| Abdennour TOUMI |